Dataset columns: date (string, length 10) | nb_tokens (int64, 60 to 629k) | text_size (int64, 234 to 1.02M) | content (string, length 234 to 1.02M)
2020/10/27 | nb_tokens: 636 | text_size: 2,119
<issue_start>username_0: I'm reading chapter one of the book [Neural Networks and Deep Learning](https://dl.uswr.ac.ir/bitstream/Hannan/141305/2/9783319944623.pdf) by Aggarwal. In section 1.2.1.1 of the book, I'm learning about the perceptron. One thing the book says is that if we use the sign function in the following loss function, $\sum\_{i=0}^{N}[y\_i - \text{sign}(W \* X\_i)]^2$, that loss function will NOT be differentiable. Therefore, the book suggests using, instead of the sign function in the loss function, the perceptron criterion, defined as: $$ L\_i = \max(-y\_i(W \* X\_i), 0) $$ The question is: why is the perceptron criterion function differentiable? Won't we face a discontinuity at zero? Is there anything I'm missing here?<issue_comment>username_1: Since we're dealing with real-valued variables, it is almost certainly the case that the argument of the function will not be $0$. If you care strongly about that point, you can just use sub-gradients instead (and we do have sub-gradients for this function, so there is no problem). Upvotes: 2 <issue_comment>username_2: $\max(-y\_i(w x\_i), 0)$ is not partially differentiable with respect to $w$ when $w x\_i=0$. Loss functions are problematic when they are not differentiable at some point, but even more so when they are flat (constant) over some interval of the weights. Assume $y\_i = 1$ and $w x\_i < 0$ (that is, an error of type "false negative"). In this case, $[y\_i - \text{sign}(w x\_i)]^2 = 4$. The derivative is zero on the whole interval $w x\_i < 0$, so the learning algorithm has no way to decide whether it is better to increase or decrease $w$. In the same case, $\max(-y\_i(w x\_i), 0) = -w x\_i$, whose partial derivative is $-x\_i$. The learning algorithm then knows that it must increase $w$ if $x\_i>0$ and decrease it otherwise. This is the real reason this loss function is considered more practical than the previous one. How do we handle the problem at $w x\_i = 0$? Simply: if you update $w$ and the result is exactly $0$, assign it a very small value, $w=\epsilon$. Similar logic applies to the remaining cases. Upvotes: 4 [selected_answer]
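A small numerical sketch of the point made above (our own code; names such as `perceptron_loss` are not from the thread): on the misclassified interval the squared-sign loss is flat, while the perceptron criterion has a useful (sub)gradient.

```python
import numpy as np

def sign_loss(w, x, y):
    # [y - sign(w*x)]^2 : gradient is 0 everywhere except at w*x = 0
    return (y - np.sign(w * x)) ** 2

def perceptron_loss(w, x, y):
    # max(-y*(w*x), 0) : decreases as w moves toward classifying correctly
    return max(-y * (w * x), 0.0)

def perceptron_grad(w, x, y):
    # subgradient: -y*x when the point is misclassified, else 0
    return -y * x if -y * (w * x) > 0 else 0.0

x, y = 2.0, 1.0
w = -0.5                          # misclassified: w*x = -1 < 0
print(sign_loss(w, x, y))         # 4.0, and 4.0 on the whole interval w*x < 0
print(perceptron_loss(w, x, y))   # 1.0, shrinking as w increases
print(perceptron_grad(w, x, y))   # -2.0, so gradient descent increases w
```

Running gradient descent with `perceptron_grad` moves $w$ toward the correct side, which is exactly what the flat squared-sign loss cannot do.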
2020/10/28 | nb_tokens: 532 | text_size: 2,061
<issue_start>username_0: Multi-label assignment is the machine-learning task of assigning to each input a set of categories from a fixed vocabulary, where the categories need not be statistically independent; this precludes building a set of independent classifiers, each deciding whether an input belongs to a given category or not. Machine learning also needs a measure by which the model may be evaluated, so the question is: how do we evaluate a multi-label classifier? We can't use the usual recall, accuracy and F measures, since they require a binary is-it-correct-or-not judgement of each categorisation. Without such a measure we have no obvious means to evaluate models, nor to measure concept drift.
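To make the evaluation problem concrete, here is a hedged sketch (the data and function names are our own, not from the question) of two metrics commonly used for multi-label classifiers: the strict exact-match ratio (subset accuracy) and a Jaccard-based per-example score, often called the Hamming score.

```python
# toy predictions: each example's label set, predicted vs. true
true = [{"car", "red"}, {"truck"}, {"car", "blue"}]
pred = [{"car", "red"}, {"truck", "blue"}, {"bus"}]

def exact_match(true, pred):
    # counts an example as correct only if the whole label set matches
    return sum(t == p for t, p in zip(true, pred)) / len(true)

def hamming_score(true, pred):
    # per-example Jaccard overlap: partial credit for partly-correct sets
    return sum(len(t & p) / len(t | p) for t, p in zip(true, pred)) / len(true)

print(exact_match(true, pred))    # 1/3: only the first set matches exactly
print(hamming_score(true, pred))  # (1 + 1/2 + 0) / 3 = 0.5
```

The gap between the two numbers shows why a single binary correct/incorrect judgement is insufficient for the multi-label setting.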
2020/10/30 | nb_tokens: 505 | text_size: 1,804
<issue_start>username_0: I'm trying to get a detected car's orientation when object detection is applied. For instance, when we apply object detection to a car and get a bounding box, is there any way or method to calculate where the heading is, or the orientation or direction of the car (just the 2D plane is fine)? Any thoughts or ideas would be helpful. [![enter image description here](https://i.stack.imgur.com/km1Ey.png)](https://i.stack.imgur.com/km1Ey.png)
2020/11/01 | nb_tokens: 509 | text_size: 1,837
<issue_start>username_0: The target network in DQN is known to make the network more stable, and the loss is like "how good am I now compared to using the target". What I don't understand is: if the target network is the stable one, why do we keep using/saving the first model as the predictor instead of the target? I see in the code everywhere: * Model * Target model * Train the model * Copy to the target * Get the loss between them At the end, the model is saved and used for prediction, not the target.
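The recipe in the question's bullet list can be sketched as follows. This is our own minimal illustration with a tabular (array-valued) Q-function rather than any particular codebase's implementation; the dimensions and update rule are illustrative assumptions.

```python
import numpy as np

n_states, n_actions, gamma, lr = 4, 2, 0.9, 0.1
rng = np.random.default_rng(0)

online = rng.normal(size=(n_states, n_actions))  # "Model": trained every step
target = online.copy()                           # "Target model": frozen copy

def td_update(s, a, r, s_next):
    # the bootstrap value comes from the *target* network...
    y = r + gamma * target[s_next].max()
    # ...but only the *online* network's parameters are adjusted
    online[s, a] += lr * (y - online[s, a])

for step in range(100):
    s, a = rng.integers(n_states), rng.integers(n_actions)
    s_next = rng.integers(n_states)
    td_update(s, a, 1.0, s_next)
    if step % 20 == 0:
        target[:] = online       # "Copy to the target", done only periodically

# the *online* table is the one you save and act greedily with
policy = online.argmax(axis=1)
```

The target network only exists to stabilize the bootstrap term; the online model is the one actually being optimized, which is why it (and not the lagged copy) is saved for prediction.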
2020/11/06 | nb_tokens: 2,068 | text_size: 9,282
<issue_start>username_0: I am working on a problem that involves two tasks: detection and classification. There is no single dataset for both tasks, so I am training two models, one on a detection dataset and another on a classification dataset. I use the images from the detection dataset as input and get classification predictions on top of the detected bounding boxes. Dataset description: 1. Classification - an image of a single object (e.g. a car) in the center, with a classification label. 2. Detection - an image with multiple objects (e.g. 4 cars), with bounding-box annotations. Task - detect objects (e.g. cars) in the detection dataset and classify them into various categories. How do I verify whether the classification model trained on the classification dataset works on images from the detection dataset (in terms of classification accuracy)? I cannot manually label the images from the detection dataset with individual class labels (that needs expert domain knowledge). How do I verify my classification model? Is there any technique to do this, like domain transfer or a weakly-supervised method?<issue_comment>username_1: To verify the accuracy of the classification stage, you will need labeled images, each with a single car. To train and verify the accuracy of the detection stage and the full system, you can: 1. in the datasets with images containing multiple cars, manually mark the image rectangles that each contain one car. 2. from the previous step, split each image into one or more images, each containing a single car. 3. pass each of those single-car images to the classification stage (that means assuming classification has 100% accuracy). Record its outputs (labeled cars). 4. now, from the outputs of steps 1) and 3), you can produce labeled images with multiple cars. Use them to train the detector and verify full-system accuracy.
Upvotes: 2 <issue_comment>username_2: **The Problem** We can see from the question that existing information on detection and classification in the small automotive vehicle domain has been located (in the form of two independent sets of vectors usable for machine training), and there is no already existing mapping or other correspondence between the elements of one set and the elements of the other. They were obtained independently, remain independent, and are linked only by the conventions of the domain (today's aesthetically acceptable and thermodynamically workable forms of small vehicles). The goal stated in the question is to create a computer vision system that both detects cars and classifies them, leveraging the information contained in the two distinct sets. In the vision systems of mammals, there are also two distinct equivalences of sets: one arising from a genetic algorithm, the DNA that is expressed during the formation of the neural-net geometry and bio-electro-chemistry of the visual system in early development; and the cognitive and coordinative pathways in the cerebrum and cerebellum. If a robot, wheelchair, or other vehicle is to avoid traffic, we must produce a system that in some way matches or exceeds the collision-avoidance performance of mammals. In crime prevention, toll collection, sales-lot inventory, county traffic analysis, and other like applications, performance will again be expected to match or exceed the performance of biological systems. If a person can record the make, model, year, color, and license-plate strings, so should the machine we employ in these capacities. Consequently, this question is pertinent beyond academic curiosity, as it is applicable in current research and development of products. That the question's author notices the lack of a unified data set that can be used to train a single network to detect and characterize objects of interest is apropos and key to the challenge of finding a solution.
*Approach* The simplest approach would be to compose the system of two functions. 1. $\quad\mathcal{D}: \mathbb{I}^4 \to {(\mathbb{I}^2, \mathbb{I}^2)}\_1, \; {(\mathbb{I}^2, \mathbb{I}^2)}\_2, \; ... $ 2. $\quad\mathcal{C}: {(\mathbb{I}^2, \mathbb{I}^2)}\_i \to {(\mathbb{I})}\_i$ The four dimensions of input for $\mathcal{D}$, the detector, are horizontal position, vertical position, rgb index, and brightness, describing the pixelized image; the outputs are bounding boxes given as two "corner" coordinates for each identified vehicle, the second coordinate being either relative to the first or to a specific corner of the entire frame. The categorizer, $\mathcal{C}$, receives bounding boxes as input and produces as output the index or code that maps to the categories corresponding to the labels of the training set available for categorization. The system can then be described as follows. $\quad\quad\mathcal{S}: \mathcal{C} \circ \mathcal{D}$ If the system is not color, subtract one from the above dimensionality of the input. If the system processes video, add one to the dimensionality of the input and consider using LSTM or GRU cell types. The above composition, represented by "$\circ$", appears to be what is meant by, "I use the images from the detection dataset as input and get classification predictions on top of detected bounding boxes." The interrogative, "How do I verify whether the classification model trained on the classification dataset is working on images from detection dataset? (In terms of classification accuracy)," appears to refer to the fact that labels do not exist for the second set that correspond to input elements of the first set, so an accuracy metric cannot be directly obtained. Since there is no obvious automatic way of generating labels for the vehicles in the pre-detected images containing potentially multiple vehicles, there is no way to check actual results against expected results.
Composing multiple vehicle images from the categorization set to use as test input to the entire system $\mathcal{S}$ will only be useful in evaluating an aspect of the performance of $\mathcal{D}$, not $\mathcal{C}$. **Solution** The only way to evaluate the accuracy and reliability of $\mathcal{C}$ is with portions of the set used to train it that were excluded from the training, trusting that the vehicles depicted in those images are sufficiently representative of the concept "car" to provide consistency of accuracy and reliability across the range of vehicles detected by $\mathcal{D}$ in the application of $\mathcal{S}$. This means that the leveraging of the information, even if optimized to the degree possible by any arbitrary algorithm or parallelism in the set of all possible algorithms or parallelisms, is limited by the categorization training set. The number of set elements and the comprehensiveness and distribution of categories within that set must be sufficient to achieve an approximate equality between these two accuracy metrics: 1. Categorizing a test sample from the labeled set for $\mathcal{C}$ excluded from the training 2. Categorizing the vehicles isolated by $\mathcal{D}$ from its training input **With Additional Resources** Of course this discussion is in a particular environment, that of the system defined by the two artificial networks, one involving convolution-based recognition and the other involving feature extraction, and the two training sets. What is needed is a wider environment where known vehicles are in view, so that performance data of $\mathcal{S}$ can be evaluated and a tap on the transfer of information between $\mathcal{D}$ and $\mathcal{C}$ can be used to differentiate between mistakes made on either side of the tap point.
**Unsupervised Approach** Another course of action could be to not use the categorization training set for the training of $\mathcal{C}$ at all, but rather use feature extraction and auto-correlation in an "unsupervised" approach, and then evaluate the results on the basis of the final convergence metrics at the point when stability in categorization is detected. In this case, the images in the bounding boxes output by $\mathcal{D}$ would be used as training data. The auto-trained network realizing $\mathcal{C}$ can then be further evaluated using the entire categorization training set. **Further Research** Hybrids of these two approaches are possible. Also, independent training leads to optimal performance only in the rarest of cases. Understanding feedback as originally treated with rigor by MacColl in chapter 8 of his *Fundamental Theory of Servomechanisms*, later applied to the problem of linearity and stability of analog circuitry, and then to training, first in the case of GANs, may lead to effective methods to bi-train the two networks. That evolved biological networks are trained *in situ* is an indicator that the most optimal performance can be gained by finding training architectures and information-flow strategies that create optimality in both components simultaneously. No biological niche has ever been filled by a neural component that is first optimized and then inserted or copied in some way into a larger brain system. That is no proof that such component-ware can be optimal, but there is also no proof that the DNA-driven systems that have emerged are not nearly optimized for the majority of terrestrial conditions. Upvotes: 2 [selected_answer]
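The composition $\mathcal{S} = \mathcal{C} \circ \mathcal{D}$ described above can be sketched as plain code. This is our own illustration: `detect` and `classify` are hypothetical stand-ins for the trained detector and classifier, and the box coordinates are made up.

```python
import numpy as np
from typing import List, Tuple

Box = Tuple[int, int, int, int]   # x1, y1, x2, y2 in pixel coordinates

def detect(image: np.ndarray) -> List[Box]:
    # stand-in for D: any detector proposing car bounding boxes
    return [(10, 10, 50, 40), (60, 15, 110, 50)]

def classify(crop: np.ndarray) -> str:
    # stand-in for C: trained on the single-car classification dataset
    return "sedan"

def system(image: np.ndarray) -> List[Tuple[Box, str]]:
    # S = C ∘ D: run the detector, then classify each detected crop
    boxes = detect(image)
    crops = [image[y1:y2, x1:x2] for (x1, y1, x2, y2) in boxes]
    return list(zip(boxes, (classify(c) for c in crops)))

img = np.zeros((100, 200, 3))     # dummy frame in place of a real photo
print(system(img))
```

Note that evaluating `system` end to end only exercises both stages together; as the answer argues, the accuracy of `classify` itself can only be measured on held-out labeled single-car images.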
2020/11/06 | nb_tokens: 2,125 | text_size: 9,039
<issue_start>username_0: I am learning PyTorch on Udacity. In lesson 8, section 11: [Training the Model](https://classroom.udacity.com/courses/ud188/lessons/2f4910ee-6d67-47df-97e2-f35db67cbc19/concepts/062bfbc6-34c8-4e5c-b072-6479eca5a385), the instructor writes: > > Then I have my embedding and hidden dimension. The embedding dimension is just a smaller representation of my vocabulary of 70k words and I think any value between like 200 and 500 or so would work, here. I've chosen 400. Similarly, for our hidden dimension, I think 256 hidden features should be enough to distinguish between positive and negative reviews. > > > There are more than 70000 different words. How could those more than 70000 unique words be represented by just 400 embeddings? What does an embedding look like? Is it a number? Moreover, why would 256 hidden features be enough?
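A sketch of what an embedding layer actually is may help here (our own code; only the numbers 70k and 400 come from the lesson). Each of the 70,000 words gets its *own* 400-dimensional vector of floats, so the vocabulary is not squeezed into 400 slots; the layer is just a lookup table with 70,000 rows.

```python
import numpy as np

vocab_size, embedding_dim = 70_000, 400

# the learnable parameter: one 400-float row per vocabulary word
embedding = np.random.randn(vocab_size, embedding_dim).astype(np.float32)

token_ids = np.array([3, 17, 69_999])   # three words as integer indices
vectors = embedding[token_ids]          # row lookup, no matrix multiply

print(vectors.shape)                    # (3, 400)
```

So an embedding is not a single number but a vector; 70,000 distinct points fit comfortably in $\mathbb{R}^{400}$, and during training the rows move so that similar words end up near each other.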
2020/11/08 | nb_tokens: 1,239 | text_size: 4,672
<issue_start>username_0: If uniform cost search is used for both the forward and backward search in bidirectional search, is it guaranteed the solution is optimal?<issue_comment>username_1: UCS is optimal (but not necessarily complete) --------------------------------------------- Let's first recall that the uniform-cost search (UCS) is optimal (i.e. if it finds a solution, [which is not guaranteed unless the costs on the edges are big enough](https://ai.stackexchange.com/a/24522/2444), that solution is optimal) and it expands nodes with the smallest value of the evaluation function $f(n) = g(n)$, where $g(n)$ is the length/cost of the path from the goal/start node to $n$. Is bidirectional search with UCS optimal? ----------------------------------------- The problem of bidirectional search with UCS for the forward and backward searches is that UCS does not proceed layer-by-layer ([as breadth-first search does, which ensures that when the forward and backward searches meet, the optimal path has been found, assuming they *both* expand one level at each iteration](https://ai.stackexchange.com/a/24503/2444)), so the forward search may explore one part of the search space while the backward search may explore a different part, and it could happen (although I don't have the proof: I need to think about it a little bit more!) that these searches do not meet. So, I will consider both cases: * when the forward and backward searches do not "meet" (the worst case, in terms of time and space complexity) * when they meet (the non-degenerate case) ### Degenerate case Let's consider the case when the forward search does **not** meet the backward search (the worst/degenerate case). If we assume that [the costs on the edges are big enough](https://ai.stackexchange.com/a/24522/2444) and the start node $s$ is reachable from $g$ (or vice-versa), then bidirectional search eventually degenerates to two independent uniform-cost searches, which are optimal, which makes BS optimal too.
### Non-degenerate case Let's consider the case when the forward search **meets** the backward search. To ensure optimality, we cannot just stop searching when we take off both the frontiers the same $n$. To see why, consider this example. We take off the first frontier node $n\_1$ with cost $N$, then we take off the same frontier node $n\_2$ with cost $N+10$. Meanwhile, we take off the *other* frontier node $n\_2$ with cost $K$ and the node $n\_1$ with cost $K + 1$. So, we have two paths: one with cost $N+(K + 1)$ and one with cost $(N+10)+K$, which is bigger than $N+(K + 1)$, but we took off both frontiers $n\_2$ first. See [the other answer](https://ai.stackexchange.com/a/24552/2444) for more details and resources that could be helpful to understand the appropriate stopping condition for the BS. Upvotes: 3 [selected_answer]<issue_comment>username_2: It depends on the stopping condition. If the stopping condition is "stop as soon as any vertex is encountered by both the forward and backward scan", then bidirectional uniform-cost search is not a correct algorithm -- it is not guaranteed to output the optimal path. But it is possible to adjust the stopping condition to make bidirectional uniform-cost search guaranteed to output an optimal solution. See the following resources for details, and the correct stopping condition: [Computing Point-to-Point Shortest Paths from External Memory](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.85.911&rep=rep1&type=pdf). <NAME>, <NAME>. ALENEX/ANALCO 2005. [Point-to-point shortest path algorithms with preprocessing](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/goldberg-sofsem07.pdf). <NAME>. International Conference on Current Trends in Theory and Practice of Computer Science, 2007. [Efficient Point-to-Point Shortest Path Algorithms](http://www.cs.princeton.edu/courses/archive/spr06/cos423/Handouts/EPP%20shortest%20path%20algorithms.pdf). <NAME>, <NAME>, <NAME>, <NAME>.
I found these resources by looking at the Wikipedia article on [bidirectional search](https://en.wikipedia.org/wiki/Bidirectional_search); it mentions that the termination condition has been articulated by Andrew Goldberg et al and cites the third reference above. Then a quick search on Google Scholar immediately turned up the other papers as well. Lesson for the future: It can be useful to spend a little time checking standard resources (such as Wikipedia and textbooks), and checking the literature (e.g., with Google Scholar). Many natural questions have already been answered in the literature. Upvotes: 1
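The corrected stopping condition discussed in both answers can be sketched as follows (our own code, undirected graphs only): keep a bound $\mu$ on the best complete $s$-$t$ path seen so far, update it whenever the two scans touch a common vertex, and stop only when the two queue minima sum to at least $\mu$, not at the first common vertex.

```python
import heapq

def bidirectional_ucs(graph, s, t):
    # graph: {u: [(v, weight), ...]}, undirected (edges listed both ways)
    dist = [{s: 0}, {t: 0}]              # forward / backward tentative costs
    pq = [[(0, s)], [(0, t)]]
    mu = float("inf") if s != t else 0   # best complete s-t path found so far
    while pq[0] and pq[1]:
        if pq[0][0][0] + pq[1][0][0] >= mu:
            break                        # nothing shorter can still appear
        side = 0 if pq[0][0][0] <= pq[1][0][0] else 1
        d, u = heapq.heappop(pq[side])
        if d > dist[side][u]:
            continue                     # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist[side].get(v, float("inf")):
                dist[side][v] = nd
                heapq.heappush(pq[side], (nd, v))
            if v in dist[1 - side]:      # the two scans have met at v
                mu = min(mu, nd + dist[1 - side][v])
    return mu

# two s-t routes of cost 4 (via a) and 5 (via b); the naive "first common
# vertex" rule could return the wrong one, this rule cannot
g = {"s": [("a", 1), ("b", 2)], "a": [("s", 1), ("t", 3)],
     "b": [("s", 2), ("t", 3)], "t": [("a", 3), ("b", 3)]}
print(bidirectional_ucs(g, "s", "t"))   # 4
```

This is essentially bidirectional Dijkstra with the termination rule from the Goldberg et al. references cited above.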
2020/11/15 | nb_tokens: 1,215 | text_size: 4,695
<issue_start>username_0: Why aren't exploration techniques, such as UCB or Thompson sampling, typically used in bandit problems, used in full RL problems? Monte Carlo Tree Search may use the above-mentioned methods in its selection step, but why do value-based and policy gradient methods not use these techniques?<issue_comment>username_1: You can indeed use UCB in the RL setting. See e.g. section [**38.5 Upper Confidence Bounds for Reinforcement Learning** (page 521)](https://tor-lattimore.com/downloads/book/book.pdf#page=530) of the book [Bandit Algorithms](https://tor-lattimore.com/downloads/book/book.pdf) by <NAME> and <NAME> for the details. However, compared to $\epsilon$-greedy (widely used in RL), UCB1 is more computationally expensive, given that, for each action, you need to recompute this upper confidence bound **for every time step** (or, equivalently, action taken during learning). To see why, let's take a look at [the UCB1 formula](https://homes.di.unimi.it/cesa-bianchi/Pubblicazioni/ml-02.pdf#page=3) $$ \underbrace{\bar{x}\_{j}}\_{\text{value estimate}}+\underbrace{\sqrt{\frac{2 \ln n}{n\_{j}}}}\_{\text{UCB}}, $$ where * $\bar{x}\_{j}$ is the value estimate for action $j$ * $n\_{j}$ is the number of times action $j$ has been taken * $n$ is the total number of actions taken so far So, at each time step (or new action taken), we need to recompute that square root *for each action*, which depends on other factors that evolve during learning. So, the higher time complexity than $\epsilon$-greedy is probably the first reason why UCB1 is not so much used in RL, where interaction with the environment can be the bottleneck. You could argue that this recomputation (for each action) also needs to be done in bandits. Yes, it's true, but, in the RL problem, you have multiple states, so you need to compute value estimates for each action in all states (i.e. the full RL problem is more complex than bandits or contextual bandits). 
Moreover, $\epsilon$-greedy is so conceptually simple that everyone can easily implement it in less than $5$ minutes (though this is not really a deciding factor, given that both are simple to implement). I am currently not familiar with Thompson sampling, but I guess (from some implementations I have seen) it's also not as cheap as $\epsilon$-greedy, where you just need to perform an argmax (which can be done in constant time if you keep track of the highest value) or sample a random integer (also relatively cheap). There's a tutorial on Thompson sampling [here](https://web.stanford.edu/%7Ebvr/pubs/TS_Tutorial.pdf), which also includes a section dedicated to RL, so you may want to read it. Upvotes: 3 [selected_answer]<issue_comment>username_2: In fact, I think the formula can be used as it is for multi-state problems. However, its exploration bonus probably overlaps with correcting the bias of the reward estimates, because it accounts for the bias of the estimated value in a particular state. This can make learning unstable, which I think is why it is not used. Upvotes: 0 <issue_comment>username_3: Many techniques for the exploration/exploitation dilemma that are inspired by multi-armed bandit problems, such as UCB1, assume that you can explicitly enumerate all state-action pairs; in fact, multi-armed bandit problems usually have just one "state", and then this requirement turns into only requiring the ability to enumerate actions. In RL problems that are small enough to be handled with tabular approaches (without any function approximation), this may still be feasible. But for many interesting RL problems, the state and/or action spaces grow so large that you have to use function approximators (Deep Neural Networks are a popular choice, but others exist too). When you are unable to enumerate your state-action space, you can no longer keep track of things like the *visit counts* that are normally used in UCB1 and related approaches.
There certainly are more advanced exploration techniques for RL than just $\epsilon$-greedy though, and some may even resemble / take inspiration from bandit-based approaches. There's an excellent [blog post on Exploration Strategies in Deep Reinforcement Learning here](https://lilianweng.github.io/lil-log/2020/06/07/exploration-strategies-in-deep-reinforcement-learning.html#count-based-exploration). For example, you may think of some of the [approaches described under "Count-based Exploration"](https://lilianweng.github.io/lil-log/2020/06/07/exploration-strategies-in-deep-reinforcement-learning.html#count-based-exploration) as trying to solve the issue of tracking visit counts as I described above in settings with function approximation. Upvotes: 2
2020/11/17
2,708
10,196
<issue_start>username_0: My idea is to model and train a neural network that receives a text version of a PDF file as the input and gives the content text as output. Take the scenario: 1. One prints a PDF file to a text file (the text file does not have images, but has the main text, headings, page numbers, some other footer text, and so on, and keeps the same number of columns - two for instance - of text); 2. This text file is submitted to a tool that strips everything that is not the main content of the text in one single text column (one text stream), keeping the section titles, paragraphs, and the text in a readable form (does not mix columns); 3. The tool generates a new version of the original text file containing only the main text portion, ready to be used for other purposes where the stripped parts would be considered noise. How can this problem be modeled in a way a neural network can handle? Update 1 -------- Here are some clarifications on the problem. ### PDF file The picture below shows two pages of a PDF version of a scientific paper. This is just to set the context: the PDF file is not the input for this problem; it just shows where the actual input data comes from. [![PDF version of a scientific paper with parts of interest in colored blocks](https://i.stack.imgur.com/SwBQl.jpg)](https://i.stack.imgur.com/SwBQl.jpg) The colored boxes show some parts of interest for this discussion. Red boxes are headers and footers. **We are not interested in them**. Blue and green boxes are content text blocks. Different colors were used to emphasize that the text is organized in columns, and that is part of the problem. **Those blue and green boxes are what we actually want**. ### Text file If I use the "save as text file" feature of my free PDF reader, I get a text file similar to the image below.
[![Text version of the PDF file](https://i.stack.imgur.com/P8XwL.jpg)](https://i.stack.imgur.com/P8XwL.jpg) The text file is continuous, but I put the equivalent of the first two pages of the PDF file side-by-side just to make things easier to compare. We can see the very same colored boxes. In terms of words, those boxes contain the same text as in the PDF version. ### Understanding the problem When we read a paper, we are usually not very interested in footers or headers. The main content is what we actually read and what provides us with the knowledge we are looking for. In this case, it is the text inside the blue and green boxes. So, what we want here is to generate a new version of the input (text) file organized in one single text stream (one column, if you will), with the text laid out in a form someone can read, which means alternating the blue and the green boxes. However, if the original PDF has no footers, it should work in the same way, providing the main text content. If the text comes in three or four columns, the final product must be text in good condition to be read without losing any information. Any pictures will simply be stripped off the text version of the paper, and we are fine with that.<issue_comment>username_1: > > How to extract the main content text from a formatted text file? > > > ### I am not sure that just a neural network is the best approach to your problem. Traditional [natural language processing](https://en.wikipedia.org/wiki/Natural_language_processing) software uses something else, generally a complex mix of several techniques. I am supposing you are processing written text available as some file (in a file format you are *very* familiar with, e.g. [OOXML](https://en.wikipedia.org/wiki/Office_Open_XML) or [PDF](https://en.wikipedia.org/wiki/PDF) or [HTML5](https://en.wikipedia.org/wiki/HTML5)).
Read the [wikipage on natural-language understanding](https://en.wikipedia.org/wiki/Natural-language_understanding) and the one on [parse trees](https://en.wikipedia.org/wiki/Parse_tree) (or concrete syntax trees). BTW, you might use [LaTeX](https://www.latex-project.org/) or the [Lout](https://en.wikipedia.org/wiki/Lout_(software)) formatter to produce some PDF file. Both are open-source software (easily available on most Linux distributions, including [Debian](https://debian.org/) or [Ubuntu](https://ubuntu.com/)). I recommend trying to generate some PDF file using them, and experimenting on the generated PDF file. And a lot of AI papers are available (as preprints) in PDF form. You could also use, as a PDF input to experiment with your software, [this](http://starynkevitch.net/Basile/bismon-chariot-doc.pdf) or [that](http://starynkevitch.net/Basile/refpersys-design.pdf) draft report (you might enjoy reading them too...). **If in 2021 your software is capable of "understanding" and "abstracting/summarizing" these PDF files, please send me an email** to `<EMAIL>` explaining (in written English) how you built your neural network and what the output of your software is. There are several issues: * extracting the non-textual things (e.g. HTML tags from HTML input, or strings from a PDF file, or some [LaTeX](https://en.wikipedia.org/wiki/LaTeX) one). * detecting the human language used in your text (e.g. French or English or Russian or Chinese). [N-gram](https://en.wikipedia.org/wiki/N-gram) based techniques come to mind. * having a data structure or database representing a dictionary of at least a thousand significant words (in English or Russian or whatever human language you are interested in) related to the domain you want to handle (that dictionary would be different if you want to parse weather forecasts or documentation related to the automotive industry, since the word `pressure` or `speed` relates to different concepts.
Notice also that "weather" and "time" are expressed in French by the *same* word: "temps" - as in "le temps qu'il fait" for ongoing weather and "le temps qui passe" for the flow of time). A "Queen" is not the same for a chess player and an historian. A program translating -or just analyzing- chess comments from English won't use the same word for translating / understanding "[bishop](https://en.wikipedia.org/wiki/Bishop_(chess))" (in chess, "fou" in French, literally the crazy guy, unrelated to religion; in Russian chess books it would be "слон", literally an elephant) as another program translating / analyzing historical comments from English (e.g. about [<NAME>](https://en.wikipedia.org/wiki/Mary,_Queen_of_Scots)). * modeling inside your software some domain-specific knowledge related to your analyzed text, since you would handle weather forecast text differently from textual comments on chess competitions, or textual exercises in any computer science or programming book (like [CLRS](https://en.wikipedia.org/wiki/Introduction_to_Algorithms)). You could use some [frame](https://en.wikipedia.org/wiki/Frame_(artificial_intelligence))-based representations, like in [RefPerSys](http://refpersys.org/) or in [CyC](https://en.wikipedia.org/wiki/Cyc). * building a [semantic network](https://en.wikipedia.org/wiki/Semantic_network) representing the input text. I believe you might need some prior one representing domain-specific knowledge in the area of the analyzed text (e.g. a program analyzing comments on chess games needs to know the rules of chess; another program analyzing [StackOverflow](https://stackoverflow.com/) answers probably needs to know something about operating systems in general). I think that in English "overflow" or "overheating" mean *very different things* to software developers and to weather forecasters or climate experts.
Look also for inspiration in [this blog](http://bootstrappingartificialintelligence.fr/WordPress3/) of the late Jacques Pitrat. He wrote an interesting book on your topic. You might look inside the [DECODER](https://www.decoder-project.eu/) European project, and read more about [expert systems](https://en.wikipedia.org/wiki/Expert_system) and their [inference engine](https://en.wikipedia.org/wiki/Inference_engine) and [knowledge bases](https://en.wikipedia.org/wiki/Knowledge_base). Your project could give you some PhD. ------------------------------------- You certainly need several years of work to achieve your goals. I suggest contacting some academic in your area to be your advisor. Notice that on Linux the [`pdftotext` software](https://linux.die.net/man/1/pdftotext) extracts text from PDF files. It is open-source, but I won't say it is AI software. However, you could use it through [popen(3)](https://man7.org/linux/man-pages/man3/popen.3.html). See also [regex(7)](https://man7.org/linux/man-pages/man7/regex.7.html). BTW, the [PDF](https://en.wikipedia.org/wiki/PDF) specification is public as [ISO 32000-2:2017](https://www.iso.org/standard/63534.html) (and is related to [PostScript](https://en.wikipedia.org/wiki/PostScript)). Get it and read it, and see also this [youtube video](https://youtu.be/KmP7pbcAl-8) or [this 978-page document](https://www.adobe.com/content/dam/acom/en/devnet/pdf/pdfs/pdf_reference_archives/PDFReference.pdf). On Linux, most PDF files can usually be inspected with [od(1)](https://man7.org/linux/man-pages/man1/od.1.html) or [less(1)](https://man7.org/linux/man-pages/man1/less.1.html). My HP Office Pro 8610 printer (connected to a Linux desktop) is capable of printing some PDF and of scanning into a PDF file. But if I print some PDF file on paper and scan it, the resulting PDF file changes a lot, even if visually it looks the same.
Notice that some drawings -or photos- could be embedded in a PDF file and appear to a non-blind human reader as letters. Upvotes: 1 <issue_comment>username_2: After a long wait and some digging, I accidentally found what I was looking for. In 2015, Polish researcher **<NAME>** published an article presenting CERMINE, a solution for the posted problem. His solution is SVM-based, but the article gives good insights for an alternative neural-network version. The article is open access and can be found on the [Springer website](https://link.springer.com/article/10.1007/s10032-015-0249-8), while all the source code is available on [GitHub](https://github.com/CeON/CERMINE). Upvotes: 2
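As a non-learning baseline for step 2 of the question, a tiny heuristic filter can already drop obvious header/footer lines from a `pdftotext`-style dump. The rules below are my own illustrative guesses (bare page numbers, very short non-sentence fragments), not CERMINE's method:

```python
import re

def strip_noise(lines):
    """Keep likely body-text lines; drop page numbers and short footer fragments.

    These rules are illustrative guesses, not a validated extraction method.
    """
    kept = []
    for line in lines:
        text = line.strip()
        if not text:
            continue  # blank line
        if re.fullmatch(r"(page\s+)?\d+", text, flags=re.IGNORECASE):
            continue  # bare page number such as "3" or "Page 4"
        if len(text) < 20 and not text.endswith((".", ":", "?")):
            continue  # short fragment: probably a running header/footer
        kept.append(text)
    return " ".join(kept)
```

A real system would need column-aware reordering as well; this sketch only shows the filtering idea.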
2020/11/18
2,322
8,167
<issue_start>username_0: If there are two different optimal policies $\pi\_1, \pi\_2$ in a reinforcement learning task, will the linear combination (or [affine combination](https://en.wikipedia.org/wiki/Affine_combination)) of the two policies $\alpha \pi\_1 + \beta \pi\_2, \alpha + \beta = 1$ also be an optimal policy? Here I give a simple demo: In a task, there are three states $s\_0, s\_1, s\_2$, where $s\_1, s\_2$ are both terminal states. The action space contains two actions $a\_1, a\_2$. An agent starts from $s\_0$; it can choose $a\_1$, arriving at $s\_1$ and receiving a reward of $+1$. In $s\_0$, it can also choose $a\_2$, arriving at $s\_2$ and receiving a reward of $+1$. In this simple demo task, we can first derive two different optimal policies $\pi\_1$, $\pi\_2$, where $\pi\_1(a\_1|s\_0) = 1$, $\pi\_2(a\_2 | s\_0) = 1$. The combination of $\pi\_1$ and $\pi\_2$ is $\pi: \pi(a\_1|s\_0) = \alpha, \pi(a\_2|s\_0) = \beta$. $\pi$ is an optimal policy, too, because any policy in this task is an optimal policy.
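For the demo task described in the question, the claim can be checked numerically; a minimal sketch (the function name is mine) computing the expected return of the mixture policy $\pi(a\_1|s\_0) = \alpha$, $\pi(a\_2|s\_0) = 1 - \alpha$:

```python
def expected_return(alpha):
    # Demo MDP from the question: from s0, action a1 -> s1 (+1), a2 -> s2 (+1).
    # Mixture policy: pi(a1|s0) = alpha, pi(a2|s0) = 1 - alpha.
    reward = {"a1": 1.0, "a2": 1.0}
    return alpha * reward["a1"] + (1 - alpha) * reward["a2"]

# Every mixture achieves the optimal value of 1 in this demo.
assert all(abs(expected_return(a / 10) - 1.0) < 1e-9 for a in range(11))
```

This only verifies the toy example from the question, not the general claim.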
2020/11/18
2,376
8,768
<issue_start>username_0: Background ---------- From my understanding (and following along with [this blog post](http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/)), (deep) neural networks apply transformations to the data such that the data's representation to the next layer (or classification layer) becomes more separable. We can then apply a simple classifier to the representation to chop up the regions where the different classes exist (as shown by [this blog post](https://medium.com/@vivek.yadav/how-neural-networks-learn-nonlinear-functions-and-classify-linearly-non-separable-data-22328e7e5be1)). If this is true, and say we have some noisy data where the classes are not easily separable, would it make sense to push the input to a higher dimension, so we can more easily separate it later in the network? For example, I have some tabular data that is a bit noisy, say with 50 dimensions (input size of 50). To me, it seems logical to project the data to a higher dimension, making it easier for the classifier to separate. In essence, I would project the data to, say, 60 dimensions (layer output dim = 60), so the network can represent the data with more dimensions, allowing us to linearly separate it. (I find this similar to how SVMs can classify the data by pushing it to a higher dimension.) Question -------- Why, if the above is correct, do we not see many neural network architectures projecting the data into higher dimensions first and then reducing the size of each layer thereafter? I learned that if we have more hidden nodes than input nodes, the network will memorize rather than generalize.
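A tiny sketch of the intuition in the question (the lift and the weights below are hand-picked for illustration, not learned): the XOR pattern is not linearly separable in 2-D, but appending the product feature $x\_1 x\_2$ lifts it to 3-D, where a single linear threshold suffices, much like the SVM analogy mentioned in the question. A trained hidden layer would learn such a mapping instead of having it hand-coded:

```python
def lift(x1, x2):
    # Project a 2-D point into 3-D by appending the product feature.
    return (x1, x2, x1 * x2)

def linear_classifier(z, w=(0.0, 0.0, -1.0), b=0.5):
    # A single hyperplane in the lifted space: 1 if w . z + b > 0 else 0.
    # Weights chosen by hand for XOR on inputs in {-1, +1}.
    s = sum(wi * zi for wi, zi in zip(w, z)) + b
    return 1 if s > 0 else 0

# XOR on {-1, +1} inputs: label 1 iff the signs differ.
points = [(-1, -1, 0), (-1, 1, 1), (1, -1, 1), (1, 1, 0)]
```

In 2-D no single hyperplane can reproduce these four labels, but in the lifted space the third coordinate alone decides the class.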
2020/11/20
2,071
7,518
<issue_start>username_0: I am building my first ANN from scratch. I know that I need a transfer function and I want to use the sigmoid function as my teacher recommended that. That function can be between 0 and 1, but my input values for the network are between -5 and 20. Someone told me that I need to scale the function so that it is in the range of -5 and 20 instead of 0 and 1. Is this true? Why?<issue_comment>username_1: > > How to extract the main content text from a formated text file? > > > ### I am not sure that just a neural network is the best approach to your problem. Traditional [natural language processing](https://en.wikipedia.org/wiki/Natural_language_processing) software are using something else, and generally using a complex mix of several techniques. I am supposing you are processing written text available as some file (in a file format you are *very* familiar with, e.g. [OOXML](https://en.wikipedia.org/wiki/Office_Open_XML) or [PDF](https://en.wikipedia.org/wiki/PDF) or [HTML5](https://en.wikipedia.org/wiki/HTML5)). Read the [wikipage on natural-language understanding](https://en.wikipedia.org/wiki/Natural-language_understanding) and the one on [parse trees](https://en.wikipedia.org/wiki/Parse_tree) (or concrete syntax trees). BTW, you might use [LaTeX](https://www.latex-project.org/) or the [Lout](https://en.wikipedia.org/wiki/Lout_(software)) formatter to produce some PDF file. Both are open-source software (easily available on most Linux distributions, including [Debian](https://debian.org/) or [Ubuntu](https://ubuntu.com/)). I recommend you to try generating some PDF file using them, and experiment on the generated PDF file. And a lot of AI papers are available (as preprints) in PDF form. You could also use, as a PDF input to experiment your software, [this](http://starynkevitch.net/Basile/bismon-chariot-doc.pdf) or [that](http://starynkevitch.net/Basile/refpersys-design.pdf) draft reports (you might enjoy reading them too...). 
**If in 2021 your software is capable of "understanding" and "abstracting/summarizing" these PDF files, please send me an email** to `<EMAIL>` explaining (in written English) how you did build your neural network and what is the output of your software. There are several issues: * extracting the non-textual things (e.g. HTML tags from HTML input, or strings from a PDF file, or some [LaTeX](https://en.wikipedia.org/wiki/LaTeX) one). * detecting the human language used in your text (e.g. French or English or Russian or Chinese). [N-gram](https://en.wikipedia.org/wiki/N-gram) based techniques come to mind. * having a data structure or database representing a dictionnary of a thousand (at least) of significant words (in English or Russian or whatever human language you are interested in) related to the domain you want to handle (that dictionary would be different if you want to parse weather forecasts or documentation related to the automotive industry, since the word `pressure` or `speed` relates to different concepts. Notice also that "weather" and "time" are expressed in French by the *same* word: "temps" - as in "le temps qu'il fait" for ongoing weather and "le temps qui passe" for the flow of time). A "Queen" is not the same for a chess player and an historian. A program translating -or just analyzing- chess comments from English won't use the same word for translating / understanding "[bishop](https://en.wikipedia.org/wiki/Bishop_(chess))" (in chess, "fou" in French, literally the crazy guy, unrelated to religion; in Russian chess books it would be "слон", literally an elephant) than another program translating / analyzing historical comments from English (e.g. about [<NAME>](https://en.wikipedia.org/wiki/Mary,_Queen_of_Scots)). 
* modeling inside your software some domain-specific knowledge related to your analyzed text, since you would handle weather forecast text differently from textual comments on chess competitions, or textual exercises in any computer science or programming book (like [CLRS](https://en.wikipedia.org/wiki/Introduction_to_Algorithms)). You could use some [frame](https://en.wikipedia.org/wiki/Frame_(artificial_intelligence))-based representations, like in [RefPerSys](http://refpersys.org/) or in [CyC](https://en.wikipedia.org/wiki/Cyc). * building a [semantic network](https://en.wikipedia.org/wiki/Semantic_network) representing the input text. I believe you might need some prior one representing domain-specific knowledge in the area of the analyzed text (e.g. a program analyzing comments on chess games needs to know the rules of chess; another program analyzing [StackOverflow](https://stackoverflow.com/) answers probably needs to know something about operating systems in general). I think that in English "overflow" or "overheating" mean *very different concepts* to software developers and to weather forecasters or climate experts. Look also for inspiration into [this blog](http://bootstrappingartificialintelligence.fr/WordPress3/) of the late <NAME>. He wrote an interesting book on your topic. You might look inside the [DECODER](https://www.decoder-project.eu/) European project, and read more about [expert systems](https://en.wikipedia.org/wiki/Expert_system) and their [inference engine](https://en.wikipedia.org/wiki/Inference_engine) and [knowledge bases](https://en.wikipedia.org/wiki/Knowledge_base). Your project could give you some PhD. ------------------------------------- You certainly need several years of work to achieve your goals. I suggest contacting some academic in your area to be your advisor. Notice that on Linux the [`pdftotext` software](https://linux.die.net/man/1/pdftotext) extracts text from PDF files.
It is open-source, but I won't say it is an AI software. However, you could use it through [popen(3)](https://man7.org/linux/man-pages/man3/popen.3.html). See also [regex(7)](https://man7.org/linux/man-pages/man7/regex.7.html). BTW, the [PDF](https://en.wikipedia.org/wiki/PDF) specification is public as [ISO 32000-2:2017](https://www.iso.org/standard/63534.html) (and is related to [PostScript](https://en.wikipedia.org/wiki/PostScript)). Get it and read it, and see also this [youtube video](https://youtu.be/KmP7pbcAl-8) or [this 978 pages document](https://www.adobe.com/content/dam/acom/en/devnet/pdf/pdfs/pdf_reference_archives/PDFReference.pdf). On Linux, most PDF files can usually be inspected with [od(1)](https://man7.org/linux/man-pages/man1/od.1.html) or [less(1)](https://man7.org/linux/man-pages/man1/less.1.html). My HP Office Pro 8610 printer (connected to a Linux desktop) is capable of printing some PDF and of scanning into a PDF file. But if I print some PDF file on paper and scan it, the resulting PDF file changes a lot, even if visually it looks the same. Notice that some drawings -or photos- could be embedded in a PDF file, and appear to a non-blind human reader as letters. Upvotes: 1 <issue_comment>username_2: After a long wait and some digging, I accidentally found what I was looking for. In 2015, Polish researcher **<NAME>** published an article presenting CERMINE, a solution to the posted problem. His solution is SVM-based, but the article gives good insights for an alternate Neural Network version. The article is open access and can be found on the [Springer website](https://link.springer.com/article/10.1007/s10032-015-0249-8), while all the source code is available on [GitHub](https://github.com/CeON/CERMINE). Upvotes: 2
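Coming back to the question that opened this thread (the sigmoid and inputs in the range -5 to 20): the usual practice is to rescale the *inputs* into a small range such as $[0, 1]$, not to rescale the activation function itself. A minimal numpy sketch (the range $[-5, 20]$ is taken from the question; the function name is mine):

```python
import numpy as np

def min_max_scale(x, lo=-5.0, hi=20.0):
    """Map raw inputs from [lo, hi] to [0, 1] before feeding them to the network."""
    return (x - lo) / (hi - lo)

raw = np.array([-5.0, 7.5, 20.0])
scaled = min_max_scale(raw)
# -5 -> 0.0, 7.5 -> 0.5, 20 -> 1.0
```

The sigmoid output layer stays in $[0, 1]$; if the *targets* live in another range, they are rescaled the same way (and the predictions mapped back).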
2020/11/20
369
1,561
<issue_start>username_0: I have to do a project that detects fabric surface errors, and I will use machine learning methods to deal with it. I have a dataset that includes around six thousand fabric surface images of size 256x256. This dataset is labeled: one thousand of the images were labeled as NOK, meaning a fabric surface with an error, and the rest were labeled as OK, meaning a fabric surface without an error. I read a lot of papers about fabric surface error detection with machine learning methods, and I saw that "autoencoders" are used to do it. But as far as I have seen, autoencoders are used in unsupervised learning models, without labels. I need to do it with supervised learning models. Is there any model that I can use for fabric surface error detection with images in supervised learning? Can autoencoders be used for it, or is there a better model to do it?<issue_comment>username_1: [Convolutional Neural Networks are mostly used](https://developers.google.com/machine-learning/practica/image-classification/convolutional-neural-networks) for all kinds of computer vision tasks. Here you can find a [tutorial on how to train a CNN for image classification](https://databasecamp.de/en/use-case/cnn-in-tensorflow) from scratch. Upvotes: 0 <issue_comment>username_2: Papers With Code has a great summary of the [tasks in computer vision](https://paperswithcode.com/area/computer-vision) and their respective state-of-the-art models. Also the [tasks page](https://huggingface.co/tasks) from HuggingFace serves as a great reference. Upvotes: 1
2020/11/21
1,767
5,351
<issue_start>username_0: I am currently studying the textbook *Neural Networks and Deep Learning* by <NAME>. Chapter **1.2.1.2 Relationship with Support Vector Machines** says the following: > > The perceptron criterion is a shifted version of the hinge-loss used in support vector machines (see Chapter 2). The hinge loss looks even more similar to the zero-one loss criterion of Equation 1.7, and is defined as follows: > $$L\_i^{svm} = \max\{ 1 - y\_i(\overline{W} \cdot \overline{X}\_i), 0 \} \tag{1.9}$$ > Note that the perceptron does not keep the constant term of $1$ on the right-hand side of Equation 1.7, whereas the hinge loss keeps this constant within the maximization function. This change does not affect the algebraic expression for the gradient, but it does change which points are lossless and should not cause an update. The relationship between the > perceptron criterion and the hinge loss is shown in Figure 1.6. This similarity becomes particularly evident when the perceptron updates of Equation 1.6 are rewritten as follows: > $$\overline{W} \Leftarrow \overline{W} + \alpha \sum\_{(\overline{X}, y) \in S^+} y \overline{X} \tag{1.10}$$ > Here, $S^+$ is defined as the set of all misclassified training points $\overline{X} \in S$ that satisfy the condition $y(\overline{W} \cdot \overline{X}) < 0$. This update seems to look somewhat different from the perceptron, because the perceptron uses the error $E(\overline{X})$ for the update, which is replaced with $y$ in the update above. A key point is that the (integer) error value $E(X) = (y − \text{sign}\{\overline{W} \cdot \overline{X} \}) \in \{ −2, +2 \}$ can never be $0$ for misclassified points in $S^+$. Therefore, we have $E(\overline{X}) = 2y$ *for misclassified points*, and $E(X)$ can be replaced with $y$ in the updates after absorbing the factor of $2$ within the learning rate. 
> > > Equation 1.6 is as follows: > > $$\overline{W} \Leftarrow \overline{W} + \alpha \sum\_{\overline{X} \in S} E(\overline{X})\overline{X}, \tag{1.6}$$ > where $S$ is a randomly chosen subset of training points, $\overline{X} = [x\_1, \dots, x\_d]$ is a data instance (vector of $d$ feature variables), $\overline{W} = [w\_1, \dots, w\_d]$ are the weights, $\alpha$ is the learning rate, and $E(\overline{X}) = (y - \hat{y})$ is an error value, where $\hat{y} = \text{sign}\{ \overline{W} \cdot \overline{X} \}$ is the prediction and $y$ is the observed value of the binary class variable. > > > Equation 1.7 is as follows: > > $$L\_i^{(0/1)} = \dfrac{1}{2} (y\_i - \text{sign}\{ \overline{W} \cdot \overline{X\_i} \})^2 = 1 - y\_i \cdot \text{sign} \{ \overline{W} \cdot \overline{X\_i} \} \tag{1.7}$$ > > > And figure 1.6 is as follows: > > [![enter image description here](https://i.stack.imgur.com/nmMyU.png)](https://i.stack.imgur.com/nmMyU.png) > > > Figure 1.6 looks unclear to me. What is figure 1.6 showing, and how is it relevant to the point that the author is trying to make?<issue_comment>username_1: I can't fully explain the part because I forgot what it talks about. However, regarding the hinge loss, it is basically allowing your SVM to tolerate misclassifications without increasing the cost function. For example, you give someone 1 dollar or 1 euro. You can forgive them, you tolerate it. Your hinge loss is 0 for lending someone 1 dollar. However, if you give them 10 dollars or 100, you will ask them to refund you ASAP because you can't tolerate that much loss!
[![enter image description here](https://i.stack.imgur.com/Nj05a.png)](https://i.stack.imgur.com/Nj05a.png) Upvotes: -1 <issue_comment>username_2: Figure 1.6 is depicting that the hinge-loss used in SVMs (Eqn 1.9): $$L\_i^{svm} = \text{max}\{1 - y\_i (\overline{W} \cdot \overline{X}\_i), 0\} \tag{1.9}$$ is a shifted version (+1 to the right) of the loss function (Eqn 1.8) used to optimize the perceptron criterion (see footnote (1)): $$L\_i = \text{max}\{-y\_i (\overline{W} \cdot \overline{X}\_i), 0\} \tag{1.8}$$ Using Figure 1.6 and the accompanying discussion in Section 1.2.1.2, the author (<NAME>) is trying to show that if you make the implicit training process of the perceptron explicit (with an objective function, in his case, a loss function defined by Eqn 1.8), then one realises (in his words): > > that the perceptron is fundamentally not very different from well-known machine learning algorithms like the support vector machine in spite of its different origins. > > > ### Footnotes (1) The *perceptron criterion* was introduced in Section 3.5.1 of [Neural Networks for Pattern Recognition](https://www.academia.edu/download/55593223/_Christopher_M._Bishop__Neural_Networks_for_Patterb-ok.org.pdf) by <NAME> as a *continuous*, piecewise linear error function given by: $$E^{\text{perc}}(\mathbf{w}) = - \sum\_{\phi^n \in \mathcal{M}} \mathbf{w}^T (\phi^n t^n)$$ Bishop suggested that this objective function is minimised when training a perceptron. Aggarwal references this work as an example of how the perceptron training process implicitly minimises this error function, using it as a segue into his own derivation of Eqn 1.8 before encouraging the reader to check that the gradient of Eqn 1.8 leads to the perceptron update (i.e. $\overline{W} \Leftarrow \overline{W} - \alpha \nabla\_{W} L\_i$). Upvotes: 0
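Numerically, the shift in Figure 1.6 is easy to verify: the hinge loss at margin $m = y\_i(\overline{W} \cdot \overline{X}\_i)$ equals the perceptron criterion at $m - 1$. A small pure-numpy sketch (the function names are mine):

```python
import numpy as np

def perceptron_criterion(margin):
    # Eqn 1.8: max(-y*(w.x), 0) -- zero as soon as the point is on the correct side
    return np.maximum(-margin, 0.0)

def hinge_loss(margin):
    # Eqn 1.9: max(1 - y*(w.x), 0) -- still positive until the margin exceeds 1
    return np.maximum(1.0 - margin, 0.0)

margins = np.array([-2.0, 0.0, 0.5, 2.0])
print(perceptron_criterion(margins))  # [2. 0. 0. 0.]
print(hinge_loss(margins))            # [3.  1.  0.5 0. ]
```

Both losses have the same gradient $-y\_i \overline{X}\_i$ wherever they are positive; the hinge loss is simply nonzero on a larger set of points (those inside the margin), which matches the text's remark that the shift "does not affect the algebraic expression for the gradient, but it does change which points are lossless".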
2020/11/22
942
3,218
<issue_start>username_0: I am diving into data-to-text generation for long articles (> 1000 words). After creating a template and filling it with data, I am currently going down to paragraph level and adding different paragraphs, which are randomly selected and put together. I also added, on the word level, different outputs for date, time and number formats. The challenge I see is that, when creating large amounts of such generated texts, they become boring to read as the uniqueness for the reader goes down. Furthermore, I also think it's easy to detect that such texts have been autogenerated. However, I still have to validate this hypothesis. I was wondering if there is an even better method to bring variability into such a text? Can you suggest any methods, papers, resources or share your experience within this field. I highly appreciate your replies!
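The pipeline the question describes (template filling, randomly selected paragraph variants, varied date/number surface formats) can be sketched roughly as follows; the template strings and data fields are invented for illustration:

```python
import random

# Hypothetical paragraph variants for a single slot; a real system would have many
# more variants per slot, and many slots per article.
INTRO_VARIANTS = [
    "On {date}, the temperature reached {temp} degrees.",
    "The temperature climbed to {temp} degrees on {date}.",
]
# Different surface formats for the same date, as mentioned in the question.
DATE_FORMATS = ["2020-11-22", "22 Nov 2020", "November 22, 2020"]

def generate(temp, rng):
    """Pick one paragraph variant and one date format, then fill the template."""
    template = rng.choice(INTRO_VARIANTS)
    date = rng.choice(DATE_FORMATS)
    return template.format(date=date, temp=temp)

rng = random.Random(0)  # seeded for reproducibility
print(generate(21, rng))
```

The combinatorics multiply (variants per slot x slots x surface formats), but since every output is drawn from the same finite pool, repeated structure still shows through at scale, which is exactly the staleness problem the question raises.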
2020/11/23
546
2,072
<issue_start>username_0: I know the original Transformer and the GPT (1-3) use two slightly different **positional encoding** techniques. More specifically, in GPT they say positional encoding is *learned*. What does that mean? OpenAI's papers don't go into detail very much. How do they really differ, mathematically speaking?<issue_comment>username_1: As far as I understood, the difference is the following: original Transformers use a fixed type of encoding, based on sine/cosine functions. On the other hand, GPT produces two embedding vectors: one for the input tokens, as usual in language models, and another for the token positions themselves. Upvotes: 0 <issue_comment>username_2: The purpose of introducing positional encoding is to insert a notion of the location of a given token in the sequence. Without it, due to permutation equivariance (symmetry under token permutation), there would be no notion of relative order inside a sequence. Given a token at the $\text{pos}$-th position, we would like to make the model understand that this token is at that particular position. See a pretty nice blog post here - <https://kazemnejad.com/blog/transformer_architecture_positional_encoding/>. **Fixed encoding** In the [original Transformer](https://arxiv.org/abs/1706.03762) one uses a fixed map from the token position $i$ to the embedding vector added to the original embedding: $$ \begin{aligned} PE(\text{pos}, 2i) &= \sin(\text{pos} / 10000^{2i / d\_{\text{model}}}) \\ PE(\text{pos}, 2i + 1) &= \cos(\text{pos} / 10000^{2i / d\_{\text{model}}}) \end{aligned} $$ Here $\text{pos}$ is an index of the token in the sequence, and $2i, 2i+1$ correspond to the dimension inside the embedding. **Learned encoding** Another strategy is to make the map from $\text{pos}$ to the embedding vector of dimension $d\_{\text{model}}$ *learnable*.
One initializes, for each position from $0$ to $\text{max\_length}$, a vector of positional embeddings, and during training these vectors are updated by gradient descent. Upvotes: 4 [selected_answer]
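As a concrete illustration, here is a minimal numpy sketch of the fixed sinusoidal table defined above; the learned variant simply replaces this precomputed table with a trainable embedding matrix of the same $(\text{max\_length}, d\_{\text{model}})$ shape:

```python
import numpy as np

def sinusoidal_positional_encoding(max_length, d_model):
    # PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
    pos = np.arange(max_length)[:, None]   # shape (max_length, 1)
    i = np.arange(d_model // 2)[None, :]   # shape (1, d_model // 2)
    angles = pos / 10000 ** (2 * i / d_model)
    pe = np.zeros((max_length, d_model))
    pe[:, 0::2] = np.sin(angles)           # even dimensions
    pe[:, 1::2] = np.cos(angles)           # odd dimensions
    return pe

pe = sinusoidal_positional_encoding(max_length=50, d_model=16)
# pe is added to the token embeddings; row 0 is [0, 1, 0, 1, ...]
```

In the learned case one would instead declare, e.g., `nn.Embedding(max_length, d_model)` in PyTorch (or an equivalent weight matrix), index it by position, and let backpropagation update it like any other parameter.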
2020/11/26
1,202
4,538
<issue_start>username_0: I am using Keras (on top of TF 2.3) to train an image classifier. In some cases I have more than two classes, but often there are just two classes (either "good" or "bad"). I am using the `tensorflow.keras.applications.VGG16` class as base model with a custom classifier on top, like this: ``` input_layer = layers.Input(shape=(self.image_size, self.image_size, 3), name="model_input") base_model = VGG16(weights="imagenet", include_top=False, input_tensor=input_layer) model_head = base_model.output model_head = layers.AveragePooling2D(pool_size=(4, 4))(model_head) model_head = layers.Flatten()(model_head) model_head = layers.Dense(256, activation="relu")(model_head) model_head = layers.Dropout(0.5)(model_head) model_head = layers.Dense(len(self.image_classes), activation="softmax")(model_head) ``` As you can see in the last (output) layer I am using a `softmax` activation function. Then I compile the whole model with the `categorical_crossentropy` loss function and train with one-hot-encoded image data (labels). All in all the model performs quite well, I am happy with the results, I achieve over 99% test and validation accuracy with our data set. There is one thing I don't understand though: When I call `predict()` on the Keras model and look at the prediction results, then these are always either 0 or 1 (or at least very, very close to that, like 0.000001 and 0.999999). So my classifier seems to be quite sure whether an image belongs to either class "good" or "bad" (for example, if I am using only two classes). I was under the assumption, however, that usually these predictions are not that clear, more in terms of *"the model thinks with a probability of 80% that this image belongs to class A"* - but as said in my case it's always 100% sure. Any ideas why this might be the case?<issue_comment>username_1: Without more details about the nature of the dataset, it is impossible to know for sure. However, here are a few likely causes: 1. 
You were calling predict on training data, not testing data. The network will be a lot more sure about images that it trained on than on images it has never seen before. 2. Your model overfit the data. This can happen when you use an overly complex model on a small dataset. You may want to experiment with regularization. 3. You were looking at too small a sample of images. Did you run predict on every image, or just a few? If the latter, it is possible you just picked a sample that the network is very confident about. Upvotes: 1 <issue_comment>username_2: Traditional neural networks can be **over-confident** (i.e. give a probability close to $0$ or $1$) even when they are wrong, so you should **not** interpret the probability that it produces as a measure of *uncertainty* (i.e. as a measure of how much it is confident that the associated predicted class is the correct one), as that it is essentially wrong. See [this](https://ai.stackexchange.com/a/17671/2444) and [this](https://stats.stackexchange.com/a/437769/82135) answers for more details about this. Given that this overconfidence is not desirable in many scenarios (such as healthcare, where doctors also want to know how *confident* the model is about its predictions, in order to decide whether to give a certain medication to the patient or not), the ML community has been trying to incorporate *uncertainty quantification/estimation* in neural networks. If you are interested in this topic, you could read the paper [Weight Uncertainty in Neural Network](http://proceedings.mlr.press/v37/blundell15.html) (2015) by Blundell et al., which proposes a specific type of Bayesian neural network, i.e. a neural network that models the uncertainty over the actual values of the weights, from which we may also quantify/estimate the uncertainty about the inputs. This paper should not be too difficult to read if you are already familiar with the details of variational-autoencoders. 
So, the answer to your question is: yes, it's possible that the output probability is close to $1$ because neural networks can be over-confident. (I am assuming that the values returned by `tf.keras`'s `predict` method are probabilities: I don't remember anymore, so I assumed that you did not make any mistake). A similar question was already asked in the past [here](https://ai.stackexchange.com/q/17721/2444). [The accepted answer](https://ai.stackexchange.com/a/17722/2444) should provide more details about different types of uncertainty and solutions. Upvotes: 3 [selected_answer]
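The saturation described above is partly a property of the softmax itself: once the logits feeding the final layer are far apart, the output probabilities collapse towards 0 and 1. A small numpy illustration (the logit values are made up):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

print(softmax(np.array([1.0, 0.0])))   # ~[0.73, 0.27] -- moderate confidence
print(softmax(np.array([10.0, 0.0])))  # ~[0.99995, 0.00005] -- near-certain
```

A well-fit (or overfit) network easily pushes the logit gap to 10 or more on easy examples, which is why `predict()` returns values like 0.999999 even though the softmax output should not be read as a calibrated measure of uncertainty.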
2020/11/28
1,355
4,784
<issue_start>username_0: Convolutional Neural Networks (CNNs) operate over strict grid-like structures ($M \times N \times C$ images), whereas Graph Neural Networks (GNNs) can operate over all-flexible graphs, with an undefined number of neighbors and edges. On the face of it, GNNs appear to be neural architectures that can subsume CNNs. Are GNNs really generalized architectures that can operate arbitrary functions over arbitrary graph structures? An obvious follow-up - **How can we derive a CNN out of a GNN**? Since non-spectral GNNs are based on message-passing that employs *permutation-invariant* functions, is it possible to derive a CNN from a base-architecture of GNN?<issue_comment>username_1: Spectral Graph Convolution -------------------------- We use the [Convolution Theorem](https://en.wikipedia.org/wiki/Convolution_theorem) to define convolution for graphs. The Convolution Theorem states that the Fourier transform of the convolution of two functions is the pointwise product of their Fourier transforms: $$\mathcal{F}(w\*h) = \mathcal{F}(w) \odot \mathcal{F}(h) \tag{1}\label{1} $$ $$ w \* h = \mathcal{F}^{-1}(\mathcal{F}(w)\odot\mathcal{F}(h)) \tag{2}\label{2}$$ Here $w$ is the filter in the spatial (time) domain and $h$ is the signal in the spatial (time) domain. For images this signal $h$ is a $2D$ matrix, and in other cases $h$ can be a $1D$ signal. Assume we have $n$ nodes in a graph. In the graph Fourier transform, the eigenvalues carry the notion of frequency. $\Lambda$ is the $n \times n$ eigenvalue matrix and it is a diagonal matrix. We can write equation 2 as: $$w \* h = \phi(\phi^{T}w \odot \phi^{T}h) = \phi\hat{w}(\Lambda)\phi^{T}h \tag{3}$$ Here $\phi$ is the eigenvector matrix of the graph Laplacian $\in R^{n \times n}$, $\hat{w}(\Lambda)$ is the filter in the spectral (frequency) domain $\in R^{n \times n}$, a diagonal matrix, $h$ is the $1D$ graph signal $\in R^{n}$ in the spatial domain, and $w$ is the filter in the spatial domain $\in R^{n}$.
### Vanilla Spectral GCN We define the spatial convolutional layer such that, given layer $h^{l}$, the activation of the next layer is: $$h^{l+1}=\sigma(w^l\*h^l) \tag{4}\label{4},$$ where $\sigma$ represents a nonlinear activation, $w^l$ is a spatial filter and $h$ is the graph signal. We can perform the above equation in terms of the spectral graph convolution operation as: $$h^{l+1}=\sigma(\phi(\hat{w}^l\odot\hat{h}^l)) \tag{5}\label{5},$$ where $\hat{w}^l = \phi^{T}w^l$ and $\hat{h}^l = \phi^{T}h^l$ are the same filter and signal, but in the spectral (frequency) domain. In the case of the vanilla GCN this equation yields: $$ h^{l+1} = \sigma(\phi\hat{w}^{l}(\Lambda)\phi^{T}h^{l}) \tag{6}\label{6}$$ Now, we will learn $\hat{w}$ using backpropagation. This vanilla GCN has several limitations, like larger time complexity, and it does not guarantee the localization in the spatial domain that we get from CNN filters. Later works, such as SplineGCNs, [ChebNet](https://arxiv.org/pdf/1606.09375.pdf), [Kipf and Welling's GCN](https://arxiv.org/pdf/1609.02907.pdf), and many others, address those issues and try to solve them. Note that we can think of ChebNet and Kipf and Welling's GCN as message-passing systems, but, in the background, they are computing a spectral convolution; they also use some standard assumptions, which is why we do not need any eigenvectors and we implement them in the spatial domain, but still they are spectral convolutions. There is also another branch of graph convolution called spatial graph convolution. I only talked about spectral graph convolution. Upvotes: -1 <issue_comment>username_2: Yes, a CNN can be formalized as a specific kind of GNN where nodes are connected together in a 2D lattice structure and the outer edge is padded with zeros. Down-sampling techniques or pooling layers are an additional operation which remove edge nodes or low-activation nodes. Convolutional layers act in the same manner as GNN weights by comparing each node with its neighbors.
Yes, GNNs are generalized architectures of CNNs. A CNN is derivable by treating the image as a lattice graph and adding pooling and/or down-sampling layers. `N` convolutional kernels of size `w x w` correspond to nodes with feature vectors of length `N`, each interacting with a `w`-hop neighborhood; any node within distance `w` is considered adjacent. I am an expert on convolutional networks and not graph neural networks. Maybe these two articles would also be helpful. <https://towardsdatascience.com/understanding-graph-convolutional-networks-for-node-classification-a2bfdb7aba7b?gi=4c06cb6c8e30> <https://medium.com/@rmwkwok/gnn-notes-series-explain-graph-convolutional-networks-gcn-with-knowledge-in-cnn-b827be1c872b> My goal in writing this was to provide a more novice-accessible answer than the others on this post. Upvotes: 0
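The lattice-graph correspondence can be checked on a tiny 1D example: one aggregation step $h' = A h$ on a path graph with self-loops reproduces a 1D convolution with the uniform kernel $[1, 1, 1]$ (zero-padded at the boundary). A numpy sketch:

```python
import numpy as np

n = 6
h = np.arange(n, dtype=float)  # a 1D "graph signal", one scalar feature per node

# Path-graph adjacency with self-loops: node i is connected to i-1, i, i+1
A = np.eye(n)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

message_passing = A @ h  # sum each node's features over its closed neighborhood
conv = np.convolve(h, [1.0, 1.0, 1.0], mode="same")  # zero-padded 1D convolution

assert np.allclose(message_passing, conv)
```

Note that a plain permutation-invariant aggregation can only reproduce such *symmetric* (isotropic) kernels; to recover an arbitrary CNN kernel, the lattice edges must carry weights that depend on the neighbor's relative position, which is the caveat behind deriving a general CNN from a message-passing GNN.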
2020/11/30
1,403
5,145
<issue_start>username_0: The [Deep Learning](https://www.deeplearningbook.org/contents/convnets.html) book by Goodfellow et al. states > > Convolutional networks stand out as an example of neuroscientific principles influencing deep learning. > > > Are convolutional neural networks (CNNs) really inspired by the human brain? If so, how? In particular, in what structures within the brain do CNN-like neuron groupings occur?<issue_comment>username_1: In the eye, the retinal ganglion cells have a [receptive field](https://en.wikipedia.org/wiki/Receptive_field) that is equivalent to some types of convolution filters, most of them edge detectors. The brain is a big unknown: nobody knows how it organizes, memorizes, creates concepts, learns language, ... . Thus, it is not possible to establish a parallelism. In particular, the brain has a capacity to handle invariance in scale **and rotation** that CNNs are not able to reproduce. As a general remark about NNs and the brain: even though it is always said that "neural network cells" are "inspired" by biological neurons, there are critical differences that make this similarity only an "inspiration". Thus, a comparison of a CNN or any other kind of NN with the brain is always a fuzzy comparison. The biggest difference is probably the learning capacity: the human brain learns by itself, while the neural network needs an external system (the back-propagation algorithm) that feeds the NN with the learned parameters. Upvotes: 1 <issue_comment>username_2: Yes, CNNs are inspired by the human brain [[1](https://arxiv.org/pdf/1404.7828.pdf), [2](https://www.cs.princeton.edu/courses/archive/spr08/cos598B/Readings/Fukushima1980.pdf), [3](https://www.deeplearningbook.org/contents/convnets.html)]. More specifically, their operations, the convolution and pooling, are inspired by the human brain.
However, note that, nowadays, CNNs are mainly trained with gradient descent (GD) and back-propagation (BP), which seems **not** to be a biologically plausible way of learning, but, given the success of GD and BP, there have been attempts to connect GD and BP with the way humans learn [[4](https://brainscan.uwo.ca/research/cores/computational_core/uploads/11May2020-Lillicrap_NatNeuroRev_2020.pdf)]. The **neocognitron**, the first convolutional neural network [[1](https://arxiv.org/pdf/1404.7828.pdf)], proposed by **<NAME>** in 1979-1980, and described in the paper [Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position](https://www.cs.princeton.edu/courses/archive/spr08/cos598B/Readings/Fukushima1980.pdf), already uses convolutional and pooling (specifically, average pooling) layers [[1](https://arxiv.org/pdf/1404.7828.pdf)]. The neocognitron was inspired by the work of <NAME> Wiesel described in the 1959 paper [Receptive fields of single neurones in the cat's striate cortex](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1363130/). Here is an excerpt from Fukushima's 1980 paper. > > The mechanism of pattern recognition in the brain is little known, and it seems to be almost impossible to reveal it only by conventional physiological experiments. So, we take a slightly different approach to this problem. If we could make a neural network model which has the same capability for pattern recognition as a human being, it would give us a powerful clue to the understanding of the neural mechanism in the brain. In this paper, we discuss how to synthesize a neural network model in order to endow it an ability of pattern recognition like a human being. > > > Several models were proposed with this intention (Rosenblatt, 1962; Kabrisky, 1966; Giebel, 1971; Fukushima, 1975).
The response of most of these models, however, was severely affected by the shift in position and/or by the distortion in shape of the input patterns. Hence, their ability for pattern recognition > was not so high. > > > In this paper, we propose an improved neural network model. The structure of this network has been suggested by that of the visual nervous system of the vertebrate. This network is self-organized by "learning without a teacher", and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes without affected by their position nor by small distortion of their shapes. This network is given a nickname "neocognitron", because it is a further extention of the "cognitron", which also is a self-organizing multilayered neural network model proposed by the author before (Fukushima, 1975) > > > However, Fukushima did not train the neocognitron with gradient descent (and back-propagation) but with local learning rules (which are more biologically plausible), and that's probably why he doesn't get more credit, as I think he should. You should read at least Fukushima's paper for more details, which I will not replicate here. [Section 9.4 of the Deep Learning book](https://www.deeplearningbook.org/contents/convnets.html) also contains details about how CNNs are inspired by neuroscience findings. Upvotes: 3 [selected_answer]
2020/12/02
562
2,445
<issue_start>username_0: I have a network with nodes and links, each of them with a certain amount of resources (that can take discrete values) at the initial state. At random time steps, a service is generated, and, based on the agent's action, the network status changes, reducing some of those node and link resources. The number of all possible states that the network can have is too large to calculate, especially since there is the random factor when generating the services. Let's say that I set the state space large enough (for example, 5000), and I use Q-Learning for 1000 episodes. Afterwards, when I test the agent ($\max Q(s,a)$), what could happen if the agent faces a state that it did not encounter during the training phase?<issue_comment>username_1: Having too many states to actually visit is a common problem in RL. This is exactly why we often use function approximation. If you replace your Q-table with a good function approximator, such as a neural network, it should be able to generalize well to states it has not yet encountered. If you do not use a function approximator but stick with a table, the agent will have no idea what to do when it encounters a new state. For more information, see Reinforcement Learning by Sutton and Barto, chapter 9. Upvotes: 3 [selected_answer]<issue_comment>username_2: I will try to explain this problem with the very tangible example of chess. In chess, the number of possible states is any configuration that you can make with the pieces on the board. So, the starting position is a state, and after you make one move you are in a different state. The total number of chess states is more than $10^{100}$. It is therefore very unlikely that a chess bot has seen all the states in training when playing a match. So, how does the algorithm solve this? For the answer, we have to look at how an RL algorithm chooses the best move.
This obviously depends on the implementation of the algorithm, but, generally, the calculation of 'how good' a move is, is done with the use of an approximation taking into account the 'potential future reward'. If you capture the queen, that would probably be good (not a chess expert here), even though you have not seen this exact state before. If you go further than this, a network might be able to approximate what happens many moves in the future. The specifics come down to implementation, etc., but this is the gist of it. Upvotes: 1
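The gap between the two approaches in the selected answer can be sketched in a few lines of Python. This is a toy illustration, not a real RL setup: the feature map `phi`, the weight values, and the states below are made-up assumptions, chosen only to show that a table returns nothing for an unseen state while a function approximator still produces an estimate.

```python
# Toy contrast between a tabular Q-function and a linear function approximator.
# A table only knows state-action pairs it has visited; a linear model
# Q(s, a) = w_a . phi(s) produces an estimate for any state, seen or not.

def phi(state):
    """Hand-crafted state features (illustrative): a bias term plus raw values."""
    return [1.0] + [float(x) for x in state]

class TabularQ:
    def __init__(self):
        self.table = {}  # (state, action) -> value, filled only for visited pairs

    def value(self, state, action):
        return self.table.get((state, action))  # None for unseen pairs

class LinearQ:
    def __init__(self, n_features, n_actions):
        # One weight vector per action, initialised to zero.
        self.w = [[0.0] * n_features for _ in range(n_actions)]

    def value(self, state, action):
        return sum(wi * fi for wi, fi in zip(self.w[action], phi(state)))

tab = TabularQ()
tab.table[((1, 2), 0)] = 5.0      # the only state-action pair ever visited

lin = LinearQ(n_features=3, n_actions=2)
lin.w[0] = [0.5, 1.0, -0.2]       # pretend these weights were learned

unseen = (3, 7)                    # a state never encountered during training
print(tab.value(unseen, 0))        # None: the table has no idea
print(lin.value(unseen, 0))        # a finite estimate that generalises
```

A neural network plays the same role as the linear model here, just with a far more expressive function class.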
2020/12/05
1,267
4,685
<issue_start>username_0: In the news, DeepMind's AlphaFold is said to have solved the protein folding problem using neural networks, but isn't this a problem only optimised quantum computers can solve? To my limited understating, the issue is that there are too many variables (atomic forces) to consider when simulating how an amino acid chain would fold, in which case only a quantum computer can be used to simulate it. Is the neural network just making a very good estimate, or is it simulating the actual protein structure?<issue_comment>username_1: AlphaFold (version [1](https://www.nature.com/articles/s41586-019-1923-7) and [2](https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology)) **predicts** (so **estimates**) the 3D shape of the protein from the sequence of amino acids. AlphaFold's performance is measured with the [global distance test (GDT)](https://en.wikipedia.org/wiki/Global_distance_test), which is a measure of similarity between two protein structures (the prediction and the ground-truth) that ranges from 0 to 100. There is a [short video](https://www.youtube.com/watch?v=KpedmJdrTpY) and a [longer one](https://www.youtube.com/watch?v=gg7WjuFs8F4) (both by DeepMind) that summarise the issue of [protein folding](https://en.wikipedia.org/wiki/Protein_folding), how it is important, how well AlphaFold approximately solves it (in the competition [Critical Assessment of protein Structure Prediction (CASP)](https://en.wikipedia.org/wiki/CASP)), i.e. AlphaFold 2 achieves a median GDT score of 92.4 (and 87 on the hardest proteins), which is a lot higher than AlphaFold 1's GDT score of 58 (which was the highest achieved score at the time), where, according to [John Moult](http://moult.ibbr.umd.edu/) (president of CASP), a score around 90 is considered a satisfactory solution to the protein folding problem. 
You can find more details about AlphaFold 2 in [this DeepMind blog post](https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology) and about AlphaFold 1 in [this other blog post](https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery) or the [associated paper published in Nature this year](https://www.nature.com/articles/s41586-019-1923-7). You can find the code for AlphaFold 1 [here](https://github.com/deepmind/deepmind-research/tree/master/alphafold_casp13), but there are other community/open-source implementations. Despite the importance of the problem and achievement, there is clearly a lot of hype about this breakthrough (given also that it was achieved by DeepMind). This is also discussed in [this video](https://www.youtube.com/watch?v=W7wJDJ56c88&ab_channel=LexFridman) by <NAME>. Upvotes: 3 <issue_comment>username_2: > > the issue is that there are too many variables (atomic forces) to > consider when simulating how an amino acid chain would fold, in which > case only a quantum computer can be used to simulate it. > > > These many variables, taking as an example the ones you mentioned, the atomic forces, are somehow grouped in order to facilitate the calculations; thus, it is not necessary for absolutely everything to be simulated simultaneously. A notable example of this is when the K computer (one of the most powerful supercomputers in Japan, but far from being an optimized quantum computer) calculated the force needed to untie the DNA strands of histones (without taking into account the interactions between each nucleotide). 
[Multi Scale Modeling of Chromatin and Nucleosomes](https://youtu.be/4Z4KwuUfh0A) 1:37 - > > By treating multiple atoms as one single particle we can increase the > number of phenomena in our simulation > > > Basically, they start from the following question: "What is the maximum number of elements and interactions that we can exclude from an analysis object in a simulation in order to still preserve its properties without too much loss and with good accuracy?" Another example of how this is done is when they apply the Finite Element Method to study hemodynamics in the heart. They simply transform the heart into a set of regular tetrahedrons, yet still manage to simulate a very wide range of phenomena - such as the rate of consumption of ATP in each part of the heart and the variation in thickness of cardiac muscles during functioning. [Multi-scale Multi-physics Heart Simulator UT-Heart](https://youtu.be/2LPboySOSvo) In other words, they are far from wanting to simulate the heart in all its tissue details. On the contrary, they simplify as much as possible so that simulations are workable on our current supercomputers. Upvotes: 0
2020/12/06
876
3,336
<issue_start>username_0: In the paper [Attention Is All You Need](https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf), this section confuses me: > > In our model, we share the same weight matrix between the two embedding layers [in the encoding section] and the pre-softmax linear transformation [output of the decoding section] > > > Shouldn't the weights be different, and not the same? Here is my understanding: For simplicity, let us use the English-to-French translation task where we have $n^e$ number of English words in our dictionary and $n^f$ number of French words. * In the encoding layer, the input tokens are $1$ x $n^e$ one-hot vectors, and are embedded with a $n^e$ x $d^{model}$ learned embedding matrix. * In the output of the decoding layer, the final step is a linear transformation with weight matrix $d^{model}$ x $n^f$, and then applying softmax to get the probability of each French word, and choosing the French word with the highest probability. How is it that the $n^e$ x $d^{model}$ input embedding matrix shares the same weights as the $d^{model}$ x $n^f$ decoding output linear matrix? To me, it seems more natural for both these matrices to be learned independently from each other via the training data, right? Or am I misinterpreting the paper?<issue_comment>username_1: I found the answer by reading the paper referenced by that section, [Using the output embedding to improve language models](https://arxiv.org/pdf/1608.05859.pdf) > > Based on this observation, we propose three-way weight tying (TWWT), where the input embedding of the decoder, the output embedding of the decoder and the input embedding of the encoder are all tied. The single source/target vocabulary of this model is the union of both the source and target vocabularies. In this model, both in the encoder and decoder, all subwords are embedded in the same duo-lingual space.
> > > It seems like they learned a single embedding matrix ($n^e + n^f$) x $d^{model}$ in dimension. Upvotes: 3 [selected_answer]<issue_comment>username_2: Well actually, the dictionary does not contain entire words but rather [byte pair encodings](https://en.m.wikipedia.org/wiki/Byte_pair_encoding). At the end of chapter 3 of the [paper](https://arxiv.org/pdf/1608.05859.pdf) that you listed, they briefly mention that around 85-90% of the subwords are shared between English and French (see also Table 1). So in this case sharing the weights between the source embedding layer and the target embedding layer makes a lot of sense. I think that if you are in a different setting (e.g. English to Chinese), where your final layer contains $n^e + n^f$ outputs, then this doesn't make much sense, because during decoding you actually want to generate words only from the target language, and training this huge softmax layer might be really tricky and hard to do. Now, regarding sharing the weights between the **decoder** embedding layer and the **decoder** output layer: I think this is a very nice idea, and you can always do this regardless of the source language or the task altogether. However, you might want to be careful with the initialization of these weights. You can check out [this](https://username_2.github.io/posts/transformer/#token-embedding-layer) if you want to read more.
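As a rough sketch of what this weight tying means mechanically: a single matrix $E$ (vocab size x $d^{model}$) serves both as the input embedding (row lookup) and as the pre-softmax output projection (dot product of the hidden state with every row of $E$). Toy numbers and plain Python stand in for a real framework here; the vocabulary size, matrix values, and function names are all illustrative assumptions.

```python
import math

# One shared matrix E: row lookup on the way in, output projection on the way out.

def embed(E, token_id):
    """Input embedding: select the row of E for this token."""
    return E[token_id]

def output_logits(E, hidden):
    """Tied output projection: the logit for token v is <hidden, E[v]>."""
    return [sum(h * e for h, e in zip(hidden, row)) for row in E]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Tiny shared vocabulary of 4 subwords, d_model = 3 (toy numbers).
E = [[0.1, 0.2, 0.3],
     [0.4, 0.5, 0.6],
     [0.7, 0.8, 0.9],
     [1.0, 1.1, 1.2]]

h = embed(E, 2)                  # embed token 2; pretend it is the decoder state
probs = softmax(output_logits(E, h))
print(len(probs))                # one probability per subword in the shared vocab
```

The point is that no second matrix exists: gradients from the output layer and from the embedding lookup both update the same parameters.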
2020/12/08
653
2,255
<issue_start>username_0: I am working on LSTM and CNN to solve the time series prediction problem. I have seen some tutorial examples of time series prediction using CNN-LSTM. But I don't know if it is better than what I predicted using LSTM. Could using LSTM and CNN together be better than predicting using LSTM alone?<issue_comment>username_1: In my opinion, if your dataset has correlation with its neighbors and there is some kind of sequence involved in it, then CNN-LSTM would provide you with much better results than only CNN or LSTM. Upvotes: 0 <issue_comment>username_2: Many papers have been published on CNN, LSTM, and CNN-LSTM for time series. From the literature and my experience, I conclude that CNN-LSTM outperforms CNN and LSTM models. Here are two relevant papers on stock price time series forecasting: [<NAME>, <NAME>, <NAME>, <NAME>, & <NAME>. (2020). A CNN-LSTM-Based Model to Forecast Stock Prices. Complexity, 2020. https://doi.org/10.1155/2020/6622927](https://www.hindawi.com/journals/complexity/2020/6622927/) The authors' conclusions: > > This paper takes the relevant data of the Shanghai Composite Index as an example to verify the experimental results. The experimental results show that the CNN-LSTM has the highest forecasting accuracy and the best performance compared with the MLP, CNN, RNN, LSTM, and CNN-RNN. > > > [<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2021). Stock Price Forecast Based on CNN-BiLSTM-ECA Model. Scientific Programming, 1–20. https://doi.org/10.1155/2021/2446543](https://www.hindawi.com/journals/sp/2021/2446543/) The authors' conclusions: > > The proposed model is compared with CNN, LSTM, BiLSTM, CNNLSTM, CNN-BiLSTM, BiLSTM-ECA, and CNN-LSTMECA network models on three datasets. The experimental results show that the proposed model has the highest prediction accuracy and the best performance. 
> > > Upvotes: 2 [selected_answer]<issue_comment>username_3: A CNN can be used as a good feature extractor to help LSTM layers train efficiently. Normally, when your input contains a huge number of features (images, audio) and you need to project these features into a learnable lower-dimensional space, use a CNN before the LSTM. Upvotes: 0
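The "CNN as feature extractor in front of a recurrent layer" idea from the answers above can be sketched in plain Python. This is only a structural illustration under simplifying assumptions: a single 1-D convolution stands in for the CNN, a scalar tanh RNN cell stands in for the LSTM, and all weights are made-up numbers rather than trained parameters.

```python
import math

# CNN-LSTM sketch for a 1-D series: a convolution extracts local features,
# then a recurrent cell aggregates them over time into a single prediction.

def conv1d(series, kernel):
    """Valid 1-D convolution (really cross-correlation, as in most DL libraries)."""
    k = len(kernel)
    return [sum(series[i + j] * kernel[j] for j in range(k))
            for i in range(len(series) - k + 1)]

def rnn(features, w_x, w_h, b):
    """Scalar tanh recurrence over the extracted feature sequence."""
    h = 0.0
    for x in features:
        h = math.tanh(w_x * x + w_h * h + b)
    return h

series = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0]
feats = conv1d(series, kernel=[-1.0, 0.0, 1.0])  # local slope detector
pred = rnn(feats, w_x=0.8, w_h=0.3, b=0.0)       # last hidden state as the forecast
print(len(feats))
```

In a real model the convolution would have many channels, the recurrence would be an LSTM, and all of it would be trained end to end; the pipeline shape, however, is exactly this: convolve first, recur second.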
2020/12/08
637
2,483
<issue_start>username_0: I want to create a Deep Learning model that measures the distance between the camera and certain objects in an image. Is it possible? Please, let me know some resources related to this task.<issue_comment>username_1: You can use the OpenCV library with Python to [find the distance](https://www.pyimagesearch.com/2015/01/19/find-distance-camera-objectmarker-using-python-opencv/). You can also refer to this: [Vehicle detection and distance estimation](https://towardsdatascience.com/vehicle-detection-and-distance-estimation-7acde48256e1). Since you didn't mention a dataset, you may consider the datasets and methods in this [paper](https://arxiv.org/pdf/1806.10890.pdf). If your camera is fixed or has many objects in front of it, you might use the [nearest-object-around-camera approach](https://github.com/saivarshittha/latitude_longitude_bounds). This approach is extremely useful if you want to deal with latitudes and longitudes. Upvotes: 2 <issue_comment>username_2: **In general, calculating the distance between camera and object is impossible if you don't have further scene-dependent information.** To my knowledge you have 3 options: **Stereo Vision** If you have 2 cameras looking at the same scene from different points of view, you can calculate the distance with classical Computer Vision algorithms. This is called [stereo vision](https://en.wikipedia.org/wiki/Computer_stereo_vision), or also [multiview geometry](https://en.wikipedia.org/wiki/Epipolar_geometry). Stereo vision is the reason why humans can infer the distance to objects around them (because we have 2 eyes). **Structure from Motion** You move your camera, thereby changing your viewpoint, and can essentially do stereo mapping over time. [Structure from Motion](https://en.wikipedia.org/wiki/Structure_from_motion) **Scene Understanding** Why is it then still possible for a one-eyed person to infer depth to some extent? Because humans have lots of scene-dependent understanding.
If you see a rubber duck that takes up half of your field of view, you know it's pretty close, because you know a rubber duck is not big. If you don't know the size of rubber ducks, it is impossible to know whether you see a big rubber duck that is far away or a small rubber duck that is really close. This is where Deep Learning-based models come into play. A recent overview of monocular depth estimation can be found in [Zhao2020](https://arxiv.org/pdf/2003.06620.pdf). Upvotes: 4 [selected_answer]
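The single-camera approach from the first answer is a concrete example of injecting scene knowledge: it only works if you already know the object's real width. It boils down to triangle similarity, sketched here with made-up calibration numbers:

```python
# Triangle-similarity distance estimation with a single camera.
# It works only because we inject scene knowledge: the object's real width.
# All numbers below are made-up calibration values for illustration.

def focal_length_px(known_distance, known_width, width_in_pixels):
    """Calibrate once from a reference photo taken at a known distance."""
    return (width_in_pixels * known_distance) / known_width

def distance_to_object(focal_px, known_width, width_in_pixels):
    """distance = (real width x focal length) / apparent width in pixels."""
    return (known_width * focal_px) / width_in_pixels

# Calibration shot: an object 30 cm wide, photographed 100 cm away, spans 300 px.
f = focal_length_px(known_distance=100.0, known_width=30.0, width_in_pixels=300.0)

# Later the same object spans only 150 px, so it must be twice as far away.
print(distance_to_object(f, known_width=30.0, width_in_pixels=150.0))  # 200.0
```

A deep model for monocular depth is effectively learning this kind of scene knowledge (typical object sizes, perspective cues) from data instead of having it hard-coded.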
2020/12/12
944
3,695
<issue_start>username_0: I ran into a 2019-Entrance Exam question as follows: [![enter image description here](https://i.stack.imgur.com/Z8Dd3.png)](https://i.stack.imgur.com/Z8Dd3.png) The answer mentioned is (4), but some searching on Google suggested to me that maybe (1) and (2) are equal to (4). Why would k-means be the algorithm with the highest bias? (Can you please also provide references to valid material to study more?)<issue_comment>username_1: I'm not an expert on clustering, but here's my take below. Note that this is only based on theoretical arguments; I haven't had enough clustering experience to say if this is generally true in practice. K-means vs GMM -------------- K-means has a higher bias than GMM because it is a special case of GMM. K-means specifically assumes the clustering is spherical (meaning each dimension is weighted equally important) and that the clustering problem is a hard clustering problem (each data point can only belong to one label). So, theoretically, K-means should perform equal to GMM (under very specific conditions) or worse. [More info](https://stats.stackexchange.com/a/269619/303597) K-means vs GMM (identity covariance matrix) ------------------------------------------- K-means has a higher bias than GMM (identity covariance matrix) because it is also a special case. K-means specifically assumes the hard clustering problem, but GMM does not. Because of this, GMM has stronger estimates for the mean of the centroids. More specifically, > > [GMM] estimates the cluster means as weighted means, not assigning observations in a crisp manner to one of the clusters. In this way it avoids the problem explained above and it will be consistent as ML estimator (in general this is problematic because of issues of degeneration of the covariance matrix, however not if you assume them spherical and equal).
> > > > > In practice, if you generate observations from a number of Gaussians with same spherical covariance matrix and different means, K-means will therefore overestimate the distances between the means, whereas the ML-estimator for the mixture model will not. > > > So, theoretically, K-means should perform equal to GMM (identity covariance matrix) or worse. [More info](https://stats.stackexchange.com/a/489615/303597) K-means vs Spectral clustering ------------------------------ K-means has a higher bias than spectral clustering because spectral clustering effectively uses K-means after processing more information from the matrices. > > Spectral clustering usually is spectral embedding, followed by k-means in the spectral domain. > > > > > So yes, it also uses k-means. But not on the original coordinates, but on an embedding that roughly captures connectivity. Instead of minimizing squared errors in the input domain, it minimizes squared errors on the ability to reconstruct neighbors. That is often better. > > > [More info](https://datascience.stackexchange.com/a/36387/108095) Upvotes: 1 <issue_comment>username_2: K-means uses a 0/1 rule for cluster assignment, i.e. a data point either belongs to a cluster or it does not. But sometimes the data points come from classes whose probability distribution functions overlap with each other. The hard clustering of K-means then leads to a truncation of the data-generating (original) probability distribution, which leads to the bias in the K-means algorithm. Spectral clustering does soft assignment, so it is less biased. This has been shown by example in this paper: <https://www.researchgate.net/publication/287829457_On_the_bias_and_inconsistency_of_K-means_clustering/link/567a04a608ae40c0e27df98c/download> Upvotes: 0
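The truncation effect described above can be illustrated numerically in plain Python (the means, standard deviation, sample size, and seed below are arbitrary choices for the demonstration): sampling two overlapping 1-D Gaussians and estimating their means by hard nearest-centre assignment pushes the estimated means further apart than the true ones.

```python
import random

# Hard-assignment bias demo: two overlapping 1-D Gaussians with true means
# 0 and 1. Assigning each point to the nearest true centre (a K-means-style
# E-step with the centres fixed at the truth) truncates both distributions,
# so the per-cluster means end up further apart than the true gap of 1.0.

random.seed(0)
a = [random.gauss(0.0, 1.0) for _ in range(5000)]   # true mean 0
b = [random.gauss(1.0, 1.0) for _ in range(5000)]   # true mean 1

threshold = 0.5                        # midpoint between the true means
left = [x for x in a + b if x < threshold]
right = [x for x in a + b if x >= threshold]

hard_sep = sum(right) / len(right) - sum(left) / len(left)
print(round(hard_sep, 2))              # clearly larger than the true gap of 1.0
```

A soft (responsibility-weighted) estimate, as in GMM, would not suffer this truncation, which is exactly the overestimation the quoted answer describes.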
2020/12/12
1,195
4,258
<issue_start>username_0: I have a difficult time understanding the "multi-head" notion in the original [transformer paper](https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf). What makes the learning in each head unique? Why doesn't the neural network learn the same set of parameters for each attention head? Is it because we break *query, key* and *value* vectors into smaller dimensions and feed each portion to a different head?<issue_comment>username_1: The reason each head is different is because they each learn a different set of weight matrices $\{ W\_i^Q, W\_i^K, W\_i^V \}$ where $i$ is the index of the head. To clarify, the input to each attention head is the same. For attention head $i$: \begin{align} Q\_i(x) &= x W\_i^Q \\ K\_i(x) &= x W\_i^K \\ V\_i(x) &= x W\_i^V \\ \text{attention}\_i(x) &= \text{softmax} \left(\frac{Q\_i(x) K\_i(x)^T}{\sqrt{d\_k}} \right) V\_i(x). \end{align} Notice that the input to each head is $x$ (either the semantic + positional embedding of the decoder input for the first decoder layer, or the output of the previous decoder layer). [More info](http://jalammar.github.io/illustrated-transformer/) The question as to why gradient descent learns each set of weight matrices $\{ W\_i^Q, W\_i^K, W\_i^V \}$ to be different across each attention head is very similar to ["Is there anything that ensures that convolutional filters end up the same?"](https://ai.stackexchange.com/questions/25109/is-there-anything-that-ensures-that-convolutional-filters-end-up-the-same/25111#25111), so maybe you might find the answer there helpful for you: > > No, nothing really prevents the weights from being different. In practice though they end up almost always different because it makes the model more expressive (i.e. more powerful), so gradient descent learns to do that. 
If a model has n features, but 2 of them are the same, then the model effectively has n−1 features, which is a less expressive model than that of n features, and therefore usually has a larger loss function. > > > Upvotes: 4 [selected_answer]<issue_comment>username_2: [Multiple attention heads](https://stackoverflow.com/a/66652733/9067615) in a single layer in a transformer is analogous to multiple kernels in a single layer in a CNN: they have the same architecture, and operate on the same feature-space, but since they are separate 'copies' with different sets of weights, they are hence 'free' to learn different functions. In a CNN this may correspond to different definitions of visual features, and in a Transformer this may correspond to different definitions of relevance:1 For example: | Architecture | Input | (Layer 1)Kernel/Head 1 | (Layer 1)Kernel/Head 2 | | --- | --- | --- | --- | | CNN | Image | Diagonal [edge-detection](https://en.wikipedia.org/wiki/Kernel_(image_processing)#Details) | Horizontal edge-detection | | Transformer | Sentence | Attends to next word | Attends from verbs to their direct objects | --- **Notes:** 1. There is no guarantee that these are human interpretable, but in many popular architectures they do map accurately onto linguistic concepts: > > While no single head performs well at many relations, we find that particular heads correspond remarkably well to particular relations. For example, we find heads that find direct objects of verbs, determiners of nouns, objects of prepositions, and objects of possessive pronouns... > > > > * [What Does BERT Look at? An Analysis of BERT’s Attention](https://www.aclweb.org/anthology/W19-4828/) (2019) > 2. Multiple heads were originally proposed as a way to mitigate the lack of descriptive power that a single head in self-attention has: > > In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions [...]
This makes it more difficult to learn dependencies between distant positions [12]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention... > > > > * [Attention is All You Need](https://arxiv.org/abs/1706.03762) (2017) > Upvotes: 2
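A minimal sketch of the point made in both answers, in plain Python with toy weight matrices (all numbers are made up; a real implementation would learn $W^Q_i, W^K_i, W^V_i$ by gradient descent and concatenate the head outputs): two heads receive the same input $x$ but, because their weight matrices differ, they produce different attention outputs.

```python
import math

# Two attention heads share the same input x but own separate weight matrices,
# so they compute different attention patterns over the same tokens.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def softmax_rows(A):
    out = []
    for row in A:
        m = max(row)
        exps = [math.exp(v - m) for v in row]
        s = sum(exps)
        out.append([e / s for e in exps])
    return out

def attention(x, Wq, Wk, Wv):
    """Scaled dot-product attention for one head: softmax(QK^T / sqrt(d_k)) V."""
    Q, K, V = matmul(x, Wq), matmul(x, Wk), matmul(x, Wv)
    d_k = len(Wk[0])
    scores = [[s / math.sqrt(d_k) for s in row] for row in matmul(Q, transpose(K))]
    return matmul(softmax_rows(scores), V)

x = [[1.0, 0.0], [0.0, 1.0]]                  # two tokens, d_model = 2
head1 = attention(x, Wq=[[1, 0], [0, 1]], Wk=[[1, 0], [0, 1]], Wv=[[1, 0], [0, 1]])
head2 = attention(x, Wq=[[0, 2], [2, 0]], Wk=[[1, 0], [0, 1]], Wv=[[0, 1], [1, 0]])
print(head1 != head2)                          # different weights, different outputs
```

Nothing forces the two sets of matrices apart except training: as the quoted answer says, identical heads would waste capacity, so gradient descent in practice drives them to differ.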
2020/12/13
1,387
4,926
<issue_start>username_0: In [these notes](https://www.cs.cmu.edu/%7Eaarti/Class/10701/exams/midterm2010f_sol.pdf#page=3), we have the following statement > > The depth of a learned decision tree can be larger than the number of training examples used to create the tree > > > This statement is false, according to the same [notes](https://www.cs.cmu.edu/%7Eaarti/Class/10701/exams/midterm2010f_sol.pdf#page=3), where it is written > > **False**: Each split of the tree must correspond to at least one training example, therefore, if there are $n$ training examples, a path in the tree can have length at most $n$ > > > **Note**: There is a pathological situation in which the depth of a learned decision tree can be larger than number of training examples $n$ - if the number of features is larger than $n$ and there exist training examples which have same feature values but different labels. > > > I had written on my notes that the depth of a decision tree only depends on the number of features of the training set and not on the number of training samples. So, what does the depth of the decision tree depend on?
2020/12/14
1,339
4,658
<issue_start>username_0: I was training a CNN model on TensorFlow. After a while I came back and saw this loss curve: [![training loss curve(green) and validation loss curve(gray)](https://i.stack.imgur.com/nTIrC.png)](https://i.stack.imgur.com/nTIrC.png) The green curve is training loss and the gray one is validation loss. I know that before epoch 394 the model is heavily overfitted, but I have no idea what happened after that. Also, these are the accuracy curves, if it helps: [![accuracy curves](https://i.stack.imgur.com/u7R6t.png)](https://i.stack.imgur.com/u7R6t.png) I'm using categorical cross-entropy and this is the model I am using: [![model architecture](https://i.stack.imgur.com/tcPrn.png)](https://i.stack.imgur.com/tcPrn.png) and here is a link to the PhysioNet challenge which I am working on: <https://physionet.org/content/challenge-2017/1.0.0/>
2020/12/15
1,994
7,057
<issue_start>username_0: The KL Divergence is quite easy to compute in closed form for simple distributions (such as Gaussians), but it has some not-very-nice properties. For example, it is not symmetrical (thus it is not a metric) and it does not respect the triangle inequality. What is the reason it is used so often in ML? Aren't there other statistical distances that can be used instead?<issue_comment>username_1: This question is very general in the sense that the reason may differ depending on the area of ML you are considering. Below are two different areas of ML where the KL-divergence is a natural consequence: * Classification: maximizing the log-likelihood (or minimizing the negative log-likelihood) is equivalent to minimizing KL divergence, as typically used in DL-based classification **where one-hot targets are commonly used as reference** (see <https://stats.stackexchange.com/a/357974>). Furthermore, if you have a one-hot vector $e\_y$ with $1$ at index $y$, minimizing cross-entropy $\min\_{\hat{p}} H(e\_y, \hat{p}) = - \sum\_i (e\_y)\_i \log \hat{p}\_i = - \log \hat{p}\_y$ boils down to maximizing the log-likelihood. In summary, maximizing the log-likelihood is arguably a natural objective, and KL-divergence (with $0 \log 0$ defined as $0$) comes up because of its equivalence to log-likelihood under typical settings, rather than explicitly being motivated as the objective. * Multi-armed bandits (a sub-area of reinforcement learning): Upper confidence bound (UCB) is an algorithm derived from standard concentration inequalities. If we consider MABs with Bernoulli rewards, we can apply Chernoff's bound and optimize over the free parameter to obtain an upper bound expressed in terms of KL divergence as stated below (see <https://page.mi.fu-berlin.de/mulzer/notes/misc/chernoff.pdf> for some different proofs). Let $X\_1, \dots, X\_n$ be i.i.d. Bernoulli RVs with parameter $p$.
$$P(\sum\_i X\_i \geq (p+t)n) \leq \inf\_\lambda M\_X (\lambda) e^{-\lambda t} = \exp(-n D\_{KL}(p+t||p)).$$ Upvotes: 2 <issue_comment>username_2: In ML we always deal with unknown probability distributions from which the data comes. The most common way to calculate the distance between real and model distribution is $KL$ divergence. ### Why Kullback–Leibler divergence? Although there are other loss functions (e.g. MSE, MAE), $KL$ divergence is natural when we are dealing with probability distributions. It is a fundamental equation in information theory that quantifies, in bits, how close two probability distributions are. It is also called relative entropy and, as the name suggests, it is closely related to entropy, which in turn is a central concept in information theory. Let's recall the definition of entropy for a discrete case: $$ H = -\sum\_{i=1}^{N} p(x\_i) \cdot \text{log }p(x\_i) $$ As you observed, entropy on its own is just a measure of a single probability distribution. If we slightly modify this formula by adding a second distribution, we get $KL$ divergence: $$ D\_{KL}(p||q) = \sum\_{i=1}^{N} p(x\_i)\cdot (\text{log }p(x\_i) - \text{log }q(x\_i)) $$ where $p$ is a data distribution and $q$ is model distribution. As we can see, $KL$ divergence is the most natural way to compare 2 distributions. Moreover, it's pretty easy to calculate. This [article](https://www.countbayesie.com/blog/2017/5/9/kullback-leibler-divergence-explained) provides more intuition on this: > > Essentially, what we're looking at with the KL divergence is the > expectation of the log difference between the probability of data in > the original distribution with the approximating distribution. Again, > if we think in terms of $log\_2$ we can interpret this as "how many > bits of information we expect to lose". 
> > > ### Cross entropy Cross-entropy is commonly used in machine learning as a loss function where we have a softmax (or sigmoid) output layer, since it represents a predictive distribution over classes. The softmax output represents the model distribution $q$, while the one-hot true labels represent the target distribution $p$. Our goal is to push $q$ as close to $p$ as possible. We could take a mean squared error over all values, or we could sum the absolute differences, but the one measure which is motivated by information theory is cross-entropy. It gives the average number of bits needed to encode samples distributed as $p$, using $q$ as the encoding distribution. Cross-entropy is based on entropy, generally calculates the difference between two probability distributions, and is closely related to $KL$ divergence. The difference is that it calculates the total entropy between the distributions, while $KL$ divergence represents relative entropy. Cross-entropy can be defined as follows: $$ H(p, q) = H(p) + D\_{KL}(p \parallel q) $$ The first term in this equation is the entropy of the true probability distribution $p$, which is omitted during optimization, since the entropy of $p$ is constant. Hence, minimizing cross-entropy is the same as minimizing $KL$ divergence. ### Log likelihood It can be also [shown](https://en.wikipedia.org/wiki/Cross_entropy#Relation_to_log-likelihood) that maximizing the (log) likelihood is equivalent to minimizing the cross entropy. ### Limitations As you mentioned, $KL$ divergence is not symmetrical. But in most cases this is not critical, since we want to estimate the model distribution by pushing it towards the real one, but not vice versa. There is also a symmetrized version called [Jensen–Shannon divergence](https://en.wikipedia.org/wiki/Jensen%E2%80%93Shannon_divergence): $$ D\_{JS}(p||q)=\frac{1}{2}D\_{KL}(p||m)+\frac{1}{2}D\_{KL}(q||m) $$ where $m=\frac{1}{2}(p+q)$.
The main disadvantage of $KL$ is that the model distribution must cover the whole support of the unknown data distribution. Otherwise, $D\_{KL}(p||q)$ becomes $+\infty$ and $D\_{JS}(p||q)$ becomes $\log 2$. Second, it should be noted that $KL$ is not a metric, since it violates the triangle inequality. That is, in some cases it won't tell us if we are going in the right direction when estimating our model distribution. Here is an example taken from [this answer](https://stats.stackexchange.com/a/351153/137038). Given two discrete distributions $p$ and $q$, we calculate $KL$ divergence and the Wasserstein metric: [![enter image description here](https://i.stack.imgur.com/aN5Ld.png)](https://i.stack.imgur.com/aN5Ld.png) As you can see, $KL$ divergence remained the same, while the Wasserstein metric decreased. But as mentioned in the comments, the Wasserstein metric is highly intractable in a continuous space. We can still use it by applying the Kantorovich-Rubinstein duality used in [Wasserstein GAN](https://arxiv.org/pdf/1701.07875.pdf). You can also find more on this topic in this [article](https://jonathan-hui.medium.com/gan-wasserstein-gan-wgan-gp-6a1a2aa1b490). The 2 drawbacks of $KL$ can be mitigated by adding noise. More on this in this [paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Jenni_On_Stabilizing_Generative_Adversarial_Training_With_Noise_CVPR_2019_paper.pdf). Upvotes: 2
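The asymmetry of $KL$ and the symmetry of its Jensen–Shannon counterpart are easy to check numerically. A minimal sketch in plain Python with base-2 logs (the two three-outcome distributions are hypothetical, chosen only for illustration):

```python
import math

def kl(p, q):
    # D_KL(p || q) in bits; assumes discrete distributions on a shared
    # support with q[i] > 0 wherever p[i] > 0 (0 log 0 treated as 0)
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    # Jensen-Shannon divergence: symmetric and bounded by 1 bit
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical three-outcome distributions
p = [0.9, 0.05, 0.05]
q = [1 / 3, 1 / 3, 1 / 3]

print(round(kl(p, q), 3), round(kl(q, p), 3))    # → 1.016 1.347 (asymmetric)
print(round(js(p, q), 3) == round(js(q, p), 3))  # → True        (symmetric)
```

Swapping the arguments changes $D\_{KL}$ but not $D\_{JS}$, which is the practical content of the "not a metric" caveat above.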
2020/12/16
1,206
3,509
<issue_start>username_0: In the [Attention is all you need](https://arxiv.org/pdf/1706.03762.pdf) paper, on the 4th page, we have equation 1, which describes the self-attention mechanism of the transformer architecture $$ \text { Attention }(Q, K, V)=\operatorname{softmax}\left(\frac{Q K^{T}}{\sqrt{d\_{k}}}\right) V $$ Everything is fine up to here. Then they introduce the multi-head attention, which is described by the following equation. $$ \begin{aligned} \text { MultiHead }(Q, K, V) &=\text { Concat}\left(\text {head}\_{1}, \ldots, \text {head}\_{\mathrm{h}}\right) W^{O} \\ \text { where head}\_{\mathrm{i}} &=\text {Attention}\left(Q W\_{i}^{Q}, K W\_{i}^{K}, V W\_{i}^{V}\right) \end{aligned} $$ Once the multi-head attention is motivated at the end of page 4, they state that for a single head (the $i$th head), the query $Q$ and key $K$ inputs are first linearly projected by $W\_i^Q$ and $W\_i^K$, then the dot product is calculated; let's say $Q\_i^p = Q W\_i^Q$ and $K\_i^p = K W\_i^K$. Therefore, the dot product of the projected query and key becomes the following from simple linear algebra. $$Q\_i^p {K\_i^p}^\intercal = Q W\_i^Q {W\_i^K}^T K^T = Q W\_i K^T,$$ where $$W\_i = W\_i^Q {W\_i^K}^T$$ Here, $W\_i$ is the product of the query projection matrix and the transposed key projection matrix. However, it is a matrix with shape $d\_{model} \times d\_{model}$. Why did the authors not define only a $W\_i$ instead of the $W\_i^Q$ and $W\_i^K$ pair, which has $2 \times d\_{model} \times d\_{k}$ elements? In deep learning applications, I think it would be very inefficient. Is there something that I am missing, like these 2 matrices $W\_i^Q$ and $W\_i^K$ should be separate because of this and that?<issue_comment>username_1: I'll use notation from the paper you cited, and any other readers should refer to the paper (widely available) for definitions of notation. The utility of using $W^Q$ and $W^K$, rather than $W$, lies in the fact that they allow us to add fewer parameters to our architecture.
$W$ has dimension $d\_{model} \times d\_{model}$, which means that we are adding $d\_{model}^2$ parameters to our architecture. $W^Q$ and $W^K$ each have dimension $d\_{model} \times d\_k$, and $d\_k=\frac{d\_{model}}{h}$. If we use these two matrices, we only add $2\frac{d\_{model}^2}{h}$ parameters to our architecture, even though their multiplication (with the transpose) allows us to have the correct dimensions for matrix multiplication with $Q$ and $K$. We do use $h$ attention heads, which then brings our number of parameters back up, but the multiple heads let the model attend to different pieces of information in our data. Upvotes: 3 [selected_answer]<issue_comment>username_2: In practice, matrices $W^Q, W^K, W^V$ (each of size $d\_{model}$ x $d\_{model}$) are completely removed instead, and Transformer implementations just learn a single set of matrices $\{ W\_i^{Q\*}, W\_i^{K\*}, W\_i^{V\*} \}$ (each of size $d\_{model}$ x $\frac{d\_{model}}{h}$) for each head, where $W\_i^{Q\*} = W^Q W\_i^Q \\ W\_i^{K\*} = W^K W\_i^K \\ W\_i^{V\*} = W^V W\_i^V $ so that: $Q\_i(x) = x W\_i^{Q\*} = x W^Q W\_i^{Q\*} = Q W\_i^Q\\ K\_i(x) = x W\_i^{K\*} = x W^K W\_i^{K\*} = K W\_i^K\\ V\_i(x) = x W\_i^{V\*} = x W^V W\_i^{V\*} = V W\_i^V\\ head\_i(x) = softmax \left(\frac{Q\_i(x) K\_i(x)^T}{\sqrt{d\_k}} \right) V\_i(x)$. I can confirm this with the original Transformer implementation in Tensor2Tensor, and also the BERT code that uses the encoding part of the Transformer. Upvotes: 0
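Both the algebra in the question and the parameter counting in the answer can be checked numerically. A small NumPy sketch with toy sizes (the paper uses $d\_{model}=512$, $h=8$, $d\_k=64$; nothing here is from the paper's code):

```python
import numpy as np

# Toy sizes for illustration only
rng = np.random.default_rng(0)
n, d_model, d_k = 5, 8, 2

Q = rng.standard_normal((n, d_model))      # queries for n tokens
K = rng.standard_normal((n, d_model))      # keys
W_q = rng.standard_normal((d_model, d_k))  # W_i^Q
W_k = rng.standard_normal((d_model, d_k))  # W_i^K

# Scores from the two low-rank projections ...
scores_factored = (Q @ W_q) @ (K @ W_k).T
# ... equal the scores from the single merged matrix W_i = W_i^Q (W_i^K)^T,
# which is d_model x d_model but has rank at most d_k.
W = W_q @ W_k.T
scores_merged = Q @ W @ K.T
assert np.allclose(scores_factored, scores_merged)

# Parameter count per head: the factored form is much smaller.
print(W_q.size + W_k.size, "vs", W.size)  # → 32 vs 64
```

So the factored pair is not an inefficiency: it computes the same scores while constraining $W\_i$ to low rank with $2 d\_{model} d\_k$ parameters instead of $d\_{model}^2$.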
2020/12/16
821
2,496
<issue_start>username_0: Can someone explain to me with a proof or example why you can't linearly separate XOR (and therefore need a neural network, the context I'm looking at it in)? I understand why it's not linearly separable if you draw it graphically (e.g. [here](https://medium.com/@lucaspereira0612/solving-xor-with-a-single-perceptron-34539f395182)), but I can't seem to find a formal proof somewhere, I wanted to try and understand it with either an equation or example written down. I'm wondering if one exists (I guess it has to do with contradictions?), but I can't seem to find it? I have seen [this](https://ai.stackexchange.com/q/16291/2444), but it's more a reason than a proof.
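For reference, the contradiction the question asks about can be written down in a few lines. This is a standard sketch: suppose a linear classifier $\hat{y} = \operatorname{sign}(w\_1 x\_1 + w\_2 x\_2 + b)$ computed XOR, with the class-0 inputs taken to give a strictly negative pre-activation for simplicity. Then the four input points impose:

```latex
\begin{aligned}
(0,0)\mapsto 0 &:\quad b < 0 \\
(1,0)\mapsto 1 &:\quad w_1 + b > 0 \\
(0,1)\mapsto 1 &:\quad w_2 + b > 0 \\
(1,1)\mapsto 0 &:\quad w_1 + w_2 + b < 0
\end{aligned}
```

Adding the two middle inequalities gives $w\_1 + w\_2 + 2b > 0$, i.e. $w\_1 + w\_2 + b > -b$. Since the first inequality says $b < 0$, we have $-b > 0$, so $w\_1 + w\_2 + b > 0$, contradicting the last inequality. Hence no $(w\_1, w\_2, b)$ exists, and a single linear unit cannot represent XOR.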
2020/12/18
723
2,311
<issue_start>username_0: Sorry if I sound confused. I read that data to be fed to a machine are divided into training, validation and test data. Both training and validation data are used for developing the model. Test data is used only for testing the model and no tuning of the model is done using test data. Why is there a need to separate out training and validation data since both sets of data are for developing/tuning the model? Why not keep things simple and combine both data sets into a single one?
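The two development sets play different roles: the training set fits the model's parameters, while the validation set steers hyperparameter choices, so its error is no longer an unbiased estimate of generalisation, which is what the untouched test set is kept for. A minimal sketch on a made-up toy regression task (all names and sizes here are invented for illustration; the hyperparameter is the polynomial degree):

```python
import numpy as np

# Toy 1-D regression data
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(200)

# 60 / 20 / 20 split into train / validation / test
idx = rng.permutation(200)
train, val, test = idx[:120], idx[120:160], idx[160:]

def mse(degree, fit_idx, eval_idx):
    # "training" = fitting the polynomial coefficients on fit_idx
    coeffs = np.polyfit(x[fit_idx], y[fit_idx], degree)
    pred = np.polyval(coeffs, x[eval_idx])
    return float(np.mean((pred - y[eval_idx]) ** 2))

# The validation set tunes the hyperparameter (the degree)...
val_scores = {deg: mse(deg, train, val) for deg in (1, 2, 3, 5, 9)}
best_deg = min(val_scores, key=val_scores.get)

# ...and the test set is touched exactly once, for the final estimate.
test_mse = mse(best_deg, train, test)
```

If you merged the validation data into the training set, you would have no held-back data on which to compare the candidate degrees without also tuning on your final evaluation set.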
2020/12/18
549
2,300
<issue_start>username_0: In the paper "[ForestNet: Classifying Drivers of Deforestation in Indonesia using Deep Learning on Satellite Imagery](https://arxiv.org/pdf/2011.05479.pdf)", the authors talk about using: 1. Feature Pyramid Networks (as the architecture) 2. EfficientNet-B2 (as the backbone) > > **Performance Measures on the Validation Set**. The RF model that only inputs data from the visible Landsat 8 bands achieved the lowest > performance on the validation set, but the incorporation of auxiliary > predictors substantially improved its performance. All of the CNN > models outperformed the RF models. The best performing model, which we > call ForestNet, used an FPN architecture with an EfficientNet-B2 > backbone. The use of SDA provided large performance gains on the > validation set, and land cover pre-training and incorporating > auxiliary predictors each led to additional performance improvements. > > > What's the difference between architectures and backbones? I can't find much online. Specifically, what are their respective purposes? From a high-level perspective, what would integrating the two look like?<issue_comment>username_1: In this context, the backbone is the feature extractor: EfficientNet-B2 takes the satellite image as input and produces feature maps at several spatial scales. The architecture is the overall network design built around that backbone: the Feature Pyramid Network taps feature maps from several stages of EfficientNet-B2, combines them into a multi-scale feature pyramid, and produces the final prediction. In other words, "backbone" names the interchangeable feature-extraction part of the model (you could swap EfficientNet-B2 for a ResNet without changing the rest of the design), while "architecture" names the overall structure that wires the backbone's outputs into the task-specific components. Integrating the two means connecting the backbone's intermediate feature maps to the pyramid levels of the FPN, which then feed the classification head. Upvotes: 0 <issue_comment>username_2: I've taken an NVIDIA course on the portal and it said that ResNet, VGG, GoogleNet were model architectures, and that DetectNet\_V2, FasterRCNN, SSD, UNET were model backbones, so I think it's a common terminology Upvotes: -1
2020/12/20
2,667
8,116
<issue_start>username_0: What does the Bellman equation actually say? And are there many flavours of that? I get a little confused when I look for the Bellman equation, because I feel like people are telling slightly different things about what it is. And I think the Bellman Equation is just basic philosophy and you can do whatever you want with that. The interpretations that I have seen so far: Let's consider this grid world. ``` +--------------+ | S6 | S7 | S8 | +----+----+----+ | S3 | S4 | S5 | +----+----+----+ | S0 | S1 | S2 | +----+----+----+ ``` * Rewards: S1:10; S3:10 * Starting Point: S0 * Horizon: 2 * Actions: Up, Down, Left, Right (If an action is not valid because there is no space, you remain in your position) **The V-Function/Value:** It tells you how good is it to be in a certain state. With a horizon of 2, one can reach: ``` S0==>S3 (Up) (R 5) S0==>S0 (Down) (R 0) S0==>S1 (Right)(R10) S0==>S0 (Left) (R 0) ``` From that onwards ``` S0==>S3 (Up) (R 5) S0==>S0 (Down) (R 0) S0==>S1 (Right)(R10) S0==>S0 (Left) (R 0) S1==>S4 (Up) (R 0) S1==>S1 (Down) (R10) S1==>S2 (Right)(R 0) S1==>S0 (Left) (R 0) S3==>S6 (Up) (R 0) S3==>S0 (Down) (R 0) S3==>S3 (Right)(R 5) S3==>S2 (Left) (R10) ``` Considering no discount, this would mean that it is R=45 good to be in S0, because these are the options. Of course, you can't grab every reward, because you have to decide. Do I need to consider the best next state yet, because this would obviously reduce my expected total reward, but as I can only make two steps it would tell me what is really possible. Not what the overall Reward R(s) in that range is. **The Q-Function/Value** This function takes a state and an action, but I am not sure. If that means that I have a reward function that just considers my actions as well to give me a reward. Because in the previous example I just have to land on a state (It doesn't really matter how I get there). But this time I get a reward, when I choose a certain action. 
R(s,a). But otherwise I do not rate the best action and select that next state to calculate the next state. I choose every next step and from that I choose the 2nd next. **Optimization V-function or Q-function** This works the same as V-Function or Q-Function, but it just considers the next best reward. Some sort of greedy approach: First step: ``` S0==>S3 (Up) (R 5) [x] S0==>S0 (Down) (R 0) [x] S0==>S1 (Right)(R10) S0==>S0 (Left) (R 0) [x] ``` Second Step: ``` S1==>S4 (Up) (R 0) [x] S1==>S1 (Down) (R10) S1==>S2 (Right)(R 0) [x] S1==>S0 (Left) (R 0) ``` So, this would say that this is the best I can do in two steps. I know that there is a problem, because when I just follow a greedy approach I risk that I won't get the best result, if I would have had a reward of 1000 on S2 later. But still, I just want to know, if I have a correct understanding. I know there might be many flavours and interpretations but at least I want to know what is the correct name of these approaches.<issue_comment>username_1: For a Markov Decision Process $(\mathcal{S}, \mathcal{A}, P, R)$ (here $P(s, a, s') = \mathbb{P}(S\_{t+1} = s' \mid S\_t = s, A\_t = a)$), let us define the value of being in a certain state. That is, $$v\_\pi(s) = \mathbb{E}\_{a\_i \sim \pi, s\_i \sim P}\left[\sum\_{i=0}^\infty \gamma^{i}r(s\_{t+i}, a\_{t+i}) | S\_t =s\right].$$ That is, the value of being in state $s$ at time $t$ is equal to the expected value of the discounted sum of future rewards, where the expectation is taken with respect to the action policy $\pi(\cdot \mid s)$ and the environment dynamics $P$.
We will now define $$\sum\_{i=0}^\infty \gamma^{i}r(s\_{t+i}, a\_{t+i}) = G\_t$$ Now, we can rewrite this as $$\mathbb{E}\_{a\_i \sim \pi, s\_i \sim P}\left[G\_t | S\_t =s\right] = \mathbb{E}\_{a\_i \sim \pi, s\_i \sim P}\left[ r(s\_t, a\_t) + \gamma G\_{t+1} | S\_t =s\right].$$ The RHS is now in a nicer format and we can express it as $$\mathbb{E}\_{a\_i \sim \pi, s\_i \sim P}\left[G\_t | S\_t =s\right] = \mathbb{E}\_{a\_i \sim \pi, s\_i \sim P}\left[ r(s\_t, a\_t) + \gamma v\_\pi(s') | S\_t =s\right],$$ where $s'$ is the state that we transition into at time $t+1$. Note that $s'$ is a random variable and we are taking the expectation over this according to the action policy and the environment dynamics -- this is because the state we transition into depends first on the action we select and then the environment dynamics $P$. You can see that we now have $$v\_\pi(s) = \mathbb{E}\_{a\_i \sim \pi, s\_i \sim P}\left[ r(s\_t, a\_t) + \gamma v\_\pi(s') | S\_t =s\right].$$ This is the Bellman equation (at least a form of it) and it expresses a recursive relationship between the values of states. That is, the value of being in state $s$ is equal to the expected immediate reward from being in this state plus the discounted value of being in the state that we transition into. This relationship is useful in Reinforcement Learning as many algorithms use this equation to form update rules to approximate the value/state-action value function of the MDP, such as the SARSA algorithm, so it is more than just a philosophy, it is the driving force behind many of the RL algorithms.
Now, when I say at least a form of it, that is because in RL it is also common to see a Bellman equation for the state action value function $$q\_\pi(s, a) = \mathbb{E}\_{a\_i \sim \pi, s\_i \sim P}\left[ G\_t | S\_t = s, A\_t = a\right] = \mathbb{E}\_{a\_i \sim \pi, s\_i \sim P}\left[ r(s\_t, a\_t) + \gamma v\_\pi(s') | S\_t = s, A\_t = a\right];$$ noting that the value function $v$ is the expectation of $q$ over the action space, i.e. we marginalise out the action we condition on -- thus there is the recursive relationship. As pointed out in the comments for this question, it is also worth noting that Bellman equations originated in Dynamic Programming, which exists to solve planning problems such as the knapsack problem. Upvotes: 2 <issue_comment>username_2: Just to add some more background and intuition to the previous answer. The Bellman equation has its background in the optimal control theory of dynamic systems of the form (in the discrete-time case) \begin{equation} s\_{k+1} = f\_d(s\_k, a\_k) \tag{1} \end{equation} where $s\_k$ represents the state at time $k$ and $a\_k$ the action at time $k$. The goal is to optimize a multistage objective function of the form \begin{equation} V\_N(s\_0) = \sum\_{k=0}^N J(s\_k, a\_k) \end{equation} while satisfying the dynamic constraints $(1)$, where $J(\cdot)$ is the stage cost at time $k$. The products of this optimization are the optimal control policies $a\_k = \pi\_k(s\_k)$ which provide the optimal value for the multistage objective function. Bellman's principle of optimality states that, for multistage optimization problems, the objective function value at timestep $k$ should satisfy \begin{align} V^\*\_{k}(s\_k) &= \min\_{a\_k}[J(s\_k, a\_k) + V^\*\_{k-1}(s\_{k+1})]\\ &= \min\_{a\_k}[J(s\_k, a\_k) + V^\*\_{k-1}(f\_d(s\_k, a\_k))] \end{align} This can of course be proven; you can find the proof in any optimal control/dynamic programming book. This also makes intuitive sense. Consider that you are at timestep $N$ (the end of the trajectory you want to optimize).
You only need to consider 1 action $a\_N$. You would now go through all possible states $s\_N$ and pick action $a\_N$ which minimizes your stage cost $J(s\_N, a\_N)$ for all those states separately. After you did that, you would go one step backwards in time and find optimal action $a\_{N-1}$ for all states $s\_{N-1}$. According to the Bellman optimality principle, you only need to consider action $a\_{N-1}$ in your optimization, because you already know action $a\_N$ (calculated previously) which would minimize $J(s\_N, a\_N)$ for any possible future state $s\_N$. Then you would keep going backwards in time until $k=0$. This is very useful because you don't need to consider all $N$ timesteps at once, you only consider one timestep at a time which prevents combinatorial explosion for large $N$. Bellman optimality principle has been adapted to many different applications, including RL and stochastic systems. Upvotes: 1
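The value recursions discussed in both answers can be run directly on the question's grid world. A sketch assuming the rewards used in the question's own move traces (10 for landing on S1, 5 for landing on S3), deterministic moves that leave you in place when invalid, and no discounting:

```python
# Finite-horizon Bellman backup on the question's 3x3 grid world.
# Assumptions (hedged): landing on S1 yields reward 10 and landing on S3
# yields 5 (the values the question's traces use), moves are deterministic,
# invalid moves keep the agent in place, and gamma = 1.
REWARD = {1: 10, 3: 5}
MOVES = {"Up": 3, "Down": -3, "Right": 1, "Left": -1}

def step(s, a):
    # states 0..8, laid out in three rows of three as in the question
    if a == "Up" and s >= 6: return s
    if a == "Down" and s <= 2: return s
    if a == "Right" and s % 3 == 2: return s
    if a == "Left" and s % 3 == 0: return s
    return s + MOVES[a]

def value(s, horizon, gamma=1.0):
    # Bellman optimality recursion: V_0 = 0,
    # V_k(s) = max_a [ R(step(s, a)) + gamma * V_{k-1}(step(s, a)) ]
    if horizon == 0:
        return 0.0
    return max(REWARD.get(step(s, a), 0) + gamma * value(step(s, a), horizon - 1)
               for a in MOVES)

# Optimal two-step return from S0: Right to S1 (10), then Down stays on S1 (10).
print(value(0, 2))  # → 20.0
```

Note that this computes the *optimal* two-step return (the `max` over actions is exactly Bellman's principle of optimality from the second answer), not the sum over all reachable rewards that the question tallies up.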
2020/12/21
1,223
3,763
<issue_start>username_0: I am searching for an academic (i.e. with maths formulae) textbook which covers (at least) the following: * GAN * LSTM and transformers (e.g. seq2seq) * Attention mechanism The closest match I got is *Deep Learning* (2016, MIT Press) but it only deals with part of the above subjects.<issue_comment>username_1: I recommend [*Introduction to Deep Learning*](https://www.amazon.fr/Introduction-Deep-Learning-Eugene-Charniak/dp/0262039516) by <NAME> ISBN 978-0-262-03951-2 (MIT 2018). It mentions `GAN` & `LSTM` & `Attention` (all three occur in the index). But read also Pitrat's last book: [*Artificial Beings: The Conscience of a Conscious Machine*](https://www.amazon.fr/Artificial-Beings-Conscience-Conscious-Machine-ebook/dp/B00BPAGQWK/) - it does cover machine learning (but not in the "deep learning" sense) but was published before 2016. And see also [RefPerSys](http://refpersys.org/). If you speak French, also see [this](https://afia.asso.fr/journee-hommage-j-pitrat/) and more generally the [AFIA](https://afia.asso.fr/) organization. Upvotes: 2 <issue_comment>username_2: There are a few more books that were published after 2016 that cover some of the topics you are interested in. I've not read any of them, so I don't really know whether they are good or not, but I try to summarise whether they cover some of the topics you may be interested in. * [Deep Learning with Python](http://faculty.neu.edu.cn/yury/AAI/Textbook/Deep%20Learning%20with%20Python.pdf) (2017), by <NAME> (author of the initial Keras library), which covers GANs in [section 8.5](http://faculty.neu.edu.cn/yury/AAI/Textbook/Deep%20Learning%20with%20Python.pdf#page=328) (p.
305), but it does not seem to cover transformers and attention mechanisms, although it covers other intermediate/advanced topics (not sure to which extent), such as text generation with LSTMs, DeepDream, Neural Style Transfer and VAEs * [Grokking deep learning](http://www.hdip-data-analytics.com/_media/resources/pdf/s4/grokking_deep_learning.pdf) (2019), by <NAME>, which seems to cover some intermediate/advanced topics (such as LSTMs and related tasks), but no transformers or GANs (unless I missed them); you can find the accompanying code [here](https://github.com/iamtrask/Grokking-Deep-Learning) * [Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play](https://books.google.ch/books?hl=en&lr=&id=RKegDwAAQBAJ&oi=fnd&pg=PP1&dq=Generative+Deep+Learning+david+foster&ots=bqE76IUkF6&sig=vXSbEW8S7ohXariAAhpbuckkKZs&redir_esc=y#v=onepage&q=Generative%20Deep%20Learning%20david%20foster&f=false), by <NAME>, which covers many variants of GANs, VAEs and other stuff The [first transformer was published in 2017](https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf), so I guess there may not yet be a book that extensively covers it and other related models, such as the GPT models (if you're interested in CV, check [this blog post](https://machinelearningmastery.com/computer-vision-books/), although it seems to list books that cover mostly traditional CV techniques). The attention mechanisms are older and can probably be found in textbooks that cover machine translation topics (such as seq2seq models with LSTMs), such as [this one](http://mt-class.org/jhu/assets/nmt-book.pdf#page=57). Upvotes: 3 [selected_answer]<issue_comment>username_3: Here is an answer from 2023 for newcomers that may run into this page: Dive into Deep Learning: <NAME>, <NAME>, <NAME>, and <NAME> Transformers for Machine Learning - A Deep Dive: <NAME>, <NAME>, <NAME> Upvotes: 0
2020/12/23
442
2,089
<issue_start>username_0: I have a dataset consisting of a set of samples. Each sample consists of two distinct desctized signals S1(t), S2(t). Both signals are synchronous; however, they show different aspects of a phenomena. I want to train a Convolutional Neural Network, but I don't know which architecture is appropriate for this kind of data. I can consider two channels for input, each corresponding to one of the signals. But, I don't think convolving two signals can produce appropriate features. I believe the best way is to process each signal separately in the first layers, then join them in the classification layers in the final step. How can I achieve this? What architecture should I use?<issue_comment>username_1: I don't know what you mean by *desctized signals* but if I understand your question correctly, separating the two signals and passing them through the same CNN architecture (even with different parameters) is not a good idea. When they are given together (as different channels), each channel still gets its own kernel weights, and this way the network is able to combine the two signals and get better results from the information extracted from this combination. Upvotes: 1 <issue_comment>username_2: You can safely give both signals as input in different channels. Actually, it's the best way. This way, the network is able to find low-degree patterns that involve both signals early in training. This will therefore enable the early discovery of more complex patterns, too. Differently from what one might understand from your question, the two signals will not be convolved against each other, as is typically done in signal processing. The convolution taking place is that of the first layer kernels with a two-component signal (the one you give as input). It can happen that there's a first-order pattern that can only be recognised by looking at both signals at the same time.
If that's not the case, the kernels will ignore one signal or the other (the corresponding weights will be driven to zero). Upvotes: 0
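A small NumPy check of what "two channels" means for the first convolutional layer: a single 1-D filter has one slice of weights per channel, so the two signals are combined (summed after per-channel correlation) rather than convolved against each other. Sizes below are arbitrary:

```python
import numpy as np

# Arbitrary sizes for illustration
rng = np.random.default_rng(0)
x = np.stack([rng.standard_normal(32),    # signal S1(t)
              rng.standard_normal(32)])   # signal S2(t) -> shape (2, 32)

k = 5
kernel = rng.standard_normal((2, k))      # ONE filter, with a weight slice per channel

# A Conv1d-style filter slides along time; at each position it multiplies
# each channel by its own slice and sums everything into a single output value.
out = np.array([(x[:, t:t + k] * kernel).sum() for t in range(32 - k + 1)])

# Equivalently: per-channel cross-correlations, summed afterwards.
per_channel = sum(np.correlate(x[c], kernel[c], mode="valid") for c in range(2))
assert np.allclose(out, per_channel)
```

If a filter only needs one of the signals, training can push the other channel's weight slice towards zero, which is the point made just above.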
2020/12/24
408
1,511
<issue_start>username_0: My company has full access to beta testing for GPT-3. We wanted to try it for some games or game mechanics within Unity3D. Is it possible to use it for dialogues or with unity scripts? The Documents of OpenAI does not say anything about this possibility, so I'm not sure.<issue_comment>username_1: Yes, OpenAI will release an API for GPT-3, so any developer can integrate it into their application. I don't believe the document for their API is public yet, so we don't know what the final interface will look like, but it's likely to be a simple REST API. In the future, I imagine your developers can take advantage of their API, or alternatively there will be community-made scripts for you to use/copy. The pricing for using their API is explained [here](https://thenextweb.com/neural/2020/09/03/openai-reveals-the-pricing-plans-for-its-api-and-it-aint-cheap/). Note that they charge per token, which might be important in case your game plans to make live calls to GPT-3 during gameplay (as opposed to mining a huge corpus of answers to build an offline database). The [use cases](https://beta.openai.com/?app=customer-service) of GPT-3 suggests that you can legally use them for commercial products, although I couldn't find a definitive license or user agreement document. Upvotes: 3 [selected_answer]<issue_comment>username_2: I found these links so hopefully they help. ``` https://openai.com/blog/openai-api/ https://nordicapis.com/on-gpt-3-openai-and-apis/ ``` Upvotes: 1
2020/12/30
1,510
4,940
<issue_start>username_0: From the AlphaGo Zero paper, during MCTS, statistics for each new node are initialized as such: > > ${N(s\_L, a) = 0, W (s\_L, a) = 0, Q(s\_L, a) = 0, P (s\_L, a) = p\_a}$. > > > The PUCT algorithm for selecting the best child node is $a\_t = argmax(Q(s,a) + U(s,a))$, where $U(s,a) = c\_{puct} P(s,a) \frac{\sqrt{\sum\_b N(s,b)}}{1 + N(s, a)}$. If we start from scratch with a tree that only contains the root node and no children have been visited yet, then this should evaluate to 0 for all actions $a$ that we can take from the root node. Do we then simply uniformly sample an action to take? Also, during the expand() step when we add an unvisited node $s\_L$ to the tree, this node's children will also have not been visited, and we run into the same problem where PUCT will return 0 for all actions. Do we do the same uniform sampling here as well?<issue_comment>username_1: I looked at the Python pseudo-code attached to the Data S1 of the [Supplementary Materials](https://science.sciencemag.org/content/suppl/2018/12/05/362.6419.1140.DC1) of the AlphaZero paper. Here are my findings: * Contrary to the paper, AlphaZero does not store $\{N(s, a), W(s, a), Q(s, a), P(s, a)\}$ statistics for each edge $(s,a)$. Instead, AlphaZero stores $\{N(s), W(s), Q(s), P(s)\}$ statistics for each node $s$. * When a leaf node $s\_L$ is expanded, its visit count, value scores, and action policies are immediately updated in $\{N(s), W(s), Q(s), P(s)\}$, so $N(s)$ is at least $1$. This is why in the paper, the backprop step updates for all time steps $t \le L$ rather than $t < L$. It makes sense to update $s\_L$ even though there is no corresponding $a\_L$ to pair it with. * Therefore, when a new leaf node is expanded, the value $U(s, a)$ of a child of that leaf node will be nonzero, since $\sqrt{\sum\_b N(s,b)}$ is actually computed as $N(s\_{parent})$ in the code, which is at least 1.
* Oddly enough, I think there might be a bug in the pseudocode, because on the first iteration (starting at the root node), $U(s,a) = 0$ for all child nodes of the root node. This is because at the first iteration, $N(s\_{root}) = 0$. The value of all child nodes will be $0$, and since the authors chose to break ties according to Python's `max` function, the algorithm simply chooses the first element it finds in case of a tie. * After the first iteration, $N(s\_{root}) > 0$ and so $U(s,a) \neq 0$, and things proceed as normal since the backprop step will have updated the visit count of the root node. So this possible bug/unintuitive behavior only affects the first iteration. It is extremely minor and insignificant, and does not affect the outcome of the MCTS, which is probably why it went unnoticed. Upvotes: 3 [selected_answer]<issue_comment>username_2: I had the same question and found this page, then I also started to look at the pseudo-code of the [Supplementary Materials](https://www.science.org/doi/10.1126/science.aar6404). The code that computes the UCB score is as follows:

```
# The score for a node is based on its value, plus an exploration bonus based on
# the prior.
def ucb_score(config: AlphaZeroConfig, parent: Node, child: Node):
  pb_c = math.log((parent.visit_count + config.pb_c_base + 1) /
                  config.pb_c_base) + config.pb_c_init
  pb_c *= math.sqrt(parent.visit_count) / (child.visit_count + 1)

  prior_score = pb_c * child.prior
  value_score = child.value()
  return prior_score + value_score
```

I was very confused since it is different from the formula described in the paper, until I found the UCB formula in the paper of [MuZero](https://arxiv.org/pdf/1911.08265.pdf): [![enter image description here](https://i.stack.imgur.com/umit8.png)](https://i.stack.imgur.com/umit8.png) It is clear that the pseudocode of AlphaZero mistakenly uses the UCB formula from MuZero.
As for the original question, the sum of the visit counts of all the children is implemented as the visit count of the parent node, which is at least 1, since a node is visited as soon as it is expanded (evaluated), as you can see in the code of the search:

```
# Core Monte Carlo Tree Search algorithm.
# To decide on an action, we run N simulations, always starting at the root of
# the search tree and traversing the tree according to the UCB formula until we
# reach a leaf node.
def run_mcts(config: AlphaZeroConfig, game: Game, network: Network):
  root = Node(0)
  evaluate(root, game, network)
  add_exploration_noise(config, root)

  for _ in range(config.num_simulations):
    node = root
    scratch_game = game.clone()
    search_path = [node]

    while node.expanded():
      action, node = select_child(config, node)
      scratch_game.apply(action)
      search_path.append(node)

    value = evaluate(node, scratch_game, network)
    backpropagate(search_path, value, scratch_game.to_play())
  return select_action(config, game, root), root
```

Upvotes: 0
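The first-iteration tie discussed in these answers is easy to reproduce with a toy version of the PUCT score from the paper's formula (a sketch, not the paper's code; the value of $c\_{puct}$ is arbitrary here):

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.25):
    # U(s, a) = c_puct * P(s, a) * sqrt(N(parent)) / (1 + N(s, a))
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + u

priors = [0.2, 0.5, 0.3]

# First iteration: the root's visit count is still 0, so every child scores 0
# and an argmax-style selection just returns the first action (a tie).
first = [puct_score(0.0, p, parent_visits=0, child_visits=0) for p in priors]

# After backprop has set N(root) >= 1, the priors break the tie as intended.
later = [puct_score(0.0, p, parent_visits=1, child_visits=0) for p in priors]
best = max(range(len(priors)), key=lambda a: later[a])
```

With `parent_visits=0` all scores are zero, while one backprop later the child with the highest prior wins the argmax.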
2020/12/30
1,594
6,180
<issue_start>username_0: A *model* can be roughly defined as any design that is able to solve an ML task. Examples of models are the neural network, decision tree, Markov network, etc. A *function* can be defined as a set of ordered pairs with a many-to-one mapping from a domain to a co-domain/range. What is the fundamental difference between them in formal terms?<issue_comment>username_1: ### A model as a set of functions In some cases in machine learning, a model can be thought of as a **set of functions**, so here's the first difference. For example, a neural network with an **arbitrary** vector of parameters $\theta \in \mathbb{R}^m$ is often denoted as a model, then a specific combination of these parameters represents a specific function. More specifically, suppose that we have a neural network with 2 inputs, 1 hidden neuron (with a ReLU activation function, denoted as $\phi$, that follows a linear combination of the inputs), and 1 output neuron (with a sigmoid activation function, $\sigma$). The inputs are connected to the only hidden unit and these connections have a real-valued weight. If we ignore biases, then there are 3 parameters, which can be grouped in the parameter vector $\theta = [\theta\_1, \theta\_2, \theta\_3] \in \mathbb{R}^3 $. The set of functions that this neural network represents is defined as follows $$ f(x\_1, x\_2) = \sigma (\theta\_3 \phi(x\_1 \theta\_1 + x\_2 \theta\_2)) \tag{1}\label{1}, $$ In this case, the equation \ref{1} represents the model, given the parameter space $\Theta = \mathbb{R}^3$. For any specific values that $\theta\_1, \theta\_2,$ and $\theta\_3$ can take, we have a specific (deterministic) function $f: \mathbb{R}^2 \rightarrow [0, 1]$. For instance, $\theta = [0.2, 10, 0.4]$ represents some specific function, namely $$ f(x\_1, x\_2) = \sigma (0.4 \phi(x\_1 0.2 + x\_2 10.0)) \tag{2}\label{2} $$ You can plot this function (with Matplotlib) for some values of the inputs to see how it looks.
Note that $x\_1$ and $x\_2$ can be arbitrary (because those are just the inputs, which I assumed to be real numbers). This interpretation of a model is roughly equivalent to the definition of a [hypothesis class (or space)](https://ai.stackexchange.com/a/23005/2444) in computational learning theory, which is essentially [a set of functions](https://ai.stackexchange.com/a/16794/2444). So, this definition of a model is useful to understand [the universal approximation theorems for neural networks](https://ai.stackexchange.com/q/13317/2444), which state that you can find a specific set of parameters such that you can approximately compute some given function arbitrarily well, given that some conditions are met. This interpretation can also be applied to decision trees, HMM, RNNs, and all these ML models. ### A model in reinforcement learning The term model is also sometimes used to refer to a probability distribution, for example, in the context of reinforcement learning, where $p(s', r \mid s, a)$ is a probability distribution over the next state $s'$ and reward $r$ given the current state $s$ and action $a$ taken in that state $s$. Check [this question](https://ai.stackexchange.com/q/4456/2444) for more details. A probability distribution could also be thought of as a (possibly infinitely large) set of functions, but it is not just a set of functions, because you can also sample from a probability distribution (i.e. there's some stochasticity associated with a probability distribution). So, a probability distribution can be considered a statistical model or can be used to represent it. Check [this answer](https://ai.stackexchange.com/a/12354/2444). ### A function as a model A specific function (e.g. the function in \ref{2}) can also be a model, in the sense that it models (or approximates) another function. 
In other words, a person may use the term *model* to refer to a function that attempts to approximate another function that you want to model/approximate/compute. Upvotes: 3 <issue_comment>username_2: In simple terms, a **neural network model** is a *function approximator* which tries to fit the curve of the hypothesis function. A function itself has an equation which will generate a fixed curve: [![enter image description here](https://i.stack.imgur.com/7acni.png)](https://i.stack.imgur.com/7acni.png) If we have the equation (i.e., the function), we do not need neural network for its input data. However, when we only have some notion of its curve (or the input and output data) we seek a function approximator, so that for new, unseen input data, we can generate the output. Training this neural network is all about getting as close an approximation to the original (unknown function) as possible. Upvotes: 0 <issue_comment>username_3: Any model can be considered to be a function. The term "model" simply denotes a function being used in a particular way, namely to approximate some other function of interest. Upvotes: 2 <issue_comment>username_4: Every model is a function. Not every function is a model. A function uniquely maps elements of some set to elements of another set, possibly the same set. Every AI model is a function because they are implemented as computer programs and every computer program is a function uniquely mapping the combination of the sequence of bits in memory and storage at program start up, plus inputs, to the sequence of bits in memory and storage, plus output, at program termination. However, a 'model' is very specifically a representation of something. Take the logistic curve: $$ f(x) = \frac{L}{1 + e^{k(x-x\_{0})} } $$ Given arbitrary real values for $L$, $k$, and $x\_{0}$, that's a function. However, given much more specific values learned from data, it can be a model of population growth. 
Similarly, a neural network with weights initialized to all zeros is a function, but a very uninteresting function with the rather limited codomain $\{0\}$. However, if you then train the network by feeding it a bunch of data until the weights converge to give predictions or actions roughly corresponding to some real world generating process, now you have a model of that generating process. Upvotes: 1
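To make the "model = parametric family, function = one member" distinction concrete, here is a small sketch of the two-input network from equation (2) above (illustrative only):

```python
import math

def relu(z):
    return max(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def make_f(theta):
    """The *model* is the whole parametric family; fixing theta selects
    one specific function out of it."""
    t1, t2, t3 = theta
    return lambda x1, x2: sigmoid(t3 * relu(x1 * t1 + x2 * t2))

# theta = [0.2, 10, 0.4] picks out the specific function of equation (2).
f = make_f([0.2, 10.0, 0.4])
```

Calling `make_f` with different parameter vectors yields different functions, all drawn from the same model.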
2020/12/30
1,435
5,484
<issue_start>username_0: I found this [question](https://ai.stackexchange.com/questions/25109/is-there-anything-that-ensures-that-convolutional-filters-dont-end-up-the-same) very interesting, and this is a follow up on it. Presumably, we'd want all the filters to converge towards some complementary set, where each filter fills as large a niche as possible (in terms of extracting useful information from the previous layer), without overlapping with another filter. A quick thought experiment tells me (please correct me if I'm wrong) that if two filters are identical down to maximum precision, then without adding in any other form of stochastic differentiation between them, their weights will be updated in the same way at each step of gradient descent during training. Thus, it would be a very bad idea to initialise all filters in the same way prior to training, as they would all be updated in exactly the same way (see footnote 1). On the other hand, a quick thought experiment isn't enough to tell me what would happen to two filters that are *almost* identical, as we continue to train the network. **Is there some mechanism causing them to then diverge away from one another, thereby filling their own "complementary niches" in the layer?** My intuition tells me that there must be, otherwise using many filters just wouldn't work. But during back-propagation, each filter is downstream, and so they don't have any way of communicating with one another. At the risk of anthropomorphising the network, I might ask "How do the two filters collude with one another to benefit the network as a whole?" --- Footnotes: 1. Why do I think this? Because the expession for the partial derivative of the $k$th filter weights with respect to the cost $\partial W^k/\partial C$ will be identical for all $k$. 
From the perspective of back-propagation, all paths through the filters look exactly the same.<issue_comment>username_1: Yes, your thought experiment is correct, and the concept is known as **breaking the symmetry**. This is why biases can be initialized to $0$ (bias initialization doesn't matter), but weights should be randomly initialized to different numbers -- to break the symmetry. Otherwise, if not, the network will function as if it has $n-1$ filters (or however many filters that are unique) instead of the full $n$ filters. As for your main question, if two filters are initialized to very similar values, they may branch out as long as that is what minimizes the training loss. There is no collusion or coordination going on; each filter updates completely independently. You can even freeze all the other filters and only perform gradient descent on one filter at a time. Each filter just follows the direction of their gradient to minimize the training loss. Consider the backprop equations as defined by [this](http://neuralnetworksanddeeplearning.com/chap2.html) online book: [![enter image description here](https://i.stack.imgur.com/D5xG0.png)](https://i.stack.imgur.com/D5xG0.png) The gradient of the current layer's weights depends on 1. The future layers' weights, errors, and activation function's derivatives 2. The current layer's activation function's derivative, and 3. The previous layer's outputs. Each weight in the layer (i.e. each filter in the layer) looks at different parts of these three components (indexed by $j$ and $k$ in equation $BP4$). It is this different perspective that allows them to update their gradients in different directions, even if their initial weights are very similar to each other. Note that it is possible that they end up with the same gradient, but it is very unlikely. 
Upvotes: 1 <issue_comment>username_2: Here I am just trying to simplify what @username_1 already said, using mathematical arguments. Say we have a cost function $J(x, y; F(\cdot; \Theta))$ for training a NN $F(\cdot; \Theta)$ with input $x$ and expected output $y$. Gradient descent tells us how to update each $\theta\_{i} \in \Theta$, namely $$ \theta\_{i}(t+1) = \theta\_{i}(t) - \alpha \frac{\partial J(x,y; \Theta)}{\partial \theta\_{i}} $$ with $t$ the training time. So let's focus on a specific component of the NN, $f(x; \theta)$; from its perspective, the computation is $$ h(f(g(x); \theta), y) $$ with * $h(x,y)$ representing all the subsequent components plus the loss function * $g(x)$ representing all the previous computation * $x$ the input * $y$ the expected output so its update is $$ \Delta \theta(t) = \frac{\partial h(f(g(x(t)); \theta), y(t))}{\partial \theta} $$ with * $x(t)$ the concrete input at time $t$ * $y(t)$ the expected output at time $t$ * $\theta(t)$ the concrete value of the parameter at time $t$ Applying the chain rule, we have $$ \Delta \theta(t) = h'(f(g(x(t)); \theta(t)), y(t)) \, f'(g(x(t)); \theta(t)) $$ so the gradient observed by a certain parameter $\theta$ depends on * $\theta(t)$, the current value of the parameter * $x(t)$, the current input, and more specifically $g(x(t))$, the processing of this input by the previous part of the NN * $y(t)$, the expected output * $h'(\cdot)$, the gradient of the subsequent part of the network So even if we have 2 weights with the same value, $\theta\_{i}(t) = \theta\_{j}(t)$ for $i \neq j$, at a certain point in training time, they can still see different gradients, since * the upstream processing $g(x(t))$ can be different * the gradient backpropagating from the downstream processing $h'(\cdot, y(t))$ can be different Upvotes: 0
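The argument in these answers can be checked numerically with a tiny hand-differentiated network (a sketch: one scalar input, two ReLU hidden units playing the role of "filters", one linear output):

```python
def grads(w, v, x, t):
    """Hand-computed gradients for y = sum_i v_i * relu(w_i * x),
    with loss L = (y - t)^2."""
    h = [max(0.0, wi * x) for wi in w]
    y = sum(vi * hi for vi, hi in zip(v, h))
    e = 2.0 * (y - t)  # dL/dy
    dw = [e * vi * (1.0 if wi * x > 0 else 0.0) * x for wi, vi in zip(w, v)]
    dv = [e * hi for hi in h]
    return dw, dv

# Identical units receive identical gradients: the symmetry never breaks.
dw_same, dv_same = grads([0.5, 0.5], [0.3, 0.3], x=1.0, t=1.0)

# Almost-identical units already receive (slightly) different gradients,
# because dL/dv_i sees each unit's own activation h_i.
dw_near, dv_near = grads([0.5, 0.50001], [0.3, 0.3], x=1.0, t=1.0)
```

With exactly equal weights every gradient component matches, so the two units stay clones forever; any tiny difference feeds through the gradients and can grow.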
2020/12/31
977
4,028
<issue_start>username_0: I'm well aware of the inner workings of CNN models for object detection, and although I've not worked on a semantic segmentation problem I can imagine how it works. With these types of models, we need to say "segment out the humans", or "segment out the X". But what about when I say something like "segment out the subject of this photo, whatever it happens to be". For example, see this service: <https://removal.ai/> Without too much imagination I might guess that they apply a multiclass segmentation model and just show any foreground pixels, no matter what class they belong to. So we'd hope that the subject is in one of the classes that the model was trained for, and that there are no other class instances in the image that shouldn't be captured. But is there a more general way?<issue_comment>username_1: In image segmentation the target is actually an image, with the same dimensions as the input, where each pixel has a label depending on which class it represents. It is not uncommon for such a dataset to have a "background" class that essentially consists of the pixels not belonging to any other class. If not you can always group together classes typically associated with background (e.g. "sky", "cloud", "grass", "mountain", etc.) to form the class "background". Likewise you could group all other possible classes of interest (e.g. "person", "car", "horse", etc.) into the class "foreground". With this dataset you could train an image segmentation model that predicts if a pixel belongs to the background or the foreground, without actually classifying it into a "person" or a "car". So suppose you want to make your own removal.ai, you could: * find one or more **diverse** image segmentation datasets (it needs to be diverse so that it will work on any generic photo uploaded to the site) * check all the unique classes in the labels * group all classes associated with backgrounds into class 0 (i.e. 
"background" class) * group all classes associated with foregrounds into class 1 (i.e. "foreground" class) * train an image segmentation model with these two classes * profit Upvotes: 1 <issue_comment>username_2: Background removal is technically known as image matting. It is similar to segmentation, but it is a regression problem. The objective is to predict the alpha matte, which separates the foreground and background. Simply adding the predicted alpha matte as the fourth channel to the RGB image removes the background. The model architecture is mostly an encoder-decoder model. The encoder extracts features and compresses them in the latent space, while the decoder constructs the alpha channel. Since it is a challenging problem, most solutions require additional input along with the RGB image. The most common one is a trimap. The Deep Image Matting paper (<NAME>., <NAME>., <NAME>., & <NAME>., 2017) is one of the major studies and it proposes such a solution. However, trimap is a significant limitation since it requires human annotation. To eliminate the trimap requirement, a model can be trained with only an RGB image, but the quality significantly decreases. Another solution is training two neural networks. The first one predicts a coarse alpha matte, and the second one returns a sharper alpha matte, [as done in Google's Pixel 6 phones](https://ai.googleblog.com/2022/01/accurate-alpha-matting-for-portrait.html). Another trimap-free solution is designing a model with one encoder and two decoders. The decoders predict a semantic mask and a detailed map simultaneously. GFM (<NAME>., <NAME>., <NAME>., & <NAME>. (2020).) and MODNet (<NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2020).) are two related studies. 
For more in-depth information on both traditional and deep learning solutions, you can refer to my blog post: <https://withoutbg.com/resources/how-automatic-image-background-removal-works> Disclosure: I built [withoutbg.com](https://withoutbg.com/), a background removal tool, powered by deep learning. Upvotes: 0
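The "alpha matte as a fourth channel" step mentioned above amounts to attaching a per-pixel opacity to the RGB image. A minimal sketch using nested lists instead of a real image library:

```python
def apply_alpha_matte(rgb, alpha):
    """Attach a predicted alpha matte as the fourth (A) channel.

    rgb:   H x W x 3 nested lists, values in [0, 255]
    alpha: H x W matte, values in [0.0, 1.0] (1.0 = opaque foreground)
    """
    return [
        [pixel + [round(255 * a)] for pixel, a in zip(row, alpha_row)]
        for row, alpha_row in zip(rgb, alpha)
    ]

rgb = [[[10, 20, 30], [40, 50, 60]]]   # one row, two pixels
alpha = [[1.0, 0.0]]                   # keep the left pixel, drop the right
rgba = apply_alpha_matte(rgb, alpha)
```

Any viewer that honors the alpha channel then renders zero-alpha pixels as transparent, which is what "removing the background" means here.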
2020/12/31
1,338
3,835
<issue_start>username_0: I was pondering on the loss function of GAN, and the following thing turned out \begin{aligned} L(D, G) & = \mathbb{E}\_{x \sim p\_{r}(x)} [\log D(x)] + \mathbb{E}\_{x \sim p\_g(x)} [\log(1 - D(x)] \\ & = \int\_x \bigg( p\_{r}(x) \log(D(x)) + p\_g (x) \log(1 - D(x)) \bigg) dx \\ & =-\left[CE(p\_r(x), D(x))+CE(p\_g(x), 1-D(x)) \right] \\ \end{aligned} Where CE stands for cross-entropy. Then, by using law of large numbers: \begin{aligned} L(D, G) & = \mathbb{E}\_{x \sim p\_{r}(x)} [\log D(x)] + \mathbb{E}\_{x \sim p\_g(x)} [\log(1 - D(x)] \\ & =\lim\_{m\to \infty}\frac{1}{m}\sum\_{i=1}^{m}\left[1\cdot \log(D(x^{(i)}))+1\cdot \log(1-D(x^{(i)}))\right]\\ & =- \lim\_{m \to \infty} \frac{1}{m}\sum\_{i=1}^{m} \left[CE(1, D(x))+CE(0, D(x))\right] \end{aligned} As you can see, I got a very strange result. This should be wrong intuitively because in the last equation first part is for real samples, and the second is for generated samples. However, I am curious about where are the mistakes? (Please explain with math).<issue_comment>username_1: $\textbf{Remark.}$ I'd leave this as a comment if I could. Regarding notation (which I believe may be the cause of your issue here), the loss function is better written as \begin{align\*} \operatorname{Loss} &= \frac{1}{m}\sum\_{i=1}^m \left(\log D\big(x^{(i)}\big) + \log\Big(1-D\big(G\big(z^{(i)}\big)\big)\right)\\ &\approx \mathbb{E}\_x[\log D(x)] + \mathbb{E}\_z[\log(1-D(G(z)))], \end{align\*} where the noise vectors, $z$, come from a suitable distribution, and $G(z)$ denotes the output of the generator; the $\approx$ symbol here implicitly assumes that the appropriate form of the Law of Large Numbers (LLN) applies. Most importantly, the dependence on G is not trivial (for instance, what if $G$ never learns and always produces the same output?). Also, the expectations should depend on their respective distributions, even when using LLN. 
For example, think of how you calculate the expectation of a discrete random variable. Upvotes: 1 <issue_comment>username_2: I guess the issue is that you lost track of where the samples came from. Since you requested a math explanation, I'll try to go step by step using my own notation, without checking other material, to avoid being biased by how other authors present it. So we start from $$ L(D,G) = E\_{x \sim p\_{r}(x)} \log(D(x)) + E\_{x \sim p\_{g}(x)}\log(1 - D(x)) $$ then you apply the definition of the $E\_{\cdot}(\cdot)$ operator in the continuous case $$ L(D,G) = \int\_{x} \log(D(x)) p\_{r}(x)dx + \int\_{x}\log(1 - D(x))p\_{g}(x)dx $$ then you Monte Carlo sample it to approximate it $$ L(D,G) \approx \frac{1}{n} \sum\_{i=1}^{n} \log(D(x\_{i}^{(r)})) + \frac{1}{m} \sum\_{j=1}^{m}\log(1 - D(x\_{j}^{(g)})) $$ As you can see, here I have kept the samples from the 2 distributions separated and used a notation that tracks their origin, so now you can use the right label in the cross-entropy $$ L(D,G) \approx -\left[ \frac{1}{n} \sum\_{i=1}^{n} L\_{ce}(1, D(x\_{i}^{(r)})) + \frac{1}{m} \sum\_{j=1}^{m} L\_{ce}(0, D(x\_{j}^{(g)})) \right] $$ But you could also have decided to merge the 2 integrals first, to get $$ L(D,G) = \int\_{x} \left( \log(D(x)) p\_{r}(x) + \log(1 - D(x))p\_{g}(x) \right) dx $$ which is a mathematically legitimate operation. However, the issue arises when you try to discretize this with Monte Carlo sampling. You can't just replace the integral with one sum, because here, contrary to what we did above, you do not have 1 distribution per integral to sample: the same integral involves 2 distributions, and for each sample you have to say which distribution it comes from. This is exactly where the issue is in your notation: you lost track of this information, so it looks as if all the samples come from one distribution. Upvotes: 3 [selected_answer]
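The point about keeping each sample's origin is easy to verify numerically: the two-sum Monte Carlo estimate is exactly the negative of a label-aware cross-entropy, as long as real samples get label 1 and generated samples get label 0 (a toy sketch with a stand-in discriminator):

```python
import math

def bce(label, p):
    # binary cross-entropy of a single prediction p against label 0/1
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def gan_loss(d, real_xs, fake_xs):
    # (1/n) sum log D(x_r) + (1/m) sum log(1 - D(x_g))
    term_real = sum(math.log(d(x)) for x in real_xs) / len(real_xs)
    term_fake = sum(math.log(1.0 - d(x)) for x in fake_xs) / len(fake_xs)
    return term_real + term_fake

def gan_loss_via_ce(d, real_xs, fake_xs):
    # Same estimate, written with the correct label for each sample's origin.
    ce_real = sum(bce(1, d(x)) for x in real_xs) / len(real_xs)
    ce_fake = sum(bce(0, d(x)) for x in fake_xs) / len(fake_xs)
    return -(ce_real + ce_fake)

d = lambda x: 1.0 / (1.0 + math.exp(-x))  # stand-in discriminator
real, fake = [0.5, 1.2, 0.9], [-0.7, -1.1]
```

The two functions agree to machine precision; the mistake in the question corresponds to feeding every sample into `bce` with both labels at once.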
2021/01/01
893
3,447
<issue_start>username_0: When implementing a genetic algorithm, I understand the basic idea is to have an initial population of a certain size. Then, we pick two individuals from a population, construct two new individuals (using mutation and crossover), repeat this process X number of times, and then replace the old population with the new population, based on selecting the fittest. In this method, the population size remains fixed. In reality in evolution, populations undergo fluctuations in population sizes (e.g. population bottlenecks, and new speciations). I understand the disadvantages of variable population sizes from a biological view are, for example, that a bottleneck will reduce the population to minimal levels, so not much evolution will occur. Are there disadvantages to using variable population sizes in genetic algorithms, from a programming perspective? I was thinking the numbers per population could follow a distribution of some sort so they don't just randomly fluctuate erratically, but maybe this does not make sense to do.<issue_comment>username_1: Population size is a tricky issue even in pure biological models. Biological population sizes obviously vary. The two great protagonists of the argument were <NAME> and <NAME>, with the argument being between Fisher favouring few large populations against Wright’s many small interconnected populations. There is evidence that evolution occurs more rapidly in Wright’s model, but the evidence is inconclusive. The theory concentrates on the probability that a mutation will occur and then become dominant in a population. In a small population a beneficial mutation is more likely to be selected for reproduction, but premature convergence is a serious danger, while in a larger population a mutant is less likely to be removed from the population during reproduction.
I would strongly recommend a read of [Games of life](https://www.amazon.co.uk/Games-Life-Explorations-Evolution-Behavior/dp/0486812898/ref=sr_1_1?adgrpid=50071920181&dchild=1&gclid=CjwKCAiArbv_BRA8EiwAYGs23Le0IAnwjMmZOb3JgCcTwuAFFZmPdRHcHNC1PEkxRDsNAA3YoTvDERoC0r8QAvD_BwE&hvadid=259146746543&hvdev=c&hvlocphy=1007199&hvnetw=g&hvqmt=e&hvrand=4888512412962858368&hvtargid=kwd-314887220106&hydadcr=11468_1841728&keywords=games%20of%20life%20karl%20sigmund&qid=1609534890&sr=8-1&tag=googhydr-21) by <NAME>. Upvotes: 2 [selected_answer]<issue_comment>username_2: Using different population sizes at different stages of the optimization process can be beneficial. With a large population you can effectively explore the landscape to find promising areas; large populations help in finding global optima or high-fitness local optima. However, using large populations requires more fitness evaluations and wastes computational resources. With a small population you can effectively exploit a previously found promising area and get high-accuracy solutions. For this reason, some works suggest using a large population at the start of the algorithm and gradually decreasing its size, as in [Improving the search performance of SHADE using linear population size reduction]. Also, some dynamic optimization methods use dynamic population sizes. For instance, they create sub-populations when it is necessary to discover more optima or to cover the landscape, and they shrink their sub-populations when they detect a change or a new promising uncovered area in the landscape. Upvotes: 0
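The linear population size reduction scheme cited above can be sketched in a few lines (parameter names are mine, not taken from the paper):

```python
def lpsr_population_size(nfe, max_nfe, n_init, n_min):
    """Linearly shrink the population from n_init down to n_min as the
    number of fitness evaluations (nfe) approaches the budget max_nfe."""
    return round(n_init + (n_min - n_init) * nfe / max_nfe)

# Large population early (exploration), small population late (exploitation).
sizes = [lpsr_population_size(nfe, 10_000, 100, 4)
         for nfe in (0, 5_000, 10_000)]
```

At each generation the algorithm would recompute the target size and drop the worst individuals until the population matches it.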
2021/01/01
574
2,657
<issue_start>username_0: I understand that in each generation of a genetic algorithm, that generation must re-prove its fitness (and then the fittest of that population is taken for the next population). In this case, I guess it's a presumption that if you take the fittest of each generation, and use them to form the basis of the next generation, that your population as a whole is getting fitter with time. But algorithmically, how can I detect this? If there's no end goal known, then I can't measure the error/distance from the goal. So how can you tell how much each generation is becoming fitter by?<issue_comment>username_1: There is no exact way to assess that a genetic algorithm has located a global optimum. Indeed there may be multiple global optima. You must fall back to heuristic methods. The fitness of a population is the maximum fitness of any individual. Unless specific measures are taken to maintain diversity, the population will converge to an optimum, local or global. At that point all individuals will, except for mutation, be identical. You could take the fittest individual of such a population as your solution, but you will not know if the solution is a global or local optimum. Two reasonable heuristics are these. First, run the algorithm till it converges and maintains its fitness for a number of further generations. Or, second, run the algorithm multiple times, and take the fittest of all the located solutions. Neither is exact. Upvotes: 2 [selected_answer]<issue_comment>username_2: There is no general method to detect a change in the fitness landscape, since changes can be very local and can occur in just a small area of the fitness landscape. For this reason, nature-inspired optimization algorithms usually maintain a diversified population to cope with environmental changes. A common mechanism is using several sub-populations and ensuring that these sub-populations do not overlap.
Also, some heuristics have been proposed that can help the algorithms detect changes. For instance, if you are using multiple sub-populations, you can re-evaluate one element of each sub-population at some generation to find out whether an environmental change has occurred. More comprehensive heuristics have also been proposed for change detection, like the one proposed in [<NAME>, <NAME>, <NAME>, Modified differential evolution with locality induced genetic operators for dynamic optimization]. In my opinion, use simple methods: for instance, keep a small but diverse set of population members, and re-evaluate them every k generations to detect change. Upvotes: 0
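The "re-evaluate a small, diverse set of members" idea from these answers can be sketched as follows (a toy illustration, not the cited paper's method):

```python
def landscape_changed(fitness_fn, sentinels, cached_fitness, tol=1e-9):
    """Re-evaluate a small, diverse set of stored individuals; if any
    fitness value moved, an environmental change has occurred."""
    return any(abs(fitness_fn(x) - f) > tol
               for x, f in zip(sentinels, cached_fitness))

f_static = lambda x: -(x - 2.0) ** 2      # current landscape
sentinels = [0.0, 1.0, 3.0]               # diverse individuals kept aside
cache = [f_static(x) for x in sentinels]  # fitness recorded k generations ago

f_shifted = lambda x: -(x - 2.5) ** 2     # the optimum has moved
```

As long as the landscape is static the check stays quiet; once the objective shifts, at least one sentinel's fitness changes and the detector fires.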
2021/01/03
5,178
15,299
<issue_start>username_0: I have been trying to solve the OpenAI lunar lander game with a DQN taken from this paper <https://arxiv.org/pdf/2006.04938v2.pdf>. The issue is that it takes 12 hours to train 50 episodes, so something must be wrong. ``` import os import random import gym import numpy as np from collections import deque import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.optimizers import Adam from tensorflow.keras import Model ENV_NAME = "LunarLander-v2" DISCOUNT_FACTOR = 0.9 LEARNING_RATE = 0.001 MEMORY_SIZE = 2000 TRAIN_START = 1000 BATCH_SIZE = 24 EXPLORATION_MAX = 1.0 EXPLORATION_MIN = 0.01 EXPLORATION_DECAY = 0.99 class MyModel(Model): def __init__(self, input_size, output_size): super(MyModel, self).__init__() self.d1 = Dense(128, input_shape=(input_size,), activation="relu") self.d2 = Dense(128, activation="relu") self.d3 = Dense(output_size, activation="linear") def call(self, x): x = self.d1(x) x = self.d2(x) return self.d3(x) class DQNSolver(): def __init__(self, observation_space, action_space): self.exploration_rate = EXPLORATION_MAX self.action_space = action_space self.memory = deque(maxlen=MEMORY_SIZE) self.model = MyModel(observation_space, action_space) self.model.compile(loss="mse", optimizer=Adam(lr=LEARNING_RATE)) def remember(self, state, action, reward, next_state, done): self.memory.append((state, action, reward, next_state, done)) def act(self, state): if np.random.rand() < self.exploration_rate: return random.randrange(self.action_space) q_values = self.model.predict(state) return np.argmax(q_values[0]) def experience_replay(self): if len(self.memory) < BATCH_SIZE: return batch = random.sample(self.memory, BATCH_SIZE) state_batch, q_values_batch = [], [] for state, action, reward, state_next, terminal in batch: # q-value prediction for a given state q_values_cs = self.model.predict(state) # target q-value max_q_value_ns = np.amax(self.model.predict(state_next)[0]) # correction on the Q value for the action used if terminal: q_values_cs[0][action] = reward else: q_values_cs[0][action] = reward + DISCOUNT_FACTOR * max_q_value_ns state_batch.append(state[0]) q_values_batch.append(q_values_cs[0]) # train the Q network self.model.fit(np.array(state_batch), np.array(q_values_batch), batch_size=BATCH_SIZE, epochs=1, verbose=0) self.exploration_rate *= EXPLORATION_DECAY self.exploration_rate = max(EXPLORATION_MIN, self.exploration_rate) def lunar_lander(): env = gym.make(ENV_NAME) observation_space = env.observation_space.shape[0] action_space = env.action_space.n dqn_solver = DQNSolver(observation_space, action_space) episode = 0 scores = [] print("Running") while True: episode += 1 state = env.reset() state = np.reshape(state, [1, observation_space]) score = 0 while True: action = dqn_solver.act(state) state_next, reward, terminal, _ = env.step(action) state_next = np.reshape(state_next, [1, observation_space]) dqn_solver.remember(state, action, reward, state_next, terminal) dqn_solver.experience_replay() state = state_next score += reward if terminal: print("Episode: " + str(episode) + ", exploration: " + str(dqn_solver.exploration_rate) + ", score: " + str(score)) scores.append(score) break if np.mean(scores[-min(100, len(scores)):]) >= 195: print("Problem is solved in {} episodes.".format(episode)) break env.close() if __name__ == "__main__": lunar_lander() ``` (Note: `scores = []` is initialized once before the episode loop and `env.close` is called as `env.close()`; the rest is unchanged.) Here are the logs ``` root@b11438e3d3e8:~# /usr/bin/python3 /root/test.py 2021-01-03 13:42:38.055593: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2021-01-03 13:42:39.338231: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1 2021-01-03 13:42:39.368192: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-01-03 13:42:39.368693: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1080 computeCapability: 6.1 coreClock: 1.8095GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s 2021-01-03 13:42:39.368729: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2021-01-03 13:42:39.370269: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2021-01-03 13:42:39.371430: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2021-01-03 13:42:39.371704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2021-01-03 13:42:39.373318: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2021-01-03 13:42:39.374243: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2021-01-03 13:42:39.377939: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2021-01-03 13:42:39.378118: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-01-03 13:42:39.378702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-01-03 13:42:39.379127: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2021-01-03 13:42:39.386525: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3411185000 Hz 2021-01-03 13:42:39.386867: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4fb44c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2021-01-03 13:42:39.386891: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2021-01-03 13:42:39.498097: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-01-03 13:42:39.498786: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4fdf030 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2021-01-03 13:42:39.498814: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1080, Compute Capability 6.1 2021-01-03 13:42:39.498987: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-01-03 13:42:39.499416: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1080 computeCapability: 6.1 coreClock: 1.8095GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s 2021-01-03 13:42:39.499448: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2021-01-03 13:42:39.499483: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2021-01-03 13:42:39.499504: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2021-01-03 13:42:39.499523: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2021-01-03 13:42:39.499543: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2021-01-03 13:42:39.499562: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2021-01-03 13:42:39.499581: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2021-01-03 13:42:39.499643: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-01-03 13:42:39.500113: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-01-03 13:42:39.500730: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2021-01-03 13:42:39.500772: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2021-01-03 13:42:39.915228: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix: 2021-01-03 13:42:39.915298: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0 2021-01-03 13:42:39.915322: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N 2021-01-03 13:42:39.915568: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-01-03 13:42:39.916104: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-01-03 13:42:39.916555: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6668 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1) Running 2021-01-03 13:42:40.267699: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 ``` This is the GPU stats ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 450.66 Driver Version: 450.66 CUDA Version: 11.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 GeForce GTX 1080 Off | 00000000:01:00.0 On | N/A | | 0% 53C P2 46W / 198W | 7718MiB / 8111MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ ``` As you can see, TensorFlow does not compute on the GPU but reserves the memory, so I'm assuming it's because the inputs of the neural network are too small and it uses the CPU instead. To make sure the GPU was installed properly, I ran a sample from their documentation and it uses the GPU. Is it an issue with the algorithm or the code? Is there a way to utilize the GPU in this case? Thanks!<issue_comment>username_1: When it comes to GPU usage, ``` nvidia-smi ``` shows the usage at the time it was executed. You should try running ``` watch -n0.01 nvidia-smi ``` to see the usage of the GPU every 0.01 second. It should output some small usage for the current model, like 5%. You could try to increase your model, e.g. ``` self.d1 = Dense(1024, input_shape=(input_size,), activation="relu") self.d2 = Dense(1024, activation="relu") self.d3 = Dense(output_size, activation="linear") ``` to see if the usage of the GPU increases.
Upvotes: 1 <issue_comment>username_2: Is it training at all? Or is agent performance not improving over time? Q learning can be pretty unstable. I would recommend logging the sum of rewards received by the agent at the end of each episode, and the model loss, to help in the debugging process. The sum of rewards will show you if the agent is improving over time, and the model loss will give you a rough idea about how stable the convergence is. I would recommend using TensorBoard to log these metrics (<https://www.tensorflow.org/tensorboard/get_started#using_tensorboard_with_other_methods>). You will be able to monitor these metrics throughout the training process. You could also just print these metrics at the end of every epoch and monitor them in your console. You really just need some way to see what's going on during training. The paper you linked also mentions double q learning, which does not seem to be implemented in your code. Vanilla q learning has a reputation for being overoptimistic in the values that it assigns to states. This results in compounding approximation errors, which tend to destabilize learning. Using double q learning may help speed up convergence. If you need help with double q learning, check out this paper: <https://arxiv.org/pdf/1509.06461.pdf>, and this github page: <https://github.com/jihoonerd/Deep-Reinforcement-Learning-with-Double-Q-learning/blob/master/ddqn/agent/ddqn_agent.py> If you use double q learning, you may have to write your own custom training loop. This can be achieved by using the gradient tape object. Make sure to wrap this new function in a tf.function decorator. This will tell the TensorFlow back-end to compile that bit of code, making it run faster (<https://www.tensorflow.org/guide/function>). There are also some handy speed-up tips in this post (<https://www.tensorflow.org/tutorials/reinforcement_learning/actor_critic>).
They even wrap the environment step functions in tf.functions. The article uses actor-critic, which is a combination of policy gradient and q learning techniques, but you can swap out their neural network update code with the q learning functionality that you need. Upvotes: 2
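To make the double q learning suggestion above concrete, here is a minimal NumPy sketch of how the double-DQN target differs from the vanilla DQN target. This is an illustration only: the function name, array shapes, and toy numbers are assumptions, not code from the paper or the question.

```python
import numpy as np

def dqn_targets(rewards, q_next_online, q_next_target, terminal,
                gamma=0.9, double=True):
    """Compute TD targets for a batch of transitions.

    q_next_online / q_next_target: (batch, n_actions) Q-values of the next
    states under the online and target networks respectively.
    """
    if double:
        # Double DQN: the online net selects the action,
        # the target net evaluates it.
        best_actions = np.argmax(q_next_online, axis=1)
        next_values = q_next_target[np.arange(len(rewards)), best_actions]
    else:
        # Vanilla DQN: the target net both selects and evaluates (max operator).
        next_values = np.max(q_next_target, axis=1)
    # Terminal transitions bootstrap from nothing.
    return rewards + gamma * next_values * (1.0 - terminal.astype(float))

# Toy batch of 2 transitions (made-up numbers).
rewards = np.array([1.0, 0.0])
q_no = np.array([[0.2, 0.9], [0.5, 0.1]])   # online net, next states
q_nt = np.array([[0.4, 0.3], [0.8, 0.2]])   # target net, next states
terminal = np.array([False, True])
print(dqn_targets(rewards, q_no, q_nt, terminal))
```

In the vanilla case the target network both selects and evaluates the next action, which is the source of the overestimation bias mentioned above; the double variant decouples selection (online net) from evaluation (target net).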
2021/01/06
<issue_start>username_0: I'm using MATLAB 2019, Linux, and UNet (a CNN specifically designed for semantic segmentation). I'm training the network to classify all pixels in an image as either cell or background to get segmentations of cells in microscopic images. My problem is the network is classifying every single pixel as background, and seems to just be outputting all zeroes. The validation accuracy improves a little at the very start of the training but then plateaus at around 60% for the majority of the training time. The network doesn't seem to be training very well and I have no idea why. Can anyone give me some hints about what I should look into more closely? I just don't even know where to start with debugging this. Here's my code: ``` % Set datapath datapath = '/scratch/qbi/uqhrile1/ethans_lab_data'; % Get training and testing datasets images_dataset = imageDatastore(strcat(datapath,'/bounding_box_cropped_resized_rgb')); load(strcat(datapath,'/gTruth.mat')); labels = pixelLabelDatastore(gTruth); [imdsTrain, imdsVal, imdsTest, pxdsTrain, pxdsVal, pxdsTest] = partitionCamVidData(images_dataset,labels); % Weight segmentation class importance by the number of pixels in each class pixel_count = countEachLabel(labels); % count number of each type of pixel frequency = pixel_count.PixelCount ./ pixel_count.ImagePixelCount; % calculate pixel type frequencies class_weights = mean(frequency) ./ frequency; % create class weights that balance the loss function so that more common pixel types won't be preferred % Specify the input image size. imageSize = [512 512 3]; % Specify the number of classes. numClasses = 2; % Create network lgraph = unetLayers(imageSize,numClasses); % Replace the network's classification layer with a pixel classification % layer that uses class weights to balance the loss function pxLayer = pixelClassificationLayer('Name','labels','Classes',pixel_count.Name,'ClassWeights',class_weights); lgraph = replaceLayer(lgraph,"Segmentation-Layer",pxLayer); %% TRAIN THE NEURAL NETWORK % Define validation dataset-with-labels validation_dataset_with_labels = pixelLabelImageDatastore(imdsVal,pxdsVal); % Training hyper-parameters: edit these settings to fine-tune the network options = trainingOptions('adam', 'LearnRateSchedule','piecewise', 'LearnRateDropPeriod',10, 'LearnRateDropFactor',0.3, 'InitialLearnRate',1e-3, 'L2Regularization',0.005, 'ValidationData',validation_dataset_with_labels, 'ValidationFrequency',10, 'MaxEpochs',3, 'MiniBatchSize',1, 'Shuffle','every-epoch'); % Set up data augmentation to enhance training dataset aug_imgs = {}; numberOfImages = length(imdsTrain.Files); for k = 1 : numberOfImages % Apply cutout augmentation img = readimage(imdsTrain,k); cutout_img = random_cutout(img); imwrite(cutout_img,strcat('/scratch/qbi/uqhrile1/ethans_lab_data/augmented_dataset/img_',int2str(k),'.tiff')); end aug_imdsTrain = imageDatastore('/scratch/qbi/uqhrile1/ethans_lab_data/augmented_dataset'); % Add other augmentations augmenter = imageDataAugmenter('RandXReflection',true, 'RandXTranslation',[-10 10],'RandYTranslation',[-10 10]); % Combine augmented data with training data augmented_training_dataset = pixelLabelImageDatastore(aug_imdsTrain, pxdsTrain, 'DataAugmentation',augmenter); % Train the network [cell_segmentation_nn, info] = trainNetwork(augmented_training_dataset,lgraph,options); save cell_segmentation_nn ```<issue_comment>username_1: I have never used MATLAB for ml before, so it is difficult for me to understand all your code. My first association to your problem is class imbalance.
Since you seem to have got a handle on that, the problem could be dying ReLU or bloated activations. To check if the ReLU is dying, you could look at the activations of the early layers of your network. If many values are zero, it should be dying ReLU. Upvotes: 0 <issue_comment>username_2: Similar to other answers, I don't know MATLAB that well, but you could try the following steps to debug your problem. Make sure you can overfit to a single instance ============================================== From your dataset, pull out a single image with a good amount of true positives in it. Duplicate that image B times (where B = batch size) and then try to train your network with only that small dataset. If you can't overfit to a single instance, then something is really wrong and you should validate all functional aspects of your network. If you can overfit, then it's probably more of an algorithmic or data-imbalance issue. Validate Functional Aspects of the Network ========================================== ### Make sure your input data and labels are correct. Validate that your images are being correctly inputted into the network. You can do this by printing out any images right before they go into the training function. For the labels, make sure to manually inspect a few labels to be sure that the labels correctly match up with the images.
Data Imbalance ============== If you were able to overfit to a single image, and validated the functional aspects of your network then you may be facing a data imbalance issue. Look at your dataset and see what % of your instances are true positives vs. true negatives. If you have an exteme imbalance (like 10% 90%) or something like that, then build a dataset that is more balanced and see if you can fit your data. If you fit the data with that more balanced dataset, then there's plenty of ways to fix your data imbalance issue. Google around for data imbalance and you should get a few good ideas. Some include focal loss, upsampling, etc... Check your receptive field ========================== The receptive field on a network is basically the area that can be looked at by the network for a specific region. This is controlled by the size of the convolution filters, stride, etc. If the data that you are segmenting doesn't fit within the receptive field of the model, then you might be saturating the loss function. tldr try playing around with kernel sizes. Upvotes: 1
2021/01/07
<issue_start>username_0: I have 2 small images. They are basically the same, but differ in rotation and size. I should estimate the parameters for affine transform to get them similar. What network structure can be suitable for this task? For example, those based on convolutional networks did badly, because the pictures are too small.[![enter image description here](https://i.stack.imgur.com/tKQLP.png)](https://i.stack.imgur.com/tKQLP.png)
2021/01/08
<issue_start>username_0: I'm studying machine learning and I came across a challenging question. [![enter image description here](https://i.stack.imgur.com/Tprev.png)](https://i.stack.imgur.com/Tprev.png) The answer is 2. But based on my ML notes, all of them are true. Where are the wrong points?<issue_comment>username_1: First things first: what is an optimal $w$? In this case it is supposed to be not only the one that minimizes the **empirical/sample loss**, but also **non-trivial**, as we shall soon see. Now inspect the loss functions; we see a term $-y\_iw^Tx\_i$ coming up. What exactly is this term? It can be anything. The correct term would have been $-y\_i(w^Tx\_i + b)$, or at least I will assume this term, but the reasoning will be exactly the same without it. Now this term is well defined, since the Euclidean distance of a point $x\_i$ from the hyperplane $w^Tx + b=0$, taken with its sign, is given by $w^Tx\_i + b$ (the proof for this can be found easily). So if the datapoint is supposed to have $y\_i=-1$ but according to the classifier it gives $w^Tx\_i + b = l\_i > 0$, then one can see $-y\_il\_i > 0$, hence $\max (0, -y\_i l\_i) = -y\_i l\_i$, a positive loss; the same applies to the case when $y\_i = 1$ and $w^Tx\_i + b = l\_i < 0$. We see another assumption: the datapoints are linearly separable. This means there exists a classifier (or a hyperplane) which can completely separate the 2 classes. Now if one inspects the loss functions $1,3,4$, all of them have one goal: to make $\max (0, -y\_i l\_i) = 0, \forall i$. A point to note is that $1$ does this by directly reducing $\sum\_{i=1}^n \max (0, -y\_i l\_i)$; $3$ does this using the $0-1$ loss, i.e. if the instance is wrongly classified (or $-y\_il\_i > 0$), then incur $\frac{1}{n}$ loss, else $0$; while $4$ incurs a loss of $\frac{1}{n}(w^Tx\_i + b)^2$ for each wrong classification.
It is very important to note that the solutions of all these 3 problems are the same $w^\*,b^\*$ which classifies the dataset correctly, which makes the problems equivalent. (This may not hold under some very similar-looking loss functions, so it is a point to take care about: a problem is equivalent only if it results in the same solution.) In $2$, first, $C \rightarrow \infty$ is an incorrect statement, since then not only does it become mathematically unsound (at least for me), it also means $2$ would be the same as $1,3,4$. Why? $C$ is a weighting factor; it decides how much priority the algorithm minimizing the loss will give to the term $C \sum\_{i=1}^n \max(0,-y\_il\_i)$. If, say, $C=0$ then you are basically optimizing $||w||\_2^2$, whose optimal value is at $w=0$. Hence, we get nothing useful, as optimizing $||w||\_2^2$ has no connection with our objective of reducing misclassifications. But if $0.5||w||\_2^2 + C \sum\_{i=1}^n \max(0,-y\_il\_i)$ is used, the algorithm now gives some weight to minimizing both $||w||\_2^2$ and the misclassifications. If $C$ is very small, the algorithm will prefer to minimize $||w||\_2^2$ rather than $C \sum\_{i=1}^n \max(0,-y\_il\_i)$, thus giving a solution which may not be optimal for the dataset. But if $C$ is very large, the algorithm will mostly try to minimize $C \sum\_{i=1}^n \max(0,-y\_il\_i)$, but it will still give some weight to minimizing $||w||\_2^2$, and hence may not find an optimal hyperplane. If $C \rightarrow \infty$ it is mathematically ill defined (as per my knowledge), so I won't go into it.
But all of the aforementioned explanation (i.e. the tradeoff between optimizing two different objectives) is useful only when there are some additional constraints. For example, if instead of $\max(0,-y\_il\_i)$ the term is $\max(0,1-y\_il\_i)$, then making $w$ bigger actually makes sense, as it also increases $l\_i$ (i.e. the same hyperplane denoted by a larger $w$, say $kw$, makes $l\_i$ large, hence $-l\_iy\_i$ small if correctly classified, though this doesn't make sense if we are talking about the same hyperplane), and hence the $||w||\_2^2$ term opposes such an increase. This is a very famous formulation of the loss used in SVMs. But the challenging part in this question is the assumption '**linearly separable**', and also that the additional constraint is missing (which is present in SVMs). So if $w^\*,b^\*$ incurs $0$ classification error, so will $kw^\*, b^\*$ where $k$ is any scaling factor. And since $C$ is very large, the second term in $2$ must be $0$; thus any $kw^\*, b^\*$ will work to make the second term 0. But the first term now becomes $0.5k^{2}||w^\*||\_2^2$, where $w^\*$ denotes the optimal hyperplane and hence is fixed. Thus the first term will choose $|k| \rightarrow 0$, or basically $k=0$, to minimize the first term, and the second term now simply becomes $C\sum\_{i=1}^n \max(0,-y\_ib)$, which can be made $0$ by choosing $b=0$. Thus, we get a $0$ loss when $kw^\*=0,b^\*=0$: a **trivial solution** and definitely not optimal, as this will be true for all linearly separable datasets. I assumed an extra $b$ term to leverage linear separability; without it the reasoning becomes easier and you can follow the exact same line of argument, albeit the problem might not be linearly separable anymore. The solution may not seem very elegant, but this is a standard line of reasoning in optimization problems where the optimization variable is changed from $w \rightarrow k$, where $w \in \mathbb{R^n}$ and $k \in \mathbb{R}$.
In short, for $2$ the solution produced will be $w^\*=0,b^\*=0$, which minimizes the loss but is easily seen to be trivial and not optimal at all. Upvotes: 1 <issue_comment>username_2: Since the data is linearly separable, a linear model $y = w^Tx$ will be able to perfectly classify all the examples. That means that loss functions $L\_1(w), L\_3(w)$ and $L\_4(w)$ will have a value of 0 (since all examples are correctly classified). For the loss $L\_2(w)$, the second term will be 0 if all examples are correctly classified. The first term of $L\_2(w)$ \begin{equation} \frac{1}{2}||w||^2\_2 \end{equation} will not be minimized to 0 (unless the optimal $w = \mathbf{0}$), so $L\_2(w) > 0$. The optimizer will try to minimize $L\_2(w)$ toward 0, so because of the penalty on $w$ it will try to find the best tradeoff between minimizing $w$ and the loss due to misclassification, and the resulting $w$ may not be the best for achieving optimal classification. Upvotes: 0
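username_1's scaling argument can be checked numerically: on a linearly separable toy problem, shrinking a separating $w^\*$ by a factor $k$ keeps the hinge-at-zero term at $0$ while the regularizer shrinks, so $L\_2$ is driven to the trivial solution $w=0$. A small NumPy sketch (the toy data and $w^\*$ are hand-picked assumptions, not from the question):

```python
import numpy as np

# Linearly separable 1-D toy data: y = sign(x)
X = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([-1.0, -1.0, 1.0, 1.0])
w_star = 1.0          # any w > 0 separates this data perfectly
C = 1e6               # "very large" C

def L2(w):
    # 0.5*||w||^2 + C * sum_i max(0, -y_i * w * x_i)
    hinge_at_zero = np.maximum(0.0, -y * (w * X)).sum()
    return 0.5 * w**2 + C * hinge_at_zero

# Scaling w* down never causes a misclassification (the hinge term stays 0),
# but it monotonically shrinks the regularizer, so the loss is minimized at w=0.
for k in [1.0, 0.1, 0.01, 0.001]:
    print(k, L2(k * w_star))
```

Note that `L2(0.0)` is exactly `0.0`, the global minimum, even though $w=0$ classifies nothing; this is the trivial solution the answer describes.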
2021/01/10
<issue_start>username_0: I am wondering what the parameter $y$ in the function $g(y,\mu,\sigma)=\frac{1}{(2\pi)^{1/2}\sigma}e^{-(y-\mu)^2/(2\sigma^2)}$ stands for in Section 6 (page 14) of the [paper](https://link.springer.com/content/pdf/10.1007/BF00992696.pdf) introducing the REINFORCE family of algorithms. Drawing an analogy to Equation 4 of the same paper, I would guess that it refers to the outcome (i.e. *sample*) of sampling from a probability distribution parameterized by the parameters $\mu$ and $\sigma$. However, I am not sure whether that is correct or not.<issue_comment>username_1: The paper states: "To simplify notation, we focus on one single unit and omit the usual unit index subscript throughout." So they are simply removing the $i$-th index from the equation for simplicity. So $g$ is a function of a given instance $y$ and the parameters $\mu$ and $\sigma$. Upvotes: 0 <issue_comment>username_2: If you take a look at the Wikipedia page on the [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution), you will see the definition of the Gaussian density $$ {\displaystyle f(x)={\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {1}{2}}\left({\frac {x-\mu }{\sigma }}\right)^{2}}} \label{1}\tag{1} $$ and you will see that the $y$ in your formula corresponds to the $x$ in equation \ref{1}. I've seen this notation in the context of computer vision and image processing, where the Gaussian kernel is used to blur images. So, as pointed out by someone in a comment, $y$ should indeed be the point where you evaluate the density. Maybe the confusing part is that all parameters are treated equally in terms of their purpose, while $\mu$ and $\sigma$ are clearly the parameters that define the specific density, so they are not the inputs to the specific density. After having read the relevant section of the paper, I now understand why you're confused.
The author refers to $y$ as the *output* (not yet sure why: maybe it's the output of another unit that feeds this Gaussian unit?), but I think that this explanation still applies. The output of the Gaussian density $g$ is not $y$, but the density that corresponds to $y$. In fact, in appendix $B$ of the paper, the author says that $Y$ is the [**support**](https://planetmath.org/supportoffunction) of $g$ and $y$ is an element of $Y$. Upvotes: 3 [selected_answer]
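To make the accepted answer's point concrete: $y$ is the point at which the density is evaluated, and the output of $g$ is the density assigned to $y$, not $y$ itself. A short pure-Python sketch (the function name mirrors the paper's $g$; the sample values are arbitrary):

```python
import math

def g(y, mu, sigma):
    """Gaussian density from the paper, evaluated at the sample y."""
    return math.exp(-((y - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# The density at the mode of a standard normal is 1/sqrt(2*pi) ~ 0.3989:
print(g(0.0, 0.0, 1.0))
# One sigma away from the mean, the density is smaller:
print(g(1.0, 0.0, 1.0))
```

In the REINFORCE setting, $y$ would be the action actually sampled from the Gaussian unit, and $g(y,\mu,\sigma)$ the probability density the unit assigned to that action.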
2021/01/11
<issue_start>username_0: There are proofs of the universal approximation theorem with just 1 hidden layer. The proof goes like this: 1. Create a "bump" function using 2 neurons. 2. Create (infinitely) many of these step functions with different angles in order to create a tower-like shape. 3. Decrease the step/radius to a very small value in order to approximate a cylinder. **This is what I'm not convinced of.** 4. Using these cylinders one can approximate any shape. (At this point it's basically just a packing problem like [this](https://upload.wikimedia.org/wikipedia/commons/thumb/e/ee/15_circles_in_a_square.svg/1024px-15_circles_in_a_square.svg.png).) In [this video, minute 42](http://www.youtube.com/watch?v=lkha188L4Gs&t=42m0s), the lecturer says > > In the limit that's going to be a perfect cylinder. If the cylinder is small enough. It's gonna be a perfect cylinder. Right? I have control over the radius. > > > Here are the slides. [![enter image description here](https://i.stack.imgur.com/HPvXo.png)](https://i.stack.imgur.com/HPvXo.png) [Here is a pdf version](https://www.mathematik.uni-wuerzburg.de/fileadmin/10040900/2019/Seminar__Artificial_Neural_Network__24_9__.pdf) from another university, so you do not have to watch the video. **Why am I not convinced?** I created a program to plot this, and even if I decrease the radius by orders of magnitude it still has the same shape. Let's start with a simple tower of radius 0.1: [![enter image description here](https://i.stack.imgur.com/mT75p.png)](https://i.stack.imgur.com/mT75p.png) Now let's decrease the radius to 0.01: [![enter image description here](https://i.stack.imgur.com/Rq73M.png)](https://i.stack.imgur.com/Rq73M.png) Now, you might think that it gets close to a cylinder, but it only looks like it is approximating a perfect cylinder because of the zoomed-out effect.
Let's zoom in: [![enter image description here](https://i.stack.imgur.com/Oxjtq.png)](https://i.stack.imgur.com/Oxjtq.png) Let's decrease the radius to 0.0000001. [![enter image description here](https://i.stack.imgur.com/T7YcQ.png)](https://i.stack.imgur.com/T7YcQ.png) Still not a perfect cylinder. In fact, the "quality" of the cylinder is the same. Python code to reproduce (requires NumPy and matplotlib): <https://pastebin.com/CMXFXvNj>. So my questions are: **Q1 Is it true that we can get a perfect cylinder solely by decreasing the radius of the tower to 0?** **Q2 If this is true, why is there no difference when I plot it with different radii (0.1 vs. 1e-7)?** Both towers have the same shape. Clarification: what do I mean by "same shape"? Let's say we calculate the volume of an actual cylinder (Vc) with the same radius and height as our tower and divide it by the volume of the tower (Vt). Vc = volume of the cylinder, Vt = volume of the tower, ratio(r) = Vc/Vt. What these documents/lectures claim is that the ratio of these 2 volumes depends on the radius, but in my view it's just constant. So what they are saying is that: $\lim\_{r \to 0} \text{ratio}(r) = 1$. But my experiments show that: $\lim\_{r \to 0} \text{ratio}(r) = \text{const}$, i.e. it doesn't depend on the radius at all. **Q3 Preface** An objection I got multiple times, once from [Dutta](https://ai.stackexchange.com/users/9947/duttaa) and once from [D.W](https://ai.stackexchange.com/users/1794/d-w), is that just decreasing the radius and plotting it isn't mathematically rigorous. So let's assume that in the limit of r=0 it's really a perfect cylinder. One possible explanation for this would be that the limit is a special case and one can't approximate towards it. But if that is true, this would imply that there is no use for it, since it's impossible to have a radius of exactly zero. It would only be useful if we could get gradually closer to a perfect cylinder by decreasing the radius.
**Q3 So why should we even care about this then?** **Further Clarifications** The original universal approximation theorem proof for single hidden layer neural networks was done by <NAME>. Then I think people tried to make some visual explanations for it. **I am NOT questioning the paper!** But I am questioning the visual explanation given in the linked lecture/pdf (made by other people)<issue_comment>username_1: The more I think about it, the more convinced I am that the visual explanation from the linked lecture is wrong. But the good news is that there are still some ways to get close to the cylinder, not before the activation of the last neuron but afterwards. I haven't tried it with sigmoid, but I tried with ReLU for now. We can cut the tower off at the very top (thanks to ReLU and a bias). The closer to the top we cut it, the more it will look like a cylinder. We can control the height of the tower with the weights. First in 2d: [![enter image description here](https://i.stack.imgur.com/VRGzb.png)](https://i.stack.imgur.com/VRGzb.png) Unfortunately, the closer we put these towers together, the more they will start to influence each other. [![enter image description here](https://i.stack.imgur.com/FyRwL.png)](https://i.stack.imgur.com/FyRwL.png) But we can counter that with a negative tower between them.
Once you have the "epsilon-perfect" circles/cylinders, then you make them smaller and smaller, thus needing more and more copies of the "1-circle" setup. My understanding is that this proof has those two numbers going to infinity: neurons-to-approximate-cylinder, and number-of-cylinders. You took into account the latter, but not the former. Upvotes: 0
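The two-sigmoid "bump" construction from step 1 of the proof can be sketched numerically (the function and parameter names below are illustrative, not from the lecture). A bump is the difference of two shifted sigmoids, and its edge sharpness is set by the weight (steepness), not by the radius — which is consistent with the question's observation that shrinking the radius alone does not sharpen the tower:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bump(x, center=0.0, radius=0.1, steepness=200.0):
    # Difference of two shifted sigmoids: ~1 inside [center - radius, center + radius], ~0 outside.
    left = sigmoid(steepness * (x - (center - radius)))
    right = sigmoid(steepness * (x - (center + radius)))
    return left - right

x = np.linspace(-1.0, 1.0, 2001)
b = bump(x)
print("value at center:", b[1000])   # close to 1 for radius=0.1
print("value far outside:", b[-1])   # close to 0
```

Note that once the radius drops below the transition width set by the steepness (e.g. `radius=0.001` with `steepness=200`), the bump never even reaches 1 — shrinking the radius at fixed weights makes the approximation worse, not better.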
2021/01/13
721
2,744
<issue_start>username_0: Let's say I have three texts: 1. "make a heading that says hello word" 2. "make a heading of hello world" 3. "create heading consist of hello world" How can I fetch the group of words that "heading" refers to, i.e. "hello world" in this case, using AI? Which AI frameworks or libraries can do that? In all examples, "heading" points to "hello world" (which I am referring to as a group of words). So basically I want the words that will be part of the heading; in other words, there is a relationship between them. Another example I can give is "I am watching Breaking Bad": there is a relationship between "watching" and "Breaking Bad", and I want to extract what you are watching. What's the best approach? Do I have to train a model for that, or are there some other techniques that can get it done?
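Before reaching for a trained model, the question's three examples can be handled by a crude rule-based baseline (the trigger phrases below are assumptions covering only these examples, not a general solution; a learned approach such as dependency parsing or sequence labeling would generalize further):

```python
import re

# Hypothetical pattern-based baseline: each alternative is one of the
# "heading ..." phrasings from the question's examples.
HEADING_PATTERN = re.compile(r"heading (?:that says|consist(?:ing)? of|of) (.+)")

def extract_heading_text(sentence):
    # Return the word group that "heading" refers to, or None if no pattern matches.
    m = HEADING_PATTERN.search(sentence)
    return m.group(1) if m else None

print(extract_heading_text("make a heading of hello world"))
```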
2021/01/14
491
1,959
<issue_start>username_0: I'm new to deep learning. I wanted to know: do we use pre-processing in deep learning? Or it is only used in machine learning. I searched for it and its methods on the internet, but I didn't find a suitable answer.<issue_comment>username_1: Yes, sure, [data pre-processing](https://ch.mathworks.com/help/deeplearning/deep-learning-data-management-and-preprocessing.html) is also done in deep learning. For example, we often normalize (or scale) the inputs to neural networks. If the inputs are images, we often resize them so that they all have the same dimensions. Of course, the pre-processing step that you apply depends on your data, neural network, and task. [Here](https://github.com/tensorflow/models/blob/master/official/nlp/transformer/data_pipeline.py) or [here](https://github.com/tensorflow/models/blob/master/official/vision/image_classification/mnist_main.py) are two examples of implementations that perform a pre-processing step (normalization in the second case). You can find more explanations and examples [here](https://ch.mathworks.com/help/deeplearning/deep-learning-data-management-and-preprocessing.html) and probably [here](https://github.com/tensorflow/models/tree/master/official) too. Upvotes: 1 <issue_comment>username_2: Adding to username_1's solution, the less you have to normalize/balance/preprocess/augment, the better, e.g. because then you know for sure that the accuracy is the achievement of the model rather than data combing. For example, if you can achieve the same accuracy using two approaches (e.g. with the image dataset): 1. for each image, subtract global mean, divide by global standard deviation 2. 1)+random flips, random crops, color jitter, etc, then 1), if you can achieve a comparable accuracy, is a better solution, as the model is more general. The same refers to the balancing of the data - if you can train a good model without it, it's an additional strength. Upvotes: 0
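The normalization mentioned in the first answer can be sketched with NumPy (a minimal illustration, not taken from the linked implementations): pixel scaling to [0, 1] and per-feature standardization are two of the most common pre-processing steps for neural network inputs.

```python
import numpy as np

def standardize(x, eps=1e-8):
    # Per-feature zero-mean / unit-variance scaling, a common pre-processing step.
    mu = x.mean(axis=0)
    sigma = x.std(axis=0)
    return (x - mu) / (sigma + eps)

images = np.array([[0.0, 255.0], [128.0, 0.0], [255.0, 128.0]])
scaled = images / 255.0   # pixel scaling to [0, 1], as often done for image inputs
z = standardize(images)
print(scaled.max(), z.mean(axis=0))
```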
2021/01/14
2,002
7,625
<issue_start>username_0: Say I have a machine learning model trained on a laptop and I then want to embed/deploy the model on a microcontroller. How can I do this? I know that TensorflowLite Micro generates a C header to be added to the project and then embedded, but every example I read shows how it is done with neural networks, which seems legit as TensorFlow is primarily used for deep learning. But how can I do the same with any type of model, like the ones in scikit-learn? So, I'm not interested in necessarily doing this with TensorflowLite Micro, but I'm interested in the general approach to solve this problem.<issue_comment>username_1: There are a few possible approaches to deploying an ML model to a microcontroller. The main limiting factor to deployment on microcontrollers is that ML models are usually a representation of a set of *parameters* that are intended to be used as input to a *prediction algorithm* alongside a new datapoint. Most such models assume the presence of an accompanying *library* that implements the algorithm in question. However, a microcontroller may use an exotic chip architecture, or have very severe or unusual resource constraints that prevent these standard libraries from being deployed easily. Presumably you will already have some way to get input into your microcontroller and to program it in order to call some function that you can write. If not, you will need to first figure out how to do that, and the right methods depend on your microcontroller. A common approach is to write assembly code or code in a very limited subset of C or another language. An alternative is to find a distribution of an interpreter for another language (e.g. Java, Python) that has been compiled to work on your chip. Either way, you will need some way to program the chip. Presuming you can program the chip, you have two fundamental challenges in deploying the model: 1.
Most models are trained with very wide floating point numbers for their parameters. For example, 128-bit numbers may be used. On a standard computing environment, the CPU or GPU will be equipped to perform operations on wide datatypes efficiently. On a microcontroller, you may be limited to 8-bit or 16-bit integers. To work with your model parameters in an environment like this, you will need to either make the parameters smaller (usually by rounding them to fit in a much smaller numeric format, a process called "quantization"), or by finding or writing software that can simulate the operations you want (probably addition and multiplication) on a large datatype that is represented as a collection of smaller datatypes. The first approach may make the model perform poorly. The second may make model prediction very slow. 2. You need an implementation of the algorithm. Some algorithms like linear regression, linear discriminant analysis, or even decision trees are extremely easy to implement prediction for, and may require only addition, multiplication, and/or comparison. You might be able to write these yourself in a simple subset of C, or even in assembly (for example, prediction with linear regression should be just a simple loop). Other algorithms, like deep neural networks, may contain more complex operations, and may contain many such operations performed in complicated sequences. For these, you generally will need to find an distribution of a library that implements the algorithms, or compile one yourself. Compiling one yourself will require setting up a build toolchain for your specific microcontroller, and can be quite involved. Upvotes: 3 <issue_comment>username_2: If the library running the model can be compiled for your microcontroller, then you can run your model on that microcontroller. 
If you train using one library and deploy using another library, you can possibly convert your model to that library: [ONNX](https://en.wikipedia.org/wiki/Open_Neural_Network_Exchange). Some library links on [Edge Computing](https://en.wikipedia.org/wiki/Edge_computing) in [ML](https://en.wikipedia.org/wiki/Machine_learning): * [Microcontroller support for Tensorflow Lite](https://www.tensorflow.org/lite/microcontrollers?hl=en) * [Edge ML](https://github.com/Microsoft/EdgeML) * [PyTorch](https://github.com/pytorch/pytorch) supports android * [scikit-learn](https://github.com/scikit-learn/scikit-learn) uses only python libraries. [ulab](https://github.com/v923z/micropython-ulab) partly replaces numpy for [micropython](https://github.com/micropython/micropython). To speed things up, specific hardware is used: GPUs of course, but also [Tensor Processing Units](https://en.wikipedia.org/wiki/Tensor_Processing_Unit). One can manually transfer a trained NN model to C code ([paper](https://arxiv.org/pdf/2007.01348.pdf)), but there are also compilers: * [XLA](https://www.tensorflow.org/xla?hl=en) of tensorflow * [Glow](https://github.com/pytorch/glow) of pytorch * [NNCG](https://github.com/iml130/nncg) XLA and Glow support x86-64 and ARM64. The NNCG approach generates C code and thus should be more readily portable to general MCUs ([paper](https://arxiv.org/pdf/2001.05572.pdf)).
Further links: [edge-ai](https://github.com/crespum/edge-ai), [Stack Overflow question](https://stackoverflow.com/questions/36720498/convert-keras-model-to-c) Upvotes: 2 <issue_comment>username_3: I found: * For scikit-learn like models: [MicroML](https://github.com/eloquentarduino/micromlgen), [Micro-LM](https://github.com/Edge-Learning-Machine/Micro-LM), [Micro Learn](https://github.com/adarsh1001/micro-learn), [sklearn-porter](https://github.com/nok/sklearn-porter), [emlearn](https://github.com/emlearn/emlearn) * For deep learning models: [tensorflow Lite Micro](https://www.tensorflow.org/lite/microcontrollers), [X-CUBE-AI](https://www.st.com/en/embedded-software/x-cube-ai.html), [Glow](https://medium.com/pytorch/glow-compiler-optimizes-neural-networks-for-low-power-nxp-mcus-e095abe14942), [NNoM](https://github.com/majianjia/nnom) * Both: [EdgeML](https://github.com/Microsoft/EdgeML), [ELL](https://github.com/Microsoft/EdgeML) These seem to partly fit my needs. But I am surprised that I cannot find something more general that either converts Python to C or to an object file with ML support (to be used in C projects). Indeed, once trained, ML algorithms are "just" a bunch of additions/multiplications/comparisons. It would help to be able to convert a scikit-learn pipeline, for instance, as ML projects are rarely composed of only a single ML model taking raw data. But it seems that EdgeAI/TinyML is mainly focused on compiling deep learning models when it comes to deploying ML models on "bare-metal" microcontrollers. Thanks for your answers btw, it helps. Upvotes: 2 <issue_comment>username_4: One can use a simpler approach with the [deepC compiler](https://github.com/ai-techsystems/deepC/) and convert the exported onnx model to c++.
Check out a simple example at [deepC compiler sample test](https://github.com/ai-techsystems/deepC/tree/master/test/compiler/mnist). Compile the onnx model for your target machine. Check out [mnist.ir](https://github.com/ai-techsystems/deepC/blob/master/test/compiler/mnist/mnist.ir) **Step 1:** Generate intermediate code ``` % onnx2cpp mnist.onnx ``` **Step 2:** Optimize and compile ``` % g++ -O3 mnist.cpp -I ../../../include/ -isystem ../../../packages/eigen-eigen-323c052e1731/ -o mnist.exe ``` **Step 3:** Test run ``` % ./mnist.exe ``` Here is a [link to a YouTube video](https://www.youtube.com/watch?v=tzYMq3PJwdw) for elaborate instructions. For details, read more at <https://link.medium.com/IhbcJzi4jgb> Upvotes: 1
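The quantization idea from point 1 of the first answer can be sketched in plain Python (the function and the example numbers are illustrative; real toolchains such as TensorFlow Lite use more elaborate schemes): round float weights to int8 with a single scale factor, then dequantize at prediction time.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric linear quantization: one float scale, weights stored as int8.
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.array([0.25, -1.3, 0.8, 0.05], dtype=np.float32)   # example float weights
x = np.array([1.0, 2.0, -0.5, 4.0], dtype=np.float32)     # example input

q, scale = quantize_int8(w)
y_float = float(w @ x)                              # full-precision linear prediction
y_quant = scale * float(q.astype(np.float32) @ x)   # prediction from int8 weights
print(y_float, y_quant)
```

On an actual MCU the int8 weights would live in flash and the dot product would run in integer arithmetic; the sketch only shows that the quantized prediction stays close to the float one.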
2021/01/22
1,202
4,132
<issue_start>username_0: **Q-learning** uses a table to store all state-action pairs. Q-learning is a model-free RL algorithm, so how can there be one called **Deep Q-learning**, since *deep* means using a DNN; or maybe the state-action table (Q-table) is still there, but the DNN is only for input reception (e.g. turning images into vectors)? Deep Q-network seems to be only the DNN part of the Deep Q-learning program, and Q-network seems to be short for Deep Q-network. **Q-learning**, **Deep Q-learning**, and **Deep Q-network**, what are the differences? Could there be a comparison table between these 3 terms?<issue_comment>username_1: In Q-learning (and in general value-based reinforcement learning) we are typically interested in learning a Q-function, $Q(s, a)$. This is defined as $$Q(s, a) = \mathbb{E}\_\pi\left[ G\_t | S\_t = s, A\_t = a \right]\;.$$ For tabular Q-learning, where you have a finite state and action space, you can maintain a lookup table that holds your current estimate of the Q-value. Note that in practice even the spaces being finite might not be enough to *not* use DQN: if e.g. your state space contains a large number, say $10^{10000}$, of states, then it might not be manageable to maintain a separate Q-function for each state-action pair. When you have an infinite state space (and/or action space) then it becomes impossible to use a table, and so you need to use function approximation to generalise across states. This is typically done using a deep neural network due to their expressive power. As a technical aside, the Q-networks don't usually take state *and* action as input, but take in a representation of the state (e.g. a $d$-dimensional vector, or an image) and output a real-valued vector of size $|\mathcal{A}|$, where $\mathcal{A}$ is the action space. Now, it seems in your question that you're confused as to why you use a model (the neural network) when Q-learning is, as you rightly say, model-free.
The answer here is that when we talk about Reinforcement Learning algorithms being model-free, we are not talking about how their value functions or policy are parameterised; we are actually talking about whether the algorithms use a model of the transition dynamics to help with their learning. That is, a model-free algorithm doesn't use any knowledge about $p(s' | s, a)$, whereas model-based methods look to use this transition function - either because it is known exactly, such as in Atari environments, or because it needs to be approximated - to perform planning with the dynamics. Upvotes: 4 [selected_answer]<issue_comment>username_2: Here is a table that attempts to *systematically* show the differences between tabular Q-learning (TQL), deep Q-learning (DQL), and deep Q-network (DQN).

| | Tabular Q-learning (TQL) | Deep Q-learning (DQL) | Deep Q-network (DQN) |
| --- | --- | --- | --- |
| Is it an RL algorithm? | Yes | Yes | No (unless you use DQN to refer to DQL, which is done often!) |
| Does it use neural networks? | No. It uses a table. | Yes | No. DQN is the neural network. |
| Is it a model? | No | No | [Yes](https://ai.stackexchange.com/a/25460/2444) ([but usually not in the RL sense](https://ai.stackexchange.com/q/4456/2444)) |
| Can it deal with continuous state spaces? | No (unless you discretize them) | [Yes](https://ai.stackexchange.com/a/12257/2444) | Yes (in the sense that it can get real-valued inputs for the states) |
| Can it deal with continuous action spaces? | [Yes](https://ai.stackexchange.com/a/12257/2444) (but maybe not a good idea) | [Yes](https://ai.stackexchange.com/a/12257/2444) (but maybe not a good idea) | Yes (but only in the sense that it can produce real-valued outputs for actions). |
| Does it converge? | [Yes](https://ai.stackexchange.com/q/11679/2444) | [Not necessarily](https://ai.stackexchange.com/q/11679/2444) | [Not necessarily](https://ai.stackexchange.com/q/11679/2444) |
| Is it an online learning algorithm? | Yes | No, if you use [experience replay](https://ai.stackexchange.com/q/6579/2444) | [No, but it can be used in an online learning setting](https://ai.stackexchange.com/q/10839/2444) |

Upvotes: 4
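The tabular case in the first column can be made concrete with a minimal sketch (the toy states and action names are made up for illustration): the whole algorithm is one update rule applied to a lookup table.

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    # One tabular Q-learning step:
    # Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

actions = ['left', 'right']
Q = {(s, a): 0.0 for s in [0, 1] for a in actions}   # the "table" in TQL
q_update(Q, s=0, a='right', r=1.0, s_next=1, actions=actions)
print(Q[(0, 'right')])   # 0.1 * (1.0 + 0.9 * 0 - 0) = 0.1
```

Deep Q-learning replaces the dictionary `Q` with a neural network (the DQN) that maps a state representation to one value per action, but the update target is the same.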
2021/01/24
1,677
5,492
<issue_start>username_0: I am learning to program neural networks and others, and I would like to know how I can get the numbers that are in an image. For example, if I pass an image that has 123 written, I want my model to tell me that 123 is written there. I have tried to use `PyTesseract`, but it is not very precise, and I would like to do it with a neural network. My current code is quite simple; it recognizes the digits of the `mnist` dataset such that:

```
import tensorflow as tf
from tensorflow.keras import Sequential, optimizers
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D
import matplotlib.pyplot as plt

mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
print('train_images.shape:', train_images.shape)
print('test_images.shape:', test_images.shape)
plt.imshow(train_images[0])

train_images = train_images.reshape((60000, 28, 28, 1))
test_images = test_images.reshape((10000, 28, 28, 1))
train_images = train_images.astype('float32') / 255
test_images = test_images.astype('float32') / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

model = Sequential()
model.add(Conv2D(32, (5, 5), activation = 'relu', input_shape = (28, 28, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (5, 5), activation = 'relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(10, activation = 'softmax'))
model.summary()

model.compile(loss = 'categorical_crossentropy', optimizer = 'sgd', metrics = ['accuracy'])
model.fit(train_images, train_labels, batch_size = 100, epochs = 5, verbose = 1)
test_loss, test_accuracy = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_accuracy)
```

but I would need to know how I can pass an image with a sequence of digits to it so that it recognizes the digits in question. Does anyone know how I could do it?
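One simple way to reuse a single-digit MNIST model on a multi-digit image - assuming the digits are non-overlapping and evenly spaced, which is a strong assumption - is to cut the wide image into 28x28 crops and classify each crop separately. The helper below is hypothetical, not part of Keras:

```python
import numpy as np

def split_digits(img, digit_width=28):
    # Cut a (28, 28*k) image into k separate 28x28 crops, one per digit.
    h, w = img.shape
    return [img[:, i:i + digit_width] for i in range(0, w, digit_width)]

# Example with a fake 3-digit image; with a trained `model` you would then run
# model.predict(np.stack(crops).reshape(-1, 28, 28, 1)) and argmax each row.
fake = np.zeros((28, 84), dtype='float32')
crops = split_digits(fake)
print(len(crops), crops[0].shape)
```

For real photos, digit positions vary, so a segmentation step (e.g. connected components or a sliding window) would replace the fixed-width cut.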
2021/01/24
1,128
3,893
<issue_start>username_0: If I were to make a neural network that predicts the value of e.g. Bitcoin tomorrow based on the chart of the last month, would that work? Of course, 100% accuracy cannot be reached, but a success rate over 50% on determining if I should buy or sell Bitcoin could be very profitable. Have there been any attempts to create such neural networks so far?
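"Based on the chart of the last month" translates into a supervised dataset of sliding windows. A minimal sketch (the window size and names are illustrative) of building the (inputs, targets) pairs such a network would be trained on:

```python
import numpy as np

def make_windows(prices, window=30):
    # Each row of X is the last `window` prices; y is the price that followed.
    X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
    y = np.array(prices[window:])
    return X, y

prices = list(range(40))   # stand-in for 40 days of closing prices
X, y = make_windows(prices, window=30)
print(X.shape, y[:3])
```

Whether any model can extract a reliable signal from such windows is exactly the open question; the sketch only shows how the data would be framed.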
2021/01/25
1,581
6,430
<issue_start>username_0: There are six parameters from an LSTM layer for regularization, if I am correct. To deal with overfitting, I would start with 1. reducing the layers 2. reducing the hidden units 3. Applying dropout or regularizers. There are `kernel_regularizer`, `recurrent_regularizer`, `bias_regularizer`, `activity_regularizer`, `dropout` and `recurrent_dropout`. They have their definitions on the Keras website, but can anyone share more experiences on how to reduce overfitting? And how are these parameters used? For example, which parameters are most frequently used and what kind of value should be input?<issue_comment>username_1: Regularization tries to discourage complex information from being learned, so we want to prevent the model from simply memorizing the training data. We don't want it to learn very specific pinpoints of the training data that don't generalize well to test data. Dropout: the idea of dropout is that during training we randomly set some of the activations of the hidden neurons to zero with some probability, say 0.5. This idea is extremely powerful because it allows the network to lower its capacity. It also makes it such that the network can't build memorization channels through the network where it tries to just remember the data, because on every iteration 50% of the activations are going to be wiped out, so it's going to be forced not only to generalize better but also to have multiple channels through the network and build a more robust representation of its prediction. We repeat this on every iteration: on the first iteration we drop out one random 50% of the nodes, on the next iteration we can drop out a different randomly sampled 50%, which may include some of the previously sampled nodes as well, and this will allow the network to generalize better to new test data.
Early stopping: when the network is improving its performance during training, there comes a point where the training error starts to diverge from the testing error. At some point the network is going to start to do better on its training data than on its testing data. What this means is basically that the network is starting to memorize some of the training data, and that's what you don't want. So what we can do is identify this inflection point where the test error starts to increase and diverge from the training error, so we can stop the network early and make sure that our test error is as low as possible. Upvotes: -1 <issue_comment>username_2: One LSTM layer should be enough unless you have lots of data. The same thing goes for the number of nodes in the layer. Start small first, so 5 to 10 nodes, and increment it until the performance is reasonable. Once you have a model working, you can apply regularization if you think it will improve performance by reducing overfitting of the training data. You can check this by looking at the learning curves or comparing the error on the validation and test sets. In my experiments I've used the L1 and L2 regularizers along with dropout. These can all be mixed together; in fact, using both L1 and L2 at the same time is called the ElasticNet. I tend to apply the regularizers via the `kernel_regularizer` because this affects the weights for the inputs. Basically feature selection. The value for the L1 and L2 can start with the default (for TensorFlow) of 0.01; change it as you see fit or read what other research papers have done. Dropout can start at 0.1, then increment it until there is no performance gain. This is basically a percentage, so 0.1 would remove about 10% of your nodes. Finding the best regularizer is the same as any other hyperparameter optimization, which is mostly trial and error. Upvotes: 1 <issue_comment>username_3: As for what is mentioned above, that is probably in the context of LSTM networks.
I would suggest using the Keras Tuner Bayesian optimizer and making the L1 or L2 factor a hyperparameter of the search space. This way you find the optimal values, and it's a great way to hypertune. Just keep in mind: the greater the range of the hyperparameters, the more computing power you need.

```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
import keras_tuner as kt

def model1(hp):
    model = Sequential()
    model.add(keras.layers.LSTM(units=hp.Int('units', min_value=40, max_value=800, step=20),
                                dropout=hp.Float('dropout', min_value=0.15, max_value=0.99, step=0.05),
                                recurrent_dropout=hp.Float('recurrent_dropout', min_value=0.05, max_value=0.99, step=0.05),
                                activation='relu', return_sequences=True,
                                input_shape=(30, 1)))
    model.add(keras.layers.LSTM(units=hp.Int('units', min_value=40, max_value=800, step=20),
                                dropout=hp.Float('dropout', min_value=0.15, max_value=0.99, step=0.05),
                                activation='relu', return_sequences=True))
    model.add(keras.layers.LSTM(units=hp.Int('units', min_value=40, max_value=800, step=20),
                                activation='relu'))
    model.add(keras.layers.Dense(1))
    model.compile(loss='mean_squared_error',
                  optimizer=tf.keras.optimizers.Adam(
                      hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4, 1e-7, 1e-10])))
    return model

bayesian_opt_tuner = kt.BayesianOptimization(
    model1,
    objective='val_loss',
    max_trials=200,
    executions_per_trial=1,
    project_name='timeseries_bayes_opt_POC',
    overwrite=True)

# X_train and y_train are assumed to be defined earlier
bayesian_opt_tuner.search(x=X_train, y=y_train,
                          epochs=300,
                          validation_split=0.95,
                          validation_steps=30,
                          steps_per_epoch=30,
                          callbacks=[tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                                                      patience=4,
                                                                      verbose=1,
                                                                      restore_best_weights=True),
                                     tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss',
                                                                          factor=0.1,
                                                                          patience=2,
                                                                          verbose=1,
                                                                          min_delta=1e-5,
                                                                          mode='min')])
```

This is where the magic happens. Something I composed myself. If interested holla. Upvotes: 1
2021/01/26
858
3,597
<issue_start>username_0: **Problem description:** Suppose we have an environment where the reward at time step $t$ depends not only on the current action, but also on the previous action, in the following way: * if current action == previous action, you get reward = $R(a,s)$ * if current action != previous action, you get reward = $R(a,s) - \text{penalty}$ In this environment, switching actions bears a significant cost. We would like the RL algorithm to learn optimal actions under the constraint that switching actions is costly, i.e. we would like it to stay in the selected action as long as possible. The penalty is significantly higher than an immediate reward, so if we do not take it into account, the model evaluation will have a negative total reward with almost 100% probability, since the agent will be constantly switching and extracting rewards from the environment that are smaller than the cost of switching actions. The action space is small (2 actions: left, right). I'm trying to beat this game with PPO (Proximal Policy Optimization). **Questions** * How might one address this constraint, i.e. explicitly make the agent learn that switching is costly and that it's worth sitting in one action even if immediate rewards are negative? * How can one make the RL algorithm learn that it's not the reward term $R(a\_t|s\_t)$ that is negative, and thus decreasing $Q(a\_t|s\_t)$ and $V(s\_t)$, but the penalty term (taking an action that is different from the previous action at step $t-1$) that is pushing the total reward down?<issue_comment>username_1: The answer to both your concerns is: * Add the previous action choice to the state representation. It is all you need to do. It gives the agent the data it needs to learn the association of negative reward from not matching the previous action.
By making this data part of the state, you re-establish the [Markov property](https://en.wikipedia.org/wiki/Markov_property) in the MDP model of the environment, which you had otherwise lost by making the reward dependent on a variable that was both systematically changing and hidden from the agent. The state in an MDP is often not just the current observations that the environment provides, but can include any relevant knowledge that the agent has. At the extreme, that can include a complete history of all observations and actions taken to date. It is common practice to derive the state as a summary of the recent history of observations and actions taken so far. In your case, all you need to do is concatenate the previous action to the observation, because you know about the constraint and how it affects optimisation. Upvotes: 3 [selected_answer]<issue_comment>username_2: As said, you will have to include previous input state(s) in your training input patterns. Suppose we use straightforward NN backpropagation learning. You would expand the input layer with *additional neurons and weights* connected to the past. The time-window neurons introduce *additional* weights into the first hidden layer. A neural net architecture doing this is called a Time Delay Neural Network (TDNN): <https://stats.stackexchange.com/questions/160070/difference-between-time-delayed-neural-networks-and-recurrent-neural-networks> I used TDNNs in the past for signal processing. My TDNN includes the past of N inputs, supporting M steps in the past. A flexible TDNN can also be configured to connect some hidden-layer past output to the next hidden layer. <https://neuron.eng.wayne.edu/tarek/MITbook/chap5/5_4.html> <https://www.mathworks.com/help/deeplearning/ref/timedelaynet.html> Upvotes: 0
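The suggestion to concatenate the previous action to the observation can be sketched as a thin environment wrapper. The `reset`/`step` interface and the toy environment below are assumptions for illustration, not any particular RL library's API:

```python
class PrevActionWrapper:
    """Appends a one-hot encoding of the previous action to the observation,
    restoring the Markov property when the reward depends on action switches."""
    def __init__(self, env, num_actions):
        self.env = env
        self.num_actions = num_actions
        self.prev_action = 0

    def _augment(self, obs):
        one_hot = [1.0 if a == self.prev_action else 0.0
                   for a in range(self.num_actions)]
        return list(obs) + one_hot

    def reset(self):
        self.prev_action = 0  # arbitrary convention for the first step
        return self._augment(self.env.reset())

    def step(self, action):
        obs, reward, done = self.env.step(action)
        self.prev_action = action  # remembered for the next observation
        return self._augment(obs), reward, done

# Toy environment with a 1-dimensional observation, just to show the shapes.
class ToyEnv:
    def reset(self):
        return [0.0]
    def step(self, action):
        return [0.0], 1.0, False

env = PrevActionWrapper(ToyEnv(), num_actions=2)
print(env.reset())     # [0.0, 1.0, 0.0]
print(env.step(1)[0])  # [0.0, 0.0, 1.0]
```

The agent then trains on the augmented observations, so it can associate the switch penalty with the mismatch between the one-hot tail and its next action.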
2021/01/28
678
2,795
<issue_start>username_0: I'm learning the basics of RL and I'm struggling to understand the notion of terminal state in MDPs. To ask my question straightforwardly: is there a natural way to define the terminal state from the MDP transition probabilities $p(s',r|s,a)$? If I need to be more restrictive, assume a game setting, for example, chess. My first hypothesis would be to define the terminal state as the state $s\_T$ such that $p(s',r|s\_T,a) = p(s',r|s\_T)$, a state from which the transition is independent of the agent's actions. But that does not seem quite right. First, there is no particular reason why this state should be unique. Second, from this definition, it could also just be an intermittent state of "lag".
2021/01/28
1,059
4,063
<issue_start>username_0: I am studying the state of the art of Reinforcement Learning, and my point is that we see so many applications in the real world using Supervised and Unsupervised learning algorithms in production, but I don't see the same thing with Reinforcement Learning algorithms. What are the biggest barriers to getting RL into production?<issue_comment>username_1: Technical barriers: there are at least these common-sense big barriers: * The **trial-and-error** technique makes the model hard to learn (too many trials are needed), compared to ready-to-use supervised data. * The **number of time-steps** (which usually equals the number of actions of the agent in the trajectory) is large, thus brute-force exploration won't work, as the number of trials needed to find errors is exponential, although negative rewards may help cut the brute-force tree short. * Real-life RL takes an **unlimited number of episodes** (for each episode, a sequence of actions should be learnt), and the incremental training gets harder and harder over time with more explored data, unless some past and no-longer-related data are removed. Just like humans, we forget some of the past to learn more and remember more of the present. The technical barriers are, first of all, barriers to applying RL to business. People can produce supervised data manually rather quickly, so supervised learning is usually the first choice, and few wish to try RL. It is also harder to find human resources: AI engineers with experience in supervised learning are more common and easier to find, while fewer work with RL, so business projects are not carried out easily using RL. However, from my point of view, RL is very promising in the future, as AI entities are now more and more on their own.
Upvotes: 2 <issue_comment>username_2: There is a relatively recent paper that tackles this issue: [Challenges of real-world reinforcement learning](https://arxiv.org/pdf/1904.12901.pdf) (2019) by <NAME> et al., which presents all the challenges that need to be addressed to productionize RL to real world problems, the current approaches/solutions to solve the challenges, and metrics to evaluate them. I will only list them (based on the notes I had taken a few weeks ago). You should read the paper for more details. In any case, for people that are familiar with RL, they will be quite obvious. 1. Batch off-line and off-policy training * One current solution is *importance sampling* 2. Learning on the real system from limited samples (sample inefficiency) * Solutions: MAML, use expert demonstrations to bootstrap the agent, model-based approaches 3. High dimensional continuous state and action spaces * Solutions: AE-DQN, DRRN 4. [Satisfying safety constraints](https://arxiv.org/pdf/1606.06565.pdf) * Solutions: constrained MDP, safe exploration strategies, etc. 5. Partial observability and non-stationarity * Solutions to partial observability: incorporate history in the observation, recurrent neural networks, etc. * Solutions to non-stationarity: domain randomization or system identification 6. Unspecified and multi-objective reward functions * Solutions: CVaR, Distributional DQN 7. [Explainability](https://ai.stackexchange.com/q/14224/2444) 8. Real-time inference 9. System delays (see also [this](https://ai.stackexchange.com/q/25178/2444) and [this](https://ai.stackexchange.com/a/7344/2444) answers) There's also a more recent and related paper [An empirical investigation of the challenges of real-world reinforcement learning](https://arxiv.org/pdf/2003.11881.pdf) (2020) by <NAME> et al, and [here](https://github.com/google-research/realworldrl_suite) you have the associated code with the experiments. 
However, note that RL (in particular, bandits) is already being used to solve at least one real-world problem [[1](https://netflixtechblog.com/artwork-personalization-c589f074ad76), [2](https://azure.microsoft.com/en-us/services/cognitive-services/personalizer/)]. See also [this](https://ai.stackexchange.com/a/24355/2444) answer. Upvotes: 4 [selected_answer]
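As an illustration of the importance-sampling correction listed under the first challenge, here is a one-step (bandit-like) sketch; the two policies and the reward table are invented for the example:

```python
import random
random.seed(0)

# Behaviour policy b collects the data; target policy pi is what we evaluate.
b, pi = [0.5, 0.5], [0.9, 0.1]   # action probabilities over actions {0, 1}
reward = [1.0, 0.0]              # deterministic reward per action (hypothetical)

true_value = sum(pi[a] * reward[a] for a in (0, 1))  # value under pi: 0.9

# Monte Carlo estimate from behaviour-policy samples, reweighted by pi/b.
n, total = 100_000, 0.0
for _ in range(n):
    a = 0 if random.random() < b[0] else 1  # sample an action from b
    total += (pi[a] / b[a]) * reward[a]     # importance weight corrects the mismatch
estimate = total / n

print(round(true_value, 3))                # 0.9
print(abs(estimate - true_value) < 0.02)   # True: off-policy data, on-policy answer
```

The same ratio-of-policy-probabilities idea extends to multi-step trajectories, where the weights are products over the steps (at the cost of much higher variance).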
2021/01/28
1,210
4,360
<issue_start>username_0: I'm trying to improve my evaluation and I saw this [here](https://www.chessprogramming.org/index.php?title=Evaluation&mobileaction=toggle_view_mobile)

```
materialScore = kingWt  * (wK - bK)
              + queenWt * (wQ - bQ)
              + rookWt  * (wR - bR)
              + knightWt* (wN - bN)
              + bishopWt* (wB - bB)
              + pawnWt  * (wP - bP)
```

How do I get the value, let's say `wK`? Do I get the position of the king and score it relative to the board? For example, `wK` is more safe than the `bK`, so let's say `wK - bK = 1 - 0.5`. So, the result will be `90 * (0.5)`. Is this really how it works?
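For what it's worth, in formulas of this shape the symbols `wK`, `bQ`, etc. are usually *piece counts* (number of white kings, black queens, and so on), not positional scores. A numeric sketch with hypothetical centipawn weights and a made-up position:

```python
# Hypothetical piece counts and centipawn weights; wK/bK etc. stand for the
# *number* of each piece type on the board, not a positional evaluation.
weights = {'K': 20000, 'Q': 900, 'R': 500, 'N': 320, 'B': 330, 'P': 100}
white   = {'K': 1, 'Q': 1, 'R': 2, 'N': 2, 'B': 2, 'P': 8}
black   = {'K': 1, 'Q': 1, 'R': 2, 'N': 2, 'B': 1, 'P': 7}  # black is down a bishop and a pawn

material = sum(weights[p] * (white[p] - black[p]) for p in weights)
print(material)  # 430 = 330 (bishop) + 100 (pawn) in white's favour
```

King safety, mobility, pawn structure, etc. are then separate evaluation terms added on top of this material term.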
2021/01/29
2,047
7,808
<issue_start>username_0: Assuming we use an MSE cost function of the form $$ \sum\_s\mu(s)(V\_{\pi}(S\_t)-\hat{V}(S\_t,\theta\_t))^2 = E\_{\mu(s)}[(V\_{\pi}(S\_t)-\hat{V}(S\_t,\theta\_t))^2]$$ stochastic gradient descent is used to approximate the true update algorithm, which looks like this $$\theta\_{t+1} = \theta\_{t} - \frac{\eta\_t}{2}\nabla\_{\theta}(E\_{\mu(s)}[(V\_{\pi}(S\_t)-\hat{V}(S\_t,\theta\_t))^2])$$ by this $$\theta\_{t+1} = \theta\_{t} - \frac{\eta\_t}{2}\nabla\_{\theta}(U\_t-\hat{V}(S\_t,\theta\_t))^2$$ where, for simplicity, $U\_t$ represents an unbiased estimate of the true value function $V\_{\pi}(s\_t)$. This expression is the source of many learning algorithms used in reinforcement learning. One of the conditions for SGD requires that the samples used for updating the parameters be i.i.d. according to the distribution $\mu(s)$. However, in both on-policy and off-policy learning methods, updates at each time-step are based on generated trajectories. Since, along a trajectory, the state $s\_{t+1}$ depends on the state $s\_t$, the samples used to update $\theta\_t$ and $\theta\_{t+1}$ are not independent. Many, if not all, sample-based learning algorithms used in RL rely on SGD, such as the [Gradient Monte Carlo Algorithm](http://incompleteideas.net/book/RLbook2020.pdf#page=224), but I've not really seen anything that mentions these algorithms have the "issue" I describe, so I feel like I'm missing something. More formally, --- My Question: *Does the fact that parameter updates are not I.I.D mean we can't really use stochastic gradient descent AT ALL in learning algorithms, and, if so, why then do these algorithms "work"?* --- As far as I know, this question applies equally to all forms of parameterised function approximation that are used with learning algorithms (tabular functions\*, linear functions and non-linear functions).
But, if anyone knows a special reason why these cases should be treated separately, could they make that clear. \*I understand that for learning algorithms with tabular functions, there exists theory beyond SGD that ensures convergence; however, I'm not entirely sure what this theory is, or whether it makes them exempt, so if anyone knows whether or not it does, could they also make this clear! --- **Edit:** It has been highlighted in the comments that replay buffers have been used to resolve the issue of correlated sampling in cases such as DQN and variants of it. This implies correlated sampling is an issue in these cases. Aside from this, I've not heard of replay buffers being used elsewhere (correct me if I'm wrong), so why are replay buffers needed with this off-policy NN approach but not in other learning algorithms, given that they all suffer from the issue of correlated sampling?<issue_comment>username_1: First I will address the issue of tabular methods. These do not use SGD at all. Although the updates are very similar to an SGD update, there is *no gradient* here and so we are not using SGD. Many tabular methods are proven to converge; for instance, the paper by <NAME> titled "Q-Learning" introduces and proves that Q-learning converges. Also, you include tabular methods as being parameterised function approximators. This is not true: tabular methods maintain an estimate of the value function in a look-up table for each state-action pair, and no function approximator is being used. Now for non-tabular methods, i.e. Deep Reinforcement Learning. Here we are using SGD *only to optimise the parameters of the networks* (assuming of course that we are using NNs as our function approximators, but this is a fair assumption if you read the literature). This is why off-policy methods are typically preferred: they can use a replay buffer, which allows the use of data from any past trajectory.
When using a replay buffer we sample random past experiences, which de-correlates the data and allows the i.i.d. assumption to hold when using SGD. If we were to use an on-policy algorithm, in theory SGD may not converge to any local optimum because we are violating the i.i.d. assumption. However, all this means is that we are not *guaranteed* to converge. For instance, I have run an experiment using REINFORCE (an on-policy learning algorithm) with NNs as function approximators, and it was able to obtain an optimal policy. However, this was for a very simple environment using modestly sized networks, which is likely why the networks could be trained using non-i.i.d. data. A more in-depth question/answer as to why NNs require i.i.d. data can be found [here](https://ai.stackexchange.com/questions/10839/why-exactly-do-neural-networks-require-i-i-d-data?noredirect=1&lq=1). To address your edit, replay buffers are not *required*, but if we can use them, then why would we not? They help to maintain the i.i.d. assumption and so help us reach a local optimum. If we did not use them, there would be no such guarantee of convergence. As an aside, replay buffers are typically used because they give the algorithm greater sample efficiency; this means we can obtain an optimal policy using much less data. If you are wondering why on-policy methods don't use a replay buffer, the answer is because they are on-policy: the actions used in updates must be taken according to the current policy whose value functions we are learning (or whose policy we are optimising in Policy Gradient Methods). This is not the case in off-policy algorithms; e.g. in Q-learning we are learning the value functions of the greedy policy, but we follow a different policy that allows for exploration.
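A replay buffer of the kind referred to here is only a few lines; this is a generic sketch rather than any particular library's API:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience replay: transitions from (correlated) trajectories
    go in; training minibatches come out as uniform random samples, which
    approximately de-correlates them."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # old transitions are evicted
    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))
    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)
    def __len__(self):
        return len(self.buffer)

random.seed(0)
buf = ReplayBuffer(capacity=100)
for t in range(250):                       # simulate 250 sequential transitions
    buf.push(t, t % 2, 1.0, t + 1, False)
print(len(buf))                            # 100: only the most recent kept
batch = buf.sample(4)
print(all(150 <= s < 250 for s, *_ in batch))  # True
```

Each sampled minibatch mixes transitions from many points along the trajectory, which is what breaks the step-to-step correlation when feeding SGD.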
Upvotes: 2 <issue_comment>username_2: As username_1 <NAME> mentioned in his answer, despite correlated sampling, learning algorithms can still converge to the optimal policy/value function. The reason learning algorithms may still produce the correct results despite correlated sampling is another property that does not rely on i.i.d. sampling: the average regret of SGD tends to zero in the limit. A couple of sources I found that highlight this are * Page 26 of the slides available from the computer science department at the University of British Columbia, [Machine Learning and Data mining](https://www.cs.ubc.ca/%7Eschmidtm/Courses/340-F15/L19.pdf) * [The introduction of a paper discussing non-iid stochastic optimisation](http://opt.kyb.tuebingen.mpg.de/papers/opt2011_agarwal.pdf) - <NAME> and <NAME> Minimising average regret implies that SGD will, in the limit, produce the best set of parameters for the samples encountered; however, this does not indicate whether that best set of parameters actually performs well. **Extra** This is in fact a general issue that extends beyond reinforcement learning algorithms. Programs that implement SGD don't typically sample i.i.d.! As mentioned [in an MIT lecture on stochastic gradient descent](https://www.youtube.com/watch?v=k3AiUhwHQ28), at around 40 minutes the lecturer discusses that it is more efficient to sample without replacement from the training set, which makes the samples non-i.i.d., and there is not much theory about this case. --- **caveat** I'm not 100% sure what the implication of minimising regret is, but given that regret is of the form $$\text{regret}(\theta\_1,\ldots,\theta\_T) = \sum\_{t=1}^T\left(L(v\_\pi(s\_t),\hat{v}(s\_t,\theta\_t)) - L(v\_\pi(s\_t),\hat{v}(s\_t,\theta\_\*))\right)$$ I understand the model that minimises regret at time $T$ as the one that has the minimum loss over all training samples encountered up to time $T$.
But what I'm not sure about is how this relates to minimising the cost function; that probably requires another question! Upvotes: -1
2021/02/01
892
3,919
<issue_start>username_0: In Q-learning, all resources I've found seem to say that the algorithm to update the Q-table should start at some initial state, and pick actions (which are sometimes random) to explore the state space. However, wouldn't it be better/faster/more thorough to simply iterate through all possible states? This would ensure that the entire table is updated, instead of just the states we happen to visit. Something like this (for each epoch):

```
for state in range(NUM_STATES):
    for action in range(NUM_ACTIONS):
        next_state, reward = env.step(state, action)
        update_q_table(state, action, next_state, reward)
```

Is this a viable option? The only drawback I can think of is that it wouldn't be efficient for huge state spaces.<issue_comment>username_1: If your algorithm is executed multiple (or enough) times using an outer loop, it would converge to results similar to Q-learning with $\gamma = 0$ (as you don't look at the expected future reward). In this case, the difference is that you would spend as much time exploring each possible (state, action) pair, while Q-learning would spend more time on the pairs which seem more promising; and, as you've said, this wouldn't be efficient for a problem with a huge number of (state, action) pairs. If the algorithm is executed only once, then, even for a problem with a few (state, action) pairs, you need to assume that an action performed in a state will always produce the same result for your method to work. In most cases this isn't true either, because there is some sort of randomness in the reward system or in the action (your agent can fail to make an action), or because the state of your agent is limited to its knowledge and so doesn't represent the world perfectly (and so the consequence of its action can vary, just as if the reward had some randomness). Finally, your algorithm doesn't look at the expected future reward, so it would be equivalent to having $\gamma = 0$.
This could be fixed by adding a new loop updating the table after your current loops (if you execute your algorithm only once), or by adding the expected future reward directly to your Q-table update (if there is an outer loop). So, in conclusion, without the outer loop, your idea would work for a system with few (state, action) pairs, where your agent has perfect and complete knowledge of its world, the reward doesn't vary, and the agent can't fail to accomplish an action. While these kinds of systems do exist, I don't think they are environments where one should use Q-learning (or another form of reinforcement learning), except for educational purposes. With an outer loop, your idea would work if you are willing to spend more time training in order to have a more precise Q-table for the least promising (state, action) pairs. Upvotes: 4 [selected_answer]<issue_comment>username_2: In short, yes, provided that you have a small number of states. In pretty much any real system, the number of states is much higher than you could ever hope to explore exhaustively in any reasonable time. This is why you need to set some sort of exploration/exploitation policy to make sure that you mostly visit promising states while also checking states that might look initially poor but may lead to better states as you explore further. As a few minutes' thought will convince you, determining the exact nature of that exploration/exploitation trade-off is probably the most important aspect of effective Q-learning (and pretty much any other search algorithm, for that matter). Upvotes: 2 <issue_comment>username_3: The issue is trading off reward vs. learning: Q-learning attempts to learn while also doing things which produce reward (basically, it operates suboptimally). I'm not sure if Q-learning is actually any different performance-wise; short-term, Q-learning will (probably) produce more reward, but it will also miss states. Upvotes: -1
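Putting the selected answer's points together, the question's exhaustive sweep plus an outer loop and an expected-future-reward term amounts to synchronous Q-value iteration. A runnable sketch on a small deterministic chain environment (the environment is invented for illustration and assumes the deterministic, fully-known setting the answer describes):

```python
# Toy deterministic chain: states 0..3, state 3 is terminal with reward 1.
# Actions: 0 = left, 1 = right.
NUM_STATES, NUM_ACTIONS, GAMMA = 4, 2, 0.9

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(3, state + 1)
    reward = 1.0 if next_state == 3 else 0.0
    return next_state, reward

Q = [[0.0] * NUM_ACTIONS for _ in range(NUM_STATES)]
for sweep in range(100):                   # outer loop over full sweeps
    for s in range(NUM_STATES - 1):        # skip the terminal state 3
        for a in range(NUM_ACTIONS):
            s2, r = step(s, a)
            # full backup: no learning rate needed since the model is known
            Q[s][a] = r if s2 == 3 else r + GAMMA * max(Q[s2])

policy = [max(range(NUM_ACTIONS), key=lambda a: Q[s][a]) for s in range(3)]
print(policy)  # [1, 1, 1]: always move right, toward the goal
```

Note that this only works because `step` is a known, deterministic model; with a stochastic or unknown environment you are back to sampled updates with a learning rate, which is where ordinary Q-learning comes in.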
2021/02/01
942
4,105
<issue_start>username_0: Why are the weights of a neural net updated only considering the old values of the later layer, not the already updated values? I use [this example](https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/) to explain my problem. When applying the backpropagation chain rule, the weights of the previous layer ($w\_1, w\_2, w\_3, w\_4$) are updated making use of the chain rule: $$\frac{\partial E\_{total}}{\partial w\_1} = \frac{\partial E\_{total}}{\partial out\_{h1}} \* \frac{\partial out\_{h1}}{\partial net\_{h1}}\*\frac{\partial net\_{h1}}{\partial w\_1}$$ He then says: $$\frac{\partial net\_{o1}}{\partial out\_{h1}}=w\_5$$ Although he has already calculated the updated value for $w\_5$, he uses the old value of $w\_5$ to update $w\_1$? Because the updated value of $w\_1$ will have an impact on the outcome together with the updated value of $w\_5$?<issue_comment>username_1: The basic idea of gradient descent is: * Calculate the gradient of some score with respect to parameters that you can control * Take a step in the direction of that gradient that improves the score (subract a multiple of gradient - for gradient descent - if you want to minimise some cost function) The backpropagation using chain rule in neural network layers is part of the first part, calculating the gradient. If you interleaved these steps during a single calculation, by taking an update step before propagating the gradient over all layers, you would not propagate the true gradient, but some interim value. There is no guarantee this value would reflect the true gradient of the affected layer when compared to training data and loss functions. There are some manipulations of the gradient that are acceptable and improve convergence. For instance momentum, and various forms of weighting based on previous gradient calculations. 
However, I am not aware of any successful attempt to perform updates with a partially-calculated gradient and then continue the interrupted gradient calculation over the mix of updated and not-yet-updated parameters. It is also possible to calculate gradients and then only update *some* of the possible parameters. In some cases this is useful; for instance, it is a good choice when performing transfer learning, where only a few layers close to the output are updated. This constrains the network to keep a lot of its trained parameters as-is, which reduces the chances of over-fitting to the smaller dataset that the network is learning from when transferring. Upvotes: 2 <issue_comment>username_2: This is an excellent question, and the literature is generally silent on whether it's an improvement. I'm personally rather curious about this result! There are some practical reasons it's not generally done: **Framework structure** Most ML frameworks separate the calculation of the gradient from the update, because users often want to manipulate the gradient before updating the weights (e.g. Adam, momentum, etc.). The way most frameworks are implemented makes it difficult to do the optimizer calculation inline with the back-propagation. **Batch sharding** Most large networks have the batch calculation sharded over multiple chips. This means that when actually doing the gradient update, the partial sums need to be sent off-chip to be summed and added to the weights. This normally isn't too bad, because the transmission of the partial sums for one weight runs in parallel with the calculation of the gradients for the next weight. But if we update the weights before back-propagating the gradient, then we need to wait for all the partial sums to be received and totaled, and then for the resulting update to be transmitted back to all the chips, before the next step in back-propagation can occur. This puts the network latency on the critical path, which makes everything very slow...
So it's almost certain that it's faster to do the dumb thing with more steps than the smarter thing with fewer steps. (The improvement in the gradient would have to save a huge fraction of steps to be worth it.) Upvotes: 0
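The "compute the whole gradient with the old weights, then update" order described above can be made concrete with a tiny numeric sketch (a hypothetical 1-1-1 linear network with identity activations and made-up numbers; the `w1`/`w5` naming just mirrors the question, not any particular library):

```python
# Forward pass: x -> hidden (w1) -> output (w5), identity activations
x, target = 1.0, 1.0
w1, w5 = 0.5, 0.4
lr = 0.1

h = w1 * x              # hidden activation
out = w5 * h            # network output
d_out = out - target    # dE/d_out for E = 0.5 * (out - target)^2

# Phase 1: back-propagate the WHOLE gradient using the OLD weights
grad_w5 = d_out * h        # dE/dw5
grad_w1 = d_out * w5 * x   # dE/dw1 uses the old w5, not an updated one

# Phase 2: only now apply the gradient-descent updates
w5 -= lr * grad_w5
w1 -= lr * grad_w1
```

If `w5` were overwritten before `grad_w1` is computed, `grad_w1` would no longer be the gradient of the loss that was actually measured on this forward pass.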
2021/02/02
1,388
5,433
<issue_start>username_0: I'm trying to implement Deep Q-Learning for a pet problem having a continuous state space and discretized action space. The algorithm for table-based Q-Learning updates a single entry of the Q table - i.e. a single $Q(s, a)$. However, a neural network outputs an entire row of the table - i.e. the Q-values for every possible action for a given state. So, what should the target output vector be for the network? I've been trying to get it to work with something like the following:

```
q_values = model(state)
action = argmax(q_values)
next_state = env.step(state, action)
next_q_values = model(next_state)
max_next_q = max(next_q_values)
target_q_values = q_values
target_q_values[action] = reward(next_state) + gamma * max_next_q
```

The result is that my model tends to converge on some set of fixed values for every possible action - in other words, I get the same Q-values no matter what the input state is. (My guess is that this is because, since only 1 Q-value is updated, the training is teaching my model that most of its output is already fine.) What should I be using for the target output vector for training? Should I calculate the target Q value for every action, instead of just one?<issue_comment>username_1: As you say, the output of a $Q$ network is typically a value for all actions of the given state. Let us call this output $\mathbf{x} \in \mathbb{R}^{|\mathcal{A}|}$. To train your network using the squared Bellman error you first need to calculate the scalar target $y = r(s, a) + \gamma \max\_a Q(s', a)$. Then, to train the network we take a vector $\mathbf{x'} = \mathbf{x}$ and change the $a$th element of it to be equal to $y$, where $a$ is the action you took in state $s$; call this modified vector $\mathbf{x'}\_a$. We calculate the loss $\mathcal{L}(\mathbf{x}, \mathbf{x'}\_a)$ and back propagate through this to update the parameters of our network.
Note that when we use $Q$ to calculate $y$ we typically use some form of target network; this can be a copy of $Q$ whose parameters are only updated every $i$th update, or a network whose weights are updated using a Polyak average with the main network's weights after every update. Judging by your code it looks as though your action selection is what might be causing you some problems. As far as I can tell you're always acting greedily with respect to your $Q$-function. You should be looking to act $\epsilon$-greedily, i.e. with probability $\epsilon$ take a random action and act greedily otherwise. Typically you start with $\epsilon=1$ and decay it over time down to some small value such as 0.05. Upvotes: 3 [selected_answer]<issue_comment>username_2: There are a couple of ways you can define the architecture of a DQN. The most common way of doing it is by taking in the state and outputting the value function of all possible actions - this leads to a DQN with multiple outputs. The other, less efficient, way involves taking in a state-action pair as input and outputting a single real value - this approach is typically avoided since we need to run the model multiple times to get estimates for different actions. The replay buffer is used to store $(S,A,R,S')$ transitions as encountered using your $\epsilon$-soft policy. We sample one of these transitions from the replay buffer and calculate an estimate of the value function for $(S,A)$, i.e. $\hat Q(S,A,\theta)$, and then we calculate a target as follows. $$target = R + \gamma\max\limits\_{a'}\hat Q(S',a',\theta^-)$$ Assuming you use the first model, you can then use a squared-error loss function, defined as follows, and modify your parameters as a function of that $$L(\theta) = (target-\hat Q(S,A,\theta))^2$$ Assuming for now the target is fixed (I'll explain this in a minute), only $\hat Q(S,A,\theta)$ is a function of $\theta$ in the loss function.
$\hat Q(S,A,\theta)$ corresponds to one output node of your DQN and therefore, as you've already highlighted, when carrying out error backpropagation the parameters are updated such that we make the value of this one node tend to the specified target. This is just how Q-learning works: we use samples generated by the behaviour policy to create $L(\theta)$ and then tweak the parameters to minimise the cost. As we do this for more and more samples, the network hopefully figures out a way that accommodates every sample it's been trained on so far (with more emphasis on the most recent samples). **As to your issue, are you sure you're training on multiple different samples and not just a specific one? It may just be a bug you've overlooked.** --- **Explaining $\theta^-$** I used a slightly different notation, $\theta^-$, for the parameters used to generate the bootstrapped estimate, $\gamma\max\limits\_{a'}\hat Q(S',a',\theta^-)$. $\theta^-$ is only matched to $\theta$ every $n^{th}$ step because we want to keep the target constant as much as possible. The reason for this is that Q-learning does not necessarily converge when using neural networks, partly due to bootstrapping, which can cause a divergence of optimisation because of state generalisation. By using this $\theta^-$ we help prevent things like this from happening. Ultimately the ideas of the replay buffer and the fixed parameters for bootstrapping are to try to convert the RL problem into a supervised learning problem, because we know much more about how to deal with supervised learning problems when using DNNs. Upvotes: 1
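The target-vector construction both answers describe can be sketched framework-free in plain Python (the function names and the `done` handling are my own; with real tensors you would also stop gradients through the bootstrapped term and compute `next_q_values` with the target network $\theta^-$):

```python
import random

def build_target(q_values, action, reward, next_q_values, gamma=0.99, done=False):
    """Copy the network output and overwrite only the taken action's entry."""
    target = list(q_values)  # copy, so the prediction itself is not mutated
    target[action] = reward + (0.0 if done else gamma * max(next_q_values))
    return target

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon explore, otherwise act greedily."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

Because the target equals the prediction everywhere except index `action`, the squared error (and hence the gradient) is exactly zero for the untaken actions - which is the single-entry update the question asks about.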
2021/02/05
1,497
5,895
<issue_start>username_0: Assuming the input photo is focused on a person's face, if the person is wearing a surgical mask, most face recognition software fail to identify the subject's face. Most facial landmark models are trained to identify at least the eyes and the tip of the nose (for example, dlib's 5 point landmark). Is it possible to construct a model that is trained to identify a face based on only the eyes? Edit: Sorry for my broken english, but by "eyes" I mean the periocular area. I am terribly sorry because english isn't my first language.<issue_comment>username_1: Yes, it must be possible as retina scanners have been used as a method of personal identification for some time. The difference is, if you have a retinal scanner you are probably controlling the focal distance of the picture you are taking and using suitably high resolution. Your mileage may vary as these things decrease in quality. Upvotes: 2 <issue_comment>username_2: The two main eye biometrics are [iris recognition](https://en.wikipedia.org/wiki/Iris_recognition) and [retina recognition](https://en.wikipedia.org/wiki/Retinal_scan) (aka retinal scan). These are not going to work from an ordinary photo of someone's face. I have used iris recognition at about ten feet away and this [article](https://theconversation.com/iris-scanners-can-now-identify-us-from-40-feet-away-42141) claims it can be done at 40 feet! Eye recognition, or identification of a person from an image of their eyes alone (i.e., without seeing their iris or retina), has such a high error rate that it is not done. You may find the following paper of interest: > > [<NAME>., <NAME>., <NAME>., & <NAME>. (2019). Convolutional Neural Network based Eye Recognition from Distantly Acquired Face Images for Human Identification. 2019 International Joint Conference on Neural Networks (IJCNN), Neural Networks (IJCNN), 2019 International Joint Conference On, 1–8. 
doi:10.1109/IJCNN.2019.8852190](https://ieeexplore.ieee.org/document/8852190) > > > For more information on iris recognition see pp 10-11 of this [tutorial](https://www.tutorialspoint.com/biometrics/biometrics_tutorial.pdf), and pp 12-13 for retinal scan. An excerpt from [Retinal vs. Iris Recognition: Did You Know Your Eyes Can Get You Identified?](https://www.bayometric.com/retinal-vs-iris-recognition/) by <NAME>: > > **Retina recognition** > The posterior portion of human eye forms retina. It > is made of a light sensitive tissue. When light passing through cornea > and lens reaches retina, neural signals are generated and transferred > to the brain via the optic nerve. Retina is a thin layer of tissue > formed by neural cells. Capillaries responsible for blood supply of > this layer forms a pattern that can be used for personal > identification. This pattern of blood capillaries is believed to be > unique in each individual due to huge possibility of variation how > these capillaries run on the surface of retina. Since retina is > located at the posterior portion inside the human eye, special > equipment is required to scan this pattern. Retina recognition is one > of the least deployed biometric methods because of high cost of the > implementation and its highly invasive nature that may cause some user > discomfort. Still, it is used is very high security applications like > military and high level government access due to its accuracy and high > level of security. > > > Retina recognition systems make use of low energy infra-red light to > scan the retinal pattern. Blood vessels absorb infrared light while > surrounding tissues reflect it. This reflection is detected by the > retina recognition system and image of this pattern is captured. This > image is further enhanced to make is usable for the recognition > algorithm. 
Retina template is generated once the image is taken > through recognition algorithm; this template is associate with a > subject’s demographic data and stored. The process so far is called > enrolment. The subject’s identity can be verified anytime by scanning > a new retinal sample and matching it against the stored template. > > > **Iris recognition** Iris is the ring shaped colored portion in a human > eye and is visible from outside with naked eye. It is made of muscle > tissue that adjusts the size of pupil and controls how much light can > enter the eye. Amount of melatonin pigment in iris is responsible for > different colors that human eyes take. Folds in iris muscles > throughout the ring create a pattern with great amount of details. > Formation of this pattern is completely random and there is no rule > how it will turn out in an individual’s eye. However, once this > pattern is created during the foetal development, it stays the same > throughout the life. An individual’s irises are unique and > structurally distinct, even iris of same individual does not match. > All these attributes make them good enough for personal recognition. > > > Details of iris can be captured with any high quality digital camera, > however, modern recognition systems make use of near infrared (NIR: > 700–900 nm) instead of visible light to capture details. Since iris > recognition can be established with high quality camera and > recognition software, it can be setup on any computing device; > however, dedicated recognition systems are more common due to > performance and security reasons. Iris recognition systems use a > camera to capture details of the iris and this image is enhanced by > the image enhancement algorithms. Once the image is usable enough, it > is processed by the recognition algorithms, which extracts unique > features to generate a biometric template. 
Associating identity data > with this template establishes identity of the subject in question, > which can be used for identity verification in future. > > > Upvotes: 1
2021/02/05
589
2,371
<issue_start>username_0: I am currently trying to understand transformers. To start, I read [Attention Is All You Need](https://arxiv.org/pdf/1706.03762.pdf) and also [this](https://nlp.seas.harvard.edu/2018/04/03/attention.html) tutorial. What makes me wonder is the word embedding used in the model. Is word2vec or GloVe being used? Are the word embeddings trained from scratch? In the tutorial linked above, the transformer is implemented from scratch and nn.Embedding from pytorch is used for the embeddings. I looked up this function and didn't understand it well, but I tend to think that the embeddings are trained from scratch, right?<issue_comment>username_1: No, neither Word2Vec nor GloVe is used as Transformers are a newer class of algorithms. Word2Vec and GloVe are based on static word embeddings while Transformers are based on dynamic word embeddings. The embeddings are trained from scratch. Upvotes: 3 <issue_comment>username_2: I have found a good answer in this blog post [The Transformer: Attention Is All You Need](https://glassboxmedicine.com/2019/08/15/the-transformer-attention-is-all-you-need/): > > we learn a “word embedding” which is a smaller real-valued vector representation of the word that carries some information about the word. We can do this using nn.Embedding in Pytorch, or, more generally speaking, by multiplying our one-hot vector with a learned weight matrix W. > > > > > There are two options for dealing with the Pytorch nn.Embedding weight matrix. One option is to initialize it with pre-trained embeddings and keep it fixed, in which case it’s really just a lookup table. Another option is to initialize it randomly, or with pre-trained embeddings, but keep it trainable. In that case the word representations will get refined and modified throughout training because the weight matrix will get refined and modified throughout training. 
> > > The Transformer uses a random initialization of the weight matrix and refines these weights during training – i.e. it learns its own word embeddings. > > > Upvotes: 5 [selected_answer]<issue_comment>username_3: Since "initial" word embeddings (those without any positional or context information for each word or subword) are used from the very beginning, it seems to me that someone has to provide a trained embedding for each word at the very start. Upvotes: 0
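The equivalence mentioned above - an embedding lookup is the same as multiplying a one-hot vector by the weight matrix $W$ - can be checked with a toy example in plain Python (the 3-word vocabulary and the numbers are made up):

```python
def one_hot(index, size):
    return [1.0 if i == index else 0.0 for i in range(size)]

def matvec(row_vec, matrix):
    """row_vec (length V) times matrix (V x D) -> length-D vector."""
    dim = len(matrix[0])
    return [sum(row_vec[i] * matrix[i][d] for i in range(len(row_vec)))
            for d in range(dim)]

W = [[0.1, 0.2],   # toy trainable weight matrix: vocab size 3, embedding dim 2
     [0.3, 0.4],
     [0.5, 0.6]]

token_id = 1
# The one-hot product selects exactly row `token_id` of W
assert matvec(one_hot(token_id, 3), W) == W[token_id]
```

`nn.Embedding` simply stores $W$ and indexes into it directly, avoiding the one-hot vector entirely; whether $W$ is trained from scratch or initialized from pre-trained vectors is the choice discussed above.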
2021/02/09
1,811
7,289
<issue_start>username_0: In the context of Artificial Intelligence, sometimes people use the word "agent" and sometimes use the word "model" to refer to the output of the whole "AI-process". For example: "RL **agents**" and "deep learning **models**". Are the two words interchangeable? If not, in what case should I use "agents" instead of "models" and vice versa?<issue_comment>username_1: In a game AI context: * An **Agent** is a player that plays the game: basically, it's a function that gets the current state of the game and returns the next action. * A **Model** is a representation of the game. For example, I have made a Gin-Rummy game + AI-agents. One aspect in the model was the representation of the deck as a $4\*13$ matrix, where each entry in the matrix is the card status (location, whether the card has been seen by the opponent). One can model the same game in different ways; most of the time there is a tradeoff between representability and simplicity. Upvotes: 3 <issue_comment>username_2: ### Agent The other answer defines an agent as a *policy* (as it's defined in reinforcement learning). However, although this definition is fine for most current purposes, given that currently agents are mainly used to solve video games, in the real world, an intelligent agent will also need to have a **body**, which Russell and Norvig call an **architecture** (section 2.4 of the 3rd edition of [Artificial Intelligence: A Modern Approach](https://cs.calvin.edu/courses/cs/344/kvlinden/resources/AIMA-3rd-edition.pdf#page=65), page 46), which should not be confused with an architecture of a model or neural network, but it's the computing device that contains the physical sensors and actuators for the agent to sense and act on the environment, respectively.
So, to be more general, the agent is defined as follows > > agent = body + policy (brain) > > > where the **policy** is what Russell and Norvig call the [**agent program**](https://ai.stackexchange.com/q/1756/2444), which is an implementation of the **agent function**. Alternatively, it can be defined as follows > > An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. > > > This is just another definition given by Russell and Norvig, which I also report in [this answer](https://ai.stackexchange.com/a/12995/2444), where I describe different types of agents. Note that these definitions are equivalent. However, in the first one, we just emphasize that we need some means to "think" (brain) and some means to "behave" (body). These definitions are quite general, so I think people should use them, although, as I said above, sometimes people refer to an agent as just the policy. ### Model In [this answer](https://ai.stackexchange.com/a/25460/2444), I describe what a model is or what I like to think a model is, and how it is different from a function. In AI, a model can refer to different but somehow related concepts. * For example, in *reinforcement learning*, a model typically refers to $p(s', r \mid s, a)$, i.e. the joint probability distribution over the next state $s'$ and reward $r$, given the current state $s$ and action $a$ taken in $s$. * In *deep learning*, a model typically refers to a neural network, which can be used to compute (or model) different functions. For example, a neural network can be used to compute/represent/model a policy, so, in this case, there would be no actual difference between a model and an agent (if defined as a policy, without a body). However, conceptually, at a higher-level, these would still be different (in the same way that biological neural networks are different from the brain). 
* More generally, in *machine learning*, a model typically refers to a system that can be changed to compute some function. Examples of models are decision trees, neural networks, linear regression models, etc. So, as I also state in the other answer, I like to think of a model as a **set of functions**, so, in this sense, a model would be a *hypothesis class* in *computational learning theory*. This definition is roughly consistent with $p(s', r \mid s, a)$, which can also be thought of as a (possibly infinite) set of functions, but note that a probability distribution is not exactly a set of functions. * In the context of *knowledge bases*, a model is an assignment to the variables, which represents a "possible world". See [section 7.3, page 240, of the cited book](https://cs.calvin.edu/courses/cs/344/kvlinden/resources/AIMA-3rd-edition.pdf#page=259). There are possible other uses of the word *model* (both in the context of AI, e.g. in the context of **planning**, there's often the idea of a **conceptual model**, which is similar to an MDP in RL, and in other areas), but the definitions given above should be more or less widely applicable in their contexts. ### What is the difference between an agent and a model? Given that there are different possible definitions of a **model** depending on the context, it's not easy to *briefly* state what the difference between the two is. So, here's the difference in the context of RL (and you can now find out the differences in other contexts by using the different definitions): an agent can have a model of the world, which allows it to predict e.g. the reward it will receive given its current state and some action that it decides to take. The model can allow the agent to plan. In this same context, a model could *also* refer to the specific system (e.g. a neural network) used to compute/represent the policy of the agent, but note that people usually refer to $p(s', r \mid s, a)$ when they use the word *model* in RL. 
See [this post](https://ai.stackexchange.com/q/4456/2444) for more details. Upvotes: 4 [selected_answer]<issue_comment>username_3: Agents act ---------- The key property of agents is that they act. Quoting one possible definition (Russell & Norvig, Artificial Intelligence: A Modern Approach), "An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators." The environment can be the physical world around us, or some computer system, or a simulation; however, the key part of an agent that distinguishes it from non-agents is that it acts to affect that environment - as opposed to merely observing it and/or making some calculations. Within the field of artificial intelligence there are many contexts where various models are used to decide upon a course of action. For example, reinforcement learning, or world models that are used in planning systems. In this case the whole system would be an agent, but the model would be just a part of that system. However, that does not apply for all contexts - there are systems that are used for making decisions not related to any action (e.g. an OCR system recognizing particular letters), and there are systems that make a decision about some action, but are not intended to actually act on that decision (e.g. they show the proposed decision to a human who might act on that information). Such systems may contain models, but in this case the whole system should not be called an agent either, because the actual act of acting is out of scope for that system. Upvotes: 2
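To make the RL-flavoured distinction concrete, here is a deliberately tiny sketch (all dynamics are invented for illustration): the *model* is a function from $(s, a)$ to $(s', r)$, while the *agent* is the component that chooses actions - here by one-step planning with that model:

```python
def model(state, action):
    """Hypothetical deterministic world model: (s, a) -> (next_state, reward)."""
    next_state = (state + action) % 3
    reward = 1.0 if next_state == 0 else 0.0
    return next_state, reward

def agent_policy(state):
    """The agent's policy: act by querying the model for the best one-step reward."""
    return max([0, 1, 2], key=lambda a: model(state, a)[1])
```

The model alone does nothing; the agent is what perceives the state and acts, possibly (but not necessarily) by consulting a model.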
2021/02/09
519
1,843
<issue_start>username_0: I have a model that outputs a latent **N-dimensional embedding** for all data points, trained in a way that clusters data-points from the same class together, while keeping them separated from clusters belonging to other classes. The N-dimensional embedding is projected down to 2D using UMAP. At each epoch, I wish to test the **clustering capability** of the model on these 2D projections for use as validation accuracy. I have the labels for each class. How should I proceed? [![UMAP Projection ](https://i.stack.imgur.com/DvTCF.png)](https://i.stack.imgur.com/DvTCF.png)<issue_comment>username_1: You can compute the [Silhouette Coefficient](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html) for your aim. Its values [mean](https://towardsdatascience.com/silhouette-coefficient-validating-clustering-techniques-e976bb81d10c): > > 1: Means clusters are well apart from each other and clearly distinguished. > > > 0: Means clusters are indifferent, or we can say that the distance between clusters is not significant. > > > -1: Means clusters are assigned in the wrong way. > > > Other measures, such as [purity and mutual information](https://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html), are also possible by computing > > an external criterion that evaluates how well the clustering matches the gold standard classes > > > Upvotes: 2 <issue_comment>username_2: One more popular metric for this is the [Davies Bouldin Score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.davies_bouldin_score.html#sklearn.metrics.davies_bouldin_score). You can also take a look at the clustering metrics in [scikit documentation](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics). Upvotes: 1
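A minimal from-scratch version of the silhouette coefficient mentioned in the answers (in practice you would just call `sklearn.metrics.silhouette_score` on the 2D projections with your class labels; this sketch assumes Euclidean distance, at least two clusters, and at least two points per cluster):

```python
def silhouette(points, labels):
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    scores = []
    for i, p in enumerate(points):
        intra = [dist(p, q) for j, q in enumerate(points)
                 if j != i and labels[j] == labels[i]]
        a = sum(intra) / len(intra)            # mean intra-cluster distance
        b = min(                               # mean distance to the nearest other cluster
            sum(dist(p, q) for j, q in enumerate(points) if labels[j] == lab)
            / labels.count(lab)
            for lab in set(labels) if lab != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)           # mean silhouette over all points
```

Values near 1 mean well-separated clusters, near 0 overlapping clusters, and negative values mis-assigned points - matching the interpretation quoted in the answer.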
2021/02/09
654
2,536
<issue_start>username_0: I have a scanned image, and they need to be classified in one of the pre-defined image classes, so that it can be sorted. However, the problem is the open nature of the classes. At testing time, new classes of scanned images can be added and the model should not only classify them as unseen (open set image recognition), but it should be able to tell in which new class it should belong (not able to figure out the implementation for this.) So, I am thinking that the below option can work for the classification of unseen classes 1. Zero-shot learning: Once the image is classified as unseen, we can then apply zero-shot learning to find its respective class for sorting. 2. Template matching: Match the test image of unseen classes with all available class images, and, once we have a match, we can do sorting of images. 3. Meta learning-based approach: I am not sure how to implement this, suggestions are much appreciated. Note: I already tried the classical computer vision approach, but it's not working out. So, more open for neural net-based approach. Is my approach to solving the problem correct? If possible, suggest some alternative to find the corresponding match/classification of the unseen class image. As I could think of these 2 alternative solutions only.
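The template/prototype-matching idea in option 2 can be sketched as a nearest-prototype classifier over embeddings with an open-set threshold (everything here - the cosine metric, the 0.8 threshold, the `new_class_*` naming - is a hypothetical illustration; the embeddings would come from whatever backbone network is used):

```python
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

class OpenSetMatcher:
    """Match an embedding to the closest class prototype; open a new class
    when no prototype is similar enough (the open-set case)."""

    def __init__(self, prototypes, threshold=0.8):
        self.prototypes = dict(prototypes)   # class name -> prototype embedding
        self.threshold = threshold
        self._new = 0

    def classify(self, embedding):
        best, best_sim = None, -1.0
        for name, proto in self.prototypes.items():
            sim = cosine(embedding, proto)
            if sim > best_sim:
                best, best_sim = name, sim
        if best is None or best_sim < self.threshold:
            self._new += 1
            name = f"new_class_{self._new}"
            self.prototypes[name] = list(embedding)  # register the unseen class
            return name, False                       # False: not a known class
        return best, True
```

Registering the unseen embedding as a fresh prototype is what lets later images of the same new class be sorted together, which is the behaviour the question asks for.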
2021/02/11
1,197
4,974
<issue_start>username_0: Conceptually, in general, how is the *context* being handled in contextual bandits (CB), compared to *states* in reinforcement learning (RL)? Specifically, in RL, we can use a function approximator (e.g. a neural network) to generalize to other states. Would that also be possible or desirable in the CB setting? In general, what is the relation between the *context* in CB and the *state* in RL?<issue_comment>username_1: The notion of a *state* in reinforcement learning is (more or less) the same as the notion of a *context* in contextual bandits. The main difference is that, in reinforcement learning, an action $a\_t$ in state $s\_t$ not only affects the reward $r\_t$ that the agent will get but it will also affect the next state $s\_{t+1}$ the agent will end up in, while, in contextual bandits (aka associative search problems), an action $a\_t$ in the state $s\_t$ only affects the reward $r\_t$ that you will get, but it doesn't affect the next state the agent will end up in. The typical problem that can be formulated as a contextual bandit problem is a [*recommender system*](https://netflixtechblog.com/artwork-personalization-c589f074ad76). In CBs, like in RL, the agent also needs to learn a policy, i.e. a function from states to actions, but actions that you take in a certain state are independent of the actions you take in other states. So, as Sutton and Barto put it ([2nd edition, section 2.9, page 41](http://incompleteideas.net/book/RLbook2020.pdf#page=63)), contextual bandits are an intermediate problem between (context-free) bandits (where there is only one state or, equivalently, no state at all) and the full reinforcement learning problem. Another important characteristic of many RL algorithms, such as Q-learning, is that they assume that the state is [Markov](https://ai.stackexchange.com/q/16667/2444), i.e. it contains all necessary info to take the optimal action, but, of course, RL is not just applicable to fully observable MDPs.
In fact, even Q-learning has been applied to POMDPs, with some approximations and tricks. Regarding the use of neural networks to approximate $q(s, a)$ or a policy in CBs, in principle, this is possible. However, given that the optimal action in a state $s$ is independent of the optimal action in another state $s'$, this is probably not useful, but I cannot guarantee you that this has not been successfully done, because I've not yet read the relevant literature (maybe someone else will provide another answer to address this aspect). Upvotes: 4 [selected_answer]<issue_comment>username_2: > > Conceptually, in general, how is the context being handled in CB, compared to states in RL? > > > In terms of its place in the description of Contextual Bandits and Reinforcement Learning, context in CB is an exact analog for state in RL. The framework for RL is a strict generalisation of CB, and can be made similar or the same in a few separate ways: * If the agent is optimised for immediate reward only (discount factor $\gamma=0$), then optimal action choice depends only on the current state without considering consequences. However, the environment may not behave much like a contextual bandit over multiple time steps, so it would be hard to think in terms of the kind of optimisations that apply for CB (such as minimising regret). * If state progression in RL is unrelated to the action chosen, then optimal action choices depend only on the current state. There still might be some benefit from understanding the expected state progression in order to predict future rewards, and the ability to learn about different states may be limited by the progression, so this is not full equivalence, but it is very close. * If state progression in RL is unrelated to any previous history (of states, actions, rewards), and state is drawn from the same population at any time step, then the full MDP description is not necessary, each time step is expected to be like the last.
A contextual bandit model could well be more appropriate. Another thing to consider is what your objectives are for studying the environment or applying an agent within it. Bandit solvers are usually applied to environments where the agent is expected to learn strictly online, and the goal of the developer is to write a learner that uses a minimal amount of information to decide on optimal or near optimal choices. One common metric for this is to minimise regret, or the expected difference in reward between the agent's action choices and the ideal choice summed over time. If you have offline data to work from, then predictions for an optimal agent in a CB environment devolve to supervised learning of a regression task. There is no simple equivalent for this in RL, because actions have consequences that create links between states. As a result, offline RL methods are very similar to online RL ones - the state, action, reward data is processed much the same way. Upvotes: 2
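A minimal tabular illustration of the online bandit setting both answers describe (the deterministic rewards are made up for the example; note the update has no bootstrapped next-state term, which is exactly what separates CB from full RL):

```python
import random

class EpsilonGreedyCB:
    """Per-context action values: each context's row is learned independently."""

    def __init__(self, n_contexts, n_actions, epsilon=0.1, step_size=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_contexts)]
        self.epsilon = epsilon
        self.step_size = step_size

    def act(self, context):
        if random.random() < self.epsilon:
            return random.randrange(len(self.q[context]))
        row = self.q[context]
        return max(range(len(row)), key=lambda a: row[a])

    def update(self, context, action, reward):
        # Incremental step toward the observed reward only - no gamma * max Q(s', .)
        self.q[context][action] += self.step_size * (reward - self.q[context][action])

# Toy environment: context 0 prefers action 1, context 1 prefers action 0
random.seed(0)
agent = EpsilonGreedyCB(n_contexts=2, n_actions=2)
for _ in range(2000):
    ctx = random.randrange(2)
    a = agent.act(ctx)
    r = 1.0 if (ctx == 0 and a == 1) or (ctx == 1 and a == 0) else 0.0
    agent.update(ctx, a, r)
```

Because each row of `q` is updated independently, generalising across contexts (e.g. with a network mapping context features to action values) is optional in CB, whereas in RL the bootstrap term couples the states together.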
2021/02/13
4,470
11,810
<issue_start>username_0: I found the following PyTorch code (from [this link](https://debuggercafe.com/getting-started-with-variational-autoencoder-using-pytorch/)) ``` -0.5 * torch.sum(1 + sigma - mu.pow(2) - sigma.exp()) ``` where `mu` is the mean parameter that comes out of the model and `sigma` is the sigma parameter out of the encoder. This expression is apparently equivalent to the KL divergence. But I don't see how this calculates the KL divergence for the latent.<issue_comment>username_1: The code is correct. Since OP asked for a proof, one follows. The usage in the code is straightforward if you observe that the authors are using the symbols unconventionally: `sigma` is the *natural logarithm of the variance,* where usually a normal distribution is characterized in terms of a mean $\mu$ and variance. Some of the functions in OP's link even have arguments named `log_var`.$^\*$ If you're not sure how to derive the standard expression for KL Divergence in this case, you can start from the definition of KL divergence and crank through the arithmetic. In this case, $p$ is the normal distribution given by the encoder and $q$ is the standard normal distribution. $$\begin{align} D\_\text{KL}(P \| Q) &= \int\_{-\infty}^{\infty} p(x) \log\left(\frac{p(x)}{q(x)}\right) dx \\ &= \int\_{-\infty}^{\infty} p(x) \log(p(x)) dx - \int\_{-\infty}^{\infty} p(x) \log(q(x)) dx \end{align}$$ The first integral is recognizable as almost the definition of the entropy of a Gaussian (up to a change of sign). $$ \int\_{-\infty}^{\infty} p(x) \log(p(x)) dx = -\frac{1}{2}\left(1 + \log(2\pi\sigma\_1^2) \right) $$ The second one is more involved.
$$ \begin{align} -\int\_{-\infty}^{\infty} p(x) \log(q(x)) dx &= \frac{1}{2}\log(2\pi\sigma\_2^2) - \int p(x) \left(-\frac{\left(x - \mu\_2\right)^2}{2 \sigma\_2^2}\right)dx \\ &= \frac{1}{2}\log(2\pi\sigma\_2^2) + \frac{\mathbb{E}\_{x\sim p}[x^2] - 2 \mathbb{E}\_{x\sim p}[x]\mu\_2 +\mu\_2^2} {2\sigma\_2^2} \\ &= \frac{1}{2}\log(2\pi\sigma\_2^2) + \frac{\sigma\_1^2 + \mu\_1^2-2\mu\_1\mu\_2+\mu\_2^2}{2\sigma\_2^2} \\ &= \frac{1}{2}\log(2\pi\sigma\_2^2) + \frac{\sigma\_1^2 + (\mu\_1 - \mu\_2)^2}{2\sigma\_2^2} \end{align} $$ The key is recognizing this gives us a sum of several integrals, and each can apply the law of the unconscious statistician. Then we use the fact that $\text{Var}(x)=\mathbb{E}[x^2]-\mathbb{E}[x]^2$. The rest is just rearranging. Putting it all together: $$ \begin{align} D\_\text{KL}(P \| Q) &= -\frac{1}{2}\left(1 + \log(2\pi\sigma\_1^2) \right) + \frac{1}{2}\log(2\pi\sigma\_2^2) + \frac{\sigma\_1^2 + (\mu\_1 - \mu\_2)^2}{2\sigma\_2^2} \\ &= \log (\sigma\_2) - \log(\sigma\_1) + \frac{\sigma\_1^2 + (\mu\_1 - \mu\_2)^2}{2\sigma\_2^2} - \frac{1}{2} \end{align} $$ In this special case, we know that $q$ is a standard normal, so $$ \begin{align} D\_\text{KL}(P \| Q) &= -\log \sigma\_1 + \frac{1}{2}\left(\sigma\_1^2 + \mu\_1^2 - 1 \right) \\ &= - \frac{1}{2}\left(1 + 2\log \sigma\_1- \mu\_1^2 -\sigma\_1^2 \right) \end{align} $$ In the case that we have a $k$-variate normal with diagonal covariance for $p$, and a multivariate normal with covariance $I$, this is the sum of $k$ univariate normal distributions because in this case the distributions are independent. The code is a correct implementation of this expression because $\log(\sigma\_1^2) = 2 \log(\sigma\_1)$ and in the code, `sigma` is the logarithm of the variance. --- $^\*$The reason that it's convenient to work on the scale of the *log-variance* is that the log-variance can be any real number, but the *variance* is constrained to be non-negative by definition. 
It's easier to perform optimization on the unconstrained scale than it is to work on the constrained scale in $\eta^2$. Also, we want to avoid "round-tripping," where we compute $\exp(y)$ in one step and then $\log(\exp(y))$ in a later step, because this incurs a loss of precision. In any case, autograd takes care of all of the messy details with adjustments to gradients resulting from moving from one scale to another. Upvotes: 4 [selected_answer]<issue_comment>username_2: This is the analytical form of the KL divergence between two multivariate Gaussian densities with diagonal covariance matrices (i.e. we assume independence). More precisely, it's the KL divergence between the **variational distribution** $$ q\_{\boldsymbol{\phi}}(\mathbf{z}) = \mathcal{N}\left(\mathbf{z} ; \boldsymbol{\mu}, \mathbf{\Sigma} = \boldsymbol{\sigma}^{2}\mathbf{I}\right) = \frac{\exp \left(-\frac{1}{2}\left(\mathbf{z} - \boldsymbol{\mu}\right)^{\mathrm{T}} \mathbf{\Sigma}^{-1}\left(\mathbf{z}-\boldsymbol{\mu} \right)\right)}{\sqrt{(2 \pi)^{J}\left|\mathbf{\Sigma}\right|}} \tag{1}\label{1} $$ and the prior (it's the same as above, but with mean and covariance equal to the zero vector and the identity matrix, respectively) $$ p(\mathbf{z})=\mathcal{N}(\mathbf{z} ;\boldsymbol{0}, \mathbf{I}) = \frac{\exp \left(-\frac{1}{2}\mathbf{z}^{\mathrm{T}}\mathbf{z}\right)}{\sqrt{(2 \pi)^{J}}} \tag{2}\label{2} $$ where * $\boldsymbol{\mu} \in \mathbb{R}^J$ is the mean vector (we assume column vectors, so $\boldsymbol{\mu}^T$ would be a row vector) * $\mathbf{\Sigma} = \boldsymbol{\sigma}^{2}\mathbf{I} \in \mathbb{R}^{J \times J}$ is a diagonal covariance matrix (with the vector $\boldsymbol{\sigma}^{2}$ on the diagonal of the identity) * $\mathbf{z} \in \mathbb{R}^J$ is a sample (latent vector) from these Gaussians with dimensionality $J$ (or, at the same time, the input variable of the density) * $\left|\mathbf{\Sigma}\right| = \operatorname{det} \mathbf{\Sigma}$ is the determinant (so a number) of 
the diagonal covariance matrix, which is just the product of the diagonal elements for a diagonal matrix (which is the case); so, in the case of the identity, the determinant is $1$ * $\boldsymbol{0} \in \mathbb{R}^J$ is the zero vector * $\mathbf{I} \in \mathbb{R}^{J \times J}$ is an identity matrix * $\mathbf{z}^{\mathrm{T}}\mathbf{z} = \sum\_{i=1}^J z\_i^2 \in \mathbb{R}$ is the dot product (hence a number) Now, the (negative of the) KL divergence is defined as follows \begin{align} -D\_{K L}\left(q\_{\boldsymbol{\phi}}(\mathbf{z}) \| p(\mathbf{z})\right) &= \int q\_{\boldsymbol{\phi}}(\mathbf{z})\left(\log p(\mathbf{z})-\log q\_{\boldsymbol{\phi}}(\mathbf{z})\right) d \mathbf{z} \\ &= \mathbb{E}\_{q\_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \log p(\mathbf{z})-\log q\_{\boldsymbol{\phi}}(\mathbf{z})\right] \label{3}\tag{3} \end{align} Given that we have logarithms here, let's compute the logarithm of equations \ref{1} and \ref{2} \begin{align} \log \left( \mathcal{N}\left(\mathbf{z} ; \boldsymbol{\mu}, \mathbf{\Sigma} \right) \right) &= \dots \\ &= -\frac{1}{2}(\mathbf{z}-\boldsymbol{\mu})^{\mathrm{T}} \mathbf{\Sigma}^{-1}(\mathbf{z}-\boldsymbol{\mu})-\frac{J}{2} \log (2 \pi)-\frac{1}{2} \log |\mathbf{\Sigma} | \end{align} and \begin{align} \log \left( \mathcal{N}(\mathbf{z} ;\boldsymbol{0}, \mathbf{I}) \right) &= \dots \\ &= -\frac{1}{2}\mathbf{z}^{\mathrm{T}} \mathbf{z}-\frac{J}{2} \log (2 \pi) \end{align} We can now replace these in equation \ref{3} (below, I have already performed some simplifications, to remove verbosity, but you can check them!) 
\begin{align} \frac{1}{2} \mathbb{E}\_{q\_{\boldsymbol{\phi}}(\mathbf{z})} \left[ -\mathbf{z}^{\mathrm{T}} \mathbf{z} + (\mathbf{z}-\boldsymbol{\mu})^{\mathrm{T}} \mathbf{\Sigma}^{-1}(\mathbf{z}-\boldsymbol{\mu}) + \log |\mathbf{\Sigma} | \right] \tag{4}\label{4} \end{align} Now, given that $\mathbf{\Sigma}$ is diagonal and the log of a product is just a sum of the logarithms, we have $\log |\mathbf{\Sigma} | = \sum\_{i=1}^J \log \sigma\_{ii}$, so we can continue \begin{align} \frac{1}{2} \left( - \mathbb{E}\_{q\_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{z}^{\mathrm{T}} \mathbf{z} \right] + \mathbb{E}\_{q\_{\boldsymbol{\phi}}(\mathbf{z})} \left[ (\mathbf{z}-\boldsymbol{\mu})^{\mathrm{T}} \mathbf{\Sigma}^{-1}(\mathbf{z}-\boldsymbol{\mu}) \right] + \sum\_{i=1}^J \log \sigma\_{ii} \right) &= \\ \frac{1}{2} \left( - \mathbb{E}\_{q\_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{z}^{\mathrm{T}} \mathbf{z} \right] + \mathbb{E}\_{q\_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \operatorname{tr} \left( \mathbf{\Sigma}^{-1}(\mathbf{z}-\boldsymbol{\mu}) (\mathbf{z}-\boldsymbol{\mu})^{\mathrm{T}} \right) \right] + \sum\_{i=1}^J \log \sigma\_{ii} \right) &= \\ \frac{1}{2} \left( - \mathbb{E}\_{q\_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{z}^{\mathrm{T}} \mathbf{z} \right] + \operatorname{tr} \left( \mathbb{E}\_{q\_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{\Sigma}^{-1}(\mathbf{z}-\boldsymbol{\mu}) (\mathbf{z}-\boldsymbol{\mu})^{\mathrm{T}} \right] \right) + \sum\_{i=1}^J \log \sigma\_{ii} \right) &= \\ \frac{1}{2} \left( - \mathbb{E}\_{q\_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{z}^{\mathrm{T}} \mathbf{z} \right] + \operatorname{tr} \left( \mathbf{\Sigma}^{-1} \mathbb{E}\_{q\_{\boldsymbol{\phi}}(\mathbf{z})} \left[ (\mathbf{z}-\boldsymbol{\mu}) (\mathbf{z}-\boldsymbol{\mu})^{\mathrm{T}} \right] \right) + \sum\_{i=1}^J \log \sigma\_{ii} \right) &= \\ \frac{1}{2} \left( - \mathbb{E}\_{q\_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{z}^{\mathrm{T}} 
\mathbf{z} \right] + \operatorname{tr} \left( \mathbf{\Sigma}^{-1} \mathbf{\Sigma} \right) + \sum\_{i=1}^J \log \sigma\_{ii} \right) &= \\ \frac{1}{2} \left( - \mathbb{E}\_{q\_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{z}^{\mathrm{T}} \mathbf{z} \right] + J + \sum\_{i=1}^J \log \sigma\_{ii} \right) &= \\ \frac{1}{2} \left( - \mathbb{E}\_{q\_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \operatorname{tr} \left( \mathbf{z} \mathbf{z}^{\mathrm{T}} \right) \right] + \sum\_{i=1}^J 1 + \sum\_{i=1}^J \log \sigma\_{ii} \right) &= \\ \frac{1}{2} \left( - \operatorname{tr} \left( \mathbf{\Sigma}\right) - \operatorname{tr} \left( \boldsymbol{\mu} \boldsymbol{\mu}^T \right) + \sum\_{i=1}^J 1 + \sum\_{i=1}^J \log \sigma\_{ii} \right) &= \\ \frac{1}{2} \left( - \sum\_{i=1}^J \sigma\_{ii} - \sum\_{i=1}^J \mu\_{i}^2 + \sum\_{i=1}^J 1 + \sum\_{i=1}^J \log \sigma\_{ii} \right) &= \\ \frac{1}{2} \sum\_{i=1}^J \left( 1 + \log \sigma\_{ii} - \sigma\_{ii} - \mu\_{i}^2 \right) \end{align} In the above simplifications, I also applied the following rules. 
* the linearity of the expectations, * $\operatorname{tr}(ba^T) = a^Tb$ (where $\operatorname{tr}$ is the trace of a matrix) * [you can swap the trace and the expectation](https://math.stackexchange.com/q/3268268/168764) * [$\mathbf{\Sigma} = \mathbb{E}\_{q\_{\boldsymbol{\phi}}(\mathbf{z})} \left[ (\mathbf{z}-\boldsymbol{\mu}) (\mathbf{z}-\boldsymbol{\mu})^{\mathrm{T}} \right] $](https://en.wikipedia.org/wiki/Covariance_matrix) (this is just the definition of the *covariance matrix*) * $\operatorname{tr}\left( \mathbf{\Sigma}^{-1} \mathbf{\Sigma} \right) = \operatorname{tr}\left(\mathbf{I} \right) = J = \sum\_{i=1}^J 1$ * [$ \mathbb{E}\_{q\_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{z} \mathbf{z}^{\mathrm{T}} \right] = \mathbf{\Sigma} + \boldsymbol{\mu} \boldsymbol{\mu}^T$](https://www.probabilitycourse.com/chapter6/6_1_5_random_vectors.php) (definition of the *correlation matrix*) The official PyTorch implementation of the VAE, which can be found [here](https://github.com/pytorch/examples/blob/master/vae/main.py), also uses this formula. This formula can also be found in [Appendix B of the VAE paper](https://arxiv.org/pdf/1312.6114.pdf#page=10), but the long proof that I've just written above is not given. Note that, in my proof above, $\sigma$ is the variance and is denoted by $\sigma^2$ in the paper (as it is usually the case to denote the variance as the square of the standard deviation $\sigma$, but, again, in my proof above $\sigma$ is the variance). Upvotes: 3
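A quick numerical sanity check of the closed form (a NumPy sketch of my own; it compares the expression from the PyTorch snippet, written in terms of the log-variance, against the general two-Gaussian formula derived above with $\mu\_2 = 0$, $\sigma\_2 = 1$):

```python
import numpy as np

mu = np.array([0.3, -1.2, 0.0])
log_var = np.array([0.1, -0.5, 0.4])  # what the PyTorch snippet calls `sigma`
var = np.exp(log_var)

# Expression used in the snippet (elementwise over latent dims, then summed)
kl_code = -0.5 * np.sum(1 + log_var - mu**2 - var)

# General formula log(s2/s1) + (s1^2 + (mu1 - mu2)^2) / (2 s2^2) - 1/2,
# specialised to mu2 = 0, s2 = 1, summed over independent dimensions
s1 = np.sqrt(var)
kl_formula = np.sum(-np.log(s1) + 0.5 * (var + mu**2 - 1.0))

assert np.isclose(kl_code, kl_formula)
```

The two expressions agree exactly, since $\log(\sigma\_1^2) = 2\log\sigma\_1$ and `log_var` is the logarithm of the variance.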
2021/02/15
587
2,339
<issue_start>username_0: Is it practical/affordable to train an AlphaZero/MuZero engine using a residential gaming PC, or would it take thousands of years of training for the AI to learn enough to challenge humans? I'm having trouble wrapping my head around how much computing power '4 hours of Google DeepMind training' equates to my residential computer running 24/7 trying to build a trained AI. Basically, are AlphaZero or MuZero practical for indie board games that want a state of the art AI, or is it too expensive to train?<issue_comment>username_1: The vast majority of neural networks are now trained on graphics processing units (GPUs) or specialised accelerator hardware such as [tensor processing units](https://en.wikipedia.org/wiki/Tensor_Processing_Unit) (TPUs). In [Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm](https://arxiv.org/pdf/1712.01815.pdf), Silver et al. say that the training process involved 5,000 first-generation TPUs generating self-play games and 64 second-generation TPUs for training. This is certainly far beyond what any practical gaming computer is likely to achieve, as you'll likely only have one GPU, and that might not even rival a single TPU. Training on the CPU will be substantially slower again than either a GPU or TPU. Training would be orders of magnitude slower; you might find [these benchmarks](https://arxiv.org/pdf/1907.10701.pdf) by Wang et al. of interest. Upvotes: 3 [selected_answer]<issue_comment>username_2: For board games, you may not need an advanced computer to achieve very good results. TPU and high-end GPUs allow to go one step further of course. But there is an existing implementation on github alpha-zero-general (not from me). By adding some optimizations tricks from scientific articles, I managed to train alphazero for several board games (Splendor, Santorini, Machi Koro, The Little Prince Make Me A Planet) with my i5 CPU. 
You may not need a GPU, as most of the time is spent on self-play rather than on training, meaning the computer performs many more network inferences than gradient updates. After about a day of training, the training seemed to reach its asymptote. As a result, the computer was beating me 100% of the time on each game. I don't want to promote my own GitHub, just telling OP that it is feasible. Upvotes: 2
2021/02/16
556
2,157
<issue_start>username_0: If I train a U-Net model for image segmentation (e.g. medical images) and start training until it converges and then add augmentation - can i expect similar results as if i train with augmentation from the beginning ? [![enter image description here](https://i.stack.imgur.com/ITA5H.png)](https://i.stack.imgur.com/ITA5H.png)
2021/02/17
662
2,858
<issue_start>username_0: How would you explain Federated Learning in simple layman terms for a [non-STEM](https://en.wikipedia.org/wiki/Science,_technology,_engineering,_and_mathematics) person? What are the main ideas behind Federated Learning?<issue_comment>username_1: The analogy is to a federal system of government. In a federation, smaller pieces follow the direction of a higher piece. In federated machine learning, you give your data for processing to the higher machine. The federation in this analogy is a collection of smaller computers. The central computer breaks up your data and gives portions of it to each smaller computer. When those computers are done they return the results and the central computer reassembles them into a single model. The main idea is distributed processing. Some benefits are: * Cost: it may be cheaper to operate multiple inexpensive computers rather than fewer more expensive computers. * [Privacy](https://medium.com/frstvc/otium-neural-newsletter-1-federated-learning-a-step-closer-towards-confidential-ai-efe28832006f): if this is sensitive data like healthcare records then perhaps you don't want all of one person's data in a single place where the wrong person can grab it. Upvotes: 3 [selected_answer]<issue_comment>username_2: I think the answer given by @CorruptedHeadScapeGoat is very good. If I can, I would like to offer an example I like to use, and it's that of a mobile phone. Imagine a phone with a next-word predictor model running on it, which has been training on all the words you have been typing into the phone, getting better as it receives and trains on more of your data. The model on your device (the local model) sends updates back to the global model, which is retrained not only on your device's information, but on the information of all the devices out there associated with this system, to refine the global model; the refined global model is then sent back to the local device.
Typically, a conventional system would send the data back to the global model for the global model to retrain on the new data. Of course, you won't want all the data you have been typing into your phone to be sent to some central server; that is your private information. So, when it is time to send updates back to the global model, the system will just send the updated local model back, and *not* the data. This means that the local device keeps its data safe and private, which is of course one of the main benefits of FL. This exchange of local models and global updates is very much an iterative process, and will happen at either periodic or aperiodic intervals - depending on whether the device is able to update or not (which could be due to device failures, communication issues, etc.). Upvotes: 0
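A toy sketch of this "send models, not data" loop (federated averaging on a one-parameter linear model; the function names, data, and number of rounds are my own illustration, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(w_global, x, y, lr=0.1, steps=20):
    """Each device fits y = w*x on its private data; only `w` leaves the device."""
    w = w_global
    for _ in range(steps):
        grad = np.mean(2 * x * (w * x - y))  # d/dw of mean squared error
        w -= lr * grad
    return w

# Five devices, each holding private data generated from the same true model
true_w = 3.0
devices = []
for _ in range(5):
    x = rng.normal(size=50)
    y = true_w * x + 0.01 * rng.normal(size=50)
    devices.append((x, y))

w_global = 0.0
for _ in range(10):  # each round: local training, then server-side averaging
    local_ws = [local_update(w_global, x, y) for x, y in devices]
    w_global = float(np.mean(local_ws))  # the server only ever sees models

assert abs(w_global - true_w) < 0.1  # global model converges without seeing raw data
```

The raw `(x, y)` pairs never leave the devices; only the scalar weights travel, which is the privacy point made above.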
2021/02/17
564
1,901
<issue_start>username_0: I understand why tf.abs is non-differentiable in principle (its derivative is discontinuous at 0), but the same applies to tf.nn.relu; yet, in the case of that function, the gradient is simply set to 0 at 0. Why is the same logic not applied to tf.abs? Whenever I tried to use it in my custom loss implementation, TF was throwing errors about missing gradients.<issue_comment>username_1: By convention, the $\mathrm{ReLU}$ activation is treated as if it is differentiable at zero (e.g. in [1]). Therefore it makes sense for TensorFlow to adopt this convention for `tf.nn.relu`. As you've found, of course, it's not true in general that we treat the gradient of the absolute value function as zero in the same situation; it makes sense for it to be an explicit choice to use this trick, because it might not be what the code author intends in general. In a way this is compatible with the Python philosophy that *explicit is better than implicit*. If you mean to use $\mathrm{ReLU}$, it's probably best to use `tf.nn.relu` if it is suitable for your use case. [1] <NAME> and <NAME>. *Rectified Linear Units Improve Restricted Boltzmann Machines*. ICML'10 (2010). [URL](https://www.cs.toronto.edu/%7Efritz/absps/reluICML.pdf). Upvotes: 3 [selected_answer]<issue_comment>username_2: Creating a custom gradient for `tf.abs` may solve the problem:

```
import tensorflow as tf

@tf.custom_gradient
def abs_with_grad(x):
    y = tf.abs(x)
    def grad(dy):
        g = 1.0  # use 1 to make the chain rule just skip through abs
        return dy * g
    return y, grad
```

Use `1` as above to skip through abs, or use the actual abs gradient ([Samual K](https://ai.stackexchange.com/users/69830/samuel-k)):

```
g = tf.where(x < 0, -1.0, 1.0)  # now the gradient at 0 is one, so you don't have dead weights
# g = tf.where(x == 0, 0.0, g)  # add this if you really want the gradient to be 0 at 0
```

Upvotes: 2
2021/02/20
716
2,247
<issue_start>username_0: If I'm dealing with a sequence of images as the input (frame by frame), and I want to output a matrix at each timestamp, can the hidden state be a matrix?<issue_comment>username_1: Yes; I would say more: the hidden state can be a tensor of arbitrary dimensionality. For a vanilla RNN, the update rules of the hidden state and the output are: $$ h\_t = \sigma\_h(W\_h x\_t + U\_h h\_{t-1} + b\_h) $$ $$ y\_t = \sigma\_y(W\_y h\_t + b\_y) $$ Here $W\_h$ is the input-to-hidden matrix, $U\_h$ is the hidden-to-hidden matrix, and $W\_y$ is the hidden-to-output matrix. One can simply upgrade these matrix products to the more general case of tensor products, so that they can handle multidimensional input data and hidden states. The contraction of an $m$-dimensional tensor $X$ with an $n$-dimensional tensor $Y$ (we assume $m \geq n$) over the maximal number of indices yields an $(m-n)$-dimensional tensor $Z$: $$ X\_{i\_1 \ldots i\_m} Y\_{i\_1 \ldots i\_n} = Z\_{i\_{n+1} \ldots i\_m} $$ Namely, imagine that $x\_t = (x\_t)\_{i\_1 \ldots i\_{d\_x}}$ is a $d\_x$-dimensional tensor. Then take $W\_h$ to be a $d\_{wh}$-dimensional tensor, the hidden state $h$ to be $(d\_h = d\_{wh} - d\_{x})$-dimensional, $U\_h$ to be $d\_{uh}$-dimensional, and $W\_y$ to be $d\_{wy}$-dimensional. The output $y\_t$ of the RNN will be a $(d\_{wy} - d\_{h})$-dimensional tensor. Note that, for the sum in the first equation to make sense, the dimensions need to match: $$ d\_{uh} - d\_{h} = d\_{h} $$ Upvotes: 2 <issue_comment>username_2: The question is, why do you specifically want a matrix? I assume you mean per-frame features. In that case you can use a ConvNet as a feature extractor, i.e. it outputs a feature vector of fixed size. This vector is the input to your LSTM. If you have $C$ frames, the output of the LSTM is $$ Output, (h, c) = LSTM (frame \ features) $$ where the frame features have size $(1, C, v)$, where $v$ is the feature vector length from the ConvNet.
$Output$ is size $(1, C, w)$ and $h$ is $(1, 1, w)$, where $w$ is the size of the LSTM's hidden layer and $h$ is the last hidden state. Now, you can reshape $Output$ to $(C, w)$ and feed it into a linear layer, which outputs a batch of size $C$. These are your per-frame predictions. Upvotes: 1
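A shape-level sketch of the tensor-contraction update described in the first answer (NumPy; all the dimension sizes below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
Ih, Iw = 3, 3  # input frame features: a 3x3 matrix
H, W = 4, 5    # hidden state: a 4x5 matrix

W_h = rng.normal(size=(Ih, Iw, H, W))  # input-to-hidden tensor
U_h = rng.normal(size=(H, W, H, W))    # hidden-to-hidden tensor
b_h = np.zeros((H, W))

x_t = rng.normal(size=(Ih, Iw))
h_prev = np.zeros((H, W))

# Contract x_t with W_h over both of its indices, and h_prev with U_h likewise:
# h_t = tanh(W_h x_t + U_h h_{t-1} + b_h), with a matrix-shaped hidden state
h_t = np.tanh(np.tensordot(x_t, W_h, axes=2)
              + np.tensordot(h_prev, U_h, axes=2)
              + b_h)

assert h_t.shape == (H, W)  # the hidden state stays a matrix
```

Here `np.tensordot(..., axes=2)` plays the role of the maximal index contraction in the answer's notation.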
2021/02/20
730
2,508
<issue_start>username_0: Is there any situation in which breadth-first search is preferable over A\*?<issue_comment>username_1: The only general situation that comes to my mind where BFS could be preferred over A\* is when your graph is unweighted and the heuristic function is $h(n) = 0, \forall n \in V$. However, in that case, A\* ([which is equivalent to UCS](https://ai.stackexchange.com/a/9183/2444)) behaves like BFS (except for the goal test: [see section 3.4.2 of this book](https://cs.calvin.edu/courses/cs/344/kvlinden/resources/AIMA-3rd-edition.pdf#page=104)), i.e. it will first expand nodes at level $l$, then at level $l+1$, etc.; this is because nodes at level $l+1$ are farther away from the initial node than nodes at level $l$, i.e. $f(n') = g(n') > f(n) = g(n)$, for all $n' \in V\_{l+1}$ and $n \in V\_{l}$, where $g(n)$ is the cost of the shortest path from the initial node to $n$ and $V\_{l}$ is the subset of nodes of the search space that belong to level/layer $l$ of the search tree. So, if you think that your goal is close to the initial node in terms of levels/layers and each node has a small branching factor (i.e. not many children), then BFS may be a good idea (but you could also use A\*). BFS can also be used as a sub-routine in other algorithms, such as [the Ford–Fulkerson algorithm](https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm). So, in these cases, BFS may also be preferable. Upvotes: 2 <issue_comment>username_2: There is an inherent **assumption** in heuristic search that the heuristic function points you in the right direction. A\* largely depends on how good the heuristic function is. Two nice properties for the heuristic function are for it to be admissible and **consistent**. If the latter holds, I can't think of any case where BFS would outperform A\*. However, this property doesn't hold in every case. If you select a **misleading** heuristic function (i.e.
a function that points you in the **wrong direction**) then BFS should outperform A\*. --- I drew a dumb example to illustrate my point. You want to reach from the Initial State (IS) to the Goal State (GS). The shade of green indicates the value of the heuristic (the greener the better). ![](https://i.stack.imgur.com/rhjOM.png) In the third case, where you have a misleading heuristic, A\* would tend to explore areas near the top right first, then it would go down towards the GS. This case is actually worse than having no heuristic and using a BFS. Upvotes: 1
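The misleading-heuristic point can be reproduced on a tiny graph (my own toy example, not from the answer: a cheap-looking dead end pulls A\* the wrong way, so it expands more nodes than BFS before reaching the goal):

```python
import heapq
from collections import deque

# True shortest path is S -> A -> G; B1-B2-B3 is a dead-end chain.
graph = {
    'S': ['A', 'B1'], 'A': ['S', 'G'], 'G': ['A'],
    'B1': ['S', 'B2'], 'B2': ['B1', 'B3'], 'B3': ['B2'],
}
# Misleading heuristic: the dead end looks cheap, the true path looks expensive.
h = {'S': 0, 'A': 10, 'G': 0, 'B1': 0, 'B2': 0, 'B3': 0}

def bfs_expansions(start, goal):
    seen, queue, expanded = {start}, deque([start]), 0
    while queue:
        node = queue.popleft()
        expanded += 1
        if node == goal:
            return expanded
        for nb in graph[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return expanded

def astar_expansions(start, goal):
    g = {start: 0}
    frontier, expanded = [(h[start], start)], 0
    while frontier:
        _, node = heapq.heappop(frontier)  # pop the node with smallest f = g + h
        expanded += 1
        if node == goal:
            return expanded
        for nb in graph[node]:
            if nb not in g or g[node] + 1 < g[nb]:
                g[nb] = g[node] + 1
                heapq.heappush(frontier, (g[nb] + h[nb], nb))
    return expanded

# A* explores the whole dead end first, so BFS wins on this graph.
assert bfs_expansions('S', 'G') < astar_expansions('S', 'G')
```

With an admissible, consistent heuristic the inequality would typically go the other way.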
2021/02/22
1,168
4,886
<issue_start>username_0: I once read somewhere that there is a range of learning rate within which learning is optimal in almost all the cases, but I can't find any literature about it. All I could get is the following graph from the paper: [*The need for small learning rates on large problems*](https://www.researchgate.net/publication/3907199_The_need_for_small_learning_rates_on_large_problems) [![enter image description here](https://i.stack.imgur.com/o7z4Y.jpg)](https://i.stack.imgur.com/o7z4Y.jpg) In the context of neural networks trained with gradient descent, is there a range of learning rates that should be used to reduce the training time and get a good performance in almost all problems?<issue_comment>username_1: The visualisation can be found in [*The need for small learning rates on large problems*](https://www.researchgate.net/publication/3907199_The_need_for_small_learning_rates_on_large_problems). This paper by <NAME> and <NAME> from 2001 investigates the role of learning rates in gradient descent algorithms. In general, different algorithms assign different meanings to the same word 'learning rate'. For example, the learning rate in a gradient descent algorithm is not comparable to the learning rate in a tabular reinforcement learning algorithm such as Q-learning. This means that a single 'best' learning rate does not exist, considering the different concepts denoted by the term 'learning rate' in different algorithms. Additionally, the learning rate is typically considered a part of the learning algorithm. The no free lunch theorem of machine learning tells us that no particular learning algorithm performs best across tasks. Because the learning rate is part of the solution, no particular learning rate is 'best' across tasks either. In practice, you should set the learning rate sufficiently low to not 'overshoot' the optimal solution, which would be evidenced by oscillations in the error (no convergence).
But you should also set it high enough to obtain reasonable performance given the amount of available training time. Finding the learning rate that gives you the right trade-off typically requires a combination of domain knowledge and experimentation on the training set. Upvotes: 2 <issue_comment>username_2: The 2015 article [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/abs/1506.01186) by <NAME> gives some good suggestions for finding an ideal range for the learning rate. The paper's primary focus is the benefit of using a learning rate schedule that varies the learning rate cyclically between some lower and upper bound, instead of trying to choose a single fixed learning rate value. For this to work, you still need to select good lower and upper bounds, and Smith suggests training the model for a few epochs while increasing the learning rate across a large range of values. At first, the learning rate will be too small to make any progress at all. As the learning rate increases, eventually, the loss will begin to decrease, but, at some point, the learning rate will get too large, and the loss will stop decreasing and even begin increasing. Your ideal range consists of the learning rate values where the loss was decreasing steeply. After finding your range, you can reset the weights and biases on your model and restart training using whatever learning rate schedule you plan to use for training.
At the first red line, loss starts to decrease sharply, and once it reaches the second red line, the plot has begun to level off, so I can end my search. For this particular experiment, my ideal learning rate range had a minimum of 4.01e-4 and a maximum of 2.58e-2. For more information, I suggest reading this [Keras Learning Rate Finder](https://www.pyimagesearch.com/2019/08/05/keras-learning-rate-finder/) post, which contains more information on how the process works and a tutorial for how to program it using Keras and Tensorflow. --- Edit: I recently learned that in newer versions of Tensorflow / Keras, the `on_train_batch_end` callback now returns an aggregated average loss instead of the raw loss for the given batch, which gives poor results for the learning rate finder. See this [Github Issue](https://github.com/keras-team/keras/issues/17167) for more information and the workaround I am currently using. Upvotes: 3
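The "too small, just right, diverging" behaviour behind the range test can be reproduced on a toy quadratic (pure NumPy-free Python; the function and the rates are my own illustration, not from the post):

```python
def final_loss(lr, steps=50):
    """Gradient descent on f(w) = w^2 (gradient 2w), starting from w = 1.

    The update w <- w - lr * 2w converges iff |1 - 2*lr| < 1, i.e. lr < 1.
    """
    w = 1.0
    for _ in range(steps):
        w -= lr * 2 * w
    return w ** 2

assert final_loss(1e-5) > final_loss(1e-2)  # too small: barely any progress
assert final_loss(1e-2) > final_loss(0.3)   # larger rate: much faster descent
assert final_loss(1.5) > 1.0                # too large: the loss diverges
```

A learning rate finder is essentially sweeping `lr` and locating the steep-descent region between the first and third regimes.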
2021/02/22
702
2,661
<issue_start>username_0: From what I understand, experience replay works by storing tuples of $(s, a, r, s')$ to be sampled for training. I understand why we store $s$, $r$ and $s'$. However, I do not understand the need for storing the action $a$. As I recall, the reward $r$ and the next state $s'$ are both used to calculate the target values. We can then compare these target values to the output we get when we do a forward-pass using state $s$ It seems to me that the stored action $a$ is not required for this process to work; or am I missing something? Why would we store the action $a$ if it isn't used in the training process itself? Please, forgive me if this question has been answered before. I looked, but was unable to find anything other than generic explanations as to what experience replay is and why we do it.<issue_comment>username_1: The goals of experience replay as first proposed by [Lin (1992)](https://link.springer.com/content/pdf/10.1007/BF00992699.pdf) and more recently applied successfully in the DQN algorithm by [Mnih et al. (2013)](https://arxiv.org/pdf/1312.5602.pdf) are to break temporal correlations of updates and to prevent forgetting of experiences that might be useful later on. To meet these goals, the replay buffer should store tuples required in the learning step. Most works that use experience replay, including those mentioned before, learn (to approximate) the Q function, i.e. $Q(s,a)$. Clearly, this function depends on the sampled action $a$. If $a$ is not used at all in the training process then it would not need to be stored in the replay buffer. In such a scenario, however, the problems that motivate the replay buffer may not be present in the first place. Upvotes: 2 <issue_comment>username_2: We need to store the action $a$ as it tells us the action that we took in the state that we are backing up. Suppose we are in state $s$ and we take action $a$, then we will receive a reward $r$ and next state $s'$. 
The goal of RL, and in particular DQN (I mention DQN as it is the first algorithm that comes to mind when I think of a replay buffer but it is of course not the only algorithm to make use of one), is that we are trying to learn optimal state-action value functions $Q(s, a)$. We thus want our value function to be able to predict $y = r + \gamma \max\_{a'}Q(s', a')$, i.e. given $s$ and $a$ we want to be able to predict $y$. As you can see, we need to know which action we took in state $s$ so that we can train our value function to approximate $y$, and the value function clearly depends on $a$, hence we need to also store $a$ in our experience tuple. Upvotes: 3 [selected_answer]
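A minimal sketch of why $a$ is needed at learning time (tabular Q-learning with a replay buffer; the tiny two-action example and all numbers are invented for illustration):

```python
import random
from collections import defaultdict, deque

random.seed(0)
gamma, alpha = 0.9, 0.5
Q = defaultdict(float)      # Q[(state, action)]
buffer = deque(maxlen=100)  # stores full (s, a, r, s') tuples

# Without `a`, we couldn't tell which Q(s, .) entry explains the observed reward.
buffer.append(('s0', 'left', 0.0, 's1'))
buffer.append(('s0', 'right', 1.0, 's1'))

actions = ['left', 'right']
for _ in range(200):
    s, a, r, s_next = random.choice(buffer)          # sample a transition
    target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])        # update only the taken action

# Only the stored action lets the values diverge for the two choices.
assert Q[('s0', 'right')] > Q[('s0', 'left')]
```

Dropping `a` from the tuple would collapse both transitions into "was in `s0`, got some reward", making the targets unlearnable.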
2021/02/25
609
2,358
<issue_start>username_0: When solving a classification problem with neural nets, be it text or images, how does the number of classes affect the model size and amount of data needed to train? Are there any soft or hard limitations where the number of outputs starts to stall learning? Do you know about any analysis of how the number of classes scales the model? Does the optimal size increase proportionally with the number of outputs? Does it increase at all? If it does increase, is the relationship linear or exponential?<issue_comment>username_1: The goals of experience replay as first proposed by [Lin (1992)](https://link.springer.com/content/pdf/10.1007/BF00992699.pdf) and more recently applied successfully in the DQN algorithm by [Mnih et al. (2013)](https://arxiv.org/pdf/1312.5602.pdf) are to break temporal correlations of updates and to prevent forgetting of experiences that might be useful later on. To meet these goals, the replay buffer should store tuples required in the learning step. Most works that use experience replay, including those mentioned before, learn (to approximate) the Q function, i.e. $Q(s,a)$. Clearly, this function depends on the sampled action $a$. If $a$ is not used at all in the training process then it would not need to be stored in the replay buffer. In such a scenario, however, the problems that motivate the replay buffer may not be present in the first place. Upvotes: 2 <issue_comment>username_2: We need to store the action $a$ as it tells us the action that we took in the state that we are backing up. Suppose we are in state $s$ and we take action $a$, then we will receive a reward $r$ and next state $s'$. The goal of RL, and in particular DQN (I mention DQN as it is the first algorithm that comes to mind when I think of a replay buffer but it is of course not the only algorithm to make use of one), is that we are trying to learn optimal state-action value functions $Q(s, a)$. 
We thus want our value function to be able to predict $y = r + \gamma \max\_{a'}Q(s', a')$, i.e. given $s$ and $a$ we want to be able to predict $y$. As you can see, we need to know which action we took in state $s$ so that we can train our value function to approximate $y$, and the value function clearly depends on $a$, hence we need to also store $a$ in our experience tuple. Upvotes: 3 [selected_answer]
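As a concrete illustration of the answer above, here is a minimal replay-buffer sketch (hypothetical class and function names; real DQN implementations differ in detail). It stores the full (s, a, r, s', done) tuple and computes the TD target y:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal replay buffer storing full (s, a, r, s_next, done) tuples.

    The action `a` must be stored because the target
    y = r + gamma * max_a' Q(s', a') is used to train Q(s, a)
    for that specific action.
    """
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation of updates.
        return random.sample(self.buffer, batch_size)

def td_target(r, done, q_values_next, gamma=0.99):
    """TD target y = r + gamma * max_a' Q(s', a'); terminal states do not bootstrap."""
    return r if done else r + gamma * max(q_values_next)
```

Here `q_values_next` stands in for the Q-network's (or target network's) outputs evaluated at `s'`, one value per action.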
2021/02/26
519
2,162
<issue_start>username_0: I'd like to ask you how we know that neural networks start by learning small, basic features or "parts" of the data and then use them to build up more complex features as we go through the layers. I've heard this a lot and seen it on videos like this one [of 3Blue1Brown on neural networks for digit recognition](https://www.youtube.com/watch?v=aircAruvnKk&t=376s&ab_channel=3Blue1Brown). It says that in the first layer the neurons learn and detect small edges and then the neurons of the second layer get to know more complex patterns like circles... But I can't figure out, based on pure maths, how this is possible.<issue_comment>username_1: We do it experimentally; you're able to look at what each layer is learning by tweaking various values throughout the network and doing gradient ascent. For more detail, watch this lecture: <https://www.youtube.com/watch?v=6wcs6szJWMY&list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv&index=12> It provides many methods used for understanding exactly what your model is doing at a certain layer, and what features it has learnt. Upvotes: 3 [selected_answer]<issue_comment>username_2: The network architecture is relevant to this question. Convolutional neural network architectures enforce the building up of features because the neurons in earlier layers have access to a small number of input pixels. Neurons in deeper layers are connected (indirectly) to more and more pixels, so it makes sense that they identify larger and larger features. Lots of the visual examples available online, which show progressions from, for example, a curve, to a circle, to a part of an animal, to a whole animal, are based on convolutional networks. The beautiful examples from the Harvard lecture in the other answer use convolutional networks. With that being said, increasing complexity with each layer is true generally, including for dense architectures like the 3Blue1Brown one.
It's just that this is a more abstract 'increase in nonlinearity' rather than an increase in spatial feature size. Depending on the task the network is learning, earlier layers will be more 'basic', but their neurons might use large areas of the input. Upvotes: 1
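The "neurons in deeper layers see more pixels" point from the answer above can be made concrete with the standard receptive-field recurrence; a small sketch (the layer configurations below are illustrative):

```python
def receptive_field(layers):
    """Receptive field of the deepest layer w.r.t. the input.

    `layers` is a list of (kernel_size, stride) pairs, one per conv layer.
    Uses the standard recurrence r_l = r_{l-1} + (k_l - 1) * j_{l-1},
    where j is the cumulative stride ("jump") of all earlier layers.
    """
    r, jump = 1, 1
    for k, s in layers:
        r += (k - 1) * jump
        jump *= s
    return r
```

For example, a stack of three 3x3 stride-1 convolutions covers a 7x7 input patch, the same spatial extent as a single 7x7 convolution, which is why deeper neurons can respond to larger structures.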
2021/02/27
378
1,510
<issue_start>username_0: I remember reading about two different types of goals for an intelligence. The gist was that the first type of goal is one that "just is" - it's an end goal for the system. There doesn't need to be any justification for wanting to achieve that goal, since wanting to do that is a fundamental purpose for that system. The second type of goal is a *stepping stone*, for lack of better words. Those aren't end goals in and of themselves, but they would help the system achieve its primary goals better. I've forgotten the names for these types of goals and Googling didn't help me much. Is there a standard definition for these different types of goals?<issue_comment>username_1: Do you mean [*weak AI* and *strong AI*](https://www.ibm.com/cloud/learn/strong-ai#toc-strong-ai--YaLcx8oG)? The former is roughly about pretending to be intelligent, i.e. doing intelligent things, but without trying to work the same way an actually intelligent system would work. The latter attempts to replicate *how* an intelligent system works, so it would require us to understand a lot more about cognition than if we just mocked up a quick little chatbot to try and complete the Turing Test. Upvotes: 1 <issue_comment>username_2: AI researcher Rob Miles uses the terms **'terminal goal'** and **'instrumental goal'** for the first and second types respectively. I'm not sure if these are standard parlance in the field; however, Rob explains them in this video: <https://youtu.be/hEUO6pjwFOo?t=363> Upvotes: 2
2021/03/02
530
2,142
<issue_start>username_0: I am working on a classification problem. I have a dataset $S$ and I am training several prediction algorithms using $S$: Naive Bayes, SVM, classification trees. Intuitively, I was planning to combine my models and, for each data point in the test sample $S'$, take the majority vote as my prediction. Does that make sense? I feel this is a very simplistic way to combine different models.<issue_comment>username_1: It is a simple way to do it, but it is not wrong. If you are getting probabilities from each model, you can average them and then do the classification on the averaged probabilities. You can also assign weights to each model, fitted on a validation set by regressing against each model's predictions. Upvotes: 1 <issue_comment>username_2: These are generally known as [ensemble methods](https://en.wikipedia.org/wiki/Ensemble_learning). Your method is essentially what Scikit-Learn's [`VotingClassifier`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.VotingClassifier.html) does, which is perfectly reasonable and might give you better results. Of course, if you have an ensemble of classifiers and some of them perform quite poorly, the ensemble might not be able to beat the best classifier: you'll need to check this in your cross-validation. Be aware that any confidence or probability estimates may not be well-calibrated, so the predictions from the ensemble may not be particularly meaningful. There are more elaborate ways of ensembling classifiers. Random forest classifiers, for example, are just ensembles of decision trees; a technique called [bagging](https://en.wikipedia.org/wiki/Bootstrap_aggregating) is also used here to improve performance. The technique of using a second model to weight the predictions from an ensemble is known as *stacked generalisation*. It was introduced by Wolpert in 1992 [1], and you can find plenty of interesting examples of using the technique, e.g.
on [Kaggle](https://www.kaggle.com/arthurtok/introduction-to-ensembling-stacking-in-python). [1] D. H. Wolpert. *Stacked generalization*. Neural Networks 5.2 (1992), pp. 241-259. Upvotes: 2
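The hard majority vote described in the question can be sketched in a few lines of plain Python; this mirrors what a hard-voting ensemble such as `VotingClassifier` does, without the scikit-learn machinery (names here are illustrative):

```python
from collections import Counter

def majority_vote(predictions):
    """Hard-voting ensemble.

    `predictions` is a list of per-model prediction lists (one predicted
    label per test point). Returns the per-point majority label; ties are
    broken by whichever label was seen first.
    """
    n_points = len(predictions[0])
    combined = []
    for i in range(n_points):
        votes = Counter(model_preds[i] for model_preds in predictions)
        combined.append(votes.most_common(1)[0][0])
    return combined
```

With three models voting [1, 0, 1], [1, 1, 0] and [0, 1, 1] over three test points, the ensemble predicts 1 for every point even though each individual model is wrong once.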
2021/03/08
605
2,573
<issue_start>username_0: Usually, neural networks use raw data; you do not need to extract features manually. NNs can find and extract good features, which are patterns in an image, signal, or any other kind of data. When we check layer outputs in a NN, we can see and visualize how NNs extract features. Do neural networks extract features by themselves every time? When is it necessary to manually extract or engineer features to feed into the neural network rather than providing raw data? For example, I had time-series sensor data. When I used LSTM & GRU models on the raw dataset, I had bad test accuracy, but when I extracted some features manually, I got really good test-set accuracy. I extracted Fast Fourier Transform and cross-correlation features, which helped a lot to increase accuracy. Manual feature extraction helped to solve my problem.
2021/03/09
812
3,070
<issue_start>username_0: I am looking for advice or suggestions. I have photos like these: [photo\_1](https://i.stack.imgur.com/or303.jpg) and [photo\_2](https://i.stack.imgur.com/ucbsj.jpg) and many more similar to that. The average shape of these photos is about 160 x 100. What we are doing is trying to find whether or not the person in a photo is wearing a safety vest and helmet (if the person is wearing both it is 1, if something is missing or both are missing it is 0). Training data consists of about 5k almost equally distributed image sets. I have tried to use augmentation techniques (flipping, adding noise, brightness correction) but results didn't improve. I tried to train on many pretrained popular models: resnet101, mobilenet\_v2, efficientnetb3, efficientnetb0, DenseNet121, InceptionResNetV2, InceptionV3, ResNet152V2, ResNet50V2, but the results are not pleasing. I have tried different input sizes ranging from 224x224 to 112x112 but results didn't improve as much as I would have liked. And the weird thing is that the image shape does not correlate with whether or not there are more wrong predictions using bigger or smaller images. As a side note, I would like to ask a couple of questions: 1. Should I write my own small net? 2. Are the models that I use too big for this problem? Any advice will be appreciated.
2021/03/09
714
3,029
<issue_start>username_0: I am self-studying applications of deep learning to NLP and machine translation. I am confused about the concepts of "Language Model", "Word Embedding", and "BLEU Score". It appears to me that a language model is a way to predict the next word given the previous words. Word2vec is the similarity between two tokens. BLEU score is a way to measure the effectiveness of the language model. Is my understanding correct? If not, can someone please point me to the right articles, papers, or any other online resources?<issue_comment>username_1: **Simplified:** word embeddings do not consider context; language models do. For embeddings such as Word2Vec, GloVe, or fastText, there exists one fixed vector per word. Think of the following two sentences: > > The fish ate the cat. > > > and > > The cat ate the fish. > > > If you averaged their word embeddings, they would have the same vector but, in reality, their meanings (semantics) are very different. Then the concept of contextualized word embeddings arose with language models that *do* consider the context and give different embeddings depending on the context. Both word embeddings (e.g. Word2Vec) and language models (e.g. BERT) are ways of representing text, where language models capture more information and are considered state-of-the-art for representing natural language in a vectorized format. The **BLEU score** is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. This is not directly related to the difference between traditional word embeddings and contextualized word embeddings (aka language models). Upvotes: 4 [selected_answer]<issue_comment>username_2: A **language model** aims to estimate the probability of one or more words given the surrounding words. Given a sentence composed of $w\_{1}, \dots, w\_{i-1}, \\_, w\_{i+1}, \dots, w\_{n}$, you can find the i-th (missing) word using a language model.
In this way, you can estimate the most probable word using, for example, the conditional probability $P(w\_i = w \mid w\_1, \dots, w\_{i-1}, w\_{i+1}, \dots, w\_n)$. An example of a simple language model is an n-gram model, where instead of conditioning on all previous words, you look only at the previous n words. **Word embeddings** are a distributed representation of a word. Instead of using an index or a one-hot encoding to represent a word, a dense vector is used. If two words have similar embeddings, then these words share some properties. These properties are driven by the way embeddings are constructed; for example, in word2vec, two words with similar embeddings are two words that often appear in the same context, which is not to say they have the same meaning. Sometimes words with opposite meanings can have similar embeddings just because they are placed within the same sentences/contexts. The **BLEU score** is a way to quantify the quality of an automatic translation. The score aims to look at how different the model's translation is from a human translation. Upvotes: 2
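A toy counting-based bigram model illustrates the n-gram idea from the answer above (no smoothing, so unseen contexts are not handled; names are illustrative):

```python
from collections import Counter, defaultdict

def train_bigram_lm(corpus):
    """Count-based bigram model: estimates P(w_i | w_{i-1}) from raw counts.

    `corpus` is a list of tokenised sentences (lists of words).
    """
    counts = defaultdict(Counter)
    for sentence in corpus:
        for prev, word in zip(sentence, sentence[1:]):
            counts[prev][word] += 1
    return counts

def most_probable_next(counts, prev):
    """Return argmax_w count(prev, w), i.e. the most probable next word."""
    return counts[prev].most_common(1)[0][0]
```

On a toy corpus, `most_probable_next` fills in a blank the way the answer describes: by conditioning on the previous word only.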
2021/03/10
700
2,912
<issue_start>username_0: I'd like to ask you if feature engineering is an important step for a deep learning approach. By feature engineering I mean some advanced preprocessing steps, such as looking at histogram distributions and trying to make them look like a normal distribution or, in the case of time series, making them stationary first (not filling missing values or normalizing the data). I feel like with enough regularization, the deep learning models don't need feature engineering compared to some machine learning models (SVMs, random forests, etc.), but I'm not sure.<issue_comment>username_1: From what I believe, feature engineering is important; it's part of the job of an ML network designer. Network design involves: * Feature engineering: what should be in the input to the network, as processed from similar or totally different data * Deciding network shape, layer shapes, types of neurons in layers, etc. * Feature engineering again (but for labels): what should the output be, either regression values or classes And possibly also rather simple tasks, as mentioned in the question: filling missing values, normalising data, creating pre-feeding normalisation steps in code, etc. Upvotes: 1 <issue_comment>username_2: No, feature engineering is not an important step for deep learning (**EDIT**: compared to other techniques) provided that you have enough data. If your dataset is big enough (which varies from task to task), you can perform what is called [end-to-end learning](https://towardsdatascience.com/e2e-the-every-purpose-ml-method-5d4f20dafee4). To further clarify, according to [this article](https://arxiv.org/abs/2012.00152), deep neural nets trained with the backpropagation algorithm are basically doing automated feature engineering. > > I feel like with enough regularization, the deep learning models don't need feature engineering compared to some machine learning models (SVMs, random forests, etc.) > > > That is basically correct.
Beware, you need a large dataset. When a large dataset is not available, you will do some manual work (feature engineering). Nevertheless, it is always a good idea to look at your data first! **EDIT** I would also like to quote [Rich Sutton](http://incompleteideas.net/IncIdeas/BitterLesson.html) here: > > We want AI agents that can discover like we can, not which contain > what we have discovered. Building in our discoveries only makes it > harder to see how the discovering process can be done. > > > Perhaps this statement is more true with Deep Learning than with previous techniques, but we are not quite there yet. And as user nbro rightfully pointed out in the comments below, you may still need to normalise your data, pre-process it, remove outliers, etc. Thus in practice, you may still need to transform your data to a certain degree, depending on many factors. Upvotes: 1 [selected_answer]
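Even when heavy feature engineering is skipped, the light preprocessing mentioned above (normalisation and the like) is still common practice; a minimal z-score sketch in plain Python:

```python
import statistics

def standardise(values):
    """Z-score normalisation: (x - mean) / std.

    Guards against zero variance by falling back to a divisor of 1.0,
    an illustrative choice for this sketch.
    """
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values) or 1.0
    return [(v - mu) / sigma for v in values]
```

After this transform, each feature has zero mean and unit variance, which typically helps gradient-based training regardless of how much feature engineering is done.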
2021/03/10
1,011
4,046
<issue_start>username_0: When creating artificial columns for your categorical variables, there are two mainstream methods you could use: *Disclaimer: For this example, I use the following definitions of dummy variables and one-hot-encoding. I'm aware both methods can be used to either return `n` or `n-1` columns.* *Dummy variables*: each category is converted to its own column and the value 0 or 1 indicates if that category is present for each record. *One-hot-encoding*: similar to dummy variables, but one column is dropped, as its value can be derived from the other columns. This is to prevent multicollinearity and the [dummy variable trap](https://en.wikipedia.org/wiki/Dummy_variable_(statistics)). As an arbitrary example, let's take people's favorite color: pink, blue and green. For a person whose favorite color is pink, the dummy and one-hot-encoded data would look as follows: dummy variables

| person\_id | favorite\_color\_pink | favorite\_color\_blue | favorite\_color\_green |
| --- | --- | --- | --- |
| xyz | 1 | 0 | 0 |

one-hot-encoded variables

| person\_id | favorite\_color\_blue | favorite\_color\_green |
| --- | --- | --- |
| xyz | 0 | 0 |

From a statistics point of view, I would use the one-hot-encoded columns to build my model. In addition, *I* can infer the favorite color is pink, because I encoded the variables. However, when I'm applying XAI to explain the prediction to someone else and *they* see the favorite color wasn't blue or green, I'm not so sure they will infer the favorite color was pink unless it's explicitly stated. So using dummy variables might serve explainability better, but brings other risks. Are there any best practices on this?
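The two encodings defined in the question above can be produced with a short helper (a sketch; the column names follow the question's example):

```python
def encode_colour(colour, categories=("pink", "blue", "green"), drop_first=False):
    """Encode `colour` into 0/1 indicator columns.

    drop_first=False -> one column per category (the question's "dummy
                        variables")
    drop_first=True  -> first category dropped; it becomes the implicit
                        baseline (the question's "one-hot-encoding")
    """
    cols = categories[1:] if drop_first else categories
    return {f"favorite_color_{c}": int(c == colour) for c in cols}
```

With `drop_first=True`, a person whose favorite colour is pink is represented by all-zero columns, which is exactly the explainability concern raised in the question.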
2021/03/13
484
1,991
<issue_start>username_0: I am new to BERT and NLP and I am a little confused about tokenization and word embeddings. My doubt is: if I use the BertTokenizer for tokenizing a sentence, do I then have to compulsorily use BertEmbedding for generating the corresponding word vectors of the tokens, or can I train my own word2vec model to generate my word embeddings while using BertTokenizer? Pardon me if this question doesn't make any sense.<issue_comment>username_1: word2vec and BERT are completely different things. You don't need to use the BERT Tokenizer for training a word2vec model. But, if you want to go ahead, here's a [link](https://towardsdatascience.com/byte-pair-encoding-the-dark-horse-of-modern-nlp-eb36c7df4f10) which can help you. Upvotes: 0 <issue_comment>username_2: Generally speaking, the power of BERT for applications like NER is that the authors (of whichever implementation you use) performed a large-scale pretraining effort to create the embeddings. You can then “fine-tune” those for your specific task using far less computation, but the rub is that you need to use the same tokenization scheme (i.e. the BERT Tokenizer) in order to have your input “fit” the existing embeddings. Intuitively, tokenization is mapping a word in your text to an index number. If the embedding was trained thinking that word number 42 is “cat”, then things won't work well if you tokenize differently and provide a 43 instead when “cat” pops up in your text. Unless you're training on a sparse language that hasn't been well represented by one of the public embeddings, the above is almost certainly your wisest approach. If, however, you really want to train the BERT architecture on new embeddings, then you can technically use any embedding scheme you like. The BERT Tokenizer uses subwords along with a few specific administrative tokens. If you were going to explore further, a byte pair encoder might be useful, especially if the language starts to veer away from, e.g., English. Upvotes: 1
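A toy word-level tokeniser illustrates the word-to-index mapping described in the answer above; note that the real BERT tokenizer uses subword (WordPiece) units, so this sketch (with made-up names) is only for intuition:

```python
def build_vocab(texts, specials=("[PAD]", "[UNK]")):
    """Build a word -> index mapping, reserving a few special tokens first."""
    vocab = {tok: i for i, tok in enumerate(specials)}
    for text in texts:
        for word in text.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def encode(text, vocab):
    """Map each word to its index; unknown words map to [UNK]."""
    unk = vocab["[UNK]"]
    return [vocab.get(w, unk) for w in text.lower().split()]
```

An embedding table trained against one such vocabulary is meaningless for indices produced by a different one, which is why the tokenizer and the pretrained embeddings must match.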
2021/03/13
575
1,855
<issue_start>username_0: Is there any *tutorial* that walks through a multi-agent reinforcement learning implementation (in Python) using libraries such as [OpenAI's Gym](https://gym.openai.com/) (for the environment), [TF-agents](https://www.tensorflow.org/agents), and [stable-baselines-3](https://github.com/DLR-RM/stable-baselines3)? I searched a lot, but I was not able to find any tutorial, mostly because Gym environments and most RL libraries are not for multi-agent RL.<issue_comment>username_1: After [checking the Internet](https://www.google.com/search?ei=2QtPYNbSItCIlwS3jIR4&q=github%20multi%20agent%20reinforcement%20learning&oq=github%20multi%20agent%20reinforcement%20learning), you will probably find several resources such as * <https://github.com/mohammadasghari/dqn-multi-agent-rl> * <https://rlss.inria.fr/files/2019/07/RLSS_Multiagent.pdf> * <https://arxiv.org/abs/2011.00583> and others. Try to understand the principles first (see above). After some reasonable amount of coding you can adapt OpenAI gym. Good luck! **Update 17 March 2022**: You may want to check this popular repository as well <https://github.com/Farama-Foundation/PettingZoo> **Update 06 August 2023**: The best multi-agent tutorial I have seen so far comes from RLlib documentation. See [RLlib for muti-agent RL](https://applied-rl-course.netlify.app/en/module2). Upvotes: 2 <issue_comment>username_2: Multi-Agent Deep Reinforcement Learning in 13 Lines of Code Using **PettingZoo** A tutorial on multi-agent deep reinforcement learning for beginners. This tutorial provides a simple introduction to using multi-agent reinforcement learning, assuming a little experience in machine learning and knowledge of Python. <https://towardsdatascience.com/multi-agent-deep-reinforcement-learning-in-15-lines-of-code-using-pettingzoo-e0b963c0820b> Upvotes: 0
2021/03/15
443
1,556
<issue_start>username_0: Given an LSTM model with 3 cells shown below, what would be the input to the left-most cell, c(t-1) and h(t-1)?[![enter image description here](https://i.stack.imgur.com/SqL6V.png)](https://i.stack.imgur.com/SqL6V.png)<issue_comment>username_1: Your question is related to the initial states of the LSTM, where `c(t-1)` is the cell state (memory) and `h(t-1)` is the previous LSTM block output. As pointed out [here](https://stats.stackexchange.com/questions/224737/best-way-to-initialize-lstm-state), it is reasonable to assume that those are random values. Upvotes: 0 <issue_comment>username_2: Most commonly, these **states are set to zero**; this usually works, but it can have a negative influence on the performance. Another option is to **initialize randomly**, but this is not straightforward, and the performance highly depends on the noise level. In [this work](https://scs-europe.net/conf/ecms2015/invited/Contribution_Zimmermann_Grothmann_Tietz.pdf), the noise level is selected in proportion to the prediction error of the first timestep in the series. A third option is to **use learnable variables**. You can initialize these randomly and the model will learn the initialization vectors by itself. This is showcased in [this answer on SO](https://stackoverflow.com/a/60321446/9891955). I'd say options one and three are the most straightforward, and I've mostly seen the first method. For reference, see [this paper](https://scs-europe.net/conf/ecms2015/invited/Contribution_Zimmermann_Grothmann_Tietz.pdf). Upvotes: 1
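The first two initialisation options above can be sketched framework-free as follows (the third option, learnable initial states, would simply make these vectors trainable parameters; `noise_scale` is an illustrative choice):

```python
import random

def init_lstm_state(hidden_size, method="zeros", noise_scale=0.1):
    """Return (h0, c0) initial states for a single LSTM layer.

    method: "zeros"  - the most common choice
            "random" - Gaussian noise; the scale is a tunable assumption here
    """
    if method == "zeros":
        h0 = [0.0] * hidden_size
        c0 = [0.0] * hidden_size
    elif method == "random":
        h0 = [random.gauss(0.0, noise_scale) for _ in range(hidden_size)]
        c0 = [random.gauss(0.0, noise_scale) for _ in range(hidden_size)]
    else:
        raise ValueError(f"unknown method: {method}")
    return h0, c0
```

Deep learning frameworks default to the zero initialisation; the returned `(h0, c0)` pair is what would be fed into the left-most cell in the question's diagram.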
2021/03/15
1,976
8,510
<issue_start>username_0: A few days ago, I started looking a bit more into AI and learning about the way it works, and it is very interesting, but I can't find a clear answer on how artificial intelligence is implemented in 3D shooter games, like COD or practically any 3D game. I just don't understand how they teach the enemies such different things based on the game to fit its narratives. For example, is the enemy "AI" in 3D games just a bunch of if-else statements, or do they actually teach the enemies to think strategically? In big AAA games, you can clearly see that enemies hide from you in shootouts and peek out to shoot, rather than just rushing in and getting killed. So, how is the AI in 3D games implemented? How do they code it? You don't need to explain in detail, but just give me the idea. Do they use algorithms?<issue_comment>username_1: Most of the so-called 'AI' enemies in games nowadays are based on hand-written logic, not machine learning, although they're called AIs. As for machine learning in games, the first technique that comes to mind is reinforcement learning: let the bots play rounds to collect rewards (positive) or punishments (negative rewards). These bots may eventually learn how to hide or peek out a little bit to shoot. I can think of some possible actions for bots in shooter games: move U/D/L/R, jump, duck, find target (coordinates already in memory, valid only within the view angle), shoot. Training bots in 3D environments is much harder than in 2D games, since every viewing angle of the bot is another environment state, 360 degrees around and 360 degrees up/down. And enemies can be at any location, in or off the frame. Upvotes: 0 <issue_comment>username_2: Overlap between AI and "Game AI" -------------------------------- Nowadays, if you search for AI online, you will find a lot of material about machine learning, natural language processing, intelligent agents and neural networks.
These are not the whole of AI by any means, especially in a historical context, but they have recently been very successful, and there is lots of published material about them. Games, especially action games, tend not to use these new popular technologies because they have other priorities. However, under the broadest definitions, there is good overlap between AI in general and game AI, or more specifically *enemy AI* within a game: * Computer-controlled opponents within a game are effectively autonomous agents operating within an environment. * The game's purpose is to entertain the player, and this usually translates into requiring challenging and believable behaviour for actions taken by opponents (for some value of "believable"). * A 3D map with a physics engine, other moving agents and possible hazards is a complex environment that needs some effort from a developer in order to define working behaviours. * Many games need to solve one or two classic AI problems such as [pathfinding](https://arrow.tudublin.ie/cgi/viewcontent.cgi?article=1063&context=itbj). All the above means that the term "Game AI" is a good choice for describing how enemy agents are designed and implemented. However, there are large differences between the goals of AI applied today in a 3D shooter and the goals of AI used in industry to control decision making. * Action games have a low CPU budget available per enemy for decision-making. Control systems in industry can have dedicated machinery, sometimes multiple machines, just to run the AI. * Action game priority is the game experience of the player, and this translates into strict targeting of a definition of "believable" for enemy behaviour that will entertain. In contrast, AI used in industry has accuracy as a high priority. Typical game AI in 3D shooters ------------------------------ Developing enemy AI in games is a specialist skill, and only partly related to the purpose of this site.
For detailed discussion you may want to look into [Game Development Stack Exchange](https://gamedev.stackexchange.com/) which has many questions and answers on topics like [Enemy AI](https://gamedev.stackexchange.com/search?q=enemy%20AI) Very roughly, the key traits of enemy AI in a game *like* Call of Duty could look like this (I have no inside knowledge of how COD does this, so may be inaccurate in places): * Enemies will have one or more modes of behaviour defined. This might be as simple as "Idle" and "Attack Player", or there could be many. For COD I would suspect there are many, and some may be in a hierarchy - e.g. there may be several sub-types of "Attack" behaviour depending on the enemy design. * Within a mode of behaviour, there may be a few different components defined, only some of which use AI routines. For instance, there will be specific animations related to walking or standing which are not AI, but there will also be some pathfinding AI if the agent is attempting to move around. * Decisions to switch between different modes of behaviour are often scripted with simple triggers, such as detecting player visibility using ray-casting between the location of player and enemy. When game AI fails to produce realistic results, it is often these high level triggers being brittle and not covering edge cases that causes it. Depending on complexity of the game and number of behaviours, there may be an algorithm managing the transitions between them, such as a [finite state engine](http://www.ai-junkie.com/architecture/state_driven/tut_state1.html) or [behaviour trees](https://jahej.com/alt/2011_02_24_introduction-to-behavior-trees.html). * Enemies will be presented with highly simplified observations of the game world from their perspective, in order to speed AI decisions. * AI systems will have CPU budget restricted. A 3D game spends significant resources rendering scenes, and will often try to render quickly, e.g. 100 times per second. 
There are multiple ways that AI budget can be allocated, but it is relatively common for AI calculations to be spread over multiple frames, and for search structures for tasks like path-finding to persist over time. * There will be a middle ground between scripted behaviour and AI-driven behaviour where analysis is done as part of game design. For instance, pathfinding routes may be pre-calculated to some degree. A system of [way points](http://jceipek.com/Olin-Coding-Tutorials/pathing.html#waypoints) is one example of this - it might be set by the game designer, it may be calculated by an AI component of the game asset-building pipeline, or it may be dynamically calculated and cached during a game session so that multiple enemy units can share it. * When complex AI is not needed to achieve a goal, when a simple calculation or "puppet-like" behaviour would do just fine, then this could be chosen instead. For instance, an enemy "aiming" at a player can be a simple vector calculation, perhaps with a fudge factor of a miss chance depending on the range to make the enemy seem fallible and not as much like a machine. I do not know how the behaviour of enemies using cover is implemented in COD. A simple variant would be to have each enemy hard-coded to use a specific piece of cover that usually works well against the player due to map design. However, it is definitely *possible* to have a search algorithm running (perhaps over multiple frames) that assesses nearby locations that the enemy could reach against the player's current position, and then pick those as variables to plug into the "take cover and fire on player" scripted behaviour. That assessment would use the same kind of visibility detection between enemy and player that is used to trigger changes between "Idle" and "Attack" behaviours for the enemy.
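The trigger-driven mode switching described above is often implemented as a small finite state machine. The following is a minimal sketch; the state names, the simplified 1-D "ray-cast", and the transition rules are invented for illustration and are not taken from any actual game's code:

```python
# Minimal finite-state-machine sketch for enemy mode switching.
# State names and the visibility trigger are illustrative only.

class EnemyFSM:
    def __init__(self):
        self.state = "idle"

    def can_see_player(self, enemy_pos, player_pos, obstacles):
        # Stand-in for a real ray-cast: visible if no obstacle lies
        # between the two positions (real engines cast a 3D ray).
        lo, hi = sorted((enemy_pos, player_pos))
        return not any(lo < o < hi for o in obstacles)

    def update(self, enemy_pos, player_pos, obstacles):
        visible = self.can_see_player(enemy_pos, player_pos, obstacles)
        if self.state == "idle" and visible:
            self.state = "attack"
        elif self.state == "attack" and not visible:
            self.state = "idle"
        return self.state

fsm = EnemyFSM()
print(fsm.update(0.0, 10.0, [5.0]))   # obstacle in between -> stays "idle"
print(fsm.update(0.0, 10.0, [20.0]))  # clear line of sight -> "attack"
```

Real games replace the visibility check with engine ray-casts and usually have many more states, but the decision logic stays this cheap per enemy per frame.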
--- As an aside, one related thing I find interesting is [using modern AI techniques to blend animations and make interactions between actors and the environment look more realistic](https://www.youtube.com/watch?v=Mnu1DzFzRWs). Although it is a lower-level feature than the enemy behaviour you are asking about, it is an interesting cross-over between robotics and game playing that we will likely see in next-generation games, and probably applied first to the more detailed player model animations. [Here is another example applied to a humanoid agent switching smoothly between different tasks](https://www.youtube.com/watch?v=cTqVhcrilrE), which any game player will recognise as something which game engines cannot do well at the moment - there are usually many jarring transitions caused by events in the game. Upvotes: 4 [selected_answer]
2021/03/17
1,966
8,522
<issue_start>username_0: Hello, I am currently doing research on the effect of altering a neural network's structure. In particular, I am investigating what effect putting a random DAG (directed acyclic graph) in the hidden layer of a network, instead of the usual fully connected bipartite graph, would have. For instance, my neural network would look something like this: [![enter image description here](https://i.stack.imgur.com/xa5wm.png)](https://i.stack.imgur.com/xa5wm.png) Basically, I want the ability to create any structure in my hidden layer as long as it remains a DAG [add any edge between any nodes regardless of layers]. I have tried creating my own library to do so, but it proved to be much more tedious than anticipated, therefore I am looking for ways to do this with existing libraries such as Keras, PyTorch, or TensorFlow.<issue_comment>username_1: Most of the so-called 'AI' enemies in games nowadays are based on human-crafted logic and not machine learning, although they're called AIs. Speaking of games, the first thing for ML is reinforcement learning. Let the bots play rounds to get rewards (positive) or punishments (negative rewards). These bots may finally learn how to hide or move out a little bit to shoot. I can guess some possible actions for bots in shooter games: move U/D/L/R, jump, duck, find target (coordinates already in memory, valid only in view angle), shoot. Training a bot in 3D environments is much harder than in 2D games, since every looking angle of the bot is another environment state, 360 degrees around and 360 degrees up/down. And enemies can be at any location, in or off the frame. Upvotes: 0
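One way to get the flexibility asked for, without writing a graph library from scratch, is to keep all hidden units in a single vector ordered topologically and elementwise-mask a dense weight matrix with the DAG's adjacency matrix. This is a common trick, but a suggestion on my part, not a built-in Keras/PyTorch feature. A framework-agnostic numpy sketch of the forward pass:

```python
import numpy as np

def dag_forward(x, W_in, W_hidden, mask, f=np.tanh):
    """Forward pass through hidden units ordered topologically.

    W_hidden[i, j] is the weight of edge j -> i; `mask` zeroes out
    every pair that is not an edge of the chosen DAG.  Because units
    are in topological order, the mask is strictly lower-triangular,
    so each unit only sees already-computed predecessors.
    """
    W = W_hidden * mask            # keep only the DAG's edges
    h = np.zeros(W.shape[0])
    pre = W_in @ x                 # contribution from the input layer
    for i in range(len(h)):        # topological order = index order
        h[i] = f(pre[i] + W[i] @ h)
    return h

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5
# Random strictly lower-triangular boolean mask = random DAG structure.
mask = np.tril(rng.random((n_hidden, n_hidden)) < 0.5, k=-1)
h = dag_forward(rng.normal(size=n_in),
                rng.normal(size=(n_hidden, n_in)),
                rng.normal(size=(n_hidden, n_hidden)),
                mask)
print(h.shape)  # (5,)
```

In PyTorch the same idea can be expressed by storing the mask as a non-trainable buffer and computing `(self.W * self.mask) @ h` inside `forward`, so gradients flow only through the permitted edges.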
2021/03/17
1,828
7,027
<issue_start>username_0: My intuition is that there is some overlap between understanding language and symbolic mathematics (e.g. algebra). The rules of algebra are somewhat like grammar, and the step-by-step arguments get you something like a narrative. If one buys this premise, it might be worth training an AI to do algebra (solve for x, derive this equation, etc.). Moreover, when variables represent "real" numbers (as seen in physics, for example), algebraic equations describe the real world in an abstracted, "linear" way somewhat similar to natural language. Finally, there are exercises in algebra, like simplifying, deriving useful equations, etcetera, which edge into the realm of the subjective, yet it is still much more structured and consistent than language. It seems like this could be a stepping stone towards the ambiguities of natural language. Can anyone speak to whether this has either (1) been explored or (2) is a totally bogus idea?<issue_comment>username_1: A lot of natural language processing software is (in 2021) using statistical approaches. Read [*The Deep Learning Revolution*](https://rads.stackoverflow.com/amzn/click/com/026203803X) (by T. Sejnowski), [*Artificial Beings: The Conscience of a Conscious Machine*](https://rads.stackoverflow.com/amzn/click/com/B00BPAGQWK) (by J. Pitrat), [*Introduction to Deep Learning*](https://rads.stackoverflow.com/amzn/click/com/0262039516) (by E. Charniak). However, mixed approaches (like in [RefPerSys](http://refpersys.org/)) can also be used. The so-called [frame-based](https://en.wikipedia.org/wiki/Frame-based_terminology) approaches can be, and have been, used in NLP. From an abstract point of view, they are close to algebra, and they are probably still one of the best approaches for *generating* natural language sentences (in written form). Pitrat taught and advocated a symbolic AI approach to NLP for dozens of years.
You could read his French book [*Métaconnaissances, futur de l'IA*](https://www.decitre.fr/livres/metaconnaissance-9782866012472.html) (metaknowledge, future of AI). Read also books like [*Knowledge Representation and Reasoning*](https://rads.stackoverflow.com/amzn/click/com/1558609326) (by Brachman & Levesque). You might also read (if you can read French) <NAME>'s [*Théorie des Ensembles*](https://www.amazon.fr/Th%C3%A9orie-ensembles-N-Bourbaki/dp/3540340343) or (in English) [*Interactive Theorem Proving and Program Development*](https://rads.stackoverflow.com/amzn/click/com/3540208542) (by Bertot & Castéran). Symbolic AI approaches are definitely "algebra-like". In practice, they don't need powerful GPGPUs and often don't do a lot of floating-point operations. I assume you want to process human language text available as textual files (e.g. HTML, XML, UTF-8 encoded plain text, etc.). If you want to process sounds and speech (for example, an audio signal coming from a microphone), you do need much more signal processing and floating-point operations. If you want to process images (e.g. handwritten text), it is also different. In France, I would recommend Professor [<NAME>](https://mohammeddaoudi.github.io/). Upvotes: 2 <issue_comment>username_2: One way to ask whether these two problems are related is to ask whether we could solve math/algebra equations with NLP approaches, and the answer is yes: it's an absolutely valid idea and it has been approached by many researchers. For example, in the ["Deep Learning for Symbolic Mathematics"](https://arxiv.org/pdf/1912.01412.pdf) paper by Facebook researchers, an NLP-based approach was used to solve calculus-level math. Or in [this paper](https://arxiv.org/pdf/1811.00720.pdf), the authors propose a method to extract semantics from math problems stated in words and solve them.
In fact, even very simple approaches like a small LSTM network [could work](https://machinelearningmastery.com/learn-add-numbers-seq2seq-recurrent-neural-networks/) for simple, strictly stated problems. Upvotes: 2 <issue_comment>username_3: There isn't, really. Natural language is way more complex and irregular than algebra, which is far more formalised and unambiguous. So far, in NLP, most success/progress has been made in little toy domains, which exclude most of the complexities of real life, including many ambiguities. When you say the rules of algebra are somewhat like grammar, then that is because it is essentially a formal language, for which we can specify a grammar. There is currently no complete grammar for any human language (and I doubt there ever will be), let alone a formal one that can be processed by a computer. This was one of the reasons why the first AI boom, where a lot of over-hyped promises were made about being able to translate Russian into English automatically, failed abysmally: natural languages are more than just formal grammars of lexical items. Stochastic approaches have gone some way towards pragmatic solutions, but when it comes to understanding language they are basically a fudge. And don't get me started on deep learning approaches to NLP. So the only relationship is that we use the term 'grammar' for the descriptive formalisms in both cases; a formal grammar of algebra would be very different from a grammar for a human language. This doesn't mean, however, that approaches developed in the field of NLP cannot be applied to algebra: even those which failed in NLP because they were overly limiting. To find out more about this, look for the [Chomsky hierarchy](https://en.wikipedia.org/wiki/Chomsky_hierarchy) -- it describes the different expressive powers of formal languages. But I would argue that human language is outside of that, because it is not a formal language.
Upvotes: 3 [selected_answer]<issue_comment>username_4: Yes, there is a relationship, but it's not the exact one I think is envisioned in the question. Fuzzy numbers and fuzzy logic translate natural language expressions into quantitative values and mathematical logic. See [PRUF—a meaning representation language for natural languages](https://www.sciencedirect.com/science/article/abs/pii/S0020737378800030) by Zadeh. Dr. Zadeh, the 'inventor' of fuzzy numbers and fuzzy logic, provided a mathematical framework for taking simple language expressions and expressing them in mathematical form. The underlying basis is that natural language is inherently imprecise. To quote this paper, "First, a basic assumption underlying PRUF is that the imprecision that is intrinsic in natural languages is, for the most part, possibilistic rather than probabilistic in nature." While fuzzy numbers and fuzzy logic are ubiquitous in many applications, for example fuzzy controllers in engineering, I personally have not seen a groundswell of using PRUF in NLP. Not that it isn't possible. I have been pursuing research off-and-on in this area, and I think that there is a possibility of bridging mathematical logic and NLP, particularly when using concept-based NLP (e.g., something along the lines of Cognitive Linguistics). Upvotes: 2
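To make the fuzzy-logic connection concrete: Zadeh's linguistic hedges model words like "very" and "somewhat" as operations on membership functions, with "very" squaring the membership (concentration) and "somewhat" taking its square root (dilation). The "tall" membership curve below is an invented example, not from the PRUF paper:

```python
def tall(height_cm):
    """Invented membership function for the fuzzy set 'tall':
    0 at 160 cm or below, rising linearly to 1 at 200 cm."""
    return min(1.0, max(0.0, (height_cm - 160) / 40))

def very(mu):       # concentration hedge (Zadeh): mu^2
    return mu ** 2

def somewhat(mu):   # dilation hedge: sqrt(mu)
    return mu ** 0.5

h = 180
print(tall(h))            # 0.5
print(very(tall(h)))      # 0.25 -- 'very tall' is harder to satisfy
print(somewhat(tall(h)))  # ~0.707
```

This is the sense in which imprecise natural-language expressions become quantitative: each hedge is an algebraic operation on a membership function.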
2021/03/18
2,616
9,118
<issue_start>username_0: Goal ---- To build an RNN which would receive a word as input, and output the probability that the word is in English (or at least English sounding). **Example** ``` input: hello output: 100% input: nmnmn output: 0% ``` Approach -------- Here is my approach. ### RNN I have built an RNN with the following specifications: (the subscript $i$ means a specific time step) The vectors (neurons): $$ x\_i \in \mathbb{R}^n \\ s\_i \in \mathbb{R}^m \\ h\_i \in \mathbb{R}^m \\ b\_i \in \mathbb{R}^n \\ y\_i \in \mathbb{R}^n \\ $$ The matrices (weights): $$ U \in \mathbb{R}^{m \times n} \\ W \in \mathbb{R}^{m \times m} \\ V \in \mathbb{R}^{n \times m} \\ $$ This is how each time step is fed forward: $$ y\_i = softmax(b\_i) \\ b\_i = V h\_i \\ h\_i = f(s\_i) \\ s\_i = U x\_i + W h\_{i-1} \\ $$ Note that the $ + W h\_{i-1}$ term is not used at the first time step. ### Losses Then, for the loss of each time step, I used cross entropy ($t\_i$ is the target, or expected output, at time $i$): $$ L\_i = -\sum\_{j=1}^{n} t\_{i,j} \ln(y\_{i,j}) $$ Then, the total loss of the network: $$ L = \sum L\_i $$ ### RNN diagram Here is a picture of the network that I drew: [![enter image description here](https://i.stack.imgur.com/MeIA8m.png)](https://i.stack.imgur.com/MeIA8m.png) ### Data pre-processing Here is how data is fed into the network: Each word is split into characters, and every character is converted into a one-hot vector. Two special tokens, START and END, are added at the beginning and the end of the word. Then the input at each time step is every sequential character without END, and the output at each time step is the character following the input. **Example** Here is an example: 1. Start with a word: "cat" 2. Split it into characters and add the special tags: `START c a t END` 3. Transform into one-hot vectors: $v\_1, v\_2, v\_3, v\_4, v\_5$ 4.
Then the input is $v\_1, v\_2, v\_3, v\_4$ and the output $v\_2, v\_3, v\_4, v\_5$. ### Dataset For the dataset, I used a list of English words. Since I am working with English characters, the size of the input and output is $n=26+2=28$ (the $+2$ is for the extra START and END tags). ### Hyper-parameters Here are some more specifications: * Hidden size: $m=100$ * Learning rate: $0.001$ * Number of training cycles: $15000$ (each cycle is a loss calculation and backpropagation of a random word) * Activation function: $f(x) = \tanh(x)$ Problem/question ---------------- However, when I run my model, I get that the probability of some word being valid is about 0.9 regardless of the input. For the probability of a word being valid, I used the value at the last time step of the RNN at the position of the END tag after feeding the word forward. I wrote a gradient checking algorithm and the gradients seem to check out. *Is there conceptually something wrong with my neural network?* I played a bit with $m$, the learning rate, and the number of cycles, but nothing really improved the performance.
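The feed-forward equations from the question can be sanity-checked numerically in a few lines of numpy. This is only a sketch of the forward pass and loss with random weights, not the training loop; initializing the hidden state to zeros makes the recurrent term vanish at the first time step, as the question requires:

```python
import numpy as np

def softmax(b):
    e = np.exp(b - b.max())       # shift for numerical stability
    return e / e.sum()

def forward(xs, U, W, V, f=np.tanh):
    """xs: list of one-hot input vectors; returns outputs y_1..y_T."""
    h = np.zeros(W.shape[0])      # h_0 = 0, so W @ h is zero at step 1
    ys = []
    for x in xs:
        s = U @ x + W @ h         # s_i = U x_i + W h_{i-1}
        h = f(s)                  # h_i = f(s_i)
        ys.append(softmax(V @ h)) # y_i = softmax(V h_i)
    return ys

def loss(ys, ts):
    # L = sum_i ( -sum_j t_ij ln(y_ij) ), cross entropy per time step
    return -sum(float(t @ np.log(y)) for y, t in zip(ys, ts))

n, m, T = 28, 100, 4              # sizes from the question
rng = np.random.default_rng(1)
U, W, V = (rng.normal(scale=0.1, size=s) for s in [(m, n), (m, m), (n, m)])
xs = [np.eye(n)[rng.integers(n)] for _ in range(T)]  # random one-hot inputs
ys = forward(xs, U, W, V)
print(sum(ys[0]))                 # each y_i sums to 1
```

Checks like these (every output a valid probability distribution, loss strictly positive for one-hot targets) are a cheap complement to gradient checking when debugging this kind of model.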
2021/03/22
1,810
6,824
<issue_start>username_0: I read the paper [Text Compression as a Test for Artificial Intelligence, Mahoney, 1999](https://www.aaai.org/Papers/AAAI/1999/AAAI99-177.pdf). **So far I have understood the following:** Text compression tests can be used as an alternative to Turing tests for intelligence. The *bits per character* score obtained from compression of a standard benchmark corpus can be used as a quantitative measure of intelligence. **My questions:** 1. Is my understanding of the topic correct? 2. Does this mean that applications like 7-Zip/WinRAR are intelligent? 3. How are the way a human compresses information (as in the form of a summary) and the way a computer compresses (using Huffman coding or something) compatible? How can we compare them?
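For question 1, the bits-per-character score itself is easy to compute with any off-the-shelf compressor: a lower score means the compressor's internal model predicts the text better. A small sketch using Python's zlib, a deliberately weak compressor chosen only to illustrate the metric Mahoney describes:

```python
import random
import string
import zlib

def bits_per_character(text: str) -> float:
    """Compressed size in bits divided by the number of characters."""
    data = text.encode("utf-8")
    return 8 * len(zlib.compress(data, 9)) / len(text)

random.seed(0)
english = "the quick brown fox jumps over the lazy dog " * 100
gibberish = "".join(random.choices(string.ascii_lowercase, k=len(english)))

# More predictable text -> fewer bits per character.
print(bits_per_character(english) < bits_per_character(gibberish))  # True
```

The argument in the paper is that pushing this score toward the entropy of English requires ever-better language models, not that zlib-style compressors are themselves intelligent.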
Read also books like [*Knowledge Representation and Reasoning*](https://rads.stackoverflow.com/amzn/click/com/1558609326) (Braqueman & Leveque) You might also read (if you can read French) <NAME>'s [*Théorie des Ensembles*](https://www.amazon.fr/Th%C3%A9orie-ensembles-N-Bourbaki/dp/3540340343) or (in English) [*Interactive Theorem Proving and Program Development*](https://rads.stackoverflow.com/amzn/click/com/3540208542) (by Bertot & Castéran). Symbolic AI approaches are definitely "algebra-like". In practice, they don't need powerful GPGPUs, or even don't do a lot of floating point operations. I assume you want to process human language text available as textual files (e.g. HTML, XML, UTF8 encoded plain text, etc...). If you want to process sounds and speech (for example, an audio signal coming from a microphone), you do need much more signal processing and floating-point operations. If you want to process images (e.g. handwritten text), it is also different. In France, I would recommend Professor [<NAME>](https://mohammeddaoudi.github.io/). Upvotes: 2 <issue_comment>username_2: One of the ways to ask if this two problmes are related is to ask, could we solve math/algebra equations with NLP approaches, and the answer is yes, it's an absolutely valid idea and it was approached by many researchers. For example in the ["Deep learning for symbolic mathemathics"](https://arxiv.org/pdf/1912.01412.pdf) paper by facebook researchers, the NLP-based approach was used to solve calculus-level math. Or in [this paper](https://arxiv.org/pdf/1811.00720.pdf) authors propose a method to extract semantics from math problems in words and solve them. In fact, even very simple approaches like a small LSTM network [could work](https://machinelearningmastery.com/learn-add-numbers-seq2seq-recurrent-neural-networks/) for simple strictly stated problems Upvotes: 2 <issue_comment>username_3: There isn't, really. 
Natural language is way more complex and irregular than algebra, which is far more formalised and unambiguous. So far, in NLP, most success/progress has been made in little toy domains, which exclude most of the complexities of real life, including many ambiguities. When you say the rules of algebra are somewhat like grammar, then that is because it is essentially a formal language, for which we can specify a grammar. There is currently no complete grammar for any human language (and I doubt there ever will be), let alone a formal one that can be processed by computer. This was one of the reasons why the first AI boom, where a lot of over-hyped promises where made about being able to translate Russian into English automatically, failed abysmally: natural languages are more than just formal grammars of lexical items. Stochastic approaches have gone some way towards pragmatic solutions, but when it comes to understanding language they are basically a fudge. And don't get me started on deep learning approaches to NLP. So the only relationship is that we use the term 'grammar' for the descriptive formalisms in both cases; a formal grammar of algebra would be very different from a grammar for a human language. This doesn't mean, however, that approaches developed in the field of NLP cannot be applied to algebra: even those which failed in NLP because they were overly limiting. To find out more about this, look for [Chomsky Hierarchy](https://en.wikipedia.org/wiki/Chomsky_hierarchy) -- that describes the different expressive powers of formal languages. But I would argue that human language is outside of that, because it is not a formal language. Upvotes: 3 [selected_answer]<issue_comment>username_4: Yes there is a relationship, but it's not an exact one like I think is envisioned in the question. Fuzzy numbers and fuzzy logic translates natural language expressions into quantitative values and mathematical logic. 
See [PRUF—a meaning representation language for natural languages](https://www.sciencedirect.com/science/article/abs/pii/S0020737378800030) by Zadeh. Dr. Zadeh, the 'inventor' of fuzzy numbers and fuzzy logic, provided a mathematical framework for taking simple language expressions and expressing them in mathematical form. The underlying basis is that natural language is inherently imprecise. To quote this paper, "First, a basic assumption underlying PRUF is that the imprecision that is intrinsic in natural languages is, for the most part, possibilistic rather than probabilistic in nature." While fuzzy numbers and fuzzy logic are ubiquitous in many applications, for example fuzzy controllers in engineering, I personally have not seen a groundswell of using PRUF in NLP. Not that it isn't possible. I have been pursuing research off-and-on in this area, and I think that there is a possibility of bridging mathematical logic and NLP, particularly when using concept-based NLP (e.g., something along the lines of Cognitive Linguistics). Upvotes: 2
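Zadeh's idea of translating vague natural-language terms into quantitative (possibilistic) values can be sketched in a few lines. This is a toy illustration, not the PRUF formalism itself: the 160 and 190 cm thresholds for "tall" are arbitrary assumptions, while modelling the hedge "very" as squaring (concentration) and "somewhat" as a square root (dilation) follows Zadeh's standard operators.

```python
def tall(height_cm: float) -> float:
    """Piecewise-linear membership for the vague predicate 'tall':
    0 below 160 cm, 1 above 190 cm, linear in between (illustrative thresholds)."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

def very(mu: float) -> float:
    """Zadeh's standard model of the hedge 'very': concentration (squaring)."""
    return mu ** 2

def somewhat(mu: float) -> float:
    """A common model of 'somewhat': dilation (square root)."""
    return mu ** 0.5

for h in (165, 175, 185):
    # 'X is tall' and 'X is very tall' as graded truth values
    print(h, round(tall(h), 3), round(very(tall(h)), 3))
```

Note how "very tall" is always at most as true as "tall", which matches the linguistic intuition that hedges tighten the predicate.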
2021/03/23
1,631
6,922
<issue_start>username_0: I came across several papers by <NAME> & <NAME>, especially this one: [Universal Intelligence: A Definition of Machine Intelligence, <NAME>, <NAME>](https://arxiv.org/abs/0712.3329) Given that it was published back in 2007, how much recognition or agreement has it received? Has any other work better formalizing the idea of intelligence been done since? What is considered the current standard on the topic in the field?<issue_comment>username_1: Just a few commonsensical remarks about why this kind of intelligence definition seems unable to capture the logic of life: 1. Optimization only makes sense in a stationary environment. When many agents learn and interact, they are building a constantly changing environment. 2. Survival and reproduction is the only thing that really matters, and it does not require optimization, just good-enough solutions. 3. The survival of individual living organisms heavily depends on adequate hardwired sensory abilities that can slowly change throughout many generations. But smarter individuals can use their brains (or whatever tools they may have been endowed with plasticity) to quickly learn and adapt in non-stationary environments. Fast learning, not optimization, is what these agents really need. Upvotes: -1 <issue_comment>username_2: My sense is that everyone is pretending intelligence doesn't have a grounded definition, from which all other definitions arise: * Intelligence is a measure of utility in an action space μ(υ) It can be a relative measure, in relation to other rational agents, or absolute in relation to solved games (problems). An action space is any context, and is formalized as problems or sets of problems, typically grouped by complexity class. Hutter and Legg is an explication of this grounded definition which accounts for unlimited contexts (complexity classes/environments) and for increasing utility of a given agent over time (learning/optimization).
Intelligence itself does not require learning or general applicability, but Hutter & Legg do not refute this; they merely grade static intelligence and narrow intelligence as more limited. Even this is subject to context, as [more limited rationality can be more optimal](https://en.wikipedia.org/wiki/Satisficing). * The definition of intelligence is grounded because, while the term "intelligence" is a symbol, intelligence itself is a function, the strength of which is evaluated by a measurement of some result (utility). It doesn't require defining the function to understand it as a function: measurement of a result requires mechanism and decision/action. You will find this natural-language definition applies even to emotional intelligence, which relates to the observational capability of the rational agent in context, and allows that rational agent to make more optimal decisions in context. This is similar in spirit to *truth*, only grounded in a formal-logic context, where it is a condition and a result, not an assertion. By contrast, the conditionality of truth is often obscured in a natural-language context, where it is routinely applied to unvalidatable informal statements, and can even be conflated with the statement itself *à la*: "this is the truth!" Upvotes: 2 <issue_comment>username_3: The short answer is that it hasn't received nearly the attention it should have, despite the cited paper having over 600 citations. According to [this Singularity Summit talk by <NAME>](https://youtu.be/0ghzG14dT-w), [his PhD thesis](http://www.vetta.org/documents/Machine_Super_Intelligence.pdf) on the wide-ranging uses of the word "intelligence" managed to identify 2 qualitative dimensions of those uses: 1. Human vs Ideal 2. Internal vs External Of the 4 points in the space of these 2 dimensions, the sense in which his paper with Hutter uses the word "intelligence" is (Ideal, External). This is distinguished from the Turing Test, which is for (Human, External).
The vast majority of machine learning targets (Ideal, External) intelligence rather than modeling human intelligence. The formal, "top-down" definition of (Ideal, External) provided by Hutter is called [AIXI](http://www.hutter1.net/ai/uaibook.htm), which is the unification of ideal science ([Algorithmic Information Theory](https://en.wikipedia.org/wiki/Algorithmic_information_theory) aka AIT) with ideal technology ([Sequential Decision Theory](http://planning.cs.uiuc.edu/ch10.pdf) aka SDT). Unlike other attempts to define (Ideal, External) intelligence, it has only 2 open parameters: 1. Choice of the Universal Turing Machine that runs the universe the agent perceives, as well as the agent's simulation of that universe for making predictions. 2. Choice of the utility function providing the value system of the agent, so it can decide which actions yield the most valuable consequences. In the vernacular, AIT is about what "is" and SDT is about what "ought" to be done about what "is". For just one example of why AIXI has not received nearly the attention it should have: the ethics of AI are riddled with the conflation of "is" with "ought". Under AIXI, these can be, at the top level of analysis, factored out using the Algorithmic Information Criterion for model selection, which is conceptually quite simple: given the union of all datasets used to train "large" AI models, choose the model that yields the smallest executable archive of that data. The model so selected is the best available model of what "is" the case -- including models of bias in the (possibly latent) identities that are the sources of data. If we are to be fair about what is and is not "bias", here's how to proceed: if someone has a reason to call some data "biased", they should be challenged to provide the data that supports their perception of what is and is not "biased". Then, simply include that data in the total dataset.
If they, themselves, are biased and select biased data because of their bias, let others include the data that shows that bias to be, in fact, bias. Compress relentlessly until the data that is most consilient with the rest of the universe is better compressed, leaving the bias exposed as "noise" associated with the responsible identities. This has applications in dealing more effectively not only with social media censorship, but by factoring out the "is" from the ethics, it permits people to recognize when they simply differ in their utility functions or "value systems" and get on with the hard problem of dealing with quasi-religious differences and thereby reduce the likelihood of conflicts that can be exceedingly destructive. Nor is this merely pie-in-the-sky as Hutter has demonstrated with [The Hutter Prize for Lossless Compression of Human Knowledge](http://prize.hutter1.net/). The Hutter Prize, if funded at the level deserving of the problem of ethics in AI, would be several orders of magnitude larger. Upvotes: 0
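The "smallest executable archive" criterion described above can be illustrated crudely with an off-the-shelf compressor. This is only a sketch: `zlib` is a weak, computable stand-in for the uncomputable Kolmogorov complexity that AIT actually refers to, but it shows the core intuition that regular data admits a much shorter description than noise of the same length.

```python
import random
import zlib

random.seed(0)

# Structured data: a simple repeating pattern of 10,000 bytes.
structured = ("ab" * 5000).encode()
# "Noise": i.i.d. pseudo-random bytes of the same length.
noise = bytes(random.randrange(256) for _ in range(10000))

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data at maximum compression level."""
    return len(zlib.compress(data, 9))

print(compressed_size(structured))  # tiny: the regularity is captured
print(compressed_size(noise))       # near 10000: nothing to exploit
```

Under this (very rough) proxy for the Algorithmic Information Criterion, the pattern is "explained" by a short description while the noise is incompressible, which is exactly the distinction the answer draws between consilient data and residual "noise".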
2021/03/23
1,043
3,911
<issue_start>username_0: I found a very interesting paper on the internet that tries to apply Bayesian inference with a gradient-free online-learning approach: [Bayesian Perceptron: Towards Fully Bayesian Neural Networks](https://arxiv.org/abs/2009.01730). I would love to understand this work, but, unfortunately, I am reaching the limits of my Bayesian knowledge. Let us assume that we have the weights $\mathbf{w}$ of our model and observed the data $D$. Using Bayes' rule, we obtain the posterior according to $$p(\mathbf{w}|D)=\frac{p(D|\mathbf{w})p(\mathbf{w})}{p(D)}.$$ In words: we update our prior belief over our weights by multiplying the prior with the likelihood and dividing everything by the evidence. In order to calculate the true posterior, we would need to calculate the evidence by marginalizing over (integrating out) our unknown parameters. This gives the integral $$p(D) = \int p(D|\mathbf{w})p(\mathbf{w})\,d\mathbf{w}.$$ So far so good. Now I refer to the paper mentioned above. Here, the approach is demonstrated on a single neuron whose weighted sum is called $a$, which is then passed to the activation function $f(.)$. Moreover, it is assumed that $\mathbf{w}\sim N(\mu_w, \mathbf{C}_w)$. Because of the linearity, it can be exploited that also $\mathbf{a}\sim N(\mu_a, \mathbf{C}_a)$. What I am confused about now is formula (14), which seems to compute the true posterior: $$p(w|D_i) = \int p(a, w|D_i)\,da = \int p(w|a, D_i)\,p(a|D_i)\,da$$ How is this formula for the posterior compatible with Bayes' theorem? Where are the evidence, likelihood and prior?<issue_comment>username_1: You have two mutually dependent variables $a$ and $w$, so there is a joint distribution $p(w, a)$. You can marginalize over either of them, pretty much as you did in your second formula.
$$p(w) = \int p(w, a)\,da$$ $$p(w) = \int p(w | a)\,p(a)\,da$$ The only difference in this case is that the calculation is made for the specific point $(x_i, y_i)$, which is emphasized by the subscript $i$ and the conditioning on $D_i$. The key thing is that we can calculate the target distribution in many ways. With the likelihood, evidence and prior you could indeed find the posterior, but they are not always tractable/available. That's why in the literature we usually differentiate between the *true* posterior and an approximate posterior (or just "posterior"). Usually we get $p(w|D)$ with some form of approximation, but in the paper the authors derive a closed-form solution. That's why it was useful to represent it in another way, via the intermediate distribution over $a$: this allows them to get closed-form updates for different activation functions. So it's a *posterior* in the general sense, not via that specific formula. I think your Bayesian knowledge is just fine. Upvotes: 0 <issue_comment>username_2: Thanks for asking the question. I'm the author of the paper. The key point is that the weights $w$ cannot be updated directly with the new data, as $w$ is not directly related to the output $y$ (see equation (1)). First, it is necessary to update $a$, which actually is directly related to $y$ via the activation function, i.e., $y = f(a)$ with $f(.)$ being the activation function (see equation (2)). Hence, you will find Bayes' rule in equation (16) with all the quantities you are missing (likelihood, prior, etc.). This equation is used to calculate the posterior $p(a|D_i)$. This posterior is then used to update the weights by plugging $p(a|D_i)$ into equation (14). The solution of equation (14), i.e., the posterior mean and covariance matrix of $w$, is given by equation (22). Algorithm 2 provides a summary of all necessary calculations. I hope this answers your question. BTW: we have extended this algorithm to a full neural network.
The scientific paper describing the whole procedure is currently under review. Upvotes: 3 [selected_answer]
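The marginalization identity discussed in the answers can be sanity-checked numerically for a linear-Gaussian model of the kind the paper exploits. The specific numbers below are arbitrary illustrative choices (not taken from the paper); ancestral sampling (draw $a$, then $w$ given $a$) is a Monte Carlo realization of $p(w) = \int p(w|a)\,p(a)\,da$, and for a linear-Gaussian model the resulting marginal is again Gaussian with a known mean and variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear-Gaussian model (numbers are arbitrary):
mu_a, s_a = 1.0, 0.5      # p(a)   = N(mu_a, s_a^2)
c, d, s = 2.0, -1.0, 0.3  # p(w|a) = N(c*a + d, s^2)

# Ancestral sampling implements p(w) = ∫ p(w|a) p(a) da.
a = rng.normal(mu_a, s_a, size=1_000_000)
w = rng.normal(c * a + d, s)  # loc is an array; scale broadcasts

# Closed-form marginal of the linear-Gaussian model.
mu_w = c * mu_a + d               # = 1.0
var_w = c**2 * s_a**2 + s**2      # = 1.09

print(w.mean(), mu_w)  # empirical vs. analytic mean
print(w.var(), var_w)  # empirical vs. analytic variance
```

The empirical moments of the sampled `w` match the analytic marginal, confirming that marginalizing the joint over `a` yields a perfectly valid distribution over the weights even though no explicit evidence term appears.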
2021/03/24
691
2,879
<issue_start>username_0: I have a binary classification problem. My neural network is getting between 10% and 45% accuracy on the validation set and 80% on the training set. Now, if I have 10% accuracy and I just take the opposite of the predicted class, I will get 90% accuracy. I am going to add a KNN module that shuts down that process if the inputted data is, or is very similar to, data present in the data set. Would this be a valid approach for my project (which is going to go on my resume)?<issue_comment>username_1: The short answer is no, you shouldn't do that. There is a "distribution shift" when the x-y relation on the validation set differs from that on the training set. Distribution shift deteriorates your model's performance, and you should try to avoid it. The reason it's bad: OK, you found a way to fix the model for the validation data, but what about novel *test* data? Will it be like the training data? Will it be like the validation data? You don't know, and your model is in fact worthless. What you can do: 1. Redefine the train/validation scheme. I would recommend [cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)) with at least three splits. 2. As mentioned in a comment, your model seems to overfit; try to apply some regularization techniques, like dropout, batch norm, data augmentation, model simplification and so on. 3. If possible, add data that better covers all the possible cases. Upvotes: 3 [selected_answer]<issue_comment>username_2: I'm new to all this, so take what I say with a grain of salt and not as fact; I don't have any formal education or training. I believe when you're referring to inverting predictions, you're not overthinking, you're underthinking. For anything to have value it must also have an inverse, or else there's no way to cognitively perceive it (contrast); otherwise you're looking at white paper against white paper.
Now, since you're referring to data set prediction, you need to define it linearly, (x, y) or f(x), to scale and plot. Therefore x and y BOTH must retain inverse proportionate values (I made up that term) in order to, in the context of assigning value, exist. So you need to have 4 quadrants of data for prediction, so now you're looking at quantum data processing in order to facilitate predictions in a non-linear context. Use a matrix; I believe the Diroches Matrix should be applicable here. Also, remember that predictions are always changing and updating based on empirical and real-time data, so don't get your programming stuck in an ONLY RIGHT NOW mindset; matrices are designed to be constantly moving and evolving. Therefore your z-axis should always retain a state of variability: it should always be Z, don't attach a value to it. Good luck. I'm jealous; I would love to ACTUALLY be working on something cool :/ Upvotes: 0
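The cross-validation recommendation in the accepted answer can be sketched without any ML framework. This is a minimal illustration: the "model" below is just majority-class prediction, a stand-in for whatever network is actually trained, and the `kfold_indices` helper is a hypothetical name, not part of any library.

```python
import numpy as np

def kfold_indices(n_samples, k=3, seed=0):
    """Shuffle indices once, then yield (train_idx, val_idx) pairs for k folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, val_idx

# Tiny toy dataset; the "model" is just majority-class prediction.
X = np.arange(12).reshape(12, 1)
y = np.array([0, 1] * 6)

scores = []
for train_idx, val_idx in kfold_indices(len(X), k=3):
    majority = np.bincount(y[train_idx]).argmax()           # "train"
    scores.append(float(np.mean(y[val_idx] == majority)))   # "validate"

print(scores)  # three per-fold accuracies
```

A large spread between the per-fold scores (like the 10%-45% range in the question) is itself a diagnostic: it suggests the original single split was unrepresentative rather than something to be patched with a label-flipping trick.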