2020/04/20
<issue_start>username_0: "[If you can't tell, does it matter?](https://www.youtube.com/watch?v=kaahx4hMxmw&ab_channel=HelixNebula)" was one of the first lines of dialogue in the Westworld television series, presented as a throwaway in the [first episode of the first season](https://en.wikipedia.org/wiki/The_Original_(Westworld)) in response to the question "Are you real?" In the [sixth episode of the third season](https://en.wikipedia.org/wiki/Decoherence_(Westworld)), the line becomes a central realization for one of the main characters, [The Man in Black](https://westworld.fandom.com/wiki/Man_in_Black). *This is, in fact, a central premise of the show—what is reality, what is identity, what does it mean to be alive?—and has a basis in the philosophy of <NAME>.* What are the implications of this statement in relation to AI? In relation to experience? In relation to the self?<issue_comment>username_1: I haven't watched Westworld from beginning to end, but I've read a synopsis of it. The implications of the statement might relate to what the androids (or cyborgs) think of their own experiences. *Are their identity and the experiences they have gone through "real" or not?* This question seems central to the series, which appears to be about an identity crisis and the search for a true identity. In my view, without any fail-safe programming, once the androids understand that all of their experiences are false (or unreal) and that everything they do is monitored and tested by some individuals for scientific purposes, the AI would go rogue. A conscious being with no real background experiences is placed into an environment that 'should' have been its 'life' all along; after realizing the truth, everything does matter, and the being would go rogue in search of its true self, trying to recover its origin and true identity.
Upvotes: 0 <issue_comment>username_2: The question in [this video](https://www.youtube.com/watch?v=kaahx4hMxmw&ab_channel=HelixNebula) is > > Are you real? > > > What does this question really mean? Is the man asking whether the *apparently* female character (I don't know if she is a cyborg or not, because I have not yet watched the TV series) is a human? So, is "real" a synonym for "human"? If that's the case, then the first implication (in the form of a question) of the statement > > If you can't tell, does it matter? > > > in relation to AI is > > 1. Can we create an AGI that is sufficiently similar to a human that we can't tell whether it's a human or not (by just normally interacting with it)? > > > Of course, it's not clear what we mean by "normally interacting". As far as I remember, this issue is also raised in the film [A.I. Artificial Intelligence](https://en.wikipedia.org/wiki/A.I._Artificial_Intelligence), where the AI (the kid played by <NAME>) looks sufficiently real to the other kid, so he behaves as if he were a human kid; but then the human kid understands that he's a machine and starts to behave differently (I hope I'm remembering the film correctly). So, the second question that we could ask is > > 2. Once we understand that it's not a human (for example, because it's made of other substances), would we humans start to behave differently and treat the AGI differently? > > > As opposed to the first question, which is still an open problem, this second question can probably be answered by looking at our relationships with other humans (or other entities, such as animals). Often, we have an idea of a person. Once we discover something new about that person, which maybe we dislike, we may start to treat that person differently. I think this would very likely also happen in our eventual relationship with a sufficiently advanced AGI, as depicted in [the mentioned film](https://en.wikipedia.org/wiki/A.I._Artificial_Intelligence).
Now, let me try to address the other question > > What are the implications of this statement in relation to experience, in relation to the self? > > > I think that you're asking whether a sufficiently advanced AGI could be considered conscious or not. Of course, this is a very hard question to answer, because [we still don't have a clear definition of consciousness, nor do we agree on a standard one](https://plato.stanford.edu/entries/consciousness/), so I don't really have a definitive answer. However, if consciousness is just a byproduct of perception and the ability to understand the world and its (physical) rules, then an AGI could be conscious (in a similar way that humans are conscious). Then again, consciousness may not actually be necessary to act correctly in the world. In any case, the AI probably needs to know that it has a body and that it needs to protect it for its survival, if that's its main goal. Upvotes: 2 <issue_comment>username_3: It's useful to understand HBO's *Westworld* as an extension of <NAME>'s *Do Androids Dream of Electric Sheep?*. Most of Dick's novels involve the nature of reality in relation to perception, and how that informs identity. A major feature of Dick's book, and of *Blade Runner*, is a form of Turing test (the Voight-Kampff test) to which humans are subjected to determine whether they are replicants. At a certain point, Deckard, the hero, begins to question whether he is human or an android. (This is never fully made clear in the book, and Deckard's wife's alienation from him may indicate his non-human status. The film adaptation similarly raises this question over potentially implanted memories, which would mark Deckard as a replicant, even though the director reversed course and later stated otherwise.) Westworld continues this idea with characters who turn out to be definitively androids, such as the Man in Black, who presumably has had artificial memories created and believes he is "real". *Blade Runner 2049* also involves this theme, which could be called the "unreliability of memory" in relation to experience and identity. Even in mundane circumstances, *two humans can remember the same event differently!* * The point of the Electric Sheep hypothesis is ambiguity: we can only validate our own qualia, and even that is not entirely reliable, due to the nature of perception and subjectivity. The novel ends with Deckard finding a frog in the ashes and initially thinking it is real. It turns out to be robotic, but Deckard ultimately decides it doesn't really matter. * This is important because empathy is the main theme, and altruistic behavior in nature is supported by evolutionary game theory. The central plot device is that replicants don't have empathy, a design flaw that becomes a "feature, not a bug" in that it keeps replicants from banding together to overthrow their oppressors. But the new generation of Nexus androids are intelligent enough to develop empathy naturally. Dick was a [Christian philosopher](https://en.wikipedia.org/wiki/The_Exegesis_of_Philip_K._Dick) who worked mainly in popular narrative and believed empathy is a natural function of sufficiently advanced intelligence. * If the suffering witnessed in an entity appears real, but we cannot validate that entity's qualia, is it a moral imperative to make a [Pascal's Wager](https://en.wikipedia.org/wiki/Pascal%27s_wager), i.e. to err on the side of caution and compassion, just in case the entity is conscious? * If altruistic behavior is expressed by an algorithm, is that altruism invalid? Upvotes: 0
2020/04/20
<issue_start>username_0: Apart from [Journal of Artificial General Intelligence](https://content.sciendo.com/view/journals/jagi/jagi-overview.xml) (a peer-reviewed open-access academic journal, owned by the [Artificial General Intelligence Society (AGIS)](http://www.agi-society.org/)), are there any other journals (or [proceedings](https://en.wikipedia.org/wiki/Conference_proceeding)) completely (or partially) dedicated to *artificial **general** intelligence*? If you want to share a journal that is only partially dedicated to the topic, please provide details about the relevant subcategory or examples of papers on AGI that were published in such a journal. So, a paper that talks about e.g. an RL technique (that *only* claims that the idea could be useful for AGI) is not really what I am looking for. I am looking for journals where people publish papers, reviews or surveys that develop or present theories and implementations of AGI systems. It's possible that these journals are more associated with the cognitive science or neuroscience communities and fields.
2020/04/20
<issue_start>username_0: I am a newbie in deep learning and wanted to know if the problem I have at hand is a suitable fit for deep learning algorithms. I have thousands of fragments, each about 1000 bytes in size (i.e. numbers in the range of 0 to 255). There are two classes of fragments: 1. Some fragments have a high frequency of two particular byte values appearing next to one another: "0 and 100". This pattern appears roughly once every 100 to 200 bytes. 2. In the other class, the byte values are more randomly distributed. We can produce as many instances of each class as needed for training purposes. However, I would like to differentiate the classes with a machine learning algorithm without explicitly identifying the "0 and 100" pattern in the 1st class myself. Can deep learning help us solve this? If so, what kind of layers might be useful? As a preliminary experiment, we tried to train a network made up of 2 hidden TensorFlow "Dense" layers (with 512 and 256 nodes, respectively). However, unfortunately, our accuracy was indicative of simply a random guess (i.e. 50% accuracy). We were wondering why the results were so bad. Do you think a convolutional neural network will better solve this problem?
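Since the question says instances of each class can be produced on demand, the data-generation step can be sketched directly. Below is a minimal NumPy sketch; the pattern spacing (every 100 to 200 bytes) and fragment size (1000) are taken from the question, while the seed, dataset size, and function names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_fragment(with_pattern, size=1000):
    """Generate one synthetic fragment of byte values in [0, 255]."""
    frag = rng.integers(0, 256, size=size)
    if with_pattern:
        # Plant the adjacent pair (0, 100) roughly every 100-200 bytes.
        pos = 0
        while True:
            pos += int(rng.integers(100, 201))
            if pos + 1 >= size:
                break
            frag[pos], frag[pos + 1] = 0, 100
    return frag

# Small balanced dataset: X has shape (100, 1000), labels y are 0/1.
X = np.stack([make_fragment(i % 2 == 1) for i in range(100)])
y = np.array([i % 2 for i in range(100)])
```

With data in this shape, the adjacency of the pattern suggests a 1D convolution (kernel width 2 or more) would pick it up more naturally than fully connected layers, which discard the positional neighborhood structure.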
2020/04/21
<issue_start>username_0: I was exploring image/video compression using machine learning and discovered that autoencoders are used very frequently for this sort of thing. So I wanted to ask: 1. How fast are autoencoders? I need something that can *compress* an image in milliseconds. 2. How many resources do they take? I am not talking about the training part, but rather the deployment part. Could it work fast enough to compress a video on a Mi phone (like a Note 8, maybe)? Do you know of any particularly new and interesting research in AI that has enabled a technique to do this fast and efficiently?<issue_comment>username_1: It depends on the size of your AE: if you use a small AE with just 500,000 to 1M weights, inference can be stunningly fast. But even large networks can run very fast; with TensorFlow Lite, for example, models are compressed and optimized to run faster on edge devices (end-user devices such as phones). You can find a lot of videos on YouTube where people test inference with large networks like ResNet-50 or ResNet-101 on a Raspberry Pi or other SoC chips. Phones are comparable to that, though maybe not as optimized. For example, I have a Jetson Nano (an Nvidia SoC that costs around 100 euros), and I ran inference with a large ResNet of around 30 million parameters on my full-HD webcam feed: a stable 30 FPS, so around 33 ms per image. To answer your question: yes, autoencoders can be fast, especially in combination with an optimized model and hardware. Autoencoder structures are quite simple; check out this [Medium post](https://medium.com/@jannik.zuern/but-what-is-an-autoencoder-26ec3386a2af) and this [Keras example](https://blog.keras.io/building-autoencoders-in-keras.html). Upvotes: 1 <issue_comment>username_2: It depends on your image size and the amount of compression you want! Deep learning algorithms are usually not that fast, which is why they run on GPUs and why we have highly optimized frameworks like TensorFlow!
Some things I can say for sure: 1. Compressing video with an autoencoder means compressing each frame one by one. However, video codecs usually compute the difference between every frame and the previous one, which means compressing video is much more time-consuming than compressing a single image. 2. The encoder is only half of the autoencoder, so compression at deployment time is much cheaper than training the whole autoencoder. 3. Use a GPU! It really makes a big difference! 4. Try Google Colab! You can compare CPU and GPU and then make a decision. Upvotes: 0
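Point 2 above (only the encoder half runs when compressing) can be illustrated with a toy linear autoencoder. This is a shape-only sketch with untrained random weights; the 784 to 32 dimensions and all names are illustrative assumptions, and real weights would come from training (e.g. with Keras, as in the linked tutorial):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy *linear* autoencoder, 784 -> 32 -> 784. The weights are random
# stand-ins for trained ones; they only demonstrate the data flow.
W_enc = rng.standard_normal((784, 32)) * 0.01
W_dec = rng.standard_normal((32, 784)) * 0.01

def encode(x):
    # Deployment-time compression runs ONLY this half of the network.
    return x @ W_enc

def decode(code):
    # Decompression runs the other half to reconstruct the image.
    return code @ W_dec

image = rng.random(784)    # a flattened 28x28 "image"
code = encode(image)       # 32 floats stored/transmitted instead of 784
restored = decode(code)
```

The compressed representation is the 32-float `code`; at deployment only `encode` needs to run per frame, which is a single small matrix multiply here.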
2020/04/21
<issue_start>username_0: I am trying to implement value and policy iteration algorithms. My value function from policy iteration looks vastly different from the values from value iteration, but the policy obtained from both is very similar. How is this possible? And what could be the possible reasons for this?<issue_comment>username_1: Both value iteration (VI) and policy iteration (PI) algorithms are guaranteed to converge to the optimal policy, so it is expected that you get similar policies from both algorithms (if they have converged). However, they get there differently: VI can be seen as a truncated version of PI. Let me first illustrate the pseudocode of both algorithms (taken from [Barto and Sutton's book](http://incompleteideas.net/book/RLbook2020.pdf)), which I suggest you get familiar with (but you are probably already familiar with them if you implemented both algorithms). [![enter image description here](https://i.stack.imgur.com/kKZx7.png)](https://i.stack.imgur.com/kKZx7.png) As you can see, policy iteration updates the policy multiple times, because it alternates a step of policy evaluation and a step of policy improvement, where a better policy is derived from the current best estimate of the value function. [![enter image description here](https://i.stack.imgur.com/CAAu5.png)](https://i.stack.imgur.com/CAAu5.png) On the other hand, value iteration updates the policy only once (at the end). In both cases, the policies are derived from the value functions in the same way. So, if you obtain similar policies, you may think that they were necessarily derived from similar *final* value functions. However, in general, this is not the case, and this is actually the motivation for the existence of value iteration, i.e. you may derive an optimal policy from a non-optimal value function. Barto and Sutton's book provides an example: see figure 4.1 on page 77 ([p. 99 of the pdf](http://incompleteideas.net/book/RLbook2020.pdf#page=99)).
For completeness, here's a screenshot of the figure. [![enter image description here](https://i.stack.imgur.com/DwyQL.png)](https://i.stack.imgur.com/DwyQL.png) Upvotes: 3 [selected_answer]<issue_comment>username_2: Some comments in addition to the accepted answer. The OP says the two algorithms produce different value functions. This is actually not precise and may be the source of confusion. In particular, only in the policy iteration algorithm is the value $v$ a state value function, i.e. the solution to the Bellman equation. The value $v$ in value iteration is not a state value function, simply because in general it is not the solution to any Bellman equation. Then what is the $v$ in value iteration? [See another answer of mine.](https://ai.stackexchange.com/a/32149/50121) Why can value iteration, which does not calculate the state values, still find the optimal policy? It is easier to see if you think of it as a simple iterative numerical algorithm solving the Bellman *optimality* equation. The algorithm follows from the contraction-mapping (fixed-point) theorem applied to the Bellman optimality equation. Upvotes: 0
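The claims in these answers can be checked concretely on a toy problem. The sketch below (not from the book; the chain MDP, discount factor, and all names are illustrative assumptions) runs both algorithms on a tiny deterministic chain and confirms they arrive at the same greedy policy:

```python
import numpy as np

# Tiny deterministic chain MDP: states 0..3, actions 0 (left) / 1 (right).
# Entering the terminal state 3 yields reward +1; discount is 0.9.
N_STATES, GAMMA = 4, 0.9

def step(s, a):
    if s == 3:                                   # terminal: absorbing
        return s, 0.0
    s2 = max(0, s - 1) if a == 0 else min(3, s + 1)
    return s2, (1.0 if s2 == 3 else 0.0)

def q_from_v(v):
    # One-step lookahead: q(s, a) = r + gamma * v(s')
    return np.array([[step(s, a)[1] + GAMMA * v[step(s, a)[0]]
                      for a in (0, 1)] for s in range(N_STATES)])

def value_iteration(theta=1e-8):
    v = np.zeros(N_STATES)
    while True:
        v_new = q_from_v(v).max(axis=1)          # greedy backup
        if np.abs(v_new - v).max() < theta:
            return q_from_v(v_new).argmax(axis=1), v_new
        v = v_new

def policy_iteration():
    policy = np.zeros(N_STATES, dtype=int)
    while True:
        v = np.zeros(N_STATES)                   # policy evaluation
        for _ in range(1000):
            v = np.array([step(s, policy[s])[1]
                          + GAMMA * v[step(s, policy[s])[0]]
                          for s in range(N_STATES)])
        new_policy = q_from_v(v).argmax(axis=1)  # policy improvement
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

pi_vi, v_vi = value_iteration()
pi_pi, v_pi = policy_iteration()
```

On this chain, both methods converge to "always go right" in the non-terminal states; truncating VI early (a larger `theta`) would leave `v_vi` visibly different from the evaluated `v_pi` while the greedy policies can already agree, which is exactly the situation the question describes.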
2020/04/22
<issue_start>username_0: Say the game is tic-tac-toe. I found two possible output layers: 1. Vector of length 9: each float of the vector represents 1 action (one of the 9 boxes in tic-tac-toe). The agent plays the action with the highest corresponding value. The agent learns the rules through trial and error: when the agent tries to make an illegal move (i.e. placing a piece on a box where there is already one), the reward will be harshly negative (-1000 or so). 2. A single float: the float represents who is winning (positive = "the agent is winning", negative = "the other player is winning"). The agent does not know the rules of the game. Each turn the agent is presented with all the possible next states (resulting from playing each legal action) and it chooses the state with the highest output value. What other options are there? I like the first option because it's cleaner, but it's not feasible with games that have thousands or millions of actions. Also, I am worried that the agent might not really learn the rules. E.g. say that in state S the action A is illegal, and that state R is extremely similar to state S but action A is legal in state R (and maybe in state R action A is actually the best move!). Isn't there the risk that, by learning not to play action A in state S, it will also learn not to play action A in state R? Probably not an issue in tic-tac-toe, but likely one in any game with more complex rules. What are the disadvantages of option 2? Does the choice depend on the game? What's your rule of thumb when choosing the output layer?<issue_comment>username_1: In reinforcement learning, neural networks are used to estimate the value function (board state worth), not to choose the action directly. In most games, the actions available are state-dependent anyway, so you cannot easily formulate them as ANN outputs.
So the idea is that, at each state, you consider the alternative actions, and the one that leads to the most valuable state is the action of choice (without using lookahead). Your ANN will thus be approximating the board state values. Strictly speaking, for tic-tac-toe you don't need a neural network; the tabular Q-learning method would suffice. Have you read the Sutton and Barto book on RL? Upvotes: 0 <issue_comment>username_2: It depends on whether the action is part of the input or the output of a neural network estimating the Q-value(state, action). [![alternative architectures for Q-function neural network](https://i.stack.imgur.com/csAky.png)](https://i.stack.imgur.com/csAky.png) The network on the left has the state as input and outputs one scalar value for each of the categorical actions. It has the advantage of being easy to set up and only needs one network evaluation to predict the Q-values for all actions. If the action space is categorical and single-dimensional, I would use it. The network on the right has both the state and a representation of the action as input and outputs one single scalar value. This architecture also allows computing the Q-value for multi-dimensional and continuous action spaces. The action space of tic-tac-toe can easily be represented by a vector of length 9, so I would recommend the left NN architecture. However, if your game has continuous-valued variables in the action space (e.g. the position of your mouse pointer), you should use the NN architecture on the right. The approach to prevent illegal moves is only partially dependent on the choice of the Q-function architecture and is covered by another question: [How to forbid actions](https://datascience.stackexchange.com/questions/37519/rl-agent-how-to-forbid-actions) Upvotes: 1
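On the illegal-move concern raised in the question, one common trick with the left-hand ("vector of length 9") architecture is to mask out illegal actions before taking the argmax, rather than relying on a harshly negative reward. A minimal sketch, with made-up Q-values and board state:

```python
import numpy as np

# Q-values from the "vector of length 9" architecture: one per cell.
# These numbers are made up for illustration.
q_values = np.array([0.3, -0.1, 0.8, 0.2, 0.5, -0.4, 0.9, 0.1, 0.0])

# Flattened tic-tac-toe board: 0 = empty, 1 / 2 = occupied by a player.
board = np.array([0, 0, 1, 0, 0, 0, 2, 0, 0])

# Mask occupied cells so the greedy choice can only be a legal move;
# without the mask, argmax would pick cell 6 (q = 0.9), which is taken.
masked = np.where(board == 0, q_values, -np.inf)
action = int(np.argmax(masked))   # best *legal* move
```

This sidesteps the generalization worry (learning "never play A" from state S leaking into the similar state R), because legality is enforced at action-selection time instead of being learned from penalties.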
2020/04/22
1,044
4,125
<issue_start>username_0: I am currently using TensorFlow and have simply been trying to train a neural network directly against a large continuous data set, e.g. $y = [0.014, 1.545, 10.232, 0.948, ...]$ corresponding to different points in time. The loss function in the fully connected neural network (input layer: 3 nodes, 8 inner layers: 20 nodes each, output layer: 1 node) is just the squared error between my prediction and the actual continuous data. It appears the neural network is able to learn the high magnitude data points relatively well (e.g. Figure 1 at time = 0.4422). But the smaller magnitude data points (e.g. Figure 2 at time = 1.1256) are quite poorly learned without any sharpness and I want to improve this. I've tried experimenting with different optimizers (e.g. mini-batch with Adam, full batch with L-BFGS), compared `reduce_mean` and `reduce_sum`, normalized the data in different ways (e.g. median, subtract the sample mean and divide by the standard deviation, divide the squared loss term by the actual data), and attempted to simply make the neural network deeper and train for a very long period of time (e.g. 7+ days). But after approximately 24 hours of training and the aforementioned tricks, I am not seeing any significant improvements in predicted outputs especially for the small magnitude data points. --- **Figure 1** [![Figure 1 at time = 0.4422](https://i.stack.imgur.com/NvLRi.png)](https://i.stack.imgur.com/NvLRi.png) --- **Figure 2** [![Figure 2 at time = 1.1256](https://i.stack.imgur.com/e64Lb.png)](https://i.stack.imgur.com/e64Lb.png) --- Therefore, do you have any recommendations on how to improve training particularly when there are different data points of varying magnitude I am trying to learn? 
I believe [this](https://stats.stackexchange.com/questions/221476/feature-varying-in-orders-of-magnitude) is a related question, but any explicit examples of implementations or techniques to handle varying orders of magnitude within a single large data set would be greatly appreciated.
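For concreteness, the mean/std normalization mentioned in the question, applied to the targets, looks like this (a generic sketch; the step that is easy to forget is inverting the transform on the network's predictions):

```python
import numpy as np

# Targets spanning different orders of magnitude, as in the question.
y = np.array([0.014, 1.545, 10.232, 0.948])

# Subtract the sample mean and divide by the standard deviation.
mu, sigma = y.mean(), y.std()
y_scaled = (y - mu) / sigma          # train the network against y_scaled

def to_original_scale(pred_scaled):
    """Map network outputs back to the original target scale."""
    return pred_scaled * sigma + mu

round_trip = to_original_scale(y_scaled)
```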
2020/04/24
714
3,161
<issue_start>username_0: In [previous research](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf), from 2015, Deep Q-Learning showed great performance on single-player Atari games. But why did AlphaGo's researchers use CNN + MCTS instead of Deep Q-Learning? Is that because Deep Q-Learning is somehow not suitable for Go?<issue_comment>username_1: Deep Q-Learning is a model-free algorithm. In the case of Go (and chess, for that matter), the model of the game is very simple and deterministic. It's a perfect-information game, so it's trivial to predict the next state given your current state and action (this is the model). They take advantage of this with MCTS to speed up training. I suppose Deep Q-Learning would also work, but it would be at a huge disadvantage. Upvotes: 3 <issue_comment>username_2: $Q$-learning (like its deep variant and most of the other well-known reinforcement learning algorithms) is inherently a learning approach for **single-agent environments**. The entire problem setting that these algorithms are developed for (Markov decision processes, or MDPs) is always framed in terms of a single agent situated in some environment, where that agent can take actions that have some level of influence over the states that they lead to, and rewards may be observed. If you have a problem that is, in reality, a multi-agent environment, there is a way to translate this environment to a single-agent setting: you simply have to assume that all other agents (i.e. your opponent in Go) are an inherent part of "the world" or "the environment", and that all the states in which these other agents make moves are not really states (not visible to your agent), but just intermediate steps where these part-of-the-environment agents cause the environment to change and, as a result, create state transitions. 
The primary issue with this approach is that we still need to model the decision-making of these agents in order to implement this new view of "the world", where our opponents are really a part of the world. Whatever implementation we give them, that is what our single-agent RL algorithm will learn to play against. We can just implement our opponents to be random agents, and run a single-agent RL algorithm like DQN, and then we will likely learn to play well against random agents. We'll probably still be very bad against strong opponents, though. **If we want to use a single-agent RL algorithm to learn to play well against strong opponents, we need to have an implementation for those strong opponents first.** But if we already have that... why even bother with learning at all? We've already got the strong Go player, so we're already done and don't need to learn! **MCTS** is a tree search algorithm, one that actively takes into account the fact that there is an opponent with opposing goals, tries to model the choices that this opponent can make, and can do so better the more computation time we give it. **This algorithm, and learning approaches built around it, are inherently designed to tackle the multi-agent setting** (with agents having opposing goals). Upvotes: 4 [selected_answer]
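The "opponent as part of the environment" translation described in this answer can be made concrete with a small wrapper. The toy take-away game and the random opponent below are illustrative assumptions, not anything from AlphaGo:

```python
import random

class OpponentAsEnvironment:
    """Turn a 2-player game into a single-agent environment by folding a
    fixed opponent policy into the state transition."""

    def __init__(self, opponent_policy, stones=10):
        self.opponent_policy = opponent_policy
        # Toy game: players alternately take 1-3 stones; taking the last one wins.
        self.stones = stones

    def step(self, action):                      # action in {1, 2, 3}
        self.stones -= action
        if self.stones <= 0:
            return self.stones, +1.0, True       # agent took the last stone: win
        # The opponent's move happens "inside" the environment transition,
        # so the agent never sees the intermediate state.
        self.stones -= self.opponent_policy(self.stones)
        if self.stones <= 0:
            return self.stones, -1.0, True       # opponent took the last stone: loss
        return self.stones, 0.0, False           # game continues

random.seed(0)
env = OpponentAsEnvironment(lambda s: random.randint(1, min(3, s)))
state, reward, done = env.step(3)                # the agent only sees its own turns
```

From the agent's point of view this is now an ordinary single-agent environment, and whatever policy is plugged in as `opponent_policy` is exactly what a single-agent algorithm like DQN would learn to beat.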
2020/04/25
2,649
9,273
<issue_start>username_0: I've been desperately trying to understand something for a couple of weeks. All those questions are actually one big question. Please help me. Time-codes and screens in my question refer to this great (IMHO) 3D explanation: <https://www.youtube.com/watch?v=UojVVG4PAG0&list=PLVZqlMpoM6kaJX_2lLKjEhWI0NlqHfqzp&index=2> .... Here is the case: Say I have 2 inputs (let's call them X1 and X2) into my ANN. Say X1 = a person's age and X2 = years of education. 1) **First question:** do I plug those numbers in as is, or normalize them to 0-1 as a "preprocessing" step? 2) As I have 2 weights and 1 bias, I am actually going to plug my inputs into the formula X1\*W1+X2\*W2=output. This is a 2D plane in 3D space, if I am not mistaken (time-code 5:31): [![2d plane in 3d space](https://i.stack.imgur.com/oV1Nj.png)](https://i.stack.imgur.com/oV1Nj.png) Thus, when I plug in my variables, as in regression, I will get my output on the Z axis. **So the second question is: am I right up to here?** ----------------- From here come a couple of really important questions. 3) My output (before I plug it into the activation function) is just a simple number, IT IS NOT A PLANE and NOT A SURFACE, but a simple scalar, with no sign on it that it came from a 2D surface in 3D space (though it does come from there). Thus, when I plug this number (which was the Z value in the previous step) into the activation function (say a sigmoid), my number enters on the X axis, and we get some Y value as the output. As I understand it, this operation was a **totally 2D operation**; it was a 2D sigmoid and not some kind of 3D sigmoidal surface. **So here is the question:** If I am right, why do we see such an explanation in this movie (and in a couple of other places)? (time-code 12:55):[![ 3d-sigmoidal surface ](https://i.stack.imgur.com/9Xomw.png)](https://i.stack.imgur.com/9Xomw.png) 4) Now let's say that I was right in the previous step, and as an output from the activation function I do get a simple number, not a 2D surface and not a 3D one. 
I just have some number, like I had at the very beginning of the ANN as an input (age, education, etc.). If I want to add another layer of neurons, this very number enters it **as is**, not telling anyone the "secret" that it was created by some kind of sigmoid. In this next layer the number undergoes transformations similar to what happened to age and education in the previous layer; it is going to be Xn in just the same scenario: sigmoid(Xn\*Wn+Xm\*Wm)=output, and in the end we will once again get just a number. If I am right, why do they say in the movie (time-code 14:50) that when we add together two activation functions we get something nonlinear? They show the result of such "addition" first in 2D (time-codes 14:50 and 14:58). [![addition of 2 activation functions ](https://i.stack.imgur.com/FR2Hh.png)](https://i.stack.imgur.com/FR2Hh.png) **So, here comes my question:** how come they "add" two activation functions, if only a simple number reaches the second activation function, which, as said above, is not telling anyone the "secret" that it was created by some kind of sigmoid? 5) And then again, they show this addition of 3D surfaces (time-code 19:39). [![19:39 ](https://i.stack.imgur.com/AUtkg.png)](https://i.stack.imgur.com/AUtkg.png) How is this possible? I mean, again, no addition of surfaces should happen, because no surface passes to the next step, only a number. What do I miss?<issue_comment>username_1: The output of any node is simply a scalar number. For a given input you get a specific scalar output. What is being shown are the surfaces that get generated as you VARY x1 and x2 over their input range. To answer your first question: it is always best to scale your inputs. Upvotes: 1 <issue_comment>username_2: Hi and welcome to the community. It's important to understand these basic concepts very clearly. You first have to understand the basic unit of a neural network, a single node/neuron/perceptron. 
Let us forget all about neural networks for a bit, and talk about something far simpler. **Linear Regression** [![A Linear Regression](https://i.stack.imgur.com/p0Ot9.png)](https://i.stack.imgur.com/p0Ot9.png) In the above figure, we clearly have one independent variable on the x-axis, and one dependent variable on the y-axis. The red line has an intercept of zero, and let's say a slope of 0.5. Therefore, $$ y = 0.5x + 0 $$ This, right here, is a single perceptron. You take a value of x, let's say 8, pass it through the node, and get a value as output, 4. Simple! But what is the model in this case? Is it the output? No. It's the set [0.5, 0] that represents the red line above. The outputs are simply points on that line. **A neural network model is always a set of values - a matrix or a tensor, if you will.** The plots in your question do not represent outputs. **They represent the models.** But now that you've possibly understood what a linear model with one independent variable looks like, I hope you can appreciate that having 2 independent variables will give us a plane in 3-D space. This is called multiple regression. [![multiple regression](https://i.stack.imgur.com/dMONi.png)](https://i.stack.imgur.com/dMONi.png) This forms the first layer of a neural network with linear activation functions. Assuming $ x\_{i} $ and $ x\_{j} $ as the two independent variables, the first layer computes $$ y\_{1} = w\_{1}x\_{i} + w\_{2}x\_{j} + b\_{1} $$ Note that while $ y\_{1} $ is the output of the first layer, the set $ [w\_{1}, w\_{2}, b\_{1}] $ is the model of the first layer and can be plotted as a plane in 3D space. The second layer, again a linear layer, computes $$ y\_{2} = w\_{3}y\_{1} + b\_{2} $$ Substitute $ y\_{1} $ in the above and what do you get? Another linear model! $$ y\_{2} = w\_{3}(w\_{1}x\_{i} + w\_{2}x\_{j} + b\_{1}) + b\_{2} $$ Adding layers to a neural network is only compounding of functions. 
**Compounding linear functions on linear functions results in linear functions.** Well, then, what was the point of adding a layer? Seems useless, right? Yes, adding linear layers to a neural network is absolutely useless. But what happens if the activation function of each perceptron, in each layer, is not linear? For example, the sigmoid or, the most widely used today, the ReLU. **Compounding non-linear functions on non-linear functions can increase non-linearity.** The ReLU looks like this: $$ y = \max(0, x) $$ [![relu](https://i.stack.imgur.com/3oIXS.png)](https://i.stack.imgur.com/3oIXS.png) This is definitely non-linear, but not as non-linear as, let's say, the sine wave. But can we approximate the sine wave by somehow "compounding" multiple, say $ N $, ReLUs? $$ \sin(x) \approx a + \sum\_{n=1}^{N}b\_{n}\max(0, c\_{n} + d\_{n}x)$$ And here the variables $ a, b\_{n}, c\_{n}, d\_{n} $ are the trainable "weights" in neural network terminology. [![sinx_as_relu](https://i.stack.imgur.com/qR5eG.png)](https://i.stack.imgur.com/qR5eG.png) If you remember the structure of the perceptron, the first operation is often denoted as a summation over all the inputs. This is how non-linearity is approximated in neural networks. Now one may ask: *So, summing over non-linear functions can approximate any function, right? So a single hidden layer between the input layer and output layer (one that sums over all the outputs of the hidden layer units) should be enough? Why do we often see neural network architectures with so many hidden layers?* This is one of the most important yet often overlooked aspects of neural networks and deep learning. To quote Dr. <NAME>, one of the brightest minds in AI, > > A feedforward network with a single (hidden) layer is sufficient to represent any function, but the layer may be infeasibly large and may fail to learn and generalize correctly. > > > So, what is the ideal number of hidden layers? There's no magic number! 
;-) For more mathematical rigor on how neural networks approximate non-linear functions, one should learn about the [Universal Approximation Theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem). Beginners should check [this](http://neuralnetworksanddeeplearning.com/chap4.html) out. But why should we care for increased non-linearity? For that I'd direct you to [this.](https://stats.stackexchange.com/questions/275358/why-is-increasing-the-non-linearity-of-neural-networks-desired) Note that all of the above discussion is with respect to regression. For classification, the non-linear surface learned is regarded as a decision boundary and points above and below the surface are classified into different classes. However, an alternative, and arguably better, way to look at this is that given a dataset that is not linearly seperable, a neural network first transforms the input dataset into a linearly seperable form and then uses a linear decision boundary on it. For more on this, definitely check out <NAME>'s amazing [blog](http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/). Finally, yes all independent variables must be normalized before training a neural network. This is to equalize the scale of different variables. More info [here](https://medium.com/@urvashilluniya/why-data-normalization-is-necessary-for-machine-learning-models-681b65a05029). Upvotes: 3 [selected_answer]
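The sum-of-ReLUs approximation of $\sin(x)$ sketched in this answer is easy to verify numerically. The snippet below is a simplification: instead of training all the weights by gradient descent, it fixes the ReLU "kink" positions on an even grid and fits only the linear coefficients by least squares:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 400)
target = np.sin(x)

# N ReLU units max(0, x - c_n) with evenly spaced kink positions c_n.
kinks = np.linspace(-np.pi, np.pi, 20)
features = np.maximum(0.0, x[:, None] - kinks[None, :])   # shape (400, 20)
features = np.hstack([np.ones((x.size, 1)), features])    # constant term a

# Solve for a and the b_n coefficients by least squares
# (standing in for gradient-descent training of the output weights).
coef, *_ = np.linalg.lstsq(features, target, rcond=None)
approx = features @ coef

max_error = np.max(np.abs(approx - target))               # worst-case deviation
```

Even with only 20 fixed ReLUs the piecewise-linear fit tracks the sine wave closely, which is the point of the figure in the answer.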
2020/04/27
723
3,354
<issue_start>username_0: I am using the following architecture: ``` 3*(fully connected -> batch normalization -> relu -> dropout) -> fully connected ``` Should I add the `batch normalization -> relu -> dropout` part after the last fully connected layer as well (the output is positive anyway, so the relu wouldn't hurt, I suppose)?<issue_comment>username_1: No, the activation of the output layer should instead be tailored to the labels you're trying to predict. The network prediction can be seen as a distribution, for example a categorical for classification or a Gaussian (or something more flexible) for regression. The output of your network should predict the sufficient statistics of this distribution. For example, a softmax activation on the last layer ensures that the outputs are positive and sum up to one, as you would expect for a categorical distribution. When you predict a Gaussian with mean and variance, you don't need an activation for the mean, but the variance has to be positive, so you could use exp as the activation for that part of the output. Upvotes: 0 <issue_comment>username_2: You don't put batch normalization or dropout layers after the last layer; they will just "corrupt" your predictions. They are intended to be used only within the network, to help it converge and avoid overfitting. BTW, even if your fully connected layer's output is always positive, it would have positive and negative outputs after batch normalization. But, as I said, you shouldn't have that layer there anyway. Upvotes: 2 <issue_comment>username_3: If your output is always positive (say from zero to infinity), I guess it won't hurt to put a relu after the last layer. Note that in the case where your output represents probabilities (in the range 0 to 1, say for classification), people generally apply a sigmoid or softmax (depending on whether the task is multilabel or not) after the last layer. 
But this is equivalent to not applying any activation to the output and instead interpreting the output as logits (say, setting a `from_logits=True`-style flag in the loss function). Choosing between the two is largely a matter of software-engineering style. As for the question of whether to apply batchnorm after the last layer, it doesn't make much sense to do so, because in that case each output node always has mean zero, so it's hard to make any strong prediction (even if you were to apply a relu after it). Actually, I think most people do not even use batchnorm **before** the last layer, but the reason for this is more empirical than theoretically justified. And you definitely do not want to apply dropout after the last layer, which would result in the correct prediction being occasionally dropped. Upvotes: 0 <issue_comment>username_4: There are both benefits and drawbacks to using batch normalization, relu & dropout after the last fully connected layer. One benefit of using this combination is that it can help to prevent overfitting, as the batch normalization and dropout provide additional regularization. However, using this combination may also make the model more difficult to train, as the batch normalization *can introduce additional vanishing gradients*. Ultimately, it is up to the practitioner to decide whether to use this combination after the last fully connected layer, based on the specific model and dataset. Upvotes: 0
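A minimal NumPy forward pass of the architecture from the question, with the three blocks kept inside the network and a plain linear output layer, as the answers recommend. The layer widths and random weights are arbitrary placeholders, and batch normalization here uses training-mode batch statistics for simplicity (a real implementation would track running statistics for inference):

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [16, 32, 32, 32, 1]              # input, 3 hidden blocks, output
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes, sizes[1:])]

def forward(x, train=False, drop_p=0.5):
    for W in weights[:-1]:
        x = x @ W                        # fully connected
        # batch normalization (per feature, over the batch dimension)
        x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-5)
        x = np.maximum(0, x)             # relu
        if train:                        # inverted dropout, training only
            x = x * (rng.random(x.shape) > drop_p) / (1 - drop_p)
    return x @ weights[-1]               # last layer: no bn, relu, or dropout

batch = rng.normal(size=(8, 16))
out = forward(batch)
```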
2020/04/27
2,043
8,140
<issue_start>username_0: I know it's not an exact science. But would you say that generally for more complicated tasks, deeper nets are required?<issue_comment>username_1: Deeper models can have advantages (in certain cases) ---------------------------------------------------- Most people will answer "yes" to your question, see e.g. [Why are neural networks becoming deeper, but not wider?](https://stats.stackexchange.com/q/222883/82135) and [Why do deep neural networks work well?](https://math.stackexchange.com/q/3147754/168764). In fact, there are cases where deep neural networks have certain advantages compared to shallow ones. For example, see the following papers * [The Power of Depth for Feedforward Neural Networks](http://proceedings.mlr.press/v49/eldan16.pdf) (2016) by <NAME> and <NAME> * [Benefits of depth in neural networks](https://arxiv.org/pdf/1602.04485.pdf) (2016) by <NAME>. * [Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks](https://arxiv.org/pdf/1610.09887.pdf) (2017) by Safran and Shamir * [Optimal approximation of piecewise smooth functions using deep ReLU neural networks](https://arxiv.org/pdf/1709.05289.pdf) (2018) by Petersen and Voigtlaender What about the width? --------------------- The following papers may be relevant * [Wide Residual Networks](https://arxiv.org/abs/1605.07146) (2017) by <NAME> and <NAME> * [The Expressive Power of Neural Networks: A View from the Width](http://papers.nips.cc/paper/7203-the-expressive-power-of-neural-networks-a-view-from-the-width) (2017) by <NAME> et al. 
Bigger models have bigger capacity but also have disadvantages -------------------------------------------------------------- <NAME> (co-inventor of VC theory and SVMs, and one of the most influential contributors to learning theory), who is [not a fan of neural networks](https://www.youtube.com/watch?v=Ow25mjFjSmg), will probably tell you that you should look for the smallest model (set of functions) that is consistent with your data (i.e. an admissible set of functions). For example, watch this podcast [<NAME>: Statistical Learning | Artificial Intelligence (AI) Podcast](https://www.youtube.com/watch?v=STFcvzoxVw4) (2018), where he says this. His new learning theory framework based on statistical invariants and predicates can be found in the paper [Rethinking statistical learning theory: learning using statistical invariants](https://link.springer.com/article/10.1007/s10994-018-5742-0) (2019). You should also read ["Learning Has Just Started" – an interview with Prof. <NAME>](https://www.learningtheory.org/learning-has-just-started-an-interview-with-prof-vladimir-vapnik/) (2014). Bigger models have a bigger capacity (i.e. a bigger VC dimension), which means that you will more likely **overfit the training data**, i.e., the model may not really be able to generalize to unseen data. So, in order not to overfit, models with more parameters (and thus capacity) will also require more data. You should also ask yourself why people use **regularisation techniques**. In practice, models that achieve state-of-the-art performance can be very deep, but they are also **computationally inefficient to train** and they **require huge amounts of training data** (either manually labeled or automatically generated). Moreover, there are many other technical complications with deeper neural networks, for example, problems such as the **vanishing (and exploding) gradient problem**. 
Complex tasks may not require bigger models ------------------------------------------- Some people will tell you that you require deep models because, empirically, some deep models have achieved state-of-the-art results, but that's probably because we haven't found cleverer and more efficient ways of solving these problems. Therefore, I would not say that "complex tasks" (whatever the definition is) necessarily require deeper or, in general, bigger models. While designing our models, it may be a good idea to always keep in mind principles like Occam's razor! A side note ----------- As a side note, I think that more people should focus more on the mathematical aspects of machine learning, i.e. computational and statistical learning theory. There are too many practitioners, who don't really understand the underlying learning theory, and too few theorists, and the progress could soon stagnate because of a lack of understanding of the underlying mathematical concepts. To give you a more concrete idea of the current mentality of the deep learning community, in [this lesson](https://www.youtube.com/watch?v=9EN_HoEk3KY), a person like <NAME>, who is considered an "important and leading" researcher in deep learning, talks about NP-complete problems as if he doesn't really know what he's talking about. NP-complete problems aren't just "hard problems". NP-completeness has a very specific definition in computational complexity theory! Upvotes: 4 <issue_comment>username_2: Deeper networks have more learning capacity in the sense that they can fit to more complex data. But at the same time, they are also more prone to overfitting the training data and therefore fails to generalize to the test set. Apart from overfitting, exploding/vanishing gradients is another problem which hampers convergence. This can be addressed by normalizing the initialization and normalizing the intermediate layers. Then you can do backpropagation with stochastic gradient descent (SGD). 
When deeper networks are able to converge, another problem, 'degradation', has been detected: the accuracy saturates and then starts to degrade. This is not caused by overfitting. In fact, adding more layers here leads to higher training error. A possible fix is to use ResNets (residual networks), which have been shown to decrease 'degradation'. Upvotes: 2 <issue_comment>username_3: My experience from a tactical standpoint is to start out with a smaller, simpler model first. Train the model and observe the training accuracy, validation loss, and validation accuracy. My observation is that, to be a good model, your training accuracy should achieve a value of at least 95%. If it does not, then try to optimize some of the hyper-parameters. If the training accuracy does not improve, then you may try to incrementally add more complexity to the model. As you add more complexity, the risk of overfitting and of vanishing or exploding gradients becomes higher. You can detect overfitting by monitoring the validation loss: if, as the model accuracy goes up, the validation loss on later epochs starts to go up, you are overfitting. At that point, you will have to take remedial action in your model, like adding dropout layers and using regularizers. Keras documentation is [here](https://keras.io/regularizers/). As pointed out in [the answer by username_1](https://ai.stackexchange.com/a/20689/2444), the theory addressing this issue is complex. I highly recommend the excellent tutorial on this subject, which can be found on YouTube [here](https://www.youtube.com/watch?v=mbyG85GZ0PI&list=PLnIDYuXHkit4LcWjDe0EwlE57WiGlBs08). Upvotes: 2 <issue_comment>username_4: Speaking *very* generally, I would say that with the current state of machine learning, a "more complicated" task requires *more trainable parameters*. You can increase the parameter count either by increasing width or by increasing depth. 
Again, speaking *very generally*, I would say that in practice, people have found more success by increasing depth than by increasing width. However, this depends a lot on what you mean by "more complicated". I would argue that generating something is a fundamentally more complicated problem than just identifying something. However, a GAN to generate a 4-pixel image will probably be far more shallow than the shallowest ImageNet network. One could also make an argument that the *definition* of complexity of a deep learning task is "more layers needed == more complicated", in which case it's obvious that by definition, a more complicated task requires a deeper net. Upvotes: 1
2020/04/27
855
3,236
<issue_start>username_0: I have a dataset which includes states, actions, and rewards. The dataset includes information on the transition, i.e., $p(r,s' \mid s,a)$. Is there a way to estimate a behavior policy from this dataset so that it can be used in an off-policy learning algorithm?<issue_comment>username_1: > > Is there a way to estimate a behavior policy from this dataset so that it can be used in an off-policy learning algorithm? > > > If you have enough examples of $(s,a)$ pairs for each instance of $s$, then you can simply estimate $$b(a|s) = \frac{N(a,s)}{N(s)}$$ where $N$ counts the number of instances in your dataset. This might be enough to use off-policy learning with importance sampling. Alternatively, you can use an off-policy approach that doesn't need importance sampling. The most straightforward one here would be single-step Q learning. The update step for 1-step Q-learning does not depend on the behaviour policy, because: * The action value being updated, $Q(s,a)$, already assumes $a$ is being taken, so you don't need any conditional probability there. * The TD target $r + \gamma \text{max}\_{a'}[Q(s',a')]$ does not need to be adjusted for the behaviour policy; it works with the target policy directly (implied as $\pi(s) = \text{argmax}\_{a}[Q(s,a)]$) A 2-step Q learning algorithm would need to adjust for the likelihood $b(a'|s')$ in the TD target $\frac{\pi(a'|s')}{b(a'|s')}(r + \gamma r' + \gamma^2\text{max}\_{a''}[Q(s'',a'')])$ - typically $\pi(a'|s')$ is either 0 or 1, thus making $b(a'|s')$ irrelevant some of the time. But you would still prefer to know it for performing updates if you can. If you are making updates offline and off-policy, then single-step Q learning is probably the simplest approach. It will require more update steps overall to reach convergence, but each one will be simpler. 
Upvotes: 1 [selected_answer]<issue_comment>username_2: If your data look like this, $(s\_{1},a\_{1},r\_{1},s\_{2}),(s\_{2},a\_{2},r\_{2},s\_{3}),\ldots,$ then the sample was drawn from a particular behavior policy. So, you do not need to find the behavior policy; just use Q-learning to find the optimal policy while following the behavior policy. If the MDP is too big, then consider applying Deep Q-Learning. In both cases, the given transition probabilities are of no use. But if you use on-policy learning and you know the dynamics of the system (i.e. the transition probabilities), I would recommend dynamic programming (if the state space is not too large). But for your problem setting above, you cannot use dynamic programming; your only choice is off-policy learning. Upvotes: 0 <issue_comment>username_3: You can simply train a policy from the inputs to predict the actions in your dataset. You can use the cross-entropy loss for this, i.e. maximize the log probability that the policy assigns to the actions in the data set when given the corresponding inputs. This is called behavioral cloning. The result is an approximation of the behavior policy that lets you compute probability densities of actions. It is an approximation because the dataset is finite, and even more so when you restrict the learned policy to a class of distributions, e.g. Gaussians. Upvotes: 1
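The count-based estimate $b(a|s) = N(a,s)/N(s)$ from the accepted answer takes only a few lines; the tiny logged dataset below is made up for illustration:

```python
from collections import Counter

# Hypothetical logged (state, action) pairs from an unknown behavior policy.
dataset = [("s1", "a1"), ("s1", "a1"), ("s1", "a2"), ("s2", "a1")]

n_sa = Counter(dataset)                  # N(a, s): visits to each (state, action)
n_s = Counter(s for s, _ in dataset)     # N(s): visits to each state

def b(action, state):
    """Empirical behavior policy b(a|s) = N(a,s) / N(s)."""
    return n_sa[(state, action)] / n_s[state]
```

As the answer notes, this only works where $N(s)$ is large enough for the ratio to be a reliable estimate; in rarely visited states the importance-sampling ratios built from it will be noisy.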
2020/04/27
390
1,624
<issue_start>username_0: People sometimes use 1st layer, 2nd layer to refer to a specific layer in a neural net. Is the layer that immediately follows the input layer called the 1st layer? How about the lowest layer and highest layer?<issue_comment>username_1: > > People sometimes use 1st layer, 2nd layer to refer to a specific layer in a neural net. Is the layer that immediately follows the input layer called the 1st layer? > > > The 1st layer should typically refer to the layer that comes after the input layer. Similarly, the 2nd layer should refer to the layer that comes after the 1st layer, and so on. However, note that this convention and terminology may not be applicable in all cases. You should always take into account your context! > > How about the lowest layer and highest layer? > > > To be honest, I also dislike this ambiguous terminology. From my experience, I don't think there's an agreement on the actual meaning of "lowest" or "highest". It depends on how you depict the neural network, but it's possible that "lowest" refers to the layers closer to the inputs, because, if you think of a neural network as a hierarchy that starts from the inputs and builds more complex representations of it, the "lowest" may refer to the "lowest in the hierarchy" (but who knows!). Upvotes: 3 [selected_answer]<issue_comment>username_2: The lowest layer generally refers to the layer closest to the input. This comes from the idea that layers closer to the input represent low-level features, such as gradients and edges, while layers closer to the output represent high-level features, such as parts and objects. Upvotes: 3
2020/04/28
890
3,820
<issue_start>username_0: I understand the gist of what convolutional neural networks do and what they are used for, but I still wrestle a bit with how they function on a conceptual level. For example, I get that filters with kernel size greater than 1 are used as feature detectors, that the number of filters is equal to the number of output channels for a convolutional layer, and that the number of features being detected scales with the number of filters/channels. However, recently, I've been encountering an increasing number of models that employ 1- or 2D convolutions with kernel sizes of 1 or 1x1, and I can't quite grasp why. It feels to me like they defeat the purpose of performing a convolution in the first place. What is the advantage of using such layers? Are they not just equivalent to multiplying each channel by a trainable, scalar value?<issue_comment>username_1: Traditional CNNs used for image classification (and related tasks) are composed of 1 or more fully connected layers (FCs), after the convolutional and pooling layers, which take as input the features extracted from the convolutional and pooling layers, in order to perform classification or regression. One problem with FCs in CNNs is that the number of parameters can be very big, with respect to the number of parameters in the convolutional layers. There are tasks, such as *image segmentation*, where this big number of parameters is not really needed. An example of a neural network that does not make use of fully connected layers but only uses convolutions, downsampling (aka pooling), and upsampling operations is the [U-net](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/), which is used for image segmentation. A neural network that only uses convolutions is known as a *fully convolutional network* (FCN). [Here](https://ai.stackexchange.com/a/21824/2444) I give a detailed description of FCNs and $1 \times 1$ convolutions, which should also answer your question.
In any case, to answer your question more directly, $1 \times 1$ convolutions have been used for **image segmentation** tasks, i.e. *dense* classification tasks, i.e. tasks where you want to assign a label to each pixel (or a group of pixels), as opposed to *sparse* classification tasks such as image classification (where the goal is to assign 1 label to the whole image). Moreover, in comparison with FC layers, they have **fewer parameters** and, more importantly, the **number of parameters in an FCN does not depend on the dimensions of the images** (as it does in the case of traditional CNNs), which is a good thing (especially, when your images have high resolutions), although, in the case of instance segmentation, it typically depends on the number of kernels and object instances. [The FCN paper](https://arxiv.org/pdf/1411.4038.pdf) discusses this reduction of the number of parameters (and computation time), so you should probably read this paper for more details. Upvotes: 1 <issue_comment>username_2: Typically 1x1 convolutions are used for changing the number of channels. Each output channel is a linear combination of the input channels. For example, if you perform a 1x1 convolution with only one output channel on an RGB image, then you get a grayscale image, whose intensity is a linear combination of the red, green, and blue values of the corresponding pixel (plus bias). If you perform a 1x1 convolution with more than one output channel, then each channel is formed in the same way, as a linear combination of the input channels. You can think of it as multiple convolutions, whose outputs are stacked on top of each other. All these filters have different parameters. Notice that if the output were equivalent to multiplying each channel by a scalar value, then you would always have the same number of inputs and outputs. Upvotes: 0
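To make the "linear combination of channels" point concrete, here is a small NumPy sketch (shapes and values are arbitrary) showing that a $1 \times 1$ convolution is exactly a per-pixel matrix multiplication across channels, not a per-channel scalar:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary input: 3 channels, 4x4 spatial resolution.
x = rng.standard_normal((3, 4, 4))

# A 1x1 convolution with 5 output channels is a 5x3 weight matrix applied
# independently at every pixel (bias omitted for brevity).
w = rng.standard_normal((5, 3))

# Convolution view: each output channel is a weighted sum of input channels.
y_conv = np.einsum('oc,chw->ohw', w, x)

# Per-pixel matrix-multiply view: flatten pixels, multiply, reshape back.
y_mm = (w @ x.reshape(3, -1)).reshape(5, 4, 4)
```

The two computations agree elementwise, and the number of parameters (15 here) is independent of the spatial resolution.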
2020/04/28
1,071
3,618
<issue_start>username_0: > > In the standard Markov Decision Process (MDP) formalization of the reinforcement-learning (RL) problem (Sutton & Barto, 1998), a decision maker interacts with an environment consisting of **finite state and action spaces**. > > > This is an extract from [this paper](http://carlosdiuk.github.io/papers/OORL.pdf), although it has nothing to do with the paper's content per se (just a small part of the introduction). Could someone please explain why it makes sense to study finite state and action spaces? In the real world, we might not be able to restrict ourselves to a finite number of states and actions! Thinking of humans as RL agents, this really doesn't make sense.<issue_comment>username_1: In addition to the reason outlined in the comment, also note that if the state-space and action-space are both finite and of feasible size, tabular methods can be used, and there are some advantages to them (like the existence of convergence guarantees and generally a smaller number of hyperparameters to tune). Upvotes: 2 [selected_answer]<issue_comment>username_2: **Note: I assume that by 'finite' you mean that the action and state sets are finite (and hence countable).** *MDPs are not exclusive to finite spaces: they can also be defined over continuous/uncountable sets of actions and states.* A *Markov Decision Process (MDP)* is a tuple $(\mathcal S, \mathcal A, \mathcal P^a\_{ss'}, \mathcal R^a\_{ss'}, \gamma, \mathcal S\_o)$ where $\mathcal S$ is a set of states, $\mathcal A$ is the set of actions, and $\mathcal P^a\_{ss'}: \mathcal S \times \mathcal A \times \mathcal S \rightarrow [0, 1]$ is a function that gives the probability of transitioning to state $s'$ if action $a$ is executed at state $s$.
[1][2] The Q-function is defined as: $$ Q^\pi (s,a) = \mathbb E\_\pi \left[ \sum \limits\_{t=0}^{+\infty} \gamma^t r\_t \mid s\_0 = s, a\_0 = a \right] \tag{\*}$$ *Note that $r\_t$ is just a special case of the reward function $\mathcal R^a\_{ss'}$.* Now, if states and actions are discrete, then the Q-table method [[3]](https://towardsdatascience.com/simple-reinforcement-learning-q-learning-fcddc4b6fe56), which stores $Q$ in a state-action matrix, lets us evaluate the $Q$ function efficiently. In cases where the state/action sets are infinite or continuous, deep networks are preferred to approximate the $Q$ function [[4]](https://www.geeksforgeeks.org/deep-q-learning/). **Q-learning is an off-policy method: it does not require knowing the policy $\pi$ that generated the data.** --- References: ----------- 1. <NAME> and <NAME>. *Reinforcement Learning: An Introduction*. MIT Press, 1998. 2. <NAME>, <NAME>, <NAME>, <NAME>, <NAME> and <NAME>. *A Tutorial on Linear Function Approximators for Dynamic Programming and Reinforcement Learning*. Foundations and Trends (R) in Machine Learning Vol. 6, No. 4 (2013) 375–454 3. <NAME>. [*Simple Reinforcement Learning: Q-learning*](https://towardsdatascience.com/simple-reinforcement-learning-q-learning-fcddc4b6fe56), *Create a q-table*, <https://towardsdatascience.com>, 2019. 4. <NAME>. [*Deep Q-Learning*](https://www.geeksforgeeks.org/deep-q-learning/), Deep Q-Learning, <https://www.geeksforgeeks.org/deep-q-learning/>, 2020. --- **Edit: I'd like to thank @nbro for editing suggestions.** Upvotes: 1 <issue_comment>username_3: To my knowledge you can't compute or solve an uncountably large MDP numerically. It will need to be discretized in some capacity. The same applies for classic control: you can't optimize over the true functional, so you use a discrete approximation to the system and solve that. Upvotes: 0
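To illustrate why finite spaces with known dynamics are convenient (as the dynamic-programming suggestion above points out), here is a minimal value-iteration sketch on a made-up 2-state, 2-action MDP; all transition probabilities and rewards are invented for illustration:

```python
import numpy as np

# Toy finite MDP with known dynamics: P[s, a, s'] and R[s, a].
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.9, 0.1]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9

# Value iteration: repeatedly apply the Bellman optimality operator.
V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * P @ V   # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)   # greedy policy w.r.t. the converged values
```

With finite $\mathcal S$ and $\mathcal A$ and known $\mathcal P$, this converges to the optimal values without any sampling at all.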
2020/04/29
1,147
4,130
<issue_start>username_0: I'm interested in using Reinforcement Learning in a setting that might seem more suitable for Supervised Learning. There's a dataset $X$ and for each sample $x$ some decision needs to be made. Supervised Learning can't be used since there aren't any algorithms to solve or approximate the problem (so I can't solve it on the dataset), but for a given decision it's very easy to decide how good it is (define a reward). For example, you can think about the knapsack problem - let's say we have a dataset where each sample $x$ is a list (of, say, size 5) of objects, each associated with a weight and a value, and we want to decide which objects to choose (of course you can solve the knapsack problem for lists of size 5, but let's imagine that you can't). For each solution the reward is the value of the chosen objects (and if the weight exceeds the allowed weight then the reward is 0 or something). So, we let an agent "play" with each sample $M$ times, where play just means choosing some subset and training with the given value. For the $i$-th sample the step can be adjusted to be: $$\theta = \theta + \alpha \nabla\_{\theta}\log \pi\_{\theta}(a|x^i)v$$ for each "game" with "action" $a$ and value $v$, instead of the original step: $$\theta = \theta + \alpha \nabla\_{\theta}\log \pi\_{\theta}(a\_t|s\_t)v\_t$$ Essentially, we replace the state with the sample. The issue with this is that REINFORCE assumes that an action also leads to some new state, which is not the case here. Anyway, do you think something like this could work?<issue_comment>username_1: This seems like a multi-armed bandit problem (no states involved here). I had the same problem some time ago and I was advised to sample the output distribution M times, calculate the rewards and then feed them to the agent; this was also explained in [this paper](https://arxiv.org/pdf/1801.07365.pdf), Algorithm 1, page 3 (but different problem & different context).
I honestly don't know if this will work for your case. You could also take a look at [this example](https://github.com/shshemi/NeuralKnapsack/blob/master/neural_knapsack.py). Upvotes: 1 <issue_comment>username_2: You should look into [contextual bandits](https://en.wikipedia.org/wiki/Multi-armed_bandit#Contextual_bandit), and specifically [gradient bandit solvers](https://towardsdatascience.com/13-solutions-to-multi-arm-bandit-problem-for-non-mathematicians-1b88b4c0b3fc) (see section 13). Your derivation of the gradient seems correct to me. Instead of a sampled/bootstrapped value function (as in Actor-Critic) or a sampled full return (as in REINFORCE) you can use the sampled reward. You will probably want to subtract a baseline from $v$, e.g. a rolling average reward for the current policy. I have [successfully used a gradient bandit solver for a one-shot optimisation problem](https://www.kaggle.com/slobo777/pytorch-giant-gradient-bandit) with 5000-dimensional actions. It was not as strong as a custom optimiser or SAT solver, but whether or not that is an issue for you will depend on the problem. Upvotes: 0 <issue_comment>username_3: Besides the contextual-bandit or multi-armed-bandit perspective, if you want to use a dataset to train an RL policy, I would recommend [Batch RL](http://tgabel.de/cms/fileadmin/user_upload/documents/Lange_Gabel_EtAl_RL-Book-12.pdf): it is another RL setting in which a policy is trained from a fixed dataset, in a supervised-learning-like way. For your problem, I think you can still use one-state trajectories to train REINFORCE. For example, consider a trajectory $\tau = (s, a, r, s^{\prime})$, where $s^{\prime}$ is NULL. Using REINFORCE, you can compute the gradient update $\theta = \theta + \alpha \nabla\_{\theta}\log \pi\_{\theta}(a|s)r$, and you do not need $s^{\prime}$ here. Upvotes: 0 <issue_comment>username_4: I think the key to your problem may not be the **one-round** aspect. Using RL to solve the knapsack problem is closely related to the topic of **RL for combinatorial optimization**. You can read [NEURAL COMBINATORIAL OPTIMIZATION WITH REINFORCEMENT LEARNING](https://arxiv.org/pdf/1611.09940.pdf) to get some ideas and find more related solutions. Upvotes: 1
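For completeness, here is a tiny self-contained sketch of the one-shot REINFORCE idea from the question, applied to a made-up 5-item knapsack instance (all values, weights, and hyperparameters are invented; the policy is a set of independent Bernoulli "pick" probabilities, so an episode is a single subset choice with no next state):

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up 5-item knapsack instance (values, weights, capacity).
values = np.array([6.0, 5.0, 4.0, 3.0, 1.0])
weights = np.array([4.0, 3.0, 2.0, 2.0, 1.0])
capacity = 5.0

def reward(picks):
    # Total value of the chosen subset; 0 if the weight limit is exceeded.
    if weights[picks].sum() > capacity:
        return 0.0
    return float(values[picks].sum())

# One-shot REINFORCE: the "state" is the instance itself, an "episode" is a
# single subset choice, and there is no next state.
theta = np.zeros(5)                      # one logit per item
baseline, alpha = 0.0, 0.05
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-theta))     # independent pick probabilities
    picks = rng.random(5) < p
    r = reward(picks)
    # grad of log-probability for independent Bernoulli picks: (a - p)
    theta += alpha * (r - baseline) * (picks - p)
    baseline += 0.05 * (r - baseline)    # rolling-average baseline
```

The update is exactly $\theta \leftarrow \theta + \alpha \nabla\_\theta \log \pi\_\theta(a|x)(r - b)$ with a rolling-average baseline $b$; since $\log \pi$ for independent Bernoullis has gradient $a - p$, no autodiff is needed.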
2020/04/29
456
1,821
<issue_start>username_0: What does the term **"easy negatives"** exactly mean in the context of machine learning for a classification problem or any problem in general? From a quick google search, I think it means just negative examples in the training set. Can someone please elaborate a bit more on why the term "easy" is brought into the picture? Below, there is a screenshot taken from [the paper](https://openaccess.thecvf.com/content_ICCV_2017/papers/Lin_Focal_Loss_for_ICCV_2017_paper.pdf) where I found this term, which is underlined. [![enter image description here](https://i.stack.imgur.com/oLl7m.jpg)](https://i.stack.imgur.com/oLl7m.jpg)<issue_comment>username_1: OK, I think I understood what this means. Hard and easy negatives are the ones that have relatively large and small values for the loss function, respectively. Upvotes: 1 <issue_comment>username_2: It refers to samples that are very easy for the model to classify. If you are interested in the positive class, having many easy negatives could produce misleading results as your model could really struggle to classify not-so-easy samples. --- In a very hypothetical situation, imagine you are trying to classify brain scan images based on whether they show signs of a tumor or not. For the negative class, say you have a bunch of normal brain scans that have no tumor, but also a bunch of plain images, all black with nothing on them (you wouldn't have that, but let's imagine). For the positive class you got normal brain images with tumors. If you train a model, it might just learn that plain black pictures have no tumors, which is in fact true. Since half of your negative data have this kind of picture, the model could be virtually performing with a kinda-good accuracy but it would not be learning the actual problem. Upvotes: 0
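The focal loss from the underlined paper makes this distinction explicit: its $(1 - p\_t)^\gamma$ factor down-weights easy examples. A quick NumPy sketch (the probabilities are chosen arbitrarily for illustration):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0):
    # Focal loss, FL(p_t) = -(1 - p_t)^gamma * log(p_t), where p_t is the
    # predicted probability of the true class (binary case).
    p_t = np.where(y == 1, p, 1.0 - p)
    return -((1.0 - p_t) ** gamma) * np.log(p_t)

# "Easy negative": the model is already confident the sample is negative.
easy = float(focal_loss(np.array([0.05]), np.array([0]))[0])
# "Hard negative": the model wrongly leans towards the positive class.
hard = float(focal_loss(np.array([0.9]), np.array([0]))[0])
```

With $\gamma = 2$, the easy negative contributes a loss thousands of times smaller than the hard one, which is why a sea of easy negatives no longer swamps the training signal.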
2020/04/29
452
1,763
<issue_start>username_0: In DDPG, if there are no $\epsilon$-greedy and no action noise, is DDPG an on-policy algorithm?<issue_comment>username_1: If there was no action noise it would probably not explore enough to obtain a good estimate of Q or the policy gradient. Instead of estimating Q of the target policy you could estimate Q of the behavior policy, but then you have a stochastic policy and the deterministic policy gradient theorem does not work anymore, as it is a special case of the stochastic policy gradient theorem (see Section 3.3 of the DPG paper, <http://proceedings.mlr.press/v32/silver14.pdf>). You would have to use the policy gradient theorem of Section 2.2 from the DPG paper. Upvotes: 0 <issue_comment>username_2: DDPG is an off-policy algorithm simply because the objective takes an expectation with respect to some other distribution that we are not learning about, i.e. the deterministic policy gradient can be expressed as $$\nabla \_{\theta^\mu} J \approx \mathbb{E}\_{s\_t \sim \rho^\beta} \left[ \nabla \_{\theta^\mu} Q(s,a|\theta^Q) | s=s\_t, a=\mu(s\_t ; \theta ^\mu) \right]\;.$$ We are interested in learning about the policy parameters of $\mu$, denoted by $\theta$, but we take the expectation with respect to some discounted state distribution induced by a policy $\beta$, which we will denote as $\rho^\beta$. To summarise, we are learning off-policy because the expectation of the gradient is taken with respect to some state distribution that occurs under some policy that we are not learning about. Given that on-policy learning is a special case of off-policy learning, if the replay buffer had a size of one, i.e. we used only the most recent experience tuple to perform parameter updates, then DDPG would be on-policy. Upvotes: 2
2020/04/29
467
1,847
<issue_start>username_0: What kinds of techniques do autopilots of autonomous cars (e.g. the ones of Tesla) use? Do they use reinforcement learning? Which types of neural network architecture do they use?
2020/04/29
567
2,357
<issue_start>username_0: Currently, what are the most popular and effective approaches to leveraging AI for stock price prediction? It seems like there could be several approaches and problem formulations: * Supervised learning: * Regression: predict the stock price directly * Classification: predict whether the stock price goes up or down * Unsupervised learning: find clusters of stocks that move together * Reinforcement learning: let the agent directly maximize its stock market return * Other AI methods: rules, symbolic systems, etc. Which are most popular/performant? Are there other ways that people are using machine learning in stock trading (sentiment analysis on financial statements, news, etc.)?
2020/04/30
733
2,688
<issue_start>username_0: > > "Single-object tracking commonly uses **Siamese networks, which can be seen as an RNN** unrolled over two time-steps." > > > [(from the SQAIR paper)](https://arxiv.org/abs/1806.01794) I'm wondering how Siamese networks can be viewed as RNNs, as mentioned above. A diagrammatic explanation, or anything that helps understand the same, would help! Thank you!<issue_comment>username_1: [![RNN structure](https://i.stack.imgur.com/mlDSR.png)](https://i.stack.imgur.com/mlDSR.png) Well, here in the picture we have the unrolled (or unfolded) RNN on the right side. A Siamese network is formed when the RNN is "unrolled over two time-steps": take the part with the first two iterations of the RNN and you have a kind of Siamese network. One take from the [source of the image](http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/): > > Unlike a traditional deep neural network, which uses different parameters at each layer, a RNN shares the same parameters (U, V, W above) across all steps. This reflects the fact that we are performing the same task at each step, just with different inputs. This greatly reduces the total number of parameters we need to learn. > > > This sounds a lot like a Siamese network used for single-object tracking: there we take two signals (the image and the tracked object), drive them through *identical* paths and do some maths to get the results, which is just what the RNN does to time-separated values! For proof of similarity, a take from a [site where siamese networks are nicely explained](https://medium.com/@reachraktim/object-tracking-with-siamese-networks-and-detectron2-572e04dac547): [![enter image description here](https://i.stack.imgur.com/tEZBm.png)](https://i.stack.imgur.com/tEZBm.png) **Side note:** *I don't know how closely these relate in the real world (whether a Siamese network could in any way be an RNN or vice versa), but presumably they do, since the researchers make that comparison. At the diagrammatic level, at least, there is no problem with the analogy.* Upvotes: 3 [selected_answer]<issue_comment>username_2: Single-object tracking using a Siamese network is a detect-and-compare approach: an object of interest is detected, and the one to be tracked is passed through a Siamese network together with the next consecutive frame to get a correlation between them. If you look at the related [reference](https://arxiv.org/abs/1704.06036), it is correlation-based tracking, i.e. the correlation between objects detected in one frame and the ones detected in the next frame, which you can imagine as samples considered across 2 time-steps, or an RNN unrolled to 2 time-steps. Upvotes: 0
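The weight-sharing analogy can be sketched in a few lines of NumPy (all shapes and values are arbitrary): one embedding function with a single weight matrix is applied to both inputs, just as an RNN cell reuses its parameters at each time step:

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared weight matrix, used by both branches of the "Siamese" pair,
# the same way an RNN reuses its parameters across time steps.
W = rng.standard_normal((8, 16))

def f(x):
    # Identical path for both inputs (a single tanh layer here).
    return np.tanh(W @ x)

x_template = rng.standard_normal(16)   # e.g. the tracked-object patch
x_search = rng.standard_normal(16)     # e.g. the current-frame patch

e1, e2 = f(x_template), f(x_search)
score = float(e1 @ e2)                 # a simple similarity score
```

Reading `x_template` and `x_search` as the inputs at "time steps" 1 and 2 gives exactly the two-step unrolled view from the quote.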
2020/04/30
781
3,064
<issue_start>username_0: In case I had a prediction model and decided to add a PCA step prior to the model, is it theoretically possible/impossible that the number of output dimensions that is better for all tests may perform worse than the model without PCA? My question comes from the fact that I want to add a PCA step prior to a model and hyperparameterize the PCA output dimension from 1 to N (N being the number of dimensions in the original dataset) and I wanted to know if there is any theoretical basis that there is no case in which performing this previous step could have a worse performance than the previous model. Especially, my doubt comes if the best PCA case from a selection of dimensions from 1-N is always better than the best case without PCA.
2020/05/01
1,583
4,511
<issue_start>username_0: I am trying to understand the mathematics behind the forward and backward propagation of neural nets. To make myself more comfortable, I am testing myself with an arbitrarily chosen neural network. However, I am stuck at some point. Consider a simple fully connected neural network with two hidden layers. For simplicity, choose the linear activation function (${f(x) = x}$) at all layers. Now consider that this neural network takes two $n$-dimensional inputs $X^{1}$ and $X^{2}$. However, the first hidden layer only takes $X^1$ as the input and produces the output $H^1$. The second hidden layer takes $H^{1}$ and $X^2$ as the input and produces the output $H^{2}$. The output layer takes $H^{2}$ as the input and produces the output $\hat{Y}$. For simplicity, assume we do not have any bias. So, we can write that $H^1 = W^{x1}X^{1}$ $H^2 = W^{h}H^{1} + W^{x2}X^{2} = W^{h}W^{x1}X^{1} + W^{x2}X^{2}$ [substituting the value of $H^1$] $\hat{Y} = W^{y}H^2$ Here, $W^{x1}$, $W^{x2}$, $W^{h}$ and $W^{y}$ are the weight matrices. Now, to make it more interesting, consider a shared weight matrix $W^{x} = W^{x1} = W^{x2}$, which leads to $H^1 = W^{x}X^{1}$ and $H^2 = W^{h}W^{x}X^{1} + W^{x}X^{2}$ I do not have any problem doing forward propagation by hand; however, the problem arises when I try to do backward propagation and update $W^{x}$. $\frac{\partial loss}{\partial W^{x}} = \frac{\partial loss}{\partial H^{2}} . \frac{\partial H^{2}}{\partial W^{x}}$ Substituting $\frac{\partial loss}{\partial H^{2}} = \frac{\partial Y}{\partial H^{2}}. \frac{\partial loss}{\partial Y}$ and $H^2 = W^{h}W^{x}X^{1} + W^{x}X^{2}$, $\frac{\partial loss}{\partial W^{x}}= \frac{\partial Y}{\partial H^{2}}. \frac{\partial loss}{\partial Y} . 
\frac{\partial}{\partial W^{x}} (W^{h}W^{x}X^{1} + W^{x}X^{2})$ Here I understand that $\frac{\partial Y}{\partial H^{2}} = (W^y)^T$ and $\frac{\partial}{\partial W^{x}} W^{x}X^{2} = (X^{2})^T$, and we can also calculate $\frac{\partial loss}{\partial Y}$, if we know the loss function. But how do we calculate $\frac{\partial}{\partial W^{x}} W^{h}W^{x}X^{1}$?<issue_comment>username_1: If we write $ H^2 = W^{h}H^{1} + W^{x}X^{2} $ then it will be easier to understand the backward propagation step. Now, $\frac{\partial}{\partial W^{x}} W^{h}W^{x}X^{1}$ can be written as: $\frac{\partial H^2}{\partial H^1}\frac{\partial H^1}{\partial W^{x}} $ $\frac{\partial H^2}{\partial H^1} = (W^h)^T$ and $\frac{\partial H^1}{\partial W^{x}} = (X^{1})^T $ Therefore, $\frac{\partial}{\partial W^{x}} W^{h}W^{x}X^{1} = (W^h)^T(X^{1})^T $ I hope this solves your problem. Upvotes: 1 <issue_comment>username_2: The product rule of the partial derivative: $\frac{\partial}{\partial x} f g = g \frac{\partial}{\partial x} f + f \frac{\partial}{\partial x} g$ According to this, $\frac{\partial}{\partial W^{x}} W^{h}W^{x}X^{1} = W^{h}X^{1}$, because the derivative of the other factor with respect to $W^{x}$ is zero. (I am not considering the transpose notation, as it depends on how you organize your data.) **However**, your assumption of giving $H^{1}$ and $X^{2}$ as inputs to the second hidden layer is not valid (they are called hidden layers for that reason). The output of the first hidden layer ($H^{1}$) will be fed to the input of the second hidden layer. The output of your second hidden layer would be $H^{2} = W^{h} \* H^{1}$. You have to feed your inputs $X^{1}, X^{2}$ to your network at once, by means of looping or vectorization. Upvotes: 0 <issue_comment>username_3: I think your notations are unclear, but I can give an answer based on what you probably meant. For example, $\frac{\partial{L}}{\partial{W^x}}$ should be replaced by $(\nabla\_{W^x\_{j:}}L)\_{j=1, ...,n}$ (assuming everything stays in $\mathbb{R}^n$). Also, your expression for $\frac{\partial{L}}{\partial{W^x}}$ is wrong, even accounting for the notation. Since $W^x\_{j:}$ affects the loss through $H\_{1,j}$ and $H\_{2,j}$, it would be better to treat the math in this way: $$\nabla\_{W^x\_{j:}}L=\frac{\partial{L}}{\partial{H\_{1,j}}}\nabla\_{W^x\_{j:}}H\_{1,j}+\frac{\partial{L}}{\partial{H\_{2,j}}}\nabla\_{W^x\_{j:}}H\_{2,j}$$ Now, $H\_{1, j}$ affects the loss through $H\_{2,k}\ \forall\ k=1,...,n.$ So, $$\frac{\partial{L}}{\partial{H\_{1,j}}}=\sum\_{k=1}^{n}\frac{\partial{L}}{\partial{H\_{2,k}}}W^h\_{kj}$$ And, $$\frac{\partial{L}}{\partial{H\_{2,j}}}=\sum\_{k=1}^{n}\frac{\partial{L}}{\partial{Y\_{k}}}W^y\_{kj}$$ Similarly, $\nabla\_YL$ can be computed. Upvotes: 1
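The per-path gradients above are easy to sanity-check numerically. Here is a small NumPy sketch for the scalar loss $L = c^\top H^2$ with $H^2 = W^h W^x X^1 + W^x X^2$ (the dimensions and the choice of $L$ are arbitrary): the analytic gradient is the sum of the two paths through which $W^x$ acts, and it matches central finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Wh = rng.standard_normal((n, n))
Wx = rng.standard_normal((n, n))
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
c = rng.standard_normal(n)

def loss(W):
    # Scalar "loss": c^T (Wh W x1 + W x2), with the shared matrix W.
    return c @ (Wh @ W @ x1 + W @ x2)

# Analytic gradient, summing the two paths through which Wx affects H2:
#   dL/dWx = Wh^T c x1^T  (via H1)  +  c x2^T  (direct path).
grad_analytic = np.outer(Wh.T @ c, x1) + np.outer(c, x2)

# Central finite differences, one entry of Wx at a time.
eps = 1e-6
grad_fd = np.zeros_like(Wx)
for i in range(n):
    for j in range(n):
        E = np.zeros_like(Wx)
        E[i, j] = eps
        grad_fd[i, j] = (loss(Wx + E) - loss(Wx - E)) / (2 * eps)
```

Since the loss is linear in the shared matrix, the finite-difference estimate agrees with the analytic two-path gradient up to floating-point rounding.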
2020/05/01
431
1,735
<issue_start>username_0: In the context of Reinforcement Learning, **what does it mean to have a multi-dimensional continuous action space?** I came across the following in the [COBRA Paper](https://arxiv.org/abs/1905.09275) > > A method for learning a distribution over a **multi-dimensional continuous action space.** This learned distribution can be sampled efficiently. > > > and > > During the initial exploration phase it explores its environment, in which it can move objects freely with a **continuous action space** but is not rewarded for its actions. > > > So, what do the multi-dimensionality and the continuity of the action space refer to? It'd be great if someone could provide an explanation with examples!<issue_comment>username_1: Let me rephrase it a little - it's a multidimensional continuous space of actions. So, you assign each action some vector from $R^{n}$. For intuition -- imagine you have a robot arm with four joints. For every joint you could apply a rotation force in [-1, 1], and thus you get a 4-D vector of float numbers for each possible action. Upvotes: 3 [selected_answer]<issue_comment>username_2: The question has already been answered by Kirill, but I thought I'd add a good example of a multi-dimensional continuous action space too, namely the one I just encountered in the COBRA paper itself. > > In all of our experiments we use a 2-dimensional virtual "touch-screen" environment that contains objects with configurable shape, position, and color. The agent can move any visible object by clicking on the object and clicking in a direction for the object to move. Hence the action space is **continuous and 4-dimensional, namely a pair of clicks**. > > > Upvotes: 1
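A tiny sketch of the robot-arm example (all numbers are invented): each action is a whole 4-D real vector, sampled here from a diagonal Gaussian policy and clipped to the valid torque range:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-joint robot arm: an action is a vector in [-1, 1]^4 of
# joint torques, i.e. a 4-dimensional continuous action space.
mean = np.zeros(4)           # illustrative policy parameters
std = 0.5 * np.ones(4)

def sample_action():
    # A common parameterization: sample from a diagonal Gaussian, then
    # clip into the valid torque range.
    return np.clip(rng.normal(mean, std), -1.0, 1.0)

action = sample_action()
```

Every call returns a different point of an uncountable set, which is exactly what distinguishes this from a finite, discrete action space.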
2020/05/02
1,128
4,965
<issue_start>username_0: If a research paper uses multi-armed bandits (either in their standard or contextual form) to solve a particular task, can we say that they solved this task using a reinforcement learning approach? Or should we distinguish between the two and use the RL term only when it is associated with an MDP formulation? In fact, each RL course/textbook usually contains a section about bandits (especially when dealing with the exploration-exploitation tradeoff). Additionally, bandits also have the concept of actions and rewards. I just want to make sure what the right terminology should be, when describing either approach.<issue_comment>username_1: Several important researchers distinguish between bandit problems and the general reinforcement learning problem. The book [Reinforcement learning: an introduction](http://incompleteideas.net/book/RLbook2020.pdf#page=45) by Sutton and Barto describes bandit problems as a special case of the general RL problem. > > The first chapter of this part of the book describes solution methods for the **special case of the reinforcement learning problem** in which there is **only a single state**, called **bandit problems**. The second chapter describes the **general problem formulation** that we treat throughout the rest of the book — **finite Markov decision processes** — and its main ideas including Bellman equations and value functions. > > > This means that you can represent your bandit problem as an MDP with a single state and possibly multiple actions. In [section 1.1.2 of the book Bandit Algorithms](https://tor-lattimore.com/downloads/book/book.pdf#page=21) (2020), Szepesvari and Lattimore describe the differences between bandits and reinforcement learning > > One of the distinguishing features of all bandit problems studied in this book is that **the learner never needs to plan for the future**. 
More precisely, we will invariably make the assumption that the learner's available choices and rewards tomorrow are not affected by their decisions today. **Problems that do require this kind of long-term planning fall into the realm of reinforcement learning** > > > This definition is different than the one by Sutton and Barto. In this case, only bandit problems where the learner doesn't need to plan for the future are considered. In any case, bandit problems and RL problems have a lot of similarities. For example, both attempt to deal with the exploration-exploitation trade-off and, in both cases, the underlying problem can be formulated as a Markov decision process. Upvotes: 4 [selected_answer]<issue_comment>username_2: Let's have a look at the introduction of *Chapter 2: Multi-armed Bandits* in the [Reinforcement Learning: An Introduction by Sutton, Barto](http://incompleteideas.net/book/bookdraft2017nov5.pdf) > > ***The most important feature distinguishing reinforcement learning from other types of learning is that it uses training information that evaluates the actions taken rather than instructs by giving correct > actions.** This is what creates the need for active exploration, for an explicit search for good behavior. > **Purely evaluative feedback indicates how good the action taken was, but not whether it was the best or > the worst action possible. Purely instructive feedback, on the other hand, indicates the correct action > to take, independently of the action actually taken.** This kind of feedback is the basis of supervised > learning, which includes large parts of pattern classification, artificial neural networks, and system > identification. In their pure forms, these two kinds of feedback are quite distinct: evaluative feedback > depends entirely on the action taken, whereas instructive feedback is independent of the action taken. 
> In this chapter **we study the evaluative aspect of reinforcement learning in a simplified setting**, one > that does not involve learning to act in more than one situation. This nonassociative setting is the > one in which most prior work involving evaluative feedback has been done, and it avoids much of the > complexity of the full reinforcement learning problem. Studying this case enables us to see most clearly > how evaluative feedback differs from, and yet can be combined with, instructive feedback. > The particular nonassociative, **evaluative feedback problem that we explore is a simple version of > the k-armed bandit problem**. We use this problem to introduce a number of basic learning methods > which we extend in later chapters to apply to the **full reinforcement learning problem**. At the end > of this chapter, we take a step closer to the full reinforcement learning problem by discussing what > happens when the bandit problem becomes associative, that is, when actions are taken in more than > one situation.* > > > Since bandits involve *evaluative feedback* they are indeed a type of a (simplified) reinforcement learning problem. Upvotes: 2
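The "single state" view described in the accepted answer can be made concrete with a minimal epsilon-greedy agent for a k-armed bandit; the arm distributions, seed, and hyperparameters below are illustrative choices, not taken from either book:

```python
import numpy as np

rng = np.random.default_rng(42)

K = 10                                     # number of arms (= actions)
true_means = rng.normal(0.0, 1.0, size=K)  # hidden expected reward of each arm

Q = np.zeros(K)   # running estimates of each arm's value; note: no state at all
N = np.zeros(K)   # how many times each arm has been pulled
eps = 0.1         # exploration probability

for t in range(5000):
    # epsilon-greedy: explore with probability eps, otherwise exploit
    a = int(rng.integers(K)) if rng.random() < eps else int(np.argmax(Q))
    r = rng.normal(true_means[a], 1.0)     # stochastic reward from arm a
    N[a] += 1
    Q[a] += (r - Q[a]) / N[a]              # incremental sample-average update

print(int(np.argmax(Q)), int(np.argmax(true_means)))
```

The loop makes the exploration-exploitation trade-off visible: the agent's choice today does not change tomorrow's rewards, which is exactly the property Lattimore and Szepesvari use to separate bandits from full reinforcement learning.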
2020/05/04
1,130
4,016
<issue_start>username_0: I have some gaps in my understanding regarding how gradient descent is performed in Deep Q-networks. [The original deep Q-network for Atari](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf) performs a gradient descent step to minimise $\left(y\_j - Q(s\_j,a\_j,\theta)\right)^2$, where $y\_j = r\_j + \gamma max\_aQ(s',a',\theta)$. In the example where I sample a single experience $(s\_1,a\_2,r\_1,s\_2)$ and I try to conduct a single gradient descent step, then feeding in $s\_1$ to the neural network outputs an array of $Q(s\_1,a\_0), Q(s\_1,a\_1), Q(s\_1,a\_2), \dots$ values. When doing a gradient descent update for this single example, should the target output to set for the network be equivalent to $Q(s\_1,a\_0), Q(s\_1,a\_1), r\_1 + \gamma max\_{a'}Q(s\_2,a',\theta), Q(s\_1,a\_3), \dots$ ? I know the inputs to the neural network to be $s\_j$, to give the corresponding Q values. However, I cannot concretize the target values towards which the network should be optimized.<issue_comment>username_1: You are looking for the network parameters that minimize the loss function. You sample a batch uniformly from the memory (replay) buffer and define a loss function based on that batch. The memory buffer consists of transitions. Each transition consists of a state and the action taken in that state, which results in a next state and an immediate reward. If a transition is denoted by $(s,a,r,s')$, the loss for this single transition is simply defined as: $(r + \gamma max\_{a'} Q(s',a',w^-)-Q(s,a,w))^2$. The minus sign above the parameters means that you should fix the target parameters to ensure the stability of learning. So the loss function for a whole batch is: $L(w) = E\_{(s,a,r,s')\sim U(D)}\left[(r + \gamma max\_{a'} Q(s',a',w^-)-Q(s,a,w))^2\right]$.
Upvotes: 1 <issue_comment>username_2: > > When doing gradient descent update for this single example, should the target output to set for the network be equivalent to $Q(s\_1,a\_0), Q(s\_1,a\_1), r\_2 + \gamma max\_aQ(s',a',\theta) , Q(s\_1,a\_3),...$ ? > > > Other than what looks like a couple of small typos, then yes. This is an implementation issue for DQN, where you have decided to create a function that outputs multiple Q functions at once. There is nothing about this in Q learning theory, so you need to figure out what will generate the correct error (and therefore gradients) for an update step. You don't know the TD targets for actions that were not taken, and cannot make any update for them, so the gradients for these actions must be zero. One way to achieve that is to feed back the network's own output for those actions. This is common practice because you can use built-in functions from neural network libraries to handle minibatches\*. There are some details worth clarifying: * You have substituted the third entry in the array with the calculated TD target because the action from experience replay is $a\_2$. In general you substitute for the action taken. Looks like you have this correct. * You have $r\_1$ in your experience replay table, but put $r\_2$ in your TD target formula. Looks like a typo. Another typo is that you maximise over $a$ but reference $a'$. Also, you reference $s'$ but don't define it anywhere. Fixing these issues gives $r\_1 + \gamma \text{max}\_{a'}Q(s\_2,a',\theta)$ * For the TD target it is often worth using a dedicated *target* network that every N steps is copied from the learning network. It helps with stability. This can be noted as a "frozen copy" of $\theta$ noted $\theta^-$, and the neural network approximate Q function often noted $\hat{q}$ giving formula of $r\_1 + \gamma \text{max}\_{a'}\hat{q}(s\_2,a',\theta^-)$ for your example. 
--- \* If you want, you can also calculate the gradient more directly from the single action that was taken, and backpropagate from there, knowing that all the other outputs will have a gradient component of zero. That requires implementing at least some of the backpropagation yourself. Upvotes: 3 [selected_answer]
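The trick of feeding back the network's own output for the untaken actions, described in the accepted answer, can be sketched as follows; the linear `q_values` function is a stand-in for a real Q-network, and all numbers are illustrative:

```python
import numpy as np

gamma = 0.99
n_actions = 4
state_dim = 3

def q_values(state, theta):
    """Stand-in for a Q-network: one Q-value per action for a given state."""
    return theta @ state                  # theta: (n_actions, state_dim)

rng = np.random.default_rng(0)
theta = rng.normal(size=(n_actions, state_dim))

# A single transition (s1, a, r1, s2) sampled from the replay buffer
s1, a, r1, s2 = rng.normal(size=state_dim), 2, 1.0, rng.normal(size=state_dim)

pred = q_values(s1, theta)                # network's current outputs for s1
target = pred.copy()                      # feed back the network's own output...
target[a] = r1 + gamma * q_values(s2, theta).max()  # ...except for the taken action

# target - pred is zero everywhere except at index a, so the error (and
# hence the gradient) for the untaken actions is zero.
```

With this target vector, any standard mean-squared-error loss from a neural network library produces exactly the single-action TD update, which is why the construction is so common in practice. A separate frozen copy of `theta` would be used inside `q_values(s2, ...)` when a target network is added.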
2020/05/04
2,060
7,694
<issue_start>username_0: I was watching a video in my online course where I'm learning about A.I. I am a complete beginner. At one point in the course, the instructor says that reinforcement learning (RL) needs a deep learning model (NN) to perform an action. But, for that, we need expected results in our model for the NN to learn how to predict the Q-values. Nevertheless, at the beginning of the course, they told me that RL is an unsupervised learning approach, because the agent performs the action, receives the response from the environment, and finally takes the most promising action, that is, the one with the highest Q-value. But if I'm using deep learning in RL, then, to me, RL looks like a supervised learning approach. I'm a little confused about these things. Could someone clarify them? [![Example from Online course](https://i.stack.imgur.com/JuKV7.png)](https://i.stack.imgur.com/JuKV7.png)<issue_comment>username_1: ### Supervised learning The supervised learning (SL) problem is formulated as follows. You are given a dataset $\mathcal{D} = \{(x\_i, y\_i)\}\_{i=1}^N$, which is assumed to be drawn i.i.d. from an **unknown** joint probability distribution $p(x, y)$, where $x\_i$ represents the $i$th input and $y\_i$ is the corresponding label. You choose a loss function $\mathcal{L}: V \times U \rightarrow \mathbb{R}$. Then your goal is to minimize the so-called *empirical risk* $$R\_{\mathcal{D}}[f]=\frac{1}{N} \sum\_{i=1}^N \mathcal{L}(y\_i, f(x\_i)) \tag{0}\label{0}$$ with respect to $f$. In other words, you want to find the $f$ that minimizes the average above, which can also be formally written as $$ f^\* = \operatorname{argmin}\_f R\_{\mathcal{D}}[f] \tag{1}\label{1} $$ The problem \ref{1} is called the **empirical risk minimization** because it is a proxy problem for the **expected risk minimization** (but you can ignore this for now). ### Reinforcement learning In reinforcement learning, you typically imagine that there's an agent that interacts, in time steps, with an environment by taking actions. At each time step $t$, the agent takes $a\_t$ in the state $s\_t$, receives a reward $r\_t$ from the environment and the agent and the environment move to another state $s\_{t+1}$. The goal of the agent is to maximize the **expected return** $$\mathbb{E}\left[ G\_t \right] = \mathbb{E}\left[ \sum\_{i=t+1}^\infty R\_i \right]$$ where $t$ is the current time step (so we don't care about the past), $R\_i$ is a random variable that represents the probable reward at time step $i$, and $G\_t = \sum\_{i=t+1}^\infty R\_i $ is the so-called *return* (i.e. a sum of future rewards, in this case, starting from time step $t$), which is also a random variable.
### Reinforcement learning In reinforcement learning, you typically imagine that there's an agent that interacts, in time steps, with an environment by taking actions. At each time step $t$, the agent takes $a\_t$ in the state $s\_t$, receives a reward $r\_t$ from the environment and the agent and the environment move to another state $s\_{t+1}$. The goal of the agent is to maximize the **expected return** $$\mathbb{E}\left[ G\_t \right] = \mathbb{E}\left[ \sum\_{i=t+1}^\infty R\_i \right]$$ where $t$ is the current time step (so we don't care about the past), $R\_i$ is a random variable that represents the probable reward at time step $i$, and $G\_t = \sum\_{i=t+1}^\infty R\_i $ is the so-called *return* (i.e. a sum of future rewards, in this case, starting from time step $t$), which is also a random variable. In this context, the most important job of the programmer is to define a function $\mathcal{R}(s, a)$, the reward function, which provides the *reinforcement* (or reward) signal to the RL agent. $\mathcal{R}(s, a)$ will deterministically or stochastically determine the reward that the agent receives every time it takes action $a$ in the state $s$. (Note that $\mathcal{R}$ is different from $R\_i$, which is a random variable that represents the reward at time step $i$). ### What is the difference between SL and RL? In RL, you (the programmer) need to define the reward function $\mathcal{R}$ and you want to maximize the expected return. On the other hand, in SL you are given (or you collect) a dataset $\mathcal{D}$, you choose $\mathcal{L}$ in \ref{0}, and the goal is to find the function $f^\*$ that minimizes the empirical risk. So, these have different settings and goals, so they are different! However, every SL problem can be cast as an RL problem. See [this answer](https://ai.stackexchange.com/a/14168/2444). Similarly, in certain cases, you can formulate an RL as an SL problem. So, although the approaches are different, they are related. 
### Is RL an unsupervised learning approach? In RL, you do **not** tell the agent what **action** it needs to take. You only say that the action that was taken was "bad", "good" or "so so". The agent needs to figure out which actions to take based on your feedback. In SL, you explicitly say that, for this input $x\_i$, the output should be $y\_i$. Some people may consider RL an unsupervised learning approach, but I think this is wrong, because, in RL, the programmer still needs to define the reward function, so RL isn't totally unsupervised and it's also not totally supervised. For this reason, many people consider RL an approach that sits between UL and SL. ### What is deep learning? The term/expression deep learning (DL) refers to the use of deep neural networks (i.e. neural networks with many layers, where "many" can mean more than 1 or more than 1000, depending on the context) in machine learning, either supervised, unsupervised, or reinforcement learning. So, you can apply deep learning to SL, RL and UL. So, DL is not restricted to SL. Upvotes: 3 [selected_answer]<issue_comment>username_2: In **Supervised learning**, the goal is to learn a mapping from points in a feature space to labels, so that, for any new input data point, we are able to predict its label. In **Unsupervised learning**, the data set is composed only of points in a feature space, i.e. there are no labels, and the goal is to learn some inner structure or organization of the feature space itself. Reinforcement Learning is basically concerned with **learning a policy** in a **sequential decision problem**. There are some components in RL that are “unsupervised” and some that are “supervised”, **but it is not a combination of “unsupervised learning” and “supervised learning”**, since those are terms used for very particular settings, and typically not used at all for sequential decision problems.
[![Reinforcement learning](https://i.stack.imgur.com/mnvVc.png)](https://i.stack.imgur.com/mnvVc.png) [![enter image description here](https://i.stack.imgur.com/kHRkx.png)](https://i.stack.imgur.com/kHRkx.png) In Reinforcement Learning, we have something called a **reward function** that the agent aims to **maximize**. During the learning process, one typical intermediate step is to learn to predict the **reward** obtained for a **specific policy**. In a nutshell, we can say that **Reinforcement Learning** puts a model in an environment where it learns everything on its own, from data collection to model evaluation. It is about taking suitable actions to maximize reward in a particular situation. There is no single correct answer; the reinforcement agent decides what to do to perform the given task. In the absence of a training dataset, it is bound to learn from its experience. ***To better understand, let’s look at an analogy.*** Suppose you have a dog that is not so well trained: every time the dog messes up the living room, you reduce the amount of tasty food you give it (**punishment**), and every time it behaves well, you double the tasty snacks (**reward**). What will the dog eventually learn? Well, that messing up the living room is bad. This simple concept is powerful. The dog is the agent, the living room the environment, and you are the source of the reward signal (tasty snacks). [![RL analogy](https://i.stack.imgur.com/hO3sQ.jpg)](https://i.stack.imgur.com/hO3sQ.jpg) To learn more about **Reinforcement learning**, please check this Reinforcement Learning lecture series that is freely available on **YouTube**, by someone who leads the Reinforcement learning research group at **DeepMind** and is also a lead researcher on **AlphaGo** and **AlphaZero**. [***RL Course by <NAME>***] <https://www.youtube.com/watch?v=2pWv7GOvuf0&list=PLqYmG7hTraZBiG_XpjnPrSNw-1XQaM_gB> Upvotes: 2
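As a toy illustration of the empirical-risk formulation from the first answer, the following sketch minimizes $R_{\mathcal{D}}[f]$ with a squared loss over a one-parameter family of models $f(x) = wx$; the synthetic dataset and the grid search are illustrative choices:

```python
import numpy as np

# Tiny synthetic dataset D = {(x_i, y_i)} whose "unknown" relation is y ≈ 2x
rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(scale=0.1, size=50)

def empirical_risk(w):
    """R_D[f] for f(x) = w * x, with squared loss L(y, f(x)) = (y - f(x))^2."""
    return np.mean((y - w * x) ** 2)

# Crude "argmin over f": grid search over the one-parameter model family
ws = np.linspace(-5.0, 5.0, 1001)
w_star = ws[np.argmin([empirical_risk(w) for w in ws])]
print(w_star)  # close to the true slope 2.0
```

Note how the supervision appears only through the given labels `y`; in the RL setting there is no such labeled target, only a reward signal that the programmer defines.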
2020/05/04
1,867
6,936
<issue_start>username_0: Apart from the vanishing or exploding gradient problems, what are other problems or pitfalls that we could face when training neural networks?
2020/05/04
3,221
13,408
<issue_start>username_0: In reinforcement learning (RL), what is the difference between training and testing an algorithm/agent? If I understood correctly, testing is also referred to as evaluation. As I see it, both imply the same procedure: select an action, apply it to the environment, get a reward and the next state, and so on. But I've seen that, e.g., the [Tensorforce RL framework](https://github.com/tensorforce/tensorforce) allows running with or without evaluation.<issue_comment>username_1: If you want, you can do training and testing in RL, with exactly the same meaning as usual: training for building up a policy, and testing for evaluation. In supervised learning, using test data during training is like cheating: you cannot trust the evaluation. That's why we separate the train and test data. The objective of RL is a little different. RL tries to find the optimal policy. Since RL collects information by doing, while the agent explores the environment (for more information), there might be a loss in the objective function, but that loss might be inevitable for a better future gain. Consider a multi-armed bandit example. There are 10 slot machines. They return random amounts of money, and they have different expected returns. I want to find the best way to maximize my gain. That's easy: I have to find the machine with the greatest expected return and use only that machine. How do I find the best machine? Suppose we have training and testing periods. For example, I give you an hour of training, during which it doesn't matter whether you lose or how much you earn, and in the testing period I evaluate your performance. What would you do? In the training period, you would try as much as possible, without considering the performance/gain. And in the testing period, you would use only the best machine you found. But this is not a typical RL situation. RL is trying to find the best way while learning by doing: all the results obtained while doing are considered. Suppose that I tried all 10 machines once each.
The No. 3 machine gave me the most money, but I am not sure that it is the best machine, because all the machines return a RANDOM amount. Keeping on using the No. 3 machine might be a good idea, because, according to the information so far, it is the best machine. However, due to randomness, you might miss a better machine if you don't try the other machines. But if you try the other machines, you might lose an opportunity to earn more money. What should I do? This is the well-known exploration-exploitation trade-off in RL. RL tries to maximize the gain, including both the gains right now and the gains in the future. In other words, the performance during training is also considered part of its performance. That's why RL is neither unsupervised nor supervised learning. However, in some situations, you might want to separate training and testing. RL is designed for an agent that interacts with the environment. However, in some cases, rather than having an interactive playground, you only have data of past interactions. The formulation would be a little different in this case. Upvotes: 1 <issue_comment>username_2: Reinforcement Learning Workflow =============================== The general workflow for using and applying reinforcement learning to solve a task is the following. [![enter image description here](https://i.stack.imgur.com/RV6cf.jpg)](https://i.stack.imgur.com/RV6cf.jpg) 1. **Create the Environment** 2. **Define the Reward** 3. **Create the Agent** 4. **Train and Validate the Agent** 5. **Deploy the Policy** --- Training ======== * Training in Reinforcement learning employs a **system of rewards** and **penalties** to compel the computer to solve a problem by itself. * Human involvement is **limited** to **changing** the environment and **tweaking** the system of rewards and penalties. * As the computer **maximizes the reward**, it is prone to seeking unexpected ways of doing it.
* Human involvement is focused on **preventing** it from **exploiting the system** and **motivating** the machine to perform the task in the way expected. * Reinforcement learning is useful when there is no **“proper way”** to perform a task, yet there are rules the model has to follow to perform its duties correctly. * Example: by tweaking and seeking the optimal policy for deep reinforcement learning, we built an agent that, in just **20 minutes**, reached a superhuman level at playing **Atari games**. * Similar algorithms, in principle, can be used to build the AI of an **autonomous car.** [![Car](https://i.stack.imgur.com/lplhT.png)](https://i.stack.imgur.com/lplhT.png) Testing ======= * Debugging RL algorithms is very hard: everything runs, and you are not sure where the problem is. * To test whether training worked well, i.e. whether the trained agent is good at what it was trained for, **you take your trained model and apply it to the situation it was trained for**. * If it’s something like **chess** or **Go**, you could **benchmark** it against other engines (say, Stockfish for chess) or human players. * You can also define performance metrics, i.e. ways of measuring the quality of the agent’s decisions. * In some settings (e.g. a **Reinforcement Learning Pacman player**), the game score literally defines the target outcome, so you can just evaluate your model’s performance based on that metric. Upvotes: 3 <issue_comment>username_3: The goal of reinforcement learning (RL) is to use data obtained via interaction with the environment to **solve the underlying Markov Decision Process** (MDP). "Solving the MDP" is tantamount to finding the optimal policy (with respect to the MDP's underlying dynamics, which are usually assumed to be stationary). **Training** is the process of using data in order to find the optimal policy. **Testing** is the process of evaluating the (final) policy obtained by training.
Note that, since we're generally testing the policy on the same MDP we used for training, the **distinction between the training dataset and the testing set** is no longer as important as it is the case with say supervised learning. Consequently, classical notions of **overfitting and generalization** should be approached from a different angle as well. Upvotes: 1 <issue_comment>username_4: What is reinforcement learning? =============================== In reinforcement learning (RL), you typically imagine that there's an agent that interacts, in time steps, with an environment by taking actions. On each time step $t$, the agent takes the action $a\_t \in \mathcal{A}$ in the state $s\_t \in \mathcal{S}$, receives a reward (or reinforcement) signal $r\_t \in \mathbb{R}$ from the environment and the agent and the environment move to another state $s\_{t+1} \in \mathcal{S}$, where $\mathcal{A}$ is the action space and $\mathcal{S}$ is the state space of the environment, which is typically assumed to be a Markov decision process (MDP). What is the goal in RL? ======================= The goal is to find a policy that maximizes the **expected return** (i.e. a sum of rewards starting from the current time step). The policy that maximizes the expected return is called the **optimal policy**. Policies -------- A policy is a function that maps states to actions. Intuitively, the policy is the strategy that implements the behavior of the RL agent while interacting with the environment. A policy can be deterministic or stochastic. A deterministic policy can be denoted as $\pi : \mathcal{S} \rightarrow \mathcal{A}$. So, a deterministic policy maps a state $s$ to an action $a$ with probability $1$. A stochastic policy maps states to a probability distribution over actions. A stochastic policy can thus be denoted as $\pi(a \mid s)$ to indicate that it is a conditional probability distribution of an action $a$ given that the agent is in the state $s$. 
Expected return --------------- The expected return can be formally written as $$\mathbb{E}\left[ G\_t \right] = \mathbb{E}\left[ \sum\_{i=t+1}^\infty R\_i \right]$$ where $t$ is the current time step (so we don't care about the past), $R\_i$ is a random variable that represents the probable reward at time step $i$, and $G\_t = \sum\_{i=t+1}^\infty R\_i $ is the so-called *return* (i.e. a sum of future rewards, in this case, starting from time step $t$), which is also a random variable. Reward function --------------- In this context, the most important job of the human programmer is to define a function $\mathcal{R}: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, the reward function, which provides the *reinforcement* (or reward) signal to the RL agent while interacting with the environment. $\mathcal{R}$ will deterministically or stochastically determine the reward that the agent receives every time it takes action $a$ in the state $s$. The reward function $R$ is also part of the environment (i.e. the MDP). Note that $\mathcal{R}$, the reward function, is different from $R\_i$, which is a random variable that represents the reward at time step $i$. However, clearly, the two are very related. In fact, the reward function will determine the actual *realizations* of the random variables $R\_i$ and thus of the return $G\_i$. How to estimate the optimal policy? ----------------------------------- To estimate the optimal policy, you typically design optimization algorithms. ### Q-learning The most famous RL algorithm is probably Q-learning, which is also a numerical and iterative algorithm. Q-learning implements the interaction between an RL agent and the environment (described above). More concretely, it attempts to estimate a function that is closely related to the policy and from which the policy can be derived. 
This function is called the **value function**, and, in the case of Q-learning, it's a function of the form $Q : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$. The name $Q$-learning derives from this function, which is often denoted as $Q$. Q-learning doesn't necessarily find the optimal policy, but there are cases where it is guaranteed to find the optimal policy (but I won't dive into the details). Of course, I cannot describe all the details of Q-learning in this answer. Just keep in mind that, to estimate a policy, in RL, you will typically use a numerical and iterative optimization algorithm (e.g. Q-learning). What is training in RL? ----------------------- In RL, training (also known as *learning*) generally refers to the use of RL algorithms, such as Q-learning, to estimate the optimal policy (or a value function) Of course, as in any other machine learning problem (such as supervised learning), there are many practical considerations related to the implementation of these RL algorithms, such as * Which RL algorithm to use? * Which programming language, library, or framework to use? These and other details (which, of course, I cannot list exhaustively) can actually affect the policy that you obtain. However, the basic goal during the learning or training phase in RL is to find a policy (possibly, optimal, but this is almost never the case). What is evaluation (or testing) in RL? -------------------------------------- During learning (or training), you may not be able to find the optimal policy, so how can you be sure that the learned policy to solve the actual real-world problem is *good enough*? This question needs to be answered, ideally before deploying your RL algorithm. The evaluation phase of an RL algorithm is the **assessment of the quality of the learned policy** and **how much reward the agent obtains if it follows that policy**. 
So, a typical metric that can be used to assess the quality of the policy is to plot **the sum of all rewards received so far** (i.e. cumulative reward or return) **as a function of the number of steps**. One RL algorithm dominates another if its plot is consistently above the other. You should note that the evaluation phase can actually occur during the training phase too. Moreover, you could also assess the **generalization** of your learned policy by evaluating it (as just described) in environments that are different from (but similar to) the training environment [[1](http://proceedings.mlr.press/v97/cobbe19a/cobbe19a.pdf)]. The section [12.6 Evaluating Reinforcement Learning Algorithms](https://artint.info/2e/html/ArtInt2e.Ch12.S6.html) of the book [Artificial Intelligence: Foundations of Computational Agents](https://artint.info/2e/html/ArtInt2e.html) (2017) by Poole and Mackworth provides more details about the evaluation phase in reinforcement learning, so you should probably read it. Apart from evaluating the learned policy, you can also evaluate your RL algorithm, in terms of * **resources used** (such as CPU and memory), and/or * experience/data/samples needed to converge to a certain level of performance (i.e. you can evaluate the **data/sample efficiency** of your RL algorithm) * **robustness/sensitivity** (i.e., how the RL algorithm behaves if you change certain hyper-parameters); this is also important because RL algorithms can be very sensitive (from my experience) What is the difference between training and evaluation? ------------------------------------------------------- During training, you want to find the policy. During evaluation, you want to assess the quality of the learned policy (or RL algorithm). You can perform the evaluation even during training. Upvotes: 4 [selected_answer]
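The training loop and the cumulative-reward evaluation metric described above can be sketched with tabular Q-learning on a toy two-state environment (a minimal illustration: the environment, learning rate, and reward values are all invented for the example, not taken from any particular problem):

```python
import random

random.seed(0)  # for reproducibility of the sketch

# Toy deterministic environment: 2 states, 2 actions; action 1 in state 0 pays 1.
def step(state, action):
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    return (state + 1) % 2, reward  # (next state, reward)

Q = {(s, a): 0.0 for s in range(2) for a in range(2)}
alpha, gamma, epsilon = 0.5, 0.9, 0.1
returns = []  # cumulative reward per episode, for evaluation plots

for episode in range(200):
    state, total = 0, 0.0
    for t in range(10):
        # Epsilon-greedy: explore with probability epsilon, otherwise exploit.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = max((0, 1), key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # Q-learning update rule.
        best_next = max(Q[(next_state, a)] for a in (0, 1))
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state, total = next_state, total + reward
    returns.append(total)

# Evaluation: derive the greedy policy from the learned Q-values.
policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in range(2)}
```

The `returns` list is exactly the kind of per-episode cumulative reward you would plot to compare algorithms, and `policy` is the greedy policy derived from the learned value function.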
2020/05/04
789
2,913
<issue_start>username_0: I am asking for a book (or any other online resource) where we can solve exercises related to neural networks, similar to the books or online resources dedicated to mathematics where we can solve mathematical exercises.<issue_comment>username_1: There are actually quite a few. Personally, I would say these courses have high quality and a strong focus on practice: * Stanford computer vision CS231n. Check the assignment materials on [this page](http://cs231n.stanford.edu/2019/syllabus.html). This course has good explanations/exercises of how neural nets and backprop work in general. * [Fastai course notebooks](https://github.com/fastai/course-v3/tree/master/nbs/dl1/). You can listen to the lectures as well, but the notebooks are quite self-contained. * [Practical reinforcement learning](https://github.com/yandexdataschool/Practical_RL) course, if you're interested in NN applications in RL Upvotes: 2 <issue_comment>username_2: One of the most famous books dedicated to neural networks is [Neural Networks - A Systematic Introduction](http://page.mi.fu-berlin.de/rojas/neural/) (1996) by [<NAME>](https://en.wikipedia.org/wiki/Ra%C3%BAl_Rojas). [Most chapters end with a series of exercises](http://page.mi.fu-berlin.de/rojas/neural/chapter/forword.pdf) that test your understanding of the material. For example, in chapter [14 Stochastic Networks](http://page.mi.fu-berlin.de/rojas/neural/chapter/K14.pdf), one of the exercises is > > Solve the eight queens problem using a Boltzmann machine. Define the > network's weights by hand. > > > This should give you a sense of the type of exercise that you will find in this book.
Upvotes: 2 <issue_comment>username_3: The book [Grokking Deep Learning](http://www.hdip-data-analytics.com/_media/resources/pdf/s4/grokking_deep_learning.pdf), by <NAME> (a PhD student at Oxford University and a research scientist at DeepMind), is a wonderful, clean, plain-English discussion of the basic mechanics that go on under the hood of neural networks - from data flow to the updating of weights. It is written without a slant toward the usual wonky math: the concepts are presented and then advanced at a digestible pace for anyone. Here are a few more possibly useful resources. 1. [Neural Networks: Playground Exercises](https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/playground-exercises) 2. [Getting Started With Deep Learning: Convolutional Neural Networks](https://cloudacademy.com/course/convolutional-neural-networks/exercise-1-solution/) 3. [Deep learning course focused on practical aspects of deep learning](https://tensorchiefs.github.io/dl_course/) 4. [First lab assignment in Deep learning](http://www.zemris.fer.hr/~ssegvic/du/lab1en.shtml) 5. [Stanford deep learning tutorial](http://deeplearning.stanford.edu/tutorial/supervised/ExerciseConvolutionalNeuralNetwork/) Upvotes: 3 [selected_answer]
2020/05/05
1,533
4,959
<issue_start>username_0: Let's consider this scenario. I have two conceptually different video datasets, for example, a dataset A composed of videos about cats and a dataset B composed of videos about houses. Now, **I'm able** to extract feature vectors from the samples of both datasets A and B, and I know that each sample in dataset A is related to one and only one sample in dataset B, and together they belong to a specific class (there are only 2 classes). For example: ``` Sample x1 AND sample y1 ---> Class 1 Sample x2 AND sample y2 ---> Class 2 Sample x3 AND sample y3 ---> Class 1 and so on... ``` **If I extract the feature vectors from samples in both datasets, what is the best way to combine them in order to give a correct input to the classifier (for example, a neural network)?** feature vector **v1** extracted from **x1** + feature vector **v1'** extracted from **y1** ---> input for classifier *I ask this because I suspect that neural networks only take one vector as input, while I have to combine two vectors*<issue_comment>username_1: The easiest way can be the concatenation of the feature vectors to create a single feature vector for each sample. Assume the first sample is made of the pair $X\_1$ and $Y\_1$. Let the corresponding feature vectors for $X\_1$ and $Y\_1$ be $\textbf{v}\_1$ and $\textbf{v}\_2$, respectively. $$ \textbf{v}\_1 = [f\_1, f\_2, \ldots , f\_n],\\ \textbf{v}\_2 = [g\_1, g\_2, \ldots , g\_m]. $$ Then, the first sample's feature vector can be defined as $$ \textbf{v} = [f\_1, f\_2, \ldots , f\_n, g\_1, g\_2, \ldots , g\_m]. $$ Eventually, when you pass the latter feature vector to a machine learning model, it will try to capture the dependencies among all of these features, to learn a solution for your task of interest (i.e. classification).
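For instance, in NumPy the concatenation above is one line (a minimal sketch; the feature values and dimensions are invented for the example):

```python
import numpy as np

v1 = np.array([0.2, 0.7, 0.1])   # features extracted from x1 (n = 3)
v2 = np.array([0.9, 0.4])        # features extracted from y1 (m = 2)

v = np.concatenate([v1, v2])     # combined feature vector of length n + m = 5
```

The resulting vector `v` is then a single input row for the classifier; for a whole dataset, `np.hstack([A_features, B_features])` concatenates the two feature matrices column-wise in the same way.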
Upvotes: 3 [selected_answer]<issue_comment>username_2: *$^\*$Note - The question is a bit unclear; in case the answer doesn't address the question, please ask for an edit or deletion.* --- GENERALIZATION -------------- Suppose there are multiple datasets denoted by $A\_i$. Each dataset contains a set of vectors $x\_j$. Mathematically, $A\_i = \{ x\_j\}\_{j=0}^n$. We have to find an estimator function $\hat f$, such that $\hat f( \vec r) = y, \, \vec r \in X$, where $X$ is a special dataset created by combining all $A\_i$, which helps in classification into $y \in Y$, the set of classes. As @username_1 pointed out, linearly separable features can be handled by a straight combination of vectors, i.e. if $x\_u \in A\_i, w\_v \in A\_j \dots$, then $r = [x\_1 \,x\_2 \, \dots \, x\_u \, w\_1 \, \dots w\_v \dots]$, where $r \in X$, which is the required dataset. *There are cases where the features are not linearly separable.* We then use **basis expansion methods**[[1]](https://towardsdatascience.com/non-linear-regression-basis-expansion-polynomials-splines-2d7adb2cc226) to produce the required shape of the hyperplane that separates the features. We create a new dataset combining $A\_i \, \forall i \in C \subset \mathbb N$. Suppose that the new dataset is $X$; then $r \in X$ and $r = [r\_1, r\_2, \dots, r\_n].$ Then, $$r\_1 = u\_1^2v\_1^2 \\ r\_2 = \sin(u\_2)\sin(v\_2) \\ r\_3 = ae^{u\_3 + v\_3} \\ r\_4 = a v\_4 v\_4 + a\_2 u\_4^2 v\_4^2 + \dots \\ \dots$$ Here $u\_p \in A\_i; \, v\_q \in A\_j$. You can use all your creativity to set $r = [r\_1, r\_2, \dots , r\_n]$ and make a new dataset. Which equations and which functions you choose fully depends on the kind of hyperplane shape you want to obtain. Basis expansion is just one of the methods for feature extraction, but it is certainly one of the most flexible. Now, you feed the newly created vectors into your trained estimator function (here, a neural net), which can classify things much more easily.
*In the case of regression/classification without a neural net, some extra treatment is needed to train the model*[[2]](https://medium.com/datadriveninvestor/improve-your-classification-models-using-mean-target-encoding-a3d573df31e8). --- **[[2]](https://medium.com/datadriveninvestor/improve-your-classification-models-using-mean-target-encoding-a3d573df31e8)Note:** Encoding also plays a big role. For example, whether you encode colors by the numbers $1, 2, 3$ for RGB or by $10, 01, 11$ fully changes everything, including your features. In such cases, you may even need different equations to make your required dataset $X$ and vectors $r$. --- REFERENCES: ----------- 1. <NAME>. <https://towardsdatascience.com>. [*Non-linear regression: basis expansion, polynomials & splines*](https://towardsdatascience.com/non-linear-regression-basis-expansion-polynomials-splines-2d7adb2cc226). Sep 30, 2019. Web. 6 May 2020. 2. Sangarshanan. <https://medium.com>. [*Improve your classification models using Mean /Target Encoding*](https://medium.com/datadriveninvestor/improve-your-classification-models-using-mean-target-encoding-a3d573df31e8). Jun 23, 2018. Web. 6 May 2020. Upvotes: 1
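The basis-expansion idea above can be sketched as a hand-crafted feature map (a minimal sketch; the particular nonlinearities are arbitrary examples mirroring the equations above, not a recommendation for any specific task):

```python
import numpy as np

def basis_expand(u, v):
    """Map a pair of aligned feature vectors (u from A, v from B)
    to nonlinear combined features r = [r_1, r_2, r_3]."""
    return np.array([
        u[0] ** 2 * v[0] ** 2,        # r_1 = u_1^2 v_1^2
        np.sin(u[1]) * np.sin(v[1]),  # r_2 = sin(u_2) sin(v_2)
        np.exp(u[2] + v[2]),          # r_3 = e^(u_3 + v_3), with a = 1
    ])
```

Applying `basis_expand` to every paired sample builds the combined dataset $X$ that is then fed to the classifier.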
2020/05/07
842
3,097
<issue_start>username_0: I was going through the [AlphaGo Zero paper](https://discovery.ucl.ac.uk/id/eprint/10045895/1/agz_unformatted_nature.pdf) and I was trying to understand everything, but I just can't figure out this one formula: $$ \pi(a \mid s\_0) = \frac{N(s\_0, a)^{\frac{1}{\tau}}}{\sum\_b N(s\_0, b)^{\frac{1}{\tau}}} $$ Could someone decode how the policy makes decisions following this formula? I pretty much understood all of the other parts of the paper, and also the temperature parameter is clear to me. It might be a simple question, but I can't figure it out.<issue_comment>username_1: The formula in question uses a function N(state, action) that defines the visit count of a state-action pair (introduced on page 3). To describe how it is used, let's first describe the steps of AlphaGo Zero as a whole. There are 4 "phases" to the Monte-Carlo tree search in AlphaGo Zero, as depicted in Figure 2. The first 3 expand and update the tree and together are the "search" in Monte-Carlo tree "search" in AlphaGo Zero. 1. **Select** an edge (action) in the tree with maximum action-value Q (plus upper confidence bound U) 2. **Expand and evaluate** the leaf node using the network 3. **Backup** - Action-values Q are updated to track the evaluations of the value in the subtree. 4. **Play** - After the "search" is complete, the search probabilities are returned proportional to the visitation counts of the nodes of the tree.**\*** **\*** This is where the formula in question comes into play (pun intended). During the "search", nodes that looked good were expanded on, and thus their visitation counts were updated. So the formula is essentially describing this logic: *Good nodes have higher counts, so choose the nodes with higher counts often* But what if there is a really good node that hasn't been visited a lot, so it's not chosen? This is where the temperature parameter comes in: * If the temperature is 1, this selects moves proportionally to their visit counts.
* If the temperature is 0 (not actually 0, but rather an infinitesimal that approaches 0), and with some added noise, it "*ensures that all moves may be tried, but the search may still overrule bad moves*." So altogether, the formula is saying: *Pick good things most of the time* The mathematical evaluation of the formula is described below: AlphaGo Zero defines the probability of each action (aka the policy) by that formula. If there are 3 nodes, A, B and C, and they have each been visited 10, 70, and 20 times (100 times in total), respectively, then the probability of taking those actions is: * P(A) = 10/100 = .10 * P(B) = 70/100 = .7 * P(C) = 20/100 = .2 Upvotes: 2 [selected_answer]<issue_comment>username_2: Intuitively: 1. A larger Q means a larger probability that the node (s'|s,a) will be chosen. When we select the most visited node, we select a node with good Q. 2. A higher visit count means a more accurate estimation, and the chosen node proved itself a good choice even after more trials than the other nodes. 3. Less computation in some cases (integer/long vs float/double) Upvotes: 0
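The formula in the question can be checked numerically (a minimal sketch; the counts match the three-node example above):

```python
import numpy as np

def mcts_policy(counts, tau=1.0):
    # pi(a | s0) = N(s0, a)^(1/tau) / sum_b N(s0, b)^(1/tau)
    powered = np.asarray(counts, dtype=float) ** (1.0 / tau)
    return powered / powered.sum()

print(mcts_policy([10, 70, 20], tau=1.0))   # -> [0.1 0.7 0.2]
print(mcts_policy([10, 70, 20], tau=0.1))   # small tau concentrates mass on the max-count move
```

As the temperature approaches 0, the exponent 1/tau blows up and the policy collapses onto the most-visited action, which is exactly the "greedy at evaluation time" behaviour described in the paper.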
2020/05/07
302
1,268
<issue_start>username_0: If I have the fitness of each genome, how do I determine which genome will cross over with which, and so on, so that I get a new population? Unfortunately, I can't find anything about it in the original paper, so I ask here.<issue_comment>username_1: The good thing about genetic algorithms is that their selection schemes are interchangeable. If you have the fitness of each individual, any selection algorithm (e.g. roulette, rank, tournament) will do. Upvotes: 0 <issue_comment>username_2: The original work on NEAT (NeuroEvolution of Augmenting Topologies) was by <NAME> in 2002 at The University of Texas at Austin. The web page for the project is [here](http://nn.cs.utexas.edu/?stanley:ec02). I suggest you download and read the paper linked from that page. As for the selection of genome pairs, NEAT makes use of a speciation model, so the selection of such pairs is constrained to at least prefer pairs from the same 'species', on the assumption that the species has evolved such that the species population is isolated under reproduction. The innovation that has been 'bred' into the species is thus preserved under reproduction. Selection by fitness alone is insufficient in such models. This differs from the simple GA, where pair selection is unconstrained. Upvotes: 1
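As an illustration of one of the standard schemes mentioned above, fitness-proportionate (roulette-wheel) selection can be sketched as follows (a minimal sketch, independent of NEAT's speciation; genomes are represented by arbitrary objects):

```python
import random

def roulette_select(population, fitnesses):
    # Sample one genome with probability proportional to its fitness.
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    running = 0.0
    for genome, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return genome
    return population[-1]  # guard against floating-point rounding

# Picking two parents for crossover:
# parent_a = roulette_select(pop, fits)
# parent_b = roulette_select(pop, fits)
```

In NEAT you would apply this within a species rather than over the whole population, so that crossover pairs come from compatible genomes.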
2020/05/07
1,085
4,806
<issue_start>username_0: I wanted to train a model that recognizes sign language. I have found a dataset for this and was able to create a model that would get 94% accuracy on the test set. I have trained models before, and my main goal is not to have the best model (I know 94% could easily be tuned up). However, these models were always for class exercises and thus were never used on 'real' new data. So I took a new picture of my hand that I wanted to be a certain letter (let's assume A). Since my model was trained on 28x28 images, I needed to resize my own image because it was larger. After that, I fed this image to my model, only to get a wrong classification. <https://i.stack.imgur.com/Rf1UK.jpg> These are my pictures (upper-left = my own image (expected class A), upper-right = an image of class A (that my model correctly classifies as A), bottom = a picture of class Z (the class my image was classified as)). You can clearly see that my own image looks far more like the image of class A (that I wanted my model to predict) than the image of the class it did predict. What could be the reasons that my model does not work on real-life images? (If code is wanted, I can provide it of course, but since I don't know where I go wrong, it seemed out of line to copy all the code.)<issue_comment>username_1: I'm assuming that you used LeNet (or some other model with a small number of parameters) since your training image size is 28x28. Note that LeNet doesn't generalize well to new images. I think it performs fine (>90%) on MNIST but not well on CIFAR10 (>60%), albeit both datasets contain similarly sized images. (Just trying to remember the performance from PyTorch implementations.) It's more about whether the model has the capacity to learn the complexity of the dataset. CIFAR10 is more complex and harder to model than MNIST.
LeNet is a small image classification model (in terms of capacity), so it cannot learn the correlations between pixels of input images well and therefore doesn't perform well on unseen images. In your case, it seems like your model has overfit to the training examples. It might perform well on test images because both the training and test subsets are sampled from the same data-generating distribution, but real-world images it encounters in the future might be different (like your own hand). If it doesn't perform well on unseen images, we say it has not generalized well, which looks to be the case in your situation. In this case, you need a validation set to validate that your model generalizes to unseen images. If you have one, then you should use it in the early stopping regularization technique. You can also add other regularizers to your model (the simplest one is weight decay). But instead of inventing your own network architecture, why don't you use a model like ResNet? Just fine-tune the pre-trained ResNet on your own dataset. I'd personally prefer to fine-tune in this situation because the data distribution it was trained on (ImageNet) is pretty different from your hand sign dataset. In the other case, if your dataset contained nature and surrounding images, I'd rather freeze the parameters of the fixed-feature-extractor layers and train only the last few layers of ResNet (or a similar model). I hope this helps! Upvotes: 1 <issue_comment>username_2: This is not an uncommon situation. The data set your model is trained on represents a certain probability distribution. Your test set is most likely a good representation of that distribution, so your test results will be good. However, when you use real-world images, they may or may not have a similar distribution. Typically, if the training set is large and diverse, it is a good representation of the distribution, and when the model is used to classify a real-world image, it will do so correctly.
I think I know of the data set you are working with, and if I recall, it is fairly large. So the problem may be that your model is not complex enough to fully capture the complexity of the data. You can test that fairly simply by using transfer learning with a model that is known to be effective for image classification. I recommend using the MobileNet model. It contains only about 4 million parameters but is about as accurate as larger models containing 10 times as many parameters, so MobileNet is not computationally expensive to train. Documentation can be found [here](https://keras.io/api/applications/). Upvotes: 1 <issue_comment>username_3: Are you sure the image quality in your test set and your phone camera image are similar? I once trained a CNN model on poor-quality images with very good validation accuracy, but when I tested it on an image from my camera, it didn't work at all. I degraded the image quality by resizing the image from my camera to a small size and then back to the required size, and it worked perfectly. Upvotes: 0
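One way to reduce the preprocessing mismatch discussed above is to make sure the camera image goes through the same resolution pipeline as the training data. A minimal block-averaging sketch in NumPy (real pipelines usually use a PIL or OpenCV resize; the grayscale array and the downsampling factor here are illustrative assumptions):

```python
import numpy as np

def block_downsample(img, factor):
    """Downsample a 2D grayscale image by averaging factor x factor blocks."""
    h, w = img.shape
    img = img[:h - h % factor, :w - w % factor]  # crop to a multiple of factor
    return img.reshape(h // factor, factor,
                       w // factor, factor).mean(axis=(1, 3))
```

For example, a 56x56 crop of a phone photo becomes a 28x28 array that matches the training resolution, instead of a sharp full-resolution image the model never saw during training.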
2020/05/08
1,350
4,992
<issue_start>username_0: I'm looking to implement an AI for the turn-based game Mastermind in Node.JS, using Google's Tensorflow library. Basically, the AI needs to predict the 4D input for the optimal 2D output `[0,4]`, given a list of 4D inputs and 2D outputs from previous turns in the form of `[input][output]`. The optimal output would be `[0,4]`, which would be the winning output. The training data looks like this: ``` [1,2,3,4][0,1] [0,5,2,6][3,1] [0,2,5,6][2,2] [6,5,2,0][4,0] [5,2,0,6][0,4] ``` So given these previous turns ``` [1,2,3,4][0,1] [0,5,2,6][3,1] [0,2,5,6][2,2] [6,5,2,0][4,0] ``` the AI would predict an input of `[5,2,0,6]` for the output `[0,4]`. I've looked at [this post](https://ai.stackexchange.com/questions/3463/neural-network-model-to-infer-inputs-given-an-output) but it talks about only inferring an input for an output without any context. In Mastermind, the context of previous guesses and the results from them are critical. My algorithm would need to use the information from previous turns to determine the best input for the winning output (`[0,4]`). So my question is: **How can I implement AI for Mastermind?**<issue_comment>username_1: You could possibly apply neural networks and reinforcement learning to summarise results of previous choices (what you are calling *context*) and use score predictions to suggest the next turn's guess. However, the game of Mastermind has a small search space, and it is possible to process this "context" more directly by refining a set of guesses. This will be much more efficient and simpler to understand than a neural network approach. It would be very hard to make a neural network variant which was as efficient - either in terms of CPU time, or in terms of the number of turns it takes to find a solution. In practice, a Mastermind solver is much like a Hangman solver, or a Guess Who? solver. You have an initial large set of all possible answers, and need to narrow it down to a single correct answer.
You do this by processing after each guess to reduce the set of answers that meet all the constraints that the game has given you so far. The agent needs to know the score function that compares a target value with the guess and returns the score. Let's call that `score(guess, target)` The algorithm looks like this: ``` (Opponent sets unknown_target) Initialise possible_answers as list of all valid targets For each turn: Select one of possible_answers as this turn's guess Ask opponent for gscore = score(guess, unknown_target) If gscore is [0,4] then exit(win) If it was last guess then exit(lose) For each possible_answer in possible_answers: pscore = score(guess, possible_answer) If pscore != gscore, then remove possible_answer from possible_answers ``` You can finesse the stage `Select one of possible_answers as this turn's guess` by trying to optimise either by a psychological model of the opponent or by trying to find choices that are likely to cause the best reduction in the size of `possible_answers`. However, a simple random choice should do quite well. Also worth noting is that the algorithm does not depend on the exact nature of the scoring function, so it is applicable for many variations of guessing games. It does rely on the score providing information that will reduce the remaining set of guesses. In some games that may mean taking more care about the precise nature of a guess, in order to maximise this effect. Out of interest, I implemented this algorithm and tested it when there were 10 choices at each position (i.e. digits 0 to 9), a maximum of 10 guesses allowed, and the target to guess was set randomly. Using random guesses and the algorithm exactly as written, the above approach guessed correctly 9,996 times out of 10,000, and on average the guesser won the game in 6.2 turns.
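The algorithm above can be sketched in Python (a minimal sketch, not the implementation used for the experiment; the `score` function is an assumed standard Mastermind scoring, returning `(white, black)` pegs so that `(0, 4)` is a win, and it reproduces the scores in the question's training data):

```python
import itertools

def score(guess, target):
    # Black pegs: right digit in the right position.
    black = sum(g == t for g, t in zip(guess, target))
    # Digits common to both codes (counted as a multiset), including exact matches.
    common = sum(min(guess.count(c), target.count(c)) for c in set(guess))
    return (common - black, black)  # (white, black)

def solve(secret, digits=range(7), length=4, max_turns=10):
    possible = list(itertools.product(digits, repeat=length))
    for turn in range(1, max_turns + 1):
        guess = possible[0]  # any consistent candidate; random choice also works
        gscore = score(guess, secret)  # in a real game, the opponent answers this
        if gscore == (0, length):
            return guess, turn
        # Keep only candidates that would have produced the same score.
        possible = [p for p in possible if score(guess, p) == gscore]
    return None, max_turns
```

With digits 0-6 this reproduces the question's example: `score((1, 2, 3, 4), (5, 2, 0, 6))` gives `(0, 1)`, and `solve((5, 2, 0, 6))` recovers the secret within the 10-guess limit.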
Upvotes: 1 <issue_comment>username_2: Although I have seen RL solutions to this problem, those I saw fail to realize that the state of Mastermind is not observable, as there is the "secret" we're trying to guess. Mastermind is best approached as a constraint satisfaction problem, along the lines described by username_1. The whole trick is to realize that you can eliminate options from the "current possible alternatives" set by treating the latest guess as the target and eliminating any combinations that don't agree with it, e.g. (using 3 digits for clarity): last guess = 123, which scores [2,0], i.e. 2 white, 0 black. Then the current alternatives are eliminated if they don't score [2,0] against the **last guess** 123: 124 [0,2] 214 [2,0] 215 [2,0] 321 [2,1] Let's say the secret is 215; you see that our method of elimination is correct, albeit we don't know the secret! I have seen lots of different approaches (genetic algorithms, information theory, etc.), but the plain truth is that a 50-line MATLAB piece of code with random guessing will give a winning strategy that averages 4.3 guesses for the standard Mastermind game (1296 alternatives). Upvotes: 0
2020/05/08
955
3,521
<issue_start>username_0: We have all heard about how beneficial AI can be in health. There are plenty of papers and research about confronting diseases, like cancer. However, in 2020, COVID-19 is one of the most serious health problems, and it has caused thousands of deaths worldwide. Is AI already being used in the drug industry to combat COVID-19? If yes, can you, please, provide a reference?<issue_comment>username_1: I'm not sure if it is being used directly in the *industry*, but [here](https://www.lancaster.ac.uk/news/uk-scientists-develop-new-rapid-smart-testing-device-for-coronavirus) is an interesting article on research being done by 3 UK universities using AI. Upvotes: 2 <issue_comment>username_2: A global race is underway to discover a vaccine, drug, or combination of treatments that can disrupt the SARS-CoV-2 virus. The problem is, there are more than a **billion** such molecules. A researcher would conceivably want to test each one against the two dozen or so proteins in **SARS-CoV-2** to see their effects. Such a project could use every wet lab in the world and still not be completed for centuries. Computer modelling is a common approach used by academic researchers and pharmaceutical companies as a preliminary, filtering step in drug discovery. However, in this case, even every supercomputer on Earth could not test those **billion molecules** in a reasonable amount of time. [**Folding@home**](https://foldingathome.org/) is a distributed computing project run by Stanford University. The aim of the project is to examine how proteins fold, and it does this using **spare computing power**. However, there is a lot of research in progress that harnesses the potential of **artificial intelligence** to develop potential treatments to combat **COVID-19**.
Check this recent article by **<NAME>** in **biv**, which focuses on how artificial intelligence is used to accelerate the process of drug discovery: [**Drug research turns to artificial intelligence in COVID-19 fight**](https://biv.com/article/2020/05/drug-research-turns-artificial-intelligence-covid-19-fight) Here is a list of some companies that are using an **AI-driven** approach for drug discovery: * [BlackThorn Therapeutics](https://www.blackthornrx.com/) * [Exscientia](https://www.exscientia.ai/) * [Insilico Medicine](https://insilico.com/) * [Insitro](http://insitro.com/) * [Notable Labs](https://notablelabs.com/) * [Standigm](http://www.standigm.com/) * [Recursion Pharmaceuticals](https://www.recursionpharma.com/) The Hong Kong-based company **Insilico Medicine**, a developer of the comprehensive drug discovery and biomarker development platform **GENTRL**, and a pioneer in the application of **generative adversarial networks (GANs)** to drug discovery, published a paper in September last year, titled [**Deep learning enables rapid identification of potent DDR1 kinase inhibitors**](https://www.nature.com/articles/s41587-019-0224-x), in the highly reputed journal **Nature Biotechnology**. The paper describes a timed challenge, where the new artificial intelligence system called **Generative Tensorial Reinforcement Learning (GENTRL)** designed **six novel inhibitors of DDR1**, a **kinase target implicated in fibrosis and other diseases**, **in 21 days**. Four compounds were active in biochemical assays, and two were validated in cell-based assays. One lead candidate was tested and demonstrated favorable pharmacokinetics in mice. Upvotes: 4 [selected_answer]
2020/05/09
1,266
4,865
<issue_start>username_0: I am reading a paper implementing a deep deterministic policy gradient algorithm for portfolio management. My question is about a specific neural network implementation they depict in this picture ([paper](https://arxiv.org/pdf/1706.10059v2.pdf), picture is on page 14). [![enter image description here](https://i.stack.imgur.com/nIX2S.png)](https://i.stack.imgur.com/nIX2S.png) The first three steps are convolutions. Once they have reduced the initial tensor into a vector, they add that little yellow square entry to the vector, called the cash bias, and then they do a softmax operation. The paper does not go into any detail about what this bias term could be, they just say that they add this bias before the softmax. This makes me think that perhaps this is a standard step? But I don't know if this is a learnable parameter, or just a scalar constant they concatenate to the vector prior to the softmax. I have two questions: 1) When they write softmax, is it safe to assume that this is just a softmax function, with no learnable parameters? Or is this meant to depict a fully connected linear layer, with a softmax activation? 2) If it's the latter, then I can interpret the cash bias as being a constant term they concatenate to the vector before the fully connected layer, just to add one more feature for the cash assets. However, if softmax means just a function, then what is this cash bias? It must be a constant that they implement, but I don't see what the use of that would be, how can you pick a constant scalar that you are confident will have the intended impact on the softmax output to bias the network to put some weight on that feature (cash)? 
Any comments/interpretations are appreciated!
2020/05/11
731
3,317
<issue_start>username_0: For some environments, taking an action may not update the environment state. For example, a trading RL agent may take an action to buy shares s. The state at time t, which is the time of investing, is represented as the interval of 5 previous prices of s. At t+1 the share price has changed, but it may not be as a result of the action taken. Does this affect RL learning, and if so, how? Is it required that the state is updated as a result of taking actions for agent learning to occur? In gaming environments, it is clear how actions affect the environment. Can some rules of RL break down if no "noticeable" environment change takes place as a result of actions? Update: regarding "actions influence the state transitions", is my understanding correct? If transitioning to a new state is governed by epsilon-greedy and epsilon is set to .1, then with .1 probability the agent will choose the action from the q table which has max reward for the given state. Otherwise, the agent randomly chooses and performs an action, then updates the q table with the discounted reward received from the environment for the given action. I've not explicitly modeled an MDP; I've just defined the environment and let the agent determine the best actions over multiple episodes of choosing either a random action or the best action for the given state, where the selection is governed by epsilon-greedy. But perhaps I've not understood something fundamental in RL. I'm ignoring the MDP in large part as I'm not modeling the environment explicitly. I don't set the probabilities of moving from each state to other states.
The objective in RL is to figure out how to take actions in an optimal (in some sense) way! If, in the application you mentioned, the actions don't influence the state transitions and the objective is to predict states, RL is not required. It's just a regression/time-series problem. Upvotes: 2 [selected_answer]<issue_comment>username_2: It seems to me that you are confusing two things, the **state of the agent** and the **state of the environment**. Think about a robot learning to walk on a rugged terrain. The actions of the robot don't change the terrain at all! The robot can include in its own state part of the environment state, for instance the topology of the terrain in front of it. In an MDP, an agent may or may not modify the environment state. In your case, if we make the assumption that you don't manipulate the market, you can consider that you don't modify the environment state. Your issue might be in the way you designed your agent, relying only on a part of the environment state, and not really having any state of its own, such as a portfolio, for instance. If you're only interested in the way the environment state changes, then the above answer is right: the RL framework may not be a good fit. Finally, you don't have to model the environment explicitly; that's the beauty of model-free RL! Q-learning, which you mentioned, is model-free. Upvotes: 0
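As an aside on the epsilon-greedy scheme described in the question's update, a minimal tabular Q-learning loop can be sketched as follows. The 2-state toy environment, reward scheme, and all constants here are illustrative assumptions, not taken from the question:

```python
import random

random.seed(0)

N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: Q[state][action], initialised to zero
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def choose_action(state):
    """With probability EPSILON explore randomly, otherwise exploit the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

def step(state, action):
    """Toy environment: action 1 in state 0 pays off; transitions are fixed,
    i.e. they do not depend on the action (as in the question's trading example)."""
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    next_state = (state + 1) % N_STATES
    return next_state, reward

state = 0
for _ in range(5000):
    action = choose_action(state)
    next_state, reward = step(state, action)
    # Standard Q-learning update with the bootstrapped max over next actions
    td_target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (td_target - Q[state][action])
    state = next_state
```

Note that learning still happens here even though actions never influence the state transitions, because actions still influence rewards.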
2020/05/12
772
3,021
<issue_start>username_0: <NAME> in his [deep rl bootcamp policy gradient lecture](https://www.youtube.com/watch?v=S_gwYj1Q-44&list=PLAdk-EyP1ND8MqJEJnSvaoUShrAWYe51U&index=4) derived the gradient of the utility function with respect to $\theta$ as $\nabla U(\theta) \approx \hat{g} = 1/m\sum\_{i=1}^m \nabla\_\theta logP(\tau^{(i)}; \theta)R(\tau^{(i)})$, where $m$ is the number of rollouts, and $\tau$ represents the trajectory of $s\_0,u\_0, ..., s\_H, u\_H$ state action sequences. He also explains that the gradient increases the log probabilities of trajectories that have positive reward and decreases the log probabilities of trajectories with negative reward, as seen in the picture. From the equation, however, I don't see how the gradient tries to increase the probabilities of the path with positive R? From the equation, what I understand is that we would want to update $\theta$ in a way that moves in the direction of $\nabla U(\theta)$ so that the overall utility is maximised, and this entails computing the gradient log probability of a trajectory. Also, why is $\theta$ omitted in $R(\tau^{(i)})$, since $\tau$ depends on the policy which is dependent on $\theta$ ? [![enter image description here](https://i.stack.imgur.com/Yfwre.png)](https://i.stack.imgur.com/Yfwre.png)
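The estimator $\hat{g}$ in the question can be checked numerically on a one-step problem: with a softmax policy, repeatedly stepping $\theta$ in the direction of the sampled average of $\nabla_\theta \log P(\tau^{(i)};\theta) R(\tau^{(i)})$ shifts probability mass towards the positive-reward trajectory. This is a sketch under assumed toy rewards (a 2-armed bandit), not code from the lecture:

```python
import math
import random

random.seed(0)

# One-step "trajectories": a 2-armed bandit with a softmax policy pi(a) ∝ exp(theta_a).
# Action 0 pays reward +1, action 1 pays -1 (illustrative rewards).
theta = [0.0, 0.0]
rewards = [1.0, -1.0]
lr = 0.1

def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

for _ in range(200):
    probs = softmax(theta)
    m = 10  # number of rollouts per gradient estimate, as in g-hat
    grad = [0.0, 0.0]
    for _ in range(m):
        a = 0 if random.random() < probs[0] else 1
        R = rewards[a]
        # gradient of log softmax wrt theta_k is (1[k == a] - probs[k])
        for k in range(2):
            grad[k] += ((1.0 if k == a else 0.0) - probs[k]) * R / m
    for k in range(2):
        theta[k] += lr * grad[k]  # gradient *ascent* on the utility

probs = softmax(theta)
```

After training, the probability of the positive-reward action dominates, which is exactly the "push up the log probability of good trajectories" effect described in the lecture.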
2020/05/12
339
1,434
<issue_start>username_0: I'm struggling with calculating accuracy when I do cross-validation for a deep learning model. I have two candidates for doing this. 1. Train a model with 10 different folds and get the best accuracy of them (so I get 10 best accuracies) and average them. 2. Train a model with 10 different folds and get 10 accuracy learning curves. Now, average these learning curves by calculating the mean of the 10 accuracies of each epoch. So now we get one averaged accuracy learning curve and find the highest accuracy from this curve. Among these two candidates, which one is correct?<issue_comment>username_1: I guess you could train your model with 10 different folds and in each fold calculate the average accuracy. So you would have 10 values - one corresponding to each fold. And then take the mean of all of them to get the average accuracy of your model. Your first option doesn't seem great because you take the highest accuracy among folds. If, for some reason, the variance between accuracies is high for a fold, this would bias your numbers. Taking the mean or maybe the median of the accuracies might be more reasonable. Does that help? Upvotes: 2 <issue_comment>username_2: In most cases, we choose to take the mean of the k accuracies of k-fold cross-validation; that is, each time take the accuracy that corresponds to the fold and, when every fold has been used as the validation set, find the mean accuracy over them. Upvotes: 0
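Both schemes discussed here can be sketched in a few lines of plain Python (the accuracy numbers below are hypothetical placeholders, not real results):

```python
# One final validation accuracy per fold (hypothetical numbers);
# the mean over folds is the usual cross-validation estimate.
fold_accuracies = [0.82, 0.79, 0.85, 0.81, 0.80, 0.83, 0.78, 0.84, 0.82, 0.80]
mean_accuracy = sum(fold_accuracies) / len(fold_accuracies)

# Candidate 2 from the question: average the learning curves epoch-wise,
# then pick the best epoch of the averaged curve.
# curves[fold][epoch] = hypothetical validation accuracy (3 folds, 4 epochs shown).
curves = [
    [0.60, 0.75, 0.82, 0.81],
    [0.58, 0.72, 0.80, 0.83],
    [0.61, 0.74, 0.79, 0.80],
]
n_folds, n_epochs = len(curves), len(curves[0])
avg_curve = [sum(c[e] for c in curves) / n_folds for e in range(n_epochs)]
best_epoch = max(range(n_epochs), key=lambda e: avg_curve[e])
```

A side benefit of the second scheme is that `best_epoch` can then be used as the stopping point when retraining a final model on all the data.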
2020/05/13
1,278
5,568
<issue_start>username_0: One way of understanding the difference between value function approaches, policy approaches and actor-critic approaches in reinforcement learning is the following: * A critic explicitly models a value function for a policy. * An actor explicitly models a policy. Value function approaches, such as Q-learning, only keep track of a value function, and the policy is directly derived from that (e.g. greedily or epsilon-greedily). Therefore, these approaches can be classified as "critic-only" approaches. Some policy search/gradient approaches, such as REINFORCE, only use a policy representation, so I would argue that these approaches can be classified as "actor-only" approaches. Of course, many policy search/gradient approaches also use value models in addition to a policy model. These algorithms are commonly referred to as "actor-critic" approaches (well-known ones are A2C / A3C). Keeping this taxonomy intact for model-based dynamic programming algorithms, I would argue that *value iteration* is an actor-only approach, and *policy iteration* is an actor-critic approach. However, not many people discuss the term actor-critic when referring to policy iteration. How come? Also, I am not familiar with any model-based/dynamic-programming-like actor-only approaches. Do these exist? If not, what prevents this from happening?<issue_comment>username_1: > > Keeping this taxonomy intact for model-based dynamic programming algorithms, I would argue that value iteration is an actor-only approach, and policy iteration is an actor-critic approach. However, not many people discuss the term actor-critic when referring to policy iteration. How come? > > > Both policy iteration and value iteration are value-based approaches. The policy in policy iteration is either arbitrary or derived from a value table. It is not modelled separately.
To count as an Actor, the policy function needs to be modelled directly as a *parametric* function of the state, not indirectly via a value assessment. You cannot use policy gradient methods to adjust an Actor's policy function unless it is possible to derive the gradient of the policy function with respect to parameters that control the relationship between state and action. An Actor policy might be noted as $\pi(a|s,\theta)$, and the parameters $\theta$ are what make it possible to learn improvements. Policy iteration often *generates* an explicit policy from the current value estimates. This is not a representation that can be directly manipulated; instead, it is a consequence of measuring values, and there are no parameters that can be learned. Therefore, the policy seen in policy iteration cannot be used as an actor in Actor-Critic or related methods. Another way to state this is that the policy and value functions in DP are not separate enough to be considered as an actor/critic pair. Instead, they are both views of the same measurement, with the value function being closer to raw measurements and the policy being a mapping of the value function to policy space. > > Also, I am not familiar with any model-based/dynamic-programming-like actor-only approaches. Do these exist? If not, what prevents this from happening? > > > The main difference between model-based dynamic programming and model-free methods like Q-learning or SARSA is that the dynamic programming methods directly use the full distribution model (which can be expressed as $p(r, s'|s,a)$) to calculate expected bootstrapped returns. There is nothing in principle stopping you substituting expected returns calculated in this way into REINFORCE or Actor-Critic methods. However, it may be computationally hard to do so - these methods are often chosen when the action space is large, for instance.
Basic REINFORCE using model-based expectations would be especially hard as you need an expected value calculated over all possible trajectories from each starting state - if you are going to expand the tree of all possible results to that degree, then a simple tree search algorithm would perform better, and the algorithm then resolves to a one-off planning exhaustive tree search. Actor-Critic using dynamic programming methods for the Critic should be viable, and I expect you could find examples of it being done in some situations. It may work well for some card or board games, if the combined action space and state space is not too large - it would behave a little like using Expected SARSA for the Critic component, except also run expectations over the state transition dynamics (whilst Expected SARSA only runs expectations over policy). You could vary the depth of this too, getting better estimates theoretically at the expense of extra computation (potentially a lot of extra computation if there is a large branching factor) Upvotes: 2 <issue_comment>username_2: > > policy iteration is an actor-critic approach. > > > This is a very insightful observation and I would agree with it. You can find similar statement in this [lecture](https://www.cs.hhu.de/fileadmin/redaktion/Fakultaeten/Mathematisch-Naturwissenschaftliche_Fakultaet/Informatik/Dialog_Systems_and_Machine_Learning/Lectures_RL/L5.pdf) and this NeurIPS [paper](https://proceedings.neurips.cc/paper/2013/hash/2dace78f80bc92e6d7493423d729448e-Abstract.html). The actor's role is to perform policy improvement and the critic's role is to perform policy evaluation. This helps explain its advantage over value-only approaches, as PI converges faster than VI although each iteration is more expensive. Upvotes: 0
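The distinction drawn above, between a parametric Actor $\pi(a|s,\theta)$ and a "policy" that is merely a view of a value table, can be sketched as follows (the state, actions, and numbers are illustrative):

```python
import math

# A value-derived policy (as in policy iteration): greedy over a Q-table.
# There are no free policy parameters; the policy is just a view of the values.
Q = {("s0", "left"): 0.2, ("s0", "right"): 0.7}

def greedy_policy(state, actions):
    return max(actions, key=lambda a: Q[(state, a)])

# An Actor: pi(a|s, theta) as an explicit parametric function
# (softmax over action preferences), differentiable with respect to theta,
# so policy gradient methods can adjust it directly.
theta = {("s0", "left"): 0.0, ("s0", "right"): 0.0}

def actor_policy(state, actions):
    exps = {a: math.exp(theta[(state, a)]) for a in actions}
    z = sum(exps.values())
    return {a: exps[a] / z for a in actions}
```

Improving `greedy_policy` requires changing the value estimates in `Q`, whereas `actor_policy` can be improved by nudging `theta` along a gradient, which is exactly the separation that actor-critic methods rely on.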
2020/05/14
1,393
5,591
<issue_start>username_0: [Artificial intelligence (AI)](https://en.wikipedia.org/wiki/Artificial_intelligence) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving. According to [the Wikipedia article on swarm intelligence](https://en.wikipedia.org/wiki/Swarm_intelligence) > > Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, natural or artificial. The concept is employed in work on **artificial intelligence**. > > > The application of swarm principles to robots is called swarm robotics, while 'swarm intelligence' refers to the more general set of algorithms. > > > SI systems consist typically of a population of simple agents or boids interacting locally with one another and with their environment. The inspiration often comes from nature, especially biological systems. > > > These two terms seem to be related, especially in their application in computer science and software engineering. Is one a subset of another? Is one tool (SI) used to build a system for the other (AI)? What are their differences and why are they significant?<issue_comment>username_1: Well, one of the simpler definitions of `SI` sounds like this: > > “The emergent collective intelligence of groups of simple agents.” > (Bonabeau et al., 1999) > > > So, in order to get to `SI`, you have to use some kind of algorithm/AI to get simple intelligent agents. It's just cooperative intelligence, or cooperative `AI` if you wish. `SI` just uses today's `AI/ML` techniques to build the swarm, in the same manner as reinforcement learning uses `AI/ML` techniques to make agents that can behave reasonably in large spaces by approximating value functions `V(S)` and policies `pi(S)`. I hope this helps a little.
So `AI/ML` is kind of a tool plugged into `SI`, as `SI` is a field with its own algorithm definitions and theory. Upvotes: 2 <issue_comment>username_2: Artificial Intelligence, as its name suggests, is intelligence made by humans. It's usually thought of as having human-like behaviors and characteristics. However, it doesn't have to resemble humans to be AI. It just has to be made by humans. Many common AI algorithms aren't even made to resemble humans; they may just have similarities. Reinforcement learning is present in humans, but also in the many creatures with intelligence. Swarm Intelligence is basically *a lot of small stupid things working together to do something complex*. Take, for example, ants. Each individual ant only follows a few very simple "instructions" like `if has this chemical: follow`. Like Evolutionary AI, it mimics features of nature. We humans just take a feature made by nature (Evolution/Swarming) and try to replicate some behaviors. Much like Evolutionary AI, Swarm Intelligence is a type of AI. tl;dr: * AI: intelligence made by humans * SI: feature made by nature that humans are trying to copy Upvotes: 2 <issue_comment>username_3: **[Swarm intelligence (SI)](http://www.scholarpedia.org/article/Swarm_intelligence) is a *sub-field of* or *an approach to* artificial intelligence (AI)**, where you have multiple individuals (for example, artificial ants), which collectively can produce what we (or most of us) would *intuitively* call *intelligent behaviour*. SI is sometimes categorized as a sub-field of *evolutionary computation* (which also includes *evolutionary algorithms*, such as *genetic algorithms*, *genetic programming*, *evolution strategies*, and so on), which is often considered a sub-field of AI or techniques to produce artificial intelligence, because all these techniques are often based on the use of multiple individuals/solutions (that either compete or collaborate with each other).
One of the most commonly used SI techniques is [ant colony optimization algorithms](http://www.scholarpedia.org/article/Ant_colony_optimization) (proposed by <NAME> and further developed by other people like <NAME>ambardella), which have been successfully applied to solve the *non-decision* version of the [NP-complete problem](https://en.wikipedia.org/wiki/NP-completeness) (in simple words, it's a combinatorial problem that may require exponential time to be solved in the *usual* case) known as the [travelling salesman problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem). There are other SI techniques, which are somehow similar to ACO algorithms, such as [particle swarm optimization](http://www.scholarpedia.org/article/Particle_swarm_optimization) or the [artificial bee colony algorithm](http://www.scholarpedia.org/article/Artificial_bee_colony_algorithm). Occasionally, SI may also be categorised as a sub-field of [*computational intelligence*](https://ai.stackexchange.com/a/16649/2444), which often refers to specific techniques to create artificially intelligent systems (i.e. programs that exhibit what we would call *intelligence*) that are more based on or inspired by the biology, such as neural networks, genetic algorithms, or, in fact, SI algorithms, such as ACO algorithms. However, CI can also be considered a sub-field of AI, given that it studies techniques to produce artificial intelligence, so, in the end, as I said above, SI is an approach to AI, which includes other approaches, such as evolutionary algorithms, rule-based systems, deep learning or other machine learning techniques. Upvotes: 3 [selected_answer]
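As a concrete taste of an SI technique, here is a minimal particle swarm optimization sketch minimising a 1-D quadratic. The coefficients are common textbook defaults and the objective is purely illustrative:

```python
import random

random.seed(0)

def f(x):
    """Objective to minimise: a simple 1-D quadratic with minimum at x = 3."""
    return (x - 3.0) ** 2

# Minimal particle swarm: each particle tracks its personal best,
# and the whole swarm shares a global best.
n_particles, n_iters = 10, 100
w, c1, c2 = 0.7, 1.5, 1.5  # inertia and attraction coefficients (common defaults)

pos = [random.uniform(-10, 10) for _ in range(n_particles)]
vel = [0.0] * n_particles
pbest = pos[:]                  # personal best positions
gbest = min(pos, key=f)         # global best position

for _ in range(n_iters):
    for i in range(n_particles):
        r1, r2 = random.random(), random.random()
        # velocity update: inertia + pull towards personal and global bests
        vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (gbest - pos[i])
        pos[i] += vel[i]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=f)
```

Each particle is individually "stupid" (it only mixes three simple pulls), yet the swarm as a whole homes in on the minimum, which is the essence of the SI definition quoted above.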
2020/05/14
1,164
4,682
<issue_start>username_0: What is the difference between the prediction (value estimation) and control problems in reinforcement learning? Are there scenarios in RL where the problem cannot be distinctly categorised into the aforementioned problems and is a mixture of the problems? Examples where the problem cannot be easily categorised into one of the aforementioned problems would be nice.
2020/05/14
1,808
7,866
<issue_start>username_0: I'm a novice researcher, and as I started to read papers in the area of deep learning, I noticed that the implementation is normally not included and has to be searched for elsewhere, and my question is: how come that's the case? The paper's authors needed to implement their models anyway in order to conduct their experiments, so why not publish the implementation? Plus, if the implementation is not included and there's no reproducibility, what prevents authors from forging results?<issue_comment>username_1: > > The paper's authors needed to implement their models anyway in order to conduct their experiments, so why not publish the implementation? > > > Some papers and authors actually provide a link to their own implementation, but most of the papers (that I have read) don't provide it, although some third-party implementations may already be available on Github (or other code-hosting sites) when you are reading the paper. There may be different reasons why the author(s) of a paper don't provide a reference implementation: * They use some closed-source software, or maybe it makes use of other resources that cannot be shared * Their implementation is a mess, so, from a pedagogical point of view, it's quite useless * This may encourage other people to try to reproduce their results with different implementations, so it may indirectly encourage people to do research on the same topic (but maybe not providing a reference implementation could actually have the opposite effect!) > > Plus, if the implementation is not included and there's no reproducibility, what prevents authors from forging results? > > > I had some experience as a researcher, but not enough to answer this question precisely. Nevertheless, from some reviews of papers I have read (e.g. on [OpenReview](https://openreview.net/)), in most cases, the reviewers are interested in the consistency of the results, the novelty of the work, the clarity and structure of the paper, etc.
I think that, in most cases, they probably trust the provided results, also because, often, for reproducibility, researchers are expected to describe their models and parameters in detail, provide plots, etc., but I don't exclude that there are cases of people that try to fool the reviewers. For example, watch [this video](https://www.youtube.com/watch?v=bmZyJ2Dt8dw) where <NAME> comments on ridiculous attempts to fool people and plagiarism by <NAME>. Upvotes: 4 <issue_comment>username_2: One can argue for some understandable human reasons, but there is a bad trend of falsified results in deep learning research papers that propose some novel solutions or even claim to update state-of-the-art model performance. And that's not just a few papers that lie; it's a large portion of them. And the reason for that is even sadder: most so-called deep learning research papers just describe some empirical experiments, without any math or proof of any theorem, so it's easy to cheat. So, objectively, if the only thing you propose in your paper is your empirical results, you must confirm them by sharing your source code. Otherwise, your work will be ignored. Upvotes: 3 <issue_comment>username_3: 1. The first reason described in [nbro's answer](https://ai.stackexchange.com/a/21161/1641) can definitely be an important one; authors may have implemented their software using code that they can't share. There's a lot of research coming out of companies (large and small), and they may use all sorts of proprietary libraries that were built in the company and cannot be distributed outside. 2. As described in [this answer](https://academia.stackexchange.com/a/10270/69376), sometimes researchers prefer to keep the code to themselves because it may give them an "advantage" over other researchers for future work / follow-up research in the same area. I'm not saying that I believe this is a *good* reason, it definitely doesn't sound like it's good for the overall benefit of science...
but it may be understandable in a "publish or perish" world where there's quite a bit of pressure to keep publishing frequently if you want your academic career to survive. 3. Also described in more detail in the answer I linked above, research code is often messy, and not pretty. Nbro also mentioned this, though I personally don't feel like the rationale is "it's too messy to be useful", and more often it's more along the lines of "it's so messy that I'm too embarrassed to share it". 4. Some researchers, especially in larger teams, do not just work on a single paper at a time. They may have multiple papers they're working on simultaneously, and if they're closely related it can often be convenient to have them all in a single codebase. This is especially the case with longer review times; in the time between submission of a paper -- where it and anything related to it, such as source code, must remain private -- and an acceptance notification, there's plenty of time to start working on a next project. If the code for the previous project is mixed in with the code for the next project, and you can't / don't want to publish the code for the next project either yet... it may be easier to just not release anything. 5. In some cases, authors may feel it is "dangerous" to release their source code (or trained models). This is probably relatively uncommon, but can happen. Consider [the situation surrounding OpenAI's GPT-2 language model](https://openai.com/blog/better-language-models/), for example. Not directly a response to your question, but it may also be useful to keep in mind that sometimes not all authors of a paper may agree on whether or not to open-source it. Legally, I suppose that usually all the authors (or all contributors to the source code) would be copyright holders, and it can only be released if they **all** agree to release it.
So if one of them feels (based on any of the reasons listed above, or maybe other reasons) that it shouldn't be released, it won't. In practice, I suppose that it would often primarily be the call of the more senior authors on a paper / principal investigators / supervisors. --- > > Plus, if the implementation is not added and there's no reproducibility, what prevents authors from forging results? > > > Personally I wouldn't be concerned as much about forging results as just... "accidental" false positives. Yes, it's possible and it will happen. But the pay-off of successfully forging false results and getting a paper published seems REALLY low compared to the risk of your academic career ending if it gets out. If you really have to forge your results just to get your paper accepted, and it has zero other meaningful contributions (no "unforgeable" contributions like theoretical results or really new and useful insights)... it's unlikely to become a really impactful paper, a widely-cited one. The really highly impactful empirical papers only become highly impactful because people will immediately try to re-implement and reproduce them anyway, and if that turns out to be impossible, it will turn into a dead end. That said, I'm not saying it can't be important to share source code. Especially in deep learning, and especially in deep reinforcement learning, [it has indeed been shown that tiny implementation details can be massively important to empirical performance](https://arxiv.org/abs/1709.06560), and these tiny implementation details are rarely all available in papers. There has certainly been a push towards encouraging the publication of source code, and it **is** important -- but unfortunately it's not always a black-and-white story, and there **can** sometimes also be good reasons that make it difficult/impossible to do so. If it's good research, I'd personally still rather have it without source code, than not have it at all. Upvotes: 3
2020/05/14
859
3,030
<issue_start>username_0: Here is the [code](https://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On/blob/master/Chapter10/02_pong_a2c.py) written by <NAME>. I am reading his book (*Deep Reinforcement Learning Hands-on*). I have seen a line in his code which is really weird. In the accumulation of the policy gradient $$\partial \theta\_{\pi} \gets \partial \theta\_{\pi} + \nabla\_{\theta}\log\pi\_{\theta} (a\_i | s\_i) (R - V\_{\theta}(s\_i))$$ we have to compute the advantage $R - V\_{\theta}(s\_i)$. In line 138, maxim uses `adv_v = vals_ref_v - value_v.detach()`. Visually, it looks fine, but look at the shape of each term. ``` ipdb> adv_v.shape torch.Size([128, 128]) ipdb> vals_ref_v.shape torch.Size([128]) ipdb> values_v.detach().shape torch.Size([128, 1]) ``` In a much simpler code, it is equivalent to ``` In [1]: import torch In [2]: t1 = torch.tensor([1, 2, 3]) In [3]: t2 = torch.tensor([[4], [5], [6]]) In [4]: t1 - t2 Out[4]: tensor([[-3, -2, -1], [-4, -3, -2], [-5, -4, -3]]) In [5]: t1 - t2.detach() Out[5]: tensor([[-3, -2, -1], [-4, -3, -2], [-5, -4, -3]]) ``` I have trained the agent with his code and it works perfectly fine. I am very confused why it is good practice and what it is doing. Could someone enlighten me on the line `adv_v = vals_ref_v - value_v.detach()`? For me, the right thing to do was `adv_v = vals_ref_v - value_v.squeeze(-1)`. Here is the full algorithm used in his book : **UPDATE** [![enter image description here](https://i.stack.imgur.com/aNOfJ.png)](https://i.stack.imgur.com/aNOfJ.png) As you can see by the image, it is converging even though `adv_v = vals_ref_v - value_v.detach()` looks wrongly implemented. It is not done yet, but I will update the question later.<issue_comment>username_1: I changed the line `adv_v = vals_ref_v - value_v.detach()` to `adv_v = vals_ref_v - value_v.squeeze(-1).detach()`. It seems the convergence is much faster. 
According to the A2C algorithm, it is only logical to apply $Q(a, s) - V(s)$, where $Q(a, s)$ and $V(s)$ have the same shape. The call to `detach()` is important here, as we don't want to propagate the policy gradient into our value approximation head. Upvotes: 1 <issue_comment>username_2: Yeah, it seems like it's a wrong implementation. vals\_ref\_v is effectively a matrix of 1 row and 128 columns, while value\_v.detach() is a matrix of 128 rows and 1 column, so the subtraction broadcasts to a 128×128 matrix. Upvotes: 2
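To make the broadcasting pitfall discussed above concrete, here is a small NumPy sketch (NumPy follows the same broadcasting rules as PyTorch in this case; the array values are made up for illustration):

```python
import numpy as np

returns = np.array([1.0, 2.0, 3.0])       # shape (3,), like vals_ref_v
values = np.array([[0.5], [1.5], [2.5]])  # shape (3, 1), like value_v

# Buggy: (3,) - (3, 1) broadcasts to a (3, 3) matrix of cross-differences.
buggy_adv = returns - values

# Fixed: squeeze the trailing dimension first, giving one advantage per sample.
fixed_adv = returns - values.squeeze(-1)
```

With the squeezed version, each sample's advantage is simply `returns[i] - values[i]`, instead of a full matrix of pairwise differences between every return and every value.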
2020/05/17
421
1,600
<issue_start>username_0: Besides computer vision and image classification, what other use cases/applications are there for few-shot learning?<issue_comment>username_1: Few-shot learning (FSL) can be useful for many (if not all) machine learning problems, including supervised learning (regression and classification) and reinforcement learning. The paper [Generalizing from a Few Examples: A Survey on Few-Shot Learning](https://arxiv.org/pdf/1904.05046.pdf) (2020) provides an overview (including examples of applications and use cases) of FSL. The definition of FSL they provide is based on Tom Mitchell's famous definition of machine learning. > > Definition 2.1 (**Machine Learning** [92, 94]). A computer program is said to learn from experience $E$ with respect to some classes of task $T$ and performance measure $P$ if its performance can improve with $E$ on $T$ measured by $P$. > > > Here's the definition of FSL. > > Definition 2.2. **Few-Shot Learning** (FSL) is a type of machine learning problems, specified by $E$, $T$ and $P$, where $E$ contains only a limited number of examples with supervised information for the task $T$. > > > Specific examples of applications of FSL are * character generation * drug toxicity discovery * sentiment classification from short text * object recognition Upvotes: 2 <issue_comment>username_2: An interesting use case is IQ tests, or program synthesis from examples in general. IQ tests often require you to derive a program from a few examples that can produce a certain output. See for instance <https://github.com/fchollet/ARC> Upvotes: 0
2020/05/18
1,807
5,382
<issue_start>username_0: In scaled dot product attention, we scale our outputs by dividing the dot product by the square root of the dimensionality of the matrix: [![enter image description here](https://i.stack.imgur.com/wLI4m.png)](https://i.stack.imgur.com/wLI4m.png) The stated reason is that this constrains the distribution of the weights of the output to have a standard deviation of 1. Quoted from [Transformer model for language understanding | TensorFlow](https://www.tensorflow.org/tutorials/text/transformer): > > For example, consider that $Q$ and $K$ have a mean of 0 and variance of 1. Their matrix multiplication will have a mean of 0 and variance of $d\_k$. Hence, square root of $d\_k$ is used for scaling (and not any other number) because the matmul of $Q$ and $K$ should have a mean of 0 and variance of 1, and you get a gentler softmax. > > > Why does this multiplication have a variance of $d\_k$? If I understand this, I will then understand why dividing by $\sqrt{d\_k}$ would normalize to 1. Trying this experiment on 2x2 arrays I get an output variance of 1.6: [![enter image description here](https://i.stack.imgur.com/GCe3t.png)](https://i.stack.imgur.com/GCe3t.png)<issue_comment>username_1: It might help to take two small matrices that match the assumptions (mean of zero and variance of one) and just do the matrix multiplication. The dimensionality of K scales Q in the multiplication, scaling the variance simultaneously. Upvotes: 0 <issue_comment>username_2: In statistics, if $X$ and $Y$ are independent and randomly distributed variables: $\mathbb{E}[X + Y] = \mathbb{E}[X] + \mathbb{E}[Y] \\ Var(X + Y) = Var(X) + Var(Y) \\ \mathbb{E}[XY] = \mathbb{E}[X]\mathbb{E}[Y] \\ Var(XY) = (Var(X) + \mathbb{E}[X]^2)(Var(Y) + \mathbb{E}[Y]^2) - \mathbb{E}[X]^2\mathbb{E}[Y]^2$ Let $Q$ and $K$ be random $d\_k \times d\_k$ matrices, where each entry is drawn from some distribution with mean $0$ and variance $1$. All entries are mutually independent.
Since each entry of $Q$ and $K$ has an identical distribution, we can focus only on the top-left-most element of $QK$ without loss of generality. The same applies to every other element. The top-left-most element of $QK$ is $\displaystyle \sum\_{i=1}^{d\_k} Q\_{1,i} K\_{i, 1}$. Since $Q$ and $K$ are independent: $\mathbb{E}[Q\_{1, i} K\_{i, 1}] = \mathbb{E}[Q\_{1, i}] \mathbb{E}[K\_{i, 1}] = 0 \\ Var(Q\_{1, i} K\_{i, 1}) = (Var(Q\_{1, i}) + \mathbb{E}[Q\_{1, i}]^2)(Var(K\_{i, 1}) + \mathbb{E}[K\_{i, 1}]^2) - \mathbb{E}[Q\_{1, i}]^2\mathbb{E}[K\_{i, 1}]^2 = 1$ And so summing up $d\_k$ of them: $\mathbb{E} \left[\displaystyle \sum\_{i=1}^{d\_k} Q\_{1,i} K\_{i, 1} \right] = \displaystyle \sum\_{i=1}^{d\_k} \mathbb{E} \left[ Q\_{1,i} K\_{i, 1} \right] = 0 \\ Var\left(\displaystyle \sum\_{i=1}^{d\_k} Q\_{1,i} K\_{i, 1} \right) = \displaystyle \sum\_{i=1}^{d\_k} Var\left( Q\_{1,i} K\_{i, 1} \right) = d\_k$ For your code block, you are computing the dot product of matrices $a$ and $b$, when you should be doing a matrix multiplication (the attention function multiplies $Q$ by $K$ after all, which is the vectorized form of dot-product -- it doesn't actually do dot product). It should work out to unit variance. *edit: the last paragraph is incorrect, as dot is the same as matrix multiply in the above case* Upvotes: 4 [selected_answer]<issue_comment>username_3: The mathematical explanation is already given in the other answer.
But I was bothered by why the OP's code gets an output variance of 1.6, so I ran my own experiment to check it (with much larger matrices, since a 2x2 sample is far too small for the empirical variance to be reliable): ``` import numpy as np a = np.random.randn(1111,111) print('variance of matrix A =', a.var()) b = np.random.randn(111,1111) print('variance of matrix B =', b.var()) c = a @ b print('variance of dot-product =', c.var()) d = c / np.sqrt(111) print('variance of scaled (normalized) dot-product =', d.var()) ``` The output: ``` variance of matrix A = 0.9998308190814476 variance of matrix B = 0.9957289239999045 variance of dot-product = 110.5603610976103 variance of scaled (normalized) dot-product = 0.9960392891676604 ``` As shown above, the variance becomes (close to) 1 after dividing by $\sqrt{d\_k}$. Upvotes: 1 <issue_comment>username_4: Assume that the query embeddings $Q$ and key embeddings $K$ have zero mean and unit std. Then for the variance of the attention score between any query and key we get: $$ \alpha = q\_i k\_j^T = \sum\_{n=1}^{d\_k} q\_{in} k\_{jn} $$ $$ \text{Var}(\alpha) = d\_k $$ $$ \text{std}(\alpha) = \sqrt{d\_k} $$ But is it safe to assume that our query embeddings and key embeddings have unit variance? The answer is yes. The embeddings are computed by multiplying the input with the query and key matrices ($W\_Q$ and $W\_K$). We can assume that the input already has unit variance by using a normalizing layer (e.g. LayerNorm), and the weights of the $W\_Q$ and $W\_K$ matrices should be initialized so that variance is preserved. ***But why do we want unit variance at all?*** Applying softmax on the attention scores with such high variance will result in all of the weight being placed on one random element, while all the other elements will have a weight of zero. Thus, in order to have the attention scores with unit std, we scale by $\sqrt{d\_k}$. For more, check out [this](https://username_4.github.io/posts/transformer/#multi-head-attention-layer). Upvotes: 2
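As a further numerical illustration of the softmax point above (a sketch with an arbitrary seed and sizes, not taken from any of the answers): scaling the scores by $\sqrt{d\_k}$ keeps the softmax from collapsing onto a single element.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())  # subtract the max for numerical stability
    return e / e.sum()

d_k = 512
q = rng.standard_normal(d_k)           # one query vector
keys = rng.standard_normal((16, d_k))  # 16 key vectors

scores = keys @ q                # entries have variance ~ d_k
scaled = scores / np.sqrt(d_k)   # entries have variance ~ 1

peaked = softmax(scores)  # close to one-hot
gentle = softmax(scaled)  # weight spread over many keys
```

Multiplying logits by a constant larger than 1 strictly increases the largest softmax weight, so the unscaled scores always yield a more peaked distribution than the scaled ones.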
2020/05/18
1,599
4,772
<issue_start>username_0: I am trying to understand the difference between a Bayesian Network and a Markov Chain. When I search for this on the web, the unanimous solution seems to be that a Bayesian Network is directional (i.e. it's a DAG) and a Markov Chain is not directional. However, often a Markov Chain example is over time, where the weather today is impacting the weather tomorrow, but the weather tomorrow is not (obviously) impacting the weather today. So I am quite confused how is a Markov Chain not directional? I seem to be missing something here. Can someone please help me understand?
2020/05/20
1,672
4,896
<issue_start>username_0: Here is my understanding of importance sampling. If we have two distributions $p(x)$ and $q(x)$, where we have a way of sampling from $p(x)$ but not from $q(x)$, but we want to compute the expectation wrt $q(x)$, then we use importance sampling. The formula goes as follows: $$ E\_q[x] = E\_p\Big[x\frac{q(x)}{p(x)}\Big] $$ The only limitation is that we need a way to compute the ratio. Now, here is what I don't understand. Without knowing the density function $q(x)$, how can we compute the ratio $\frac{q(x)}{p(x)}$? Because if we know $q(x)$, then we can compute the expectation directly. I am sure I am missing something here, but I am not sure what. Can someone help me understand?
2020/05/22
1,265
4,730
<issue_start>username_0: The technique for off-policy value evaluation comes from importance sampling, which states that $$E\_{x \sim q}[f(x)] \approx \frac{1}{n}\sum\_{i=1}^n f(x\_i)\frac{q(x\_i)}{p(x\_i)},$$ where $x\_i$ is sampled from $p$. In the application of importance sampling to RL, is the expectation of the function $f$ equivalent to the value of the trajectories, which is represented by the trajectories $x$? The distribution $p$ represents the probability of sampling trajectories from the behavior policy, and the distribution $q$ represents the probability of sampling trajectories from the target policy? How would the trajectories from distribution $q$ be better than those of $p$? I know from the equation how it is better, but it is hard to understand intuitively why this could be so.<issue_comment>username_1: > > In the application of importance sampling to RL, is the expectation of the function $f$ equivalent to the value of the trajectories, which is represented by the trajectories $x$? > > > I believe what you are asking here is whether, when using importance sampling in the off-policy RL setting, we set $f(x)$ from the general importance sampling formula to be our returns - the answer to this is yes. As always, we are interested in calculating our expected returns. > > How would the trajectories from the distribution $q$ be better than those > of $p$? I know from the equation how it is better but it is hard to > understand intuitively why this could be so. > > > I think here you got your $p$ and $q$ the wrong way around, as we are using samples from $p$ to approximate expectations under our policy $q$. We typically will use importance sampling to generate samples from a *different* policy to our target policy for a few reasons - one reason might be that our target policy is hard to sample from, whereas our behaviour policy $p$ might be relatively easy to sample from.
Another reason is that we generally want to learn an optimal policy, but this could be difficult to learn if we don't explore enough. So we can follow some other policy that will explore sufficiently and still learn about our optimal target policy through the importance sampling ratio. Upvotes: 2 <issue_comment>username_2: Recall that our goal is to be able to accurately estimate the true value of each state by computing a sample average over returns starting from that state: $$v\_{q}(s) \doteq \mathbb{E}\_{q}\left[G\_{t} | S\_{t}=s\right] \approx \frac{1}{n} \sum\_{i=1}^{n} Return\_i $$ where $Return\_i$ is the return obtained from the $i^{th}$ trajectory. The problem is that the $\approx $ does not hold, since in off-policy learning, we got those returns by following the behavior policy, $p$, and not the target policy, $q$. To address that, we have to correct each return in the sample average by multiplying by the importance sampling ratio. $$v\_{q}(s) \doteq \mathbb{E}\_{q}\left[G\_{t} | S\_{t}=s\right] \approx \frac{1}{n} \sum\_{i=1}^{n} \rho\_i Return\_i$$ where the importance sampling ratio is : $\rho=\frac{\mathbb{P}(\text { trajectory under } q)}{\mathbb{P}(\text { trajectory under } p)}$ What this multiplication does is that it increases the importance of returns that were more likely to be seen under the target policy $q$ and it decreases those that were less likely. So, at the end, in expectation, it would be as if the returns were averaged following $q$. (A side note: To avoid the risks of mixing $p$ and $q$, it might be a good idea to denote/think of the **b**ehavior policy as $b$ and the target policy as $\pi$, following the convention in Sutton and Barto's RL book.) Upvotes: 2 <issue_comment>username_3: Let's fix some notation: we're collecting data from behavior policy $\pi\_0$ and we want to evaluate a policy $\pi$. 
Of course, if we had plenty of data from policy $\pi$ that would be the best way to evaluate $\pi$ as we just take the empirical average (without any importance sampling) and CLT gives us confidence intervals that shrink at $\frac{1}{\sqrt n}$ rates. However, collecting data from $\pi$ is often time-consuming and costly: you may need to productionize it at a company, and if $\pi$ were dangerous, some damage could be done during rollouts. So how can we make the best use of our data from any policies, not necessarily $\pi$, to evaluate $\pi$? This is the question of off-policy evaluation, and you're right that IS is one approach. This picture from a [great talk by Thorsten](https://www.youtube.com/watch?v=lzA5K4im2no&ab_channel=CriteoLabs) provides nice intuition on why the weighting is unbiased [![enter image description here](https://i.stack.imgur.com/h770z.jpg)](https://i.stack.imgur.com/h770z.jpg). Upvotes: 2
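The reweighting in the importance sampling formula above can be sanity-checked with a toy discrete example (the distributions and sample size below are made up for illustration): we estimate an expectation under a target distribution $q$ using only samples from a behavior distribution $p$.

```python
import numpy as np

rng = np.random.default_rng(0)

outcomes = np.array([0.0, 1.0, 2.0])
p = np.array([0.5, 0.3, 0.2])  # behavior distribution: we can sample from it
q = np.array([0.1, 0.3, 0.6])  # target distribution: we want E_q[x]

# Sample from p, then reweight each sample by the ratio q(x)/p(x).
idx = rng.choice(len(outcomes), size=100_000, p=p)
x = outcomes[idx]
weights = q[idx] / p[idx]
is_estimate = np.mean(x * weights)

true_value = np.sum(outcomes * q)  # exact E_q[x] = 1.5
```

The weighted average over samples from $p$ converges to the expectation under $q$, even though no sample was ever drawn from $q$ -- which is exactly the mechanism that lets us evaluate a target policy from behavior-policy trajectories.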
2020/05/22
1,848
5,992
<issue_start>username_0: Vanilla policy gradient algorithm (using a baseline to reduce variance), according to [here](http://joschu.net/docs/thesis.pdf) (page 16): > Initialize policy parameter $\theta$, baseline $b$ > for iteration=1, 2, . . . do > Collect a set of trajectories by executing the current policy > At each timestep in each trajectory, compute the return $R\_{t}= \sum\_{t'=t}^{T-1}\gamma^{t'-t}r\_{t'}$ and the advantage estimate $\hat{A}\_{t} = R\_{t} - b(s\_{t})$ > Re-fit the baseline, by minimizing $\lVert b(s\_{t}) - R\_{t} \rVert^{2}$, summed over all trajectories and timesteps > Update the policy, using a policy gradient estimate $\hat{g}$, which is a sum of terms $\nabla\_{\theta}\log\pi(a\_{t}|s\_{t},\theta)\hat{A}\_{t}$ * At line 6, the advantage estimate is computed by subtracting the baseline from the returns * At line 7, the baseline is re-fit by minimizing the mean squared error between the state-dependent baseline and the return * At line 8, we update the policy using the advantage estimate from line **6** So is the baseline expected to be used in the next iteration, when our policy has *changed*? To compute the advantage we subtract the state value $V(s\_{t})$ from the action value $Q(s\_{t},a\_{t})$, under the *same* policy, so why is the old baseline used here in advantage estimation?<issue_comment>username_1: > > So is the baseline expected to be used in the next iteration when our policy has *changed*? > > > Yes. > > To compute the advantage we subtract the state value $V(s\_{t})$ from the action value $Q(s\_{t},a\_{t})$, under the *same* policy, then why is the old baseline used here in advantage estimation? > > > The precise value of the baseline is not that important.
What is important is that the baseline does not depend on the action choice, $a$, so it does not impact the gradient estimations or update steps for the policy function you are trying to improve. You could in theory use a fixed offset instead of $V(s)$, or any arbitrary function that does not depend on $a$. In some settings the average reward $\bar{R}$ seen so far is used. Using a rough approximation to $V(s)$ - and thus an approximate advantage function overall - is useful, as it removes a large source of variance in gradient estimates (the inherent value of the current state under the current policy, which is irrelevant to the search for *adjustments* to that policy). The more accurate $V(s)$ is, the lower the variance, and thus the faster and more reliable the convergence, so you do want it to be a good estimate. But a little bit of lag behind policy updates is acceptable and does not break the algorithm. For more on this, see [Sutton & Barto, chapter 13, section 13.4](http://incompleteideas.net/book/RLbook2020.pdf). Upvotes: 1 [selected_answer]<issue_comment>username_2: To be honest, I also struggled with this question for quite some time, and I think that it was written this way mostly for historical reasons. Once you have the advantage $A\_t = \sum\_{i=t}^{T} r\_{i+1} - V\_\phi(s\_t)$, you can use it directly to update both the value and the policy networks: \begin{align} &\phi = \phi + \alpha A\_t \nabla V\_\phi(s\_t) \\ &\theta = \theta + \alpha A\_t \nabla \log \pi\_\theta(a\_t|s\_t) \end{align} To see why this is true, note that you update the value network using the squared-error loss $L^{VF} = 0.5(\sum\_{i=t}^{T} r\_{i+1} - V\_\phi(s\_t))^2$, and differentiating this (and flipping the sign for gradient descent) yields the value update above. And the loss for the policy network is simply $L^{PG} = \log \pi\_\theta(a\_t|s\_t) A\_t$. Essentially, you evaluate the value network only one time instead of two, which probably was a big deal back when this algorithm was invented (e.g. in the 1990s).
If I were to implement VPG with a baseline, I would first update the value network and only then calculate the advantage to update the policy network, i.e. I would reshuffle the steps so that I use the updated baseline. However, I would never use VPG with a baseline. I don't think anyone has ever used it for something practical, and I don't know of any papers that show SotA results with it. Note that if you have the value network, instead of using it merely as a baseline, you would be much better off using it to bootstrap the estimate of the return, i.e. $A\_t = r\_{t+1} + V(s\_{t+1}) - V(s\_t).$ If you take a look at the pseudocode for this algorithm, you can see that, again, people update the value network after they compute the advantage. Here, you could again update the value network before that, but now you have to be extra careful. When computing the advantage, you have to use the old value network for bootstrapping and the new value network only for the baseline: $A\_t = r\_{t+1} + V\_{old}(s\_{t+1}) - V\_{new}(s\_t).$ A very brief explanation for this is given at the end of page 55 in <NAME>'s PhD thesis [that you shared](http://joschu.net/docs/thesis.pdf). So in this case, I would just go on and use the old network and not overthink it. Even better than a one-step bootstrap, you could do an n-step bootstrap: $A\_t^{(n)} = r\_{t+1} + r\_{t+2} + \cdots + r\_{t+n} + V(s\_{t+n}) - V(s\_t)$ and then use generalized advantage estimation: $A\_t^{GAE} = \sum\_{n=1}^{\infty} (1-\lambda) \lambda^{n-1} A\_{t}^{(n)}$. When computing $A\_t^{GAE}$, the value network is used both for bootstrapping and as a baseline, and it is not possible to distinguish between the two uses because they are cancelled out in a telescoping sum. This means that the only correct way would be to update the value network after you compute the advantage. So you can see that once you have the value network, you get from a poor algorithm to state-of-the-art with one simple equation.
See [here](https://username_2.github.io/posts/actor-critic/) if you want to read more. Upvotes: 1
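The n-step and GAE formulas above are usually implemented with a single backward recursion, $A\_t = \delta\_t + \gamma \lambda A\_{t+1}$, where $\delta\_t = r\_{t+1} + \gamma V(s\_{t+1}) - V(s\_t)$ (written here with explicit discounting; the function and variable names below are mine, not from the thread):

```python
import numpy as np

def gae(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Generalized advantage estimation over a single trajectory.

    rewards: r_1 .. r_T, values: V(s_0) .. V(s_{T-1}), last_value: V(s_T).
    """
    values = np.append(values, last_value)
    advantages = np.zeros(len(rewards))
    running = 0.0
    # Work backwards through the trajectory: A_t = delta_t + gamma * lam * A_{t+1}.
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        advantages[t] = running
    return advantages
```

With $\lambda = 0$ this reduces to the one-step advantage $\delta\_t$, and with $\lambda = 1$ it reduces to the full-return advantage -- the two extremes discussed in the answer above.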
2020/05/23
733
2,904
<issue_start>username_0: I am running into an issue in which the target (label columns) of my dataset contains a mixture of a binary label (yes/no) and some numeric value labels. [![d](https://i.stack.imgur.com/yqfys.png)](https://i.stack.imgur.com/yqfys.png) The values of these numeric labels (the resource 1 and resource 2 columns) vary over a very large range. Sometimes these values can be around 0.389, but sometimes they can be around 0.389 x 10^-4 or so. My goal is to predict the binary decision and the amount of resources allocated to a new user who has input feature 1 (numeric) and input feature 2 (numeric). My initial thought would be that the output neuron corresponding to the 0-1 decision would use a logistic (sigmoid) activation function. But for the neurons corresponding to the resources, I am not quite sure. What would be the appropriate way to tackle such a situation, in terms of network structure or data pre-processing strategy? Thank you for your enthusiasm!<issue_comment>username_1: In neural networks, the activation function that provides the largest output interval is **tanh**, with results **between -1 and 1**. You can use it to train your model: when the label is false, the target should be -1, and when it is true, it should be 1. At prediction time, you check which value the output is closer to; *for example, an output of 0.4 is closer to 1, so it would be true*. Upvotes: 0 <issue_comment>username_2: Your question is missing some details, so I will assume some scenarios. * If you have a classification problem: you can try grouping the values into intervals that make sense (you should analyze and decide on this setup), if it's possible. For example: 0.000-0.250 (0), 0.251-0.500 (1), 0.501-0.750 (2) and so on. Note that neural networks are sensitive to the distance between values (1 is closer to 0 than 2, so 1 is more similar to 0 than 2, and so on). If that is not your case, you should binarize the values in a one-hot-encoded manner.
* If you have a regression problem, you should be OK without anything else. You can try normalizing your outputs and observe the results, but generally it's not necessary for regression problems. * Make sure your dataset is as free of outliers and noisy data as possible. * It's important to choose activation functions that are adequate for the range of values in your attributes and output. This can depend on how you treat and set up your dataset, the range of values, normalization, etc. **Update after more details in question** Your neural network should have 3 neurons in the output layer, with linear activation. As said before, normalization usually is not necessary in regression problems, but if your values are too different (like the range in resource 1 and resource 2), some kind of adjustment (normalization, standardization, etc.) may be helpful. But you need to try and see the results. Upvotes: 2 [selected_answer]
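Since the resource targets span several orders of magnitude (0.389 vs. 0.389 x 10^-4), one concrete pre-processing option in the spirit of the "some kind of adjustment" suggested above is a log transform of the regression targets (a sketch with made-up values, assuming all targets are strictly positive):

```python
import numpy as np

# Hypothetical resource targets spanning several orders of magnitude.
resource = np.array([0.389, 0.389e-4, 7.1e-3, 0.02])

# Train the regression heads on log10-scaled targets...
log_target = np.log10(resource)

# ...and invert the transform at prediction time.
recovered = 10.0 ** log_target
```

The log-scaled targets all sit within a few units of each other, which is far easier for a linear output neuron to fit than raw values differing by a factor of 10,000.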
2020/05/23
1,032
4,036
<issue_start>username_0: I'm reading an article on reinforcement learning, and I don't understand why the agent's policy $\pi$ is not part of the definition of a Markov Decision Process (MDP): > > [![enter image description here](https://i.stack.imgur.com/giJ04.png)](https://i.stack.imgur.com/giJ04.png) > > > Bu, Lucian, <NAME>, and <NAME>. "A comprehensive survey of multiagent reinforcement learning." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 38.2 (2008): 156-172. > > > My question is: > > Why is the policy not part of the MDP definition? > > ><issue_comment>username_1: The MDP defines the environment (which corresponds to the task that you need to solve), so it defines e.g. the states of the environment, the actions that you *can* take in those states, the probabilities of transitioning from one state to the other, and the probabilities of getting a reward when you take a certain action in a certain state. The policy corresponds to a strategy that the RL agent can follow to act in that environment. Note that the MDP doesn't define what the agent does in each state. That's why you need the policy! An optimal policy for a specific MDP corresponds to the strategy that, if followed, is guaranteed to give you the highest amount of reward in that environment. However, there are multiple possible strategies, and most of them are not optimal. This should clarify why the policy is not part of the definition of the MDP. Upvotes: 3
And if I then have multiple different MDPs, all without any policies or anything like that, I could compare them based on those properties. For example, I might simply want to measure the sizes of the state and action spaces. Or write out something like a game tree, and measure properties like the branching factor and the average / min / max / median depth at which we can find a terminal state. On the other hand, it can also be interesting sometimes to study multiple different policies all for the same MDP. A very common example would be any off-policy learning algorithm (like $Q$-learning): they all involve at least one "target policy" (for which they're learning the $Q(s, a)$ values -- usually the greedy policy with respect to the values learned so far), and at least one "behaviour policy" (which they're using to generate experience -- often something like an $\epsilon$-greedy policy). A more complex example would be population-based training setups, like the one [DeepMind used for their StarCraft 2 training](https://doi.org/10.1038/s41586-019-1724-z); here they have a large population of different policies that they're all using in a complex training setup (and technically I suppose we should say they also have many different MDPs, where every combination of StarCraft 2 level + training opponent would formally be a different MDP). Upvotes: 2 <issue_comment>username_3: As already explained by others, the policy accounts for the agent's decisions, which are not set by the environment. The only requirement of an MDP is to define a *space of possible policies* from which the agent can sample (the "D" of an MDP). This is usually skipped in the literature, as it is assumed that the agent can sample *any* policy, even if this is true only under strong conditions. Instead, a Markov Reward Process (MRP) is an MDP with a fixed policy $\pi$, and so is a Markov Process (MP), which is an MRP without rewards.
I think David Silver's slides explain this very plainly; you might find them useful: <https://www.davidsilver.uk/wp-content/uploads/2020/03/MDP.pdf> Upvotes: 1
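To make the separation concrete, here is a tiny sketch (hypothetical states, actions, and rewards) in which one MDP is defined once and two different policies are evaluated against it; the MDP itself says nothing about which policy to follow:

```python
# The MDP only defines dynamics and rewards; policies live outside it.
# Hypothetical toy MDP: states 0 and 1; actions 'a' and 'b'; deterministic transitions.
transitions = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
rewards     = {(0, 'a'): 0.0, (0, 'b'): 1.0, (1, 'a'): 0.0, (1, 'b'): 2.0}

def evaluate(policy, start=0, steps=10, gamma=0.9):
    """Discounted return from following a fixed deterministic policy."""
    s, total = start, 0.0
    for t in range(steps):
        a = policy[s]
        total += gamma ** t * rewards[(s, a)]
        s = transitions[(s, a)]
    return total

# Two different policies over the *same* MDP.
stay   = {0: 'b', 1: 'b'}   # always take action 'b'
switch = {0: 'a', 1: 'a'}   # always take action 'a'
```

Here `evaluate(stay)` and `evaluate(switch)` give different returns on the same environment, which is exactly why the policy is a separate object from the MDP.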
2020/05/27
931
3,820
<issue_start>username_0: There are a lot of examples of balancing a pole (see image below) using reinforcement learning, but I find that almost all examples start close to the upright position. Is there any good source (or paper) for when the pole actually starts all the way at the bottom? [![enter image description here](https://i.stack.imgur.com/9NbPg.png)](https://i.stack.imgur.com/9NbPg.png)
2020/05/28
1,026
3,360
<issue_start>username_0: I am new to reinforcement learning, but I already know deep Q-learning and Q-learning. Now, I want to learn about double deep Q-learning. Do you know any good references for double deep Q-learning? I have read some articles, but some of them don't mention what the loss is and how to calculate it, so many articles are not complete. Also, Sutton and Barto (in their book) don't describe that algorithm either. Please help me learn double Q-learning.<issue_comment>username_1: You should first read the introductory paper on Double DQN: <https://arxiv.org/abs/1509.06461> Then, depending on what you would like to do, search for other relevant papers that use this method. I also propose studying the original Double Q-learning paper to understand important concepts/issues, such as the overestimation bias of Q-learning: <https://proceedings.neurips.cc/paper/2010/file/091d584fced301b442654dd8c23b3fc9-Paper.pdf> Upvotes: 3 <issue_comment>username_2: If you're interested in the theory behind Double Q-learning (***not deep!***), the reference paper would be [Double Q-learning](https://papers.nips.cc/paper/3964-double-q-learning.pdf) by <NAME> (2010). As for Double ***deep*** Q-learning (also called DDQN, short for Double Deep Q-networks), the reference paper would be [Deep Reinforcement Learning with Double Q-learning](https://arxiv.org/abs/1509.06461) by Van Hasselt et al. (2016), as pointed out in [username_1's answer](https://ai.stackexchange.com/a/21516/34010). As for how the loss is calculated, it is not explicitly written in the paper. But, you can find it in the [Dueling DQN paper](https://arxiv.org/pdf/1511.06581.pdf), which is a subsequent paper where <NAME> is a coauthor. In the appendix, the authors provide the pseudocode for Double DQN.
The relevant part for you would be: > > $y\_{j}=\left\{\begin{array}{ll}r & \text { if } s^{\prime} \text { is terminal } \\ r+\gamma Q\left(s^{\prime}, a^{\max }\left(s^{\prime} ; \theta\right) ; \theta^{-}\right), & \text {otherwise}\end{array}\right.$ > > > Do a gradient descent step with loss $ \left\|y\_{j}-Q(s, a ; \theta)\right\|^{2}$ > > > Here, $y\_j$ is the target, $\theta$ are the parameters of the regular network and $\theta^{-}$ are the target network parameters. The most important thing to note here is the difference with the DQN target: $y\_{i}^{D Q N}=r+\gamma \max \_{a^{\prime}} Q\left(s^{\prime}, a^{\prime} ; \theta^{-}\right)$. In DQN, we evaluate the Q-values based on parameters $\theta^{-}$ and we take the max over actions based on these Q-values parametrized with the **same** $\theta^{-}$. The problem with this is that it leads to an overestimation bias, especially at the beginning of the training process, where the Q-value estimates are noisy. In order to address this issue, in double DQN, we instead take the max based on Q-values calculated using $\theta$ and we evaluate the Q-value of $a^{max}$ based on a different set of parameters, i.e. $\theta^{-}$. If you want to learn more about this by watching a video lecture instead of reading a paper, I'd suggest you take a look at [this](https://www.youtube.com/watch?v=7Lwf-BoIu3M&t=39m20s) lecture from UC Berkeley's DRL course, where the professor (<NAME>) discusses this in detail with examples. Upvotes: 4 [selected_answer]
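The difference between the two targets can be sketched numerically (the Q-values below are made-up toy numbers; only the argmax/evaluation split matters):

```python
import numpy as np

def dqn_target(r, q_next_target, gamma=0.99, terminal=False):
    """Standard DQN: both the max over actions and the evaluation use theta^-."""
    if terminal:
        return r
    return r + gamma * np.max(q_next_target)

def double_dqn_target(r, q_next_online, q_next_target, gamma=0.99, terminal=False):
    """Double DQN: argmax comes from the online net (theta), the value from theta^-."""
    if terminal:
        return r
    a_max = int(np.argmax(q_next_online))
    return r + gamma * q_next_target[a_max]

# Toy Q-values for the next state under each parameter set.
q_online = np.array([1.0, 5.0, 2.0])   # theta
q_target = np.array([4.0, 3.0, 6.0])   # theta^-
```

With reward `r = 1.0`, the DQN target uses `max(q_target) = 6.0`, while Double DQN picks `argmax(q_online) = 1` and evaluates `q_target[1] = 3.0`, giving a smaller (less overestimation-prone) target.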
2020/05/30
543
2,464
<issue_start>username_0: I know how pooling works, and what effect it has on the input dimensions - but I'm not sure why it's done in the first place. It'd be great if someone could provide some intuition behind it - while explaining the following excerpt from a blog: > > A problem with the output feature maps is that they are sensitive to the location of the features in the input. One approach to address this sensitivity is to down sample the feature maps. This has the effect of making the resulting down sampled feature maps more robust to changes in the position of the feature in the image, referred to by the technical phrase “*local translation invariance*.” > > > What's local translation invariance here?<issue_comment>username_1: Pooling has multiple benefits: * Robust feature detection. * Makes it computationally feasible to have deeper CNNs Robust Feature Detection ------------------------ Think of max-pooling (the most popular variant) to understand this. Consider a 2\*2 box/unit in one layer which is mapped to only 1 box/unit in the next layer (basically, pooling). Let's say the feature map (kernel) detects a petal of a flower. Then detecting a petal if any of the 4 units of the previous layer fires makes the detection robust to noise. There is no strict requirement that all 4 units should fire to detect a petal. Thus, the next layer (after pooling) captures the features with **noise invariance**. We can also say it is **local translation invariance** (in a close spatial sense), as a shifted feature will also be captured. But also remember that **translation invariance** in general is captured by the convolution with kernels in the first place. (See how 1 kernel is convolved with the whole image.) ### Computational advantage The dimensions of the inputs in image classification are so huge that the number of multiplication operations is in the billions even with very few layers.
Pooling the output layer reduces the input dimension for the next layer, thus saving computation. But one can now also aim for really deep networks (in number of layers) with the same complexity as before. Upvotes: 4 [selected_answer]<issue_comment>username_2: In addition, pooling generally aids detection: since only the strongest feature filter response is kept, it removes extra information. But it obviously has drawbacks, as it can result in combinations of features being detected that don't correspond to actual objects. Upvotes: 0
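A tiny numpy sketch of the local translation invariance described above: a feature shifted by one pixel, as long as it stays inside the same 2x2 pooling window, produces an identical pooled map.

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a 2-D feature map."""
    h, w = x.shape
    # Group rows and columns into 2x2 blocks, then take each block's maximum.
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A feature map with a single strong activation.
fmap = np.zeros((4, 4))
fmap[0, 0] = 1.0

# The same feature shifted by one pixel, still inside the top-left pooling window.
shifted = np.zeros((4, 4))
shifted[1, 1] = 1.0
```

Both maps pool to the same 2x2 output, which is exactly the "robust to small shifts" behaviour the quoted blog excerpt refers to.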
2020/05/31
575
2,276
<issue_start>username_0: I am learning to use an LSTM model to predict time series data. Specifically, I want the network to output a sequence (with multiple time steps) only after the input sequence has finished feeding in, as shown in the left figure. [![enter image description here](https://i.stack.imgur.com/mnD8D.png)](https://i.stack.imgur.com/mnD8D.png) However, most of the LSTM sequence-to-sequence prediction tutorials I have read seem to follow the right figure (i.e. each time step of the output sequence is generated after each time step of the input sequence). What's more, as far as I understand, the LSTM implementation in PyTorch (and probably Keras) can only return an output sequence corresponding to each time step of the input sequence. It cannot make predictions after the input sequence is over. Is there any way to make a sequence-to-sequence LSTM network that starts outputting only after the input sequence has finished feeding in? It would be better if someone could show me example implementation code.<issue_comment>username_1: You should try an architecture with an **encoder** and a **decoder**. The encoder will consume all the data you give as input, and the decoder will then generate the output series. Upvotes: 1 <issue_comment>username_2: These are some links that might be useful with some code, considering an LSTM Encoder-Decoder architecture. 1. [**A Simple Introduction to Sequence to Sequence Models**](https://www.analyticsvidhya.com/blog/2020/08/a-simple-introduction-to-sequence-to-sequence-models/) 2. [**NMT: Encoder and Decoder with Keras**](https://www.pluralsight.com/guides/nmt:-encoder-and-decoder-with-keras) 3. [**A ten-minute introduction to sequence-to-sequence learning in Keras**](https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html) 4.
[**How to Develop an Encoder-Decoder Model for Sequence-to-Sequence Prediction in Keras**](https://machinelearningmastery.com/develop-encoder-decoder-model-sequence-sequence-prediction-keras/) 5. [**How to build an encoder decoder translation model using LSTM with Python and Keras**](https://towardsdatascience.com/how-to-build-an-encoder-decoder-translation-model-using-lstm-with-python-and-keras-a31e9d864b9b) Upvotes: 0
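The encoder-decoder idea in the answers above can be sketched without any framework: the encoder reads the whole input sequence and keeps only its final state, and the decoder starts producing outputs only after that. This is a purely illustrative sketch with vanilla RNN cells and random (untrained) weights; a real model would use trained LSTM layers from PyTorch or Keras.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 2, 4
W_e = rng.normal(scale=0.5, size=(d_h, d_h))   # encoder recurrence weights
U_e = rng.normal(scale=0.5, size=(d_h, d_in))  # encoder input weights
W_d = rng.normal(scale=0.5, size=(d_h, d_h))   # decoder recurrence weights
U_d = rng.normal(scale=0.5, size=(d_h, d_in))  # decoder feedback weights
V   = rng.normal(scale=0.5, size=(d_in, d_h))  # decoder output projection

def encode(xs):
    """Read the full input sequence; only the final hidden state is kept."""
    h = np.zeros(d_h)
    for x in xs:
        h = np.tanh(W_e @ h + U_e @ x)
    return h

def decode(h, n_steps):
    """Generate outputs only after encoding ends, feeding each output back in."""
    y, ys = np.zeros(d_in), []
    for _ in range(n_steps):
        h = np.tanh(W_d @ h + U_d @ y)
        y = V @ h
        ys.append(y)
    return np.stack(ys)

# 5 input steps in, then 3 output steps out -- the "left figure" pattern.
preds = decode(encode(rng.normal(size=(5, d_in))), n_steps=3)
```

The key point is that the input and output sequence lengths are decoupled, which is what the standard per-timestep LSTM output in PyTorch/Keras does not give you directly.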
2020/06/03
1,947
8,706
<issue_start>username_0: Say we have a machine and we give it a task to do (vision task, language task, game, etc.), how can one prove that a machine actually know's what's going on/happening in that specific task? To narrow it down, some examples: **Conversation** - How would one prove that a machine actually knows what it's talking about or comprehending what is being said? The Turing test is a good start, but never actually addressed *actual comprehension*. **Vision**: How could someone prove or test that a machine actually knows what it's seeing? Object detection is a start, but I'd say it's very inconclusive that a machine understands at any level what it is actually seeing. How do we prove comprehension in machines?<issue_comment>username_1: This is one of the most important issues in the philosophy of artificial intelligence. The most famous philosophical argument that attempts to address this issue is the [Chinese Room argument](https://plato.stanford.edu/entries/chinese-room/) published by the philosopher <NAME> in 1980. The argument is quite simple. Suppose that you are inside a room and you need to communicate (in a written form) with people outside the room in a certain language that you do not *understand* (in the particular example given by Searle, Chinese), but you are given the rules to manipulate the characters of this language (for a given input, you have the rules to produce the correct output). If you follow these rules, to the people outside the room, it will seem as if you understand this language, but you don't. To be more concrete, when I say "apple", you understand that it refers to a specific fruit because you have eaten apples and you have a model of the world. That's understanding, according to Searle. The most famous mathematical model of computers, the Turing machine, is essentially a system that manipulates symbols, so the Chinese Room argument directly applies to computers. 
Many replies or counterarguments to the CR argument have been discussed, such as * the [system reply](https://plato.stanford.edu/entries/chinese-room/#SystRepl) (the symbol manipulator is only a part of the larger system). * the [robot reply](https://plato.stanford.edu/entries/chinese-room/#RoboRepl) (the symbol manipulator does not understand the meaning of the symbols because it has not experienced the associated real-world objects, so it suggests that understanding requires a body with sensors and controllers) * the [brain simulator reply](https://plato.stanford.edu/entries/chinese-room/#BraiSimuRepl) (the symbol manipulator can actually simulate the activity in the brain of a person that understands the unknown language) So, can we prove that machines really understand? Even before Searle, Turing had already asked the question ["Can machines think?"](https://academic.oup.com/mind/article/LIX/236/433/986238). To prove this, you need a rigorous definition of understanding and thinking that people agree on. However, many people do not want to agree on a [definition of intelligence](https://ai.stackexchange.com/a/13100/2444) and understanding (hence the many counterarguments to the CR argument). So, if you want to prove that machines understand, you need to provide a proof with respect to a specific definition of understanding. For example, if you think that understanding is just a side effect of symbol manipulation, you can easily prove that machines understand many concepts (it just follows from the definition of a Turing machine). However, even if understanding was just a side effect (what does a *side effect* actually mean in this case?) of symbol manipulation, would a machine be able to understand the same concepts and in the same way that humans understand? It's harder to answer this question because we really do not know if humans only manipulate symbols in our brains. 
Upvotes: 4 <issue_comment>username_2: I recently came across a neat definition of understanding in Roger Schank's *Dynamic Memory*: Basically, you store everything you experience in your memory, but you need to index it in order to be able to use it for processing. Obviously, all experiences are slightly different, eg going to a restaurant is broadly the same, but the details vary. So you need to abstract away the details and store those only if necessary (eg if the food or service was particularly good or bad). Otherwise you just store a general template (or 'script') of the event. In your memory (note: this is modeled, not neurologically correct) you thus have a whole set of event scripts that you can retrieve. So currently I would be accessing my reply-to-stack-exchange-question script to guide me how to best write this answer without getting downvoted for ludicrous claims etc. Understanding, then, would be to receive (through sensory input, or language) an event, and putting it into the right area in your memory. So if I told you I just went to Burger King, you would understand it when this activates your fast-food-restaurant memory set. If I then told you I went there to wipe the floor, it should instead activate cleaning-job, rather than fast-food-restaurant. So you understand the sequence "I went to Burger King to clean the floor" by linking it to the correct memory region. If a computer then responded with "What did you eat?" it would clearly not have understood the input. But a response of "Do you get free food for working there?" would indicate some level of comprehension/understanding, as it might recognise that people working in food outlets might get free food as a work-related benefit. If you experience something completely new, you recognise it as a new experience, and start a new cluster of experiences. For example, if you have been to restaurants before, but never to fast food ones. 
The first time it will be strange and different, but you remember it as differences to the existing restaurant script. Over time it becomes strong enough (assuming you go to more fast-food restaurants), and it will become its own area, still linked to restaurants, but also not quite the same. What I like about this is that it is a generic mechanism, rather than an explicit processing of content. It is based on learning and experience, which I believe are key aspects of intelligent behaviour. **UPDATE:** This answer is more concerned with trying to find a workable definition of what it means to *comprehend* something, rather than trying to operationalise it in a dialogue system. You can probably pass the Turing test with some clever tricks, without any comprehension at all. But the point is, what does it mean to understand something? And in the current definition it means to classify related events together, and to recognise similarities and differences between similar experiences. The reaction (ie a response) is not the understanding itself, but only a reflection of the internal state that would demonstrate understanding. The difference to a neural network is, I would guess, that it can cope with a broad range of experiences, whereas a NN would need vast amounts of training data (as it doesn't comprehend). Comprehension involves compression of information through abstraction and evaluating differences. This is still a hard problem, and I'd think difficult to achieve just with automated machine learning. **UPDATE 2:** With regards to the Turing Test, in a way it goes back to deep philosophical points about empiricism. How do you know the world around you exists? You can see it. But how do you know your eyes tell you the true picture? You can quickly descend into a Matrix-like scenario where you don't know anything for certain. The Turing Test is a proxy for showing understanding.
You don't know the computer understands what you say, so you observe its responses and interpret them accordingly. Just like at school: the teacher asks a question, and from the pupils' answers infers whether they show understanding. If you simply regurgitate a memorised answer, that's not understanding. If you paraphrase in different words, that shows some sort of comprehension. If you draw analogies to similar issues and analyse why and how they are distinct, then you show that you really get it. Because we cannot inspect the internal state of a pupil, we cannot measure objectively whether they understood something. We only have communication as an interface between our mind and theirs, and so far chatbots have focused on getting that right. But I think what we really need is to work on memory and memory processing to get further towards comprehension or understanding. And I say this as a computational linguist who specialises in the language parts... Upvotes: 3
2020/06/03
1,961
7,387
<issue_start>username_0: I am having trouble making a reinforcement learning algorithm that can win the 2048 game. I have tried with deep Q (which I think is the simplest algorithm that should be able to learn a winning strategy). My Q function is given by a NN of two hidden layers 16 -> 8 -> 4. Weight initialization is XAVIER. Activation function is RELU. Loss function is quadratic loss. Correction is via gradient descent. To train the NN I used a reward given by: $$r\_t = \frac{1}{1024} \sum\_{i=0}^{n}{p^i r\_{((t-n)+i)}}$$ where $n$ is 20, or the number of iterations since the last update if a game is lost, and $p = 1.4$. There is an epsilon for exploration, set at 100% at the start, and it decreases by 10% until it reaches 1%. I have tried to optimize the parameters but can't get better results than a "256" on the board. And the quadratic loss seems to get stuck at 0.25: [![DL4J web UI for neural net info](https://i.stack.imgur.com/i9T42.png)](https://i.stack.imgur.com/i9T42.png) Is there something I am missing?
[Code](https://github.com/EmmanuelMess/2048AI): ``` public enum GameAction { UP, DOWN, LEFT, RIGHT } public final class GameEnvironment { public final int points; public final boolean lost; public final INDArray boardState; public GameEnvironment(int points, boolean lost, int[] boardState) { this.points = points; this.lost = lost; this.boardState = new NDArray(boardState, new int[] {1, 16}, new int[] {16, 1}); } } public class SimpleAgent { private static final Random random = new Random(SEED); private static final MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder() .seed(SEED) .weightInit(WeightInit.XAVIER) .updater(new AdaGrad(0.5)) .activation(Activation.RELU) .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT) .weightDecay(0.0001) .list() .layer(new DenseLayer.Builder() .nIn(16).nOut(8) .build()) .layer(new OutputLayer.Builder() .nIn(8).nOut(4) .lossFunction(LossFunctions.LossFunction.SQUARED_LOSS) .build()) .build(); MultiLayerNetwork Qnetwork = new MultiLayerNetwork(conf); private GameEnvironment oldState; private GameEnvironment currentState; private INDArray oldQuality; private GameAction lastAction; public SimpleAgent() { Qnetwork.init(); ui(); } public void setCurrentState(GameEnvironment currentState) { this.currentState = currentState; } private final ArrayList input = new ArrayList<>(); private final ArrayList output = new ArrayList<>(); private final ArrayList rewards = new ArrayList<>(); private int epsilon = 100; public GameAction act() { if(oldState != null) { double reward = currentState.points - oldState.points; if (currentState.lost) { reward = 0; } input.add(oldState.boardState); output.add(oldQuality); rewards.add(reward); if (currentState.lost || input.size() == 20) { for(int i = 0; i < rewards.size(); i++) { double discount = 1.4; double discountedReward = 0; for(int j = i; j < rewards.size(); j++) { discountedReward += rewards.get(j) \* Math.pow(discount, j - i); } rewards.set(i, lerp(discountedReward, 
1024)); } ArrayList dataSets = new ArrayList<>(); for(int i = 0; i < input.size(); i++) { INDArray correctOut = output.get(i).putScalar(lastAction.ordinal(), rewards.get(i)); dataSets.add(new DataSet(input.get(i), correctOut)); } Qnetwork.fit(DataSet.merge(dataSets)); input.clear(); output.clear(); rewards.clear(); } epsilon = Math.max(1, epsilon - 10); } oldState = currentState; oldQuality = Qnetwork.output(currentState.boardState); GameAction action; if(random.nextInt(100) < 100-epsilon) { action = GameAction.values()[oldQuality.argMax(1).getInt()]; } else { action = GameAction.values()[new Random().nextInt(GameAction.values().length)]; } lastAction = action; return action; } private static double lerp(double x, int maxVal) { return x/maxVal; } private void ui() { UIServer uiServer = UIServer.getInstance(); StatsStorage statsStorage = new InMemoryStatsStorage(); uiServer.attach(statsStorage); Qnetwork.setListeners(new StatsListener(statsStorage)); } } ```<issue_comment>username_1: > > There is an epsilon for discovery, set at 100% at the start and it decreases by 10% until it reaches 1%. > > > After looking through your code on the linked GitHub repository, I think that the annealing of the `epsilon` parameter is a major issue. As clarified in the above comments, the `act()` method is called once per episode time step to determine the agent's choice of action. Within this method, it seems that `epsilon` is decreased extremely rapidly. The code states that `epsilon = Math.max(1, epsilon - 10)`, which means epsilon is decreased to 1 after 10 time steps. Also as clarified in the comments, `epsilon` is never reset to a larger number. Therefore, it seems that `epsilon` will be reduced from 100 to 1 after 10 time steps after (most likely) a single episode, which in my opinion is much too quick and **will stifle exploration**. As a first guess, I suggest annealing `epsilon` more slowly from 100 to 1 after roughly one million time steps. 
If you want to anneal the parameter linearly, then each of the first million time steps could reduce epsilon by `(starting epsilon - ending epsilon) / annealing time steps`, where `starting epsilon` = 100, `ending epsilon` = 1, and `annealing time steps` = 1000000. Other issues may crop up after making the above change, but I think this is a good starting point. This seems like a very fun project, and it would be fun if you kept us posted about the results! Upvotes: 2 <issue_comment>username_2: Q-learning on its own isn't enough to learn a winning strategy for a game like 2048. 2048 requires predictive thinking for possible outcomes and good positional awareness. Performance of the agent is heavily dependent on the reward function. The approach of giving a reward proportional to the points obtained after every move is naive, since it might sacrifice better positional play for short-term rewards. The way it's posed, it seems like the agent will try to maximize its performance for the last 20 moves or so. That could lead the agent into a positional situation which leads to a loss in a couple of moves. A possibly better strategy would be to give a positive reward when the agent actually reaches the 2048 tile and a negative reward if it loses. Such sparse rewards would add difficulty to training, since they would require sophisticated exploration strategies, which $\epsilon$-greedy certainly isn't. The stochastic nature of the game would pose difficulty to the agent as well. Similar positions might lead to different outcomes, because in some cases the new tile would spawn in a bad position for the continuation, while in other cases it might be beneficial. The suggested approach would be to include a Monte Carlo tree search along with some RL algorithm, which was successfully applied to agents like AlphaZero and AlphaGo. MCTS would sample moves ahead and the agent would get a better representation of how good certain actions are. Upvotes: 2
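The linear annealing schedule suggested in the first answer can be written as a small helper (expressed here as probabilities rather than percentages; the parameter values are just the ones proposed above):

```python
def epsilon_at(step, start=1.0, end=0.01, anneal_steps=1_000_000):
    """Linearly anneal epsilon from `start` to `end` over `anneal_steps` steps,
    then hold it at `end` for the rest of training."""
    frac = min(step / anneal_steps, 1.0)
    return start + frac * (end - start)
```

In the agent's `act()` method, `epsilon_at(total_steps_so_far)` would replace the current `epsilon = Math.max(1, epsilon - 10)` update, so exploration decays over roughly a million time steps instead of ten.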
2020/06/03
248
917
<issue_start>username_0: Should I use minimax or alpha-beta pruning (or both)? Apparently, alpha-beta pruning prunes some parts of the search tree.<issue_comment>username_1: Both algorithms should give the same answer. However, their main difference is that alpha-beta does not explore all paths, like minimax does, but prunes those that are guaranteed not to be an optimal state for the current player, that is max or min. So, alpha-beta is a better implementation of minimax. Here are the time complexities of both algorithms * Minimax: $\mathcal{O}(b^d)$, * Alpha-beta (best-case scenario): $\mathcal{O}(b^{d/2}) = \mathcal{O}(\sqrt{b^d})$ where $b$ is an average branching factor with a search depth of $d$ plies. Upvotes: 4 [selected_answer]<issue_comment>username_2: Minimax is the base algorithm, and alpha beta pruning is an optimisation that you can apply to minimax to make it more efficient. Upvotes: 1
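A compact illustration of both algorithms on a hypothetical game tree encoded as nested lists (the leaf numbers are made up): minimax explores every node, while alpha-beta can cut off remaining siblings once a bound is exceeded, yet both return the same root value.

```python
import math

def minimax(node, maximizing):
    """Plain minimax over a game tree given as nested lists of leaf values."""
    if not isinstance(node, list):
        return node  # leaf: static evaluation
    values = (minimax(child, not maximizing) for child in node)
    return max(values) if maximizing else min(values)

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning: same answer, fewer nodes visited."""
    if not isinstance(node, list):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break          # prune: min player will never allow this branch
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break              # prune: max player already has a better option
    return value

# Classic textbook tree: both searches agree on the root value.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
```

This mirrors the point of the accepted answer: pruning changes the cost of the search, not its result.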
2020/06/03
286
1,137
<issue_start>username_0: Is there any good tutorials about training reinforcement learning agent from raw pixels using PyTorch? I don't understand the official PyTorch tutorial. I want to train the agent on the atari breakout environment. Unfortunately, I failed to train the agent on the RAM version. Now, I am looking for a way to train the agent from raw pixels.
2020/06/04
315
1,332
<issue_start>username_0: I am reading Sutton and Barto's book on reinforcement learning. I thought that reward and return were the same things. However, in Section 5.6 of the book, 3rd line, first paragraph, it is written: > > Whereas in Chapter 2 we averaged rewards, in Monte Carlo methods we average returns. > > > What does it mean? Are rewards and returns different things?<issue_comment>username_1: **Return** refers to the **total discounted reward**, starting from the current timestep. Upvotes: 4 [selected_answer]<issue_comment>username_2: As the accepted answer states, the return at the current timestep is equal to the sum of discounted rewards from all future timesteps until the end of the episode. In Chapter 5 of Sutton and Barto, returns must be used to estimate the state-value and action-value functions because episode lengths are unrestricted and may be *greater than one*. In contrast, Chapter 2 deals with the very special case of multi-armed bandits in which episode lengths are always *equal to one*: The agent begins each episode in a fixed start state, takes an action, receives a reward, and then the episode terminates and the agent begins the next episode at the same start state. Therefore, **a return is equivalent to a reward in Chapter 2 because all episodes have length one**. Upvotes: 2
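The distinction can be shown in a few lines: the rewards are the per-step signals, while the return at each step sums the discounted future rewards (toy reward sequence, $\gamma = 0.9$):

```python
def discounted_return(rewards, gamma=0.9):
    """Return G_t for every timestep: the discounted sum of future rewards."""
    returns, g = [], 0.0
    for r in reversed(rewards):   # work backwards: G_t = r_{t+1} + gamma * G_{t+1}
        g = r + gamma * g
        returns.append(g)
    return returns[::-1]

rewards = [1.0, 0.0, 2.0]  # three per-step rewards in a short episode
```

With a single-step episode (as in the bandit setting of Chapter 2), `discounted_return([r])` is just `[r]`, which is why averaging rewards and averaging returns coincide there.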
2020/06/04
1,433
4,040
<issue_start>username_0: In [equation 3.17 of Sutton and Barto's book](http://incompleteideas.net/book/bookdraft2017nov5.pdf#page=68): $$q\_\*(s, a)=\mathbb{E}[R\_{t+1} + \gamma v\_\*(S\_{t+1}) \mid S\_t = s, A\_t = a]$$ $G\_{t+1}$ has been replaced here with $v\_\*(S\_{t+1})$, but no reason is given for this step. Can someone provide the reasoning behind why $G\_{t+1}$ is equal to $v\_\*(S\_{t+1})$?<issue_comment>username_1: > > Can someone provide the reasoning behind why $G\_{t+1}$ is equal to $v\_\*(S\_{t+1})$? > > > The two things are not usually exactly equal, because $G\_{t+1}$ is a random variable distributed over all possible future returns, whilst $v\_\*(S\_{t+1})$ is a random variable whose distribution is derived from the distribution over all possible values of $S\_{t+1}$. These will be different distributions much of the time, but their *expectations* are equal, provided the conditions of the expectation match. In other words, $$G\_{t+1} \neq v\_\*(S\_{t+1})$$ But $$\mathbb{E}[G\_{t+1}] = \mathbb{E}[v\_\*(S\_{t+1})]$$ . . . when the conditions that apply to the expectations on each side are compatible. The relevant conditions are * Same initial state or state/action at given timestep $t$ (or you could pick any earlier timestep) * Same state progression rules and reward structure (i.e. same MDP) * Same policy **More details** The definition of $v(s)$ can be given as $$v(s) = \mathbb{E}\_\pi[G\_t \mid S\_t = s]$$ If you substitute state $s'$ and index $t+1$ you get $$v(s') = \mathbb{E}\_\pi[G\_{t+1} \mid S\_{t+1} = s']$$ (This is the same equation, true by definition; the substitution just shows you how it fits.) In order to put this into equation 3.17, you need to note that: * It is OK to substitute terms *inside* an expectation if they are equal in separate expectations, and the conditions $c$ and $Y$ apply to both (or are irrelevant to either one or both). 
So if for example $\mathbb{E}\_c[Z] = \mathbb{E}\_c[X \mid Y]$ where $X$ and $Z$ are random variables, and you know $Z$ is independent of $Y$ then you can say $\mathbb{E}\_c[W + 2X \mid Y] = \mathbb{E}\_c[W + 2Z \mid Y]$ even if $X$ and $Z$ are different distributions. * $A\_{t+1} = a'$ does not need to be specified because it is decided by the same $\pi$ in both $q(s,a)$ and $v(s')$, making the conditions on the expectation compatible already. So the condition of following $\pi$ is compatible with $\mathbb{E}\_\pi[G\_{t+1} \mid S\_{t} = s, A\_{t}=a] = \mathbb{E}\_\pi[v\_\*(S\_{t+1}) \mid S\_{t} = s, A\_{t}=a]$ * The expectation over possible $s'$ in $\mathbb{E}\_\pi[v\_\*(S\_{t+1})|S\_t=s, A\_t=a] = \sum p(s'|s,a)v\_\*(s')$ is already implied by conditions on the original expectation that the functions are evaluating the same environment - something that is not usually shown in the notation. Also worth noting, in 3.17 $\pi$ is the optimal policy $\pi^\*$, but actually the equation holds for any fixed policy. Upvotes: 3 [selected_answer]<issue_comment>username_2: Note that for a general policy $\pi$ we have that $q\_{\pi}(s,a) = \mathbb{E}\_{\pi}[G\_t | S\_t = s, A\_t = a]$, where in state $S\_t$ we take action $a$ and thereafter following policy $\pi$. Note that the expectation is taken with respect to the reward transition distribution $\mathbb{P}(R\_{t+1} = r, S\_{t+1} = s' | A\_t = a, S\_t = s)$ which I will denote as $p(s',r,|s,a)$. We can then rewrite the expectation as follows \begin{align} q\_{\pi}(s,a) &= \mathbb{E}\_{\pi}[G\_t | S\_t = s, A\_t = a] \\ & = \mathbb{E}\_{\pi}[R\_{t+1} + \gamma G\_{t+1} | S\_t = s, A\_t = a] \\ & = \sum\_{r,s'}p(s',r|s,a)(r + \gamma \mathbb{E}\_\pi[G\_{t+1} | S\_{t+1} = s']) \\ & = \sum\_{r,s'}p(s',r|s,a)(r + \gamma v\_{\pi}(s')) \; . 
\end{align} The key thing to note is that these two terms, $G\_{t+1}$ and $v\_{\pi}(s')$, are only equal *in expectation*, which is why in the equation you can exchange the terms because we are taking the expectation. Note that I have shown this for a general policy $\pi$ not just the optimal policy. Upvotes: 2
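To make the "equal in expectation, but not in distribution" point concrete, here is a small numerical check on a toy MDP of my own construction: from the next state $s'$ the episode terminates with reward 1 or 3, each with probability 0.5, so $v(s') = 2$ exactly, while any sampled return $G\_{t+1}$ is always 1 or 3 and never 2.

```python
import random

random.seed(0)

def sample_G_next():
    # One sample of G_{t+1}: from s' the episode terminates with reward 1 or 3.
    return random.choice([1.0, 3.0])

v_s_next = 0.5 * 1.0 + 0.5 * 3.0        # v(s') computed exactly: 2.0

n = 100_000
mc_estimate = sum(sample_G_next() for _ in range(n)) / n

# G_{t+1} never equals v(s') on any single episode ...
assert all(sample_G_next() != v_s_next for _ in range(100))
# ... but its Monte-Carlo average matches v(s') closely.
assert abs(mc_estimate - v_s_next) < 0.05
```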
2020/06/05
823
3,594
<issue_start>username_0: We hear this many time for different problems > > Train a model to solve this problem! > > > What do we really mean by training a model?<issue_comment>username_1: **In machine learning, when you train a model, you adjust (or change) the parameters (or weights) of the model so that its performance in solving a certain task increases.** There's little difference between the idea of training a model and the idea of training an animal. In fact, here's the dictionary definition of the verb *to train* > > teach (a person or animal) a particular skill or type of behaviour through practice and instruction over a period of time > > > If you train a model, you also teach a skill or type of behavior through practice and instruction. For example, if you train a model to solve an object classification problem, then you teach the model to classify certain objects according to their properties (which is the skill that the model learns). There are different ways to train a model, depending on the problem you want to solve, the algorithms that you use to train the model, and the available data. If you have a *labeled dataset*, then you train a model with a supervisory signal (the labels), i.e. you explicitly tell the model the output that it is supposed to produce for each input, and, if it does not produce it, then you adjust its parameters so that next time it is more likely to produce the correct output for that input. This is called **supervised learning** (or training). In certain cases, you do not have the correct output that the model is supposed to produce for each input, but you only have a *reward* (or reinforcement) signal. So, your training (or learning) algorithm needs to adjust the parameters of the model only based on the reward signal. This is called **reinforcement learning** (or training). Finally, there's also **unsupervised learning** (or training), where you are given a dataset without labels or rewards, but you want to learn e.g. 
a probability distribution that this data was likely sampled from, or to separate this data into groups. For example, in k-means (a clustering algorithm), you want to split the data into groups so that similar objects belong to the same group and dissimilar objects belong to different groups. Note that k-means is a learning algorithm, so it's not a model, but you could consider the centroids of the clusters the parameters of the model (a clustering model). There are variations or subcategories of these learning paradigms and you can also combine them, so sometimes the difference between them is not so clear. There are also different types of models. There are [parametric (e.g. a linear regression model) and non-parametric (e.g. k-nearest neighbours) models](https://stats.stackexchange.com/q/268638/82135). Upvotes: 2 <issue_comment>username_2: Training a model simply means learning good values for all the weights and the bias from labeled examples. In supervised learning, a machine learning algorithm builds a model by examining many examples and attempting to find a model that minimizes loss; this process is called empirical risk minimization. Loss is the penalty for a bad prediction. That is, loss is a number indicating how bad the model's prediction was on a single example. If the model's prediction is perfect, the loss is zero; otherwise, the loss is greater. The goal of training a model is to find a set of weights and biases that have low loss, on average, across all examples. (A figure accompanying the original answer, omitted here, showed a high-loss model on the left and a low-loss model on the right.) Upvotes: 0
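The "adjust the parameters so that performance increases" idea from the first answer can be sketched with a minimal supervised example (my own illustration: gradient descent on a one-parameter linear model fitted to made-up data).

```python
# Fit y = w * x to data generated with true w = 2, by gradient descent on
# the mean squared loss. "Training" is just repeated parameter adjustment.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0        # the model's single parameter, before training
lr = 0.05      # learning rate

for _ in range(200):
    # Gradient of mean((w*x - y)^2) with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad   # move w in the direction that reduces the loss

assert abs(w - 2.0) < 1e-3   # the trained parameter recovers the target
```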
2020/06/05
399
1,607
<issue_start>username_0: For problems that can be solved algorithmically, we have very good formal literature on which problems can be solved in polynomial or exponential time and which cannot (**P/NP/NP-hard**). But do we know of some problems in the machine learning paradigm for which no model can be trained (with/without infinite computation capacity)?<issue_comment>username_1: You should be aware of at least two points: * P/NP/NP-hard (and all other complexity classes) are fully applicable to machine learning as well, because these concepts relate to the fundamentals of computation ([theory of computation](https://en.wikipedia.org/wiki/Theory_of_computation)), and machine learning is not an exception here. * Useful concepts for the complexity of a learning problem are the [VC dimension](https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_dimension), [PAC learnability](https://en.wikipedia.org/wiki/Probably_approximately_correct_learning), and their related concepts (such as [sample complexity](https://en.wikipedia.org/wiki/Sample_complexity)). Although these concepts are not enough to measure time complexity, they are useful for characterising a learner model's capacity. Upvotes: 2 <issue_comment>username_2: Unsupervised disentanglement learning with arbitrary generative models is impossible without inductive biases [1]. In fact, in general, any kind of learning is impossible without inductive biases. [1]: [Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations](https://arxiv.org/abs/1811.12359) Upvotes: 0
2020/06/06
1,168
3,956
<issue_start>username_0: I often see that the state-action value function is expressed as: $$q\_{\pi}(s,a)=\color{red}{\mathbb{E}\_{\pi}}[R\_{t+1}+\gamma G\_{t+1} | S\_t=s, A\_t = a] = \color{blue}{\mathbb{E}}[R\_{t+1}+\gamma v\_{\pi}(s') |S\_t = s, A\_t =a]$$ Why does expressing the future return at time $t+1$ as a state value function $v\_{\pi}$ change the expectation under the policy into a general expectation?<issue_comment>username_1: Let's first write the state-action value function as $$q\_{\pi}(s,a) = \mathbb{E}\_{p, \pi}[R\_{t+1} + \gamma G\_{t+1} | S\_t = s, A\_t = a]\;,$$ where $R\_{t+1}$ is the random variable that represents the reward gained at time $t+1$, i.e. after we have taken action $A\_t = a$ in state $S\_t = s$, while $G\_{t+1}$ is the random variable that represents the return, the sum of future rewards. This allows us to show that the expectation is taken under the conditional joint distribution $p(s', r \mid s, a)$, which is the environment dynamics, and future actions are taken from our policy distribution $\pi$. As $R\_{t+1}$ depends on $S\_t = s, A\_t = a$ and $p(s', r \mid s, a)$, the only random variable in the expectation that is dependent on our policy $\pi$ is $G\_{t+1}$, because this is the sum of future reward signals and so will depend on future state-action values. Thus, we can rewrite again as $$q\_{\pi}(s,a) = \mathbb{E}\_{p}[R\_{t+1} + \gamma \mathbb{E}\_{\pi}[ G\_{t+1} |S\_{t+1} = s'] | S\_t = s, A\_t = a]\;,$$ where the inner expectation (coupled with the fact that it's inside an expectation over the state and reward distributions) should look familiar to you as the state value function, i.e. $$\mathbb{E}\_{\pi}[ G\_{t+1} |S\_{t+1} = s'] = v\_{\pi}(s')\;.$$ This leads us to get what you have $$q\_{\pi}(s,a) = \mathbb{E}\_{p}[R\_{t+1} + \gamma v\_{\pi}(s') | S\_t = s, A\_t = a]\;,$$ where the only difference is that we have made clear what our expectation is taken with respect to. 
The expectation is taken with respect to the conditional joint distribution $p(s', r \mid s, a)$, and we usually include the $\pi$ subscript to denote that we are also taking the expectation with respect to the policy, but here this does not affect the first term, as we have conditioned on knowing $A\_t = a$, and only applies to the future reward signals. Upvotes: 4 [selected_answer]<issue_comment>username_2: username_1 gives a fantastic answer, and I will provide an intuitive and gentle (but less rigorous) answer for those who are unfamiliar with the relevant statistical concepts. **Next reward $R\_{t+1}$:** The next reward $R\_{t+1}$ is solely dependent on the current state $S\_t$ and action $A\_t$. It is only dependent on the policy because the policy details the probability distribution of actions given a state. Since we assume that the current state and action are given when calculating the expectation $\left(S\_t = s, A\_t = a\right)$, the policy does not give us any new information, and therefore the next reward is independent of the policy. **Return $G\_{t+1}$**: By definition, $v\_{\pi}(s') = \mathbb{E}\_{\pi}[G\_{t+1}|S\_{t+1} = s']$. The value function is unaffected by sampling actions from the policy in the outer expectation $\left(\mathbb{E}\_{\pi}[v\_{\pi}(s')] = \mathbb{E}[v\_{\pi}(s')]\right)$ since the value function is an expectation under the policy, and hence already samples actions from the policy. **Dropping $\pi$ from $\mathbb{E}\_{\pi}$**: The expectation under the current policy samples next states and rewards from the environment and also samples actions from our policy $\pi$. Because the next reward is independent of the policy given the current state and action, and because the value function is unaffected by sampling actions from the policy in the outer expectation, we can simply drop the policy from the outer expectation (the outer expectation will still sample next states and rewards from the environment). Upvotes: 2
2020/06/06
964
3,653
<issue_start>username_0: I recently read the DQN [paper](https://arxiv.org/abs/1312.5602) titled: Playing Atari with Deep Reinforcement Learning. My basic and rough understanding of the paper is as follows: You have two neural networks; one stays frozen for a duration of time steps and is used in the computation of the loss function with the neural network that is updating. The loss function is used to update the neural network using gradient descent. Experience replay is used, which basically creates a buffer of experiences. This buffer of experiences is randomly sampled and these random samples are used to update the non-frozen neural network. My question pertains to the DQN algorithm illustrated in the paper: Algorithm 1, more specifically lines 4 and 9 of this algorithm. My understanding, which is also mentioned early on in the paper, is that the states are actually sequences of the game-play frames. I want to know, since the input is given to a CNN, how would we encode these frames to serve as input to the CNN? I also want to know: since $s\_{1}$ is equal to a set (as can be seen in line 4 of the algorithm), why is $s\_{t+1}$ equal to $s\_{t}$, $a\_{t}$, $x\_{t+1}$?<issue_comment>username_1: I read the DQN paper (Playing Atari with Deep Reinforcement Learning) again. In the pre-processing and model architecture section (Section 4.1), it says that each state input to the CNN is actually a stack of game frames. So, to my understanding, for each time step you stack 4 frames (the current frame and the 3 previous frames) and this stack serves as input to the CNN: the dimensions are side \* side \* 4 (4 because the frames are converted to grey-scale and 4 frames are being used). Upvotes: 1 <issue_comment>username_2: > > I want to know, since the input is given to a CNN, how would we encode these frames to serve as input to the CNN? 
> > > As nbro mentioned in a comment to your answer, this question has very recently been asked and answered [here](https://ai.stackexchange.com/a/21673/37607). > > I also want to know since $s\_1$ is equal to a set, which can be seen in line 4 of the algorithm, then why is $s\_{t+1}$ equal to $s\_t$, $a\_t$, $x\_{t+1}$? > > > The algorithm presented in the original DQN [paper](https://arxiv.org/abs/1312.5602) is relatively simple and written to express the main ideas of their approach (e.g. experience replay, preprocessing histories, gradient descent, etc.); in fact, it isn't even the exact algorithm used in the experiments! For example, the experiments use frame-skipping to reduce computation - this is not mentioned in Algorithm 1 in the paper. With that in mind, setting $s\_{t+1}$ equal to $s\_t, a\_t, x\_{t+1}$ in the algorithm signifies a **general notion** of constructing the next raw state $s\_{t+1}$ from the previous preprocessed state $s\_t$, previous action $a\_t$, and current frame $x\_{t+1}$. For example: * If the action space at the next timestep is constrained by the state, then the state may need additional parameters to encode the action space. * The algorithm needs some indication if a state is terminal, and such an indication may need to be encoded in the state. * If there is frame skipping, then multiple frames will be needed to construct the next state, possibly using the previous state as well. The above examples should display how **the encoding of the state cannot always simply be a stack of raw frames, or even a function of $s\_t$, $a\_t$ and $x\_{t+1}$, and therefore a more general approach is often required**. Upvotes: 3 [selected_answer]
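The frame stacking described above can be sketched as follows (my own minimal illustration using NumPy; the real DQN pipeline also crops and resizes frames, which is omitted here):

```python
from collections import deque
import numpy as np

STACK = 4
frames = deque(maxlen=STACK)          # holds the 4 most recent preprocessed frames

def preprocess(rgb_frame):
    # Convert an (H, W, 3) RGB frame to grey-scale (resizing omitted for brevity).
    return rgb_frame.mean(axis=2).astype(np.float32)

def push_and_stack(rgb_frame):
    frames.append(preprocess(rgb_frame))
    while len(frames) < STACK:        # at episode start, repeat the first frame
        frames.append(frames[0])
    return np.stack(frames, axis=-1)  # shape (H, W, 4): the CNN's input "state"

state = push_and_stack(np.zeros((84, 84, 3), dtype=np.uint8))
assert state.shape == (84, 84, 4)
```

Each new frame pushes the oldest one out of the deque, so consecutive states overlap in 3 of their 4 channels.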
2020/06/06
762
2,896
<issue_start>username_0: In policy gradient algorithms the output is a stochastic policy - a probability for each action. I believe that if I follow the policy (sample an action from the policy) I make use of exploration because each action has a certain probability so I will explore all actions for a given state. Is it beneficial or is it common to use extra exploration strategies, like UCB, Thompson sampling, etc. with such algorithms?
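The exploration mechanism the question describes, sampling an action from the policy's output distribution, can be sketched as follows (my own illustration; the action probabilities are invented):

```python
import random

random.seed(0)

def sample_action(action_probs):
    # Draw one action index from the categorical distribution the policy outputs.
    r, cum = random.random(), 0.0
    for a, p in enumerate(action_probs):
        cum += p
        if r < cum:
            return a
    return len(action_probs) - 1

policy_output = [0.5, 0.3, 0.2]   # hypothetical softmax output for one state
counts = [0, 0, 0]
for _ in range(10_000):
    counts[sample_action(policy_output)] += 1

# Every action with non-zero probability gets visited, without any extra
# exploration scheme such as epsilon-greedy bolted on top.
assert all(c > 0 for c in counts)
```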
2020/06/10
561
1,979
<issue_start>username_0: Why are the state-value and action-value functions sometimes written in small letters and other times in capitals? For instance, why, in the Q-learning algorithm ([page 131 of Barto and Sutton's book](http://incompleteideas.net/book/the-book-2nd.html), but not only), are capitals used, $Q(S, A)$, while in the Bellman equation it is $q(s,a)$?<issue_comment>username_1: Ordinary variables vs Random Variables ====================================== The difference is whether you're talking about an ordinary variable or a random variable. For instance, the q-function (lowercase) is an expectation value (i.e. not a random variable), conditioned on a specific state-action pair: $$ q(s,a)\ =\ \mathbb{E}\_t\left\{ R\_t+\gamma R\_{t+1} + \gamma^2R\_{t+2}+\dots\,\Big|\, S\_t=s, A\_t=a \right\} $$ Then, in some cases, some authors may abuse notation slightly by feeding a random variable into the q-function, e.g. $q(S\_t,a)$, $q(s,A\_t)$ or even $q(S\_t,A\_t)$, thereby *undoing* some or all of the conditioning in the definition of the q-function as an expectation value. Feeding a random variable into a function like the q-function results in an output that is a random variable in its own right. It is for this reason that some authors choose to give the function itself an uppercase letter as well. My advice would be to ask yourself, *is this a random variable?* For the rest, I would interpret upper/lowercase as no more than a hint to the reader. Upvotes: 1 <issue_comment>username_2: In the Sutton and Barto book, $q(s,a)$ is used to denote the *true* expected value of taking action $a$ in state $s$, whereas capital $Q(s,a)$ is used to denote an *estimate* of $q(s,a)$. However, there is likely to be a lot of inconsistency in the literature, as each author has their own preference on how to denote things. I would encourage you to consider whether the value you are reading denotes an estimate or the true value. Upvotes: 2
2020/06/11
2,245
8,099
<issue_start>username_0: During my readings, I have seen many authors using the two terms interchangeably, i.e. as if they refer to the same thing. However, we all know about Google's first use of "knowledge graph" to refer to their *new* way of making use of their knowledge base. Afterward, other companies claimed to use knowledge graphs. What are the technical differences between the two? Concrete examples would be very useful for understanding the nuances.<issue_comment>username_1: Based on [the related Wikipedia](https://en.wikipedia.org/wiki/Knowledge_base), a knowledge base (KB) is: > > a technology used to store complex structured and unstructured information used by a computer system. The initial use of the term was in connection with expert systems which were the first knowledge-based systems. > > > As there are different representation models for a KB, we can find different terminology in different domains. For example, in some AI articles, it's called an **ontology**. A [knowledge graph](https://en.wikipedia.org/wiki/Knowledge_Graph) (KG) is another model for realising a KB, introduced by Google for its search engine (as you have mentioned). Hence, a KG is a specialisation of a KB. You can find more information in the paper [Knowledge Graphs](https://arxiv.org/pdf/2003.02320.pdf), such as more history about the KG or a formal definition of it: > > knowledge graph is a graph of data intended to accumulate and convey knowledge of the real world, whose nodes represent entities of interest and whose edges represent relations between these entities. > > > Moreover, you can find some articles about contextual KGs (CKG) in the papers [Learning Contextual Embeddings for Knowledge Graph Completion](https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1026&context=pacis2017) and [KG$^2$: Learning to Reason Science Exam Questions with Contextual Knowledge Graph Embeddings](https://arxiv.org/pdf/1805.12393.pdf). 
Upvotes: 1 <issue_comment>username_2: ### What is a knowledge graph? Appendix [**A.3 "Knowledge Graphs": 2012 Onwards**](https://arxiv.org/pdf/2003.02320.pdf#page=111) of the survey [Knowledge Graphs](https://arxiv.org/pdf/2003.02320.pdf) (which is probably the most extensive survey on KGs) states that **knowledge graphs have been defined in different ways** in recent years. Each of these definitions raises questions about the relationship between KGs and other related concepts, like graph databases, knowledge bases, and ontologies. One definition of a KG is > > a graph where nodes represent entities, and edges represent relationships between those entities. Often a directed edge labelled graph is assumed (or analogously, a set of binary relations, or a set of triples) > > > The question here is: what's the difference between KGs and graph databases (like Neo4j)? Graph databases have been used to build KGs, but is there any actual difference between these 2 terms? Another definition of a KG is > > a knowledge graph is a graph-structured knowledge base > > > So, according to this definition, a KG would be a type of knowledge base (KB). ### What is a knowledge base? In the same appendix, the authors write > > The phrase "knowledge base" was popularised in the 70's (possibly earlier) in the context of **rule-based expert systems** [[72]](https://stacks.stanford.edu/file/druid:pj337tr4694/pj337tr4694.pdf), and later were used in the context of ontologies and other logical formalisms [[68]](http://eolo.cps.unizar.es//Docencia/MasterUPV/Articulos/An%20Overview%20of%20the%20KL-ONE%20Knowledge%20Representation%20System-Brachman1985.PDF) > > > They conclude that a KB has also been defined in ambiguous ways in the past. Norvig and Russell, in chapter 7 (p. 235) of their AIMA book (3rd edition), define a KB as a **set of sentences/facts**, for example, expressed in propositional logic. You then use **inference** techniques to derive new knowledge from this knowledge base. 
The programming language [PROLOG](https://www.doc.gold.ac.uk/%7Emas02gw/prolog_tutorial/prologpages/) is based on this definition of a KB. ### What is the difference between a KG and KB? So, there is not a single answer to your question because knowledge graphs (KGs) and knowledge bases (KBs) have been **defined in multiple (often ambiguous) ways** in the past. Some people say that KGs are different from KBs, while other people use the term KG as a synonym for KB or define it as a type of KB. ### One possible answer However, if we define a KG as a graph with nodes that represent entities (like city and country) and edges that represent the relations between those entities (which is a common definition of a KG), we can view a **KG as a visual representation of a KB**, defined as a set of sentences/facts (for example, expressed in propositional logic). To see why this is the case, consider the following simple KG. [![enter image description here](https://i.stack.imgur.com/MMgXi.png)](https://i.stack.imgur.com/MMgXi.png) Let's denote this KG by $K = \{E, R\}$, where $E = \{ \text{Santiago}, \text{Chile}, \text{Perú}\} = \{ S, C, P\}$ is the set of entities (with a property) and $R = \{ \text{capital}, \text{borders}\} = \{ c, b \}$ is the set of relations. In $K$, we have the following relations * Santiago is the capital of Chile: $(S, c, C)$ * Chile borders Perú: $(C, b, P)$ * Perú borders Chile: $(P, b, C)$ which can be put into a set of facts $F = \{(S, c, C), (C, b, P), (P, b, C) \}$, which is the KB associated with this specific KG. In the context of KGs, we also have a task/problem similar to the **inference** (or **knowledge reasoning**) one in the context of KBs, which is known **graph completion**, which can be divided into other subtasks, like **entity prediction**, **relation prediction** and **triple classification**, which use **knowledge graph embeddings**. 
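The toy KG above, viewed as the knowledge base $F$, can be represented directly as a set of triples and queried (a minimal sketch of my own):

```python
# The facts F from the example KG, as (subject, relation, object) triples.
facts = {
    ("Santiago", "capital", "Chile"),
    ("Chile", "borders", "Perú"),
    ("Perú", "borders", "Chile"),
}

def objects(subject, relation):
    """Query the KB: which entities does `subject` relate to via `relation`?"""
    return {o for s, r, o in facts if s == subject and r == relation}

assert objects("Santiago", "capital") == {"Chile"}
assert objects("Chile", "borders") == {"Perú"}
```

The same set of triples is at once a KB (a set of facts) and a KG (nodes are the entities, labelled edges are the relations), which is exactly the correspondence the answer describes.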
### Further reading You should really read [**Appendix A**](https://arxiv.org/pdf/2003.02320.pdf#page=108) and successive appendices of [the mentioned survey](https://arxiv.org/pdf/2003.02320.pdf#page=111) to get more information about the issues concerning the history and definition of knowledge graphs and knowledge bases, and other related concepts, like schemas. You can also read chapter 7 (p. 235) of the AIMA book, 3rd edition to know more about knowledge bases. Upvotes: 2 <issue_comment>username_3: In [a book](https://rads.stackoverflow.com/amzn/click/com/1680838369) coauthored by <NAME>, one of the creators of the Google Knowledge Vault, it reads that: > > Over the last decade, large-scale knowledge bases, also known as knowledge graphs, have been automatically constructed from web contents and text sources, and have become a key asset for search engines. > > > Then I thought knowledge graphs are just large-scale knowledge bases, and this statement can be proved by this snippet in [this article](https://www.7x7.com/googles-emily-moxley-on-the-knowledge-graph-and-women-in-tech-1781625222.html) on <NAME>, the product manager of the 2012 knowledge graph project: > > Then, a year ago, when the search giant bought a company called Metaweb and its collaborative knowledge base called Freebase, made up of metadata collected from multiple sources, Moxley took on the responsibility to help integrate Freebase into Google’s search UI. > > > “At Metaweb, they were collecting factual information about real world things, like baseball teams and tourist destinations and celebrities. Once they came to Google, it was an amazing match, because we have all this data, including Maps, Business Photos, Google Books – gigantic data sets that let us identify real world things.” > > > Fast forward to the present, and from the **12 million entities** Metaweb brought with it to Google, the Knowledge Graph has ballooned to include **500 million items and 3.5 billion facts**. 
As for the related search results that are algorithmically displayed, Google recognizes patterns in search data no competitor can see due to its massive scale. > > > Upvotes: 0
2020/06/11
442
1,997
<issue_start>username_0: When I was learning about neural networks, I saw that a complex neural network can understand the MNIST dataset and a simple convolutional network can also understand the same. So I would like to know if we can achieve a CNN's functionality with just a simple neural network, without the convolution layer, and, if we can, how to convert a CNN into an ANN.<issue_comment>username_1: The convolutional aspect of a CNN comes purely from the connections between layers. Instead of a fully-connected network, which can be difficult to train and tends to overfit more, the convolutional network utilizes hierarchical patterns in the data to limit the number of connections - a local edge detection feature in an image analysis network, for example, only needs input from a small number of local pixels, not the entire image. But in principle, you could assign weights to a fully-connected network to perfectly mimic a convolutional one - you just set the weights of the unneeded connections to zero. Because a general ANN has all the connections present in a CNN plus more, it can do anything a CNN can do plus more, although the training can be more difficult. Upvotes: 1 <issue_comment>username_2: It can be argued that a CNN will outperform a fully connected network if they have the same structure (number of neurons). Normal neural networks can probably learn to detect things like CNNs do, but the task would be a lot more computationally expensive. In a CNN, all neurons in a feature map share the same parameters, so if the CNN learns to recognize a pattern in one location, it can detect the pattern in any other location. Furthermore, CNNs take into account the fact that pixels that are closer in proximity to each other are more heavily related than pixels that are further apart; this information is lost in a normal neural network. Read more [here](https://towardsdatascience.com/cnn-vs-fully-connected-network-for-image-processing-8c5b95d4e42f). Upvotes: 0
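The weight-sharing argument above can be quantified with a back-of-the-envelope comparison (my own numbers, chosen for MNIST-sized 28x28 inputs): a 3x3 convolution producing 8 feature maps versus a dense layer producing an output of the same size.

```python
# Convolutional layer: 8 filters of size 3x3 over 1 input channel, plus 8 biases.
conv_params = 8 * (3 * 3 * 1) + 8
assert conv_params == 80

# Fully connected layer producing the same 28*28*8 output from a 28*28 input,
# with one weight per (input pixel, output unit) pair plus one bias per output.
dense_params = (28 * 28) * (28 * 28 * 8) + 28 * 28 * 8
assert dense_params == 4_923_520

# The dense layer needs tens of thousands of times more parameters, because it
# cannot share weights across spatial locations the way the conv layer does.
assert dense_params // conv_params > 60_000
```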
2020/06/12
1,114
4,049
<issue_start>username_0: I have two questions 1. When we use our network to approximate our Q values, is the Q target a single value? 2. During backpropagation, when the weights are updated, does it automatically update the Q values, or shouldn't the state be passed into the network again to update them?<issue_comment>username_1: > > When we use our network to approximate our Q values, is the Q target a single value? > > > Yes, the target Q value is a single value if you are just updating a single training example. The loss function of a vanilla DQN for a single experience tuple $(s\_t,a\_t,r\_t,s\_{t+1})$ is calculated as $$L(\theta) = [r\_t + \gamma \max\_{a\_{t+1}} Q(s\_{t+1},a\_{t+1};\theta) - Q(s\_t,a\_t;\theta)]^2$$ where $r\_t + \gamma \max\_{a\_{t+1}} Q(s\_{t+1},a\_{t+1};\theta)$ is the target Q value. However, when using mini-batch gradient descent, you would have to compute multiple target Q values, one for each example in the batch. > > During backpropagation, when the weights are updated, does it automatically update the Q values, or shouldn't the state be passed into the network again to update them? > > > During backpropagation of the loss function, the weights $\theta$ are automatically updated. You do not need to pass in the state again, because in the first place you would have computed $Q(s\_t,a\_t;\theta)$ by passing the state as input to the neural network. That is how backpropagation works for deep Q networks. Training for the DQN is as follows: 1. Collect experience tuples $(s\_t,a\_t,r\_t,s\_{t+1})$ and store them in a replay buffer. 2. Sample a mini-batch of experiences from the replay buffer. 3. From this sampled batch of experiences, compute $Q(s\_{t+1},a\_{t+1};\theta)$ by passing $s\_{t+1}$ into the network and take the Q value with the maximum value. 4. Compute $Q(s\_t,a\_t;\theta)$ by passing $s\_t$ into the network. 5. Compute the loss for this experience and propagate the loss back through the network, hence updating the weights. 
Also, since the weights have changed after backpropagation, the Q values for the same state would also be updated if you pass the same state into the network again. [Check out this paper as it explains how Deep Q-Networks work](https://www.cs.toronto.edu/%7Evmnih/docs/dqn.pdf). Upvotes: 2 [selected_answer]<issue_comment>username_2: What the network represents can be a little confusing - in tabular Q-learning you have a $Q$ function that you pass a state $s$ and action $a$ into and receive a scalar value. In the [Human Level Control](https://www.nature.com/articles/nature14236) paper where the DQN gained its popularity, the network is a little different to the tabular function. You pass your current state into the network and it outputs values for every action in your action space. You then choose the action which has the highest value. The way we then train it is as follows - first we get our scalar value which corresponds to the target $r + \gamma \max\_{a'} Q(s',a')$; I will denote this as $y$ for short. As our network outputs a vector of dimension $|\mathcal{A}|$ where $\mathcal{A}$ is our action space, we get our prediction for the current state which is a vector of the dimension I just mentioned. We will now assume our action space has two possible actions and say that for state $s$ our network outputs the vector $[\hat{y}\_1,\hat{y}\_2]$. Assume that when we calculated $y$ earlier we took the first action, so we only want to update our prediction of $\hat{y}\_1$. To do this, if we were working in e.g. PyTorch I would put into the loss function, which is mean squared error, > > input = $[\hat{y}\_1,\hat{y}\_2]$, target = $[y, \hat{y}\_2]$ > > > so essentially all we change is the position in the vector corresponding to the action we took with the Q-target we observed. The weights will thus be updated accordingly to move more in the direction of producing this output for state $s$. 
So to explicitly answer your first question, the Q-target is a scalar but the target we pass into the network is a vector. Upvotes: 0
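The "replace only the taken action's entry" trick described in the second answer can be written out explicitly (NumPy sketch; a framework like PyTorch would do the equivalent with a gather/scatter on the batch):

```python
import numpy as np

def masked_td_target(pred_q, action, y):
    """Build the regression target for the MSE loss: copy the prediction
    and overwrite only the entry of the action actually taken, so the
    squared error (and hence the gradient) is nonzero only there."""
    target = pred_q.copy()
    target[action] = y
    return target

pred = np.array([0.3, 0.7])     # network output [y_hat_1, y_hat_2] for state s
target = masked_td_target(pred, action=0, y=1.5)
# target == [1.5, 0.7]; squared error is nonzero only for action 0
```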
2020/06/12
1,396
4,885
<issue_start>username_0: I was trying to solve an XOR problem, and the dataset seems like the one in the image. [![dataset](https://i.stack.imgur.com/qVEHH.png)](https://i.stack.imgur.com/qVEHH.png) I plotted the tree and got this result: [![enter image description here](https://i.stack.imgur.com/iFopL.png)](https://i.stack.imgur.com/iFopL.png) As I understand, the tree should have depth 2 and four leaves. The first comparison is annoying, because it is close to the right x border (0.887). I've tried other parameterizations, but the same result persists. I used the code below: ``` from sklearn.tree import DecisionTreeClassifier clf = DecisionTreeClassifier(criterion='gini') clf = clf.fit(X, y) fn=['V1','V2'] fig, axes = plt.subplots(nrows = 1,ncols = 1,figsize = (3,3), dpi=300) tree.plot_tree(clf, feature_names = fn, class_names=['1', '2'], filled = True); ``` I would be grateful if anyone can help me to clarify this issue.<issue_comment>username_1: I can reproduce this problem for an even more easily separable dataset: [![enter image description here](https://i.stack.imgur.com/U963rm.png)](https://i.stack.imgur.com/U963rm.png) The ideal tree for it should be as follows: [![enter image description here](https://i.stack.imgur.com/KG0vTm.png)](https://i.stack.imgur.com/KG0vTm.png) However, when I run `DecisionTreeClassifier` with the maximal depth = 2 in scikit-learn many times, it splits the dataset randomly and never gets it right. This is an example of 4 different runs: [![enter image description here](https://i.stack.imgur.com/ANA1C.png)](https://i.stack.imgur.com/ANA1C.png) The problem is that scikit-learn has only two measures of the quality of a split: `gini`, and `entropy`. Both of them estimate mutual information between the target and **only one** predictor. However, in XOR problem, mutual information of each predictor with the target is zero. 
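The zero-gain claim can be verified by brute force: for the four XOR points, every possible single-feature threshold split leaves the weighted Gini impurity unchanged (plain Python, illustrative):

```python
def gini(labels):
    """Gini impurity of a list of binary labels."""
    n = len(labels)
    p1 = sum(labels) / n
    return 1.0 - p1 ** 2 - (1.0 - p1) ** 2

# The four XOR points: label = x1 XOR x2.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [a ^ b for a, b in X]

parent = gini(y)                      # 0.5: perfectly mixed
for feature in (0, 1):
    threshold = 0.5                   # the only distinct split per binary feature
    left  = [yi for xi, yi in zip(X, y) if xi[feature] <= threshold]
    right = [yi for xi, yi in zip(X, y) if xi[feature] > threshold]
    child = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
    gain = parent - child
    assert abs(gain) < 1e-12          # no single-feature split helps
```

Since every candidate split scores identically (zero gain), the greedy splitter's choice comes down to tie-breaking, which is why the fitted trees vary from run to run.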
You can read more about it here: [link](https://stats.stackexchange.com/questions/277710/how-to-be-absolutely-sure-that-features-do-have-predictive-power-to-predict-the) from which you can see that this problem exists not only for XOR but for any task where interaction between features is important. In order to solve it, the tree should be built based not on the Gini impurity or the information gain, but on measures that estimate how the target depends on multiple features, e.g. [multivariate mutual information](https://en.wikipedia.org/wiki/Multivariate_mutual_information), [distance correlation](https://en.wikipedia.org/wiki/Distance_correlation), etc., which might solve simple problems like XOR but might fail in the case of real tasks. It is easy to find simple cases when they fail (just try them for a regression for simple non-linear functions of a few variables). There is no such measure that would estimate the dependence of a target on multiple interacting predictors very well and would work for all problems. EDIT to answer Asher's comment: I did several runs for `max_depth=3`. It is better than for `max_depth=2` but still misses the correct classification from time to time. Taking `max_depth=4` almost always gets XOR correctly with occasional misses. Below are pictures of some runs for `max_depth=3` and `max_depth=4`. [![enter image description here](https://i.stack.imgur.com/ZUHRU.png)](https://i.stack.imgur.com/ZUHRU.png) [![enter image description here](https://i.stack.imgur.com/gJGnh.png)](https://i.stack.imgur.com/gJGnh.png) However, the trees for `max_depth=3` and `max_depth=4` become ugly. They are ugly not only because they are bigger than the ideal tree shown above, but also because they totally obscure the XOR function. For example, can you decipher an XOR from this tree? 
[![enter image description here](https://i.stack.imgur.com/OdgTa.png)](https://i.stack.imgur.com/OdgTa.png) It is probably possible with some pruning technique, but it is still extra work. Upvotes: 4 [selected_answer]<issue_comment>username_2: The algorithm fails because it is greedy. This means that it takes the first split decision immediately, without taking into account what will happen in later steps. An alternative would be given by the Viterbi algorithm, which would select the best sequence of splits by backtracking over the best final cumulative information gain. In the example given, two different split sequences would achieve maximum separation: first split along $V\_1$ or first split along $V\_2$. For other configurations of the data points, the order may be relevant, so all alternatives should be evaluated. Some resources about non-greedy algorithms for decision trees follow. <https://proceedings.neurips.cc/paper/2015/file/1579779b98ce9edb98dd85606f2c119d-Paper.pdf> <https://medium.com/mlearning-ai/optimal-decision-trees-dbd16dfca427> <https://faculty.ucmerced.edu/mcarreira-perpinan/papers/ijcnn21a-slides.pdf> Upvotes: 2
2020/06/13
3,991
7,962
<issue_start>username_0: In per-decision importance sampling given in [Sutton & Barto's book](http://incompleteideas.net/book/RLbook2020.pdf#page=136): > > Eq 5.12 $\rho\_{t:T-1}R\_{t+k} = \frac{\pi(A\_{t}|S\_{t})}{b(A\_{t}|S\_{t})}\frac{\pi(A\_{t+1}|S\_{t+1})}{b(A\_{t+1}|S\_{t+1})}\frac{\pi(A\_{t+2}|S\_{t+2})}{b(A\_{t+2}|S\_{t+2})}......\frac{\pi(A\_{T-1}|S\_{T-1})}{b(A\_{T-1}|S\_{T-1})}R\_{t+k}$ > > > Eq 5.13 $\mathbb{E}\left[\frac{\pi(A\_{k}|S\_{k})}{b(A\_{k}|S\_{k})}\right] = \displaystyle\sum\_a b(a|S\_k)\frac{\pi(a|S\_k)}{b(a|S\_k)} = \displaystyle\sum\_a\pi(a|S\_k) = 1$ > > > Eq.5.14 $\mathbb{E}[\rho\_{t:T-1}R\_{t+k}] = \mathbb{E}[\rho\_{t:t+k-1}R\_{t+k}]$ > > > As the full derivation is not given, how do we arrive at Eq 5.14 from 5.12? ### From what I understand : 1) $R\_{t+k}$ is only dependent on the action taken at $t+k-1$ given the state at that time i.e. only dependent on $\frac{\pi(A\_{t+k-1}|S\_{t+k-1})}{b(A\_{t+k-1}|S\_{t+k-1})}$ 2) $\frac{\pi(A\_{k}|S\_{k})}{b(A\_{k}|S\_{k})}$ is independent of $\frac{\pi(A\_{k+1}|S\_{k+1})}{b(A\_{k+1}|S\_{k+1})}$ , so $\mathbb{E}\left[\frac{\pi(A\_{k}|S\_{k})}{b(A\_{k}|S\_{k})}\frac{\pi(A\_{k+1}|S\_{k+1})}{b(A\_{k+1}|S\_{k+1})}\right] = \mathbb{E}\left[\frac{\pi(A\_{k}|S\_{k})}{b(A\_{k}|S\_{k})}\right]\mathbb{E}\left[\frac{\pi(A\_{k+1}|S\_{k+1})}{b(A\_{k+1}|S\_{k+1})}\right], \forall \, k\in [t,T-2]$ Hence, $\mathbb{E}[\rho\_{t:T-1}R\_{t+k}]= \mathbb{E}\left[\frac{\pi(A\_{t}|S\_{t})}{b(A\_{t}|S\_{t})}\frac{\pi(A\_{t+1}|S\_{t+1})}{b(A\_{t+1}|S\_{t+1})}\frac{\pi(A\_{t+2}|S\_{t+2})}{b(A\_{t+2}|S\_{t+2})}......\frac{\pi(A\_{T-1}|S\_{T-1})}{b(A\_{T-1}|S\_{T-1})}R\_{t+k}\right] \\=
\mathbb{E}\left[\frac{\pi(A\_{t}|S\_{t})}{b(A\_{t}|S\_{t})}\frac{\pi(A\_{t+1}|S\_{t+1})}{b(A\_{t+1}|S\_{t+1})}\frac{\pi(A\_{t+2}|S\_{t+2})}{b(A\_{t+2}|S\_{t+2})}....\frac{\pi(A\_{t+k-2}|S\_{t+k-2})}{b(A\_{t+k-2}|S\_{t+k-2})}\frac{\pi(A\_{t+k}|S\_{t+k})}{b(A\_{t+k}|S\_{t+k})}......\frac{\pi(A\_{T-1}|S\_{T-1})}{b(A\_{T-1}|S\_{T-1})}\right]\mathbb{E}\left[\frac{\pi(A\_{t+k-1}|S\_{t+k-1})}{b(A\_{t+k-1}|S\_{t+k-1})}R\_{t+k}\right] \\= \mathbb{E}\left[\frac{\pi(A\_{t}|S\_{t})}{b(A\_{t}|S\_{t})}\right]\mathbb{E}\left[\frac{\pi(A\_{t+1}|S\_{t+1})}{b(A\_{t+1}|S\_{t+1})}\right]\mathbb{E}\left[\frac{\pi(A\_{t+2}|S\_{t+2})}{b(A\_{t+2}|S\_{t+2})}\right]....\mathbb{E}\left[\frac{\pi(A\_{t+k-2}|S\_{t+k-2})}{b(A\_{t+k-2}|S\_{t+k-2})}\right]\mathbb{E}\left[\frac{\pi(A\_{t+k}|S\_{t+k})}{b(A\_{t+k}|S\_{t+k})}\right]......\mathbb{E}\left[\frac{\pi(A\_{T-1}|S\_{T-1})}{b(A\_{T-1}|S\_{T-1})}\right]\mathbb{E}\left[\frac{\pi(A\_{t+k-1}|S\_{t+k-1})}{b(A\_{t+k-1}|S\_{t+k-1})}R\_{t+k}\right] \\= \mathbb{E}[\frac{\pi\_{t+k-1}}{b\_{t+k-1}}R\_{t+k}]\\=\mathbb{E}[\rho\_{t+k-1}R\_{t+k}]$ ### which is not equal to eq 5.14. What's the mistake in the above calculations? Are 1 and 2 correct?<issue_comment>username_1: As mentioned in the comments, your assumption about independence is wrong. Here's why. To prove independence we need to show the following holds: $$P(X=x, Y=y) = P(X=x)P(Y=y)$$ in the case of RL this becomes: $$P(X=a, Y=a') = P(X=a)P(Y=a')$$ The left hand side has the value: $$P(X=a, Y=a') = b(A\_t = a| S\_t = s) p(s'|a,s) b(A\_{t+1} = a'| S\_{t+1} = s')$$ while the right hand side has the value: $$P(X=a)P(Y=a') = b(A\_t = a| S\_t = s)b(A\_{t+1} = a'| S\_{t+1} = s')$$ And hence they are not independent. Now let us look at why the following expression holds: > > Eq.5.14: $\mathbb{E}[\rho\_{t:T-1}R\_{t+k}] = \mathbb{E}[\rho\_{t:t+k-1}R\_{t+k}]$ > > > I will not derive the exact expressions, but I hope you can follow the reasoning I provide. 
By the rules of probability we know that the sum of a joint probability distribution is equal to 1, i.e.: $$\sum\_{X\_1..X\_n} P(X\_1=a\_1, X\_2=a\_2,...X\_n = a\_n) = 1$$ I have already shown above that the steps of the trajectory are not independent. So $R\_{t+k}$ will depend on the trajectory $S\_{t:t+k-1}$ where $S\_{t:t+k-1}$ is a particular trajectory. At the end of this trajectory we get a reward $R\_{t+k}$ and thus $R\_{t+k}$ is exclusively a function of $S\_{t:t+k-1}$ i.e. $R\_{t+k} = f(S\_{t:t+k-1})$. The trajectory after this, $S\_{t+k:T-1}$, is irrelevant since it will always sum up to 1. i.e. once you have reached a particular state at time step $t+k-1$ you are now conditioning based on that, $P(S\_{t+k:T-1}|S\_{t:t+k-1})$, and taking the expected value over all trajectories possible from thereon i.e. $\sum\_{S\_{t+k:T-1}} P(S\_{t+k:T-1}|S\_{t:t+k-1})$ which is 1 by probability rules. Thus, what you are really doing is: $$P(S\_{t:t+k-1})R\_{t+k}(\sum\_{S\_{t+k:T-1}} P(S\_{t+k:T-1}|S\_{t:t+k-1}))$$ and hence the remaining trajectory has no contribution. Another way of thinking about this is that you are taking trajectories up to time step $t+k-1$ weighted by the rewards $R\_{t+k}$, and hence they cannot sum up to 1. The rest of the trajectory after $t+k-1$ will sum up to 1. I hope this qualitative description suffices. You can do the maths, but you must be careful with the notations and the assumptions you make. Also, all the equations are correct; I hope you can indirectly see it from my reasoning. Upvotes: 2 [selected_answer]<issue_comment>username_2: **First Part** We can reduce variance in off-policy importance sampling, even in the absence of discounting ($\gamma = 1$). 
Notice that the off-policy estimators are made up of terms like $$\rho\_{t:T-1}G\_t = \rho\_{t:T-1} (R\_{t+1} + \gamma R\_{t+2} + \dots+ \gamma^{T-t-1}R\_{T})$$ and consider the second term, imagining $\gamma$=$1$: $$\rho\_{t:T-1}R\_{t+2} = \frac{\pi(A\_t|S\_t) \pi(A\_{t+1}|S\_{t+1})......\pi(A\_{T-1}|S\_{T-1})}{b(A\_t|S\_t) b(A\_{t+1}|S\_{t+1})...... b(A\_{T-1}|S\_{T-1})} R\_{t+2}$$ In the above equation, the terms $\pi(A\_t|S\_t)$, $\pi(A\_{t+1}|S\_{t+1})$ and $R\_{t+2}$ are correlated, while all the other terms are independent of each other. Notice the very important property of expectation: $E[ab] = E[a] E[b]$ if $a$, $b$ are independent random variables. Now: $$ E[\frac{\pi(A\_t|S\_t) \pi(A\_{t+1}|S\_{t+1}).....\pi(A\_{T-1}|S\_{T-1})}{b(A\_t|S\_t) b(A\_{t+1}|S\_{t+1}).....b(A\_{T-1}|S\_{T-1})} R\_{t+2}]$$ $$ = E[\frac{\pi(A\_t|S\_t) \pi(A\_{t+1}|S\_{t+1})}{b(A\_t|S\_t) b(A\_{t+1}|S\_{t+1})} R\_{t+2}] E[\frac{\pi(A\_{t+2}|S\_{t+2})}{b(A\_{t+2}|S\_{t+2})}] ..... E[\frac{\pi(A\_{T-1}|S\_{T-1})}{b(A\_{T-1}|S\_{T-1})}]$$ $$ = E[\frac{\pi(A\_t|S\_t) \pi(A\_{t+1}|S\_{t+1})}{b(A\_t|S\_t) b(A\_{t+1}|S\_{t+1})} R\_{t+2}] \sum\_a b(a|s\_{t+2}) \frac{\pi(a|s\_{t+2})}{b(a|s\_{t+2})}.....\sum\_a b(a|s\_{T-1}) \frac{\pi(a|s\_{T-1})}{b(a|s\_{T-1})} $$ $$ = E[\frac{\pi(A\_t|S\_t) \pi(A\_{t+1}|S\_{t+1})}{b(A\_t|S\_t) b(A\_{t+1}|S\_{t+1})} R\_{t+2}] \sum\_a \pi(a|s\_{t+2}).....\sum\_a \pi(a|s\_{T-1})$$ $$ = E[\frac{\pi(A\_t|S\_t) \pi(A\_{t+1}|S\_{t+1})}{b(A\_t|S\_t) b(A\_{t+1}|S\_{t+1})} R\_{t+2}] 1 \* 1 $$ $$ = E[\frac{\pi(A\_t|S\_t) \pi(A\_{t+1}|S\_{t+1})}{b(A\_t|S\_t) b(A\_{t+1}|S\_{t+1})} R\_{t+2}] $$ therefore $$ E[\rho\_{t:T-1}R\_{t+2}] = E[\rho\_{t:t+1} R\_{t+2}]$$ If we repeat this analysis for the $k$th term, we will get: $$E[\rho\_{t:T-1}R\_{t+k}] = E[\rho\_{t:t+k-1} R\_{t+k}]$$ It follows that the expectation of our original term can be written: $$E[\rho\_{t:T-1}G\_{t}] = E[\tilde{G\_{t}}]$$ where $$\tilde{G}\_t \doteq \rho\_{t:t}R\_{t+1} + \gamma \rho\_{t:t+1}R\_{t+2} + 
\gamma^{2} \rho\_{t:t+2}R\_{t+3} + ...... + \gamma^{T-t-1} \rho\_{t:T-1}R\_{T}$$ We call this idea per-decision importance sampling. It follows immediately that there is an alternative importance sampling estimate, with the same unbiased expectation as the ordinary importance sampling estimate: $$V(s) \doteq \frac{\sum\_{t\in\mathcal{T}(s)} \tilde{G}\_t}{|\mathcal{T}(s)|}$$ which we might expect to sometimes be of lower variance. **Second Part** The reward $R\_{k+1}$ depends on the previous $\pi(a\_1|s\_1)$ up to $\pi(a\_{k-1}|s\_{k-1})$. So, you can't separate them and treat them as independent variables as you did in the aforementioned example. Upvotes: 0
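Eq 5.13 - that the expected importance ratio under the behaviour policy is 1 - can be checked numerically by Monte Carlo sampling. In this sketch the two policies over three actions are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

b  = np.array([0.5, 0.3, 0.2])   # behaviour policy over 3 actions (made up)
pi = np.array([0.1, 0.6, 0.3])   # target policy (made up)

# Sample actions from b and average the ratio pi(A)/b(A).
actions = rng.choice(3, size=200_000, p=b)
ratios = pi[actions] / b[actions]

# E_b[pi(A)/b(A)] = sum_a b(a) * pi(a)/b(a) = sum_a pi(a) = 1
assert abs(ratios.mean() - 1.0) < 0.02
```

This is exactly why the trailing ratios after time $t+k-1$ can be dropped in Eq 5.14: each contributes a factor whose expectation is 1.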
2020/06/13
1,577
4,656
<issue_start>username_0: We assume infinite horizon and discount factor $\gamma = 1$. At each step, after the agent takes an action and gets its reward, there is a probability $\alpha = 0.2$, that agent will die. The assumed maze looks like this [![enter image description here](https://i.stack.imgur.com/vTe5M.png)](https://i.stack.imgur.com/vTe5M.png) Possible actions are go left, right, up, down or stay in a square. The reward has a value 1 for any action done in the square (1,1) and zero for actions done in all the other squares. With this in mind, what is the value of a square (1,1)? The correct answer is supposed to be 5, and is calculated as $1/(1\cdot 0.2) = 5$. But why is that? I didn't manage to find any explanation on the net, so I am asking here.<issue_comment>username_1: The value of a state depends on the policy that you use, so I'll make the assumption here that you're talking about value **using the optimal policy**. *According to the optimal policy, the agent would choose to stay in the square (1,1) every time*, but since it has a 0.8 probability of actually staying (and 0.2 probability of dying), we can compute the value of the agent using the Bellman equation as: $$ V(1,1) = 1 + 0.8 V(1,1) + 0.2 V(\text{death state}) \\ \implies V(1,1) = 1 + 0.8 V(1,1) \\ \implies V(1,1) = \frac{1}{1 - 0.8} \\ \implies V(1,1) = 5 $$ There are other ways of deriving the same number (value function has multiple definitions) but they are equivalent. Upvotes: 2 <issue_comment>username_2: I will fill in some details in username_1's answer for people who are interested. > > With this in mind, what is the value of a square (1,1)? > > > First of all, the value function is dependent on a policy. The supposed correct answer you provided is the value of $(1, 1)$ under the *optimal* policy, so from now on, we will assume that we are finding the value function under the optimal policy. 
Also, we will assume that the environment dynamics are deterministic: choosing to take an action will guarantee that the agent moves in that direction. > > Possible actions are go left, right, up, down or stay in a square. Reward has a value 1 for any action done in square (1,1) and zero for actions done in all the other squares. > > > Based on this information, the optimal policy at $(1, 1)$ should be to always stay in that square. The agent doesn't receive any reward for being in another square, and the probability of dying is the same for each square, so choosing the action to stay in square $(1, 1)$ is best. > > The correct answer is supposed to be 5, and is calculated as $\frac{1}{1 \cdot 0.2} = 5$. But why is that? > > > By the Bellman Equation, the value function under the optimal policy $\pi\_\*$ at $(1,1)$ can be written as follows: $$v\_{\pi\_\*}((1, 1)) = \mathbb{E}\_{\pi\_\*}\left[R\_t + \gamma v\_{\pi\_{\*}}(s') | s = (1,1)\right],$$ where $R\_t$ denotes the immediate reward, $s$ denotes the current state, and $s'$ denotes the next state. By the problem statement, $\gamma = 1$. The next state is the $\texttt{dead}$ terminal state $\alpha = 20\%$ of the time. Terminal states have value $0$, as they do not accrue future rewards. The next state $s'$ is equal to $(1, 1)$ the remaining $(1-\alpha) = 80\%$ of the time because our policy dictates to remain in the same state and we assumed the dynamics were deterministic. Since expectation is linear, we can rewrite the expectation as follows (replacing $\gamma$ with $1$): \begin{align\*} v\_{\pi\_\*}((1,1)) &= \mathbb{E}\_{\pi\_\*}\left[R\_t + v\_{\pi\_{\*}}(s') | s = (1,1)\right]\\ &= \mathbb{E}\_{\pi\_\*}\left[R\_t |s=(1, 1)\right]+ \mathbb{E}\_{\pi\_\*}\left[v\_{\pi\_{\*}}(s') | s = (1,1)\right].\qquad (\*) \end{align\*} We have $$\mathbb{E}\_{\pi\_\*}\left[R\_t |s=(1, 1)\right] = 1\qquad (\*\*)$$ because we are guaranteed an immediate reward of $1$ when taking an action in state $(1, 1)$. 
Also, from the comments above regarding the next state values and probabilities, we have the following: \begin{align\*}\mathbb{E}\_{\pi\_\*}\left[v\_{\pi\_{\*}}(s') | s = (1,1)\right] &= (1-\alpha) \cdot v\_{\pi\_{\*}}((1,1)) + \alpha \cdot v\_{\pi\_\*}(\texttt{dead})\\ &= 0.8 \cdot v\_{\pi\_{\*}}((1,1)) + 0.2 \cdot 0\\ &= 0.8 \cdot v\_{\pi\_{\*}}((1,1)).\qquad (\*\*\*) \end{align\*} Substituting $(\*\*)$ and $(\*\*\*)$ into $(\*)$ yields the following: \begin{align\*} v\_{\pi\_\*}((1,1)) &= 1 + 0.8 \cdot v\_{\pi\_{\*}}((1,1))\\ v\_{\pi\_\*}((1,1)) - 0.8 \cdot v\_{\pi\_{\*}}((1,1)) &= 1\\ (1-0.8)v\_{\pi\_\*}((1,1)) &= 1\\ v\_{\pi\_\*}((1,1)) &= \frac{1}{1-0.8} = \frac{1}{0.2} = 5. \end{align\*} Upvotes: 3 [selected_answer]
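The fixed point $v = 1/(1-0.8) = 5$ derived above can also be confirmed by simply iterating the Bellman update (plain Python sketch; the dead state contributes 0):

```python
def stay_value(reward=1.0, survive=0.8, iters=200):
    """Iterate v <- r + survive * v (value iteration for the 'stay' policy;
    the dead state has value 0 and is absorbed into the (1 - survive) term)."""
    v = 0.0
    for _ in range(iters):
        v = reward + survive * v
    return v

v = stay_value()
assert abs(v - 5.0) < 1e-9   # converges to 1 / (1 - 0.8) = 5
```

The iterates form a geometric series $1 + 0.8 + 0.8^2 + \dots$, which is another way to see why the answer is $1/(1-0.8)$.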
2020/06/14
1,678
6,963
<issue_start>username_0: I'm using Q-learning (off-policy TD-control as specified in Sutton's book on pg 131) to train an agent to play connect four. My goal is to create a strong player (superhuman performance?) purely by self-play, without training models against other agents obtained externally. I'm using neural network architectures with some convolutional layers and several fully connected layers. These train surprisingly efficiently against their opponent, either a random player or another agent previously trained through Q-learning. Unfortunately the resulting models don't generalise well. 5000 episodes seems enough to obtain a high (> 90%) win rate against whichever opponent, but after > 20 000 episodes, they are still rather easy to beat by myself. To solve this, I now train batches of models (~ 10 models per batch), which are then used in group as a new opponent, i.e.: * I train a batch of models against a completely random agent (let's call them the generation one) * Then I train a second generation of agents against this first generation * Then I train a third generation against generation two * ... So far this helped in creating a slightly stronger/more general connect four model, but the improvement is not as good as I was hoping for. Is it just a matter of training enough models/generations or are there better ways for using Q-learning in combination with self-play? I know the most successful techniques (e.g. alpha zero) rely on MCTS, but I'm not sure how to integrate this with Q-learning? Neither how MCTS helps to solve the problem of generalisation? Thanks for your help!<issue_comment>username_1: > > To solve this, I now train batches of models (~ 10 models per batch), which are then used in group as a new opponent, > > > This seems quite a reasonable approach on the surface, but possibly the agents will still lose generalisation if the solutions in each generation are too similar. 
It also looks like from your experiment that learning progress is too slow. One simple thing you could do is progress through the generations faster. You don't need to train until agents win 90% of games before upping the generation number. You could set the target as low as 60% or even 55%. For generalisation, it may also help to train against a mix of previous generations. E.g. if you use ten opponents, have five from the previous generation, two from each of two iterations before that, and one even older one. Although the setup you have created plays an agent you are training against another agent that you have created, it is not quite self-play. In self-play, an agent plays against itself, and learns as both players simultaneously. This requires a single neural network function that can switch its evaluation to score for each player - you can either make it learn to take the current player into account and make the change in viewpoint itself, or in zero-sum games (of which Connect 4 is one) it can be more efficient to have it evaluate the end result for player 1 and simply take the negative of that as the score for player 2. 
One simple approach during training is to perform some n-step look ahead using [negamax with alpha-beta pruning](https://en.wikipedia.org/wiki/Negamax#Negamax_with_alpha_beta_pruning) (an efficient variant of minimax in zero-sum games), and if it finds the end of the game: * when training, use the result (win/draw/lose) as your ground truth value instead of the normal Q-learning TD target. * when evaluating/playing vs human, prefer the action choice over anything the Q function returns. In practice, only bother with the Q function if look-ahead search cannot find a result. In the last few months, [Kaggle have been running a "Connect X" challenge](https://www.kaggle.com/c/connectx) (which is effectively only Connect 4 at the moment). The forums and example scripts (called "Kernels") are a good source of information for writing your own agents, and if you choose to compete, then the leaderboard should give you a sense for how well your agent is performing. The top agents are perfect players, as [Connect 4 is a solved game](https://www.youtube.com/watch?v=yDWPi1pZ0Po). I am taking part in that competition, and have trained my agent using self-play Q-learning plus negamax search as above - it is not perfect, but is close enough that it can often beat a perfect playing opponent when playing as player 1. It was trained on around 100,000 games of self-play as I described above, plus extra training games versus previous agents. > > I know the most successful techniques (e.g. alpha zero) rely on MCTS, but I'm not sure how to integrate this with Q-learning? Neither how MCTS helps to solve the problem of generalisation? > > > MCTS is a variant of search algorithm, and could be combined with Q-learning similarly to negamax, although in Alpha Zero it is combined with something more like Actor-Critic. 
The combination would be similar - from each position in play use MCTS to look ahead, and instead of picking the direct action with the best Q value, pick the one with the best MCTS score. Unlike negamax, MCTS is stochastic, but you can still use its evaluations as ground truth for training. MCTS does not solve generalisation issues for neural networks, but like negamax it will improve the performance of a game-playing agent by looking ahead. Its main advantage over negamax in board games is a capability to scale to large branching factors. MCTS does work well for Connect 4. Some of the best agents in the Kaggle competition are using MCTS. However, it is not necessary for creating a "superhuman" Connect 4 agent; Q-learning plus negamax can do just as well. Upvotes: 2 [selected_answer]<issue_comment>username_2: MCTS does not help with generalization directly, but it enables the agent to plan ahead (see depth-first search or breadth-first search). Having the state space search embedded in the algorithm is very important for playing zero-sum games (we also plan ahead in our head when making moves, right?). Now Q-learning is generally good for simple environments, but to achieve superhuman performance on board games you would need HUGE amounts of data without using any planning algorithm. I don't even know if practically achieving superhuman performance by only Q-learning is even possible. Upvotes: 0
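The negamax-with-alpha-beta idea mentioned in the accepted answer can be sketched on a toy zero-sum game rather than Connect 4 itself. This illustrative sketch uses take-away Nim (remove 1-3 stones, taking the last stone wins), chosen only because its optimal values are known in closed form:

```python
def negamax(stones, alpha=-1, beta=1):
    """Return +1 if the player to move wins with perfect play, else -1."""
    if stones == 0:
        return -1                      # previous player took the last stone
    best = -1
    for take in (1, 2, 3):
        if take > stones:
            break
        # Flip and negate the window for the opponent's turn.
        score = -negamax(stones - take, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:              # alpha-beta cutoff: a win is found
            break
    return best

# Known result for this game: the side to move loses iff stones % 4 == 0.
assert negamax(4) == -1
assert negamax(7) == +1
```

In the Q-learning setup described above, the search result (when it reaches the end of the game) would replace the TD target; otherwise the Q function supplies the leaf evaluation.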
2020/06/15
1,426
5,825
<issue_start>username_0: > > **[Named entity recognition (NER)](https://www.kdnuggets.com/2018/08/named-entity-recognition-practitioners-guide-nlp-4.html), also known as entity chunking/extraction, is a popular technique used in information extraction to identify and segment the named entities and classify or categorize them under various predefined classes.** > > > Briefly, how does NER work? What are the main ideas behind it? And which algorithms are used to perform NER?
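As a toy illustration of the "identify and segment" step that the NER question above describes - and only that step - here is a naive capitalization-based chunker. It is not a real NER system (modern NER uses sequence models such as CRFs, BiLSTMs, or transformers, and also classifies each span into types like PERSON or LOC):

```python
def naive_chunker(tokens):
    """Group consecutive capitalized tokens into candidate entity spans.

    Purely illustrative: real NER does not rely on capitalization alone
    (this heuristic would wrongly chunk sentence-initial words, for example).
    """
    chunks, current = [], []
    for tok in tokens:
        if tok[:1].isupper():
            current.append(tok)
        elif current:
            chunks.append(" ".join(current))
            current = []
    if current:
        chunks.append(" ".join(current))
    return chunks

tokens = "Ada Lovelace wrote notes for the Analytical Engine in London".split()
# naive_chunker(tokens) -> ["Ada Lovelace", "Analytical Engine", "London"]
```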
2020/06/16
1,554
6,303
<issue_start>username_0: I would like to build a model based on reinforcement learning (RL) for the following scenario > > Recommend the best route (of cities listed for a given country) that satisfies the required criteria (museum, beaches, food, etc) for a total budget of $2000. > > > Based on the recommendation, the user will provide their feedback (as a reward), so the recommendations can be fine-tuned (by reinforcement learning) the next time. I modeled the system this way: * States = (c,cr), where $c$ is the city and $cr$ is the criteria (history, beach, food, etc) * Actions = (p) is the price of visiting the city * Reward: acceptance of the cities selected by the end user as a route (1 or 0) The objective is to decide which list of cities together satisfy the given budget. **Is this MDP model right, and how can I implement it? Maybe the only option is using Monte Carlo methods and linear/dynamic programming. Is there any other way?**
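The state/action/reward components listed in the question can be written down directly as data structures. Here is a minimal, purely illustrative encoding (the city names, prices, and the budget helper are invented for the example; this is not a validated MDP implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    city: str
    criterion: str  # e.g. "museum", "beach", "food"

# A candidate route is a sequence of (state, price) steps; the episode's
# constraint is that the total price stays within the user's budget.
def within_budget(route, budget=2000.0):
    return sum(price for _state, price in route) <= budget

route = [(State("Paris", "museum"), 900.0), (State("Nice", "beach"), 800.0)]
print(within_budget(route))  # True: 1700 <= 2000
```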
2020/06/16
1,368
5,630
<issue_start>username_0: What is the cleanest, easiest way to explain the concept of Reinforcement Learning to a non-[STEM](https://en.wikipedia.org/wiki/Science,_technology,_engineering,_and_mathematics) work colleague? What are the main ideas behind Reinforcement Learning?
2020/06/16
451
1,872
<issue_start>username_0: During the first episode, it's 100% exploration, because all our Q values are 0. Suppose we have 1000 time steps, and it's terminated by meeting a reward. So, after the first episode, why can't we make it 100% exploitation? Why do we still need exploration?<issue_comment>username_1: You can't guarantee that you have taken every action from every state, even with 1000 time steps. There would be multiple outcomes: 1. The episode terminates, either by success or failure, before the 1000 time steps. The agent is trying to maximise reward; if this is achieved by taking fewer than 1000 steps, then it will do so. It won't just walk around until it hits an arbitrary number of time steps. 2. If you have more states than time steps, then you will be unable to visit all states, and so you cannot guarantee that the policy you followed was optimal (and hence would still want to explore). Even if you have #states = #time-steps, then you will almost certainly have more state-action pairs than time steps. The only time these would be equal is if from every state there is only one action, which would be a trivial problem that wouldn't need RL to solve. Upvotes: 2 [selected_answer]<issue_comment>username_2: username_1's answer is good, but I'd like to add something. Your question realistically has nothing to do with Q-learning; in fact, you can ask the same thing about just about any RL algorithm. Indeed, even in multi-armed bandits, you can easily see why your proposed method is suboptimal (please don't interpret this as a criticism, because I think your question is a very natural one). My suggestion to you is to read up on multi-armed bandits, since they're much simpler to analyze. I think even the Sutton and Barto book deals with your proposed method explicitly, and mathematically proves that other strategies are better. Upvotes: 2
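To make this concrete, the usual compromise is not to switch to 100% exploitation after one episode, but to *decay* the exploration rate over many episodes while keeping a small floor. A minimal ε-greedy sketch (the constants are illustrative choices, not values from this thread):

```python
import random

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon take a random action, otherwise the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def epsilon_schedule(episode, start=1.0, floor=0.05, decay=0.99):
    """Exponentially decay epsilon per episode, but never below a small floor,
    so every state-action pair keeps a chance of being visited."""
    return max(floor, start * decay ** episode)

print(epsilon_schedule(0), epsilon_schedule(500))  # 1.0 0.05
```

Early episodes explore almost every step; later ones mostly exploit, while the floor preserves the guarantee that under-tried actions keep being sampled.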
2020/06/17
1,772
7,576
<issue_start>username_0: Nowadays, CV has really achieved great performance in many different areas. However, it is not clear what a CV algorithm is. What are some examples of CV algorithms that are commonly used nowadays and have achieved state-of-the-art performance?<issue_comment>username_1: There are many computer vision (CV) algorithms and models that are used for different purposes. So, of course, I cannot list all of them, but I can enumerate some of them based on my experience and knowledge. Of course, this answer will only give you a flavor of the type of algorithm or model that you will find while solving CV tasks. For example, there are algorithms that are used to extract **keypoints** and **descriptors** (which are often collectively called **features**, although the descriptor is the actual feature vector and the keypoint is the actual feature, and in deep learning this distinction between keypoints and descriptors does not even exist, AFAIK) from images, i.e. **feature extraction** algorithms, such as [**SIFT**](https://www.cs.ubc.ca/%7Elowe/papers/ijcv04.pdf), BRISK, FREAK, SURF or ORB. There are also edge and corner detectors. For example, the [**Harris corner detector**](http://www.bmva.org/bmvc/1988/avc-88-023.pdf) is a very famous corner detector. Nowadays, **convolutional neural networks** (CNNs) have basically supplanted all these algorithms in many cases, especially when enough data is available. Rather than extracting the typical features from an image (such as corners), CNNs extract features that are most useful to solve the task that you want to solve by taking into account the information in the training data (which probably includes corners too!). Hence CNNs are often called **data-driven feature extractors**. There are different types of CNNs. 
For example, CNNs that were designed for **semantic segmentation** (which is a CV task/problem), such as the [**u-net**](https://arxiv.org/pdf/1505.04597.pdf), or CNNs that were designed for **instance segmentation**, such as [**mask R-CNN**](https://arxiv.org/pdf/1703.06870.pdf). There are also algorithms that can be used to normalize features, such as the [**bag-of-features** algorithm](https://ai.stackexchange.com/q/21914/2444), which can be used to create fixed-size feature vectors. This can be particularly useful for tasks like content-based image retrieval. There are many other algorithms that could be considered CV algorithms or are used to solve CV tasks. For example, [**RanSaC**](https://dl.acm.org/doi/10.1145/358669.358692), which is a very general algorithm to fit models to data in the presence of outliers, can be used to fit homographies (matrices that are generally used to transform planes to other planes) that transform pixels of one image to another coordinate system of another image. This can be useful for the purpose of **template matching** (which is another CV task), where you want to find a template image in another target image. This is very similar to **object detection**. There are also many **image processing** algorithms and techniques that are heavily used in computer vision. For example, all the **filters** (such as Gaussian, median, bilateral, non-local means, etc.) that can be used to smooth, blur or de-noise images. Nowadays, some deep learning techniques have also replaced some of these filters and image processing techniques, such as **de-noising auto-encoders**. All these algorithms and models have something in common: they are used to process images and/or get low- or high-level information from images. Most of them are typically used to extract features (i.e. regions of the images that are relevant in some way) from images, so that they can later be used to train a classifier or regressor to perform some kind of task (e.g. 
find and distinguish objects, such as people, cars, dogs, etc., in an image). The classifiers/regressors are typically machine learning (ML) models, such as SVMs or fully-connected neural networks, but there's a high degree of overlap between CV and ML because some ML tools are used to solve CV tasks (e.g. image classification). Upvotes: 4 [selected_answer]<issue_comment>username_2: Computer vision is a wide field, and besides the fact that deep learning dominates, there are still many, many other algorithms that see widespread use in both academia and industry. For tasks such as image classification / object recognition, the typical paradigm is some CNN architecture, such as a ResNet or VGG. There has been a lot of work to extend and improve CNNs, but the basic architecture has not really changed much over the years. Interestingly, there's been some work to encode more complex inductive biases / invariants into the deep learning modelling process, such as Spatial Transformer Networks and Group Equivariant Networks. More classical vision approaches to such problems typically include computing some form of hand-crafted feature (HOG, LBP), and training any off-the-shelf classifier. For object detection, the de-facto standard for many years was Viola-Jones, for its combination of performance and speed (there were more accurate systems at the time, but they were slower). More recently, object detection has been dominated by deep learning, with architectures such as SSD, YOLO, all the RCNN variants, etc. A related problem to object detection is segmentation. Deep learning again dominates in this area, with algorithms such as Mask RCNN. However, many other approaches exist and see some use, such as superpixels (e.g. SLIC), watershed, and normalized cuts. For problems such as image search, vision approaches such as Fisher vectors and VLAD (computed from image descriptors such as SIFT or SURF) are still competitive.
However, CNN features have also seen use in this domain. For video analysis, CNNs (typically, 3D CNNs) are popular. However, they often leverage other vision techniques, such as optical flow. The most popular optical flow algorithms are Brox, TVL-1, KLT, and Farneback. There are more recent approaches which attempt to use deep learning to actually learn the optical flow, though. An overarching set of techniques with many varied applications comprises interest point detectors, image descriptors, and feature encoding techniques. Interest point detectors attempt to localise interest points in an image or video, and popular detectors include Harris, FAST, and MSER. Image descriptors are used to describe those interest points. Example descriptors include SIFT, SURF, KAZE, and ORB. The descriptors themselves can be used to do various things, such as estimate homographies using the RANSAC algorithm (for applications such as panorama stitching and camera stabilisation). However, the descriptors can also be encoded and pooled into a single fixed-length feature vector, which serves as the representation of the image. The most common approach to this encoding is bag of features / bag of visual words, which is based on K-means. Popular extensions / variants include Fisher vectors and VLAD. Self-supervised and semi-supervised learning is also very popular nowadays in academia, and seeks to get the most out of the abundant unlabelled data. In a computer vision context, popular techniques include MoCo and SimCLR, but new methods are released almost weekly! Another problem domain in computer vision is the ability to generate / synthesize images. This is not unique to computer vision, but the common algorithms for this are variational autoencoders (VAEs) and generative adversarial networks (GANs). Upvotes: 2
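As one concrete example of the classical techniques listed in both answers, here is a minimal NumPy sketch of the Harris corner response, $R = \det(M) - k \cdot \text{trace}(M)^2$, where $M$ is the structure tensor. For brevity it uses a plain box window instead of the usual Gaussian smoothing, and the constant $k$ is just a typical choice:

```python
import numpy as np

def box_sum(a, r=1):
    """Sum of each (2r+1) x (2r+1) window, via zero padding and shifted slices."""
    p = np.pad(a, r)
    out = np.zeros_like(a, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def harris_response(img, k=0.05, r=1):
    """Per-pixel Harris response from the windowed structure tensor."""
    iy, ix = np.gradient(img.astype(float))  # image gradients (rows, cols)
    sxx = box_sum(ix * ix, r)
    syy = box_sum(iy * iy, r)
    sxy = box_sum(ix * iy, r)
    return (sxx * syy - sxy ** 2) - k * (sxx + syy) ** 2

# A white square on a black background: the square's corner scores positive,
# points along its edges score negative, and flat regions score zero.
img = np.zeros((10, 10))
img[5:, 5:] = 1.0
R = harris_response(img)
```

A real detector would also smooth the tensor with a Gaussian window and apply non-maximum suppression to pick corner locations.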
2020/06/18
2,939
10,155
<issue_start>username_0: Typically, people say that convolutional neural networks (CNNs) perform the convolution operation, hence their name. However, some people have also said that a CNN actually performs the cross-correlation operation rather than the convolution. How is that? Does a CNN perform the convolution or cross-correlation operation? What is the difference between the convolution and cross-correlation operations?<issue_comment>username_1: Short answer ============ Theoretically, convolutional neural networks (CNNs) can perform either the cross-correlation or the convolution: it does not really matter which one they perform, because the kernels are learnable, so they can adapt to the cross-correlation or the convolution given the data, although, in the typical diagrams, CNNs are shown to perform the cross-correlation because (in libraries like TensorFlow) they are typically *implemented* with cross-correlations (and cross-correlations are conceptually simpler than convolutions). Moreover, in general, the kernels may or may not be symmetric (although they typically won't be). If they are symmetric, the cross-correlation is equal to the convolution. Long answer =========== To understand the answer to this question, I will provide two examples that show the similarities and differences between the convolution and cross-correlation operations. I will focus on the convolution and cross-correlation applied to 1-dimensional discrete and finite signals (which is the simplest case to which these operations can be applied) because, essentially, CNNs process finite and discrete signals (although typically higher-dimensional ones, but this answer applies to higher-dimensional signals too).
Moreover, in this answer, I will assume that you are at least familiar with how the convolution (or cross-correlation) in a CNN is performed, so that I do not have to explain these operations in detail (otherwise this answer would be even longer). What is the convolution and cross-correlation? ---------------------------------------------- Both the convolution and the cross-correlation operations are defined as the dot product between a small matrix and different parts of another typically bigger matrix (in the case of CNNs, it is an image or a feature map). Here's the usual illustration (of the cross-correlation, but the idea of the convolution is the same!). [![enter image description here](https://i.stack.imgur.com/Jhb6U.gif)](https://i.stack.imgur.com/Jhb6U.gif) Example 1 --------- To be more concrete, let's suppose that we have the output of a function (or signal) $f$ grouped in a matrix $$f = [2, 1, 3, 5, 4] \in \mathbb{R}^{1 \times 5},$$ and the output of a kernel function also grouped in another matrix $$h=[1, -1] \in \mathbb{R}^{1 \times 2}.$$ For simplicity, let's assume that we do not pad the input signal and we perform the convolution and cross-correlation with a stride of 1 (I assume that you are familiar with the concepts of padding and stride). ### Convolution Then the **convolution** of $f$ with $h$, denoted as $f \circledast h = g\_1$, where $\circledast$ is the convolution operator, is computed as follows \begin{align} f \circledast h = g\_1 &=\\ [(-1)\*2 + 1\*1, \\ (-1)\*1 + 1\*3, \\ (-1)\*3 + 1\*5, \\ (-1)\*5+1\*4] &=\\ [-2 + 1, -1 + 3, -3 + 5, -5 + 4] &=\\ [-1, 2, 2, -1] \in \mathbb{R}^{1 \times 4} \end{align} So, the convolution of $f$ with $h$ is computed as a series of element-wise multiplications between the horizontally flipped kernel $h$, i.e. $[-1, 1]$, and each $1 \times 2$ window of $f$, each of which is followed by a summation (i.e. a dot product). This follows from the definition of convolution (which I will not report here). 
### Cross-correlation Similarly, the **cross-correlation** of $f$ with $h$, denoted as $f \otimes h = g\_2$, where $\otimes$ is the cross-correlation operator, is also defined as a dot product between $h$ and different parts of $f$, but without flipping the elements of the kernel before applying the element-wise multiplications, that is \begin{align} f \otimes h = g\_2 &=\\ [1\*2 + (-1)\*1, \\ 1\*1 + (-1)\*3, \\ 1\*3 + (-1)\*5, \\ 1\*5 + (-1)\*4] &=\\ [2 - 1, 1 - 3, 3 - 5, 5 - 4] &=\\ [1, -2, -2, 1] \in \mathbb{R}^{1 \times 4} \end{align} ### Notes 1. The only difference between the convolution and cross-correlation operations is that, in the first case, the kernel is flipped (along all spatial dimensions) before being applied. 2. In both cases, the result is a $1 \times 4$ vector. If we had convolved $f$ with a $1 \times 1$ kernel, the result would have been a $1 \times 5$ vector. Recall that we assumed no padding (i.e. we don't add dummy elements to the left or right borders of $f$) and stride 1 (i.e. we shift the kernel to the right one element at a time). Similarly, if we had convolved $f$ with a $1 \times 3$ kernel, the result would have been a $1 \times 3$ vector (as you will see from the next example). 3. The results of the convolution and cross-correlation, $g\_1$ and $g\_2$, are different. Specifically, one is the negated version of the other (this happens here because the flipped kernel $[-1, 1]$ is the negation of $h$). So, the result of the convolution is generally different than the result of the cross-correlation, given the same signals and kernels (as you might have suspected). Example 2: symmetric kernel --------------------------- Now, let's convolve $f$ with a $1 \times 3$ kernel that is symmetric around the middle element, $h\_2 = [-1, 2, -1]$. Let's first compute the convolution.
\begin{align} f \circledast h\_2 = g\_3 &=\\ [(-1)\*2 + 2\*1 + (-1) \* 3,\\ (-1)\*1 + 2\*3 + (-1) \* 5,\\ (-1)\*3 + 2\*5 + (-1) \* 4] &=\\ [-2 + 2 - 3, -1 + 6 - 5, -3 + 10 - 4] &=\\ [-3, 0, 3] \in \mathbb{R}^{1 \times 3} \end{align} Now, let's compute the cross-correlation \begin{align} f \otimes h\_2 = g\_4 &=\\ [(-1)\*2 + 2\*1 + (-1) \* 3, \\ (-1)\*1 + 2\*3 + (-1) \* 5, \\ (-1)\*3 + 2\*5 + (-1) \* 4] &=\\ [-3, 0, 3] \in \mathbb{R}^{1 \times 3} \end{align} Yes, that's right! In this case, the result of the convolution and the cross-correlation is the same. This is because the kernel is symmetric around the middle element. This result applies to any convolution or cross-correlation in any dimension. For example, the convolution of the 2d Gaussian kernel (a centrally symmetric kernel) and a 2d image is equal to the cross-correlation of the same signals. CNNs have learnable kernels --------------------------- In the case of CNNs, the kernels are the learnable parameters, so we do not know beforehand whether the kernels will be symmetric or not around their middle element. They probably won't be. In any case, CNNs can perform either the cross-correlation (i.e. no flip of the filter) or convolution: it does not really matter if they perform cross-correlation or convolution because the filter is learnable and can adapt to the data and tasks that you want to solve, although, in the visualizations and diagrams, CNNs are typically shown to perform the cross-correlation (but this does not have to be the case in practice). Do libraries implement the convolution or correlation? ------------------------------------------------------ In practice, certain libraries provide functions to compute both convolution and cross-correlation.
For example, NumPy provides both the functions [`convolve`](https://numpy.org/doc/stable/reference/generated/numpy.convolve.html) and [`correlate`](https://numpy.org/doc/stable/reference/generated/numpy.correlate.html) to compute the convolution and cross-correlation, respectively. If you execute the following piece of code (Python 3.7), you will get results that are consistent with my explanations above.

```
import numpy as np

f = np.array([2., 1., 3., 5., 4.])
h = np.array([1., -1.])
h2 = np.array([-1., 2., -1.])  # the symmetric kernel from Example 2

g1 = np.convolve(f, h, mode="valid")
g2 = np.correlate(f, h, mode="valid")

print("g1 =", g1)  # g1 = [-1. 2. 2. -1.]
print("g2 =", g2)  # g2 = [ 1. -2. -2. 1.]
```

However, NumPy is not really a library that provides [out-of-the-box](https://numpy.org/doc/stable/reference/generated/numpy.correlate.html) functionality to build CNNs. On the other hand, TensorFlow's and [PyTorch's](https://pytorch.org/docs/master/generated/torch.nn.Conv1d.html) functions to build the convolutional layers actually perform cross-correlations. As I said above, although it does not really matter whether CNNs perform the convolution or cross-correlation, this naming is misleading. Here's a proof that TensorFlow's [`tf.nn.conv1d`](https://www.tensorflow.org/api_docs/python/tf/nn/conv1d) actually implements the cross-correlation.

```
import tensorflow as tf  # TensorFlow 2.2

f = tf.constant([2., 1., 3., 5., 4.], dtype=tf.float32)
h = tf.constant([1., -1.], dtype=tf.float32)

# Reshaping the inputs because conv1d accepts only certain shapes.
f = tf.reshape(f, [1, int(f.shape[0]), 1])
h = tf.reshape(h, [int(h.shape[0]), 1, 1])

g = tf.nn.conv1d(f, h, stride=1, padding="VALID")
print("g =", g)  # [1, -2, -2, 1]
```

Further reading --------------- After having written this answer, I found the article [Convolution vs.
Cross-Correlation](https://glassboxmedicine.com/2019/07/26/convolution-vs-cross-correlation) (2019) by <NAME>, which essentially says the same thing that I am saying here, but provides more details and examples. Upvotes: 4 [selected_answer]<issue_comment>username_2: Just as a short and quick answer to build on username_1's answer: CNNs are typically taught using a correlation on the forward pass, rather than a convolution. In reality, "convolutional neural network" is a bit of a misleading name, but not an entirely incorrect one. CNNs do in fact use convolutions every time they are trained and run. If a *correlation* is used on the forward pass, a *convolution* is used on the backward pass. The opposite is true if a convolution is used on the forward pass (which is just as valid as using a correlation). I couldn't seem to find this information anywhere, so I had to learn it the hard way. So, to summarise, a typical CNN goes like this: correlation forward, convolution backward. Upvotes: 0
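To make the kernel flip in the accepted answer fully explicit, here is a hand-rolled "valid"-mode implementation of both operations in plain Python; it reproduces the numbers from Examples 1 and 2 above:

```python
def correlate1d(f, h):
    """Valid-mode cross-correlation: slide h over f, no flip."""
    n = len(f) - len(h) + 1
    return [sum(h[j] * f[i + j] for j in range(len(h))) for i in range(n)]

def convolve1d(f, h):
    """Valid-mode convolution: cross-correlation with a flipped kernel."""
    return correlate1d(f, h[::-1])

f = [2., 1., 3., 5., 4.]
print(convolve1d(f, [1., -1.]))    # [-1.0, 2.0, 2.0, -1.0]
print(correlate1d(f, [1., -1.]))   # [1.0, -2.0, -2.0, 1.0]
# With the symmetric kernel from Example 2, both operations agree:
print(convolve1d(f, [-1., 2., -1.]))   # [-3.0, 0.0, 3.0]
print(correlate1d(f, [-1., 2., -1.]))  # [-3.0, 0.0, 3.0]
```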
2020/06/19
1,060
3,876
<issue_start>username_0: Currently, I'm only going through these two books * [Reinforcement Learning: An Introduction, by Sutton and Barto](http://incompleteideas.net/book/RLbook2020.pdf): RL explained on an engineering level (mathematical, but readable for a non-mathematician). Elementary notions from probability and statistics are required (conditional probability, total probability theorem, total expectation theorem, and similar. The [MIT RES.6-012 "Introduction to Probability"](https://www.youtube.com/watch?v=1uW3qMFA9Ho&list=PLUl4u3cNGP60hI9ATjSFgLZpbNJ7myAg6) course is a great source of information for these topics.). * [Deep Reinforcement Learning, by <NAME>](https://www.manning.com/books/grokking-deep-reinforcement-learning): this book introduces the main elements of reinforcement learning in a less formal way than Sutton and Barto (derivations for some equations are not given), using examples to describe the math. What other introductory books to reinforcement learning do you know, and how do they approach this topic?<issue_comment>username_1: In addition to the ones you mentioned, I would add [Algorithms of Reinforcement Learning](https://sites.ualberta.ca/%7Eszepesva/rlbook.html) by <NAME>. There is a number of professors who use it as a reference in their RL teaching materials (for example [this one](https://cs.uwaterloo.ca/%7Eppoupart/teaching/cs885-spring20/textbook.html)). It generally follows the same outline as Sutton & Barto's book (except the part on bandits, it is included in the Chapter on *Control*). In fact, it may be considered as a condensed version of Sutton & Barto (about 100 pages). In addition, it's freely available online. I like the author's justification as to why he wrote this book, so I'm just going to quote it: > > Why did I write this book? Good question! There exist a good number of > really great books on Reinforcement Learning. So why a new book? 
I had > selfish reasons: I wanted a **short** book, which nevertheless contained > the major ideas underlying **state-of-the-art RL algorithms** (back in > 2010), a discussion of their relative **strengths** and **weaknesses**, with > hints on what is known (and not known, but would be good to know) > about these algorithms. > > > Upvotes: 2 <issue_comment>username_2: [Foundations of Deep Reinforcement Learning: Theory and Practice in Python (Addison-Wesley Data & Analytics Series) 1st Edition](https://www.pearson.com/us/higher-education/program/Graesser-Foundations-of-Deep-Reinforcement-Learning-Theory-and-Practice-in-Python/PGM2027228.html) This book does not give detailed background information on Markov decision processes, the different Bellman equations, the relationships between the value function and action-value function, etc. It focuses on deep reinforcement learning and goes straight to policy- and value-based algorithms using neural networks. It might be good for someone trying to quickly understand what deep RL algorithms are out there and apply them. Upvotes: 1 <issue_comment>username_3: The (draft) book [Reinforcement Learning: Theory and Algorithms](https://rltheorybook.github.io/rltheorybook_AJKS.pdf), by [<NAME>](https://sham.seas.harvard.edu/) (who published a [natural policy gradient](https://proceedings.neurips.cc/paper/2001/file/4b86abe48d358ecf194c56c69108433e-Paper.pdf) algorithm and other important research) and others, introduces RL in a mathematical/formal way. It seems to me that this is a reliable book, but a bit advanced for "regular people". Yes, I know the question was about introductory books on RL, but this may be suitable for people who have a solid knowledge of math and would like a hardcore intro to RL. For example, the book starts with a non-trivial (in my view) proof that there exists an optimal stationary and deterministic policy for an MDP. Upvotes: 2
2020/06/19
1,006
3,638
<issue_start>username_0: I was running into a situation in which my input features experience very large variations in magnitude. In particular, consider that feature 1 belongs to group 1 and features 2, 3, 4 belong to group 2, like in the picture below [![enter image description here](https://i.stack.imgur.com/4FHOK.png)](https://i.stack.imgur.com/4FHOK.png) I was really worried that in this case **feature 1 might dominate features 2, 3, 4 (group 2)** because its corresponding values are so large (I was trying to train this data set on a neural network). In this situation, what would be the appropriate scaling strategy? Update: I know for sure that the value of feature 1 is an integer that is uniform on the interval [22, 42], but for features 2, 3, 4 I do not have any insight. Thank you for your enthusiasm!
2020/06/22
910
3,773
<issue_start>username_0: I am a computer science student. I learned about programming languages recently, but I don't know much about artificial intelligence. I want to know, why don't we program something in a way that we could tell the program > > Hey! Do this for me! > > > And then just sit down and wait for the AI to do the job? Is this currently possible to do?<issue_comment>username_1: Normally when you write a program, you are acting like a boss that micromanages the job, telling the workers *how* to accomplish a task, perhaps without even letting them know what the purpose is. What you are hoping to be is a boss that gives the workers a *goal* and allows them to determine how to accomplish it. In many ways, that is one of the aims of AI. We already have small examples, such as a smart washing machine that weighs the clothes, monitors the amount of dirt in the water, and continually decides how much water to add or drain, when to agitate, when to rinse, and when to spin. All you had to tell it was "*clean these*", and perhaps say what kind of material they are made of. As a much larger example, automobiles were traditionally operated by turning the steering wheel to cause it to change direction and by pressing the pedals to increase or decrease the speed, but now we are working on car controllers that can respond to "*go to Cleveland*" by determining the best speed and direction by themselves. But note that those two examples (the first requiring only a little "intelligence", the second a lot) were for very specific requests that could be expressed in a few words. As soon as the request becomes even slightly more complicated, creating a solution becomes much, much more difficult. Ask your "*Do this for me!*" request of a human being. The first thing they'll do is ask "what does *this* mean?". And then you'll have to give a lot of details. And then they'll have questions about what you really want. And so on. 
Providing the requirements for "this" will be neither simple nor easy. Human intelligence is still vastly superior to current AI. In particular, it is capable of not only recognizing that additional information is required, but of having the intuitive ability to know what that missing information must be like. Upvotes: 2 <issue_comment>username_2: This is still a long way off and would require what is known as [Artificial General Intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence), which is probably the best descriptive term for AI with cognitive capacity akin to humans. *(Another term often used is "[superintelligence](https://en.wikipedia.org/wiki/Superintelligence)", but it's less precise in regard to general intelligence, in that it could be understood to connote superior intelligence that is not general. The same problem exists for Searle's "Strong AI", an AI that matches or exceeds human capacity, since AI can now be "strong but narrow", exceeding human capability in a single task or a set of tasks, such as a game, or managing fully definable systems such as air-conditioning.)* Part of the problem is we still don't really understand how human cognition works, so the famous projection by an incredibly important figure in AI, [<NAME>](https://en.wikipedia.org/wiki/Herbert_A._Simon), that: > > "Machines will be capable, within twenty years, of doing any work a man can do." > > > is still, like fusion power, potentially always 20 years in the future. (Worth noting that if a general superintelligence is developed, it could not only do anything humans could do, but things we can't even conceive of, because its intelligence would greatly exceed our own, and would likely be capable of continual self-optimization.) Upvotes: 0
2020/06/23
835
2,566
<issue_start>username_0: In the proof of the policy gradient theorem in the [RL book of Sutton and Barto](http://incompleteideas.net/book/RLbook2020.pdf) (that I shamelessly paste here): [![enter image description here](https://i.stack.imgur.com/ASU0q.png)](https://i.stack.imgur.com/ASU0q.png) there is the "unrolling" step that is supposed to be immediately clear > > With just elementary calculus and re-arranging of terms > > > Well, it's not. :) Can someone explain this step in more detail? How exactly is $Pr(s \rightarrow x, k, \pi)$ deduced by "unrolling"?<issue_comment>username_1: It looks like "v of s prime" is just substituted with the already derived value for "v of s". You can call it a recursion of a kind. In other words, v(s) is dependent on v(s'), and that implies that v(s') is dependent on v(s''). So we can combine that and get the dependency of v(s) on v(s''). Upvotes: -1 <issue_comment>username_2: The unrolling step is due to the fact that you end up with an equation that you can keep expanding indefinitely. Note that we start with calculating $\nabla v\_\pi(s)$ and arrive at $$\nabla v\_\pi(s) = \sum\_a\left[ \nabla \pi(a|s) q\_\pi(s,a) + \pi(a|s) \sum\_{s'}p(s'|s,a) \nabla v\_\pi (s') \right]\;,$$ which contains a term for $\nabla v\_\pi(s')$. This is a recursive relationship, similar to the Bellman equation, so we can substitute in a term for $\nabla v\_\pi(s')$, which will be a similar term, just with $\nabla v\_\pi(s'')$. 
As I mentioned, we can do this indefinitely, which leads us to $$\nabla v\_\pi(s) = \sum\_{x \in \mathcal{S}} \sum\_{k=0}^\infty \mathbb{P}(s\rightarrow x, k, \pi) \sum\_a \nabla \pi(a|x) q\_\pi(x,a)\;.$$ We need the term $\sum\_{x \in \mathcal{S}} \sum\_{k=0}^\infty \mathbb{P}(s\rightarrow x, k, \pi)$ because we want to take an average over the state space; however, due to the unrolling, there are many different states that we need to average over (these come from the $s',s'',s''',...$ in the unrolling), so we also need to add the probability of transitioning from state $s$ to state $x$ in $k$ time steps, where we sum over an infinite horizon due to the repeated unrolling. If you are wondering what happens to the $\pi(a|s)$ and $p(s'|s,a)$ terms and why they are not explicitly shown in this final form, it is because this is exactly what the $\mathbb{P}(s\rightarrow x, k, \pi)$ represents. The average over all possible states accounts for the $p(s'|s,a)$, and the fact that we follow policy $\pi$ in the probability statement accounts for the $\pi(a|s)$. Upvotes: 3 [selected_answer]
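To spell out the first unrolling step in the accepted answer fully explicitly (same notation as above; the bracketed part is just the same identity applied at $s'$): $$\nabla v\_\pi(s) = \sum\_a \nabla \pi(a|s) q\_\pi(s,a) + \sum\_a \pi(a|s) \sum\_{s'} p(s'|s,a) \left[ \sum\_{a'} \nabla \pi(a'|s') q\_\pi(s',a') + \sum\_{a'} \pi(a'|s') \sum\_{s''} p(s''|s',a') \nabla v\_\pi(s'') \right]\;.$$ The first term is the $k=0$ contribution, with $\mathbb{P}(s \rightarrow s, 0, \pi) = 1$; the first term inside the brackets, once weighted by $\sum\_a \pi(a|s) p(s'|s,a)$, is the $k=1$ contribution, since $\mathbb{P}(s \rightarrow s', 1, \pi) = \sum\_a \pi(a|s) p(s'|s,a)$; and repeating the substitution on the remaining $\nabla v\_\pi(s'')$ term generates the $k = 2, 3, \dots$ contributions.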
2020/06/23
407
1,605
<issue_start>username_0: Why don't we use a trigonometric function, such as $\tan(x)$, where $x$ is an element of the interval $[0, \pi/2)$, instead of the sigmoid function for the output neurons (in the case of classification)?<issue_comment>username_1: The main reason why the sigmoid function is used is that it 'does not blow up', since it is always between 0 and 1. As for the ReLU, it is used because it is computationally cheap and even resolves the problem of vanishing gradients (and hence is used more often than the sigmoid). A function like $\tan(x)$ will blow up for certain values of $x$. This can cause the problem of exploding gradients, so I believe $\tan(x)$ is not a good non-linearity to use. As for any other function, it is more because of the results that we have gotten over the years, and the sigmoid and ReLU have been promising. Upvotes: 2 <issue_comment>username_2: Although it's true that if you use certain trigonometric functions, such as the tangent, you could have numerical problems (as suggested [in this answer](https://ai.stackexchange.com/a/22108/2444)), that's not the only reason for not using trigonometric functions. Trigonometric functions are periodic. In general, we may not want to convert a non-periodic function to a periodic one. To be more concrete, let's suppose we use the sine function as the activation function of the output neurons of a neural network. Assuming only one input, if the input to any of those output neurons is $2\pi k$, for any integer $k$, the result will always be $0$, but that may not be desirable. Upvotes: 1 [selected_answer]
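To see the "blow up" username_1 mentions numerically, here is a tiny plain-Python sketch (the sample inputs are chosen just for illustration):

```python
import math

# tan explodes as x approaches pi/2, and so does its derivative
# 1 / cos(x)^2, which is what would make gradients explode during training
xs = [1.0, 1.5, 1.57, 1.5707]
print([math.tan(x) for x in xs])  # grows without bound near pi/2

# the sigmoid, by contrast, stays in (0, 1) for any real input
sigmoid = lambda x: 1 / (1 + math.exp(-x))
print(sigmoid(-30.0), sigmoid(0.0), sigmoid(30.0))
```

The sigmoid's output (and its gradient, which is at most 0.25) stays bounded no matter what pre-activation value the previous layer produces, which is exactly the property tan lacks.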
2020/06/23
698
3,028
<issue_start>username_0: I am trying to understand the genetic algorithm in terms of feature selection, and these features are extracted using a machine learning algorithm. Let's suppose I have data of heart rate for 3 minutes collected from $50$ subjects. From these 3-minute heart rate recordings, I extracted $5$ features, like the mean, standard deviation, variance, skewness and kurtosis. Now the shape of my feature set is `(50, 5)`. I want to know what the gene, chromosome and population in a genetic algorithm correspond to in the above scenario. What I understand is each feature is a gene, a set of all features for one subject `(1, 5)` is the chromosome, and the whole feature set `(50, 5)` is a population. But I think this concept is not correct, because in a genetic algorithm we take a random population, whereas according to my interpretation the complete data set is the population, so how would a random population be selected? Can anyone help me to understand it? [![enter image description here](https://i.stack.imgur.com/ZKMxm.png)](https://i.stack.imgur.com/ZKMxm.png)<issue_comment>username_1: Genetic algorithms, also known as evolutionary search, provide a general technique to optimize an objective function. We also say that we are trying to maximize fitness. This means that we are trying to find an individual with the highest possible fitness. We start with a population, say 100 individuals, and using mutation and crossover we generate offspring among whom we hope to find fitter individuals, and as the generations progress, we get better and better. One way to start this all is to think about the "fitness" or objective function. What is it that we want in the best individuals? Can we model that? How do we model that? In your case, does a specific measurement (those 5 numbers you mention) say how fit an individual is? And that fitness can be one number, say from 1 to 100, or it could be unbounded (as in real life where things get better and better with temporary regressions). 
So the challenge is how to map the features to a number. That's a math function to design. Genes are what change from individual to individual and they mutate from parent to offspring and they are shared in crossover. Given a set of genes, what is the fitness? If you can answer that question (meaning a mathematical function to map the genes of each individual to a number), then you have a genetic algorithm to run and it will find the fittest individuals according to your (math) function. Upvotes: 0 <issue_comment>username_2: A chromosome in this case could be a set of filters, each extracting a different feature (analogous to Convolutional Neural Network). Your question doesn't say what you want to do with these features, so this solution is made under the assumption that there is a fitness function which would take these features as an input and output a score. Then, each gene is a parameter for a filter, each chromosome defines a set of such filters, which makes up an individual. A population is a set of such individuals. Upvotes: 1
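To make the mapping concrete, here is a toy feature-selection sketch (plain Python; the fitness function, the "target" subset, and all parameters are made-up stand-ins for illustration, not from the question). Each gene says whether one of the 5 extracted features is kept, a chromosome is one candidate feature subset, and the population is a set of such randomly initialized candidates, NOT the `(50, 5)` data matrix itself:

```python
import random

random.seed(0)

N_FEATURES = 5   # mean, std, variance, skewness, kurtosis
POP_SIZE = 8

# Hypothetical stand-in fitness: in a real pipeline this would train and
# score a model on the selected feature subset; here we just reward masks
# that match an arbitrary "good" subset, purely for illustration.
TARGET = [1, 0, 1, 1, 0]

def fitness(mask):
    return sum(1 for gene, t in zip(mask, TARGET) if gene == t)

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.1):
    return [1 - g if random.random() < rate else g for g in mask]

# One chromosome = one candidate feature subset (one 0/1 gene per feature)
population = [[random.randint(0, 1) for _ in range(N_FEATURES)]
              for _ in range(POP_SIZE)]
initial_best = max(fitness(m) for m in population)

for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]  # elitism: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(best, fitness(best))
```

The `(50, 5)` matrix only enters through the fitness evaluation (e.g. how well a classifier trained on the selected columns performs), which resolves the question of where the "random population" comes from: it is sampled over feature subsets, not over subjects.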
2020/06/23
1,115
3,837
<issue_start>username_0: I'm building a denoising autoencoder. I want to have the same input and output shape image. This is my architecture: ``` input_img = Input(shape=(IMG_HEIGHT, IMG_WIDTH, 1)) x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img) x = MaxPooling2D((2, 2), padding='same')(x) x = Conv2D(64, (3, 3), activation='relu', padding='same')(x) encoded = MaxPooling2D((2, 2), padding='same')(x) x = Conv2D(32, (3, 3), activation='relu', padding='valid')(encoded) x = UpSampling2D((2, 2))(x) x = Conv2D(32, (3, 3), activation='relu', padding='same')(x) x = UpSampling2D((2, 2))(x) decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x) # decodedSize = K.int_shape(decoded)[1:] # x_size = K.int_shape(input_img) # decoded = Reshape(decodedSize, input_shape=decodedSize)(decoded) autoencoder = Model(input_img, decoded) autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy') ``` My input shape is: **1169x827** This is Keras output: ``` Model: "model_6" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_7 (InputLayer) [(None, 1169, 827, 1)] 0 _________________________________________________________________ conv2d_30 (Conv2D) (None, 1169, 827, 32) 320 _________________________________________________________________ max_pooling2d_12 (MaxPooling (None, 585, 414, 32) 0 _________________________________________________________________ conv2d_31 (Conv2D) (None, 585, 414, 64) 18496 _________________________________________________________________ max_pooling2d_13 (MaxPooling (None, 293, 207, 64) 0 _________________________________________________________________ conv2d_32 (Conv2D) (None, 291, 205, 32) 18464 _________________________________________________________________ up_sampling2d_12 (UpSampling (None, 582, 410, 32) 0 _________________________________________________________________ conv2d_33 (Conv2D) (None, 
582, 410, 32) 9248 _________________________________________________________________ up_sampling2d_13 (UpSampling (None, 1164, 820, 32) 0 _________________________________________________________________ conv2d_34 (Conv2D) (None, 1162, 818, 1) 289 =============================================================== ``` **How can I have the same input and output shape?**<issue_comment>username_1: If you look at Keras' output, there are various steps which lose pixels: max pooling on odd sizes will always lose one pixel, and Conv2D using 3x3 kernels will also lose 2 pixels, although only in the layers with `padding='valid'` (the `padding='same'` layers preserve the size, which is why it doesn't happen in the downsampling steps). Intuitively, padding the original images with enough border pixels to compensate for the pixel loss due to the various layers would be the simplest solution. At the moment I can't calculate how much it should be, but I suspect rounding up to a multiple of 4 should take care of the max pooling layers. For denoising, borders could be just copied from the outermost pixels, probably with some sort of low-pass filtering to avoid artefacts. Upvotes: 1 <issue_comment>username_2: I don't know if this is the right way of doing it, but I solved the problem. Following the code from above, I've added: ``` img_size = K.int_shape(input_img)[1:] resized_image_tensor = tf.image.resize(decoded, list(img_size[:2])) autoencoder = Model(input_img, resized_image_tensor) autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy') ``` **I used tf.image.resize to synchronize the shape of the reconstructed image and the input image.** Hope it helps. Upvotes: 0
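username_1's pixel-loss argument can be checked with a few lines of shape arithmetic. This sketch assumes the standard rules: 2x2 max pooling with `padding='same'` outputs `ceil(n/2)`, a 3x3 convolution with `padding='valid'` drops 2 pixels per dimension, `padding='same'` convolutions preserve size, and 2x2 upsampling doubles each dimension:

```python
import math

def pool(n):    # MaxPooling2D((2, 2), padding='same')
    return math.ceil(n / 2)

def valid3(n):  # Conv2D with a 3x3 kernel and padding='valid'
    return n - 2

def up(n):      # UpSampling2D((2, 2))
    return n * 2

h, w = 1169, 827
h, w = pool(h), pool(w)      # -> 585 x 414, matches the summary
h, w = pool(h), pool(w)      # -> 293 x 207
h, w = valid3(h), valid3(w)  # -> 291 x 205 (the 'valid' conv loses 2 pixels)
h, w = up(h), up(w)          # -> 582 x 410
h, w = up(h), up(w)          # -> 1164 x 820
print(h, w)  # already smaller than 1169 x 827 before the final layer
```

The rounding in `ceil(n/2)` is exactly why the decoder cannot recover the odd input dimensions by doubling, which motivates either padding the input to a multiple of 4 or resizing the output as in username_2's fix.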
2020/06/24
414
1,620
<issue_start>username_0: Suppose we have a small state space and that, after about 2000 episodes, we've accurately explored the environment and know the accurate $Q$ values. In that case, why do we still leave a small probability for exploration? My guess is that this is for the case of a dynamic environment, where a bigger reward might pop up in another state. Is my assumption correct?
2020/06/25
399
1,553
<issue_start>username_0: Why is non-linearity desirable in a neural network? I couldn't find satisfactory answers to this question on the web. I typically get answers like "real-world problems require non-linear solutions, which are not trivial. So, we use non-linear activation functions for non-linearity".
2020/06/25
423
1,579
<issue_start>username_0: I am looking at a lecture on [POMDP](https://youtu.be/I2uSCTUHsUI?t=3951), and the context is that, when the quadcopter can't see the landmarks, it has to use dead reckoning. And then the lecturer mentions that the transition model is not deterministic, hence the uncertainty grows. Can transition models in an MDP be deterministic?
2020/06/26
253
1,124
<issue_start>username_0: During the learning phase, why don't we have a 100% exploration rate, to allow our agent to fully explore our environment and update the Q values, then during testing we bring in exploitation? Does that make more sense than decaying the exploration rate?<issue_comment>username_1: No - imagine if you were playing an Atari game and took completely random actions. Your games would not last very long and you would never get to experience all of the state space because the game would end too soon. This is why you need to combine exploration and exploitation to fully explore the state space. Upvotes: 2 <issue_comment>username_2: While *theoretically* you can do something like this if you're very confident you'll cover most of the state space in exploration, this is still a suboptimal strategy. Even in the case of multi-armed bandits, this strategy can be much less sample efficient than $\epsilon$-greedy, and exploration is much easier in this case. So, even if your strategy miraculously works on a decently sized MDP, it'll be worse than combining exploration and exploitation. Upvotes: 0
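username_1's point, that pure exploration wastes experience, shows up even in the simplest setting. Here is a toy sketch (a made-up two-armed Bernoulli bandit, not from the answers) comparing 100% exploration with an $\epsilon$-greedy agent, both using sample-average value estimates:

```python
import random

random.seed(1)

PROBS = [0.1, 0.9]  # made-up payoff probabilities for a two-armed bandit

def pull(arm):
    return 1.0 if random.random() < PROBS[arm] else 0.0

def run(policy, steps=2000):
    counts, values, total = [0, 0], [0.0, 0.0], 0.0
    for _ in range(steps):
        arm = policy(values)
        reward = pull(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # sample average
        total += reward
    return total

def pure_exploration(values):     # 100% exploration: ignore the estimates
    return random.randrange(2)

def eps_greedy(values, eps=0.1):  # mostly exploit, explore with prob. eps
    if random.random() < eps:
        return random.randrange(2)
    return max(range(2), key=lambda a: values[a])

print(run(pure_exploration), run(eps_greedy))
```

The pure explorer keeps pulling the bad arm half the time even after its estimates have converged, while the $\epsilon$-greedy agent collects far more reward along the way; in episodic environments the gap is worse, since random behaviour often ends episodes before the interesting states are ever reached.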
2020/06/28
454
1,895
<issue_start>username_0: I am inspired by the paper [Neural Architecture Search with Reinforcement Learning](https://arxiv.org/abs/1611.01578) to use reinforcement learning for optimizing a child network (learner). My meta-learner (controller or parent network) is an MLP and will take as the reward function a silhouette score. Its output is a vector of real numbers between 0 and 1. These values are k different possibilities for the number of clusters (the goal is to cluster the result of the child network, which is an auto-encoder; embedded images are the input to the meta-learner). What I am confused about is the environment here and how to implement this network. I was reading [this tutorial](http://karpathy.github.io/2016/05/31/rl/) and the author has used the [gym](https://gym.openai.com/) library to set up the environment. Should I build an environment from scratch myself, or is that not always needed? I appreciate any help, hints, or links to a source that helps me better understand RL concepts. I am new to it and easily get confused.
2020/06/29
631
1,784
<issue_start>username_0: I know that $G\_t = R\_{t+1} + \gamma G\_{t+1}$. Suppose $\gamma = 0.9$ and the reward sequence is $R\_1 = 2$ followed by an infinite sequence of $7$s. What is the value of $G\_0$? As it's infinite, how can we deduce the value of $G\_0$? I don't see the solution. It's just $G\_0 = 2 + 0.9\*G\_1$, and we don't know the value of $G\_1$, and we don't know $R\_2, R\_3, R\_4, ...$<issue_comment>username_1: You know all the rewards. They're 2, 7, 7, 7, and 7s forever. The problem now boils down to essentially a geometric series computation. $$ G\_0 = R\_1 + \gamma G\_1 $$ $$ G\_0 = 2 + \gamma\sum\_{k=0}^\infty 7\gamma^k $$ $$ G\_0 = 2 + 7\gamma\sum\_{k=0}^\infty\gamma^k $$ $$ G\_0 = 2 + \frac{7\gamma}{1-\gamma} = \frac{2 + 5\gamma}{1-\gamma} $$ Upvotes: 2 <issue_comment>username_2: There are a few ways to resolve the values of infinite sums. In this case, we can use a simple technique of self-reference to create a solvable equation. I will show how to do it for the generic case of an MDP with the same reward $r$ on each timestep: $$G\_t = \sum\_{k=0}^{\infty} \gamma^k r$$ We can "pop off" the first item: $$G\_t = r + \sum\_{k=1}^{\infty} \gamma^k r$$ Then we can note that the second term is just $\gamma$ times the original term: $$G\_t = r + \gamma G\_t$$ (There are situations where this won't work, such as when $\gamma \ge 1$ - essentially we are taking advantage of the fact that the high-order terms are arbitrarily close to zero, so they can be ignored)
2020/06/30
1,823
8,083
<issue_start>username_0: I have studied linear algebra, probability, and calculus twice. But I don't understand how I can reach the level at which I can read any AI paper and understand the mathematical notation in it. What is your strategy when you see a mathematical expression that you can't understand? For example, in the Wasserstein GAN article, there is a lot of advanced mathematical notation. Also, some papers are written by people who have a master's in mathematics, and those people use advanced mathematics in some papers, but I have a CS background. When you come across this kind of problem, what do you do?<issue_comment>username_1: I think the best way to make reading papers easier is to practice (as in, read lots of papers, try implementing them, etc), and to discuss them with other students/researchers. Sometimes it's tough to avoid some obscure or really technical math, so you may just need to do extra reading. The Wasserstein metric, for example, is used a lot in ML but I kinda doubt most ML researchers have a good understanding of it. This metric comes from a branch of math called "optimal transportation theory", which is super interesting, but very real-analysis-heavy. If you're really interested in learning about the Wasserstein metric, I recommend <NAME>'s book "Optimal Transport: Old and New". I also recommend [this awesome paper](https://arxiv.org/abs/1705.10743). Nevertheless, learning analysis is likely gonna serve you very well for understanding a wide range of ML papers. Finally, as a beginning grad student, I have experienced your issue as well. I made a tool to help me with this at [this repo](https://github.com/username_1/sagerank), which manages a library of papers you're interested in. It then uses a PageRank algorithm to recommend new papers to you that are commonly referred to by the papers you want to read, with the goal of helping you read up on the foundational "prerequisite" material. 
Upvotes: 2 <issue_comment>username_2: When I read papers in a new domain, and when I started reading theoretical ML papers, I faced similar problems. I usually start with the introduction, then the related work, and try to understand all the concepts and cited papers that are relevant to understanding the paper. Specifically, when it comes to difficult mathematical formulations, as @username_1 said, the more you read about them the easier it gets. There may be a set of papers with concepts that are similar to the paper you are reading but are better explained; I usually read them first (or, if it is an important mathematical concept, you can find some blogs describing the intuitions/basics behind it). Upvotes: 1 <issue_comment>username_3: From my experience (and I've been reading many research papers for a while), it's rare to find a research paper where you fully understand everything in one go, especially if the research paper was published or released recently or a very long time ago (because, back then, maybe people had a different writing style, used a different notation, or something like that), unless you are an expert on the topic, which is probably not the case, unless you are doing serious research on the topic (i.e. you're doing a Ph.D. and beyond; in that case, you probably don't need to ask questions on this site: hopefully, you have a qualified advisor to whom you can ask these questions!), or the paper is really easy and does not contain any formulas. Of course, if a paper is published, it must contain something novel, so that something novel could be one of the things that you need to spend some time to understand, but the hardest parts of a paper could also easily be the prerequisites (i.e. the concepts that the paper builds upon), because you may not have a very solid knowledge of those topics (as you probably have already experienced). There are at least three ways to proceed when you are stuck because you don't understand something: 1. 
If you can ignore what you don't understand (i.e. you don't need it for your purposes because e.g. you just need to have a high-level understanding of the topics), ignore it (really!!) 2. If it cannot be ignored (e.g. because you really need to know all the details of the paper because e.g. you need to give a presentation at your university), try to understand what you don't understand by picking up a resource on that topic and reading it; spend the time that you think is appropriate (i.e. do not spend 6.5 days to understand a detail of a paper if you only have 7 days to read that paper and prepare a presentation or whatever you need to do) 3. If you can afford it, stop reading that paper and go back to the basics. In general, learning is not an easy process and, more specifically, reading research papers is not the easiest reading (because research papers are typically concise, i.e. there's a lot of information compression), so do not expect to understand everything in a paper in one go. In fact, the paper [How to Read a Paper](https://web.stanford.edu/class/ee384m/Handouts/HowtoReadPaper.pdf) by <NAME>, which gives you some guidelines on how to read a paper, tells you to read a paper in three steps. For more details about these three steps, please, read the paper! Upvotes: 1 <issue_comment>username_4: I think the answer depends very much on **why** you are reading the paper: what are you trying to get out of it? There are plenty of papers that I "read" (or often really just quickly skim through) where I'll definitely not understand all the math. More often than not, this will be because I don't actually care to deeply understand it. There is plenty of more "practical" research to be done in AI, which definitely doesn't always require a deep understanding of all the math. Intuition can often be enough, at least to get started, for meaningful practical contributions.
If this is the sort of research that you're interested in doing, you probably don't need to understand as many of the mathematical parts of AI papers as you do if you're really trying to do research directly in that theoretical area. Personally, when I write "math-heavy" parts in my own papers (and that will often already be restricted to a rather simple level of math in comparison to the "real theory" ML papers), I always try to make sure to include intuitive, English descriptions of what we're doing around it. Even if you don't immediately understand a full equation, just having the intuitive explanation around it to tell you what it means can be enough for a broad understanding of the paper. Then you only have to dive deep into the details of the equations if -- based on the English text -- you decide that you're actually really interested. So, if there are sufficient, intuitive explanations surrounding the equations, I'd recommend focusing heavily on those first. Not every paper does this, though; sometimes there's very little text and very much math, and then this can be difficult. Even if it turns out that you do have to understand math, you may not have to understand ALL of it right away though. The important parts that I would try to focus on understanding first are: * A mathematical description of the "problem". This could be an objective function, a metric to be optimised/minimised/maximised, or an existing equation from previous literature that the authors take as a starting point and examine in greater detail. * Mathematical descriptions of the outcomes/results. These could be equations that they actually use in concrete algorithms (see if you can relate them to any pseudocode that may be present), or the final equations stated in theorems / at the end of proofs. All the complex parts in between are probably less important.
Just a vague idea of what the starting point is, and a vague understanding of the final outcome, can be enough to at least know what the paper is about. Then you can decide for yourself whether you really need to know more about the details in between, or if they're maybe not relevant to you / your work / your research. Upvotes: 2
2020/07/01
1,227
5,372
<issue_start>username_0: I am new to reinforcement learning. For my application, I have found out that if my reward function contains some negative and positive values, my model does not give the optimal solution, but the solution is not bad, as it still gives a positive reward at the end. However, if I just shift all rewards by subtracting a constant until my reward function is all negative, my model can reach the optimal solution easily. Why is this happening? I am using DQN for my application. I feel that this is also the reason why the gym environment MountainCar-v0 uses $-1$ for each time step and $0.5$ at the goal, but correct me if I am wrong.<issue_comment>username_1: You have some freedom to re-define reward schemes, whilst still describing the same goals for an agent. How this works depends to some degree on whether you are dealing with an episodic or continuing problem. Episodic problems ----------------- An episodic problem ends, and once an agent reaches the terminal state, it is guaranteed zero rewards from that point on. The optimal behaviour can therefore depend quite critically on the balance between positive and negative rewards. * If an environment contains many unavoidable negative rewards, and these outweigh total positive rewards, then the agent will be motivated to complete an episode sooner. * If an environment contains repeatable positive rewards, and these outweigh total negative rewards, then the agent will be motivated to loop through the positive rewards and not end the episode. Scaling all rewards by the same positive factor makes no difference to the goals of an agent in an episodic problem. Adding a positive or negative offset to all rewards can make a difference though. It is likely to be most notable when such a change moves rewards from positive to negative or vice versa. In the MountainCar example, adding +2 to all rewards would mean the agent would gain +1 for each time step.
Even though reaching the goal would still score the highest single reward (+2.5), doing so ends the episode and cuts off the stream of +1 per-step rewards, so reaching the goal now becomes a poor choice. The best action for the car in this modified MountainCar is to stay at the bottom of the valley collecting the +1 reward per time step forever. Continuing problems ------------------- In a continuing problem, there is no way for the agent to avoid the stream of new reward data. That means any positive scaling of all reward values, or positive or negative offset, by the same amount, has no impact on what counts as the optimal policy. The calculated value of any state under the same policy, but with rewards all transformed with the same multiplier and offset, will be different, but the optimal policy in that environment will be the same. If you scale or offset rewards differently to each other, then that can change the goals of the agent and what the optimal policy is. The balance does not really depend on whether rewards are positive or negative in a continuing environment. There may be some exceptions to this for continuing problems when using a discount factor, and setting it relatively low (compared to the typical state "cycling" length in the problem). That can cause changes in behaviour due to offsets, similar to those seen in episodic problems. If you use an average reward setting, this tends to be less relevant. Often in DQN, you will choose a high discount factor, e.g. 0.99 or 0.999, and this will tend to behave close to an average reward setting, provided rewards are not very sparse. In general ---------- In either case, if you change a reward system, and that results in an agent that consistently learns a different policy, that will usually mean one of two things: * The original reward system was incorrect. It described a goal that you did not intend, or had "loopholes" that the agent could exploit to gain more reward in a way that you did not intend.
* The implementation of the agent was sensitive in some way to absolute values of total reward. That could be due to a hyperparameter choice in something like a neural network, for example, or maybe a bug. Another possibility, that you may see if you only run a few experiments, is that the agent is not learning 100% consistently, and you are accidentally correlating your changes to the reward scheme with the noise/randomness in the results. A DQN-based agent will usually have some variability in how well it solves a problem. After training, DQN is usually only approximately optimal, and by chance some approximations are closer than others. Upvotes: 2 <issue_comment>username_2: Our paper 'Exploit Reward Shifting in Value-Based DRL' answers this question. In the case mentioned, using a negative shift will lead to explorative behaviors; therefore, the DQN agent has better convergence behavior. In general, we show that a negative (constant) shift is equivalent to optimistic value initialization, while a positive shift is equivalent to pessimistic (conservative) value initialization --- hence the former helps exploration, and the latter helps exploitation. References: [Google Site](https://sites.google.com/view/rewardshaping), [NeurIPS 2022 paper](https://neurips.cc/virtual/2022/poster/55336) Upvotes: 1
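The episodic-offset effect described in the first answer can be checked with a couple of lines of arithmetic. This is a toy sketch using the MountainCar-style numbers from the question (-1 per step, +0.5 at the goal); the helper function is made up for illustration and assumes an undiscounted episodic return.

```python
# Toy check: how a constant offset added to every reward changes which
# episode length gives the larger return (undiscounted, episodic).
def episodic_return(step_reward, goal_reward, steps, offset=0.0):
    # Every per-step reward and the terminal reward receive the same offset.
    return steps * (step_reward + offset) + (goal_reward + offset)

# Original scheme (-1 per step, +0.5 at the goal): finishing fast is best.
assert episodic_return(-1, 0.5, steps=100) > episodic_return(-1, 0.5, steps=500)

# Add +2 to every reward: each step now pays +1, so dawdling forever wins,
# even though the goal still gives the single largest reward (+2.5).
assert episodic_return(-1, 0.5, steps=500, offset=2) > episodic_return(-1, 0.5, steps=100, offset=2)
```

The same goal description thus flips from "reach the flag quickly" to "never reach the flag" purely because of a constant offset, which is the episodic pitfall the answer warns about.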
2020/07/02
309
1,331
<issue_start>username_0: Feature scaling, in general, is an important stage in the data preprocessing pipeline. Decision Tree and Random Forest algorithms, though, are scale-invariant - i.e. they work fine without feature scaling. Why is that?<issue_comment>username_1: Scaling only makes sense when there is something that reacts to that scale. Decision Trees, though, just make a cut at a certain number. Imagine: for a feature that goes from 0 to 100, a cut at 50 may improve performance. Scaling this down to 0 to 1, making the cut 0.5, doesn't change a thing. On the other hand, NNs have some kind of activation function (leaving ReLU aside) that reacts differently to input that is above 1. Here, normalization, putting every feature between 0 and 1, makes sense. Upvotes: 3 [selected_answer]<issue_comment>username_2: Feature scaling happens to be a problem when a model is characterized by having a distance metric (or another kind of numerical evaluation, for that matter). Therefore, models such as support vector machines, neural networks, distance-based clustering methods (e.g. k-means), and linear/logistic regression are prone to changes by feature scaling. Those which are based on **probability** rather than distances are scale-invariant. These include Naive Bayes classifiers and decision trees. Upvotes: 2
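The "a cut just moves with the scale" argument from the selected answer can be demonstrated with a tiny sketch: a hypothetical one-level decision stump in plain Python (no ML library). Rescaling a feature by a positive factor moves the chosen threshold by the same factor, so the predictions are unchanged.

```python
# Sketch: a decision stump's predictions don't change under feature scaling,
# because the split threshold simply scales along with the feature.
def best_stump_threshold(xs, ys):
    """Threshold t minimising misclassifications of the rule "x > t"."""
    best_t, best_err = None, float("inf")
    for t in sorted(set(xs)):
        err = sum((x > t) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0, 0, 0, 1, 1]

t = best_stump_threshold(xs, ys)                            # splits above 3.0
t_scaled = best_stump_threshold([100 * x for x in xs], ys)  # splits above 300.0

# Identical predictions on raw vs. scaled features:
assert [x > t for x in xs] == [100 * x > t_scaled for x in xs]
```

A real decision tree is just a hierarchy of such threshold tests, so the same invariance argument applies to each split, and an ensemble of trees (a Random Forest) inherits it.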
2020/07/04
446
1,913
<issue_start>username_0: Let's say we have a captcha system that consists of a greyscale picture (of a part of a street or something akin to re-captcha), divided into 9 blocks, with 2 missing pieces. You need to choose the appropriate missing pieces from over 15 possibilities to complete the picture. The puzzle pieces have their edges processed with glitch treatment as well as they have additional morphs such as heavy jpeg compression, random affine transform, and blurred edges. Every challenge picture is unique - pulled from a dataset of over 3 million images. Is it possible for the neural network to reliably (above 50%) predict the missing pieces? Sometimes these are taken out of context and require human logic to estimate the correct piece. The chance of selecting two answers in correct order is 1/15\*1/14.
2020/07/06
864
3,406
<issue_start>username_0: I am trying to build a recurrent neural network from scratch. It's a very simple model. I am trying to train it to predict two words (dogs and gods). While training, the value of the cost function starts to increase for some time; after that, the cost starts to decrease again, as can be seen in the figure. [![enter image description here](https://i.stack.imgur.com/ak7oj.jpg)](https://i.stack.imgur.com/ak7oj.jpg) I am using the gradient descent method for optimization. Decreasing the step size/learning rate does not change the behavior. I have checked the code and math again and again, and I don't think there is an error (I could be wrong). Why is the cost function not decreasing monotonically? Could there be a reason other than an error in my code/math? If there is an error, do you think that it is just a coincidence that each time the system finally converges to a very small value of error? I am a beginner in the field of machine learning, hence, many questions I have asked may seem foolish to you. I am saving the values after every 100 iterations, so the figure is actually for 15000 iterations. **About training:** I am using one-hot encoding. The training data has only two samples ("gods" and "dogs"), where each letter is represented as d=[1,0,0,0], o=[0,1,0,0], g=[0,0,1,0], s=[0,0,0,1]. The recurrent neural network (RNN) goes back to a maximum of 3 time units (e.g. for "dogs", the first input is 'd', then 'o', followed by 'g' and 's'). So, for the second input, the RNN goes back to 1 input; for the third input, the RNN observes both previous inputs, and so on. After calculating the gradients for the word "dogs", the values of the gradients are saved and the process is repeated for the word "gods". The gradients calculated for the second input "gods" are summed with the gradients calculated for "dogs" at the end of each epoch/iteration, and then the sum is used to update all the weights.
In each epoch, the inputs remain the same i.e "gods" and "dogs". In mini-batch training, in some epoch the RNN may encounter new inputs, hence, the loss may increase. However, I do not think that what I am doing qualifies as mini-batch training as there are only two inputs, both inputs are used in each epoch, and the sum of calculated gradients is used to update the weights.
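On the "could there be a reason other than an error" part of the question: a rising loss is not, by itself, proof of a bug. Even exact full-batch gradient descent can push the loss up when the fixed step size is too large relative to the local curvature of the loss surface. This toy sketch (plain gradient descent on f(w) = w², nothing RNN-specific) shows both regimes with a bug-free gradient.

```python
# Toy illustration: exact gradient descent on f(w) = w^2 with a fixed step.
# A small step shrinks the loss every iteration; a step that is too large for
# the curvature makes the loss grow, with no error in the gradient itself.
def loss_curve(lr, w=1.0, steps=6):
    losses = []
    for _ in range(steps):
        losses.append(w * w)     # current loss f(w) = w^2
        w -= lr * 2 * w          # exact gradient: f'(w) = 2w
    return losses

small = loss_curve(lr=0.1)   # w shrinks by factor 0.8 each step
large = loss_curve(lr=1.1)   # w flips and grows by factor -1.2 each step

assert all(a > b for a, b in zip(small, small[1:]))  # monotone decrease
assert all(a < b for a, b in zip(large, large[1:]))  # monotone increase
```

In a non-convex model like an RNN, the effective curvature changes from region to region, so the same fixed step can be "too large" early on and fine later, producing a rise-then-fall curve like the one in the figure.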