date: 2020/07/09 | nb_tokens: 1,113 | text_size: 4,692
<issue_start>username_0: I'm training a robot to walk to a specific $(x, y)$ point using TD3, and, for simplicity, I have something like `reward = distance_x + distance_y + standing_up_straight`, which is then added to the replay buffer. However, I think it would be more efficient if the agent could break the reward down by category, so it could figure out "that action gave me a good `distance_x`, but I still need work on `distance_y` and `standing_up_straight`". Are there any existing algorithms that handle rewards this way? Or have these been tested and proven ineffective?<issue_comment>username_1: If I understood correctly, you're looking at [Multi-Objective Reinforcement Learning](https://ewrl.files.wordpress.com/2015/02/ewrl12_2015_submission_2.pdf) (MORL). Keep in mind, however, that many scientists follow the *reward hypothesis* (Sutton and Barto), which says that

> *All of what we mean by goals and purposes can be well thought of as the maximization of the expected value of the cumulative sum of a received scalar signal (called reward)*

The argument for a scalar reward is that even if you define your policy using some objective *vector* (as in MORL), you will find a *Pareto front* of optimal policies, some of which favour one component of the objective over another, leaving you (the scientist) responsible for making the ultimate decision about the objectives' trade-off, thus eventually collapsing the reward objective back into a scalar. In your example there might be two different "optimal" policies: one which results in a very high value of `distance_x` but relatively poor `distance_y`, and one that favours `distance_y` instead. It will be up to you to find the sweet spot and collapse the reward function back to a scalar. Upvotes: 4 [selected_answer]<issue_comment>username_2: I agree with Tomasz that the approach you are describing falls within the field of MORL. 
For a solid introduction to MORL I would recommend the survey by <NAME>., <NAME>., <NAME>., & <NAME>. (2013). A survey of multi-objective sequential decision-making. Journal of Artificial Intelligence Research, 48, 67-113. <https://www.jair.org/index.php/jair/article/view/10836> (disclaimer: I'm an author of this, but I genuinely believe it will be useful to you).

Our survey provides arguments for the need for multi-objective methods by describing three scenarios where agents using single-objective RL may be unable to provide a satisfactory solution that matches the needs of the user. Briefly, these are: (a) the unknown weights scenario, where the required trade-off between the objectives isn't known in advance, so, to be effective, the agent must learn multiple policies corresponding to different trade-offs and then, at run-time, select the one which matches the current preferences (e.g. this can arise when the objectives correspond to different costs which vary in relative price over time); (b) the decision support scenario, where scalarization of a reward vector is not viable (for example, in the case of subjective preferences which defy explicit quantification), so the agent needs to learn a set of policies and then present these to a user who will select their preferred option; and (c) the known weights scenario, where the desired trade-off between objectives is known, but its nature is such that the returns are non-additive (i.e. if the user's utility function is non-linear), and therefore standard single-objective methods based on the Bellman equation can't be directly applied.

We propose a taxonomy of MORL problems in terms of the number of policies they require (single or multi-policy), the form of utility/scalarization function supported (linear or non-linear), and whether deterministic or stochastic policies are allowed, and relate this to the nature of the set of solutions which the MO algorithm needs to output. This taxonomy is then used to categorise existing MO planning and MORL methods.

One final important contribution is identifying the distinction between maximising Expected Scalarised Return (ESR) and Scalarised Expected Return (SER). The former is appropriate in cases where we are concerned about the results within each individual episode (for example, when treating a patient, that patient will only care about their own individual experience), while SER is appropriate if we care about the average return over multiple episodes. This has turned out to be a much more important issue than I anticipated at the time of the survey, and <NAME> and his colleagues have examined it more closely since then (e.g. <http://roijers.info/pub/esr_paper.pdf>) Upvotes: 2
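The trade-off username_1 describes can be shown with a tiny sketch (pure Python; the policy names, vector returns, and weight vectors are illustrative, not taken from the post): each preference weighting linearly scalarises the vector return, and different weightings single out different Pareto-optimal policies.

```python
# Hypothetical vector returns for two policies, with components
# [distance_x, distance_y, standing_up_straight] (illustrative values).
policy_returns = {
    "policy_A": [0.9, 0.2, 0.5],  # strong on distance_x
    "policy_B": [0.3, 0.8, 0.5],  # strong on distance_y
}

def scalarise(vector_return, weights):
    """Linear scalarisation: collapse a vector return to a scalar reward."""
    return sum(r * w for r, w in zip(vector_return, weights))

# Two preference weightings over the same objectives.
prefer_x = [0.7, 0.2, 0.1]
prefer_y = [0.2, 0.7, 0.1]

best_for_x = max(policy_returns, key=lambda p: scalarise(policy_returns[p], prefer_x))
best_for_y = max(policy_returns, key=lambda p: scalarise(policy_returns[p], prefer_y))
# Different weightings pick different Pareto-optimal policies.
```

Neither policy dominates the other on every component, so the choice of weights (i.e. the scientist's preference) decides the winner.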
date: 2020/07/10 | nb_tokens: 1,082 | text_size: 4,646
<issue_start>username_0: I actually went through Keras' batch normalization tutorial, and the description there puzzled me more. Here are some facts about batch normalization that I read recently and would like a deeper explanation of: 1. If you freeze all layers of a neural network at their randomly initialized weights, except for the batch normalization layers, you can still get 83% accuracy on CIFAR10. 2. When setting a batch normalization layer's `trainable` attribute to false, it will run in inference mode and will not update its mean and variance statistics.
date: 2020/07/11 | nb_tokens: 1,097 | text_size: 4,538
<issue_start>username_0: I have developed a basic feedforward neural network from scratch to classify whether an image is of a cat or not. It works fine, but after 2500 iterations my cost function is not reducing properly. The loss function I am using is $L(\hat{y},y) = -y\log\hat{y}-(1-y)\log(1-\hat{y})$. Can you please point out where I am going wrong? The link to the notebook is <https://www.kaggle.com/sidcodegladiator/catnoncat-nn>.
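For reference, the loss in the question is binary cross-entropy. A minimal plain-Python sketch (the `eps` clip is a common numerical-stability trick, not something stated in the question):

```python
import math

def binary_cross_entropy(y_hat, y, eps=1e-12):
    """L(y_hat, y) = -y*log(y_hat) - (1 - y)*log(1 - y_hat).

    y is the 0/1 label, y_hat the predicted probability; eps keeps
    y_hat away from exactly 0 or 1 so log() never receives 0.
    """
    y_hat = min(max(y_hat, eps), 1.0 - eps)
    return -y * math.log(y_hat) - (1 - y) * math.log(1 - y_hat)

# A confident correct prediction costs little; a confident wrong one costs a lot.
low = binary_cross_entropy(0.9, 1)   # ~0.105
high = binary_cross_entropy(0.1, 1)  # ~2.303
```

If the cost plateaus, it is worth checking that gradients are computed against exactly this loss and that predictions are not saturating at 0 or 1.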
date: 2020/07/13 | nb_tokens: 707 | text_size: 2,982
<issue_start>username_0: Should the training data be the same in each epoch? If the training data is generated on the fly, for example, is there a difference between training on 1000 samples for 1 epoch or training for 1000 epochs with 1 sample each? To elaborate further, samples do not need to be saved or stay in memory if they are never used again. However, if training performs best by training over the same samples repeatedly, then data would have to be stored to be reused in each epoch. More samples is generally considered advantageous. Is there a disadvantage to never seeing the same sample twice in training?<issue_comment>username_1: Let's quickly get out our copies of [**Deep Learning**](https://www.deeplearningbook.org/contents/optimization.html) by *Goodfellow et al.* (2016). More specifically, I'm referring to [page 276](https://www.deeplearningbook.org/contents/optimization.html). On this page, the authors argue for a relatively small minibatch size, since there are *less than linear returns* from estimating the gradient with a larger minibatch. *Returns* here refer to the reduction of the standard error of the mean (gradient per weight) computed over a minibatch.

So, yes: in *theory*, with unlimited resources, you get the best performance by averaging the loss over all samples in your dataset. In practice, however, the larger the minibatches, the slower the training procedure, and consequently the fewer total weight updates that can be afforded. Conversely, in *practice*, the cheaper the weight updates, the quicker the training procedure can converge to a (subjectively) satisfactory result. Eventually, Goodfellow et al. also state that *rapidly* computing gradients leads to *much* faster convergence (in terms of total computation) for most optimization algorithms than training slowly on exact gradients.

So, to summarize: if the main concern is to reach a specific level of accuracy at all, go for rather low minibatch sizes, whereas you could go up to a few hundred (which Goodfellow et al. give as a reasonable upper bound on page 148) if you are interested in more accurate gradients for your weight updates. Upvotes: 3 <issue_comment>username_2: This would be more suitable as a comment, but I don't have enough points; here's my opinion. Optimisation algorithms like gradient descent are iterative, so it is rarely possible for them to arrive at the minimum in 1 epoch. A single epoch means that all data points have been visited once, or that a certain number of samples have been drawn from a distribution; however, more passes might be necessary.

> generated on the fly

I am assuming that the data is being generated from a fixed distribution. Hence multiple epochs over multiple samples is still the ideal scenario. 1000 samples, 1 epoch: not enough training. 1 sample, 1000 epochs: overfitting, or possibly not enough training. Upvotes: 0
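The "less than linear returns" argument can be checked numerically: the standard error of a minibatch mean shrinks as $1/\sqrt{n}$, so quadrupling the batch size only halves the gradient-estimate noise. A quick sketch, where standard-normal samples stand in for per-sample gradients:

```python
import random
import statistics

random.seed(0)

def sem_of_minibatch_mean(batch_size, trials=2000):
    """Empirical standard error of the minibatch mean over many minibatches."""
    means = [
        statistics.fmean(random.gauss(0.0, 1.0) for _ in range(batch_size))
        for _ in range(trials)
    ]
    return statistics.pstdev(means)

# SEM ~ sigma / sqrt(n): going from n=16 to n=64 roughly halves the error.
sem_16 = sem_of_minibatch_mean(16)
sem_64 = sem_of_minibatch_mean(64)
```

Four times the compute per update buys only a factor-of-two noise reduction, which is the book's case for keeping minibatches modest.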
date: 2020/07/16 | nb_tokens: 887 | text_size: 3,436
<issue_start>username_0: I'm looking for intuition in simple words but also some simple insights (I don't know if the latter is possible). Can anybody shed some light on the Turing test?<issue_comment>username_1: The **Turing test** is a test proposed by <NAME> (one of the founders of computer science and artificial intelligence), described in section 1 of the paper [Computing Machinery and Intelligence](https://academic.oup.com/mind/article/LIX/236/433/986238) (1950), to answer the question

> **Can machines think?**

More precisely, the Turing test was originally framed as an interactive quiz (called the **imitation game** by Turing) where a human interrogator $C$ asks multiple questions of two entities, $A$ (a computer) and $B$ (a human), who are in different rooms from the interrogator, so the interrogator cannot see them, in order to figure out which one is $A$ (the computer) and which one is $B$ (the human). $A$ and $B$ can only communicate in written form, or any form that prevents them being easily recognized by $C$. The goal of the computer is to fool the interrogator into believing that it is a human, and the goal of $B$ is to somehow help the interrogator believe that he/she is the actual human. If the computer is able to fool the interrogator into believing that it is a human, that would be an indication that machines can think. However, note that even Turing called this game the ***imitation** game*, so Turing was aware of the fact that this game would only really show that a machine can **imitate** a human (unless he was using the term "imitation" differently from its current meaning).

Nowadays, there are different variations of the Turing test, and some people use the term *Turing test* to refer to any test that attempts to tell humans and computers apart. For example, some people consider the **CAPTCHA** test a Turing test. In fact, [CAPTCHA stands for "*Completely Automated Public **Turing Test** To Tell Computers and Humans Apart*"](http://www.captcha.net/).

The Turing test also has different interpretations and meanings. Some people think that the Turing test is sufficient to show that a machine can actually think and possesses consciousness, others think that it only tests human-like intelligence (and there could be other intelligences), and some people (like me) think that this test is limited and only tests the conversational skills (and maybe other properties too) of the machine. Even Turing attempted to address these issues in the same paper (section 2), where he discusses some advantages and disadvantages of his imitation game. In any case, we can all agree that, if machines (in particular, programs like Siri, Google Home, Cortana, or Alexa) were always able to pass the Turing test, they would be a lot more useful, interesting and entertaining than they are now. Upvotes: 3 [selected_answer]<issue_comment>username_2: According to [Wikipedia](https://en.wikipedia.org/wiki/Turing_test)

> The "standard interpretation" of the Turing test, in which player C, the interrogator, is given the task of trying to determine which player – A or B – is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination.

[![Turing test diagram](https://i.stack.imgur.com/BqThf.png)](https://i.stack.imgur.com/BqThf.png) Upvotes: 1
date: 2020/07/16 | nb_tokens: 1,256 | text_size: 4,473
<issue_start>username_0: I'm doing some introductory research on classical (stochastic) MABs. However, I'm a little confused about the common notation (e.g. in the popular papers of [Auer (2002)](https://homes.di.unimi.it/cesa-bianchi/Pubblicazioni/ml-02.pdf) or [Bubeck and Cesa-Bianchi (2012)](https://arxiv.org/pdf/1204.5721.pdf)). As in the latter study, let us consider an MAB with a finite number of arms $i\in\{1,...,K\}$, where at every timestep $t=1,...,n$ an agent chooses an arm $I\_t$, which generates a reward $X\_{I\_t,t}$ according to a distribution $v\_{I\_t}$. In my understanding, each arm has an inherent distribution, which is unknown to the agent. Therefore, I'm wondering why the notation $v\_{I\_t}$ is used instead of simply $v\_{i}$? Isn't the distribution independent of the **time** the arm $i$ was chosen? Furthermore, I ask myself: why not simply use $X\_i$ instead of $X\_{I\_t,t}$ (in terms of rewards)? Is it because the chosen arm at step $t$ (namely $I\_t$) is a random variable and $X$ depends on it? If I am right, why is $t$ used twice in the index (namely $I\_t,t$)? Shouldn't $X\_{I\_t}$ be sufficient, since $X\_{I\_t,m}$ and $X\_{I\_t,n}$ are drawn from the same distribution?
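It may help to simulate the process behind the notation: each arm's distribution $v_i$ is indeed fixed in time, but the chosen arm $I_t$ is itself a random variable, and every step produces a fresh, independent draw, so the second index $t$ distinguishes the draws, not the distributions. A minimal sketch (Bernoulli arms with illustrative probabilities):

```python
import random

random.seed(1)

# Fixed, time-independent reward distributions v_i: Bernoulli(p_i).
arm_probs = [0.2, 0.5, 0.8]  # illustrative values

def pull(i):
    """One draw X ~ v_i; v_i depends only on the arm i, never on t."""
    return 1 if random.random() < arm_probs[i] else 0

history = []
for t in range(5):
    I_t = random.randrange(len(arm_probs))  # I_t is a random variable
    X = pull(I_t)                           # X_{I_t, t}: a fresh draw at step t
    history.append((t, I_t, X))
```

Note that `pull(I_t)` at two different timesteps gives two independent samples from the same $v_{I_t}$, which is exactly why $X_{I_t,m}$ and $X_{I_t,n}$ carry the time index even though they share a distribution.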
date: 2020/07/17 | nb_tokens: 813 | text_size: 3,336
<issue_start>username_0: > There was a lot of negative news on [Artificial Intelligence](https://www.lasserouhiainen.com/7-questions-artificial-intelligence/). Most people were first exposed to the idea of artificial intelligence through Hollywood movies, long before they ever started seeing it in their day-to-day lives. This means that many people misunderstand the technology. When they think about common examples that they’ve seen in movies or television shows, they may not realize that the killer robots they’ve seen were created to sell emotional storylines and drive the entertainment industry, rather than to reflect the actual state of AI technology.

There are a few questions on our SE about how AI impacts/harms humankind. For example, [How could artificial intelligence harm us?](https://ai.stackexchange.com/a/15478/2444) and [Could artificial general intelligence harm humanity?](https://ai.stackexchange.com/a/10504/2444) However, now I'm looking for the positive impacts of AI on humans. How could AI help humankind?<issue_comment>username_1: We've already seen significant progress in fields that we could not even come close to prior to the explosion in AI research. For example, [the automated identification of cancerous tumours in lung tissue](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6895901/) could save countless lives, as a computer never tires and has no bias. Incredible advances in [speech synthesis](https://deepmind.com/blog/article/wavenet-generative-model-raw-audio) can allow people who lost, or never had, a voice to have their own human-sounding voice. Personally, I also predict that we will soon be able to synthesise music to an individual's tastes. Advances in rapid image processing allow for advanced guidance systems and autonomous cars that are only improving. Ranking algorithms allow for finely tuned results for a user's search. Unsupervised techniques can help us flag suspicious activity in banking records to catch fraudulent transactions. There is a very, very long list here, and this only scratches the surface. AI is already benefiting us enormously, to the point where very soon, if not already, we will be so dependent on learning techniques that if they were outlawed, entire systems might collapse. Upvotes: 0 <issue_comment>username_2: For good or bad, AI is the next step in automation. The impact that is already visible, and that trends show will continue, is the eradication of repetitive and body-straining labor. Hopefully the transformation will be gradual enough for the global labor market to re-adjust; otherwise, we'll face a problem of growing unemployment. It seems to me that we've become aware enough to foresee bad outcomes of our inventions, hence in almost every dimension affected by AI a plausible and either positive or negative future can be presented, depending on the sentiment of the storyteller. Regardless of what different experts and sci-fi writers tell us about the future, actually predicting it is a futile endeavour: predictions made for a dynamic system, even when we have a lot of data and good models (as with the weather), become unreliable just a few weeks ahead. Upvotes: 2 [selected_answer]
date: 2020/07/17 | nb_tokens: 654 | text_size: 2,805
<issue_start>username_0: I am working on OpenAI's "MountainCar-v0" environment. In this environment, each step that an agent takes returns (among other values) a boolean variable named `done`. The variable is `True` when the episode ends. However, I am not sure how each episode ends. My initial understanding was that an episode should end when the car reaches the flagpost. However, that is not the case. What are the states/actions under which the episode terminates in this environment?
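In `MountainCar-v0`, `done` becomes `True` either when the car's position reaches the goal (position ≥ 0.5) or when Gym's `TimeLimit` wrapper cuts the episode off at 200 steps, which is why episodes end even without reaching the flag. A dependency-free sketch of that termination logic (the constants match the registered environment at the time of writing; check your installed Gym version):

```python
# Sketch of MountainCar-v0's termination logic (no gym dependency).
GOAL_POSITION = 0.5        # car position at the flag
MAX_EPISODE_STEPS = 200    # TimeLimit registered for MountainCar-v0

def episode_done(position, step_count):
    """done = reached the flag OR ran out of the step budget."""
    reached_goal = position >= GOAL_POSITION
    timed_out = step_count >= MAX_EPISODE_STEPS
    return reached_goal or timed_out
```

An untrained agent almost always hits the 200-step limit, so `done=True` there signals a timeout rather than success.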
date: 2020/07/18 | nb_tokens: 607 | text_size: 2,630
<issue_start>username_0: Is there an upper limit to the maximum cumulative reward in a deep reinforcement learning problem? For example, you want to train a DQN agent in an environment, and you want to know what the highest possible value you can get from the cumulative reward is, so you can compare this with your agents performance.<issue_comment>username_1: We've already seen significant progress in fields that we could not even come close to prior to the explosion in AI research. For example, [the automated identification of cancerous tumours in lung tissues](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6895901/) could save countless lives, as a computer never tires and has no bias. Incredible advances in [speech synthesis](https://deepmind.com/blog/article/wavenet-generative-model-raw-audio) can allow people who lost or never had a voice to have their own human-sounding voice. Personally, I also predict soon we will be able to synthesise music to an individual's tastes. Advances in rapid image processing, thanks to the polynomial complexity of machine learning techniques, allows for advanced guidance systems and autonomous cars that are only improving. Ranking algorithms for listings allow for finely tuned results for a user's search. Unsupervised techniques can help us separate suspicious activity in banking records to catch fraudulent transactions. There is a very, very long list here and this only scratches the surface. AI is already benefiting us enormously, to the point where very soon, if not already, I would say we will become so dependant on learning techniques that if they were outlawed, entire systems may collapse. Upvotes: 0 <issue_comment>username_2: For good or bad, AI is the next step in automation. The impact which is already visible, and trends show will continue in the future, is the eradication of repetitive and body-straining labor. 
Hopefully, the transformation will be gradual enough for the global labor market to re-adjust; otherwise, we'll face a problem of growing unemployment. It seems to me that we've become aware enough to foresee bad outcomes of our inventions, hence in almost every dimension affected by AI, a plausible and either positive or negative future can be presented, depending on the sentiment of the storyteller. Regardless of what different experts and sci-fi writers tell us about the future, actually predicting it is a futile endeavour, considering that predictions for a dynamic system, even when we have a lot of data and good models (as with the weather), become unreliable just a few weeks ahead. Upvotes: 2 [selected_answer]
2020/07/20
1,501
6,156
<issue_start>username_0: Everybody is implementing and using DNN with, for example, TensorFlow or PyTorch. I thought IBM's Deep Blue was an ANN-based AI system, but [this article](https://analyticsindiamag.com/understanding-difference-symbolic-ai-non-symbolic-ai/) says that IBM's Deep Blue was symbolic AI. Are there any special features in symbolic AI that explain why it was used (instead of ANN) by IBM's Deep Blue?<issue_comment>username_1: ANNs as used today need 1. a lot of data 2. a lot of computational power. Before we had any of the above two, we didn't really know how to properly build ANNs, since we didn't quite have the means to train the network, and thus couldn't evaluate it. "Symbolic AI", on the other hand, is very much just a bunch of if-else/logical conditions, much like regular programming. You don't need to think too much about the whole "symbolic" part of it. The main/big breakthrough is that you had a lot of clever "search algorithms" and a lot of computation power relative to before. The point is just that symbolic AI was the main research program at the time, and people didn't really bother with "connectionist" methods. Upvotes: 3 [selected_answer]<issue_comment>username_2: You might also ask if there's any particular reason why we *would* use a neural net. If we're to train a neural net to play chess, we need to be able to: **1.** Feed it positions as input vectors (easy enough), **2.** Decide on an output format. Perhaps a distribution over possible moves (but then, how do we represent that such that the meaning of a specific output cell doesn't change drastically based on the board state?) Or perhaps instead, we let the resulting board state after a candidate move be the input, and let the output be a score that represents the desirability of that state. That'll require exponentially more forward/backprop passes, though. **3.** Provide it with an error signal to whatever output vector it produces.
This is the really tricky bit, since we don't *know* whether a given move will result in victory until the very end. Do we play the game to the very end, storing decisions as we go, and then at the end, replay each input, feeding it an error signal if we lost? This will give the same error to the good moves as to the ones that actually lost the game. With enough games, this will work, since the good moves will get positive feedback a bit more often than negative, and vice versa for the bad ones. But it'll take a lot of games. More than a human is going to be willing to play. We can have different networks learn by playing against each other, but not on 1996 hardware. Do we instead provide a score based on another heuristic of the board state? In that case, why not just use [minimax](https://en.wikipedia.org/wiki/Minimax)? It's provably optimal for a given heuristic up to however many moves deep we look, and it doesn't need training. --- Add to this the fact that if we don't choose a good representation at each of these steps, there's a good chance that the network will only learn the positions it's specifically been trained on, rather than generalizing to unseen states, which is the main reason for using a neural network in the first place. It's certainly possible to use neural nets to learn chess (DeepMind's approach can be found [here](https://arxiv.org/abs/1712.01815), for instance), but they're not a natural fit to the problem by any means. Minimax, by contrast, fits the problem very well, which is why it was one of the techniques used by Deep Blue. Neural nets are an amazing tool, but they're not always the right tool for the job. Addendum: I didn't stress this point much, since K.C. already brought it up, but training large neural nets require us to perform a huge number of matrix vector multiplications, and this wasn't especially practical before GPUs got powerful and cheap. 
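To make the minimax comparison concrete, here is a minimal sketch of plain minimax over a toy game tree. The `Node` class and its leaf scores are invented for illustration; this is not Deep Blue's actual evaluation or search code:

```python
class Node:
    """Toy game-tree node: leaves carry hand-crafted heuristic scores."""
    def __init__(self, score=None, children=()):
        self.score = score
        self._children = list(children)
    def is_terminal(self):
        return not self._children
    def heuristic(self):
        return self.score
    def children(self):
        return self._children

def minimax(state, depth, maximizing):
    """Best achievable heuristic score, searching `depth` plies ahead."""
    if depth == 0 or state.is_terminal():
        return state.heuristic()          # fixed evaluation function, no training
    scores = [minimax(c, depth - 1, not maximizing) for c in state.children()]
    return max(scores) if maximizing else min(scores)

# Maximizer to move; the minimizer then picks the worst leaf in each branch.
tree = Node(children=[Node(children=[Node(3), Node(5)]),   # minimizer -> 3
                      Node(children=[Node(2), Node(9)])])  # minimizer -> 2
best = minimax(tree, depth=2, maximizing=True)             # -> 3
```

Note that the only domain knowledge needed is the heuristic at the leaves, which is exactly why no training phase is required.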
Upvotes: 2 <issue_comment>username_3: * I'm not sure any intelligent mechanism can be entirely free of symbolic logic. > > Even where a decision is statistically based, a machine that takes actions must include some form of: > > > IF {some condition} > > THEN {some action} > > > As to the popularity of newly proven statistical AI methods (ANN and genetic algorithms), this derives from the greater utility they demonstrate at ever more complex problems compared to expert systems ("good old fashioned AI") for problems that do not have a mathematical solution. (i.e. the statistical approach for 3x3 Tic-tac-toe is overkill and unnecessary b/c the 3x3 form is a [solved game](https://en.wikipedia.org/wiki/Solved_game). But for larger-order gameboards $m\*m$ or $m\*n$, the n-dimensional game, $m^n$, barring a mathematical solution that applies to every variation, ANN is the way to go.) The main issue with expert systems, no matter how complex, is "brittleness"—inability to adapt to changes without human programmer intervention. As conditions change, the mechanism demonstrates diminishing utility, or simply "breaks" (invalid input as an example.) * The amount of human effort required to create Deep Blue was monumental, which is why it took decades to achieve its goal, funded by a large corporation with a history of basic research. Compare to a simple ANN that can be trained to achieve the same goal in an extremely short timeframe. It's possible that future artificial general intelligences of whatever strength would involve statistical AI programming and adapting their own symbolic functions. Finally, symbolic AI is still vastly more widely implemented than statistical AI, in that all of the basic functions of modern computing, all of the mathematical functions, all traditional software and apps, utilize symbolic logic, even if the high-level function is statistically driven. This will likely always be the case.
Thus, in terms of what method is best for a given problem, it really depends on the nature/structure of the problem, its solvability or even [decidability](https://en.wikipedia.org/wiki/Decidability_(logic)), as well as its [tractability](https://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability). Upvotes: 2
2020/07/22
313
1,177
<issue_start>username_0: For the past few days, I have been trying to learn graph convolutional networks. I have watched some of the lectures on YouTube, but I have not been able to get a clear concept of how those networks are trained. I have a vague understanding of how to perform the convolution, but I cannot understand how we train them. I want a solid mathematical understanding of graph convolutional networks. So, can anyone please suggest how to learn graph convolutional networks, from beginner to expert level?<issue_comment>username_1: I believe the [*Graph Representation Learning*](http://web.archive.org/web/20201126231906/https://www.cs.mcgill.ca/%7Ewlh/grl_book/) book by <NAME> is a great resource to start. Upvotes: 3 [selected_answer]<issue_comment>username_2: There is also the proto-book [Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges](https://arxiv.org/pdf/2104.13478.pdf) (2021), written by some of the experts on the topic. The book does not focus only on graphs and graph neural networks (GNNs), but also covers manifolds, geodesics, and other mathematical concepts related to geometric deep learning and other GDL models. Upvotes: 1
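As a complement to the book recommendations, the core of one graph convolutional layer (the propagation rule popularised by Kipf & Welling) fits in a few lines of numpy; the toy 3-node graph and random weights below are illustrative assumptions, and "training" then simply means fitting `W` by backpropagation on a node- or graph-level loss, exactly as with any dense layer:

```python
import numpy as np

# One GCN layer: H' = relu( D^{-1/2} (A + I) D^{-1/2} H W )
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)            # adjacency of a 3-node path graph
A_hat = A + np.eye(3)                              # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt           # symmetric normalisation
H = np.eye(3)                                      # initial node features (one-hot)
W = np.random.default_rng(0).normal(size=(3, 2))   # the learnable weight matrix
H_next = np.maximum(A_norm @ H @ W, 0.0)           # neighbourhood-averaged, transformed features
```

Each row of `H_next` is a new embedding for one node, mixing that node's features with its neighbours'; stacking such layers and minimising a classification loss over labelled nodes is how the network is trained.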
2020/07/23
619
2,080
<issue_start>username_0: Suppose I have a model that was trained with a dataset that contains the features `(f1, f2, f3, f4, f5, f6)`. However, my test dataset does not contain all features of the training dataset, but only `(f1, f2, f3)`. How can I predict the true label of the entries of this test dataset without all features?<issue_comment>username_1: I assume you trained your model on `(f1, f2, f3, f4, f5, f6)` and in your test data you sometimes have `(f1, f2, f3)` and sometimes have, for example, `(f1, f2, f3, f4, f5, f6)`, right? Because if your test data always have `(f1, f2, f3)`, then isn't it better to just train a model on the available features? So, if my assumption is correct, what I would do is manipulate the training set a bit, keeping some training samples with `(f1, f2, f3, f4, f5, f6)` and others with `(f1, f2, f3)`, replacing the real values in their `(f4, f5, f6)` by e.g. the mean of the respective feature. So all training samples still have `(f1, f2, f3, f4, f5, f6)`, but some of them have manipulated `(f4, f5, f6)`. Then, finally, when testing, apply the same manipulation to those test data that have a smaller number of features. I think this way your model learns how to predict based on `(f1, f2, f3)` when the other features are not available, but at the same time takes advantage of all features if they are all available. It's probably not the best approach, but it's worth a try. Upvotes: 0 <issue_comment>username_2: Assuming that you have access to the training data set, you could use an [autoencoder network](https://medium.com/pytorch/implementing-an-autoencoder-in-pytorch-19baa22647d1) to predict what features f4, f5, f6 'could be' for the test data set. The way to do this is to train the autoencoder on the training data set with features f1, f2, f3 as inputs, and then use f1,f2,f3,f4,f5,f6 as the output of the network. The autoencoder then effectively learns to map any input samples with (f1,f2,f3) to (f1,f2,f3,f4,f5,f6).
By passing your test data through the autoencoder, you can then use the output and pass it to your model. Upvotes: 1
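A minimal sketch of this imputation idea on synthetic data, using a linear least-squares map from (f1, f2, f3) to (f4, f5, f6) as a simplified stand-in for the non-linear autoencoder (all data and the linearity assumption are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic training data: f4..f6 are noisy linear functions of f1..f3.
X = rng.normal(size=(500, 3))                       # f1, f2, f3
W_true = rng.normal(size=(3, 3))
Y = X @ W_true + 0.01 * rng.normal(size=(500, 3))   # f4, f5, f6

# Fit the mapping (f1,f2,f3) -> (f4,f5,f6); an autoencoder generalises this
# to non-linear maps learned by backpropagation.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

X_test = rng.normal(size=(10, 3))                   # test set: only f1, f2, f3
Y_hat = X_test @ W                                  # imputed f4, f5, f6
full = np.hstack([X_test, Y_hat])                   # (f1..f6) to feed the original model
```

The reconstructed `full` matrix can then be passed to the model trained on all six features.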
2020/07/26
395
1,607
<issue_start>username_0: I have an NLP model for answer-extraction. So, basically, I have a paragraph and a question as input, and my model extracts the span of the paragraph that corresponds to the answer to the question. I need to know how to compute the F1 score for such models. It is the standard metric (along with Exact Match) used in the literature to evaluate question-answering systems.<issue_comment>username_1: It really depends on what you are looking for your model to do. For example, are false negatives or false positives really costly for your research (or your business)? Also, it's very important to consider your label (class) distribution. If you just want to achieve the highest accuracy, and you don't have any issue with your class distribution (which I believe you probably don't have in your case), then accuracy works pretty well. The F1 score might be a better option to use if you need to seek a balance between precision and recall and there is an uneven class distribution. Upvotes: 0 <issue_comment>username_2: In QA, it's computed over the individual words in the prediction against those in the true answer. The number of shared words between the prediction and the truth is the basis of the F1 score: precision is the ratio of the number of shared words to the total number of words in the prediction, and recall is the ratio of the number of shared words to the total number of words in the ground truth. [Source](https://qa.fastforwardlabs.com/no%20answer/null%20threshold/bert/distilbert/exact%20match/f1/robust%20predictions/2020/06/09/Evaluating_BERT_on_SQuAD.html#F1) Upvotes: 2
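The token-overlap computation described above can be sketched as follows (whitespace tokenisation and lower-casing are simplifying assumptions; the official SQuAD evaluation script additionally strips punctuation and articles before comparing):

```python
from collections import Counter

def qa_f1(prediction: str, truth: str) -> float:
    """Token-level F1 between a predicted answer span and the ground truth."""
    pred_tokens = prediction.lower().split()
    true_tokens = truth.lower().split()
    # Multiset intersection counts each shared word at most min(count) times.
    common = Counter(pred_tokens) & Counter(true_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(true_tokens)
    return 2 * precision * recall / (precision + recall)

# precision = 3/3, recall = 3/6, so F1 = 2/3
score = qa_f1("the cat sat", "the cat sat on the mat")
```

When a question has several acceptable ground-truth answers, the convention is to take the maximum F1 over them.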
2020/07/27
845
3,244
<issue_start>username_0: What is the difference between vanilla policy gradient (VPG) with the value function as a baseline and advantage actor-critic (A2C)? By vanilla policy gradient I am specifically referring to Spinning Up's explanation of VPG.<issue_comment>username_1: The difference between Vanilla Policy Gradient (VPG) with the value function as a baseline and Advantage Actor-Critic (A2C) is very similar to the difference between Monte Carlo Control and SARSA: * The value estimates used in updates for VPG are based on full sampled returns, calculated at the end of episodes. * The value estimates used in updates for A2C are based on temporal-difference targets, *bootstrapped* from e.g. a single-step difference and the Bellman equation. This leads to the following practical differences: * A2C can learn *during* an episode, which can lead to faster refinements in policy than with VPG. * A2C can learn in continuing environments, whilst VPG cannot. * A2C relies on initially biased value estimates, so can take more tuning to find hyperparameters for the agent that allow for stable learning. Whilst VPG typically has higher variance and can require more samples to achieve the same degree of learning. Upvotes: 4 [selected_answer]<issue_comment>username_2: Given the formula for the policy gradient with baseline: $$ \nabla J(\theta) = \mathbb{E}\_{a,s \sim \pi\_\theta} \bigg[ \nabla \log \pi\_\theta(a|s) \Big(R(s, a) - V\_\phi(s) \Big) \bigg] $$ How do you compute the return $R(s,a)$? If you use a simple Monte-Carlo estimate, i.e. $R = \sum\_{t=t'}^{T} r\_{t+1}$, then you get the "vpg with baseline" as it is called in the Spinning Up documentation. Note that in this case you have to roll out the episode until it is finished; otherwise, you can't compute the return! If you use a one-step bootstrapped estimate, i.e.
$ R(s,a) = r + \gamma V\_\phi(s')$, then you would get the actor-critic setup, where we have an actor (the policy network) that selects actions to perform the rollout and a critic (the value network) that is used to compute the returns, i.e. it grades the performance. And since you are baseline-ing with the value function, you actually get an advantage actor-critic. Note that now you can calculate the return without the episode being finished. Thus, you could step the environment for a few steps, then update the policy, and then continue stepping. It's all about how you calculate the return. As a side note: I really don't like the fact that people use "vanilla policy gradient" to mean that they are using a Monte-Carlo estimate for the return. In my opinion, "vanilla" policy gradient means that you perform a "vanilla" update of the policy using the calculated gradient, i.e.: $$ \theta\_{new} = \theta + \alpha \nabla J(\theta).$$ Instead of a "vanilla" update you could update the weights using the natural gradient (TRPO) or you could perform multiple clipped updates (PPO) (see [here](https://username_2.github.io/posts/actor-critic/#ppo) for more). There are of course other types of policy gradient algorithms, but the idea is that once you have the gradient estimate you do not perform a simple update in the direction of the gradient, but instead do something more sophisticated with it. Upvotes: 0
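The two ways of computing the return can be sketched side by side; the reward and value numbers below are invented, with `values` standing for hypothetical critic outputs $V_\phi(s_t)$:

```python
# Monte-Carlo return (VPG-style) vs one-step bootstrapped target (A2C-style)
gamma = 0.99
rewards = [1.0, 0.0, 2.0]        # r_1, r_2, r_3 from one short episode
values = [0.5, 1.2, 0.3, 0.0]    # critic outputs V(s_0)..V(s_3); terminal state is 0

# VPG: full discounted return G_t, computable only once the episode ends.
mc_returns = []
g = 0.0
for r in reversed(rewards):
    g = r + gamma * g
    mc_returns.append(g)
mc_returns.reverse()

# A2C: r_t + gamma * V(s_{t+1}), available after every single environment step.
td_targets = [r + gamma * v_next for r, v_next in zip(rewards, values[1:])]
```

The Monte-Carlo targets are unbiased but need the whole trajectory; the bootstrapped targets are biased by the critic but allow mid-episode updates, which is exactly the practical distinction drawn above.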
2020/07/27
264
1,088
<issue_start>username_0: I always use ReLU activation functions when I need to, and I understand the limitations of ELUs. So in what situations do I need to consider ELUs over ReLUs?<issue_comment>username_1: ELU does not suffer from the dying-neuron issue, unlike ReLU. While ELU can help you to achieve better accuracy, it is slower than ReLU because of its non-linearity in its negative range. Choosing the right activation function totally depends on the situation, but you also need to consider other similar types of activation functions, such as leaky ReLU. Check this [link](https://mlfromscratch.com/activation-functions-explained/#/) out. It could be useful. Upvotes: 3 [selected_answer]<issue_comment>username_2: The answer above makes some great comparisons/trade-offs. To help address the non-linearity issue with ELU units that the previous answer brings up, you can also use leaky ReLU units, which are linear in both the positive and negative ranges, and piecewise linear across the whole real domain. Please see the link [here](https://qr.ae/pNsatH) for more details. Upvotes: 0
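For reference, the three activations compared above, as plain Python (the α values are the common defaults, not mandated choices):

```python
import math

def relu(x: float) -> float:
    return max(0.0, x)

def leaky_relu(x: float, alpha: float = 0.01) -> float:
    return x if x > 0 else alpha * x

def elu(x: float, alpha: float = 1.0) -> float:
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

# In the negative range: ReLU is flat (zero gradient, hence "dying" neurons),
# leaky ReLU stays linear, and ELU saturates smoothly towards -alpha.
negative_outputs = [f(-2.0) for f in (relu, leaky_relu, elu)]
```

The `exp` in ELU's negative branch is what makes it slower than ReLU's simple threshold, as the answer notes.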
2020/07/29
1,068
4,647
<issue_start>username_0: For example, if I have the following architecture: -------------------------------------------------- [![3 layer neural network](https://i.stack.imgur.com/I1SWw.gif)](https://i.stack.imgur.com/I1SWw.gif) * Each neuron in the hidden layer has a connection from each one in the input layer. * 3 x 1 input matrix and a 4 x 3 weight matrix (for the backpropagation we have, of course, the transposed version 3 x 4) But until now, I still don't understand what the point is of a neuron having 3 inputs (in the hidden layer of the example). It would work the same way if I adjusted only one weight of the 3 connections. But in the current case the information flows distributed over several "channels"; what is the point? With backpropagation, in some cases the weights are simply adjusted proportionally based on the error. Or is it just done that way because you can then implement everything better mathematically (with matrix multiplication and so on)? Either my question is stupid or I have an error in my thinking and assume wrong ideas. Can someone please help me with the interpretation? In TensorFlow Playground, for example, I cut the connections (by setting the weight to 0), and it just compensated by changing the other, still existing connection a bit more: [![TensorflowImage](https://i.stack.imgur.com/lNHq4.png)](https://i.stack.imgur.com/lNHq4.png)<issue_comment>username_1: **It doesn't.** Whether or not this is useful is another story, but it is totally fine to build the neural net you have with just one input value. Perhaps you choose one pixel of the photo and make your classification based on the intensity in that one pixel (I guess I'm assuming a black-and-white photo), or you have some method to condense an entire photograph into one value that summarizes the photo. Then each neuron in the hidden layer only has one input connection.
Likewise, you are allowed to decide that the top neuron in the hidden layer should have only one input connection; just drop the other two. Again, this might not give useful results, but they're still neural networks. Upvotes: 0 <issue_comment>username_2: If you adopt a slightly different point of view, then a neural network of this static kind is just a big function with parameters, $y=F(x,P)$, and the task of training the network is a non-linear fit of this function to the data set. That is, training the network is to reduce all of the residuals $y\_k-F(x\_k,P)$ simultaneously. This is a balancing act; just tuning one weight to adjust one residual will in general worsen some other residuals. Even if that is taken into account, methods that adjust one variable at a time are usually much slower than methods that adjust all variables simultaneously along some gradient or Newton direction. The usual back-propagation algorithm sequentializes the gradient descent method for the square sum of the residuals. Better variants improve that to a Newton-like method by some estimate of the Hessian of this square sum, or by following the idea of the Gauß-Newton method. Upvotes: 0 <issue_comment>username_3: There are a few reasons I can think of, though I have not read an explicit description of why it is done this way. It's likely that people just started doing it this way because it's most logical, and people who have attempted your method of reduced connections have seen a performance hit, so no change was made. The first reason is that if you allow all nodes from one layer to connect to all others in the next, the network will optimise unnecessary connections out. Essentially, the weighting of these connections will become 0. This, however, does not mean you can trim these connections, as ignoring them in this local minimum might be optimal, but later it might be really important that these connections remain.
As such, you can never truly know if a connection between one layer and the next is necessary, so it's just better to leave it in, in case it helps improve network performance. The second reason is that it's just simpler mathematically. Networks are implemented specifically so it's very easy to apply a series of matrix calculations to perform all computations. Trimming connections means either: * A matrix must contain 0 values, wasting computation time * A custom script must be written to calculate this network's structure, which in the real world can take a very long time, as it must be implemented using something like CUDA (on a GPU level, making it very complicated) Overall, it's just a lot simpler to have all nodes connected between layers, rather than one connection per node. Upvotes: 2
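The "matrix calculations" point can be made concrete: a fully connected layer is one matrix product, and "cutting" a connection is just zeroing one entry of the weight matrix, as in the TensorFlow Playground experiment from the question (the numbers below are arbitrary):

```python
import numpy as np

x = np.array([0.2, -1.0, 0.5])            # 3 inputs
W = np.array([[ 0.1, -0.3,  0.8],         # 4 hidden neurons x 3 weights, the
              [ 0.5,  0.2, -0.1],         # 4 x 3 weight matrix from the question
              [-0.4,  0.7,  0.6],
              [ 0.9, -0.2,  0.3]])
h = np.maximum(W @ x, 0.0)                # the whole layer in one matrix product + ReLU

W_cut = W.copy()
W_cut[0, 2] = 0.0                         # "cutting" a connection = zeroing a weight
h_cut = np.maximum(W_cut @ x, 0.0)        # same computation; only neuron 0 changes
```

Since the zeroed entry still participates in the multiplication, the dense formulation wastes a little arithmetic but keeps the implementation uniform, which is the trade-off described above.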
2020/07/31
3,468
8,979
<issue_start>username_0: In [these slides](https://rlchina.org/lectures/lecture2.pdf#page=17), it is written \begin{align} \left\|T^{\pi} V-T^{\pi} U\right\|\_{\infty} & \leq \gamma\|V-U\|\_{\infty} \tag{9} \label{9} \\ \|T V-T U\|\_{\infty} & \leq \gamma\|V-U\|\_{\infty} \tag{10} \label{10} \end{align} where * $F$ is the space of functions on domain $\mathbb{S}$. * $T^{\pi}: \mathbb{F} \mapsto \mathbb{F}$ is the Bellman *policy* operator * $T: \mathbb{F} \mapsto \mathbb{F}$ is the Bellman **optimality** operator In [slide 19](https://rlchina.org/lectures/lecture2.pdf#page=19), they say that equality $9$ follows from \begin{align} {\scriptsize \left\| T^{\pi} V-T^{\pi} U \right\|\_{\infty} = \max\_{s} \gamma \sum\_{s^{\prime}} \operatorname{Pr} \left( s^{\prime} \mid s, \pi(s) \right) \left| V\left(s^{\prime}\right) - U \left(s^{\prime}\right) \right| \\ \leq \gamma \left(\sum \operatorname{Pr} \left(s^{\prime} \mid s, \pi(s)\right)\right) \max \_{s^{\prime}}\left|V\left(s^{\prime}\right)-U\left(s^{\prime}\right)\right| \\ \leq \gamma\|U-V\|\_{\infty} } \end{align} Why is that? Can someone explain to me this derivation? 
They also write that inequality \ref{10} follows from \begin{align} {\scriptsize \|T V-T U\|\_{\infty} = \max\_{s} \left| \max\_{a} \left\{ R(s, a) + \gamma \sum\_{s^{\prime}} \operatorname{Pr} \left( s^{\prime} \mid s, a \right) V \left( s^{\prime} \right) \right\} -\max\_{a} \left\{R(s, a)+\gamma \sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right) U\left(s^{\prime}\right)\right\} \right| \\ \leq \max \_{s, a}\left|R(s, a)+\gamma \sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right) V\left(s^{\prime}\right) -R(s, a)-\gamma \sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right) U\left(s^{\prime}\right) \right| \\ = \gamma \max \_{s, a}\left|\sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right)\left(V\left(s^{\prime}\right)-U\left(s^{\prime}\right)\right)\right| \\ \leq \gamma\left(\sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right)\right) \max \_{s^{\prime}}\left|\left(V\left(s^{\prime}\right)-U\left(s^{\prime}\right)\right)\right| \\ \leq \gamma\|V-U\|\_{\infty} } \end{align} Can someone also explain this derivation to me?<issue_comment>username_1: The inequality \begin{align} \left\|T^{\pi} V-T^{\pi} U\right\|\_{\infty} & \leq \gamma\|V-U\|\_{\infty} \label{1}\tag{1}, \end{align} where $U$ and $V$ are two value functions, follows from the definition of the *Bellman **policy** operator* (at [slide 16](http://web.archive.org/web/20210506172558/https://rlchina.org/lectures/lecture2.pdf#page=16)) \begin{align} T^{\pi} V(s) &\triangleq R(s, a)+\gamma \sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right) V\left(s^{\prime}\right) \\ &=R(s, \pi(s))+\gamma \sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s)\right) V\left(s^{\prime}\right), \; \forall s \in S \tag{2}\label{2}, \end{align} where $\triangleq$ means "defined as". Note the $\pi$ in the definition, hence the name *Bellman **policy** operator* (BPO), and note that the BPO holds for all $s$.
To prove (\ref{1}), first recall that \begin{align} \left\|\mathbf {x} \right\|\_{\infty } \triangleq \max \_{i}\left|x\_{i}\right| \label{3}\tag{3}. \end{align} In the case of value functions $V$ and $U$, we have \begin{align} \left\|V - U \right\|\_{\infty } \triangleq \max\_{s \in S}\left|V(s) - U(s) \right|. \label{4}\tag{4} \end{align} Note also that $Pr$ is always non-negative (specifically, between $0$ and $1$). Successively, we expand the **left-hand side** of (\ref{1}) by applying the definition (\ref{2}) and using the properties just mentioned \begin{align} &\left\|T^{\pi} V-T^{\pi} U\right\|\_{\infty} = \\ &\left\| \left( R(s, \pi(s))+\gamma \sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) V\left(s^{\prime}\right) \right) - \\ \left( R(s, \pi(s))+\gamma \sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) U\left(s^{\prime}\right) \right) \right\|\_{\infty} =\\ &\max\_{s \in S} \left| \left( R(s, \pi(s))+\gamma \sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) V\left(s^{\prime}\right) \right) - \\ \left( R(s, \pi(s))+\gamma \sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) U\left(s^{\prime}\right) \right) \right| = \\ & \max\_{s \in S} \left| \gamma \sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) V\left(s^{\prime}\right) - \gamma \sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) U\left(s^{\prime}\right) \right| = \\ & \gamma \max\_{s \in S} \left| \sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) V\left(s^{\prime}\right) - \sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) U\left(s^{\prime}\right) \right| = \\ & \gamma \max\_{s \in S} \left| \sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s)\right) \left ( V\left(s^{\prime}\right) - U\left(s^{\prime}\right) \right) \right| \leq \\ & \gamma \max\_{s \in S} \sum\_{s^{\prime}}
\operatorname{Pr}\left(s^{\prime} \mid s, \pi(s)\right) \left| V\left(s^{\prime}\right) - U\left(s^{\prime}\right) \right| \\ & \leq \gamma \max\_{s \in S} \sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s)\right) \max\_{x \in S }\left| V\left(x\right) - U\left(x\right) \right| \label{5}\tag{5} \\ & \leq \gamma \max \_{x \in \mathcal{S}}\left|V\left(x\right)-U\left(x\right)\right| \label{6}\tag{6} \\ &= \gamma \| V - U \|\_{\infty} \label{7}\tag{7} \end{align} Here are a few notes to help you understand this derivation * Equation \ref{7} is just the direct application of the definition of the $\infty$-norm in equation \ref{4} * The inequalities \ref{5} and \ref{6} come from the fact that $\mathbb{E}[f(x)] \leq \max\_x f(x)$. When we take $\max\_s$, we choose among all conditional distributions $p$ (which are conditioned on $s$), but the differences $\left| V\left(s^{\prime}\right) - U\left(s^{\prime}\right) \right|$ don't change in that process. So, no matter which $p$ we choose, i.e. no matter which distribution of the function $\left| V\left(s^{\prime}\right) - U\left(s^{\prime}\right) \right|$ we choose, we know that $\mathbb{E} \left[ \left| V\left(s^{\prime}\right) - U\left(s^{\prime}\right) \right| \right] \leq \max \_{x \in \mathcal{S}}\left|V\left(x\right)-U\left(x\right)\right|$ Upvotes: 3 <issue_comment>username_2: I am assuming you are aware of the meaning of the notations. I will provide an informal explanation.
From your comment I am guessing you have difficulty in this portion of the 1st equation: \begin{align} {\scriptsize \max\_{s} \gamma \sum\_{s^{\prime}} \operatorname{Pr} \left( s^{\prime} \mid s, \pi(s) \right) \left| V\left(s^{\prime}\right) - U \left(s^{\prime}\right) \right| \\ \leq \gamma \left(\sum \operatorname{Pr} \left(s^{\prime} \mid s, \pi(s)\right)\right) \max \_{s^{\prime}}\left|V\left(s^{\prime}\right)-U\left(s^{\prime}\right)\right| \\ \leq \gamma\|U-V\|\_{\infty} } \end{align} The first inequality arises simply due to the fact that you are assigning a probability $1$ to the successor state which has the maximum difference under the $2$ value functions, whereas previously you were maximizing the entire equation with respect to a state $s$, and hence certain probabilities get assigned to low value-difference states as well (i.e. $|U(s') - V(s')|$ is small compared to the largest value difference), whereas now you just pick the successor state with the maximum difference under the 2 value functions $V, U$ and assign the entire probability to it, i.e. $\sum\_{s'}Pr(s'|s, \pi(s)) = 1$. The second inequality is due to the fact that now, instead of selecting from the successor states, you select the maximum difference under the 2 value functions ($U(s),V(s)$) from the entire state space. In the 2nd equation: \begin{align} {\scriptsize \gamma \max \_{s, a}\left|\sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right)\left(V\left(s^{\prime}\right)-U\left(s^{\prime}\right)\right)\right| \\ \leq \gamma\left(\sum\_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right)\right) \max \_{s^{\prime}}\left|\left(V\left(s^{\prime}\right)-U\left(s^{\prime}\right)\right)\right| \\ \leq \gamma\|V-U\|\_{\infty} } \end{align} The first inequality is again due to the same reasoning as above: you assign the entire probability to the successor state with the highest value difference (under $U, V$).
And the second inequality is also due to the same reasoning as in the 1st equation: you look for the maximum difference over the entire state space instead of just among successor states. **NOTE:** In general, the successor states can be the entire state space, with those unreachable from state $s$ having $Pr(s'|s) = 0$; in that case, the last inequality will become an equality in both equations. Upvotes: 0
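The contraction property proved above is easy to check numerically: for any pair of value functions, $\|T^{\pi}V - T^{\pi}U\|_{\infty} \leq \gamma\|V - U\|_{\infty}$. The random MDP below is an illustrative assumption, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, gamma = 5, 0.9
# Row-stochastic transition matrix under a fixed policy: P[s, s'] = Pr(s'|s, pi(s))
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)
R = rng.random(n_states)                    # R(s, pi(s))

def T_pi(V):
    """Bellman policy operator: (T^pi V)(s) = R(s) + gamma * sum_s' P[s,s'] V(s')."""
    return R + gamma * P @ V

V = rng.normal(size=n_states)
U = rng.normal(size=n_states)
lhs = np.max(np.abs(T_pi(V) - T_pi(U)))     # ||T^pi V - T^pi U||_inf
rhs = gamma * np.max(np.abs(V - U))         # gamma * ||V - U||_inf
```

Because the operator is a γ-contraction, iterating `T_pi` from any starting point converges to the unique fixed point $V^{\pi}$, by the Banach fixed-point theorem.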
2020/08/01
1,195
5,336
<issue_start>username_0: The genetic algorithm consists of 5 phases of which 4 are repeated: 1. Initial population (initially) 2. Fitness function 3. Selection 4. Crossover 5. Mutation In the selection phase, the number of solutions decreases. How do we avoid running out of individuals before reaching a suitable solution?<issue_comment>username_1: There are multiple ways to interpret those steps. The most common standard approaches are * select two parents and produce two offspring; repeat until the child population is the same size as the parent population, and let the children replace their parents unconditionally (generational GA) * same as the above, but allow a few parents to live on instead of a few children if the parents have higher fitness (elitism) * each iteration, select two parents, produce one child, let the child replace a member of the parent population if it is better (steady state GA) But there are other ways to go. There's an algorithm called CHC that lets the child population get smaller over time, and when it reaches zero, the algorithm triggers a smart restart. The point is there's no single definition for what makes an evolutionary algorithm. It's up to you to decide how to make something that works well for your problem. When you're a beginner though, it's handy to start from known points, like the three I mentioned above. Upvotes: 2 <issue_comment>username_2: This is a more complex question than it might initially seem. A genetic algorithm models a biological process, namely population genetics. No biological population evolves to a single cloned individual, a process in genetic algorithms referred to as premature convergence, where the population converges to a single non-optimal, though possibly locally optimal, solution. The avoidance of premature convergence, or the maintenance of population diversity, is an important aspect of the genetic model that is often not well addressed, and one that the five-step model you detail definitely does not.
The one operator that will maintain diversity is mutation, since it is a purely random operator. However, what the mutation rate should be is highly argued over. A general consensus is that if each chromosome is of length N, then the mutation rate should be 1/N. Likewise, the consensus is that 60% of the population should be replaced in each breeding cycle. However, these settings do not emerge directly from biological reality, and premature convergence remains problematic. A more realistic model is to reflect the fact that in biology resources are finite, and to adjust the fitness of individuals proportionate to the number of similar individuals, on the assumption that similar individuals are chasing the same resource. The fitness landscape is thus dynamically warped by the changing distribution of the population. You will still have to retain memory of the fittest solution before adjustment. A common solution is to apply cluster analysis to the population, reducing the individual's fitness by the size of the cluster to which it is allotted. A seminal paper is by [Yin and Germay, A Fast Genetic Algorithm with Sharing Scheme Using Cluster Analysis Methods in Multimodal Function Optimization](https://www.semanticscholar.org/paper/A-Fast-Genetic-Algorithm-with-Sharing-Scheme-Using-Yin-Germay/87e16bb2c15dbe699b84ef07722661c6acdff88e). The assumption is still made that the population is modelling a single biological species. How a population does not merely maintain diversity but divides into separate, reproductively isolated species is a question for another day, and one that divides biologists to the current day. Upvotes: 1 <issue_comment>username_3: It is not true that the number of solutions necessarily decreases during the selection phase (if by solutions you mean the number of individuals in the population). The number of solutions is usually constant, i.e., you can start with $N$ individuals, then, every iteration (or generation), you can e.g.
select two individuals from the population (typically, the fittest ones, but you can have some more sophisticated selection criteria), then you merge them to create two new individuals (i.e. crossover), which will then replace (with a certain probability) the two least fit individuals from the current population, so the population's size remains constant. If you are talking about reaching a local minimum, i.e. none of the solutions in the population are "good enough", then, as someone has already suggested, there are potentially multiple ways to address this issue, such as * increase the population size * run the genetic algorithm for a longer time (if you have the resources) * change your genetic operators (i.e. the mutation and crossover) so as to introduce more diversity * tweak the replacement, mutation, and crossover rates * change your selection strategy (there are many selection strategies) * make sure that the representation of the solutions is suitable (e.g. once, by mistake, I was using an array of integers rather than floating-point numbers, so I couldn't ever find the correct solution, which was an array of floating-point numbers) * use something like [novelty search](http://people.idsia.ch/%7Etino/papers/cuccu.evostar11.pdf) The correct approach will probably depend on the context. Upvotes: 2
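To make the fitness-sharing idea discussed above concrete, here is a minimal, hypothetical sketch in Python: raw fitness is divided by a distance-based niche count, so crowded regions of the search space are penalised. The objective function, `sigma` radius, and all rates are illustrative choices, and this is simple distance-based sharing rather than the cluster-analysis scheme of Yin and Germay.

```python
import random

def raw_fitness(x):
    # Multimodal toy objective with peaks near x = 1 and x = -1.
    return 1.0 / (1.0 + (x * x - 1.0) ** 2)

def shared_fitness(pop, i, sigma=0.5):
    # Divide raw fitness by the niche count: neighbours within
    # distance sigma, weighted linearly by closeness (self included).
    niche = sum(max(0.0, 1.0 - abs(pop[i] - x) / sigma) for x in pop)
    return raw_fitness(pop[i]) / niche

random.seed(0)
pop = [random.uniform(-2.0, 2.0) for _ in range(40)]
for _ in range(100):
    # Rank by shared (not raw) fitness, keep the top half as parents.
    ranked = sorted(range(len(pop)), key=lambda i: shared_fitness(pop, i), reverse=True)
    parents = [pop[i] for i in ranked[: len(pop) // 2]]
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = random.sample(parents, 2)
        child = (a + b) / 2.0            # blend crossover
        child += random.gauss(0.0, 0.1)  # Gaussian mutation
        children.append(child)
    pop = parents + children

# Sharing pressure tends to keep both niches (near +1 and -1)
# populated for longer than raw-fitness selection would.
print(sorted(round(x, 1) for x in pop))
```

Note how `shared_fitness` shrinks as duplicates accumulate: three identical individuals each get a third of their raw fitness, which is exactly the dynamic warping of the fitness landscape described above.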
2020/08/03
1,876
7,239
<issue_start>username_0: Generally speaking, is there a best-practice procedure to follow when trying to define a reward function for a reinforcement-learning agent? What common pitfalls are there when defining the reward function, and how should you avoid them? What information from your problem should you take into consideration when going about it? Let us presume that our environment is a fully observable MDP.<issue_comment>username_1: ### Designing reward functions Designing a reward function is sometimes straightforward, if you have knowledge of the problem. For example, consider the game of chess. You know that you have three outcomes: win (good), loss (bad), or draw (neutral). So, you could reward the agent with $+1$ if it wins the game, $-1$ if it loses, and $0$ if it draws (or for any other situation). However, in certain cases, the specification of the reward function can be a difficult task [[1](https://ai.stanford.edu/%7Eang/papers/icml04-apprentice.pdf), [2](https://es.mathworks.com/help/reinforcement-learning/ug/define-reward-signals.html), [3](http://incompleteideas.net/book/RLbook2020.pdf#page=491)] because there are many (often unknown) factors that could affect the performance of the RL agent. For example, consider the driving task, i.e. you want to teach an agent to drive e.g. a car. In this scenario, there are so many factors that affect the behavior of a driver. How can we incorporate and combine these factors in a reward function? How do we deal with unknown factors? So, often, designing a reward function is a **trial-and-error** and engineering process (so there is no magic formula that tells you how to design a reward function in all cases).
More precisely, you define an initial reward function based on your knowledge of the problem, you observe how the agent performs, then tweak the reward function to achieve greater performance (for example, in terms of observable behavior, so **not** in terms of the collected reward; otherwise, this would be an easy problem: you could just design a reward function that gives infinite reward to the agent in all situations!). For example, if you have trained an RL agent to play chess, maybe you observed that the agent took a lot of time to converge (i.e. find the best policy to play the game), so you could design a new reward function that [penalizes the agent for every non-win move (maybe it will hurry up!)](https://ai.stackexchange.com/q/24375/2444). Of course, this trial-and-error approach is not ideal, and it can sometimes be impractical (because maybe it takes a lot of time to train the agent) and lead to misspecified reward signals. ### Misspecification of rewards It is well known that the misspecification of the reward function can have unintended and even dangerous consequences [[5](https://openai.com/blog/faulty-reward-functions/)]. To overcome the misspecification of rewards or improve the reward functions, you have some options, such as 1. **Learning from demonstrations** (aka *apprenticeship learning*), i.e. do not specify the reward function directly, but let the RL agent imitate another agent's behavior, either to * learn the policy directly (known as **imitation learning** [[8](https://papers.nips.cc/paper/2016/file/cc7e2b878868cbae992d1fb743995d8f-Paper.pdf)]), or * learn a reward function first to later learn the policy (known as **inverse reinforcement learning** [[1](https://ai.stanford.edu/%7Eang/papers/icml04-apprentice.pdf)] or sometimes known as **reward learning**) 2. Incorporate **human feedback** [[9](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/)] in the RL algorithms (in an interactive manner) 3. 
**Transfer the information** in the policy learned in another but similar environment to your environment (i.e. use some kind of transfer learning for RL [[10](https://www.jmlr.org/papers/volume10/taylor09a/taylor09a.pdf)]) Of course, these solutions or approaches can also have their shortcomings. For example, interactive human feedback can be tedious. ### Reward shaping Regarding the common pitfalls, although **reward shaping** (i.e. augment the natural reward function with more rewards) is often suggested as a way to improve the convergence of RL algorithms, [[4](https://hal.archives-ouvertes.fr/file/index/docid/331752/filename/matignon2006ann.pdf)] states that reward shaping (and progress estimators) should be used cautiously. If you want to perform reward shaping, you should probably be using [**potential-based reward shaping**](https://people.eecs.berkeley.edu/%7Epabbeel/cs287-fa09/readings/NgHaradaRussell-shaping-ICML1999.pdf) (which is guaranteed not to change the optimal policy). ### Further reading The MathWorks' article [Define Reward Signals](https://es.mathworks.com/help/reinforcement-learning/ug/define-reward-signals.html) discusses **continuous** and **discrete** reward functions (this is also discussed in [[4](https://hal.archives-ouvertes.fr/file/index/docid/331752/filename/matignon2006ann.pdf)]), and addresses some of their advantages and disadvantages. Last but not least, the 2nd edition of the RL bible contains a section ([17.4 Designing Reward Signals](http://incompleteideas.net/book/RLbook2020.pdf#page=491)) completely dedicated to this topic. Another similar question was also asked [here](https://ai.stackexchange.com/q/12264/2444). Upvotes: 4 [selected_answer]<issue_comment>username_2: If your objective is for the agent to attain some goal (say, reaching a target), then a valid reward function is to assign a reward of 1 when the goal is attained and 0 otherwise. 
The problem with this reward function is that it's too *sparse*, meaning the agent has little guidance on how to modify their behavior to become better at attaining said goal, especially if the goal is hard to attain through a random policy in the first place (which is probably roughly what the agent starts with). The practice of modifying the reward function to guide the learning agent is called *reward shaping*. A good start is [*Policy invariance under reward transformations: Theory and application to reward shaping*](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.345) by Ng et al. The idea is to create a *reward potential* (see Theorem 1) on top of the existing reward. This reward potential should be an approximation of the *true value* of a given state. For instance, if you have a gridworld scenario where the goal is for the agent to reach some target square, you could create a reward potential based on the Manhattan distance to this target (without accounting for obstacles), which is an *approximation* to the true value of a given position. Intuitively, creating a reward potential that is close to the true values makes the job easier for the learning agent because it reduces the disadvantage of being myopic, and the agent more quickly gets closer to a "somewhat good" policy from which it is easier to crawl toward the optimal policy. Moreover, reward potentials have the property that they are *consistent* with the optimal policy. That is, the optimal policy to the *true* problem will not become suboptimal under the new, modified problem (with the new reward function). Upvotes: 3
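As a concrete illustration of the potential-based shaping described above, here is a small sketch (the grid size, goal cell, and discount factor are hypothetical): the potential is the negated Manhattan distance to the target, and the shaping term $\gamma\Phi(s') - \Phi(s)$ is added to the sparse environment reward.

```python
# Potential-based shaping: F(s, s') = gamma * phi(s') - phi(s).
# phi is the negated Manhattan distance to a goal cell in a
# hypothetical 5x5 gridworld, as suggested in the answer above.

GOAL = (4, 4)
GAMMA = 0.99

def phi(state):
    """Potential: the closer to the goal, the higher the potential."""
    x, y = state
    return -(abs(GOAL[0] - x) + abs(GOAL[1] - y))

def shaped_reward(state, next_state, env_reward):
    """Augment the sparse environment reward with the shaping term.

    By Theorem 1 of Ng et al., adding gamma*phi(s') - phi(s) leaves
    the optimal policy of the original problem unchanged.
    """
    return env_reward + GAMMA * phi(next_state) - phi(state)

# Moving one step toward the goal earns a positive shaping bonus
# even while the environment reward is still 0.
print(shaped_reward((0, 0), (0, 1), 0.0))  # ≈ 1.07
```

The agent therefore receives dense guidance toward the goal from the very first episode, without the reward designer having to hand-tune per-state bonuses.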
2020/08/04
908
3,672
<issue_start>username_0: What if I have some data, let's say I'm trying to answer if education level and IQ affect earnings, and I want to analyze this data and put in a regression model to predict earnings based on the IQ and education level. My confusion is, what if the data is not linear or polynomial? What if it's a mess but there are still patterns that the linear plane algorithm can't capture? How do I figure out if plotting all of the independent variables will form a line or a polynomial curve like here? [![enter image description here](https://i.stack.imgur.com/LZ5Bd.png)](https://i.stack.imgur.com/LZ5Bd.png) I mean, with one dependent and one independent variable it's easy because you can plot it and see, but in a situation with multiple independent variables... how do I figure out if the relationship is linear or something like this? How do I figure out if I should use a regression model? Let's say I want to predict a store's daily revenue based on the day of the week, weather and the number of people arrived in the city. My data would look something like this: ``` +-----------+---------+----------------+---------+ | DAY | WEATHER | PEOPLE ARRIVED | REVENUE | +-----------+---------+----------------+---------+ | Monday | Sunny | 1115 | $500 | +-----------+---------+----------------+---------+ | Tuesday | Cloudy | 808 | $250 | +-----------+---------+----------------+---------+ | Wednesday | Sunny | 450 | $300 | +-----------+---------+----------------+---------+ ``` I'm a bit confused about what ML algorithm I should use in such a scenario. I can represent the days of the week as (Monday - 1, Tuesday - 2, Wednesday - 3, etc.) and the weather as (Sunny - 1, Cloudy - 2, Normal - 3, etc.) but would a regression model work? 
I'm skeptical because I'm not sure if there's a linear relationship between the variables, and I'm not sure if a hyperplane can create an accurate representation of what's going on.<issue_comment>username_1: A regression model will definitely work on that problem. You may only need to reshape the predictor variables (day, weather, people arrived) into a 1D array if you get an error. Otherwise, you can simply apply linear regression, an SVM, etc. to get your output with good accuracy. Upvotes: 0 <issue_comment>username_2: What you should do as part of your exploration is to learn various models of increasing complexity, starting from a simple linear model and ending with multi-layer neural networks (with non-linear activations, of course). If the nonlinear models are better, then that implies that your data do not follow a linear hyperplane. Also check this out for recent trends: <https://machinelearningmastery.com/auto-sklearn-for-automated-machine-learning-in-python/> Upvotes: 2 [selected_answer]<issue_comment>username_3: There is a special *model selection technique* called `K-Fold Cross Validation` for exactly this situation. It divides your dataset into separate pieces, training and evaluating on each of them iteratively. Check the example image below: [![enter image description here](https://i.stack.imgur.com/O1Gkv.png)](https://i.stack.imgur.com/O1Gkv.png) Each of these `e` values represents the error on that particular fold of data. Summing them and dividing by the fold count gives you the **model error**. [![enter image description here](https://i.stack.imgur.com/LzNTx.png)](https://i.stack.imgur.com/LzNTx.png) The **model error** represents the model's performance on that specific dataset. If you want to see which model would suit your data better, I suggest comparing their model errors. Upvotes: 1
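The K-fold comparison described in the last answer can be sketched with plain NumPy (the synthetic data and the two competing feature sets are illustrative, not the store data from the question): each model's error is the mean of its per-fold mean squared errors, and the model with the lower cross-validated error fits the data better.

```python
import numpy as np

def fit_lstsq(X, y):
    # Ordinary least squares with an intercept column.
    Xb = np.c_[np.ones(len(X)), X]
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    return np.c_[np.ones(len(X)), X] @ w

def kfold_mse(X, y, k=5):
    # Model error = mean of the per-fold MSEs, as in the figure above.
    folds = np.array_split(np.arange(len(X)), k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        w = fit_lstsq(X[train], y[train])
        errs.append(np.mean((predict(w, X[test]) - y[test]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.3, 200)  # quadratic truth

linear_err = kfold_mse(x[:, None], y)        # model 1: linear features
quad_err = kfold_mse(np.c_[x, x**2], y)      # model 2: adds x^2
print(linear_err, quad_err)  # the quadratic model should score lower
```

Since the data here are genuinely nonlinear, the quadratic feature set wins; on truly linear data the two errors would be close, which is exactly the signal the questioner is looking for.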
2020/08/05
469
1,952
<issue_start>username_0: Let's say that I want to classify whether a document is a legal document or not. I have a list of keywords that will be presented only in legal documents. What is the proper way or algorithm to calculate probability based on this list?
2020/08/05
585
2,027
<issue_start>username_0: I know it cost around $4.3 million to train, but how much computing power does it cost to run the finished program? IBM Watson chatbot AI only costs a few cents per chat message to use, and OpenAI Five seemed to run on a single gaming PC setup. So I'm wondering how much computing power is needed to run the finished AI program.<issue_comment>username_1: I can't answer your question on exactly how much computing power you might need, but you'll need at least a small grid to run the biggest model, just looking at the memory requirements (175B parameters, so 700GB of memory at full precision). The biggest GPU has 48GB of VRAM. I've read that GPT-3 will come in eight sizes, from 125M to 175B parameters. So, depending upon which one you run, you'll need more or less computing power and memory. (<https://lambdalabs.com/blog/demystifying-gpt-3/>) For an idea of the size of the smallest: "The smallest GPT-3 model is roughly the size of BERT-Base and RoBERTa-Base." Upvotes: 3 [selected_answer]<issue_comment>username_2: I think it is premature to answer your question, as OpenAI has not made GPT-3 available yet other than via a web-based API. For more information, see [OpenAI API](https://openai.com/blog/openai-api/). From [OpenAI will start selling its text-generation tech, and the first customers include Reddit](https://www.theverge.com/2020/6/11/21287966/openai-commercial-product-text-generation-gpt-3-api-customers), by <NAME>: **Access to the GPT-3 API is invitation-only, and pricing is undecided.** You can join the OpenAI wait list here: <https://beta.openai.com/> I read somewhere that loading GPT-3 for inference requires about 300GB when using half-precision floating point (FP16). There are no GPU cards today that, even in a set of four, would provide 300GB of video RAM. For example, the best I believe you can do in a single desktop box is four NVLinked Nvidia RTX 8000 cards on a single motherboard. Each card has 48GB of VRAM, for a total of only 192GB of VRAM.
Upvotes: 1
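The memory figures quoted above are simple arithmetic over the parameter count. A sketch, ignoring activations and framework overhead and taking 1 GB as 10^9 bytes (note that at FP16 the parameters alone come to 350 GB, in the same ballpark as the ~300 GB figure quoted above):

```python
# Back-of-the-envelope memory needed just to hold GPT-3's weights.
N_PARAMS = 175e9  # 175 billion parameters

def weight_memory_gb(n_params, bytes_per_param):
    return n_params * bytes_per_param / 1e9

print(weight_memory_gb(N_PARAMS, 4))  # FP32: 700.0 GB
print(weight_memory_gb(N_PARAMS, 2))  # FP16: 350.0 GB
# Four 48 GB cards give 192 GB -- not enough even at half precision.
print(4 * 48 < weight_memory_gb(N_PARAMS, 2))  # True
```

This is why both answers conclude that a single desktop box cannot host the full 175B model for inference.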
2020/08/05
456
1,930
<issue_start>username_0: I am quite new to neural networks. I am trying to implement in Python a neural network having only one hidden layer with $N$ neurons and $1$ output layer. The point is that I am analyzing time series and would like to use the output layer as the input of the next unit: by feeding the network with the input at time $t-1$ I obtain the output $O\_{t-1}$ and, in the next step, I would like to use both the input at time $t$ and $O\_{t-1}$, introducing a sort of auto-regression. I read that *recurrent neural networks* are suitable to address this issue. Anyway, I cannot imagine how to implement a network in Keras that involves *multilayer recurrence*: all the references I found are about using the output of a layer as input of the same layer in the next step. Instead, I would like to include the output of the last layer (the output layer) in the inputs of the first hidden layer.<issue_comment>username_1: You want to look at recurrent neural networks. Upvotes: 1 <issue_comment>username_2: You could just do this: concatenate your input vector with a zero vector that has the size of your output. Then, in the next pass, you concatenate with the previous output instead of the zero vector, and repeat. At the end, you compare your entire output sequence to your target (compute the loss) and backpropagate. You might want to look into recurrent layers: these are layers that have connections back to themselves, so that the network can learn what to "remember". Plain recurrent layers have some problems with longer sequences, so the "newer" variants (LSTM and GRU) try to deal with that. You can also use attention mechanisms if you're dealing with sequences: basically, you learn what parts of your input sequence to look at given a certain "query" (in your case, maybe the last time step). These are generally used in natural language processing, but they are a bit more exotic and complicated. Upvotes: 1 [selected_answer]
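A minimal forward-pass-only sketch of the accepted answer's idea (all sizes and weights are illustrative, and no training is shown): the hidden layer receives the current input concatenated with the previous step's *output*, which starts out as a zero vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 3 input features, 5 hidden units, 1 output.
n_in, n_hidden, n_out = 3, 5, 1

# The hidden layer sees [x_t, O_{t-1}], hence n_in + n_out columns.
W1 = rng.normal(0, 0.1, (n_hidden, n_in + n_out))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_out, n_hidden))
b2 = np.zeros(n_out)

def step(x_t, prev_out):
    # One time step: hidden layer on [input, previous output].
    h = np.tanh(W1 @ np.concatenate([x_t, prev_out]) + b1)
    return W2 @ h + b2

series = rng.normal(size=(10, n_in))  # 10 time steps of inputs
out = np.zeros(n_out)                 # stand-in for O_{t-1} at t = 0
outputs = []
for x_t in series:
    out = step(x_t, out)              # feed the output back in
    outputs.append(out)

print(len(outputs), outputs[-1].shape)
```

In Keras one would normally express this recurrence with an RNN/LSTM cell rather than an explicit loop, but writing the loop out makes the "output of the last layer feeds the first hidden layer" wiring explicit.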
2020/08/06
472
1,895
<issue_start>username_0: Could someone please help me gain some **intuition** as to why the optimal policy for a Markov Decision Process in the infinite horizon case (agent acts forever) is deterministic?<issue_comment>username_1: Suppose you learned your action-value function perfectly. Recall that the action-value function measures the expected return after taking a given action in a given state. Now, the goal when solving an MDP is to find a policy that maximizes expected returns. Suppose you're in state $s$. According to your action-value function, let's say action $a$ maximizes the expected return. So, according to the goal of solving an MDP, the only action you would ever take from state $s$ is $a$. In other words, $\pi(a'\mid s) = \mathbf{1}[a'=a]$, which is a deterministic policy. Now, you might argue that your action-value function will never be perfect. However, this just means you need more exploration, which can manifest itself as stochasticity in the policy. However, in the limit of infinite data, the optimal policy will be deterministic, since the action-value function will have converged to the optimum. Upvotes: 3 [selected_answer]<issue_comment>username_2: The premise of this question is somewhat misleading. There is a deterministic optimal policy for an MDP, but this does not mean a stochastic optimal policy never exists. Talking about *the* optimal policy can be misleading, as there may be many different optimal policies. For example, we could certainly imagine an MDP where $Q^\*(s,a\_1) = Q^\*(s,a\_2)$ for two different actions $a\_1$ and $a\_2$ that both maximize the optimal action-value function $Q^\*$ at some state $s$. Then a stochastic policy choosing randomly between $a\_1$ and $a\_2$ at $s$ is optimal, but so is a deterministic policy that always picks $a\_1$ at $s$, and a deterministic policy that always picks $a\_2$ at $s$. Upvotes: 1
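The tie argument in the second answer can be made concrete with a toy Q-table (the numbers are made up): the greedy argmax yields one deterministic optimal policy, while a state with tied optimal Q-values also admits stochastic optimal policies.

```python
import numpy as np

# Toy Q-table: 3 states x 2 actions. State 2 has a tie, so both a
# deterministic and a stochastic policy are optimal there.
Q = np.array([[1.0, 0.5],
              [0.2, 0.9],
              [0.7, 0.7]])

greedy = Q.argmax(axis=1)  # one deterministic optimal policy
print(greedy)              # [0 1 0]

# All optimal actions per state: any distribution over this set
# (and only this set) is also an optimal policy in that state.
optimal_sets = [np.flatnonzero(Q[s] == Q[s].max()) for s in range(len(Q))]
print(optimal_sets[2])     # [0 1] -- the tie admits stochastic optima
```

Note that `argmax` breaks the tie in state 2 by picking the first maximiser, which is exactly the "there is always *a* deterministic optimal policy" half of the argument.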
2020/08/06
1,703
7,097
<issue_start>username_0: I'm reading Reinforcement Learning by Sutton & Barto, and in section 3.2 they state that the reward in a Markov decision process is always a scalar real number. At the same time, I've heard about the problem of assigning credit to an action for a reward. Wouldn't a vector reward make it easier for an agent to understand the effect of an action? Specifically, a vector in which different components represent different aspects of the reward. For example, an agent driving a car may have one reward component for driving smoothly and one for staying in the lane (and these are independent of each other).<issue_comment>username_1: If you have multiple types of rewards (say, R1 and R2), then it is no longer clear what would be the optimal way to act: it can happen that one way of acting would maximize R1 and another way would maximize R2. Therefore, optimal policies, value functions, etc., would all be undefined. Of course, you could say that you want to maximize, for example, R1+R2, or 2R1+R2, etc. But in that case, you're back at a scalar number again. It can still be helpful for other purposes to split up the reward into multiple components as you suggest, e.g., in a setup where you need to learn to predict these rewards. But for the purpose of determining optimal actions, you need to boil it down into a single scalar. Upvotes: 4 [selected_answer]<issue_comment>username_2: Markov decision problems are usually defined with a reward function $r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$, and in these cases the rewards are expected to be scalar real values. This makes reinforcement learning (RL) easier, for example when defining a greedy policy $\pi(s)=\arg\max\_a Q(s,a)$, it is clear what the maximum of the Q-factors in state $s$ is. As you might have also realized, in practice, however, problems often have multiple objectives that we wish to optimize at the same time.
This is called multiobjective optimization and the related RL field is multiobjective reinforcement learning (MORL). If you have access to the paper *<NAME>: Multiobjective Reinforcement Learning: A Comprehensive Overview (2015)* you might be interested in reading it. (Edit: as Peter noted in his answer, the original version of this paper was found to be a plagiarism of various other works. Please refer to his answer for better resources.) The above-mentioned paper categorizes methods for dealing with multiple rewards into two categories: * **single objective strategy**, where multiple rewards are somehow aggregated into one scalar value. This can be done by giving weights to rewards, making some of the objectives a constraint and optimize the others, ranking the objectives and optimize them in order etc. (Note: in my experience, weighted sum of rewards is not a good objective as it might combine two completely unrelated objectives in a very forced way.) * **Pareto strategy**, where the goal is to find Pareto-optimal strategies or a Pareto front. In this case we keep the rewards a vector and may compute a composite Q-factor, eg.: $\bar{Q}(s,a)=[Q\_1(s,a), \ldots, Q\_N(s,a)]$ and may have to modify the $\arg\max\_a$ function to select the maximum in a Pareto sense. Finally, I believe it is important to remind you that all these methods really depend on the use-case and on what you really want to achieve and that there is no one solution that fits all. Even after finding an appropriate method you might find yourself spending time tweaking hyper-parameters just so that your RL agent would do what you would like it to do in one specific scenario and do something else in a slightly different scenario. (Eg. taking over on a highway vs. taking over on a country road). Upvotes: 3 <issue_comment>username_3: Rather than the survey by Liu et al. 
recommended above, I'd suggest you read the following survey paper for an overview of MORL (disclaimer - I was a co-author on this, but I genuinely think it is a much more useful introduction to this area) > > [<NAME>., <NAME>., <NAME>., & <NAME>. (2013). A survey of multi-objective sequential decision-making. Journal of Artificial Intelligence Research, 48, 67-113.](https://www.jair.org/index.php/jair/article/view/10836) > > > Liu et al.'s survey, in my opinion, doesn't do much more than list and briefly describe the MORL algorithms which existed at that point. There's no deeper analysis of the field. The original version of their paper was also retracted due to blatant plagiarism of several other authors, including myself, as can be confirmed [here](https://ieeexplore.ieee.org/document/6509978). Our survey provides arguments for the need for multiobjective methods by describing 3 scenarios where agents using single-objective RL may be unable to provide a satisfactory solution that matches the needs of the user. Briefly, these are 1. the **unknown weights** scenario where the required trade-off between the objectives isn't known in advance, and so to be effective the agent must learn multiple policies corresponding to different trade-offs and then at run-time select the one which matches the current preferences (e.g. this can arise when the objectives correspond to different costs which vary in relative price over time); 2. the **decision support** scenario where scalarization of a reward vector is not viable (for example, in the case of subjective preferences, which defy explicit quantification), so the agent needs to learn a set of policies, and then present these to a user who will select their preferred option, and 3. the **known weights** scenario where the desired trade-off between objectives is known, but its nature is such that the returns are non-additive (i.e.
if the user's utility function is non-linear), and therefore standard single-objective methods based on the Bellman equation can't be directly applied. We propose a **taxonomy** of MORL problems in terms of the *number of policies* they require (single or multi-policy), the form of utility/scalarization function supported (linear or non-linear), and whether *deterministic* or *stochastic* policies are allowed, and relate this to the nature of the set of solutions which the MO algorithm needs to output. This taxonomy is then used to categorize existing MO planning and MORL methods. One final important contribution is identifying the distinction between maximising *Expected Scalarised Return (ESR)* or *Scalarised Expected Return (SER)*. The former is appropriate in cases where we are concerned about the results within each individual episode (for example, when treating a patient - that patient will only care about their own individual experience), while SER is appropriate if we care about the average return over multiple episodes. This has turned out to be a much more important issue than I anticipated at the time of the survey, and <NAME> and his colleagues have examined it more closely since then (e.g., [Multi-objective Reinforcement Learning for the Expected Utility of the Return](http://roijers.info/pub/esr_paper.pdf)) Upvotes: 3
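A toy sketch of the two MORL strategy families discussed in this thread (the vector-valued Q-table is made up): linear scalarisation collapses the reward vector back to a scalar before taking the argmax, while the Pareto strategy keeps every non-dominated action.

```python
import numpy as np

# Vector-valued Q for one state: rows = actions, columns = objectives.
Q = np.array([[3.0, 1.0],    # action 0: good on objective 1
              [1.0, 3.0],    # action 1: good on objective 2
              [0.5, 0.5]])   # action 2: dominated by both others

def linear_scalarise(Q, w):
    """Single-objective strategy: collapse to a scalar, then argmax."""
    return int(np.argmax(Q @ w))

def pareto_front(Q):
    """Pareto strategy: keep every action not dominated by another."""
    front = []
    for a, q in enumerate(Q):
        dominated = any(np.all(p >= q) and np.any(p > q) for p in Q)
        if not dominated:
            front.append(a)
    return front

print(linear_scalarise(Q, np.array([0.9, 0.1])))  # 0: weights favour obj. 1
print(linear_scalarise(Q, np.array([0.1, 0.9])))  # 1: weights favour obj. 2
print(pareto_front(Q))                            # [0, 1]: action 2 drops out
```

The weight vector plays exactly the role of the user's trade-off preference: in the "unknown weights" scenario it is only available at run-time, which is why the agent must keep the whole Pareto set rather than a single scalarised policy.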
2020/08/08
222
861
<issue_start>username_0: What are some books on reinforcement learning (RL) and deep RL for beginners? I'm looking for something as friendly as the head first series, that breaks down every single thing.<issue_comment>username_1: **Reinforcement Learning: An Introduction** by *<NAME> and <NAME>* is undoubtedly one of the best books, to begin with. Despite its age, the book is still the canonical introduction to reinforcement learning. It does require some patience, but I think it's very approachable and rigorous at the same time! Upvotes: 3 [selected_answer]<issue_comment>username_2: If you are looking for a book that is more beginner friendly than the Sutton and Barto book (which you should of course check out also), try out: [Deep Reinforcement Learning Hands On](https://rads.stackoverflow.com/amzn/click/com/1838826998) Upvotes: 0
2020/08/09
545
1,721
<issue_start>username_0: Can a computer solve the following problem, i.e. make a proof by induction? And why? > > Prove by **induction** that $$\sum\_{k=1}^nk^3=\left(\frac{n(n+1)}{2}\right)^2, \, \, \, \forall n\in\mathbb N .$$ > > > I'm doing a Ph.D. in pure maths. I love coding when I wanna have some fun, but I've never got too far in this field. I mention my background because, if someone wants to explain this in a more abstract language, there's a chance that I will understand it.<issue_comment>username_1: It is possible for some classes of problems. For instance, [WolframAlpha can generate an induction proof](https://www.wolframalpha.com/input/?i=Prove%20%20by%20induction%20sum%20of%20k%5E3%20from%201%20to%20n%20%3D%20%28n%28n%2B1%29%29%5E2%2F2) for the problem posed in the question. According to the author of this proof generator, he built a library of pattern-matched proofs to generate the proofs. More details about his approach can be found in [his write-up about the problem](https://blog.wolfram.com/2016/07/14/behind-wolframalphas-mathematical-induction-based-proof-generator/). Another alternative (though not induction-based) for automatically verifying these kinds of identities (in particular, hypergeometric identities) is to use algorithms such as Zeilberger's method along with the HYPER algorithm, both described in the excellent book [A=B](https://www2.math.upenn.edu/%7Ewilf/AeqB.html), currently available for free by one of its co-authors. Upvotes: 3 [selected_answer]<issue_comment>username_2: There are programming languages that allow you to verify a proof by induction. For example, I used [Coq](https://coq.inria.fr/), but I'm sure there are also others. Upvotes: 2
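Short of a full proof assistant, the inductive step of this particular identity can be checked mechanically, because it reduces to a polynomial identity: assuming $\sum\_{k=1}^n k^3 = (n(n+1)/2)^2$, adding $(n+1)^3$ must give $((n+1)(n+2)/2)^2$, i.e. (after clearing the $/4$) $n^2(n+1)^2 + 4(n+1)^3 = (n+1)^2(n+2)^2$. A stdlib-only sketch that verifies this coefficient by coefficient, which, together with the base case, constitutes the induction:

```python
def pmul(p, q):
    # Multiply polynomials given as coefficient lists (lowest degree first).
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def padd(p, q):
    out = [0] * max(len(p), len(q))
    for i, a in enumerate(p):
        out[i] += a
    for i, b in enumerate(q):
        out[i] += b
    return out

n  = [0, 1]   # the polynomial "n"
n1 = [1, 1]   # n + 1
n2 = [2, 1]   # n + 2

# Inductive step, cleared of /4:  n^2 (n+1)^2 + 4 (n+1)^3 == (n+1)^2 (n+2)^2
lhs = padd(pmul(pmul(n, n), pmul(n1, n1)),
           pmul([4], pmul(pmul(n1, n1), n1)))
rhs = pmul(pmul(n1, n1), pmul(n2, n2))
print(lhs == rhs)  # True: the step holds as a polynomial identity

# Base case n = 1:  1^3 == (1*2/2)^2
print(1 ** 3 == (1 * 2 // 2) ** 2)  # True
```

Because two polynomials are equal exactly when their coefficient lists agree, this exact-integer check is a genuine verification of the inductive step, not just a numerical spot-check.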
2020/08/10
637
2,901
<issue_start>username_0: The transformer, introduced in the paper [Attention Is All You Need](https://arxiv.org/abs/1706.03762), is a popular new neural network architecture that is commonly viewed as an alternative to recurrent neural networks, like LSTMs and GRUs. However, having gone through the paper, as well as several online explanations, I still have trouble wrapping my head around how they work. How can a non-recurrent structure be able to deal with inputs of arbitrary length?<issue_comment>username_1: Actually, there is usually an upper bound on the input length of transformers, due to their inability to handle long sequences. At the current stage, this value is usually set to 512 or 1024. However, if you are asking about handling variable input sizes, adding a padding token such as [PAD], as in the BERT model, is a common solution. The positions of [PAD] tokens can be masked in self-attention and therefore have no influence. Let's say we use a transformer model with a 512-token limit on sequence length and pass it an input sequence of 103 tokens. We pad it to 512 tokens. In the attention layer, positions 104 to 512 are all masked, that is, they neither attend nor are attended to. Upvotes: 6 [selected_answer]<issue_comment>username_2: The accepted answer is wonderful; this answer provides an alternative approach for dealing with variable-length inputs. More specifically, it covers what might be done when the input is longer than the maximum sequence length supported by the transformer you have built. We have found it useful to wrap our transformer in a class that allows us to programmatically use a sliding window across inputs that are longer than the supported transformer input length. If the input is less than or equal to the supported length, it is simply processed. If it is longer, we iteratively slide across the data, passing each window into the transformer and then aggregating the outputs.
When we take this approach, we do not typically slide the window one element (word embedding) at a time, but instead use a longer stride, usually two to five embeddings at a time. We have been intending to do some research into evaluating whether using multiple strides improves overall performance, but have not yet done so because of the prohibitive computational cost of using multiple strides. Using a sliding window already significantly increases prediction time, since we are running the predictions multiple times. If this approach seems useful, a simple insight is that you need not pass the inputs in sequentially; instead, we typically build a batch with all of the windows and pass them through all at once. Of course, there is a downside to this approach that might make you decide to split the windows based on sentences or paragraphs: otherwise, your positional encoding will end up being "off", since you are sliding across the inputs. Upvotes: 2
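The [PAD] masking in the accepted answer boils down to setting the attention scores of padded positions to $-\infty$ before the softmax, so they receive exactly zero attention weight. A minimal single-query sketch (the scores and sequence lengths are made up):

```python
import numpy as np

def masked_softmax(scores, pad_mask):
    # pad_mask[j] is True where position j is a [PAD] token; those
    # positions get -inf scores, hence zero weight after softmax.
    scores = np.where(pad_mask, -np.inf, scores)
    e = np.exp(scores - scores.max())  # subtract max for stability
    return e / e.sum()

# One query's attention scores over a sequence padded from 3 real
# tokens up to length 5.
scores = np.array([2.0, 1.0, 0.5, 0.3, 0.1])
pad_mask = np.array([False, False, False, True, True])

weights = masked_softmax(scores, pad_mask)
print(weights.round(3))   # padded positions get exactly 0
print(weights[3:].sum())  # 0.0
```

Because the padded positions contribute nothing, a batch of sequences padded to a common length behaves, inside attention, exactly like a batch of their true variable lengths.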
2020/08/10
709
3,248
<issue_start>username_0: I recently read some introductions to AI alignment, AIXI and decision theory things. As far as I understood, one of the main problems in *AI alignment* is how to define a *utility function* well, not causing something like the paperclip apocalypse. Then a question comes to my mind that whatever the utility function would be, we need a computer to compute the utility and reward, so that there is no way to prevent AGI from seeking it to manipulate the utility function to always give the maximum reward. Just like we humans know that we can give happiness to ourselves in chemical ways and some people actually do so. Is there any way to prevent this from happening? Not just protecting the utility calculator physically from AGI (How can we sure it works forever?), but preventing AGI from thinking of it?
2020/08/10
203
845
<issue_start>username_0: I have been reading more about computer vision and I'm bothered by YOLO and similar deep learning architectures. The thing I am confused about is how non-class image sections are dealt with. In particular, it's not clear to me at all why YOLO doesn't consider every part of an image a possible class. What actually sets the cutoff for detection and then classification?<issue_comment>username_1: The output of YOLO is (x,y,w,h,confidence,class). The confidence value represents whether the rectangle holds an object; the rectangle is treated as class-free when the confidence is low. The `class` value is used only when the `confidence` is high. Upvotes: 2 <issue_comment>username_2: As of now, it is not possible to detect an object of a class the model was not trained on. You have to train the model on that class first, and only then will it detect it. Upvotes: 0
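As a concrete illustration of the confidence cutoff described in the first answer, a hypothetical post-processing step might look like the sketch below. The prediction layout `[x, y, w, h, confidence, class scores...]` and the threshold value are assumptions for illustration, not YOLO's exact implementation:

```python
import numpy as np

def filter_detections(preds, conf_threshold=0.5):
    """preds: (N, 5 + num_classes) array of [x, y, w, h, confidence, class scores...].

    Boxes whose objectness confidence is below the threshold are treated as
    'no object' and discarded; the class is only read for the survivors.
    """
    conf = preds[:, 4]
    keep = conf >= conf_threshold
    boxes = preds[keep, :4]
    classes = preds[keep, 5:].argmax(axis=1)
    return boxes, classes, conf[keep]

preds = np.array([
    [10, 10, 5, 5, 0.9, 0.1, 0.8],   # confident detection, class 1
    [40, 40, 8, 8, 0.2, 0.7, 0.2],   # low confidence -> background, class ignored
])
boxes, classes, scores = filter_detections(preds)
print(len(boxes), classes)
```

Everything below the threshold is simply never classified, which is what sets the "cutoff for detection and then classification" asked about.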
2020/08/10
295
1,263
<issue_start>username_0: Transfer learning consists of taking features learned on one problem and leveraging them on a new, similar problem. In Transfer Learning, we take layers from a previously trained model and freeze them. **Why is this layer freezing required and what are the effects of layer freezing?**<issue_comment>username_1: **Why is this layer freezing required?** It's not. **What are the effects of layer freezing?** The consequences are: (1) Should be *faster* to train (the gradient will have far fewer components) (2) Should require *less* data to train on If you do unfreeze the weights, I'd think your performance would be better because you are adjusting (i.e., fine-tuning) the parameters to your specific problem at hand. I am not sure what the marginal improvements are in practice, as I have not experimented much with fine-tuning (like are the improvements typically a 0.01% reduction in error rate? Not sure.) Upvotes: 2 [selected_answer]<issue_comment>username_2: Layer freezing means that the layer weights of the trained model do not change when reused on a subsequent downstream task; they remain frozen. Basically, when backpropagation is performed during training, these layer weights aren't updated. Upvotes: 0
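A minimal NumPy sketch of what freezing means mechanically (a toy two-layer linear model, not the Keras/PyTorch API): the frozen layer's gradient update is simply skipped during backpropagation, so its weights never change while the new head is trained. All names and sizes here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear "network": prediction = x @ W1 @ W2.
# W1 plays the role of pretrained (transferred) weights; W2 is the new head.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 1))
W1_before, W2_before = W1.copy(), W2.copy()

x = rng.normal(size=(8, 3))
y = rng.normal(size=(8, 1))

lr = 0.1
freeze_W1 = True          # "freezing" = simply never applying W1's update
for _ in range(20):
    h = x @ W1
    err = h @ W2 - y      # gradient of 0.5 * mean squared error w.r.t. prediction
    grad_W2 = h.T @ err / len(x)
    grad_W1 = x.T @ (err @ W2.T) / len(x)
    if not freeze_W1:
        W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print(np.allclose(W1, W1_before), np.allclose(W2, W2_before))
```

Note that `grad_W1` is still computed here for clarity; in practice a framework skips that work too, which is where the speed benefit mentioned in the answer comes from.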
2020/08/11
4,332
10,457
<issue_start>username_0: I am implementing OpenAI gym's cartpole problem using Deep Q-Learning (DQN). I followed tutorials (video and otherwise) and learned all about it. I implemented the code myself and I thought it should work, but the agent is not learning. I would really appreciate it if someone could pinpoint where I am going wrong. Note that I have a target neural network and a policy network already there. The code is as below.

```
import numpy as np
import gym
import random
from keras.optimizers import Adam
from keras.models import Sequential
from keras.layers import Dense
from collections import deque

env = gym.make('CartPole-v0')

EPISODES = 2000
BATCH_SIZE = 32
DISCOUNT = 0.95
UPDATE_TARGET_EVERY = 5
STATE_SIZE = env.observation_space.shape[0]
ACTION_SIZE = env.action_space.n
SHOW_EVERY = 50

class DQNAgents:

    def __init__(self, state_size, action_size):
        self.state_size = state_size
        self.action_size = action_size
        self.replay_memory = deque(maxlen = 2000)
        self.gamma = 0.95
        self.epsilon = 1
        self.epsilon_decay = 0.995
        self.epsilon_min = 0.01
        self.model = self._build_model()
        self.target_model = self.model
        self.target_update_counter = 0
        print('Initialize the agent')

    def _build_model(self):
        model = Sequential()
        model.add(Dense(20, input_dim = self.state_size, activation = 'relu'))
        model.add(Dense(10, activation = 'relu'))
        model.add(Dense(self.action_size, activation = 'linear'))
        model.compile(loss = 'mse', optimizer = Adam(lr = 0.001))
        return model

    def update_replay_memory(self, current_state, action, reward, next_state, done):
        self.replay_memory.append((current_state, action, reward, next_state, done))

    def train(self, terminal_state):
        # Sample from replay memory
        minibatch = random.sample(self.replay_memory, BATCH_SIZE)
        # Picks the current states from the randomly selected minibatch
        current_states = np.array([t[0] for t in minibatch])
        current_qs_list = self.model.predict(current_states)  # gives the Q value for the policy network
        new_state = np.array([t[3] for t in minibatch])
        future_qs_list = self.target_model.predict(new_state)
        X = []
        Y = []
        # This loop will run 32 times (actually minibatch times)
        for index, (current_state, action, reward, next_state, done) in enumerate(minibatch):
            if not done:
                new_q = reward + DISCOUNT * np.max(future_qs_list)
            else:
                new_q = reward
            # Update Q value for given state
            current_qs = current_qs_list[index]
            current_qs[action] = new_q
            X.append(current_state)
            Y.append(current_qs)
        # Fitting the weights, i.e. reducing the loss using gradient descent
        self.model.fit(np.array(X), np.array(Y), batch_size = BATCH_SIZE, verbose = 0, shuffle = False)
        # Update target network counter every episode
        if terminal_state:
            self.target_update_counter += 1
        # If counter reaches set value, update target network with weights of main network
        if self.target_update_counter > UPDATE_TARGET_EVERY:
            self.target_model.set_weights(self.model.get_weights())
            self.target_update_counter = 0

    def get_qs(self, state):
        return self.model.predict(np.array(state).reshape(-1, *state.shape))[0]

''' We start here'''
agent = DQNAgents(STATE_SIZE, ACTION_SIZE)

for e in range(EPISODES):
    done = False
    current_state = env.reset()
    time = 0
    total_reward = 0
    while not done:
        if np.random.random() > agent.epsilon:
            action = np.argmax(agent.get_qs(current_state))
        else:
            action = env.action_space.sample()
        next_state, reward, done, _ = env.step(action)
        agent.update_replay_memory(current_state, action, reward, next_state, done)
        if len(agent.replay_memory) < BATCH_SIZE:
            pass
        else:
            agent.train(done)
        time += 1
        current_state = next_state
        total_reward += reward
    print(f'episode : {e}, steps {time}, epsilon : {agent.epsilon}')
    if agent.epsilon > agent.epsilon_min:
        agent.epsilon *= agent.epsilon_decay
```

Results for the first 40ish iterations are below (look for the number of steps, they should be increasing and should reach a maximum of 199) ``` episode : 0, steps 14, epsilon : 1 episode : 1, steps 13, epsilon : 0.995 episode : 2, steps 17, epsilon : 
0.990025 episode : 3, steps 12, epsilon : 0.985074875 episode : 4, steps 29, epsilon : 0.9801495006250001 episode : 5, steps 14, epsilon : 0.9752487531218751 episode : 6, steps 11, epsilon : 0.9703725093562657 episode : 7, steps 13, epsilon : 0.9655206468094844 episode : 8, steps 11, epsilon : 0.960693043575437 episode : 9, steps 14, epsilon : 0.9558895783575597 episode : 10, steps 39, epsilon : 0.9511101304657719 episode : 11, steps 14, epsilon : 0.946354579813443 episode : 12, steps 19, epsilon : 0.9416228069143757 episode : 13, steps 16, epsilon : 0.9369146928798039 episode : 14, steps 14, epsilon : 0.9322301194154049 episode : 15, steps 18, epsilon : 0.9275689688183278 episode : 16, steps 31, epsilon : 0.9229311239742362 episode : 17, steps 14, epsilon : 0.918316468354365 episode : 18, steps 21, epsilon : 0.9137248860125932 episode : 19, steps 9, epsilon : 0.9091562615825302 episode : 20, steps 26, epsilon : 0.9046104802746175 episode : 21, steps 20, epsilon : 0.9000874278732445 episode : 22, steps 53, epsilon : 0.8955869907338783 episode : 23, steps 24, epsilon : 0.8911090557802088 episode : 24, steps 14, epsilon : 0.8866535105013078 episode : 25, steps 40, epsilon : 0.8822202429488013 episode : 26, steps 10, epsilon : 0.8778091417340573 episode : 27, steps 60, epsilon : 0.8734200960253871 episode : 28, steps 17, epsilon : 0.8690529955452602 episode : 29, steps 11, epsilon : 0.8647077305675338 episode : 30, steps 42, epsilon : 0.8603841919146962 episode : 31, steps 16, epsilon : 0.8560822709551227 episode : 32, steps 12, epsilon : 0.851801859600347 episode : 33, steps 12, epsilon : 0.8475428503023453 episode : 34, steps 10, epsilon : 0.8433051360508336 episode : 35, steps 30, epsilon : 0.8390886103705794 episode : 36, steps 21, epsilon : 0.8348931673187264 episode : 37, steps 24, epsilon : 0.8307187014821328 episode : 38, steps 33, epsilon : 0.8265651079747222 episode : 39, steps 32, epsilon : 0.8224322824348486 episode : 40, steps 15, epsilon : 
0.8183201210226743 episode : 41, steps 20, epsilon : 0.8142285204175609 episode : 42, steps 37, epsilon : 0.810157377815473 episode : 43, steps 11, epsilon : 0.8061065909263957 episode : 44, steps 30, epsilon : 0.8020760579717637 episode : 45, steps 11, epsilon : 0.798065677681905 episode : 46, steps 34, epsilon : 0.7940753492934954 episode : 47, steps 12, epsilon : 0.7901049725470279 episode : 48, steps 26, epsilon : 0.7861544476842928 episode : 49, steps 19, epsilon : 0.7822236754458713 episode : 50, steps 20, epsilon : 0.778312557068642 ```<issue_comment>username_1: I think the problem is with the OpenAI gym CartPole-v0 environment's reward structure. The reward is always +1 for each time step, so even when the pole falls the reward is +1. So we need to check and redefine the reward for this case. In the train function, try this:

```
if not done:
    new_q = reward + DISCOUNT * np.max(future_qs_list)
else:
    # if done, assign some negative reward
    new_q = -20
```

(Or change the reward during the replay buffer update.) Check lines 81 and 82 of the Qlearning.py code in [this](https://github.com/girishdhegde/reinforcement-learning/) repo for further clarification. Upvotes: 0 <issue_comment>username_2: There is a really small mistake in here that causes the problem:

```
for index, (current_state, action, reward, next_state, done) in enumerate(minibatch):
    if not done:
        new_q = reward + DISCOUNT * np.max(future_qs_list)  # HERE
    else:
        new_q = reward
    # Update Q value for given state
    current_qs = current_qs_list[index]
    current_qs[action] = new_q
    X.append(current_state)
    Y.append(current_qs)
```

np.max(future\_qs\_list) should be np.max(future\_qs\_list[index]), since you are currently taking the highest Q of the entire batch instead of the highest Q of that transition's own next state. 
It's like this after changing that (remember, an epsilon of 1 means that you get 100% of your actions taken by a dice roll, so I let it go for a few more epochs; I also tried it with the old code but indeed didn't get more than 50 steps (even after 400 epochs/episodes)) ``` episode : 52, steps 16, epsilon : 0.7705488893118823 episode : 53, steps 25, epsilon : 0.7666961448653229 episode : 54, steps 25, epsilon : 0.7628626641409962 episode : 55, steps 36, epsilon : 0.7590483508202912 episode : 56, steps 32, epsilon : 0.7552531090661897 episode : 57, steps 22, epsilon : 0.7514768435208588 episode : 58, steps 55, epsilon : 0.7477194593032545 episode : 59, steps 24, epsilon : 0.7439808620067382 episode : 60, steps 46, epsilon : 0.7402609576967045 episode : 61, steps 11, epsilon : 0.736559652908221 episode : 62, steps 14, epsilon : 0.7328768546436799 episode : 63, steps 13, epsilon : 0.7292124703704616 episode : 64, steps 113, epsilon : 0.7255664080186093 episode : 65, steps 33, epsilon : 0.7219385759785162 episode : 66, steps 33, epsilon : 0.7183288830986236 episode : 67, steps 39, epsilon : 0.7147372386831305 episode : 68, steps 27, epsilon : 0.7111635524897149 episode : 69, steps 22, epsilon : 0.7076077347272662 episode : 70, steps 60, epsilon : 0.7040696960536299 episode : 71, steps 40, epsilon : 0.7005493475733617 episode : 72, steps 67, epsilon : 0.697046600835495 episode : 73, steps 115, epsilon : 0.6935613678313175 episode : 74, steps 61, epsilon : 0.6900935609921609 episode : 75, steps 43, epsilon : 0.6866430931872001 episode : 76, steps 21, epsilon : 0.6832098777212641 episode : 77, steps 65, epsilon : 0.6797938283326578 episode : 78, steps 45, epsilon : 0.6763948591909945 episode : 79, steps 93, epsilon : 0.6730128848950395 episode : 80, steps 200, epsilon : 0.6696478204705644 episode : 81, steps 200, epsilon : 0.6662995813682115 ``` Upvotes: 3 [selected_answer]
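The accepted answer's fix can be seen in isolation with plain NumPy. The Q-values below are made up for illustration; `future_qs_list` stands for the target network's outputs for a minibatch of next states:

```python
import numpy as np

# Q-values of the *next* states for a minibatch of 4 transitions, 2 actions each
future_qs_list = np.array([[0.5, 1.0],
                           [2.0, 0.1],
                           [0.3, 0.4],
                           [9.0, 0.2]])
rewards, DISCOUNT = np.ones(4), 0.95

# buggy: every transition bootstraps from the best Q in the whole batch (9.0)
buggy_targets = rewards + DISCOUNT * np.max(future_qs_list)

# fixed: each transition bootstraps from the max over *its own* next state
fixed_targets = np.array([rewards[i] + DISCOUNT * np.max(future_qs_list[i])
                          for i in range(4)])
print(buggy_targets, fixed_targets)
```

The buggy version makes all four targets identical (driven by the single largest value in the batch), which is why the agent never learns a useful value function.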
2020/08/12
1,043
4,678
<issue_start>username_0: I'm new to the artificial intelligence field. In our first chapters, there is one topic called "problem-solving by searching". After searching for it on the internet, I found the *depth-first search* algorithm. The algorithm is easy to understand, but no one explains why this algorithm is included in the study of artificial intelligence. Where do we use it? What makes it an artificial intelligence algorithm? Is every search algorithm an AI algorithm?<issue_comment>username_1: This is fundamentally a philosophical question. What makes AI AI? But first things first: why would DFS be considered an AI algorithm? In its most basic form, DFS is a very general algorithm that is applied to wildly different categories of problems: topological sorting, finding all the connected components in a graph, etc. It may also be used for searching. For instance, you could use DFS for finding a path in a 2D maze (although not necessarily the shortest one). Or you could use it to navigate through more abstract state spaces (e.g. between configurations of chess or in the towers of Hanoi). And this is where the connection to AI arises. DFS can be used on its own for navigating such spaces, or as a basic subroutine for more complex algorithms. I believe that in the book *Artificial Intelligence: A Modern Approach* (which you may be reading at the moment) they introduce DFS and Breadth-First Search this way, as a first milestone before reaching more complex algorithms like A\*. Now, you may be wondering why such search algorithms should be considered AI. Here, I'm speculating, but maybe the source of the confusion comes from the fact that DFS does not learn anything. This is a common misconception among new AI practitioners. Not every AI technique has to revolve around learning. In other words, AI != Machine Learning. ML is one of the many subfields within AI. In fact, early AI (around the 50s-60s) was more about logical reasoning than it was about learning. 
AI is about making an artificial system behave "intelligently" in a given setting, whatever it takes to reach that intelligent behavior. If what it takes is applying well-known algorithms from computer science like DFS, then so be it. Now, what is it that intelligent means? This is where we enter more philosophical grounds. My interpretation is that "intelligence" is a broad term to define the large set of techniques that we use to approach the immense complexity that reality and certain puzzle-like problems have to offer. Often, "intelligent behavior" revolves around heuristics and proxy methods away from the perfect, provable algorithms that work elsewhere in computer science. While certain algorithms (like DFS or A\*) may be proven to give optimal answers if infinitely many resources can be devoted to the task at hand, only in sufficiently constrained settings would such techniques be affordable. Fortunately, we can make them work in many situations (like A\* for chess or for robot navigation, or Monte Carlo Tree Search for Go), but only if reasonable assumptions and constraints over the state space are imposed. For all the rest is where learning techniques (like Markov Random Fields for image segmentation, or Neural Nets paired with Reinforcement Learning for situated agents) may come handy. Funny enough, even if intelligence is often regarded as a good thing, my interpretation can be summed up as *imperfect modes of behavior to address immensely complex problems for which no known perfect solution exists (with rare exceptions in sufficiently bounded problems)*. If we had a huge table that, for each chess position, gives the best possible move you can make, and put that table inside a program, would this program be intelligent? Maybe you'd think so, but in any case it seems more arguable than a program that makes real-time reasoning and spits a decision after some reasonable time, even if it's not the best one. 
Similarly, do you consider sorting algorithms intelligent? Again, the answer is arguable, but the fact is that algorithms exist with optimal time and memory complexities, we know that we can't do better than what those algorithms do, and we do not have to resort to any heuristic or any learning to do better (disclaimer: I haven't actually checked if there's some madman out in the wild applying learning to solve sorting with better average times). Upvotes: 5 [selected_answer]<issue_comment>username_2: `DFS` on its own would not typically be considered `AI` imo. It is a standard computer science deterministic algorithm. Instead an intelligent agent might use `DFS` to inform its decision making as part of an AI package. Upvotes: 1
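Since the answer mentions DFS for finding a path in a 2D maze, here is a self-contained sketch of that use (illustrative; like the answer says, it returns *a* path, not necessarily the shortest one):

```python
def dfs_path(maze, start, goal):
    """Depth-first search for a path in a grid maze.

    maze: list of strings, '#' = wall. Returns a list of (row, col) cells or None.
    """
    rows, cols = len(maze), len(maze[0])
    stack, seen = [(start, [start])], {start}
    while stack:
        (r, c), path = stack.pop()          # LIFO pop = depth-first order
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and maze[nr][nc] != '#' and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None

maze = ["..#.",
        ".##.",
        "....",
        ".#.."]
path = dfs_path(maze, (0, 0), (3, 3))
print(path)
```

Swapping the stack for a queue turns this into Breadth-First Search, which is exactly the kind of small design choice the textbook uses these two algorithms to illustrate.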
2020/08/12
2,282
7,481
<issue_start>username_0: I am trying to re-implement the [SDNE algorithm](https://www.kdd.org/kdd2016/papers/files/rfp0191-wangAemb.pdf) for graph embedding by PyTorch. I get stuck at some issues about evaluation metric **Precision@K**. > > *precision@k* is a metric which gives equal weight to the returned instance. It is defined as follows > > > $$precision@k(i) = \frac{\left| \, \{ j \, | \, i, j \in V, index(j) \le k, \Delta\_i(j) = 1 \} \, \right|}{k}$$ > > > where $V$ is the vertex set, $index(j)$ is the ranked index of the $j$-th vertex and $\Delta\_i(j) = 1$ indicates that $v\_i$ and $v\_j$ have a link. > > > I don't understand what "ranked index of the $j$-th vertex" means. Beside, I am also confused about the **MAP** metric in **section 4.3**. I don't understand how to calculate it. > > *Mean Average Precision (MAP)* is a metric with good discrimination and stability. Compared with *precision@k*, it is > more concerned with the performance of the returned items ranked ahead. It is calculated as follows: > $$AP(i) = \frac{\sum\_j precision@j(i) \cdot \Delta\_i(j)}{\left| \{ \Delta\_i(j) = 1 \} \right|}$$ > $$MAP = \frac{\sum\_{i \in Q} AP(i)}{|Q|}$$ > where $Q$ is the query set. > > > If anyone is familiar with these metrics, could you help me to explain them?<issue_comment>username_1: These measures are used for evaluating how "good" an embedding of a graph is or how "good" the graph reconstructed from the embedding resembles the original. Given the embedding and vertex $i$, it seems to be that the rank of the vertices is dependent on the probability of there being a link between vertex $i$ and vertex $j$ in the original graph. If there is a higher probability of there being a link between $i$ and $j$ in the original graph, $j$ has a lower rank. 
In other words, $precision@k(i)$ is the proportion of vertices $j$ that vertex $i$ has a link to in the original graph out of the $k$ vertices for which vertex $i$ has the highest probability of having a link to, recovered from the embedding. This matches up with the common definition of $precision@n$ used in evaluating information/document retrieval, defined as the proportion of relevant documents out of the $n$ best retrieved documents. The average precision of a vertex, $AP(i)$, is the average of $precision@j$ over all $j$ such that there is a link between vertex $i$ and vertex $j$. Perhaps a more clear definition would have been $$AP(i) = \frac{\sum\_{j \in S\_i} precision@j(i)}{\left| S\_i \right|}$$ where $S\_i = \{j \, |\, \Delta\_i(j) = 1 \}$, the set of all $j$ such that there is a link from $i$ to $j$. $MAP$ for a query set $Q$ is then the mean of the average precision ($AP$) over all vertices in $Q$. Upvotes: 1 [selected_answer]<issue_comment>username_2: I understand the confusion and I wanted to refer to this (older post) because the metric really is unclear in the context of the SDNE paper. Perhaps I can try to explain it for future readers, in hopes that this makes sense. All this is my own interpretation, of course. SDNE is an autoencoder setup that outputs both node embeddings ($y\_i$ vector for focal node $i$) and a reconstruction of the ties of $i$ denoted by $\hat{x}\_i$ with the original being $x\_i$. Note that $y\_i$ is the input to the decoder component, and thus the reconstruction is a function of the embedding. In SDNE, $x\_i$ are the inputs and the "labels", hence autoencoder. Now, the notion of precision comes from information retrieval. However, for networks, the problem setting differs. We do not retrieve documents repeatedly, instead we literally predict an entire adjacency vector (especially if one uses transformer layers and such). For that reason, the "ranking" part needs to be reformulated to make any sense. 
Let's take a naive view and see what would make substantive sense. In the context of a reconstructed network, precision should mean the following: "What percentage of reconstructed ties are in the real network?" Whereas recall would mean. "What percentage of real ties in the network are found in the reconstruction?" So we have our reconstruction $\hat{x\_i}$ and our ground truth vector $x\_i$ - typically rows of the adjacency matrix of the network $\hat{X}$ and $X$ respectively. Let's denote these networks as $\hat{X}$ and $X$ as well as there won't be any confusion (in the paper the authors distinguish the network $G$, its adjacency matrix $S$ and, finally, the inputs and outputs $X$ as subset of $S$.) The vectors denote ties between $i$ and $j$. With some abuse of notation, we could write for unweighted networks $(i,j) \in X \Leftrightarrow x\_{i,j}=1$ Precision would be: $$\frac{|(i,j) \in \hat{X} \cap (i,j) \in X|}{|(i,j) \in \hat{X}|} \Leftrightarrow \frac{|\{j| x\_{i,j}=1 \cap \hat{x}\_{i,j}=1\}|}{|\{j| \hat{x}\_{i,j}=1\}|}$$ and recall would have the denominator with $x\_{i,j}=1$ instead of $\hat{x}\_{i,j}=1$. The only difference to the precision@k metric in the paper comes from the ranking. As mentioned above, it is not immediately apparent from the paper how a reconstruction would yield probabilities that we can use for a rank- especially if ties are binary. However, SDNE does not predict binary ties, even if these appear in the original graph. Instead, it applies a sigmoid function and thus gets some value that is proportional to the likelihood of a tie between two nodes. Long story short, each element in $\hat{x}\_i$ is akin to a probabilistic prediction across possible neighbors. To get the $index(j)$ we can thus rank the values of $\hat{x}\_i$ from highest to lowest. Let the top $k$ of $\hat{x}\_i$ be above some cutoff value $t\_i(k)$. 
We can write precision@k as $$\frac{|\{j| x\_{i,j}=1 \cap \hat{x}\_{i,j} \geq t\_i(k)\}|}{|\{j|\hat{x}\_{i,j} \geq t\_i(k)\}|}=\frac{|\{j| x\_{i,j}=1 \cap \hat{x}\_{i,j} \geq t\_i(k)\}|}{k}$$ If our network were weighted, we could do a similar ranking for $x\_i$. In any case, this solves the first issue. Now, the main problem comes from the description of $AP(i)$. Both in the paper, and in the previous answer given, there is an obvious mistake: Where precision@k takes an integer $k$ as parameter, we are to sum over $j$ in $AP(i)$. That is, we are told in the other answer (and in the paper) that $$AP(i) = \frac{\sum\_{j \in S\_i} \text{precision@}j(i)}{|S\_i|}$$ with $S\_i=\{j|x\_{i,j}=1\}$ This of course makes no sense. $j$ comes from the node set. Nodes could be numbers, but could also be things like $v\_i = $"Apple" and $v\_j=$"potato". Obviously, the measure precision@"apple"$(i)$ can not be derived from the above definition. So, we need to find an interpretation that works. Note first, that the denominator is the number of neighbors of $i$ in the network. Thus, the above sum should maximally yield $|S\_i|$. Furthermore, the authors want to sum over neighbors $j$, employing some sort of precision measure for each. Consequently, whatever is summed up, should sum up to 1 for each $j \in S\_i$. Let's consider an embedding that predicts everything perfectly. Note that then precision@k is $1$ for every $k$. That leaves us with the conclusion that the measure must be $$\frac{1}{|S\_i|}\sum\_{j \in S\_i} \frac{\sum\_k \text{precision@}k(i,j)}{|k|}$$ where $\text{precision@}k(i,j)$ denotes some node wise measure of precision. In any case, the measure collapses to $$\frac{\sum\_k \text{precision@}k(i)}{|k|}$$ for each $k$ where $\text{precision@}k(i)>0$. Upvotes: 1
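Under the ranking interpretation worked out above, precision@k and AP for a single node can be sketched as follows. This uses the common IR-style reading of AP (mean of precision@rank taken at each true link's rank); `scores` stands for the reconstructed tie strengths and `truth` for the ground-truth adjacency row, both made up here:

```python
import numpy as np

def precision_at_k(scores, truth, k):
    """Fraction of the k highest-scored candidates that are true links."""
    topk = np.argsort(-scores)[:k]
    return truth[topk].sum() / k

def average_precision(scores, truth):
    """Mean of precision@rank taken at the rank of each true link."""
    order = np.argsort(-scores)
    hits, precisions = 0, []
    for rank, j in enumerate(order, start=1):
        if truth[j]:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions))

truth = np.array([1, 0, 1, 0, 0])              # ground-truth links of node i
scores = np.array([0.9, 0.8, 0.7, 0.2, 0.1])   # reconstructed tie strengths
print(precision_at_k(scores, truth, 2))        # one of the top-2 is a true link
print(average_precision(scores, truth))
```

MAP over a query set is then just the mean of `average_precision` across the queried nodes; note that a perfect ranking gives AP = 1, matching the sanity check used in the answer.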
2020/08/13
525
2,500
<issue_start>username_0: In Sutton and Barto's book about reinforcement learning, policy iteration and value iteration are presented as separate/different algorithms. This is very confusing because policy iteration includes an update/change of value and value iteration includes a change in policy. They are the same thing, as also shown in the Generalized Policy Iteration method. Why then, in many papers as well, are they (i.e. policy and value iteration) considered two separate update methods to reach an optimal policy?<issue_comment>username_1: Policy iteration is made up of two steps. The first is a full policy evaluation, where a value function is calculated for the current policy. The second is policy improvement, where the policy is made greedy with respect to the value function. Value iteration looks to speed things up by stopping policy evaluation after one iteration, making the policy greedy with respect to that value function, and repeating until convergence. Clearly, these are two different algorithms, hence why they are considered to be different. They are, however, very closely linked, which is why you might consider them to be 'the same thing'. I guess you could say they belong to the same family of algorithms. Upvotes: 2 <issue_comment>username_2: Policy iteration is based on the insight that for a *given* policy, it is straightforward to compute the value function (the long-run expected discounted value of being in a given state) *exactly* -- it is a set of linear equations at that point. So, we update the policy, then calculate the *exact* values of the states for always following that particular policy, and based on that we update the policy again, etc. Value iteration, in contrast, does not use that insight. It just updates estimates of the values of being in the states one step at a time. 
If these values are initialized at 0, you can think of the $i$th iteration as computing the value of what would be the optimal policy *if we knew the MDP would end after $i$ iterations*. We never really have to think explicitly about policies (though we are in effect computing a policy each iteration), and never directly calculate the infinite sum of expected discounted rewards. These are just the vanilla variants, and it is possible to mix and match these ideas -- e.g., you might not evaluate a policy by explicitly solving a system of linear equations but rather just do some iterations -- but the vanilla variants are clearly distinct. Upvotes: 1
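The distinction can be seen on a tiny deterministic MDP (two states, where action $a$ simply moves to state $a$; the rewards are made up for illustration): value iteration backs values up one step at a time, while policy iteration solves the linear system for each candidate policy exactly, as the second answer describes. Both reach the same optimum:

```python
import numpy as np

# deterministic 2-state MDP: taking action a in any state moves to state a
nxt = np.array([[0, 1], [0, 1]])            # nxt[s, a] = next state
rew = np.array([[0.0, 1.0], [0.0, 2.0]])    # rew[s, a] = immediate reward
gamma = 0.9

def value_iteration(tol=1e-8):
    V = np.zeros(2)
    while True:
        Q = rew + gamma * V[nxt]            # one-step backup for every (s, a)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

def policy_iteration():
    policy = np.zeros(2, dtype=int)
    while True:
        # exact policy evaluation: solve (I - gamma * P_pi) V = r_pi
        P = np.zeros((2, 2))
        P[np.arange(2), nxt[np.arange(2), policy]] = 1.0
        r = rew[np.arange(2), policy]
        V = np.linalg.solve(np.eye(2) - gamma * P, r)
        # greedy policy improvement
        new_policy = (rew + gamma * V[nxt]).argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return V, policy
        policy = new_policy

V_vi, pi_vi = value_iteration()
V_pi, pi_pi = policy_iteration()
print(pi_vi, pi_pi, V_vi)
```

Here the optimal policy is to move to (and stay in) state 1 from both states, with $V^*(1) = 2/(1-0.9) = 20$ and $V^*(0) = 1 + 0.9 \cdot 20 = 19$.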
2020/08/15
591
2,745
<issue_start>username_0: I would like to bind kernel parameters through channels/feature-maps for each filter. In a conv2d operation, each filter consists of HxWxC parameters I would like to have filters that have HxW parameters, but the same (HxWxC) form. The scenario I have is that I have 4 gray pictures of bulb samples (yielding similar images from each side), which I overlay as channels, but a possible failure that needs to be detected might only appear on one side (a bulb has 4 images and a single classification). The rotation of the object when the picture is taken is arbitrary. Now I solve this by shuffling the channels at training, but it would be more efficient if I could just bind the kernel parameters. Pytorch and Tensorflow solutions are both welcome.
2020/08/16
595
2,421
<issue_start>username_0: I have two functions $f(x)$ and $g(x)$, and each of them can be computed with a neural network $\phi\_f$ and $\phi\_g$. My question is, how can I write a neural net for $f(x)g(x)$? So, for example, if $g(x)$ is constant and equal to $c$ and $\phi\_f = ((A\_1,b\_1),...(A\_L,b\_L))$, then $\phi\_{fg} = ((A\_1,b\_1),...,(cA\_L,cb\_L))$. Actually, I need to show it for $f(x)=x$ and $g(x)=x^2$ if this makes things easier.<issue_comment>username_1: Policy iteration is made up of two steps. The first is a full policy evaluation, where a value function is calculated for the current policy. The second is policy improvement, where the policy is made greedy with respect to the value function. Value iteration looks to speed things up by stopping policy evaluation after one iteration, making the policy greedy with respect to that value function, and repeating until convergence. Clearly, these are two different algorithms, hence why they are considered to be different. They are, however, very closely linked, which is why you might consider them to be 'the same thing'. I guess you could say they belong to the same family of algorithms. Upvotes: 2 <issue_comment>username_2: Policy iteration is based on the insight that for a *given* policy, it is straightforward to compute the value function (the long-run expected discounted value of being in a given state) *exactly* -- it is a set of linear equations at that point. So, we update the policy, then calculate the *exact* values of the states for always following that particular policy, and based on that we update the policy again, etc. Value iteration, in contrast, does not use that insight. It just updates estimates of the values of being in the states one step at a time. If these values are initialized at 0, you can think of the $i$th iteration as computing the value of what would be the optimal policy *if we knew the MDP would end after $i$ iterations*.
We never really have to think explicitly about policies (though we are in effect computing a policy each iteration), and never directly calculate the infinite sum of expected discounted rewards. These are just the vanilla variants and it is possible to mix and match these ideas -- e.g., you might not evaluate a policy by explicitly solving a system of linear equations but rather just do some iterations -- but the vanilla variants are clearly distinct. Upvotes: 1
2020/08/19
563
2,113
<issue_start>username_0: I have an image dataset, which is composed of 113695 images for training and 28424 images for validation. Now, when I use `ImageDataGenerator` and `flow_from_dataframe`, it has the parameter `batch_size`. How can I take the correct number for `batch_size`, because both numbers cannot be divided by the same number? Should I drop four images from the validation data to make them divisible by a `batch_size` of 5? Or is there another way?<issue_comment>username_1: [This](https://stats.stackexchange.com/a/153535/264183) Cross Validated post might answer your question. In a nutshell: * A single batch (that is all your data in one batch) will result in a smooth trajectory on the loss surface. The drawback is that all your data might not fit into your memory. Which is highly likely for ~100k images. * One image per batch (batch size = 1) will result in a more stochastic trajectory since the gradients are calculated on a single example. The advantages are of a computational nature, such as faster training time. The middle way is to choose the batch size in such a way that your batch fits into memory and the gradients behave less 'noisily'. To be honest there is no 'golden' number, personally I like to choose powers of two. Don't worry that your data is not divisible by the batch size. Libraries will take care of that internally; the last batch will just be smaller than the defined batch size ($N \text{ mod } b$). Upvotes: 1 <issue_comment>username_2: From Andrew's lesson on Coursera, `batch_size` should be a power of 2, e.g. 512, 1024, 2048. It will be faster for training. And you don't need to drop your last images to fit a `batch_size` of 5, for example. In libraries like Tensorflow or Pytorch, the last batch will have `number_training_images % 5` images, where 5 is your `batch_size`. Last but not least, `batch_size` needs to fit into your training memory (CPU or GPU). You can try several large `batch_size` values to find out which one does not run out of memory.
The smaller the number of mini-batches (`number_mini_batch = number_training_images // batch_size + 1`), the faster the training. Hope this helps! Upvotes: 0
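As both answers note, the dataset size need not be divisible by the batch size; the generator just yields one smaller final batch. A quick sketch of the arithmetic with the asker's numbers (the `batch_size` of 32 is an assumed example, not from the question):

```python
train_size, val_size, batch_size = 113695, 28424, 32

def batches(n, b):
    # Number of batches per epoch: one extra, smaller batch holds the remainder.
    full, last = divmod(n, b)
    return full + (1 if last else 0), last

n_train_batches, last_train = batches(train_size, batch_size)
n_val_batches, last_val = batches(val_size, batch_size)
print(n_train_batches, last_train)  # 3553 batches; the last one has 31 images
print(n_val_batches, last_val)      # 889 batches; the last one has 8 images
```

Nothing needs to be dropped: the generator simply emits the remainder as its final batch each epoch.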
2020/08/21
567
2,430
<issue_start>username_0: When training a network using word embeddings, it is standard to add an embedding layer to first convert the input vector to the embeddings. However, assuming the embeddings are pre-trained and frozen, there is another option. We could simply preprocess the training data prior to giving it to the model so that it is already converted to the embeddings. This will speed up training, since this conversion need only be performed once, as opposed to on the fly for each epoch. Thus, the second option seems better. But the first choice seems more common. Assuming the embeddings are pre-trained and frozen, is there a reason I might choose the first option over the second?<issue_comment>username_1: Assuming that the vocabulary your model comes up with is a subset of the pretrained embeddings, for example of Google's pretrained word2vec, then following these embeddings may be the better option, if your model can handle that dimensionality. However, that is not always the best solution, taking into account the nature of the problem. For example, if you are trying to use NLP on medical texts that contain rare and special words, then maybe you should use your own embedding layer (assuming that you have an adequate amount of data), or combine both. That is just a thought of mine. There are surely several other use cases that call for an embedding layer. Upvotes: 0 <issue_comment>username_2: There are multiple ways to get word embeddings from a corpus. * **Count Vectorizer:** You can use the `CountVectorizer()` from `sklearn.feature_extraction.text` and then use the `fit_transform()` if the corpus has been converted into a list of sentences * **TF-IDF Vectorizer:** You can use the `TfidfVectorizer` from `sklearn.feature_extraction.text` and then again use the `fit_transform()` on a list of sentences * **word2vec:** You can make a `word2vec` model from `gensim.models` by using `word2vec.Word2vec`.
Upvotes: 1 <issue_comment>username_3: If you have to move a lot of data around during training (like retrieving batches from disk/network/what have you), it's much faster to do so as a rank-3 tensor of [batches, documents, indices] than as a rank-4 tensor of [batches, documents, indices, vectors]. In this case, while the embedding is O(1) wherever you put it, it's more efficient to do so as part of the graph. Upvotes: 3 [selected_answer]
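username_3's point about data movement can be illustrated with a rough back-of-the-envelope sketch; the batch, sequence, and embedding sizes below are invented for illustration:

```python
import numpy as np

batch, seq_len, embed_dim = 64, 200, 300

# Option 1: ship token indices and do the embedding lookup inside the graph.
indices = np.zeros((batch, seq_len), dtype=np.int32)

# Option 2: pre-embed offline and ship dense vectors around instead.
vectors = np.zeros((batch, seq_len, embed_dim), dtype=np.float32)

print(indices.nbytes)   # 51,200 bytes per batch
print(vectors.nbytes)   # 15,360,000 bytes per batch -- 300x more I/O
```

With a 300-dimensional embedding, every batch of pre-embedded data is 300 times larger than the equivalent batch of indices, which is why keeping the (O(1)) lookup in the graph is usually the faster choice despite the conversion being repeated each epoch.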
2020/08/21
586
2,328
<issue_start>username_0: I am practicing with an image dataset whose images have different dimensions. [![here is the sample images ](https://i.stack.imgur.com/FnnVo.png)](https://i.stack.imgur.com/FnnVo.png) If I simply crop and pad them to 1024x1024 (the smallest width among the original images is around 300 and the largest is around 2400, and the widths and heights of the images are not the same), I am not getting good val\_accuracy. It's just giving 49% accuracy. How should I preprocess these images, given that the brightness of the images also varies? My task is to classify them into 5 classes.<issue_comment>username_1: Assuming that the vocabulary your model comes up with is a subset of the pretrained embeddings, for example of Google's pretrained word2vec, then following these embeddings may be the better option, if your model can handle that dimensionality. However, that is not always the best solution, taking into account the nature of the problem. For example, if you are trying to use NLP on medical texts that contain rare and special words, then maybe you should use your own embedding layer (assuming that you have an adequate amount of data), or combine both. That is just a thought of mine. There are surely several other use cases that call for an embedding layer. Upvotes: 0 <issue_comment>username_2: There are multiple ways to get word embeddings from a corpus. * **Count Vectorizer:** You can use the `CountVectorizer()` from `sklearn.feature_extraction.text` and then use the `fit_transform()` if the corpus has been converted into a list of sentences * **TF-IDF Vectorizer:** You can use the `TfidfVectorizer` from `sklearn.feature_extraction.text` and then again use the `fit_transform()` on a list of sentences * **word2vec:** You can make a `word2vec` model from `gensim.models` by using `word2vec.Word2vec`.
Upvotes: 1 <issue_comment>username_3: If you have to move a lot of data around during training (like retrieving batches from disk/network/what have you), it's much faster to do so as a rank-3 tensor of [batches, documents, indices] than as a rank-4 tensor of [batches, documents, indices, vectors]. In this case, while the embedding is O(1) wherever you put it, it's more efficient to do so as part of the graph. Upvotes: 3 [selected_answer]
2020/08/23
354
1,671
<issue_start>username_0: The 'by the book' method of delivering final machine learning models is to include all data in the final training (including validation and test sets). To check the robustness of my model, I use a randomly chosen population for the training and validation sets on each training run (no fixed random seed). The results on validation and then test sets are pretty satisfactory for my case; however, they are different each time: precision spans between 0.7 and 0.9. This is due to the fact that each time different data points fall into the set with which the model is trained. My question is: how do I know that the final training will also generate a good model, and how do I estimate its precision when I do not have any more unseen data?<issue_comment>username_1: The purpose of the test set is to test your model before deploying, otherwise, you would not need the test set in the first place. If you retrain your model by also including the validation and test datasets, of course, you cannot test your model anymore. You need to leave the test dataset separate and not use it for retraining, unless you have more data for testing. Upvotes: 2 <issue_comment>username_2: We usually divide the dataset into multiple subsets, namely training, validation and test sets. During training, we validate the model against the validation set. And during testing, we use the test dataset to obtain metrics for the model. We should make sure the subsets are drawn from the same distribution. Once you've tested it against the test subset, there's not much more you can do. You can also increase your dataset by using multiple data sources, if the problem statement allows you to. Upvotes: 1
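One common way to summarize the situation the asker describes is to report the spread of the score across the repeated random splits rather than any single run; the per-split precision values below are hypothetical stand-ins for the 0.7-0.9 range mentioned in the question:

```python
import statistics

# Hypothetical precision scores from 10 training runs with different random splits.
precisions = [0.72, 0.81, 0.88, 0.76, 0.90, 0.79, 0.84, 0.73, 0.87, 0.80]

mean = statistics.mean(precisions)
stdev = statistics.stdev(precisions)

# Report an interval rather than a point estimate: the final model trained on
# all the data is then quoted with the uncertainty observed across splits.
print(f"precision ~ {mean:.2f} +/- {stdev:.2f}")
```

The mean-plus-spread across splits is the honest estimate for the final model's precision; the model retrained on all data is generally expected to do at least as well on average, but the spread cannot be verified again once the test data has been folded in.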
2020/08/24
824
3,521
<issue_start>username_0: The definitions for these two appear to be very similar, and frankly, I've only been using the term "active learning" the past couple of years. What is the actual difference between the two? Is one a subset of the other?<issue_comment>username_1: **Active learning (AL)** is a *weakly supervised learning* (WSL) technique where you can have both labelled and unlabelled data [[1](https://academic.oup.com/nsr/article/5/1/44/4093912)]. The main idea behind AL is that the learner (or learning algorithm) can query an "oracle" (e.g. a human) to label some unlabelled instances. AL is similar to *semi-supervised learning* (SSL), which is also a WSL technique, given that both deal with unlabelled and labelled data, but do that differently (i.e. SSL does not use an oracle). [**Online learning**](https://en.wikipedia.org/wiki/Online_machine_learning) refers to machine learning techniques that update the models as new data is collected or arrives sequentially, as opposed to **batch learning** (or *offline learning*), where you first collect a dataset of multiple instances and then you train a model once (although you can later update it when you update your dataset). Batch learning is currently the common way of training machine learning models, given that it avoids problems like the known [**catastrophic interference**](https://ai.stackexchange.com/a/13293/2444) (aka *catastrophic forgetting*) problem, which can occur if you learn online. For example, neural networks are known to face this problem when learning online. There are **incremental learning** (aka *lifelong learning*) algorithms that attempt to address this *catastrophic interference* problem.
Upvotes: 5 [selected_answer]<issue_comment>username_2: As stated in the survey paper "Active Learning Literature Survey": > > The key idea behind **active learning** is that a machine learning algorithm can > achieve greater accuracy with fewer training labels if it is allowed to choose the > data from which it learns. An active learner may pose queries, usually in the form > of unlabeled data instances to be labeled by an *oracle* (e.g., a human annotator). > Active learning is well-motivated in many modern machine learning problems, > where unlabeled data may be abundant or easily obtained, but labels are difficult, > time-consuming, or expensive to obtain. > > > **Online learning** uses data which become available in a sequential order. Its main goal is to update the best predictor for future data at each step. So, online learning is a more general method of machine learning that is opposed to **offline learning**, or **batch learning**, where the whole dataset has already been generated and used for training / updating the model's parameters. Moreover, a common technique for training Machine Learning models is to first perform online learning, in order to acquire an adequate data size, and then perform offline learning on the whole dataset and finally compare the results generated by the two learning processes. On the other hand, active learning can be performed both with online learning[[1](https://www.eecs.tufts.edu/%7Edsculley/papers/activeSpam.pdf)] and offline learning, in order to reduce manual annotation effort during the annotation of training data for machine learning classifiers. That is, independently of how and in what order the data have been generated, active learning should make the fewest queries to an oracle needed to annotate a subset of the data. Upvotes: 3
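The query loop that distinguishes active learning can be sketched in a few lines. Below, a toy 1-D "model" (just a decision boundary, entirely invented for illustration) uses uncertainty sampling to query a simulated oracle for the points it is least sure about, homing in on the true cutoff with only a handful of labels:

```python
import random

random.seed(0)

# Toy pool of unlabelled 1-D points; the simulated oracle knows the true cutoff (0.6).
pool = [random.uniform(0.0, 1.0) for _ in range(200)]
oracle = lambda x: int(x > 0.6)          # stands in for the human annotator

labelled, lo, hi = {}, 0.0, 1.0          # lo/hi bracket the decision boundary
for _ in range(10):
    boundary = (lo + hi) / 2
    # Uncertainty sampling: query the unlabelled point nearest the boundary.
    x = min((p for p in pool if p not in labelled),
            key=lambda p: abs(p - boundary))
    labelled[x] = oracle(x)
    # "Retrain" the toy model: tighten the bracket around the cutoff.
    if labelled[x] == 1:
        hi = min(hi, x)
    else:
        lo = max(lo, x)

print(round((lo + hi) / 2, 2))  # close to the true cutoff 0.6 after 10 queries
```

The same pool labelled by 10 *random* queries would pin the boundary down far less precisely, which is the survey's point: the learner achieves more with fewer labels by choosing what to ask.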
2020/08/25
374
1,435
<issue_start>username_0: In the convolutional layer for CNNs, when you specify the stride of a filter, typical notes show some examples of this but only for the horizontal panning. Is this same stride applied for the vertical direction too when you're done with the current row? In other words, say our input volume is 7x7, and we apply a stride of 1 for a 3x3 filter. Is the output volume 5x5? (which would mean you applied the stride in both the horizontal and vertical panning). Is it possible to apply a different stride for each direction?<issue_comment>username_1: Yes, in Keras this is simply implemented by using a tuple for the stride argument of a convolutional layer, with each element of the tuple corresponding to the stride of each dimension. Upvotes: 2 <issue_comment>username_2: Yes, in Keras you can apply different strides by giving a **tuple/list**, specifying the value of strides along the height and width. If you just give a **single** value, the API assumes the **same value** for **all spatial dimensions**. You can find the official documentation [here](https://keras.io/api/layers/convolution_layers/convolution2d/#:%7E:text=strides%3A%20An%20integer%20or%20tuple,Specifying%20any%20stride%20value%20!%3D) In Pytorch, too, you can specify the values in a tuple for the `stride` argument. Link to Pytorch Documentation for [stride](https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html) Upvotes: 2
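The output size follows the usual convolution formula independently per dimension, so the asker's 7x7 input with a 3x3 filter and stride 1 does give 5x5, and nothing stops the two strides from differing. A quick sketch of the arithmetic, no framework required:

```python
def conv_out(size, kernel, stride, padding=0):
    # Standard "valid"-style convolution output-size formula, per dimension.
    return (size + 2 * padding - kernel) // stride + 1

# 7x7 input, 3x3 filter, stride 1 in both directions -> 5x5 output.
print(conv_out(7, 3, 1), conv_out(7, 3, 1))  # 5 5

# Different strides per direction, as a Keras/PyTorch tuple like strides=(1, 2):
print(conv_out(7, 3, 1), conv_out(7, 3, 2))  # 5 3
```

So a stride tuple `(1, 2)` pans by 1 pixel vertically and 2 horizontally, yielding a 5x3 output on the same 7x7 input.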
2020/08/26
470
2,102
<issue_start>username_0: In Simulated Annealing, a worse solution is accepted with this probability: $$p=e^{-\frac{E(y)-E(x)}{kT}}.$$ If that understanding is correct: Why is this probability function used? This means that, the bigger the energy difference, the smaller the probability of accepting the new solution. I would say the bigger the difference the more we want to escape a local minimum. I plotted that function in Matlab in two dimensions: [![enter image description here](https://i.stack.imgur.com/x1P0p.png)](https://i.stack.imgur.com/x1P0p.png)<issue_comment>username_1: Nice question! My guess is that, if the probability of acceptance increases the bigger the difference between the current and new solutions is, then there's the risk that you need to search a lot again to find a good solution, i.e. you may oscillate between different subspaces or you could actually often end up in subspaces where there are only bad solutions. Your reasoning probably makes sense at the beginning of the search when the initial solutions may not be good enough or maybe if you have parallel searches (and you want to explore the search space at different search subspaces), but once you have a good solution, you don't want to completely discard it and replace it with a quite worse solution. If you perform some experiments with your idea, I would like to see the results. Upvotes: 0 <issue_comment>username_2: Note that you can't really predict whether your escape from a local minimum will work or not - you might just wind up in another, worse local minimum. The probability function you describe increases the likelihood of this happening. By upweighting the likelihood of allowing small energy differences, you allow for the possibility of escaping local minima, while ensuring that whatever new minimum you find can't be that much worse that where you started. 
If you make the acceptance of large energy differences more likely, you can escape local minima more often, but you increase the likelihood that you'll just wind up in a region with an *even higher* local minimum. Upvotes: 1
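The acceptance rule under discussion is a one-liner; the sketch below (with the Boltzmann constant $k$ absorbed into $T$, and illustrative numbers) shows how the acceptance probability falls with the energy gap and rises with temperature:

```python
import math
import random

def accept(delta_e, T):
    # Metropolis criterion: always accept improvements; accept a worse
    # solution with probability exp(-delta_E / T).
    if delta_e <= 0:
        return True
    return random.random() < math.exp(-delta_e / T)

# Larger energy gaps are exponentially less likely to be accepted...
print(math.exp(-0.1 / 1.0))   # ~0.905
print(math.exp(-2.0 / 1.0))   # ~0.135

# ...while the same gap is accepted far more readily at high temperature.
print(math.exp(-2.0 / 10.0))  # ~0.819
```

Annealing schedules exploit the second property: early on, a high $T$ lets the search take large uphill steps and roam between basins; as $T$ is lowered, only small uphill moves survive, which is exactly the "don't trade a good solution for a much worse one" behaviour both answers describe.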
2020/08/26
664
2,679
<issue_start>username_0: I was reading [this article](https://www.unil.ch/files/live/sites/ln/files/shared/Revonsuo_1.pdf) about the question "Why do we dream?" in which the author discusses dreams as a form of rehearsal for future threats, and presents it as an evolutionary advantage. My question is **whether this idea has been explored in the context of RL**. For example, in a competition between AIs on a shooter game, one could design an agent that, besides the behavior it has learned in "normal" training, seeks time in which it is out of danger, to then use its computation time in the game to produce simulations that would further optimize its behavior. As the agent still needs to be somewhat aware of its environment, it could alternate between processing the environment and this kind of simulation. Note that this "in-game" simulation has an advantage with respect to the "pre-game" simulations used for training; the agent in the game experiences the behavior of the other agents, which could not have been predicted beforehand, and then simulates on top of these experiences, e.g. by slightly modifying them. **For more experienced folks, does this idea make sense? Has something similar been explored?** I have absolutely no experience in the field, so I apologize if this question is poorly worded, dumb or obvious. I would appreciate suggestions on how to improve it if this is the case.<issue_comment>username_1: Model-based RL is obviously the correct approach. Mainly because it lets you simulate the environment internally without having direct interaction. And all successful RL algorithms essentially are model-based because nobody has done real-time RL and been successful. Upvotes: 0 <issue_comment>username_2: Yes, the concept of *dreaming* or *imagining* has already been explored in reinforcement learning.
For example, have a look at [Metacontrol for Adaptive Imagination-Based Optimization](https://arxiv.org/pdf/1705.02670.pdf) (2017) by <NAME> et al., which is a paper that I gave a talk/presentation on 1-2 years ago (though I don't remember well the details anymore). There is also a blog post about the topic [Agents that imagine and plan](https://deepmind.com/blog/article/agents-imagine-and-plan) (2017) by DeepMind, which discusses two more recent papers and also mentions Hamrick's paper. In 2018, another related and interesting paper was also presented at NIPS, i.e. [World Models](https://worldmodels.github.io/), by <NAME>. If you search for "imagination/dreaming in reinforcement learning" on the web, you will find more papers and articles about this interesting topic. Upvotes: 3 [selected_answer]
2020/08/27
1,636
4,518
<issue_start>username_0: I came across the following proof of what's commonly referred to as the **log-derivative trick** in policy-gradient algorithms, and I have a question - [![enter image description here](https://i.stack.imgur.com/tqATW.png)](https://i.stack.imgur.com/tqATW.png) While transitioning from the first line to the second, the gradient with respect to policy parameters $\theta$ was pushed into the summation. What bothers me is how it skipped over $\mu (s)$, the distribution of states - which (the way I understand it), **is induced by the policy $\pi\_\theta$ itself!** Why then does it not depend on $\theta$? Let me know what's going wrong! Thank you!<issue_comment>username_1: The reason you are confused is that this is not the full derivation of the Policy Gradient Theorem. You are correct in thinking that $\mu(s)$ depends on the policy $\pi$ which in turn depends on the policy parameters $\theta$, and so there *should* be a derivative of $\mu$ wrt $\theta$, however the Policy Gradient Theorem doesn't require you to take this derivative. In fact, the great thing about the Policy Gradient Theorem is that the final result *does not* require you to take a derivative of the state distribution with respect to the policy parameters. I would encourage you to read and go through the derivation of the Policy Gradient Theorem from e.g. Sutton and Barto to see why you don't need to take the derivative. [![enter image description here](https://i.stack.imgur.com/5IN1C.png)](https://i.stack.imgur.com/5IN1C.png) Above is an image of the Policy Gradient Theorem proof from the Sutton and Barto book. If you carefully go through this line by line you will see that you are not required to take a derivative of the state distribution anywhere in the proof. Upvotes: 2 <issue_comment>username_2: The proof you are given in the above post is not wrong. It's just that they skip some of the steps and directly write the final answer.
Let me go through those steps: I will simplify some of the things to avoid complication, but the generality remains the same. For instance, I will think of the reward as only dependent on the current state, $s$, and current action, $a$. So, $r = r(s,a)$. First, we will define the average reward as: $$r(\pi) = \sum\_s \mu(s)\sum\_a \pi(a|s)\sum\_{s^{\prime}} P\_{ss^{\prime}}^{a} r $$ We can further simplify the average reward as: $$r(\pi) = \sum\_s \mu(s)\sum\_a \pi(a|s)r(s,a) $$ My notation may be slightly different from the aforementioned slides since I'm only following Sutton's book on RL. Our objective function is: $$ J(\theta) = r(\pi) $$ We want to prove that: $$ \nabla\_{\theta} J(\theta) = \nabla\_{\theta}r(\pi) = \sum\_s \mu(s) \sum\_a \nabla\_{\theta}\pi(a|s) Q(s,a)$$ Now let's start the proof: $$\nabla\_{\theta}V(s) = \nabla\_{\theta} \sum\_{a} \pi(a|s) Q(s,a)$$ $$\nabla\_{\theta}V(s) = \sum\_{a} [Q(s,a) \nabla\_{\theta} \pi(a|s) + \pi(a|s) \nabla\_{\theta}Q(s,a)]$$ $$\nabla\_{\theta}V(s) = \sum\_{a} [Q(s,a) \nabla\_{\theta} \pi(a|s) + \pi(a|s) \nabla\_{\theta}[r(s,a) - r(\pi) + \sum\_{s^{\prime}}P\_{ss^{\prime}}^{a}V(s^{\prime})]]$$ $$\nabla\_{\theta}V(s) = \sum\_{a} [Q(s,a) \nabla\_{\theta} \pi(a|s) + \pi(a|s) [- \nabla\_{\theta}r(\pi) + \sum\_{s^{\prime}}P\_{ss^{\prime}}^{a}\nabla\_{\theta}V(s^{\prime})]]$$ $$\nabla\_{\theta}V(s) = \sum\_{a} [Q(s,a) \nabla\_{\theta} \pi(a|s) + \pi(a|s) \sum\_{s^{\prime}}P\_{ss^{\prime}}^{a}\nabla\_{\theta}V(s^{\prime})] - \nabla\_{\theta}r(\pi)\sum\_{a}\pi(a|s)$$ Now we will rearrange this: $$\nabla\_{\theta}r(\pi) = \sum\_{a} [Q(s,a) \nabla\_{\theta} \pi(a|s) + \pi(a|s) \sum\_{s^{\prime}}P\_{ss^{\prime}}^{a}\nabla\_{\theta}V(s^{\prime})] - \nabla\_{\theta}V(s)$$ Multiplying both sides by $\mu(s)$ and summing over $s$: $$\nabla\_{\theta}r(\pi) \sum\_{s}\mu(s)= \sum\_{s}\mu(s) \sum\_{a} Q(s,a) \nabla\_{\theta} \pi(a|s) + \sum\_{s}\mu(s) \sum\_a \pi(a|s) \sum\_{s^{\prime}}P\_{ss^{\prime}}^{a}\nabla\_{\theta}V(s^{\prime}) - \sum\_{s}\mu(s)
\nabla\_{\theta}V(s)$$ $$\nabla\_{\theta}r(\pi) = \sum\_{s}\mu(s) \sum\_{a} Q(s,a) \nabla\_{\theta} \pi(a|s) + \sum\_{s^{\prime}}\mu(s^{\prime})\nabla\_{\theta}V(s^{\prime}) - \sum\_{s}\mu(s) \nabla\_{\theta}V(s)$$ Now we are there: $$\nabla\_{\theta}r(\pi) = \sum\_{s}\mu(s) \sum\_{a} Q(s,a) \nabla\_{\theta} \pi(a|s)$$ This is the policy gradient theorem for the average reward formulation (ref. [Policy gradient](https://papers.nips.cc/paper/1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation.pdf)). Upvotes: 2 [selected_answer]
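The identity at the heart of the log-derivative trick, $\nabla\_\theta \pi = \pi \nabla\_\theta \log \pi$, is easy to sanity-check numerically on a two-action softmax policy; the preference values below are toy numbers:

```python
import math

theta = [0.3, -0.5]                      # toy softmax action preferences

def pi(theta):
    z = [math.exp(t) for t in theta]
    s = sum(z)
    return [p / s for p in z]

# Analytic gradient: d pi_0 / d theta_0 for a softmax is pi_0 * (1 - pi_0).
p = pi(theta)
analytic = p[0] * (1 - p[0])

# Log-derivative trick: pi_0 * d log(pi_0) / d theta_0, with the log-derivative
# estimated by a finite difference on log pi.
eps = 1e-6
theta_eps = [theta[0] + eps, theta[1]]
dlog = (math.log(pi(theta_eps)[0]) - math.log(p[0])) / eps
print(analytic, p[0] * dlog)  # the two closely agree
```

This is why the gradient can be pushed onto $\log \pi$ inside the expectation: the two expressions are equal term by term, and the $\mu(s)$ weighting is untouched by the rewrite.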
2020/08/27
785
3,092
<issue_start>username_0: I have a scheduling problem in which there are $n$ slots and $m$ clients. I am trying to solve the problem using Q-learning, so I have made the following state-action model. A state $s\_t$ is given by the current slot $t=1,2,\ldots,n$ and an action $a\_t$ at slot $t$ is given by one client, $a\_t\in\{1,2,\ldots,m\}$. In my situation, I do not have any reward associated with a state-action pair $(s\_t,a\_t)$ until the terminal state, which is the last slot. In other words, for all $s\_t\in\{1,2,\ldots,n-1\}$, the reward is $0$ and for $s\_t=n$ I can compute the reward given $(a\_1,a\_2,\ldots,a\_n)$. In this situation, the Q table, $Q(s\_t,a\_t)$, will contain only zeros except for the last row in which it will contain the updated reward. Can I still apply Q-learning in this situation? Why do I need a Q table if I only use the last row?<issue_comment>username_1: Having only a non-zero reward at the very end is not uncommon. When rewards are sparse, it becomes a bit harder to learn compared to having lots of different rewards along the way, but for your problem, the goal state is always reached, so that should not be a problem. (The real problem with sparse rewards is that, if an agent can do a lot of exploration without ever finding the goal, it essentially receives no feedback and will behave randomly, until it happens to stumble upon the very rare reward state.) What concerns me more about your problem is that the final reward depends not just on the last state visited, but also on the chain of actions taken so far. That means that, to make this a proper MDP, you need to keep the chain of actions in the state. So, your state would be something of the type $(s\_k, [a\_1, a\_2, \ldots, a\_{k-1}])$. This kind of combinatorial problem is not what RL is really great at. RL is really good when the state and action together give a lot of information about the next state.
Here it seems that, in your formulation, the next state is independent of the previous action. Instead of seeing this as an RL problem, you might want to express this as sequences of actions with an associated reward, and look at it as a combinatorial optimization problem. Upvotes: 3 [selected_answer]<issue_comment>username_2: If by, > > I can compute the reward given $(a\_1, a\_2, \dots, a\_n)$ > > > you simply mean that your game is deterministic, this is absolutely fine. I feel the other answer assumed you were implying that your terminal reward depends on the whole action sequence. RL does, however, struggle more in games where the reward cannot be determined until the terminal state; this setup is perfectly normal, but a significant challenge. In terms of implementation, simply record the reward at the end of each match, and perform a train step only after each terminal state, assigning this reward to each transition recorded in that game. Instead of updating your target network after a given number of steps, update it after some number of terminal states. I suggest 20 games as a starter for the frequency of updating your target network. Upvotes: 1
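To make the "reward only at the end" setup concrete, here is a minimal tabular Q-learning sketch on a toy version of the scheduling problem: 5 slots, 2 clients, and an invented terminal reward that penalizes deviating from assigning client 0 to exactly 2 slots. Note that, in line with the accepted answer's warning, this toy only forms a proper MDP because the made-up reward depends on a summary of the actions (the running count of client-0 assignments), which is therefore folded into the state; a reward depending on the full sequence would need the whole action history in the state instead.

```python
import random

random.seed(1)

n_slots, n_clients, target = 5, 2, 2     # toy: want client 0 in exactly 2 slots
alpha, gamma, eps = 0.1, 1.0, 0.2
Q = {}                                   # Q[((slot, count_client0), action)]

def q(s, a):
    return Q.get((s, a), 0.0)

for _ in range(5000):
    slot, count = 0, 0
    while slot < n_slots:
        s = (slot, count)
        if random.random() < eps:        # epsilon-greedy exploration
            a = random.randrange(n_clients)
        else:
            a = max(range(n_clients), key=lambda a_: q(s, a_))
        count_next = count + (1 if a == 0 else 0)
        slot_next = slot + 1
        if slot_next == n_slots:         # terminal: the only non-zero reward
            r, best_next = -abs(count_next - target), 0.0
        else:
            r = 0.0
            best_next = max(q((slot_next, count_next), a_) for a_ in range(n_clients))
        Q[(s, a)] = q(s, a) + alpha * (r + gamma * best_next - q(s, a))
        slot, count = slot_next, count_next

# Greedy rollout after training: the terminal reward has propagated backwards,
# so the non-terminal rows of Q are no longer all zero.
slot, count = 0, 0
while slot < n_slots:
    a = max(range(n_clients), key=lambda a_: q((slot, count), a_))
    count += (a == 0)
    slot += 1
print(count)
```

This also answers the "why do I need the whole Q table" part of the question: bootstrapping copies value estimates backwards from the terminal slot, so earlier rows of the table become non-zero and informative after enough episodes.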
2020/08/27
1,256
4,584
<issue_start>username_0: Are there any examples of single player games that use modern ML technique in its games? By this I mean AI that plays with or against the human player, and not just plays the game by itself (like Atari). "Modern ML techniques" is a vague term, but for example, Neural Networks, Reinforcement Learning, or probabilistic methods. Basically anything that goes above and beyond traditional search methods that most games use nowadays. Ideally, the AI would be: * widely available (i.e. not like the OpenAI Five, which was only available for a limited amount of time and requires a high amount of computational power) * human level (not overpowered) Ideally, the game would be: * symmetrical (the AI has the same agent capabilities as the player, though answers similar to [The Director](https://left4dead.fandom.com/wiki/The_Director) would be very interesting as well) * "complex environment" (more complex than, say, a board game, but a CIV5 game might work) But any answer would be appreciated, as some of the criteria above are quite vague. Edit: the ideal cases listed above are not meant to discourage other answers, nor are they intended to be strictly binding (i.e. a game would not need to satisfy all of the above requirements)<issue_comment>username_1: [Beating the World’s Best at Super Smash Bros. Melee with Deep Reinforcement Learning](https://arxiv.org/pdf/1702.06230.pdf) Firoiu, Whitney, Tenenbaum created an RL agent that plays and defeats professional players in Super Smash Bros Melee. The RL agent first played against the built-in AI, and then trained via self-play. Only one character playing on a single stage was trained. The character picked (Captain Falcon) has no "projectile attacks", which simplifies training.
Upvotes: 1 <issue_comment>username_2: There is [Google Research Football](https://ai.googleblog.com/2019/06/introducing-google-research-football.html), which is an open-source platform to develop reinforcement learning algorithms to play a game similar to FIFA or PES, although the football simulation is not as realistic as the current versions of FIFA or PES. You can play this game against different RL agents (e.g. DQN or IMPALA) and, of course, you can even develop your own RL agents and play against them. [Here](https://www.youtube.com/watch?v=F8DcgFDT9sc&feature=emb_title) is a video that illustrates the environment. [Here is the code and instructions to use it](https://github.com/google-research/football). As far as I know, there isn't yet an AI that plays simulated football at a human level (i.e. as good as the best human players). For example, I can regularly (although not always) beat the legendary-level AI at FIFA, but I also don't know the details about this AI (which could also be rule-based). Upvotes: 3 <issue_comment>username_3: There are many games where AI is involved, but fewer where the AI plays against the human player. For example, in [this paper](https://science.sciencemag.org/content/364/6443/859), they used a three-dimensional multiplayer first-person video game, Quake III Arena in Capture the Flag mode. Also, [this paper](https://link.springer.com/article/10.1007/s13218-020-00647-w) and [this paper](https://uhra.herts.ac.uk/handle/2299/22881) survey many games where AI is involved and show which kinds of games a human player can play against. I also recommend [Awesome-Game-AI](https://github.com/datamllab/awesome-game-ai).
[![enter image description here](https://i.stack.imgur.com/EdmKr.png)](https://i.stack.imgur.com/EdmKr.png) Upvotes: 0 <issue_comment>username_4: I don't know about specific game titles but in terms of research, the University of Malta has a strong team working on the application of machine learning to games. The key figure there used to be [<NAME>](http://yannakakis.net/) who published a lot of good papers and even wrote a [book](https://www.springer.com/gp/book/9783319635187) about content generation, smart game agents and, imho the most interesting, player modelling. Upvotes: 0 <issue_comment>username_5: Not sure if this fits your requirements of the AI playing *with* the player, but I still wanted to mention it because to me it is the quintessential AI-based game: [AIDungeon](https://play.aidungeon.io/main/landing), which is a text-based story-telling game, where you can do literally anything. It's using GPT-2/GPT-3 (paid) and has blown my mind several times. You've probably heard of it, but in case you haven't, give it a try; it only takes a couple of minutes to see what it can do. Upvotes: 0
2020/08/28
670
2,018
<issue_start>username_0: I was going through Sutton's book and, using sample-based learning for estimating the expectations, we have this formula $$ \text{new estimate} = \text{old estimate} + \alpha(\text{target} - \text{old estimate}) $$ What I don't quite understand is why it's called the target, because since it's the sample, it’s not the actual target value, so why are we moving towards a wrong value?<issue_comment>username_1: It is our "current" target. We assume that the value we get now is at least a closer approximation to the "true" target. We're not so much moving towards a wrong value as we are moving away from a more wrong value. Of course, it is all based on random trials, so saying anything definite (such as: "we are guaranteed to improve at each step") is hard to show without working probabilistically. The *expectation* of the error of the value function (as compared to the true value function) will decrease, that is all we can say. Upvotes: 3 <issue_comment>username_2: It would be helpful for me if you specified the section and page number of Sutton's book. But as far as I understand your question, I will try to explain this. Think of the TD update. The sample contains $(s\_t,a\_t,r\_{t+1},s\_{t+1})$. Using an incremental update we can write: $$ v\_{t}(s) = \frac{1}{t} \sum\_{j=1}^{t}\left(r\_{j+1} + \gamma v(s\_{j+1})\right)$$ $$ v\_{t}(s) = v\_{t-1}(s) + \alpha (r\_{t+1} + \gamma v\_{t-1}(s\_{t+1}) - v\_{t-1}(s\_t))$$ We call $r\_{t+1} + \gamma v\_{t-1}(s\_{t+1})$ the TD target. From the above equation you can already see that $r\_{t+1} + \gamma v\_{t-1}(s\_{t+1})$ is an unbiased estimate for $v(s)$. We call it an unbiased estimate since $E[r\_{t+1} + \gamma v\_{t-1}(s\_{t+1})] = v\_t(s\_t)$. That means the expectation of $r\_{t+1} + \gamma v\_{t-1}(s\_{t+1})$ leads us to the true state-value function, $v\_t(s)$. The same explanation applies to the Monte Carlo update.
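A minimal Python sketch of the tabular update above (the state names, reward, and step size are made up for illustration):

```python
# new_estimate = old_estimate + alpha * (target - old_estimate), tabular TD(0)
gamma = 0.9   # discount factor (illustrative)
alpha = 0.1   # step size (illustrative)
V = {"s0": 0.0, "s1": 0.0}  # value estimates for two hypothetical states

def td0_update(V, s, r, s_next):
    """Move V[s] toward the current TD target r + gamma * V[s_next]."""
    target = r + gamma * V[s_next]
    V[s] += alpha * (target - V[s])
    return V[s]

td0_update(V, "s0", r=1.0, s_next="s1")
# target = 1.0 + 0.9 * 0.0 = 1.0, so V["s0"] moves from 0.0 to 0.1
```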
I hope that this answers your question. Upvotes: 0
2020/08/29
703
2,398
<issue_start>username_0: I've been looking into self-attention lately, and in the articles that I've been seeing, they all talk about "weights" in attention. My understanding is that the weights in self-attention are not the same as the weights in a neural network. From this article, <http://peterbloem.nl/blog/transformers>, in the additional tricks section, it mentions that the query is the product of the query weight matrix and the word vector, i.e. `q = W(q)x`, the key is the product of the key weight matrix and the word vector, `k = W(k)x`, and similarly for the value it is `v = W(v)x`. So my question is, where do the weight matrices come from?<issue_comment>username_1: The answer is actually really simple: they are all randomly initialised. So they are to all intents and purposes "normal" weights of a neural network. This is also the reason why in the [original paper](https://arxiv.org/pdf/1706.03762.pdf) the authors tested several settings with single and multiple attention heads. If these matrices were somehow "special" or predetermined, they would all serve the same purpose. Instead, because of their random initialisation, each attention head learns to contribute to solving a different task, as they show in Figures 3 and 4.
Upvotes: 2 <issue_comment>username_2: In my mind there are two weight matrices, the one you get prior to applying the softmax: $$ \alpha\_{i,j} = \frac{\langle q\_i, k\_j \rangle}{\sqrt{d}}$$ and the other you get after applying the softmax: $$ \text{Attn}\_{i,j}(X) = \frac{\exp \left( \frac{\langle q\_i, k\_j \rangle}{\sqrt{d}}\right)}{\sum\_l \exp\left( \frac{\langle q\_i, k\_l \rangle}{\sqrt{d}} \right)}.$$ Either way, you can view them as directed "[attention graphs](https://ai.stackexchange.com/questions/32036/is-the-multi-head-attention-in-the-transformer-a-weighted-adjacency-matrix/39571#39571)" and they can be interpreted in terms of [graph neural networks](https://docs.dgl.ai/en/0.8.x/tutorials/models/4_old_wines/7_transformer.html) using only complete graphs and graph attention: $$ A\_{i,j} = \frac{\exp(w(i,j))}{\sum\_{k \in \mathcal{N}(i)} \exp(w(i,k))},$$ where $w(i,j)$ is a weight function, which can be the scaled dot-product of attention, for example. The sum is over the neighborhood $\mathcal{N}(i)$ of a node (think token). Hope that helps a little with intuition and that it's close to what you meant. Upvotes: 0
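To make the point concrete, here is a small NumPy sketch (shapes, seed, and initialisation scale are arbitrary choices, not from the article): the three projection matrices are ordinary randomly initialised weights, and the rest of self-attention is deterministic given them.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_k, seq_len = 8, 4, 3

# W_q, W_k, W_v are plain trainable weights, randomly initialised like any layer.
W_q = rng.normal(scale=d_model ** -0.5, size=(d_model, d_k))
W_k = rng.normal(scale=d_model ** -0.5, size=(d_model, d_k))
W_v = rng.normal(scale=d_model ** -0.5, size=(d_model, d_k))

X = rng.normal(size=(seq_len, d_model))   # one embedding vector per token

Q, K, V = X @ W_q, X @ W_k, X @ W_v       # q = W(q)x etc., batched over tokens
scores = Q @ K.T / np.sqrt(d_k)           # scaled dot-product scores
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
out = weights @ V                         # weighted mixture of value vectors
```

Each row of `weights` is one token's attention distribution over all keys, i.e. one row of the $\text{Attn}_{i,j}(X)$ matrix above.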
2020/08/29
4,090
15,369
<issue_start>username_0: In general, how do I calculate the GPU memory need to run a deep learning network? I'm asking this question because my training for some network configuration is getting out of memory. If the TensorFlow only store the memory necessary to the tunable parameters, and if I have around 8 million, I supposed the RAM required will be: RAM = 8.000.000 \* (8 (float64)) / 1.000.000 (scaling to MB) RAM = 64 MB, right? The TensorFlow requires more memory to store the image at each layer? By the way, these are my GPU Specifications: * Nvidia GeForce 1050 4GB Networking topology * Unet * Input Shape (256,256,4) ``` Model: "functional_1" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 256, 256, 4) 0 __________________________________________________________________________________________________ conv2d (Conv2D) (None, 256, 256, 64) 2368 input_1[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 256, 256, 64) 0 conv2d[0][0] __________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 256, 256, 64) 36928 dropout[0][0] __________________________________________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 128, 128, 64) 0 conv2d_1[0][0] __________________________________________________________________________________________________ conv2d_2 (Conv2D) (None, 128, 128, 128 73856 max_pooling2d[0][0] __________________________________________________________________________________________________ dropout_1 (Dropout) (None, 128, 128, 128 0 conv2d_2[0][0] __________________________________________________________________________________________________ 
conv2d_3 (Conv2D) (None, 128, 128, 128 147584 dropout_1[0][0] __________________________________________________________________________________________________ max_pooling2d_1 (MaxPooling2D) (None, 64, 64, 128) 0 conv2d_3[0][0] __________________________________________________________________________________________________ conv2d_4 (Conv2D) (None, 64, 64, 256) 295168 max_pooling2d_1[0][0] __________________________________________________________________________________________________ dropout_2 (Dropout) (None, 64, 64, 256) 0 conv2d_4[0][0] __________________________________________________________________________________________________ conv2d_5 (Conv2D) (None, 64, 64, 256) 590080 dropout_2[0][0] __________________________________________________________________________________________________ max_pooling2d_2 (MaxPooling2D) (None, 32, 32, 256) 0 conv2d_5[0][0] __________________________________________________________________________________________________ conv2d_6 (Conv2D) (None, 32, 32, 512) 1180160 max_pooling2d_2[0][0] __________________________________________________________________________________________________ dropout_3 (Dropout) (None, 32, 32, 512) 0 conv2d_6[0][0] __________________________________________________________________________________________________ conv2d_7 (Conv2D) (None, 32, 32, 512) 2359808 dropout_3[0][0] __________________________________________________________________________________________________ conv2d_transpose (Conv2DTranspo (None, 64, 64, 256) 524544 conv2d_7[0][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 64, 64, 512) 0 conv2d_transpose[0][0] conv2d_5[0][0] __________________________________________________________________________________________________ conv2d_8 (Conv2D) (None, 64, 64, 256) 1179904 concatenate[0][0] __________________________________________________________________________________________________ dropout_4 
(Dropout) (None, 64, 64, 256) 0 conv2d_8[0][0] __________________________________________________________________________________________________ conv2d_9 (Conv2D) (None, 64, 64, 256) 590080 dropout_4[0][0] __________________________________________________________________________________________________ conv2d_transpose_1 (Conv2DTrans (None, 128, 128, 128 131200 conv2d_9[0][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 128, 128, 256 0 conv2d_transpose_1[0][0] conv2d_3[0][0] __________________________________________________________________________________________________ conv2d_10 (Conv2D) (None, 128, 128, 128 295040 concatenate_1[0][0] __________________________________________________________________________________________________ dropout_5 (Dropout) (None, 128, 128, 128 0 conv2d_10[0][0] __________________________________________________________________________________________________ conv2d_11 (Conv2D) (None, 128, 128, 128 147584 dropout_5[0][0] __________________________________________________________________________________________________ conv2d_transpose_2 (Conv2DTrans (None, 256, 256, 64) 32832 conv2d_11[0][0] __________________________________________________________________________________________________ concatenate_2 (Concatenate) (None, 256, 256, 128 0 conv2d_transpose_2[0][0] conv2d_1[0][0] __________________________________________________________________________________________________ conv2d_12 (Conv2D) (None, 256, 256, 64) 73792 concatenate_2[0][0] __________________________________________________________________________________________________ dropout_6 (Dropout) (None, 256, 256, 64) 0 conv2d_12[0][0] __________________________________________________________________________________________________ conv2d_13 (Conv2D) (None, 256, 256, 64) 36928 dropout_6[0][0] 
__________________________________________________________________________________________________ conv2d_14 (Conv2D) (None, 256, 256, 1) 65 conv2d_13[0][0] ================================================================================================== Total params: 7,697,921 Trainable params: 7,697,921 Non-trainable params: 0 ``` **This is the error given.** ``` --------------------------------------------------------------------------- ResourceExhaustedError Traceback (most recent call last) in 23 # Train the model, doing validation at the end of each epoch. 24 epochs = 30 ---> 25 result\_model = model.fit(train\_gen, epochs=epochs, validation\_data=val\_gen, callbacks=callbacks) ~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\keras\engine\training.py in \_method\_wrapper(self, \*args, \*\*kwargs) 106 def \_method\_wrapper(self, \*args, \*\*kwargs): 107 if not self.\_in\_multi\_worker\_mode(): # pylint: disable=protected-access --> 108 return method(self, \*args, \*\*kwargs) 109 110 # Running inside `run\_distribute\_coordinator` already. 
~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch\_size, epochs, verbose, callbacks, validation\_split, validation\_data, shuffle, class\_weight, sample\_weight, initial\_epoch, steps\_per\_epoch, validation\_steps, validation\_batch\_size, validation\_freq, max\_queue\_size, workers, use\_multiprocessing) 1096 batch\_size=batch\_size): 1097 callbacks.on\_train\_batch\_begin(step) -> 1098 tmp\_logs = train\_function(iterator) 1099 if data\_handler.should\_sync: 1100 context.async\_wait() ~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\def\_function.py in \_\_call\_\_(self, \*args, \*\*kwds) 778 else: 779 compiler = "nonXla" --> 780 result = self.\_call(\*args, \*\*kwds) 781 782 new\_tracing\_count = self.\_get\_tracing\_count() ~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\def\_function.py in \_call(self, \*args, \*\*kwds) 838 # Lifting succeeded, so variables are initialized and we can run the 839 # stateless function. 
--> 840 return self.\_stateless\_fn(\*args, \*\*kwds) 841 else: 842 canon\_args, canon\_kwds = \ ~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\function.py in \_\_call\_\_(self, \*args, \*\*kwargs) 2827 with self.\_lock: 2828 graph\_function, args, kwargs = self.\_maybe\_define\_function(args, kwargs) -> 2829 return graph\_function.\_filtered\_call(args, kwargs) # pylint: disable=protected-access 2830 2831 @property ~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\function.py in \_filtered\_call(self, args, kwargs, cancellation\_manager) 1846 resource\_variable\_ops.BaseResourceVariable))], 1847 captured\_inputs=self.captured\_inputs, -> 1848 cancellation\_manager=cancellation\_manager) 1849 1850 def \_call\_flat(self, args, captured\_inputs, cancellation\_manager=None): ~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\function.py in \_call\_flat(self, args, captured\_inputs, cancellation\_manager) 1922 # No tape is watching; skip to running the function. 
1923 return self.\_build\_call\_outputs(self.\_inference\_function.call( -> 1924 ctx, args, cancellation\_manager=cancellation\_manager)) 1925 forward\_backward = self.\_select\_forward\_and\_backward\_functions( 1926 args, ~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\function.py in call(self, ctx, args, cancellation\_manager) 548 inputs=args, 549 attrs=attrs, --> 550 ctx=ctx) 551 else: 552 outputs = execute.execute\_with\_cancellation( ~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\execute.py in quick\_execute(op\_name, num\_outputs, inputs, attrs, ctx, name) 58 ctx.ensure\_initialized() 59 tensors = pywrap\_tfe.TFE\_Py\_Execute(ctx.\_handle, device\_name, op\_name, ---> 60 inputs, attrs, num\_outputs) 61 except core.\_NotOkStatusException as e: 62 if name is not None: ResourceExhaustedError: OOM when allocating tensor with shape[8,64,256,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU\_0\_bfc [[node gradient\_tape/functional\_1/conv2d\_14/Conv2D/Conv2DBackpropInput (defined at :25) ]] Hint: If you want to see a list of allocated tensors when OOM happens, add report\_tensor\_allocations\_upon\_oom to RunOptions for current allocation info. [Op:\_\_inference\_train\_function\_17207] Function call stack: train\_function ``` Is there any type of mistake in the network definition? How could I improve the network to solve this problem?<issue_comment>username_1: In fact, I do not know how to calculate GPU memory to run a neural network but I have a solution for allocation problems in GPUs while using tensorflow framework. 
```
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Restrict TensorFlow to only allocate 2GB * 2 of memory on the first GPU
    try:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2048 * 2)])
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Virtual devices must be set before GPUs have been initialized
        print(e)
```
You can set a memory limit on the GPU, which sometimes solves memory allocation problems. As shown above, you can set the "memory\_limit" parameter as your configuration requires. Also be careful to use the correct framework. If you want to use the above code to set the memory limit, you have to build your neural network with TensorFlow's Keras:
```
from tensorflow.python.keras.models import Sequential
```
Upvotes: 1 <issue_comment>username_2: You can calculate the memory requirement analytically, but it's still not going to beat a physical test in practice, as there are so many unknown variables in the system which can take up GPU memory. Maybe TensorFlow will decide to store the gradients, and then you have to take their memory usage into account as well. The way I do it is by setting the GPU memory limit to a high value, e.g. 1GB, then testing the model inference speed. Then I repeat the process with half the memory. I do it until the model refuses to run or the model speed drops. For example, I start with 1GB, then 512MB, then 256MB; eventually I get to 32 MB and the model speed drops. At 16MB, the model refuses to run. So I know that 64 MB is the minimum requirement I should use for my model. If I want to get a more precise number, I'd repeat the binary search process a couple more times between 64 MB and 32 MB.
You can see how to limit the GPU memory here: <https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth>
```
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Restrict TensorFlow to only allocate 1GB of memory on the first GPU
    try:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Virtual devices must be set before GPUs have been initialized
        print(e)
```
Upvotes: 1 <issue_comment>username_3: An interesting aspect of TensorFlow is that it can store not only the weights, but also the training data in video RAM.
```
with tf.device('/gpu:0'):
    tensorflow_dataset = tf.constant(numpy_dataset)
```
Feeding training data and weights to the GPU for matrix multiplication is faster than from regular RAM.
```
Video RAM required = Number of params * sizeof(weight type) + Training data amount in bytes
```
However, I believe that the video RAM required should be at least 1.5 times the above value, just to be sure things will work. Upvotes: 0
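As a rough analytical sketch (a back-of-the-envelope helper of my own, not a TensorFlow utility): the weights alone are small, but a single activation tensor such as the `[8, 64, 256, 256]` one in the OOM message is already much larger, and training additionally keeps gradients and optimizer state, which is why the question's 64 MB estimate is far too low in practice.

```python
def tensor_mb(*dims, bytes_per_value=4):
    """Memory of one float32 tensor with the given dimensions, in MB."""
    n = 1
    for d in dims:
        n *= d
    return n * bytes_per_value / 1e6

params_mb = tensor_mb(7_697_921)     # the U-Net's weights: ~30.8 MB in float32
act_mb = tensor_mb(8, 64, 256, 256)  # the tensor from the OOM error: ~134 MB

print(params_mb, act_mb)
```

Summing such terms over every layer output (once for the forward pass and roughly again for the backward pass) gives a more realistic lower bound than counting parameters alone.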
2020/08/30
599
1,983
<issue_start>username_0: What are examples of problems where **neural networks** have been used and have achieved human-level or higher performance? Each answer can contain one or more examples. Please, provide links to research papers or reliable articles that validate your claims.<issue_comment>username_1: Here is an initial list of AI systems that used neural networks and have achieved human-level or superhuman performance. All of these systems are reinforcement learning systems that play videogames. * [AlphaGo](https://www.nature.com/articles/nature16961.pdf) and [AlphaGo Zero](https://discovery.ucl.ac.uk/id/eprint/10045895/1/agz_unformatted_nature.pdf) (an improved version of AlphaGo that does not use human knowledge but learns by playing against itself) have achieved superhuman performance in the game of **go** and, in the case of [AlphaZero](https://arxiv.org/pdf/1712.01815.pdf) (a generalized version of AlphaGo Zero), also in the games of **chess** and **shogi**. * [DQN](https://www.cs.toronto.edu/%7Evmnih/docs/dqn.pdf) has achieved human-level or superhuman performance in many **Atari games** * [DeepStack](https://science.sciencemag.org/content/356/6337/508) has achieved human-level performance in poker * [AlphaStar](https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii) defeated a top professional player in the real-time strategy game **StarCraft 2** * [OpenAI Five](https://arxiv.org/pdf/1912.06680.pdf) defeated world champions in the game of **Dota** Upvotes: 2 <issue_comment>username_2: **AlphaDogfight** - from [Defense Advanced Research Projects Agency (DARPA)](https://en.wikipedia.org/wiki/DARPA) a programme that pitted computers using F-16 flight simulators against one another and later went on to defeat Air Force’s top F-16 fighter pilots. Check out [this](https://www.darpa.mil/news-events/2020-08-26) and [this](https://www.darpa.mil/news-events/2020-08-07) news and events by DARPA. Upvotes: 1
2020/08/31
649
2,127
<issue_start>username_0: I have watched Stanford's lectures about artificial intelligence, I currently have one question: why don't we use autoencoders instead of GANs? Basically, what GAN does is it receives a random vector and generates a new sample from it. So, if we train autoencoders, for example, on cats vs dogs dataset, and then cut off the decoder part and then input random noise vector, wouldn't it do the same job?
2020/09/01
885
3,100
<issue_start>username_0: I have a data set with 36 rows and 9 columns. I am trying to make a model to predict the 9th column I have tried modeling the data using a range of models using caret to perform cross-validation and hyper parameter tuning: 'lm', random forrest (ranger) and GLMnet, with range of different folds and hyper-parameter tuning, but the modeling has not been very successful. Next I have tried to use some of the neural-network models. I tried the 'monmlp'. During hyper parameter tuning I could see that the RMSE drops to a level when using ~ 6 hidden units. The problem I observe using this model is 1. Prediction is almost equal to data 2. When doing a "manual" cross validation by removing a single datapoint and using the trained model to predict, it has no predictive power I have tried to use a range of different hidden units, but i think the problem is that the model is overfitted despite using caret cross validation feature. There two feedbacks I would appreciate 1. Is there a way to prevent overfitting, by chosen optimal number of training iterations ( optimal RMSE on out of sample ). Can this by done using caret or some other package 2. Am I using the right model? I am relatively unexperienced with ML and choosing a good model is tough: when you look at the available packages it is overwhelming: <https://topepo.github.io/caret/train-models-by-tag.html>
2020/09/01
698
1,997
<issue_start>username_0: How does PCA work when we reduce the original space to a 2 or higher-dimensional space? I understand the case when we reduce the dimensionality to $1$, but not this case. $$\begin{array}{ll} \text{maximize} & \mathrm{Tr}\left( \mathbf{w}^T\mathbf{X}\mathbf{X}^T\mathbf{w} \right)\\ \text{subject to} & \mathbf{w}^T\mathbf{w} = 1\end{array}$$<issue_comment>username_1: You might want to have a look at the Wikipedia article on [PCA](https://en.wikipedia.org/wiki/Principal_component_analysis#Further_components), where it says: *"The $k$th component can be found by subtracting the first $k − 1$ principal components from $\mathbf{X}$:"* $$\hat{\mathbf{X}}\_k = \mathbf{X} - \sum\_{s=1}^{k-1}\mathbf{X}\mathbf{w}\_s\mathbf{w}\_s^T$$ Then you repeat the process to find the next component: $$\mathbf{w}\_k = \arg\max \mathbf{w}^T\mathbf{\hat{X}}^T\_k\mathbf{\hat{X}}\_k\mathbf{w}$$ $$\text{s.t. } \mathbf{w}\_k^T\mathbf{w}\_k = 1$$ Upvotes: 2 <issue_comment>username_2: You can also understand the logic from the view of constrained optimisation. Introduce a Lagrange function: $$ \mathcal{L} = \text{Tr} (w^{T} X X^{T} w) - \lambda w^{T} w $$ And take the derivative with respect to $w$: $$ \frac{\partial \mathcal{L}}{\partial w} = 2 (X X^{T} - \lambda I) w $$ For the general case of dimension $\geqslant 1$, $w$ is a set of vectors $w = (w\_1 w\_2 \ldots w\_n)$. This expression vanishes if each $w\_i$ is an eigenvector of $XX^{T}$ with some eigenvalue $\lambda\_i$. In other words, stationary points are the eigenvectors of $X X^{T}$. The condition $w^T w = 1$ imposes the normalization condition on the eigenvectors, while eigenvectors of the symmetric matrix $XX^{T}$ belonging to distinct eigenvalues are automatically orthogonal. In fact, going back to the initial functional, one sees that $w\_i^{T} X X^{T} w\_j = \lambda\_j w\_i^{T} w\_j = 0$ for $i \neq j$. Therefore, we have finally: $$ \text{Tr} (w^{T} X X^{T} w) = \sum\_{i=1}^{k} \lambda\_i, $$ which is maximized, for any $k \geq 1$, by taking the $k$ largest eigenvalues. Upvotes: 0
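A NumPy sketch of the deflation procedure from the first answer (the data is random and only for illustration; note that samples are stored as rows here, so the relevant matrix is $X^T X$ rather than the question's $X X^T$): find the leading direction, subtract its contribution from $X$, and repeat.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X -= X.mean(axis=0)                      # centre the data

def top_component(X):
    """Unit vector w maximising w^T (X^T X) w, i.e. the leading eigenvector."""
    vals, vecs = np.linalg.eigh(X.T @ X)
    return vecs[:, -1]                   # eigh returns eigenvalues in ascending order

def pca_deflation(X, k):
    components, Xk = [], X.copy()
    for _ in range(k):
        w = top_component(Xk)
        components.append(w)
        Xk = Xk - (Xk @ w)[:, None] * w[None, :]   # X_hat_k: remove the found component
    return np.array(components)

W = pca_deflation(X, 2)                  # rows are the first two principal directions
```

The rows of `W` come out orthonormal and agree (up to sign) with the right singular vectors of `X`, which is the usual one-shot way to compute all components at once.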
2020/09/01
490
1,868
<issue_start>username_0: As it is discussed [here](https://www.reddit.com/r/latin/comments/6akqdi/why_is_google_translate_so_bad_for_latin_a/), and I saw it on other Latin language forums too, everybody complains about how Google Translate fails to translate the Latin language. From my personal experience, it is not that bad with other languages, including Romance languages. So, what makes Google Translate fail so much at translating the Latin language? Is it about its syntax and grammar or lack of data?<issue_comment>username_1: I don't know what model Google is using for their translations, but it's highly likely that they're using one of today's SOTA deep learning models. The latest NLP models are trained on data scraped from the web, e.g. OpenAI's [GPT-2](https://openai.com/blog/better-language-models/#fn1) was trained on a dataset of 8 million web pages, and Google's [BERT](https://arxiv.org/pdf/1810.04805.pdf) was trained on the BookCorpus (800M words) and English Wikipedia (2,500M words). Now think about the number of Latin web pages and notice that there are over 6 million English Wikipedia articles but fewer than 135,000 in Latin (see [here](https://en.wikipedia.org/wiki/List_of_Wikipedias#Detailed_list)). As you can see, massive amounts of data are crucial for neural machine translation, and I assume there is simply not enough out there for Latin. Plus, Latin is one of the most complex and complicated languages, which does not make the task any easier. Maybe Google and Co also focus less on a 'dead' language which is not spoken anymore and has its place mostly for educational purposes. Upvotes: 4 [selected_answer]<issue_comment>username_2: Old Latin is also different from classical Latin: words are added to the written language that are not spoken, and the order of words is often reversed relative to the intended meaning. Upvotes: 1
2020/09/02
440
1,571
<issue_start>username_0: What I want to achieve is this: If my desired outputs are [1, 2, 3, 4] I would rather have my network produce this output: [0.99, 2.01, 999, 4.01] than say this: [0.94, 1.88, 3.12, 4.1] So I'd rather have a few very accurate outputs and the rest completely off, than have them all be decent but no more than that. My question is, is there a known way to do this? If not, would it make sense to remove the inputs that produce poor outputs, and redo the learning phase?<issue_comment>username_1: Use a genetic algorithm. Run like 25 neural networks at once and choose the most successful one. This method is similar to evolution, which is why it is very effective. I created a model like this with similarly sized training data and it reached an overall error rate of 0.06% in a second. Don’t get rid of nodes. Instead, eliminate the bad networks. However, this doesn’t produce extremely high error rates if that is what you want. Upvotes: 0 <issue_comment>username_2: I assume [1, 2, 3, 4] are the desired outputs for different examples for a regression task. Sounds like you need a different loss function. From your description it seems you don't care how big the error is if it's bigger than some value. Try the [Huber loss](https://en.wikipedia.org/wiki/Huber_loss) (in [Pytorch](https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html#smoothl1loss) and [TensorFlow](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Huber)). Examples that are far from the expected value won't produce big gradients (: Upvotes: 2
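A tiny NumPy sketch of the Huber loss suggested above (the threshold $\delta$ and the numbers are illustrative), showing that the one wildly wrong output contributes only linearly rather than quadratically:

```python
import numpy as np

def huber(y_pred, y_true, delta=1.0):
    """0.5*r^2 for residuals |r| <= delta, delta*(|r| - 0.5*delta) beyond it."""
    r = np.abs(y_pred - y_true)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([0.99, 2.01, 999.0, 4.01])  # the example from the question

loss = huber(y_pred, y_true)
# The residual of 996 contributes 996 - 0.5 = 995.5, instead of 0.5 * 996^2
# under squared error, so its gradient stays bounded at delta.
```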
2020/09/05
1,169
3,671
<issue_start>username_0: I have been reading: [Reinforcement Learning: An Introduction by Sutton and Barto](http://incompleteideas.net/book/the-book-2nd.html). I admit it's a good read for learning RL whereas it's more theoretical with detailed algorithms. Now, **I want something more programming oriented resource(s)** maybe a course, book, etc. I have been exploring Kaggle, Open-source RL projects. I need this to learn and grasp a deeper understanding of RL from the perspective of a developer i.e optimized way of writing code, explanation about using the latest RL libraries, cloud services, etc.<issue_comment>username_1: <NAME> has some interesting Medium articles on reinforcement learning with TensorFlow backed up with code on [GitHub](https://github.com/awjuliani/DeepRL-Agents). 1. [Part 0 — Q-Learning Agents](https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0) 2. [Part 1 — Two-Armed Bandit](https://medium.com/@awjuliani/super-simple-reinforcement-learning-tutorial-part-1-fd544fab149) 3. [Part 1.5 — Contextual Bandits](https://medium.com/@awjuliani/simple-reinforcement-learning-with-tensorflow-part-1-5-contextual-bandits-bff01d1aad9c) 4. [Part 2 — Policy-Based Agents](https://medium.com/@awjuliani/super-simple-reinforcement-learning-tutorial-part-2-ded33892c724) 5. [Part 3 — Model-Based RL](https://medium.com/@awjuliani/simple-reinforcement-learning-with-tensorflow-part-3-model-based-rl-9a6fe0cce99) 6. [Part 4 — Deep Q-Networks and Beyond](https://medium.com/@awjuliani/simple-reinforcement-learning-with-tensorflow-part-4-deep-q-networks-and-beyond-8438a3e2b8df) 7. [Part 5 — Visualizing an Agent’s Thoughts and Actions](https://medium.com/@awjuliani/simple-reinforcement-learning-with-tensorflow-part-5-visualizing-an-agents-thoughts-and-actions-4f27b134bb2a) 8. 
[Part 6 — Partial Observability and Deep Recurrent Q-Networks](https://medium.com/@awjuliani/simple-reinforcement-learning-with-tensorflow-part-6-partial-observability-and-deep-recurrent-q-68463e9aeefc) 9. [Part 7 — Action-Selection Strategies for Exploration](https://medium.com/@awjuliani/simple-reinforcement-learning-with-tensorflow-part-7-action-selection-strategies-for-exploration-d3a97b7cceaf) 10. [Part 8 — Asynchronous Actor-Critic Agents (A3C)](https://medium.com/@awjuliani/simple-reinforcement-learning-with-tensorflow-part-8-asynchronous-actor-critic-agents-a3c-c88f72a5e9f2) As [nbro](https://ai.stackexchange.com/users/2444/nbro) pointed out <NAME> has a good repository: <https://github.com/dennybritz/reinforcement-learning> As you have seen with Sutton & Barto's book, the code is mostly in Lisp. Shangtong Zhang has replicated the code in Python: <https://github.com/ShangtongZhang/reinforcement-learning-an-introduction> Sudharsan and Ravichandiran wrote a book "Hands-On Reinforcement Learning with Python" which uses OpenAI Gym and TensorFlow. You can find more information on the book along with their code repository at [Hands-On Reinforcement Learning with Python](https://github.com/sudharsan13296/Hands-On-Reinforcement-Learning-With-Python) Upvotes: 2 <issue_comment>username_2: I suggest you have a look at [this](https://github.com/aikorea/awesome-rl) repo. It contains state-of-the-art algorithms, papers, frameworks, courses and some implementations. You can also check the "Deep Reinforcement Learning Hands-On" book examples written by <NAME> [here](https://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On). This repo contains many programming and reinforcement learning examples with the PyTorch framework. Upvotes: 1
2020/09/06
1,291
3,931
<issue_start>username_0: I've read through the Alpha(Go)Zero paper and there is only one thing I don't understand. The paper on page 1 states: > > The MCTS search outputs probabilities π of playing each move. These search probabilities usually select much stronger moves than the raw move probabilities p of the neural network fθ(s); > > > My question: Why is this the case? Why is $\pi$ usually better than $p$? I think I can imagine why it's the case but I'm looking for more insight. what $\pi$ and $p$ are: Say we are in state $s\_1$. We have a network that takes the state and produces $p\_1$ (probabilities for actions) and $v\_1$ (a value for the state). We then run MCTS from this state and extract a policy $\pi(a|s\_1) = \frac{N(s\_1,a)^{1/\tau}}{\sum\_b N(s\_1,b)^{1/\tau}}$. The paper is saying that $\pi(-|s\_1)$ is usually better than $p\_1$.
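For reference, the visit-count policy $\pi$ defined in the question can be computed directly from the MCTS visit counts; a small NumPy sketch (the counts below are made up for illustration):

```python
import numpy as np

def mcts_policy(visit_counts, tau=1.0):
    # pi(a|s) = N(s, a)^(1/tau) / sum_b N(s, b)^(1/tau)
    scaled = np.asarray(visit_counts, dtype=float) ** (1.0 / tau)
    return scaled / scaled.sum()

counts = [100, 20, 5]                # hypothetical visit counts after search
print(mcts_policy(counts, tau=1.0))  # proportional to the counts
print(mcts_policy(counts, tau=0.1))  # low temperature concentrates on the best move
```

As the temperature $\tau \to 0$, $\pi$ approaches a greedy choice of the most-visited move, which is what AlphaZero uses for competitive play.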
2020/09/07
328
1,470
<issue_start>username_0: I have got numerous frames and I've detected all the faces in all the frames using Retinaface. However, I need to track the faces of people over frames. For this purpose, I assumed I could try finding the landmarks from the face using libraries like `dlib` and maybe compare these landmarks to check if they are in fact the face of the same person. I would like to know if there are other methods or some useful resources I could refer to for the same. Thanks a lot in advance.<issue_comment>username_1: The topic of your problem is person re-identification. You can check [here](https://github.com/bismex/Awesome-person-re-identification). Upvotes: 2 <issue_comment>username_2: You can try using something called a Siamese network if you are willing to train the network on your own using something called triplet loss (if you have lots of face images). Another approach would be something called one-shot learning using FaceNet (a transfer-learning approach). FaceNet uses a deep convolutional neural network (CNN). The network is trained such that the squared L2 distance between the embeddings corresponds to face similarity. The images used for training are scaled, transformed and tightly cropped around the face area. Another important aspect of FaceNet is its loss function: it is already trained using the triplet loss function. In this case you could just feed in two face images and you would get a threshold score for the similarities. Upvotes: 2
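The FaceNet-style decision rule username_2 describes (squared L2 distance between embeddings compared against a tuned threshold) can be sketched in NumPy. The embedding vectors and the threshold below are stand-ins for illustration; real embeddings come from the trained network, and the threshold is tuned on a validation set:

```python
import numpy as np

def same_person(emb_a, emb_b, threshold=1.0):
    # FaceNet-style decision: squared L2 distance between L2-normalised
    # embeddings, compared against a tuned threshold
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    dist = np.sum((a - b) ** 2)
    return dist < threshold, dist

# Stand-in embeddings (in practice these come from the trained network)
face1 = np.array([0.2, 0.9, 0.1])
face2 = np.array([0.25, 0.85, 0.12])  # near-duplicate -> same person
face3 = np.array([-0.8, 0.1, 0.6])    # far away -> different person

print(same_person(face1, face2))
print(same_person(face1, face3))
```

For tracking across frames, one would compute an embedding per detected face and link detections whose pairwise distance falls under the threshold.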
2020/09/10
329
1,258
<issue_start>username_0: I want to create a CNN in Python, specifically, only with NumPy, if possible. For optimizing the time of convolution (actually correlation) in the network, I wanna try to use FFT-based convolution. The data that needs to be convolved (correlated) is a 4D image tensor with shape `[batch_size, width, height, channels]` and a 4D filter tensor `[filter_width, filter_height, in_channel, out_channel]`. I read a lot of articles about FFT-based convolution, but they don't do it the way I need. Thus, I need your help. How could I fft-convolve a 4D image and a 4D filter with stride?<issue_comment>username_1: I think this should work for you: [scipy.signal.correlate | SciPy](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.correlate.html) I used it myself while I was writing a CNN in numpy. Upvotes: 1 <issue_comment>username_2: I think what you need to use is a 3D convolution operation. Your data is 3D: width, height, and num\_channels. Your data is similar to color images with RGB channels. However, since you are trying to consider the correlation amongst channels, 2D convolution will not work for you. You can use 3D convolution, which is available in deep learning tools such as TensorFlow. Upvotes: 0
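As a starting point for the NumPy-only route, here is a minimal sketch of 2D cross-correlation via the FFT convolution theorem for a single channel and stride 1 (not taken from either answer; the example image and kernel are made up):

```python
import numpy as np

def fft_correlate2d(image, kernel):
    # Linear cross-correlation via the convolution theorem: zero-pad both
    # arrays to the full output size, multiply the image spectrum by the
    # CONJUGATE kernel spectrum (conjugation turns convolution into
    # correlation), invert, and crop the 'valid' region.
    H, W = image.shape
    kh, kw = kernel.shape
    fh, fw = H + kh - 1, W + kw - 1
    spec = np.fft.rfft2(image, s=(fh, fw)) * np.conj(np.fft.rfft2(kernel, s=(fh, fw)))
    full = np.fft.irfft2(spec, s=(fh, fw))
    return full[:H - kh + 1, :W - kw + 1]

img = np.arange(25.0).reshape(5, 5)
ker = np.array([[1.0, 2.0], [3.0, 4.0]])
out = fft_correlate2d(img, ker)  # shape (4, 4), a stride-1 'valid' correlation
print(out)
```

A stride can then be applied by slicing the stride-1 output, e.g. `out[::2, ::2]`, and the 4D tensors in the question handled by looping over batch and output channels while summing the per-input-channel correlations (this loses some of the FFT speedup but stays correct).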
2020/09/10
1,603
6,092
<issue_start>username_0: I am looking for a book about machine learning that would suit my physics background. I am more or less familiar with classical and complex analysis, theory of probability, calculus of variations, matrix algebra, etc. However, I have not studied topology, measure theory, group theory, and other more advanced topics. I am trying to find a book that is written neither for beginners nor for mathematicians. Recently, I read the great book "Statistical Inference" written by <NAME> Berger. They write in the introduction that "The purpose of this book is to build theoretical statistics (as different from mathematical statistics) from the first principles of probability theory". So, *I am looking for some "theoretical books" about machine learning*. There are many online courses and brilliant books out there that focus on the practical side of applying machine learning models and using the appropriate libraries. It seems to me that there are no problems with them, but I would like to find a book on theory. By now I have skimmed through the following books: * [Pattern Recognition And Machine Learning](https://rads.stackoverflow.com/amzn/click/com/0387310738) It looks very nice. The only point of concern is that the book was published in 2006, so I am not sure about the relevance of the chapters covering neural nets, since this field is developing rather fast. * [The elements of statistical learning](https://rads.stackoverflow.com/amzn/click/com/0387848576) This book also seems very good. It covers most of the same topics as the first book. However, I feel that its style is different, and I do not know which book will suit me better. * [Artificial Intelligence. A Modern Approach](https://rads.stackoverflow.com/amzn/click/com/0136042597) This one covers more recent topics, such as natural language processing. As far as I understand, it represents the view of a computer scientist on machine learning.
* [Machine Learning A Probabilistic Perspective](https://rads.stackoverflow.com/amzn/click/com/0262018020) Maybe it has a slight bias towards probability theory, which is stated in the title. However, the book looks fascinating as well. I think that the first or the second book should suit me, but I do not know what decision to make. I am sure that I have overlooked some books. **Are there some other ML books that focus on theory?**<issue_comment>username_1: [Pattern Recognition And Machine Learning](http://users.isr.ist.utl.pt/%7Ewurmd/Livros/school/Bishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf) is a great theoretical book. I don't know anything better on standard ML. I have read several pages from it myself, and all my research colleagues suggest looking there if you are not sure about some concepts. The two problems with it are that it's huge and that it doesn't cover most of the deep learning models known today. So, in addition, I'd suggest you look at [Deep Learning](https://www.deeplearningbook.org/) by <NAME> et al. Your concerns about not studying topology, measure theory and group theory are groundless. These sections of math aren't prerequisites in any way; they aren't even discussed anywhere I know of. Actually, ML theory is more like probability theory and statistics. Especially statistical learning theory (which is nothing more than probability theory and statistics). I haven't read any books on SLT, so have a look at [this answer](https://ai.stackexchange.com/a/20358/2444). Upvotes: 1 <issue_comment>username_2: Some of the books that you mention are often used as reference books in introductory courses to machine learning or artificial intelligence.
For example, if I remember correctly, in my introductory course to machine learning, the professor suggested the book [Pattern Recognition And Machine Learning](http://users.isr.ist.utl.pt/%7Ewurmd/Livros/school/Bishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf) (2006) by Bishop, although we never used it during the lessons. This is a good book, but, in my opinion, it covers many topics, such as variational inference or sampling methods, that are not suited for an introductory course. The book [Artificial Intelligence. A Modern Approach](http://aima.cs.berkeley.edu/), by <NAME> Russell, definitely does not focus on machine learning, but it covers many other aspects of **artificial intelligence**, such as search, planning, knowledge representation, machine learning, robotics, natural language processing or computer vision. This is probably the book that you should read and use if you want to have an extensive overview of the AI field. Although I never fully read it, I often used it as a reference, as I use the other mentioned book. For instance, during my bachelor's and, more specifically, an introductory course to artificial intelligence, we had used this book as the reference book, but note that there are [other books that provide an extensive overview of the AI field](https://ai.stackexchange.com/a/25799/2444). The other two books are not as famous as these two, but they are probably also good books, although their focus may be different. There are at least three other books that I think you should also be aware of, given that they also cover the actual *theory of learning*, aka **(computational) learning theory**, before diving into more specific topics, such as kernel methods. 
* [Machine Learning](http://profsite.um.ac.ir/%7Emonsefi/machine-learning/pdf/Machine-Learning-Tom-Mitchell.pdf) (1997) by <NAME>chell * [Foundations of Machine Learning](https://mitpress.ublish.com/ereader/7093/?preview=#page/Cover) (2nd edition, 2018) by <NAME> et al. * [Understanding Machine Learning: From Theory to Algorithms](https://www.cs.huji.ac.il/%7Eshais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf) (2014) by <NAME> et al. You can find more books on learning theory [here](https://ai.stackexchange.com/a/20358/2444). Upvotes: 3 [selected_answer]
2020/09/11
1,102
4,086
<issue_start>username_0: Each person probably uses an app that tracks his/her position periodically and sends it to our servers. What I want is to use these data to train a model to predict the rush hours of each bus-stop on the map, so we can send extra buses to handle the predicted cumulation before it happens. I have no experience in AI nor machine learning. So, which model should I use to do this?
2020/09/12
1,228
4,636
<issue_start>username_0: Genetic algorithms are used to solve many optimization tasks. If I have a dataset, can I evolve it with a genetic algorithm to create an evolved version of the same dataset? We could consider each feature of the initial dataset as a chromosome (or individual), which is then combined with other chromosomes (features) to find more features. Is this possible? Has this been done? I will like to edit the details with an example so that it is easier to understand. Example: In practice cyber-security attacks evolve over time since it finds a new way to breach a system. The main draw-back of intrusion detection model is that it needs to be trained every time attack evolves. So I was hoping if genetic algorithm can be used on the present benchmarked datasets (like NSL-KDD) to come up with a futuristic type dataset maybe after X-number of generations. And check if a model is able to classify that generated dataset as well.
2020/09/12
675
3,013
<issue_start>username_0: I read an [article](https://blog.coast.ai/five-video-classification-methods-implemented-in-keras-and-tensorflow-99cad29cc0b5) about captioning videos and I want to use solution number 4 (extract features with a CNN, pass the sequence to a separate RNN) in my own project. But, to me, it seems really strange that in this method we use the Inception model without any retraining. Every project has different requirements, and even if you use a pretrained model instead of your own, you should do some training. So I wonder how to do this. For example, I created a project where I use a network with CNN layers followed by LSTM and Dense layers, and in every epoch there is a feed-forward pass and backpropagation through the whole network, all layers. But what if you have a CNN network to extract features and an LSTM network that takes sequences as inputs? How do you train the CNN network if there is no defined output? This network should only extract features, but the network doesn't know which features. So the question is: how do you train a CNN to extract relevant features and then pass these features to an LSTM?<issue_comment>username_1: The approach where you don't train the whole net, but just the latter part of it (everything starting with the LSTM in our case), can actually work. The idea is that the Inception net was already pretrained on a very large dataset (ImageNet, for instance) and is capable of extracting some useful information from it. There are images from many different domains in ImageNet, so the Inception net needed to capture a vast variety of input information to classify images well. As a result, the pretrained Inception is already capable of extracting almost everything that could possibly be useful (unless your images are something completely different from ImageNet, but that is a rare case). Then you adapt the LSTM layers and the fully connected layers to correctly process that information.
Maybe you aren't going to get a perfect score with this approach, and maybe it's better to train the whole large net, including the Inception part, on the new data to lower the distributional shift (that is in fact what people usually do), but it takes more time to train, and if you don't have enough data you won't be able to achieve results that are significantly better than those with a frozen CNN part. Upvotes: 2 [selected_answer]<issue_comment>username_2: You could also just use a **task-agnostic CNN** as an **encoder** to extract features, as in (1): take the output of the last global pooling layer and feed it as an input to the LSTM layer or any other downstream task. Add another small neural network (a **projection head**) after the CNN, and then use a contrastive loss on the output of this projection head to improve the model. (1) Big Self-Supervised Models are Strong Semi-Supervised Learners (<NAME>, <NAME>, <NAME>, <NAME>, <NAME>) Link: <https://arxiv.org/abs/2006.10029>
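The frozen-encoder setup described by username_1 can be sketched in Keras. A tiny CNN stands in for a pretrained Inception here (loading `keras.applications.InceptionV3` would work the same way but downloads weights), and the frame count and image size are arbitrary illustrative choices:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical input: 8-frame clips of 32x32 RGB frames (stand-in for real video)
frames = keras.Input(shape=(8, 32, 32, 3))

# A small CNN as a stand-in for a pretrained Inception feature extractor
cnn = keras.Sequential([
    layers.Conv2D(16, 3, activation='relu'),
    layers.GlobalAveragePooling2D(),
])
cnn.trainable = False  # freeze the feature extractor: no gradient updates here

# Apply the frozen CNN to every frame, producing a feature sequence for the LSTM
features = layers.TimeDistributed(cnn)(frames)   # (batch, 8, 16)
x = layers.LSTM(32)(features)
outputs = layers.Dense(10, activation='softmax')(x)

model = keras.Model(frames, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
# Only the LSTM and Dense weights are updated during model.fit(...)
```

The CNN needs no "defined output" of its own: its weights come from pretraining, and only the layers after it learn on the new task.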
2020/09/13
1,402
5,154
<issue_start>username_0: As we all know, ["Hello World"](https://en.wikipedia.org/wiki/%22Hello,_World!%22_program) is usually the first program that any programmer learns/implements in any language/framework. As <NAME> mentioned in his [book](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/) that [MNIST](http://yann.lecun.com/exdb/mnist/) is often called the *Hello World of Machine Learning*, **is there any "Hello World" problem of Reinforcement Learning?** A few candidates that I could think of are [multi armed bandits problem](https://en.wikipedia.org/wiki/Multi-armed_bandit) and [Cart Pole Env](https://gym.openai.com/envs/CartPole-v0/).<issue_comment>username_1: MNIST (along with CIFAR) may be the "Hello World" of **supervised** learning for image classification, but it is definitely not the "Hello World" of all machine learning techniques, given that RL is also part of ML and MNIST is definitely not the "Hello World" of RL. I don't think there is a single "Hello World" problem for RL. However, if you are looking for simple problems (or environments) that are usually used as baselines to assess the quality of RL agents, then I would say that the simple **grid worlds** where you need to move from one place to the other, the **CartPole**, **MountainCar**, **Pendulum** or other environments listed [here](https://github.com/openai/gym/wiki/Leaderboard) are often used. The environment that you choose to train and test your RL agent depends on your goals. For example, if you designed an algorithm that is supposed to deal with continuous action spaces, then an environment where you can take only a discrete number of actions may not be a good option. The mentioned environments are very simple (i.e. toy problems). In my opinion, we need more serious environments that can show the applicability of RL to other areas other than (relatively simple) games. 
Upvotes: 3 <issue_comment>username_2: While there's no simple Hello World problem of RL, if your aim is to understand the basic working of Reinforcement Learning and see it at play while using as few moving parts as possible, a simple suggestion would be using **Tabular** Q-Learning in a toy environment (like your suggested Cart-Pole Env). **Here's the reasoning behind this suggestion** Let's say we interpret MNIST's label as a *Hello World* of Supervised Learning to mean something showing the basic steps of doing Supervised Learning: Create a model, load the data, then train. If that interpretation is not far off, we can say a simple introductory problem to Reinforcement Learning (RL) should focus on easily demonstrating a working [Markov Decision Process (MDP)](https://www.coursera.org/lecture/fundamentals-of-reinforcement-learning/markov-decision-processes-8T0GQ) which is the backbone of the RL decision making process. As such, this minimal working would involve: Observing the world, selecting an action, as shown in this loop: [![Simple MDP](https://i.stack.imgur.com/rrgVV.jpg)](https://i.stack.imgur.com/rrgVV.jpg) This picture is missing two important steps in an RL algorithm learning loop: 1. Estimating the rewards or Fitting the model 2. Improving how you select actions. (Updating your policy) How we decide to update the policy, or fit the model is what makes difference in the RL algorithm most of the time. So a suggested first problem would be one that helps you **see the MDP in action**, while keeping steps 1 and 2 simple enough so that you **understand how the agent learns**. Tabular Q-Learning seems clear enough for this because it uses a [Q-table](https://observablehq.com/@frynd/q-table-reinforcement-learning) represented as a 2D array to do the two steps. 
This is not to suggest that Q-learning is a "Hello World" RL algorithm just because of its relative ease of understanding :) You will be unable to use its tabular version anywhere other than in toy environments, though, typically Frozen-Lake and CartPole. An improvement would be [using a neural network](https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html) instead of a table to estimate Q values. Here are a few useful resources: 1. [Q-Learning with Tables](https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0) (Guide) 2. [Q-learning jupyter notebook](https://gist.github.com/awjuliani/9024166ca08c489a60994e529484f7fe#file-q-table-learning-clean-ipynb) (Code ~25 lines) 3. [Q-Learning with Frozen-Lake and Taxi](https://github.com/username_2/reinforcementLearning/tree/master/algorithms/qlearning) (Code) 4. [Reinforcement Learning with Q-Learning](https://www.freecodecamp.org/news/diving-deeper-into-reinforcement-learning-with-q-learning-c18d0db58efe/) (Guide) A multi-armed bandit would also be great for introducing you to the [exploration-exploitation trade-off](https://or.stackexchange.com/questions/4598/difference-between-exploration-and-exploitation-in-simulated-annealing-algorithm) (which Q-learning faces too), though it wouldn't be considered a full RL algorithm since it has no context. Upvotes: 2
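As a concrete starting point, here is a minimal tabular Q-learning sketch on a toy five-state corridor (an environment made up here for illustration: the agent starts at state 0 and gets a reward of 1 for reaching state 4). It shows the full MDP loop plus the two learning steps, epsilon-greedy action selection and the Q-table update:

```python
import numpy as np

n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.2
Q = np.ones((n_states, n_actions))    # optimistic initialisation encourages exploration
rng = np.random.default_rng(0)

def step(s, a):
    # Toy corridor dynamics: reward 1 only for stepping onto the last state
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, float(s2 == n_states - 1), s2 == n_states - 1

for _ in range(200):                  # episodes
    s = 0
    for _ in range(100):              # cap episode length
        # epsilon-greedy action selection (exploration vs exploitation)
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best next-state value
        target = r if done else r + gamma * Q[s2].max()
        Q[s, a] += alpha * (target - Q[s, a])
        if done:
            break
        s = s2

print(Q.argmax(axis=1))               # greedy policy: should point right in states 0-3
```

Swapping the table for a neural network that maps states to Q-values, as suggested above, turns this same loop into deep Q-learning.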
2020/09/14
2,216
7,170
<issue_start>username_0: I am constructing a convolutional variational autoencoder for images, starting out with mnist digits. Typically I would specify convolutional layers in the following way: ``` input_img = layers.Input(shape=(28,28,1)) conv1 = keras.layers.Conv2D(32, (3,3), strides=2, padding='same', activation='relu')(input_img) conv2 = keras.layers.Conv2D(64, (3,3), strides=2, padding='same', activation='relu')(conv1) ... ``` However, I would also like to construct a convolutional filter/kernel that is fixed BUT dependent on some content related to the input, which we can call an auxiliary label. This could be a class label or some other piece of relevant information corresponding to the input. For example, for MNIST I can use the class label as auxiliary information and map the digit to a (3,3) kernel and essentially generate a distinct kernel for each digit. This specific filter/kernel is not learned through the network so it is fixed, but it is class dependent. This filter will then be concatenated with the traditional convolutional filters shown above. ``` input_img = layers.Input(shape=(28,28,1)) conv1 = keras.layers.Conv2D(32, (3,3), strides=2, padding='same', activation='relu')(input_img) # TODO: add a filter/kernel that is fixed (not learned by model) but is class label specific # Not sure how to implement this? # auxiliary_conv = keras.layers.Conv2D(1, (3,3), strides=2, padding='same', activation='relu')(input_img) ``` I know there are kernel initializers to specify initial weights <https://keras.io/api/layers/initializers/>, but I'm not sure if this is relevant and if so, how to make this work with a class specific initialization. In summary, I want a portion of the model's weights to be input content dependent so that some of the trained model's weights vary based on the auxiliary information such as class label, instead of being completely fixed regardless of the input. Is this even possible to achieve in Keras/Tensorflow? 
I would appreciate any suggestions or examples to get started with implementation.<issue_comment>username_1: Not a tensorflow expert but I may be able to offer some conceptual advice. Since you do not care to learn the filter, but instead want to fix a discrete set of possible values for a discrete set of cases, you can use tensor operations (i.e convolutions) rather than neural network layer operations. Essentially, in framework-agnostic pseudocode this would look like: ``` # layers with learned parameters output1 = layers1(input) # apply unlearned but changeable layer convolutions kernel_val = kernel_val_selection_function(output1) output2 = convolve_2D(output1,kernel_val) # more layers with learned parameters output3 = layers3(output2) ... ``` The function graph will treat `kernel_val` as a constant for purposes of backpropagation, so as long as your convolution operations are done within the framework used to create the function graph (i.e. tensorflow) you shouldn't have any problems with backprop. Upvotes: 0 <issue_comment>username_2: Here is one way of achieving this. This network is an autoencoder, with extra `auxiliary_convs`. The active convolution depends on the input image's class, since each convolution layer's output is multiplied with the one-hot encoded class input. 
``` import matplotlib.pyplot as plt import numpy as np from tensorflow import keras from tensorflow.keras import layers import tensorflow.keras.backend as K BN = layers.BatchNormalization (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() x_train = x_train.astype('float32') / 255 x_test = x_test.astype('float32') / 255 # Make sure images have shape (28, 28, 1) x_train = np.expand_dims(x_train, -1) x_test = np.expand_dims(x_test, -1) # Convert class vectors to binary class matrices num_classes = y_train.max() + 1 y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) # Build the model inputs = [keras.Input(shape=x_train.shape[1:]), keras.Input(shape=y_train.shape[1:])] dim = 8 x = inputs[0] x = BN()(layers.Conv2D(dim, kernel_size=3, activation='relu')(x)) x = BN()(layers.Conv2D(dim, kernel_size=3, activation='relu')(x)) auxiliary_convs = [layers.Conv2D(dim, kernel_size=3, padding='same', activation='relu') for _ in range(num_classes)] x_auxs = [conv(x) * inputs[1][:,ix:ix+1,None,None] for ix, conv in enumerate(auxiliary_convs)] x = K.sum(K.concatenate([x_aux[:,:,:,:,None] for x_aux in x_auxs]), axis=-1) x = layers.AveragePooling2D(pool_size=2)(x) x = BN()(layers.Conv2D(dim, kernel_size=3, activation='elu')(x)) x = layers.AveragePooling2D(pool_size=2)(x) x = layers.Flatten()(x) x = layers.Dense(16, activation='elu')(x) x = layers.Dense(2, activation='tanh')(x) y = x y = BN()(layers.Dense(4, activation='elu')(y)) y = BN()(layers.Dense(8, activation='elu')(y)) y = BN()(layers.Dense(16, activation='elu')(y)) y = layers.Dense(np.prod(inputs[0].shape[1:]), activation='sigmoid')(y) y = layers.Reshape(inputs[0].shape[1:])(y) model = keras.Model(inputs, y) model.summary() ``` It is a bit simpler if you don't sum auxiliary tensors together, but simply concat them: ``` x = K.concatenate(x_auxs) ``` However in this case the last `Conv2D` layer would have redundant parameters to train, since `x` 
has `dim * num_classes` dimensions after the `K.concatenate`. If you don't want to train the input-dependent part you can freeze the layers, but I don't know why you would want to do that. I also tested a variation of this for fun, having a constrained convolutional network but modifying the autoencoded feature based on the image's class: ``` inputs = [keras.Input(shape=x_train.shape[1:]), keras.Input(shape=y_train.shape[1:])] act, dim, enc_dim, l2_reg = 'elu', 16, 2, 1e-1 x = inputs[0] x = BN()(layers.Conv2D(8, kernel_size=3, activation=act)(x)) for _ in range(3): x = BN()(layers.Conv2D(dim, kernel_size=3, activation=act)(x)) x = layers.AveragePooling2D(pool_size=2)(x) for _ in range(3): x = BN()(layers.Conv2D(dim, kernel_size=3, activation=act)(x)) x = layers.Flatten()(x) for k in [6, 8, 4, 3]: x = BN()(layers.Dense(k, activation='elu')(x)) x = layers.Dense(enc_dim, activation='linear', use_bias=False, activity_regularizer=keras.regularizers.l2(l2_reg))(x) a = layers.Dense(enc_dim, activation='exponential', use_bias=False)(inputs[1]) b = layers.Dense(enc_dim, activation='linear')(inputs[1]) x = keras.activations.tanh(a * x + b) y = x for k in [3, 4, 8, 32, 128]: y = BN()(layers.Dense(k, activation='elu')(y)) y = layers.Dense(np.prod(inputs[0].shape[1:]), activation='sigmoid')(y) y = layers.Reshape(inputs[0].shape[1:])(y) model = keras.Model(inputs, y) model.summary() ``` The scatter plot on the left shows the codes (aka. embeddings), and the image's class is varied on the decoded examples on the right. Higher the `l2_reg` is, the tighter the classes' distribution is. Although I don't know what is the utility of this network :D [![2d autoencoder](https://i.stack.imgur.com/G80zx.png)](https://i.stack.imgur.com/G80zx.png) Upvotes: 2 [selected_answer]
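The fixed, class-dependent kernel idea discussed above can also be illustrated framework-agnostically. Below is a minimal NumPy sketch (the kernel lookup table and image values are made-up illustrations, not from the original post) that picks a fixed 3x3 kernel based on a class label and applies a plain 'valid' cross-correlation:

```python
import numpy as np

# Hypothetical lookup table: one fixed (never-trained) 3x3 kernel per class label.
CLASS_KERNELS = {
    0: np.ones((3, 3)) / 9.0,                                          # class 0: box blur
    1: np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float),   # class 1: Laplacian
}

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D cross-correlation, stride 1, no padding."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def class_conditional_conv(img, label):
    # The kernel is fixed (not learned), but which kernel is used depends
    # on the auxiliary class label of the input.
    return conv2d_valid(img, CLASS_KERNELS[label])

img = np.arange(25, dtype=float).reshape(5, 5)
out0 = class_conditional_conv(img, 0)   # blurred output, shape (3, 3)
out1 = class_conditional_conv(img, 1)   # Laplacian output, shape (3, 3)
```

In the Keras answer above, the same selection is done differentiably inside the graph by multiplying each candidate convolution's output with the one-hot class input and summing.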
2020/09/15
1,932
5,977
<issue_start>username_0: I have roughly 30,000 images of two categories, which are 'crops' and 'weeds.' An example of what I have can be found below: [![enter image description here](https://i.stack.imgur.com/ZIwtM.jpg)](https://i.stack.imgur.com/ZIwtM.jpg) The goal is to use my training images to detect weeds among crops, given an orthomosaic GIS image of a given field. I guess you could say that I'm trying to detect certain objects in the field. As I'm new to deep learning, how would one go about generating training labels for this task? Can I just label the entire photo as a 'weed' using some type of text file, or do I actually have to draw bounding boxes (around weeds) on each image that will be used for training? If so, is there an easier way than going through all 30,000 of my images? I'm very new to this, so any specific details would really help a lot!
2020/09/16
660
2,748
<issue_start>username_0: That is, if AGI were an existing technology, how much would it be valued at? Obviously it would depend on its efficiency: if it requires more than all the existing hardware to run it, it would be impossible to market. This question is more about getting a general picture of the economy surrounding this technology. Assuming a specific definition of AGI and that we implemented that AGI, what is its potential economic value? Current investments in this research field are also useful data.<issue_comment>username_1: I will try to give some sense to this question. > > Artificial general intelligence (AGI) is the hypothetical[1] intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI,[2][3][4] full AI,[5] or general intelligent action.[6] Some academic sources reserve the term "strong AI" for machines that can experience consciousness. > > > These are the first sentences on AGI on Wikipedia ([link](https://en.wikipedia.org/wiki/Artificial_general_intelligence)), and the softest limit there is > > [learn] any intellectual task that a human being can. > > > Even taking only this, it would mean that any AGI has infinite economic value. As soon as there is something that can learn any human task and has the speed of current GPUs/CPUs, it could potentially immediately replace every human in every task. There are certainly enough computers with CPUs and GPUs out there. This question is still a little flawed because you not only have to constrain the definition of AGI but also specify how it would actually be implemented. Upvotes: 1 <issue_comment>username_2: The economic value would be high indeed, as, combined with robotics, AGI would be able to replace all human workers.
So: * Whatever the economic value of the sum of human labor is, in an ideal sense Of course, there would also be the question of the **cost of computation**, the cost of the hardware & software required for AGI, and whether that cost is higher or lower than the cost of human labor. (My guess is biological machines such as humans would be cheaper, both in production and processing, until AGI leverages molecular computing via an inexpensive, ubiquitous substrate. Also worth noting that biological systems such as humans and canines may be more fault-tolerant, and more resilient in that they can persist even where the technological base collapses.) Currently, cost of training even narrowly superintelligent Neural Networks which exceed humans at a single function is extremely high. Upvotes: 0
2020/09/16
1,104
4,257
<issue_start>username_0: How much is currently invested in artificial general intelligence research and development *worldwide*? Feel free to add company or VC names, but this is not the point. The point is to get an idea of the economics around artificial general intelligence.<issue_comment>username_1: In the last years, there have been big investments in AI technologies. For an overview, maybe take a look at this article [Artificial Intelligence: Investment Trends and Selected Industry Uses](http://documents1.worldbank.org/curated/ar/617511573040599056/pdf/Artificial-Intelligence-Investment-Trends-and-Selected-Industry-Uses.pdf) (2019). A few companies that have the **long-term** goal of creating an AGI, although, currently, they mainly do research on specific problems (e.g. video games) or AI techniques (e.g. reinforcement learning), have received many funds. I will only list a few (maybe the most well-known ones) of these companies below, but there are probably many other companies that have this long-term goal and have been funded by other companies or people. ### DeepMind In [their site](https://deepmind.com/about), they write > > Like the Hubble telescope that helps us see deeper into space, we aim to build advanced AI - sometimes known as **Artificial General Intelligence (AGI)** - to expand our knowledge and find new answers. By solving this, we believe we could help people solve thousands of problems. > > > DeepMind was acquired by Google in 2015 for $500 million, given its success in playing games at a superhuman performance, which is a promising step towards the development of more AI techniques and maybe AGI. The [Wikipedia article on DeepMind](https://en.wikipedia.org/wiki/DeepMind) contains some information about people or companies that have invested in DeepMind, which includes companies Horizons Ventures and Founders Fund, and people <NAME>, <NAME>, <NAME>, and <NAME>, although I cannot give you the exact numbers in terms of capital. 
In any case, serious investments have been done in DeepMind, which is definitely one of the promising companies that could develop good insights into the development of AGI systems. ### OpenAI Another company that has a similar goal and is doing research on similar topics (such as reinforcement learning or natural language processing) is OpenAI, which also has the long-term goal of creating AGI systems, as they write in [their website](https://openai.com/about/) > > OpenAI’s mission is to ensure that **artificial general intelligence (AGI)** — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. > > > [Their investors include Microsoft, <NAME>'s charitable foundation, and <NAME>](https://openai.com/about/). ### Vicarious They write in [their site](https://www.vicarious.com/company/) > > **Artificial general intelligence** is the finish line on our journey toward progressively more capable robots. Our approach leverages deep expertise in neuroscience and is shaped by a decade of research. > > > [They are apparently backed by more than 150 million dollars](https://www.vicarious.com/company/) from people like <NAME>, <NAME> and <NAME> and companies like Samsung. Upvotes: 3 <issue_comment>username_2: Forget about OpenAI and DeepMind etc. in that order. They are just doing stupid deep learning and backward propagation which is technology from the 50s. There are virtually no pure AGI companies because it's still early days. Total investment per year of serious AGI is far below 100 Million a year. Go read some AGI papers from the AGI conference and adjacent conferences, some entities like to point out publically their funding sources. Sometimes even with the exact amount of funding. 
Bottom line: Funding is still ridiculously low compared to what, say, Google makes in 2 months alone with stupid ads. The field is still massively underfunded <https://medium.com/@petervoss/why-arent-more-people-working-on-agi-7c8367bf2615> Upvotes: 0
2020/09/20
1,332
3,879
<issue_start>username_0: I am trying to create a simple Deep Q-Network with 2d convolutional layers. I can't figure out what I am doing wrong, and the only thing I can see that doesn't seem right is when I get the model prediction for a state after the optimizer step it doesn’t seem to get closer to the target. I am using pixels from pong in OpenAI's gym with single-channel 90x90 images, a batch size of 32, and replay memory. As an example, if I try with a batch size of 1, and try running `self(states)` again right after the optimizer step the output is as follows: ``` current_q_values -> -0.16351485 0.29163417 0.11192469 -0.08969332 0.11081569 0.37215832 q_target -> -0.16351485 0.5336551 0.11192469 -0.08969332 0.11081569 0.37215832 self(states) -> -0.8427617 0.6415581 0.44988257 -0.43897176 0.8693738 0.40007943 ``` Does this look as what would be expected for a single step? The network with loss and optimizer: ``` self.in_layer = Conv2d(channels, 32, 8) self.hidden_conv_1 = Conv2d(32, 64, 4) self.hidden_conv_2 = Conv2d(64, 128, 3) self.hidden_fc1 = Linear(128 * 78 * 78, 64) self.hidden_fc2 = Linear(64, 32) self.output = Linear(32, action_space) self.loss = torch.nn.MSELoss() self.optimizer = torch.optim.Adam( self.parameters(), lr=learning_rate) # lr is 0.001 def forward(self, state): in_out = fn.relu(self.in_layer(state)) in_out = fn.relu(self.hidden_conv_1(in_out)) in_out = fn.relu(self.hidden_conv_2(in_out)) in_out = in_out.view(-1, 128 * 78 * 78) in_out = fn.relu(self.hidden_fc1(in_out)) in_out = fn.relu(self.hidden_fc2(in_out)) return self.output(in_out) ``` Then the learning block: ``` self.optimizer.zero_grad() sample = self.sample(self.batch_size) states = torch.stack([i[0] for i in sample]) actions = torch.tensor([i[1] for i in sample], device=device) rewards = torch.tensor([i[2] for i in sample], dtype=torch.float32, device=device) next_states = torch.stack([i[3] for i in sample]) dones = torch.tensor([i[4] for i in sample], dtype=torch.uint8, 
device=device) current_q_vals = self(states) next_q_vals = self(next_states) q_target = current_q_vals.clone() q_target[torch.arange(states.size()[0]), actions] = rewards + (self.gamma * next_q_vals.max(dim=1)[0]) * (~dones).float() loss = fn.smooth_l1_loss(current_q_vals, q_target) loss.backward() self.optimizer.step() ```<issue_comment>username_1: Since you are looking at a single iteration and expect a meaningful change, my guess is that you aren't training for long enough. Q-learning can take very long; for many environments it takes millions of iterations. Upvotes: 0 <issue_comment>username_2: In my experience, neural networks with convolutional layers take much, much longer to train, so try increasing the number of iterations (time steps). After running, save the network model (I don't know how to do it in torch, but in tensorflow it was model.save("filename"+".h5")). Then, load this saved model file and do a test run to see if it worked. In this case, you should notice pretty easily if it learned or not. Upvotes: 1 <issue_comment>username_3: I found the reason it wasn't learning. The issue was this line of code: ``` q_target[torch.arange(states.size()[0]), actions] = rewards + (self.gamma * next_q_vals.max(dim=1)[0]) * (~dones).float() ``` I had been using the tilde operator before to invert uint8 tensors, but recently I had updated to the latest version of PyTorch, which seems to have changed how the operator works. It was changing the done values to 255. Changing to this line fixed it: ``` q_target[torch.arange(states.size()[0]), actions] = rewards + (self.gamma * next_q_vals.max(dim=1)[0]) * (1 - dones) ``` Upvotes: 0
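The accepted fix above can be reproduced outside PyTorch: on unsigned 8-bit integers, `~` is a bitwise NOT, so `~0` becomes 255 rather than the logical 1 the bootstrap mask needs. A small demonstration (NumPy stands in for the torch tensors here):

```python
import numpy as np

dones = np.array([0, 1], dtype=np.uint8)

# Bitwise NOT on uint8: ~0 -> 255, ~1 -> 254 (not a logical negation!)
bitwise_not = ~dones

# What the bootstrap mask should be: 1 where the episode continues, 0 where done.
logical_not = 1 - dones

# With the bitwise version, a non-terminal transition scales the bootstrap
# term by 255 instead of 1, blowing up the Q-target.
```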
2020/09/20
290
1,311
<issue_start>username_0: I know that if you use a ReLU activation function at a node in the neural network, the output of that node will be non-negative. I am wondering if it is possible to have a negative output in the final layer, provided that you do not use any activation functions in the final layer, and all the activation functions in the previous hidden layers are ReLU?<issue_comment>username_1: Yes, if there's no activation function in the last layer, the weights could simply be negative there, so the network would multiply a positive value with a negative weight, therefore outputting a negative value. There is still an activation function, but it is the identity. Upvotes: 2 <issue_comment>username_2: I guess you are using a NN for regression. In the most common applications, a scaling of the outputs is implemented. This is recommended, especially if you have more than one output with different scales; otherwise, the network will prioritize correcting the error of one variable over the other. If you still want to avoid scaling the outputs: yes, you can use the identity function in the output layer, or a linear function (the same with a different slope). The weights and biases of some connections will become negative, and the hidden neurons are going to work as always. Upvotes: 0
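username_1's point above can be checked with a tiny hand-built network: the ReLU hidden activations are non-negative, yet a negative weight in the linear (identity-activation) output layer yields a negative output. A minimal NumPy sketch, with weights chosen by hand purely for illustration:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

x = np.array([1.0, 2.0])          # input
W1 = np.array([[1.0, 0.5],        # hidden layer weights
               [0.0, 1.0]])
h = relu(W1 @ x)                  # hidden activations, guaranteed >= 0

w_out = np.array([-1.0, 0.25])    # negative weight in the output layer
y = w_out @ h                     # no activation on the final layer

# h is non-negative, yet y is negative: -1.0 * 2.0 + 0.25 * 2.0 = -1.5
```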
2020/09/20
369
1,648
<issue_start>username_0: There are many types of CNN architectures: LeNet, AlexNet, VGG, GoogLeNet, ResNet, etc. Can we apply transfer learning between any two different CNN architectures? For instance, can we apply transfer learning from AlexNet to GoogLeNet, etc.? Or even just from a "conventional" CNN to one of these other architectures, or the other way around? Is this possible in general? EDIT: My understanding is that *all* machine learning models have the ability to perform transfer learning. If this is true, then I guess the question is, as I said, whether we can transfer between two *different* CNN architectures – for instance, what was learned by a conventional CNN to a different CNN architecture.
2020/09/21
1,218
4,983
<issue_start>username_0: While exploration is an integral part of reinforcement learning (RL), it does not pertain to supervised learning (SL) since the latter is already provided with the data set from the start. That said, can't hyperparameter optimization (HO) in SL be considered as exploration? The more I think about this the more I'm confused as to what exploration really means. If it means exploring the environment in RL and exploring the model configurations via HO in SL, isn't its end goal "mathematically" identical in both cases?<issue_comment>username_1: In reinforcement learning, **exploration** has a specific meaning, which is in contrast with the meaning of **exploitation**, hence the so-called **exploration-exploitation dilemma** (or trade-off). You **explore** when you decide to visit states that you have not yet visited or to take actions you have not yet taken. On the other hand, you **exploit** when you decide to take actions that you have already taken and you know how much reward you can get. It's like in life: maybe you like cereals $A$, but you never tried cereals $B$, which could be tastier. What are you going to do: continue to eat cereals $A$ (exploitation) or maybe try once $B$ (exploration)? Maybe cereals $B$ are as tasty as $A$, but, in the long run, $B$ are healthier than $A$. More concretely, recall that, in RL, the goal is to collect as much reward as you can. Let's suppose that you are in state $s$ and, in the past, when you were in that state $s$, you had already taken the action $a\_1$, but not the other actions $a\_2, a\_3$ and $a\_4$. The last time you took action $a\_1$, you received a reward of $1$, which is a good thing, but what if you take action $a\_2, a\_3$ or $a\_4$? Maybe you will get a higher reward, for example, $10$, which is better. So, you need to decide whether to choose again action $a\_1$ (i.e. whether to **exploit** your current knowledge) or try another action that may lead to a higher (or smaller) reward (i.e. 
you **explore** the environment). The problem with exploration is that you don't know what's going to happen, i.e. you are taking a risk: you already know how much reward you get from the actions you have taken before, but sometimes exploration is the best thing to do, given that maybe the actions you have taken so far have not led to any good reward. In hyper-parameter optimization, you do not need to collect any reward, unless you formulate your problem as a reinforcement learning problem ([which is possible](https://arxiv.org/pdf/1906.11527.pdf)). The goal is to find the best set of hyper-parameters (e.g. the number of layers and neurons in each layer of the neural network) that performs well, typically, on the validation dataset. Once you have found a set of hyper-parameters, you usually do not talk about exploiting it, in the sense that you will not continually receive any type of reward if you use that set of hyper-parameters, unless you conceptually decide that this is the case, i.e., whenever you use that set of hyper-parameters you are exploiting that model to get good performance on the test sets that you have. You could also say that when you are searching for new sets of hyper-parameters you are exploring the search space, but, again, the distinction between exploration and exploitation, in this case, is typically not made, although you can well talk about it. It makes sense to talk about the exploitation-exploration trade-off when there is stochasticity involved, but in the case of hyper-parameter optimization there may not be such stochasticity: it is usually a deterministic search, which you can, if you like, call exploration. Upvotes: 4 [selected_answer]<issue_comment>username_2: Just to add to the answer above. In fact, if the rewards that you get are not stochastic in RL, then you simply take a step in your parameter space that guarantees you the best reward so far (after the evaluation of all other states).
So, for example, if the action "up" is the best one so far, nothing motivates you to try another one. When you are doing naïve HO, it could be seen as an exploration of the space: the environment is not stochastic, but the rewards (loss decreases) that you will get are not known by the agent beforehand. That's enough to make the exploration step mandatory. So, let's say the combination (up, up, down) has given you the best loss so far; you need to actually try other combinations to know whether it is the best of all. In that sense you are exploring too. So when are you not exploring? If the next step in your HO is given by an optimization step, say by a function $f$, then you are not exploring anymore: you are progressing toward the objective given by $f$. Thus, you have to make sure that $f$ correctly gives you the best combination of parameters - mathematically, that $f$ converges to a global optimum. So grid search could be viewed as exploration, while Bayesian-optimization HO not so much. Upvotes: -1
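The exploration-exploitation trade-off described in the accepted answer above is commonly implemented as an epsilon-greedy rule: with probability epsilon take a random action (explore), otherwise take the best-known action (exploit). A minimal sketch over a fixed vector of estimated action values (the values and epsilon are illustrative, not from the post):

```python
import random

def epsilon_greedy(q_values, epsilon, rng):
    """Pick an action index: explore with probability epsilon, else exploit."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                          # explore
    return max(range(len(q_values)), key=q_values.__getitem__)       # exploit

rng = random.Random(0)
q = [1.0, 10.0, 0.5, 0.2]   # current value estimates for actions a1..a4

picks = [epsilon_greedy(q, 0.1, rng) for _ in range(1000)]
greedy_share = picks.count(1) / len(picks)
# Roughly 90%+ of the picks are the greedy action (index 1);
# the rest are exploratory, visiting actions whose true value is unknown.
```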
2020/09/21
549
2,594
<issue_start>username_0: This is a simple question. I know the weights in a neural network can be initialized in many different ways like: random uniform distribution, normal distribution, and Xavier initialization. **But what is the weight initialization trying to achieve?** Is it trying to allow the gradients to be large so it can quickly converge? Is it trying to make sure there is no symmetry in the gradients? Is it trying to make the outputs as random as possible to learn more from the loss function? Is it only trying to prevent exploding and vanishing gradients? Is it more about speed or finding a global maximum? What would the perfect weights (without being learned parameters) for a problem achieve? What makes them perfect? What are the properties in an initialization that makes the network learn faster?<issue_comment>username_1: The most important thing we achieve is indeed making sure the weights are not all equal. If they were, every layer would behave as if it were a single cell. We typically want weights that are near zero (so unimportant connections will not accidentally dominate) but non-zero. The different types of initialization all have different motivations, including those mentioned in the question. If you're curious what the motivation for each one is, I would recommend you check the documentation and try to find the original papers where they were first introduced. Upvotes: 1 <issue_comment>username_2: * **Is it trying to make sure there is no symmetry in the gradients**? The aim of weight initialization is to make sure that we don't converge to a trivial solution. That's why we have different kinds of initialization depending on the dataset type. So, Yes it is trying to avoid symmetry. * **Is it trying to allow the gradients to be large so it can quickly converge?** The time it takes to converge, is I think a property of the optimizer and not of the weights initialization. 
Of course, the manner in which we initialize our weights matters but I think Optimization Algorithms contribute more towards convergence * **What are the properties in an initialization that makes the network learn faster?** Glorot and Bengio believed that Xavier weight initialization would maintain the variance of activations and back-propagated gradients all the way up or down the layers of a network. Incidentally, when they trained deeper networks that used ReLUs, it was found that a 30-layer CNN using Xavier initialization stalled completely and didn’t learn at all. Thus, it depends on the particular problem at hand. Upvotes: 3 [selected_answer]
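username_2's point above about Glorot/Xavier initialization maintaining the variance of activations can be checked numerically: drawing weights with variance 2/(fan_in + fan_out) keeps the output variance of a stack of linear layers close to the input variance, instead of exploding or vanishing. A quick NumPy check (layer sizes and depth are arbitrary choices for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in = fan_out = 256
x = rng.normal(0.0, 1.0, size=(10_000, fan_in))   # unit-variance inputs

# Xavier/Glorot (normal) initialization: Var(W) = 2 / (fan_in + fan_out)
for _ in range(10):
    W = rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)),
                   size=(fan_in, fan_out))
    x = x @ W          # linear layer; each layer's gain is ~= 1 in expectation

out_var = x.var()
# After 10 layers, the variance is still of order 1.
```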
2020/09/23
699
2,470
<issue_start>username_0: Recently I have come up with a VGG16 model for my binary classification task. I have relatively simple signal images [![this](https://i.stack.imgur.com/w6K40.png)](https://i.stack.imgur.com/w6K40.png) Therefore (maybe?) other deeper models like `resnet18` and `Inceptionv3` were not as good. As known, VGG uses `3x3` filters for convolving the images to make feature maps. I have tried several hyper-parameters to get a desired performance. However, there are still some things I need to do. I was thinking of replacing the `3x3` `conv` filters with `3x1` followed by `1x3` filters to reduce the compute. I think it will definitely do so considering the multiplications (9 operations for `3x3` and 6 for `3x1` followed by `1x3`). Then I came to think: ***If I replace all the `3x3` filters with separable filters, will I get any performance improvement?*** What are the benefits of replacing `3x3` filters with separable ones? Thanks<issue_comment>username_1: First of all, keep in mind that math operations aren’t the only thing that contributes to performance. Memory bandwidth can also be a factor. And most importantly, we want to capture as much area as we can in the lowest possible number of operations. So in the 3x3 kernel case, we can capture 9 cells in one shot, but with 3x1 followed by 1x3 we have to compute twice to capture those 9 cells, which suggests that the 3x3 kernel is more efficient than these two sequential kernels. So, the answer to your question is no: it will not improve performance; instead, it will increase the computation overhead for your system. Upvotes: -1 <issue_comment>username_2: If the filter is separable, that is, the NxM kernel is mathematically equal to the convolution of an Nx1 filter and a 1xM filter, there is a very important increase in performance. Using separable convolution, the network can make optimal usage of shared memory and of parallelism in memory access.
See [this](https://www.google.es/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwitnfqX98zsAhULkhQKHREPDtwQFjAAegQIBhAC&url=http%3A%2F%2Fdeveloper.download.nvidia.com%2Fassets%2Fcuda%2Ffiles%2FconvolutionSeparable.pdf&usg=AOvVaw1eIqyLuJT6nJDVK7jmyMmX) excellent article for details. These improvements are bigger for bigger kernels. Also, the training is improved, starting with the simple fact that an NxM filter has a number of parameters proportional to N\*M, but the related separable one has N+M. Upvotes: 1
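The N\*M vs N+M parameter argument in the last answer can be checked with a tiny sketch. The helper names and the single-channel setting are illustrative; the point is only the per-output-cell multiply count of a full NxM kernel versus its Nx1 + 1xM factorization.

```python
# Multiplies per output cell (equivalently, weights per in/out channel pair)
# for a full NxM kernel vs. its separable Nx1-then-1xM factorization.
def full_kernel_params(n, m):
    return n * m

def separable_kernel_params(n, m):
    # Nx1 pass followed by a 1xM pass.
    return n + m

print(full_kernel_params(3, 3))       # 9
print(separable_kernel_params(3, 3))  # 6
print(full_kernel_params(7, 7))       # 49 -- the gap widens with kernel size
print(separable_kernel_params(7, 7))  # 14
```

For a 3x3 kernel the saving is modest (9 vs 6), which is consistent with the answers: the benefit of separable filters grows with kernel size, and only applies when the kernel really is (or is constrained to be) separable.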
2020/09/23
505
1,797
<issue_start>username_0: I've started to work on time series. I was wondering what would be the best data normalizing and pre-processing technique for non-linear models, specifically, neural networks. One I can think of is min-max normalization $$z = \frac{x - min(x)}{max(x) - min(x)}$$
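The min-max formula in the question can be sketched directly. The data values below are made up for illustration; one caveat worth adding (an assumption on my part, standard practice for time series) is that the min and max should be computed on the training split only, to avoid leaking future information into the normalization.

```python
import numpy as np

def min_max_normalize(x):
    # z = (x - min(x)) / (max(x) - min(x)); maps the values into [0, 1].
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

z = min_max_normalize([10.0, 20.0, 15.0, 30.0])
# 10 -> 0.0, 20 -> 0.5, 15 -> 0.25, 30 -> 1.0
print(z)
```

For deployment, store the training-set min and max and reuse them on new data rather than recomputing them per window.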
2020/09/24
448
1,865
<issue_start>username_0: I would like to classify the subject of a conversation. I could classify each message of the conversation, but I would lose some information because of related messages. I also need to do it gradually and not at the end of the conversation. I have looked into **recurrent neural networks** and **connectionist classification**, but I'm not sure they really answer my issue.<issue_comment>username_1: This is a difficult problem. First, how do you define 'subject'? Do you have a (closed) list of labels you want to assign? What about subjects that overlap, or don't occur in your list? What even is a subject? This is a non-trivial issue. Second, and this is even harder, how do you want to recognise subjects? A simple solution could be using a list of associated keywords, but this is problematic as many words have multiple meanings, and words are not really a good indicator of a conversation topic in the first place. Instead of jumping to an implementation method, be clear about how you want to tackle these two items first. Start by annotating a conversation transcript by hand. You will then get a feeling for the problems and possible solutions. After you have done this, you can think about how to get a machine to do it efficiently. UPDATE: For a scheme to annotate the functions of lines within a conversation have a look at Francis & Hunston (1992) *Analysing Everyday Conversation*. In Coulthard, M. (ed.) "Advances in Spoken Discourse Analysis". London: Routledge. pp.123-161. This is more oriented towards linguistics, but might give you some ideas on how to proceed. Upvotes: 1 <issue_comment>username_2: Thank you very much for your help, all of you. I finally found on the Internet the key words: "Dialog act classification". I don't know yet how to implement it, but it's a good start! Upvotes: 1 [selected_answer]
2020/09/25
1,316
4,631
<issue_start>username_0: Consider [this slide](https://youtu.be/FgzM3zpZ55o?list=PLoROMvodv4rOSOPzutgyCTapiGlY2Nd8u&t=2705) from a Stanford lecture on reinforcement learning. It states that a model is > > the agent's representation of how the world changes in response to the agent's action. > > > I've been experimenting with Q-learning for simple problems such as OpenAI's [FrozenLake](https://gym.openai.com/envs/FrozenLake-v0/) and [Mountain Car](https://gym.openai.com/envs/MountainCar-v0/), which both are amenable to the Q-learning framework (the latter upon discretization). I consider the topologies of the lake and the mountain to be the "worlds" (aka. environments) in the two cases, respectively. Q-learning is said to be ["model-free"](https://spinningup.openai.com/en/latest/_images/rl_algorithms_9_15.svg). Given the two examples above, is it because neither the lake's topology nor that of the mountain are changed by the actions taken?<issue_comment>username_1: A reinforcement learning algorithm is considered model-based if it uses estimates of the environment's dynamics to help learn. For instance, in the Tabular Dyna-Q algorithm, every time you visit a state-action tuple you store in a look-up table the reward received and the next state transitioned to, and after every execution of an action you loop $n$ times to further back up your $Q$ table using these stored model values from the look-up table. I will attach a copy of the pseudo-code for the algorithm at the bottom of this post. Algorithms like vanilla $Q$-learning are model-free because they don't require a model of the environment to learn. [![enter image description here](https://i.stack.imgur.com/VF35X.png)](https://i.stack.imgur.com/VF35X.png) Upvotes: 1 <issue_comment>username_2: > > Q-learning is said to be "model-free". Given the two examples above, is it because neither the lake's topology nor that of the mountain are changed by the actions taken? > > > No. That's not why Q-learning is model-free.
Q-learning assumes that the underlying environment (FrozenLake or MountainCar, for example) can be modelled as a **Markov decision process (MDP)**, which is a mathematical model that describes problems where decisions/actions can be taken and the outcomes of those decisions are at least partially stochastic (or random). More precisely, an MDP is composed of * A set of actions $A$ (that the RL agent can take); for example, up and down, in some grid world * A set of states $S$ (where the RL agent can be); * A **transition function** $p(s\_{t+1} = s' \mid s\_{t} = s , a\_t = a)$ (aka the **model**), which represents the probability of going to state $s'$ at time step $t+1$, given that at time step $t$ the RL agent is in the state $s$ and takes action $a$. * A **reward function** $r(s, a, s')$ (sometimes also denoted as $r(s)$ or $r(s, s')$, although [these can have different semantics](https://ai.stackexchange.com/q/10442/2444)); the reward function gives the reward (or reinforcement) to the RL agent when it takes an action in a certain state and moves to another state; the reward function can also be included in the transition function, i.e., often you will also see $p(s\_{t+1} = s', r\_{t+1} = r \mid s\_{t} = s , a\_t = a)$, and this is the **model**: this is what we mean by **model** in reinforcement learning, it's this $p$ (which is a probability distribution)! A model-free algorithm is any algorithm that does not use or estimate this $p$. Q-learning, [if you look at its pseudocode](http://www.incompleteideas.net/book/RLbook2020.pdf#page=153), does not make use of this model. Q-learning estimates the value function $q(s, a)$ by interacting with the environment (taking actions and receiving rewards), but, meanwhile, it does not know or keep track of the dynamics (i.e. $p$) of the environment, and that's why it's model-free. And, no, the value function is not what we mean by "model" in reinforcement learning. The value function is, as the name suggests, a function. 
> > How does one know that a problem is "model-free" in reinforcement learning? > > > A problem is not model-free or model-based. An **algorithm** is model-free or model-based. Again, a model-free algorithm does not use or estimate $p$, a model-based one uses (and/or estimates) it. > > Given the two examples above, is it because neither the lake's topology nor that of the mountain are changed by the actions taken? > > > No. As stated in the [other answer](https://ai.stackexchange.com/a/23749/2444), you could apply the model-based algorithm Dyna-Q to these environments. Upvotes: 4 [selected_answer]
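To underline the accepted answer's point, here is a minimal tabular Q-learning update. Notice that nothing in it stores or estimates the transition probabilities $p(s' \mid s, a)$ — the agent only sees sampled transitions $(s, a, r, s')$ — which is exactly why the algorithm is model-free. The tiny two-state environment and action names are illustrative placeholders.

```python
# One step of tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
# No transition model p(s' | s, a) is ever represented, only sampled transitions.
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Illustrative two-state table with two actions per state.
Q = {0: {"left": 0.0, "right": 0.0}, 1: {"left": 0.0, "right": 0.0}}
q_learning_update(Q, s=0, a="right", r=1.0, s_next=1)
print(Q[0]["right"])  # 0.1
```

A model-based method such as Dyna-Q would additionally record `(s, a) -> (r, s_next)` in a look-up table and replay those stored transitions to do extra planning updates; the update rule itself could stay the same.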
2020/09/26
1,732
6,257
<issue_start>username_0: Consider a multi-armed bandit(MAB). There are $k$ arms, with reward distributions $R\_i$ where $1 \leq i \leq k$. Let $\mu\_i$ denote the mean of the $i^{th}$ distribution. If we run the multi-armed bandit *experiment* for $T$ rounds, the "pseudo regret" is defined as $$\text{Regret}\_T = \sum\_{t=1}^T \mu^\* - \mu\_{it},$$ where $\mu^\*$ denotes the highest mean among all the $k$ distributions. **Why is regret defined like this?** From what I understand, at time-step $t$, the actual reward received is $r\_t \sim R\_{it} $ and not $\mu\_{it}$ - so shouldn't that be a part of the expression for regret instead?<issue_comment>username_1: In short, you don't regret your bad luck that you could do nothing about, you regret your bad choices that you could have done something about if only you knew. The point of regret as a metric therefore is to compare your choices with the ideal choices. This makes sense in MABs, because although the primary goal is to gain the most reward, the *learning* part of the goal is to calculate from experience what are the best choices - usually whilst sacrificing as little as possible in the process. The formula captures that concept, so does not concern itself with individual rewards in the past that could have been due to good or bad luck. Hence it uses expected (or mean) rewards. Upvotes: 3 [selected_answer]<issue_comment>username_2: What you define as regret is the case of **[Stochastic MAB's](http://www.shivani-agarwal.net/Teaching/E0370/Aug-2013/Lectures/22.pdf)** i.e. MAB's with fixed distributions. First of all, the idea of regret in an online setting is the loss incurred compared to the best agent (**NOTE**: I have used the term best agent as it can have differing strategies, resulting in different best agents; in general, we deal with a static agent, i.e. one whose policy/strategy is fixed over the entire horizon).
When we are talking about MAB's we always talk about what happens in 'expectation' rather than what 'actually' happens. This is because we are dealing with incomplete information, i.e. at each time step we don't actually know what losses we have incurred, and thus the algorithms designed to handle such problems are probabilistic in nature. Compared to this, there are things like Online Convex Optimization where complete information about the loss function is available (i.e. we are given how the loss was calculated) and we actually use the following regret formulation. $$\sum\_{t=1}^T(f\_t(w\_t) - f\_t(u))$$ where $u$ is the minimizer of $\sum\_{t=1}^Tf\_t(w)$ and $f\_t$ are a sequence of loss functions which are fully revealed to the learner. Now, compared to this, in MAB you don't get the loss function revealed to you. You only get to know the reward of the arm you pulled (you don't get to know what was the best arm). Hence, you deal in probabilities, i.e. you want to maintain a probability distribution over the arms rather than pulling a fixed arm once (**NOTE**: The losses may be stochastic, adversarial, etc.). This will ensure the arm which is producing the maximum reward gets the maximum probability (if the algorithm works, or in technical terms is 'consistent'). Here comes the principle of importance sampling: to have a good estimate of the loss incurred in expectation without knowing the actual loss. In general $f\_t$ is assumed to be a linear function (as it can be shown linear functions always have the worst case regret), and hence parametrized by $z\_t$ (a vector). Now consider defining: $$\tilde{z} = [0,0,.....\frac{z(I\_t)}{p(I\_t)},0,0,...0]$$ where $I\_t$ is the arm pulled at time $t$ and $p$ is the probability distribution or strategy to pull your arms. You can check that $\mathbb E[\tilde{z}] = z$, i.e. the actual loss vector parametrization in the first place! This $z$ in MABs is nothing but the vector of rewards obtained by pulling an arm, i.e.
a $k$-D vector of rewards, hence you want to pull an arm with maximum reward. Thus we see via importance sampling we were able to recover $z$ in 'expectation'. Thus now regret can be defined as (due to the involvement of probabilistic strategies): $$R\_T=\mathbb E\left[\sum\_{t=1}^T \langle w\_t, z\_t\rangle - \min\_u \sum\_{t=1}^T \langle u, z\_t\rangle\right]$$ where $w\_t$ is nothing but $[0,0,0,...1,0,...0]$, i.e. the arm you played after sampling from the probability distribution you use as strategy. Actually one uses importance sampling and a pretty involved derivation to get a bound on the aforementioned expectation in the famous EXP3 algorithm. Thus, the bottom line is, due to incomplete information we use a probability distribution to pull the arms, and then, using an update rule which uses $\tilde{z}$, we can derive bounds for the aforementioned expression. Now that we have understood the motivation, in **Stochastic MAB** our goal is to maximize rewards (we have $K$ arms; also, I have used standard notations, so it might differ from your notation), i.e. $$\mathbb E[\sum\_{t=1}^T X\_{I\_t}]$$ where $I\_t$ is the arm played at time $t$, which can be written as $$E[\sum\_{i=1}^K \mu\_iN\_i(t)]$$ (**NOTE**: The earlier expectation was with respect to both your probabilistic strategy of arm plays as well as the probabilistic rewards, as we are dealing with Stochastic MAB's; thus if we eliminate the expectation w.r.t. $X\_{I\_t}$ we get $\mu\_{I\_t}$, which is written as $\mu\_i$ multiplied with the number of times it is played, $N\_i$, which is a random variable, i.e. has a probability associated with it). Thus this can be further simplified to: $$\sum\_{i=1}^K \mu\_iE[N\_i(t)]$$. Now if the highest mean is $\mu^\*$ it is clear from the above expression that the expression will be maximized if $E[N\_{i^\*}(t)] = T$ where $i^\*$ is the arm corresponding to $\mu^\*$, and thus finally we get the regret as: $$R\_T = \sum\_{i=1}^K (\mu^\* - \mu\_i)E[N\_i(t)]$$.
The bottom line is that, due to incomplete information, we use probabilistic strategies, resulting in an expectation in the regret, while in **Stochastic MAB's** the rewards are also probabilistic; in the regret formulation, the expectation w.r.t. the rewards can be evaluated to $\mu\_i$ (if the distribution is stationary). A useful reference may be [here](https://www.youtube.com/watch?v=Wh6lHcaErsk&t=550s) (the first part of the video). Upvotes: 1
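The pseudo-regret from the question is easy to compute once you fix the arm means and the sequence of arms pulled. The means and pull sequence below are made up for illustration; the point is that only the *means* of the chosen arms enter the formula, never the sampled rewards, which is exactly the "don't regret your bad luck" idea from the accepted answer.

```python
# Pseudo-regret: sum over rounds of (mu* - mu of the arm actually pulled).
def pseudo_regret(mu, pulls):
    mu_star = max(mu)
    return sum(mu_star - mu[i] for i in pulls)

mu = [0.2, 0.5, 0.8]            # illustrative arm means; arm 2 is optimal
pulls = [0, 2, 2, 1, 2]         # arms chosen over T = 5 rounds
print(pseudo_regret(mu, pulls)) # (0.8-0.2) + (0.8-0.5) = 0.9, up to float rounding
```

Pulling only the optimal arm gives zero pseudo-regret regardless of the random rewards actually drawn, which would not hold if sampled rewards replaced the means.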
2020/09/26
417
1,812
<issue_start>username_0: A model can be classified as parametric or non-parametric. How are models classified as parametric and non-parametric models? What is the difference between the two approaches?<issue_comment>username_1: ### Parametric Methods A parametric approach (Regression, Linear Support Vector Machines) has a fixed number of parameters and makes strong assumptions about the data, because it presumes a known form for the data distribution. ### Non-Parametric Methods A non-parametric approach (k-Nearest Neighbours, Decision Trees) has a flexible number of parameters and makes no presumptions about the data distribution. The model tries to "explore" the distribution, and thus has a flexible number of parameters. ### Comparison Comparatively speaking, parametric approaches are computationally faster and have more statistical power than non-parametric methods. Upvotes: 4 [selected_answer]<issue_comment>username_2: I provided some [details](https://programming-review.com/machine-learning/parametric-vs-nonparametric/) but the most important excerpt is from <NAME> and <NAME>'s [AIMA](https://aima.cs.berkeley.edu/) book: *A learning model that summarizes data with a set of parameters of fixed size (independent of the number of training examples) is called a parametric model. No matter how much data you throw at a parametric model, it won’t change its mind about how many parameters it needs.* For nonparametric models ask yourself a question: What is the number of parameters of the decision tree? As the decision tree is an example of a nonparametric model, its number of parameters depends on the quantity of data: more data generally means more parameters. Upvotes: 0
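The AIMA quote above can be illustrated with two toy models (the implementations and data are my own, purely illustrative): least-squares linear regression always has exactly two parameters, while 1-nearest-neighbour effectively stores the whole training set as its "parameters".

```python
# Parametric: a least-squares line y = a*x + b -- always exactly 2 parameters.
def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return (a, my - a * mx)

# Non-parametric: 1-NN keeps every training point around.
class OneNN:
    def __init__(self, xs, ys):
        self.data = list(zip(xs, ys))
    def predict(self, x):
        return min(self.data, key=lambda p: abs(p[0] - x))[1]

xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]
print(len(fit_linear(xs, ys)))   # 2, regardless of dataset size
print(len(OneNN(xs, ys).data))   # 4, grows with the dataset
```

Doubling the dataset leaves the linear model with the same two numbers, but doubles what the 1-NN model has to store — the "more data means more parameters" behaviour the answers describe.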
2020/09/26
495
2,054
<issue_start>username_0: *Boosting* refers to a family of algorithms which converts *weak learners* to *strong learners*. How does it happen?<issue_comment>username_1: You take a bunch of weak learners, each of them trained on a subset of the data. You then just get all of them to make a prediction, and you learn how much you can trust each one, resulting in a weighted vote or other type of combination of the individual predictions. Upvotes: 1 <issue_comment>username_2: As @desertnaut mentioned in the comment > > No weak learner becomes strong; it is the ensemble of the weak learners that turns out to be strong > > > Boosting is an ensemble method that integrates multiple models (called weak learners) to produce a supermodel (strong learner). [![Boosting technique is illustrated](https://i.stack.imgur.com/tdxd7.png)](https://i.stack.imgur.com/tdxd7.png) Basically, boosting trains weak learners sequentially, each trying to correct its predecessor. For boosting, we need to specify a weak model (e.g. regression, shallow decision trees, etc.), and then we try to improve each weak learner to learn something from the data. *AdaBoost* is a boosting algorithm where a decision tree with a single split is used as a weak learner. We also have *gradient boosting* and *XGBoost*. Upvotes: 4 [selected_answer]<issue_comment>username_3: In Boosting, we improve the overall metrics of the model by sequentially building weak models and then building upon the weaknesses of previous models. We start out by applying basic non-specific algorithms to the problem, which return some weak prediction functions by taking arbitrary solutions (like sparse weights or assigning equal weights/attention). We improve upon this in the following predictions by increasing the weights of examples with a higher error rate. After going through many iterations, we combine them to create a single strong prediction function which has better metrics.
--- Some popular Boosting Algorithms : * AdaBoost * Gradient Tree Boosting * XGBoost Upvotes: 1
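The sequential "correct your predecessor by re-weighting" idea can be made concrete with a minimal AdaBoost on 1-D threshold stumps (the single-split weak learner mentioned in the accepted answer). The data, thresholds, and round count below are purely illustrative.

```python
import math

# A stump predicts `sign` below the threshold and `-sign` above it.
def stump(theta, sign):
    return lambda x: sign if x < theta else -sign

def adaboost(xs, ys, thetas, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n                    # start with uniform example weights
    ensemble = []
    for _ in range(rounds):
        # Pick the stump with the lowest *weighted* error.
        best = None
        for theta in thetas:
            for sign in (+1, -1):
                h = stump(theta, sign)
                err = sum(wi for wi, x, y in zip(w, xs, ys) if h(x) != y)
                if best is None or err < best[0]:
                    best = (err, h)
        err, h = best
        err = max(err, 1e-10)            # avoid division by zero on perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Re-weight: misclassified points get more attention next round.
        w = [wi * math.exp(-alpha * y * h(x)) for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    def predict(x):
        return 1 if sum(alpha * h(x) for alpha, h in ensemble) >= 0 else -1
    return predict

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1, 1, 1, -1, -1, -1]
f = adaboost(xs, ys, thetas=[0.5, 1.5, 2.5, 3.5, 4.5])
print([f(x) for x in xs])  # [1, 1, 1, -1, -1, -1] -- matches ys on this toy set
```

Each round does exactly what the answers describe: fit a weak learner on the current weights, score it (`alpha`), then boost the weights of the examples it got wrong so the next learner focuses on them.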
2020/09/29
483
2,189
<issue_start>username_0: What does "ground truth" mean in the context of AI, especially in the context of machine learning? I am a little confused because I have read that the ground truth is the same as a label in supervised learning. And I think that's not quite right. I thought that ground truth refers to a model (or maybe the nature) of a problem. I always considered it as something philosophical (and that's also what the term 'ground truth' implies), because in ML we often don't build a describing model of the problem (like in classical mechanics) but rather some sort of a simulator that behaves like it is a describing model. That's what we/I sometimes call a black box. What is the correct understanding?<issue_comment>username_1: In the context of ML, ground truth refers to information provided by direct observation (empirical evidence). If you're training an algorithm to classify your data, then the ground truth will be the actual, true labels, which could for example be manually annotated by a domain expert. Please note that the model's predictions, or the inferred labels, are **not** considered ground truth. Upvotes: 3 <issue_comment>username_2: It really depends on what words you put after "ground truth". Sometimes people will talk about "ground truth labels", for example in the context of classification or regression problems. The "ground truth labels" in such a case would refer to the true labels of instances; the labels that we use as target labels for instances from a training set, or the labels that we expect our models to output (and "punish" them for if they fail to do so) when evaluating/testing a trained model. This basically follows [username_1's answer](https://ai.stackexchange.com/a/23826/1641). "Ground truth" can also refer to something more abstract though, something that we know exists in some form or another, but we may not even know how to express it.
For example, there may be "ground truth laws of physics", the laws of physics that our world "follows". We may build or train a simulator trying to approximate those ground truth functions / laws, but we may not actually know how to explicitly express all of them. Upvotes: 3
2020/09/30
572
2,492
<issue_start>username_0: A stable/smooth learning validation curve often seems to keep improving over more epochs than an unstable learning curve. My intuition is that dropping the learning rate and increasing the patience of a model that produces a stable learning curve could lead to better validation fit. The counter argument is that jumps in the curve could mean that the model has just learned something significant, but they often jump back down or tail off after that. Is one better than the other? Is it possible to take aspects of both to improve learning?<issue_comment>username_1: There is an approach to machine learning, called [Simulated Annealing](https://en.wikipedia.org/wiki/Simulated_annealing), which varies the rate: starting from a large rate, it is slowly reduced over time. The general idea is that the initial larger rate will cover a broader range, while the increasingly lower rate then produces a less 'erratic' climb towards a maximum. If you only use a low rate, you risk getting stuck in a local maximum, while too large a rate will not find the best solution but might end up close to one. Adjusting the rate gives you the best of both. Upvotes: 2 <issue_comment>username_2: If you have an erratic loss landscape, it can lead to an unstable learning curve. Thus, it's always better to choose a simpler loss function, which creates a smoother landscape. Sometimes, even due to uneven dataset distribution, we can observe those jumps/irregularities in the training curve. And yes, those jumps do mean it might've found something significant in the landscape. Those jumps can arise while the model is exploring the multiple local minima of the landscape. During machine learning optimization, we usually use algorithms like **Stochastic Gradient Descent** and **Adam** to find *local minima*, whereas approaches like **Simulated Annealing** seek the *global minimum*. There have been multiple discussions around why local minima are acceptable instead of the global minimum.
Some argue that local minima are just as useful as global minima in the case of machine learning problems. Thus, stable learning is preferable, as it signals that the model is converging to a local minimum. References ========== --- You can read [A Survey of Optimization Methods from a Machine Learning Perspective](https://arxiv.org/pdf/1906.06821.pdf) by <NAME>, <NAME>, <NAME>, and <NAME> et al. and read about all the optimization functions commonly used in Machine Learning. Upvotes: 1
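The "large rate early, small rate late" idea from the Simulated Annealing answer is usually realised in deep learning as a learning-rate *schedule*. Here is a sketch of one common choice, cosine decay; the schedule, bounds, and step counts are my own illustration, not something prescribed by the thread.

```python
import math

# Cosine decay from lr_max down to lr_min over total_steps:
# explore broadly at first, settle gently at the end.
def cosine_decay(step, total_steps, lr_max=0.1, lr_min=0.001):
    t = min(step, total_steps) / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

print(round(cosine_decay(0, 100), 4))    # 0.1   -- start large
print(round(cosine_decay(50, 100), 4))   # 0.0505 -- halfway
print(round(cosine_decay(100, 100), 4))  # 0.001 -- end small
```

Combined with patience-based early stopping on the validation curve, this captures "aspects of both": the early large rate allows the big, jumpy improvements, while the late small rate produces the stable tail.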
2020/09/30
1,207
5,133
<issue_start>username_0: I was watching this series: <https://www.youtube.com/watch?v=aircAruvnKk> The series demonstrates neural networks by building a simple number recognizing network. It got me thinking: Why do neural networks try to recognize multiple labels instead of just one? In the above example, the network tries to recognize numbers from 0 to 9. What is the benefit of trying to recognize so many things simultaneously? Wouldn't it be easier to reason about if there were 10 different neural networks, each specializing in recognizing only one number at a time?<issue_comment>username_1: In practice, you rarely want to classify just a single digit rather than a series. In that case, you would have to pass each image patch to multiple networks, which would be inconvenient. Even if you built different accurate models, the number of training parameters would not be significantly reduced. For example, with a sloppily written 6, in a single model the probabilities of being 6 and 0 would be close but not the same, and by comparing likelihoods you can pick the closest answer. With different models, the probabilities may vary on different scales, and you may not get the good generalization you would have with a single model. In the end, everything boils down to generalization, and in my experience neural networks trained on multiple classes have better generalization properties than single-class ones. Upvotes: 1 <issue_comment>username_2: Imagine a small kid who has no idea about the world around it. You teach the kid how to write the number "6" and that is the only thing that it knows. Now, no matter what other number you show the kid, it's always going to respond with "6", because that is the only thing it knows or has learned. You teach the kid how to write the number "9", so now it knows how to differentiate a "6" from a "9", and no matter what other number you show the kid, there is a 50% chance of it responding with a "6" or a "9", because it knows only that much.
The purpose of a neural network is to understand the underlying distribution in the data that can help it in classifying different numbers. It's important to have a classifier that understands the general characteristics of numbers and helps us with our task. If you have 10 neural networks trained on 10 different digits, and you show each of these networks the number "10", each network will output the number on which it was trained, because that is all it knows (similar to the naive kid above). I hope this answers your question! Upvotes: 0 <issue_comment>username_3: Your question seems to be talking about two slightly different topics: * Pros and cons of 'one vs rest' approach in multi-class classification * Use of Neural Networks in single-output vs multi-class classification problems **One vs Rest in Multi-Class Classification** Recognising digits is an example of multi-class classification. The approach you outline is the kind of approach summarised in the "One vs Rest" section of the [Wikipedia page on multi-class classification](https://en.wikipedia.org/wiki/Multiclass_classification). The page notes the following issues with this approach: > > Firstly, the scale of the confidence values may differ between the binary classifiers. Second, even if the class distribution is balanced in the training set, the binary classification learners see unbalanced distributions because typically the set of negatives they see is much larger than the set of positives. > > > You might also like to look into another approach called One vs One (['One vs Rest' vs 'One vs One'](https://machinelearningmastery.com/one-vs-rest-and-one-vs-one-for-multi-class-classification/)) which sets up the classification problem as a set of binary alternatives. In the digit recognition case you'd end up with a classifier for "1 or 2?", "1 or 3?", "1 or 4?" etc. This might help with the "4 vs 9" problem, but it does mean an enormous number of classifiers, which might be better represented in some kind of network.
Perhaps even a network inspired by brain neurons. **Use of Neural Networks in single output vs multi-class classification** There is nothing magical about a neural network that means it has to be used for multi-class classification. Nor is there anything magical about it that makes it the only option for multi-class classification. For example: * [Using a Neural Network for sentiment analysis](https://builtin.com/data-science/how-build-neural-network-keras) outputs one single answer about how positive/negative a piece of text is. * [Digit recognition using SVM](https://github.com/pramodini18/Digit-recognition-using-SVM) uses something that isn't a neural network for multi-class classification **Conclusions** A 10-class neural network is used to identify digits because this has turned out to be an efficient way of doing so when compared with one vs rest and one vs all approaches. A bit off-topic, perhaps, but if you think about this in the context of [T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html), there does seem to be a trend of moving towards larger more multi-purpose models rather than lots of small specialised models. Upvotes: 1
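The "scale of the confidence values may differ between the binary classifiers" issue quoted above is easy to see numerically. A single 10-way softmax head forces all class scores onto one shared, comparable scale, whereas ten independent one-vs-rest sigmoids each live on their own scale. The raw scores below are made up purely for illustration.

```python
import math

# One shared 10-way softmax head vs. independent per-class sigmoids.
def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(s):
    return 1 / (1 + math.exp(-s))

scores = [0.1, 2.0, 0.3, 0.2, 1.8, 0.1, 0.1, 0.1, 0.1, 1.9]  # illustrative logits
p = softmax(scores)
print(p.index(max(p)))  # 1 -- probabilities compete and sum to 1
print([round(sigmoid(s), 2) for s in scores[:3]])  # independent, non-competing scores
```

With softmax, raising the evidence for one digit necessarily lowers the probability of the others, which is the coupling that ten separate single-digit networks would lack.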
2020/10/02
544
2,215
<issue_start>username_0: I'm trying to create an autonomous car using keyboard data, so this is a multi-class classification problem. I have the keys W, A, S and D. So I have four categories. My model should decide what key should be pressed based on the screenshot (or some other data, I have some ideas). I have some API that I can use to capture keyboard data and the screen (while gathering data) and also to simulate keyboard events (in autonomous mode, when the car is driven by the neural network). Should I create another category called, for example, "NOKEY"? I will use a sigmoid function on each output neuron (instead of using softmax on all the neurons) to have probabilities from 0 to 1 for each category. But I could have very low probabilities for each neuron. And it can mean either that no key should be pressed or that the network doesn't know what to do. So maybe I should just create an additional "artificial" category? What is the standard way to deal with such situations?<issue_comment>username_1: I know this is not a straight answer to your question, but I couldn't comment on your post so decided to post it (so maybe I will delete it after you receive a better answer). I think [this](https://youtu.be/ks4MPfMq8aQ) playlist by `sentdex` can be handy, as he goes through a lot of details to teach a neural network model that can drive cars in GTA-V by simply looking at each frame of the game. You can find the code of each step in [this](https://github.com/sentdex/pygta5) link. Upvotes: 0 <issue_comment>username_2: In short: yes, you must allow a "do nothing" decision as a first-level result. Your system must decide the action to be taken, including the "do nothing" action. This is different from low network outputs, which can be translated as "don't know what to do". In other words, the network can result in: * "I don't know what to do now" when all results in the output have low probabilities. (Obviously, this is a bad network result, to be fixed as much as possible).
* "I know I must do nothing", when "do nothing" action has high probability, greater than the others. * "I know I must do W", when "W" action has high probability, greater than the others. * ... Kind regards. Upvotes: 2 [selected_answer]
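The accepted answer's distinction between "do nothing" and "don't know" can be sketched as a fifth softmax class plus a confidence threshold; the class order and the 0.5 threshold below are illustrative assumptions, not from the thread:

```python
import numpy as np

ACTIONS = ["W", "A", "S", "D", "NOKEY"]  # fifth class: deliberately press nothing

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def decide(logits, min_confidence=0.5):
    """Map network logits to an action, or report uncertainty."""
    probs = softmax(logits)
    best = int(np.argmax(probs))
    if probs[best] < min_confidence:
        return "UNCERTAIN"  # low confidence is not the same as "do nothing"
    return ACTIONS[best]

print(decide(np.array([0.1, 0.2, 0.0, 0.1, 3.0])))  # a confident "NOKEY"
print(decide(np.zeros(5)))                          # every class at 0.2: uncertain
```

This keeps the two situations separate: a high-probability NOKEY class means "press nothing", while uniformly low probabilities mean the network is unsure.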
2020/10/03
1,381
5,001
<issue_start>username_0: I'm trying to implement the transformer model using [this tutorial](https://www.tensorflow.org/tutorials/text/transformer#create_the_transformer). In the decoder block of the Transformer model, a mask is passed to "**pad and mask future tokens in the input received by the decoder**". This mask is added to the attention weights.

```
import tensorflow as tf

def create_look_ahead_mask(size):
    mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
    return mask
```

Now my question is, how is this step (adding the mask to the attention weights) equivalent to revealing the words to the model one by one? I simply can't grasp the intuition of its role. Most tutorials won't even mention this step, as if it were very obvious. Please help me understand. Thanks.<issue_comment>username_1: The Transformer model presented in this tutorial is an auto-regressive Transformer, which means that the prediction of the next token depends only on its previous tokens. So, in order to predict the next token, you have to make sure that only the previous tokens are attended to. (If not, this would be cheating, because the model would already know what comes next.) So the attention mask would be like this: [0, 1, 1, 1, 1] [0, 0, 1, 1, 1] [0, 0, 0, 1, 1] [0, 0, 0, 0, 1] [0, 0, 0, 0, 0] For example, if you are translating English to Spanish: **Input: How are you ?** **Target: < start > Como estas ? < end >** Then the decoder will predict something like this: < start > (it will be given to the decoder as the initial token), < start > Como, < start > Como estas, < start > Como estas ?, < start > Como estas ? < end >. Now compare these step-by-step prediction sequences to the attention mask given above; it should make sense now. Upvotes: 4 [selected_answer]<issue_comment>username_2: We give the target input to the transformer decoder while training the model, so it is easy for the model to "peek ahead" and learn what the next word would be. 
To ensure that this doesn't happen, we apply an additive mask after the dot product between Query and Key. In the original paper "Attention is all you need", the triangular mask matrix had 0's in the lower triangle and a very large negative number such as -1e9 (you can see negative infinity used in recent examples) in the upper triangle. So when the mask is added to the attention scores, the scores in the upper triangle become very low. When this matrix is passed through the softmax function, these very low values become close to 0, which essentially means not to attend to the words after timestep t. To put it in matrix format,

```
[8.1, 0.04, 5.2, 4.2]
[0.5, 9.2, 2.33, 0.7]
[0.2, 0.4, 6.11, 1.0]
[3.1, 2.1, 2.19, 8.1]
```

Let the above matrix `A` be the result of the dot product between query and key. `A[0][0]` contains the attention score of the first word of the query to the first word of the key, `A[0][1]` contains the attention score of the first word of the query to the second word of the key, and so on. After adding the mask, the scores in the upper triangle are pushed towards negative infinity, so after the softmax their attention weights are effectively 0, giving a pattern like this (shown schematically; the softmax would also normalise each row so that it sums to 1):

```
[8.1, 0.0, 0.0, 0.0]
[0.5, 9.2, 0.0, 0.0]
[0.2, 0.4, 6.11, 0.0]
[3.1, 2.1, 2.19, 8.1]
```

This forces the transformer to attend only to the words before the current one. You can check out the Transformer lecture available in CS224n for full details. Upvotes: 3 <issue_comment>username_3: The mask is needed to prevent the decoder from "peeking ahead" at ground truth during training, when using its Attention mechanism. **Encoder:** * **Both runtime and training:** the encoder will always run in a single iteration, because it processes all embeddings separately, but in parallel. This helps us save time. --- **Decoder:** * **runtime:** Here the decoder will run in several non-parallel iterations, generating one "output" embedding at each iteration. Its output can then be used as input at the next iteration. 
* **training:** Here the decoder can do all of it in a single iteration, because it simply receives "ground truth" from us. Because we know these "truth" embeddings beforehand, they can be stored into a matrix as rows, so that they can be then submitted to decoder to be processed separately, but in parallel. As you can see during training, actual predictions by the decoder are not used to build up the target sequence (like LSTM would). Instead, what essentially is used here is a standard procedure called "teacher forcing". As others said, the mask is needed to prevent the decoder from "peeking ahead" at ground truth during training, when using its Attention mechanism. As a reminder, in transformer, embeddings are **never** concatenated during input. Instead, each word flows through encoder and decoder separately, but simultaneously. Also, notice that the mask contains **negative infinities**, **not zeros**. This is due to how the Softmax works in Attention. We always first run the encoder, which always takes 1 iteration. The encoder then sits patiently on the side, as the decoder uses its values as needed. Upvotes: 2
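The masking mechanics discussed in these answers can be checked numerically; here is a small numpy sketch (the random score matrix stands in for $QK^T$, and the mask mirrors the tutorial's `create_look_ahead_mask`):

```python
import numpy as np

def look_ahead_mask(size):
    # 1s strictly above the diagonal mark positions that must not be attended.
    return np.triu(np.ones((size, size)), k=1)

def softmax_rows(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

size = 4
scores = np.random.default_rng(0).normal(size=(size, size))  # stand-in for Q.K^T
masked = scores + look_ahead_mask(size) * -1e9               # push future scores to -inf
weights = softmax_rows(masked)

# Each row attends only to itself and earlier positions.
print(np.round(weights, 3))
```

Every row still sums to 1, but all entries above the diagonal are (numerically) zero, so position $i$ can only attend to positions $\leq i$.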
2020/10/07
559
1,907
<issue_start>username_0: This is the Short Corridor problem taken from the Sutton & Barto book. Here it's written: > > The problem is difficult because all the states appear identical under the function approximation > > > But this doesn't make much sense, as we can always choose the states as 0, 1, 2 and the corresponding feature vectors as x(S = 0, right) = [1 0 0 0 0 0], x(S = 0, left) = [0 1 0 0 0 0], x(S = 1, right) = [0 0 1 0 0 0], x(S = 1, left) = [0 0 0 1 0 0], x(S = 2, right) = [0 0 0 0 1 0], x(S = 2, left) = [0 0 0 0 0 1]. So why is it written that all the states appear identical under the function approximation? [![enter image description here](https://i.stack.imgur.com/lU6Nm.png)](https://i.stack.imgur.com/lU6Nm.png)<issue_comment>username_1: You can choose those states, but is the agent aware of the state it is in? From the text, it seems that the agent cannot distinguish between the three states. Its observation function is completely uninformative. This is why a *stochastic* policy is what is needed. This is common for POMDPs, whereas for regular MDPs we can always find a *deterministic* policy that is guaranteed to be optimal. Upvotes: 3 [selected_answer]<issue_comment>username_2: In toy problems like the Short Corridor task, you can choose the state representation to explore a key property, such as the ability of a particular method to solve it. Often this is done to extremes and heavily simplified. That is what is going on here. The state space that the agent is allowed to use is made highly degenerate with respect to the problem. This stands in for perhaps more complex partially observable systems, but in a way that is really clear to the reader. Also, it is still possible to derive analytically what the best policy should be, so methods can be examined as to how well they deal with the core issue (here, that state data is ambiguous). Upvotes: 1
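The state aliasing can be made concrete with a quick simulation (a sketch of the corridor dynamics from Example 13.1: the action effects are reversed in the middle state, and the goal lies one step to the right of the third state):

```python
import random

def run_episode(p_right, max_steps=1000, seed=0):
    """Steps to reach the goal (state 3) under pi(right) = p_right; None if stuck."""
    rng = random.Random(seed)
    pos, steps = 0, 0
    while steps < max_steps:
        right = rng.random() < p_right
        if pos == 1:  # actions are reversed in the middle state
            right = not right
        pos = min(pos + 1, 3) if right else max(pos - 1, 0)
        steps += 1
        if pos == 3:
            return steps
    return None

print(run_episode(1.0))   # deterministic "always right" bounces between 0 and 1 forever
print(run_episode(0.59))  # a stochastic policy escapes the aliased states
```

Because the agent's features cannot distinguish the three states, any deterministic policy must pick the same action everywhere and gets trapped, which is exactly why a stochastic policy is needed here.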
2020/10/07
1,830
6,912
<issue_start>username_0: In RL, both the KL divergence (DKL) and Total variational divergence (DTV) are used to measure the distance between two policies. I'm most familiar with using DKL as an early stopping metric during policy updates to ensure the new policy doesn't deviate much from the old policy. I've seen DTV mostly being used in papers giving approaches to safe RL when placing safety constraints on action distributions. Such as in [Constrained Policy Optimization](https://arxiv.org/pdf/1705.10528) and [Lyapunov Approach to safe RL](https://arxiv.org/pdf/1805.07708). I've also seen that they are related by this formula: $$ D\_{TV} = \sqrt{0.5 D\_{KL}} $$ When you compute the $D\_{KL}$ between two polices, what does that tell you about them, and how is it different from what a $D\_{TV}$ between the same two policies tells you? Based on that, are there any specific instances to prefer one over the other?<issue_comment>username_1: I did not read those two specified linked/cited papers and I am not currently familiar with the [total variation distance](https://en.wikipedia.org/wiki/Total_variation_distance_of_probability_measures), but I think I can answer some of your questions, given that I am reasonably familiar with the KL divergence. > > When you compute the $D\_{KL}$ between two polices, what does that tell you about them > > > The KL divergence is a measure of "distance" (or divergence, as the name suggests) between two probability distributions (i.e. probability measures) or probability densities. In reinforcement learning, [(stochastic) policies](https://ai.stackexchange.com/q/12274/2444) are probability distributions. For example, in the case your Markov decision process (MDP) has a discrete set of actions, then your policy can be denoted as $$\pi(a \mid s),$$which is the conditional probability distribution over all possible actions, given a specific state $s$. 
Hence, the KL divergence is a natural measure of how two policies are similar or different. There are 4 properties of the KL divergence that you always need to keep in mind 1. It is asymmetric, i.e., in general, $D\_{KL}(q, p) \neq D\_{KL}(p, q)$ (where $p$ and $q$ are p.d.s); consequently, the KL divergence **cannot** be a [metric](https://en.wikipedia.org/wiki/Metric_(mathematics)) (because metrics are symmetric!) 2. It is always non-negative 3. It is zero when $p = q$. 4. It is unbounded, i.e. it can be arbitrarily large; so, in other words, two probability distributions can be infinitely different, which may not be very intuitive: in fact, in the past, I used the KL divergence and, because of this property, it wasn't always clear how I should interpret the KL divergence (but this may also be due to my not extremely solid understanding of this measure). > > and how is it different from what a $D\_{TV}$ between the same two policies tells you? > > > $D\_{TV}$ is also a measure of the distance between two probability distributions, but it is **bounded**, specifically, in the range $[0, 1]$ [[1](https://arxiv.org/pdf/math/0209021.pdf)]. This property may be useful in some circumstances (which ones?). In any case, the fact that it lies in the range $[0, 1]$ potentially makes its interpretation more *intuitive*. More precisely, if you know the maximum and minimum values that a measure can give you, you can have a better idea of the relative difference between probability distributions. For instance, imagine that you have p.d.s $q$, $p$ and $p'$. If you compute $D\_{TV}(q, p)$ and $D\_{TV}(q, p')$, you can have a sense (in terms of percentage) of how much $p'$ and $p$ differ with respect to $q$. 
The choice between $D\_{TV}$ and $D\_{KL}$ is probably motivated by their specific properties (and it will probably be decided on a case-by-case basis, and I expect the authors of the research papers to motivate the usage of a specific measure/[metric](https://en.wikipedia.org/wiki/Metric_(mathematics))). However, keep in mind that there is not always a closed-form expression, not even for the KL divergence, so you may need to approximate it (e.g. by sampling: note that the KL divergence is defined as an expectation/integral, so you can approximate it with a sampling technique). So, this (computability and/or approximability) may also be a parameter to take into account when choosing one over the other. By the way, I think that your definition of the *total variational divergence* is wrong, although the DTV is related to the DKL, specifically, as follows [[1](https://arxiv.org/pdf/math/0209021.pdf)] \begin{align} D\_{TV} \leq \sqrt{\frac{1}{2} D\_{KL}} \end{align} So the DTV is bounded by the KL divergence. Given that the KL divergence is unbounded (e.g. it can take very big values, such as 600k), this bound can be very loose. Take a look at the paper [On choosing and bounding probability metrics](https://arxiv.org/pdf/math/0209021.pdf) (2002, by <NAME> and <NAME>) or [this book](http://www.yaroslavvb.com/papers/peres-markov.pdf#page=58) for information about $D\_{TV}$ (and other measures/metrics). Upvotes: 2 [selected_answer]<issue_comment>username_2: To add to username_1's answer, I'd say also that much of the time the distance measure isn't simply a design decision; rather, it comes up naturally from the model of the problem. For instance, minimizing the KL divergence between your policy and the softmax of the Q values at a given state is equivalent to policy optimization where the optimality at a given state is Bernoulli with respect to the exponential of the reward (see maximum entropy RL algorithms). 
As another example, the KL divergence in the VAE loss is a result of the model and not just a blind decision. I'm less familiar with total variation distance, but I know there's a nice relationship between the total variation distance of a state probability vector and a Markov chain stationary distribution relative to the timestep and the mixing time of the chain. Finally, another thing to consider is the properties of the gradients of these divergence measures. Note that the gradient of the total variation distance might blow up as the distance tends to $0$. Additionally, one must consider if unbiased estimators of the gradients from samples can be feasible. While this is generally the case with KL divergence, I'm not sure about total variation distance (as in, I literally don't know), and this is generally *not* the case with the Wasserstein metric (see <NAME>. al's paper "The Cramér distance as a solution to biased wasserstein gradients"). However, of course there's other scenarios where the tables are turned -- for instance, the distributional bellman operator is a contraction in the supremal Wasserstein metric but *not* in KL or total variation distance. **TL; DR:** Many times mathematical/statistical constraints suggest particular metrics. Upvotes: 2
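The relationship mentioned in the question and corrected in the accepted answer (Pinsker's inequality, a bound rather than an equality) is easy to check numerically for two discrete policies over the same action set; the two distributions below are illustrative:

```python
import numpy as np

def kl(p, q):
    # KL(p || q) for discrete distributions with full support.
    return float(np.sum(p * np.log(p / q)))

def tv(p, q):
    # Total variation distance: half the L1 distance for discrete distributions.
    return 0.5 * float(np.abs(p - q).sum())

# Two stochastic policies pi(a|s) over 4 actions in some fixed state.
pi_old = np.array([0.4, 0.3, 0.2, 0.1])
pi_new = np.array([0.25, 0.25, 0.25, 0.25])

d_kl = kl(pi_old, pi_new)
d_tv = tv(pi_old, pi_new)

print(d_kl, d_tv)
# Pinsker's inequality: D_TV <= sqrt(0.5 * D_KL)
print(d_tv <= np.sqrt(0.5 * d_kl))
```

Note also that `d_tv` always stays in $[0, 1]$, while `kl(pi_old, pi_new)` can grow without bound as the supports diverge, which matches the boundedness discussion above.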
2020/10/08
431
1,634
<issue_start>username_0: There are lots of research papers available that are worth reading. We can read papers easily, but the associated code (not necessarily the official one developed by the authors of the paper) is often not available. [Papers with Code](https://paperswithcode.com/) (and [the associated Github repo](https://github.com/paperswithcode)) already lists many research papers and often there is a link to the associated Github repo with the code, but sometimes the code is missing. So, are there alternatives to Papers with Code (for such cases)?<issue_comment>username_1: Recently, arxiv.org added a Code tab towards the end of paper descriptions, which contains links to both the official and community code. [![enter image description here](https://i.stack.imgur.com/08v3H.png)](https://i.stack.imgur.com/08v3H.png) I don't know whether this is the case for all papers yet, but I'm sure it'll be extended to all of them in a short while. Upvotes: 3 <issue_comment>username_2: Another good resource is the free CatalyzeX browser extension, which adds in-line links to any relevant code wherever you come across papers on various websites: AI/ML Papers with Code Everywhere - CatalyzeX * [Chrome extension](https://chrome.google.com/webstore/detail/find-code-for-research-pa/aikkeehnlfpamidigaffhfmgbkdeheil) * [Firefox extension](https://addons.mozilla.org/en-US/firefox/addon/code-finder-catalyzex/) * The corresponding website is [catalyzeX.com](https://www.catalyzex.com). Full disclosure: I'm one of the creators. It's actively maintained and all feedback and requests are welcome! Upvotes: 2
2020/10/09
813
3,381
<issue_start>username_0: In many applications and domains, such as computer vision, natural language processing and image segmentation, neural networks (with a certain architecture) are considered to be by far the most powerful machine learning models. Nevertheless, algorithms based on different approaches, such as ensemble models like *random forests* and *gradient boosting*, are not completely abandoned, and are actively developed and maintained. Do I understand correctly that neural networks, despite being very flexible universal approximators, are not the optimal models for certain kinds of tasks, regardless of the choice of architecture? For tasks in computer vision, the core features that make CNNs superior are *translation invariance* and the encoded ability to capture the proximity properties of an image or of sequential data. And the more recent *transformer* models have the ability to choose which of the neighboring data properties is more important for their output. But let's say I have a dataset without any particular structure or patterns: some numerical columns, a lot of categorical columns, and, in the feature space (for a classification task), classes separated by some nonlinear hypersurface. Would ensemble models be the optimal choice in terms of performance and computational time? In this case, I do not see a way to exploit CNNs or attention-based neural networks. The only thing that comes to mind in this case is an ordinary MLP. It seems that, on the one hand, it would take significantly more time to train its weights than to build the trees of an ensemble. On the other hand, both kinds of models work without putting prior knowledge or structural assumptions into the data. So, given enough time, they should give comparable quality. 
Or can there be some reason that a neural network is sometimes bound to give rather poor quality?<issue_comment>username_1: **Speed**: A classic random forest is O(n) to train and O(1) to run, while a feedforward neural network is something like O(n^5) to train and O(n^4) to run, so for many tasks the CART ensemble can train fast and run fast. **Robustness (kinda):** A random forest tends to be natively robust, while GBM and neural networks tend not to be as robust. There are tweaks to loss functions, and to training sampling, that can make the network less not-robust, but that isn't the same as being robust. Dropout doesn't bootstrap the domain or target, only the structure. **Basis**: The CART presumes hard edges. The neural network presumes "continuous". They are decent at handling hard-edged surfaces, as long as you use them well. <https://stats.stackexchange.com/questions/164048/can-a-random-forest-be-used-for-feature-selection-in-multiple-linear-regression/164250#164250> **References**: <https://lunalux.io/computational-complexity-of-neural-networks/> Upvotes: 1 <issue_comment>username_2: This is a great question. Unfortunately, the answer is that this is still not very well understood and is an active area of research. I think doing justice to this problem is beyond the scope of an answer here. Instead, I will refer you to some recent research papers that attempt to answer this question. <https://arxiv.org/abs/2106.03253> <https://arxiv.org/abs/2207.08815> Upvotes: 0
2020/10/11
903
3,659
<issue_start>username_0: I am trying to do the standard MNIST dataset image recognition test with a standard feed-forward NN, but my network failed pretty badly. Now I have debugged it quite a lot and found & fixed some errors, but I had a few more ideas. For one, I am using the sigmoid activation function and MSE as an error function, but the internet suggests that I should rather use softmax for the output layer, and cross entropy loss as an error function. Now I get that softmax is a nice activation function for this task, because you can treat the output as a probability vector. But, while being a nice thing to have, that's more of a convenience thing, isn't it? Easier to visualize? But when I looked at what the derivative of softmax & CEL combined is (my plan was to compute that in one step and then treat the activation function of the last layer as linear, so as not to apply the softmax derivative again), I found: $\frac{\partial E}{\partial i} = t - o$ (with $i$ being the input of the last layer, $t$ the one-hot target vector and $o$ the prediction vector). That is the same as the MSE derivative. So what benefits does softmax + CEL actually have when propagating, if the gradients produced by them are exactly the same?<issue_comment>username_1: If you look at the definition of the cross-entropy (e.g. [here](https://en.wikipedia.org/wiki/Cross_entropy)), you will see that it is defined for probability distributions (in fact, it comes from information theory). You can also show that the maximization of the (binomial/Bernoulli) log-likelihood is equivalent to the minimization of the cross-entropy, i.e. when you minimize the cross-entropy you actually maximize the log-likelihood of the parameters given your labelled data. Hence the use of the softmax is theoretically founded. 
Regarding the supposed derivative of the cross-entropy loss function preceded by the softmax, even if that derivative is correct ([I didn't think about it and I don't want to think about it now](https://www.ics.uci.edu/%7Epjsadows/notes.pdf)), note that then $t - o$ is different depending on whether $o$ is a probability vector or an unnormalized vector (which can take arbitrarily large numbers). If $o$ is a probability vector and $t$ a one-hot encoded vector (i.e. also a probability vector), then all numbers of $t - o$ will be in the range $[-1, 1]$. However, if $o\_i$ can be arbitrarily large, e.g. $o\_i = 10$, then $t\_i - o\_i \in [-10, -9]$. So, the propagated error would be different if $o$ was not a probability vector. Upvotes: -1 <issue_comment>username_2: **Short answer: larger gradients** That is not the derivative of the softmax function. $t - o$ is the combined derivative of the softmax function and cross entropy loss. Cross entropy loss is used to simplify the derivative of the softmax function. In the end, you do end up with a different gradients. It would be like if you ignored the sigmoid derivative when using MSE loss and the outputs are different. Using softmax and cross entropy loss has different uses and benefits compared to using sigmoid and MSE. It will help prevent gradient vanishing because the derivative of the sigmoid function only has a large value in a very small space of it. It is similar to using a different cross entropy loss where the combined derivative of the loss and sigmoid is $t - o$. Information on derivatives of cross entropy [with sigmoid function](http://neuralnetworksanddeeplearning.com/chap3.html) and [with softmax function](https://peterroelants.github.io/posts/cross-entropy-softmax/). I would also suggest some more research on cross entropy loss functions beyond my links. Upvotes: 1
2020/10/11
804
3,364
<issue_start>username_0: For example, what if AlphaZero plays against an opponent who is allowed to move the chess pieces any way she wants, or to make more than one move per turn? Will a neural network adapt to that, as it adapted to an absurd move made by <NAME> in 2015?<issue_comment>username_1: The behaviour when playing against "cheats" depends on how the agent has been trained, and how different the game becomes from the training scenarios. It will also depend on how much of the agent's behaviour is driven by training, and how much by just-in-time planning. In general, unless game playing bots are written specifically to detect or cope with opponents that are given unfair advantages, they will continue to play in the same style as if the cheating had not occurred, assuming that the rules are still being followed strictly. If the cheating player only makes one or two rule-breaking moves, and the resulting game state is still something feasible within the game, then the agent should continue to play well. If the agent significantly outclasses the human opponent, it may still win. A completed, trained agent will not adapt its style to "now my opponent can cheat". An agent still being trained could do so in theory, but it would take many games with cheating allowed for it to learn tactics that cope with an opponent that had an unfair advantage. Agents that plan by looking ahead during play can cope with more unusual/unseen game states - things that may not have been seen in training. However, they still look ahead on the assumption that game play is as designed/trained for; they cannot adapt to new rules unless those rules are added to the planning by the bot designers. For instance, if the allowed cheating was a limited number of extra moves, but only for the human player, the effects of that could be coded into the planning engine, and the bot would "adapt" with help from its designers. > > [AlphaGo] adapted to an absurd move made by <NAME> in 2015? 
> > Assuming you are referring to [game 4](https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol#Game_4), then as far as I know, AlphaGo did not "adapt" to this play. After <NAME> managed to put it in a losing position, it started playing badly, as it could not find a winning strategy from the board positions it was in, and could not recover. I don't think any effort was put into refining AlphaGo during this game or afterwards to patch it for game 5. Upvotes: 2 <issue_comment>username_2: "Will a neural network adapt to that?" No. The big functional difference between the human mind and neural networks: **the human mind learns by itself, a NN does not**. If we call NN the net with its layers, weights, ..., this is a static system, unable to learn anything new. The back-propagation algorithm that made the NN intelligent runs outside the NN itself, in a different stage, with different hardware and software, software that is not a NN but classic programming. Thus, a NN never learns anything while playing, driving, or performing any other action it is designed for. If, in the learning stage, some cheats are allowed, the learning algorithm will learn and adapt to these cheats, and the resulting NN configuration will be able to react to these cheats in the best way. But this is equivalent, in fact, to learning a different game where these cheats are valid movements. Upvotes: 1
2020/10/12
663
2,667
<issue_start>username_0: Consider the following line of code related to CNNs:

```
Conv2D(64, (3,3), strides=(2, 2), padding='same')
```

It is a convolution layer with filter size $3 \times 3$ and stride $2 \times 2$. I am confused about the need for $64$ filters. Are they all doing the same task? Obviously not (one would be enough in that case). Then how does each filter differ? Is it in how it moves over the input matrix? Or in the values contained in the filter itself? Or in both the movement and the contents? I am finding it difficult to visualize.<issue_comment>username_1: > > Then how does each filter differ? Is it in how it moves over the input matrix? Or in the values contained in the filter itself? Or in both the movement and the contents? > > > The filters (aka kernels) are the **learnable parameters** of the CNN, in the same way that the weights of the connections between the neurons (or nodes) are the learnable parameters of a multi-layer perceptron (or feed-forward neural network). So, the value of these filters is not fixed or pre-determined, but will depend on how you train the CNN, i.e. the learning algorithm, the objective function and the data. If you use gradient descent as the learning algorithm, you will be minimizing a loss (aka cost or error) function (e.g. the cross-entropy, in the case of classification problems). To do that, you need to find the gradient of the loss function with respect to the filters. You then apply a step of gradient descent (i.e. you subtract a scaled version of the gradient of the loss function with respect to the parameters from the parameters), so that this loss decreases. To answer your question more directly, the only thing that usually changes is just the value of the filters. The [convolution (or cross-correlation)](https://ai.stackexchange.com/q/21999/2444) operation is the same for all filters. Why do you use more than one filter? 
The usual explanation is that each filter, when convolved with the input, will extract different features from it, and the specific features that they will extract will depend on the specific values of the filters, which, in turn, depend on the data, so we can say that CNNs are data-driven feature extractors. If you are familiar with image processing techniques, then you know that different filters, when convolved with the same image, can have different effects (e.g. blurring or de-noising). Upvotes: 3 [selected_answer]<issue_comment>username_2: All filters move across the same area, but the filter values (also called filter kernels) are different for each filter. This makes it possible to "filter out" different features. Upvotes: 1
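The point in these answers, that only the filter values differ while the sliding pattern is identical, can be illustrated with plain numpy; the two 3x3 kernels below are classic Sobel edge detectors, chosen purely for illustration:

```python
import numpy as np

def correlate2d(image, kernel):
    """Valid cross-correlation: the same sliding pattern for every kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0  # a vertical edge down the middle

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # responds to vertical edges
sobel_y = sobel_x.T                                       # responds to horizontal edges

resp_x = correlate2d(image, sobel_x)
resp_y = correlate2d(image, sobel_y)
print(resp_x)  # non-zero along the edge
print(resp_y)  # all zeros: there is no horizontal edge
```

Both kernels are slid over exactly the same positions by `correlate2d`; only their values differ, and that alone is enough to extract completely different features.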
2020/10/16
758
3,204
<issue_start>username_0: I am not fully understanding how to train a GAN's generator. I have a few questions below, but let me first describe what I am doing. I am using the MNIST dataset. 1. I generate a batch of random images (the fake ones) with the generator. 2. I train the discriminator with the set composed of fake images and real MNIST images. 3. After this training phase, the discriminator has updated its weights in the direction of distinguishing fake (probability 0) from real (probability 1) images. 4. At this point, I have to consider the combined model of generator and discriminator (keeping the discriminator untrainable) and feed in the fake images with a label of 1 (as if they were real). My questions are: Why do I have to label these fake images as real, and which fake images are these? The ones generated in the first round by the generator itself? Or only the ones classified as fake by the discriminator? (Then they could be both real images classified wrongly and fake images classified correctly.) Finally, what does the generator do with these fake images?
2020/10/18
472
2,220
<issue_start>username_0: I'm trying to build a simple autoencoder, but with a variable latent length (the network can produce latent representations of variable length, depending on the complexity of the input), but I've not seen any related work to get ideas from. Have you seen any related work? Do you have any ideas on how to do this? Actually, I want to use this autoencoder for transmitting data over a noisy channel, so having a variable length may help.<issue_comment>username_1: You might want to look at an encoder-decoder sequence to sequence model. This model allows you to input and output data with variable length. Upvotes: -1 <issue_comment>username_2: If you use RNNs, then I think the solution is to use padding (zero padding) with the max sequence length (that is, the max number of words in a text), in order to tell your model to skip the zeros when possible. In that way, your model will try to learn a good representation of your input with a fixed size. If you do not know this dimension, a solution may be to grid search this hyperparameter. If you still want to exploit the dimensionality difference, maybe you can train different models, each with a fixed representation dimension that depends on the dimension of the input. That is, for example, use one for small, one for medium and one for large dimensions, but this would surely require a large and quite balanced initial dataset. Another idea could be to use the autoencoder with a fixed latent dimension. Then, do effective clustering on your samples using their latent representation, considering that similar representations should have similar dimensionality requirements (?). After that, you could train your initial dataset on k models, the same number as the clusters. That is, there should be k different latent spaces. The goal is to match each instance to the correct model. 
At first, you should train them all with each instance, but as the training progresses, you could maybe use binary search for each instance in order for it to find the correct model, assuming that there is a total order on the dimensionality requirements. Of course, this is just an idea; I don't know if it is going to be really helpful at all. Upvotes: 1
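The zero-padding idea described in this answer can be sketched as follows. This is a minimal, framework-free illustration; `pad_sequences` and `max_len` are hypothetical names chosen here, and the mask is what a model would use to "skip the zeros":

```python
import numpy as np

def pad_sequences(seqs, max_len):
    """Zero-pad variable-length sequences to a fixed length,
    returning the padded batch and a mask marking real entries."""
    batch = np.zeros((len(seqs), max_len))
    mask = np.zeros((len(seqs), max_len), dtype=bool)
    for i, s in enumerate(seqs):
        n = min(len(s), max_len)
        batch[i, :n] = s[:n]
        mask[i, :n] = True
    return batch, mask

batch, mask = pad_sequences([[1, 2, 3], [4, 5]], max_len=4)
```

With such a mask, the fixed-size representation is learned from the real entries only, which is the effect the answer describes.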
2020/10/20
525
2,293
<issue_start>username_0: I'm writing a DQN agent for the [Wumpus game](https://en.wikipedia.org/wiki/Hunt_the_Wumpus). Is the reward function to train the Q-networks (target network and policy) the same as the score of the game, i.e. +1000 for picking up gold, -1000 for falling in pits and dying from the wumpus, -1 each move? This is naturally cumulative, in that the score changes after each action taken by the agent. Alternatively, is it just a +1 for win, -1 for a loss and 0 in all other situations?
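For reference, the score-based scheme stated in the question (+1000 for picking up gold, -1000 for dying, -1 per move) can be written directly as a per-step reward function. The event names below are hypothetical placeholders, not part of any actual Wumpus implementation:

```python
def reward(event):
    """Per-step reward mirroring the game score described in the question:
    +1000 for picking up gold, -1000 for dying (pit or wumpus),
    and -1 for every other move."""
    if event == "gold":
        return 1000
    if event in ("pit", "wumpus"):
        return -1000
    return -1
```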
2020/10/20
563
2,444
<issue_start>username_0: Machine Learning books generally explain that the error calculated for a given sample $i$ is: $e\_i = y\_i - \hat{y\_i}$ Where $\hat{y}$ is the target output and $y$ is the actual output given by the network. So, a loss function $L$ is calculated: $L = \frac{1}{2N}\sum^{N}\_{i=1}(e\_i)^2$ The above scenario is explained for a binary classification/regression problem. Now, let's assume an MLP network with $m$ neurons in the output layer for a multiclass classification problem (generally one neuron per class). What changes in the equations above? Since we now have multiple outputs, should both $e\_i$ and $y\_i$ be vectors?
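For what it's worth, one common way to generalize the loss above to $m$ outputs is to treat the per-sample error as a vector and sum the squared components:

```latex
\mathbf{e}_i = \mathbf{y}_i - \hat{\mathbf{y}}_i \in \mathbb{R}^m,
\qquad
L = \frac{1}{2N} \sum_{i=1}^{N} \lVert \mathbf{e}_i \rVert^2
  = \frac{1}{2N} \sum_{i=1}^{N} \sum_{j=1}^{m} \left( y_{ij} - \hat{y}_{ij} \right)^2
```

(For multiclass classification in practice, though, a softmax output with a cross-entropy loss is the more usual choice than squared error.)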
2020/10/21
584
2,586
<issue_start>username_0: Assume that I have a fully connected network that takes in a vector containing `1025` elements. First `1024` elements are related to the input image of size `32 x 32 x 1`, and the last element in the vector (`1025-th element`) is a control bit that I call it *special input*. When this bit is `zero`, the network should predict if there is a `cat` in the image or not, and when this bit is `one`, it should predict if there is a `dog` in the image or not. So how can I tell the network that your `1025-th element` should be special to you and you should pay more attention to it? Note that it's just an example and the real problem is more complex than this. So please don't bypass the goal of this question by using tricks special to this example. Any idea is appreciated.
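The input layout described in the question can be made concrete as follows (a minimal sketch; the variable names are illustrative only, and no particular architecture is implied):

```python
import numpy as np

x = np.arange(1025, dtype=float)   # stand-in for one 1025-element input vector
img = x[:1024].reshape(32, 32)     # first 1024 elements: the 32 x 32 x 1 image
control_bit = x[1024]              # 1025th element: the "special input"
```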
2020/10/21
1,391
6,045
<issue_start>username_0: My question is about **neuroevolution** (genetic algorithm + neural network): I want to create **artificial life** by evolving agents. But instead of relying on a fitness function, I would like to have the agents reproduce with some mutation applied to the genes of their offspring and have some agents die through natural selection. Achieving evolution in this manner is my goal. Is this feasible? And has there been some prior work on this? Also, is it somehow possible to incorporate **NEAT** into this scheme? So far, I've implemented most of the basics in amethyst (a parallel game engine written in Rust), but I'm worried that the learning will happen very slowly. Should I approach this problem differently?<issue_comment>username_1: You do not always need an *explicitly coded* fitness function to perform genetic algorithm searches. The more general need is for a selection process that favours individuals that perform better at the core tasks in an environment (i.e. that are "more fit"). One way of assessing performance is to award a numerical score, but other approaches are possible, including: * [Tournament selection](https://en.wikipedia.org/wiki/Tournament_selection) where two or more individuals compete in a game, and the winner is selected. * Opportunity-based selection, where agents in a shared environment - typically with limited resources and chances to compete - may reproduce as one of the available actions, provided they meet some criteria such as having collected enough of some resource. I was not able to find a canonical name for this form of selection, but it is commonly implemented in artificial life projects. A key distinction between [A-life](https://en.wikipedia.org/wiki/Artificial_life) projects and GA optimisation projects is that in A-life projects there is no goal behaviour or target performance. 
Typically A-life projects are simulations with an open-ended result, and the developer runs a genetic algorithm to "see what happens" as opposed to "make the best game-player". If your project is like this, then you are most likely looking for the second option here. To discover more details about this kind of approach, you could try searching "artificial life genetic algorithms", as there are quite a few projects of this type published online, some of which use NEAT. Technically, you could view either of the methods listed above as ways of *sampling* comparisons between individuals against an unknown fitness function. Whether or not a true fitness function could apply is then partly a matter of philosophy. More important for you as the developer is that you do not have to *write* one. Instead you can approximately measure fitness using various methods of individual selection. > > So far I've implemented most of the basics in amethyst (a parallel game engine written in Rust), but I'm worried that the learning will happen very slowly. Should I approach this problem differently? > > > It is difficult to say whether you should approach the problem differently. However, the biggest bottlenecks against successful GA approaches are: * Time/CPU resources needed to assess agents. * Size of search space for genomes. Both of these can become real blockers for ambitious A-life projects. It is common to heavily simplify agents and environments in attempts to address these issues. Upvotes: 3 [selected_answer]<issue_comment>username_2: How can you assess the quality of any solution without a measure of quality, which, in the context of genetic algorithms, is *known as* the fitness function? The term *fitness function* is due to the well-known phrase "**Survival of the Fittest**", which is often used to describe the Darwinian theory of natural selection (which genetic algorithms are based on). 
However, note that the fitness function can take any form, such as * *How well does this solution perform in a game?* (in this case, solutions could, for example, be policies to play a game), or * *How close is this solution to a minimum/maximum of some function $f$?* (more precisely, if you want to find the maximum of the function $f(x) = x^2$, then individuals are scalars $\hat{x} \in \mathbb{R}$, and the fitness could be determined by $f(\hat{x})$ itself or by how large $f(\hat{x})$ is with respect to other individuals; check how I did it [here](https://github.com/username_2/function_max_with_ga/blob/master/main.py)). The definition of the fitness function depends on what problem you want to solve and which solutions you want to find. So, you need some kind of fitness function in genetic algorithms to perform selection in a reasonable way, so as to maintain the "best solutions" in the population. More precisely, while selecting the new individuals for the new generation (i.e. iteration), if you don't use a fitness (which you can also call *performance*, if you like) function to understand which individuals deserve to live or die, how do you know that the new solutions are better than the previous ones? You cannot know this without a fitness/performance function, so you also cannot logically decide which individuals to kill before the next generation. Mutations alone just change the solutions, i.e. they are used to explore the space of solutions. Genetic algorithms are always composed of * a population of solutions/individuals/chromosomes (i.e. usually at least $2$ solutions) * operations to randomly (or stochastically) change existing solutions to create new ones (typically mutations and crossovers) * a selection process that selects the new solutions/individuals for the next generation (or to be combined and mutated) * a fitness function to help you decide which solutions need to be selected (or even combined and mutated) For more info about genetic algorithms or, more generally, evolutionary algorithms, take a look at chapters 8 and 9 of the book [Computational Intelligence: An Introduction](https://papers.harvie.cz/unsorted/computational-intelligence-an-introduction.pdf#page=160) by <NAME>. Upvotes: 1
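As a toy illustration of the $f(x) = x^2$ example mentioned above, a minimal genetic algorithm with fitness-based selection and Gaussian mutation might look like this (a sketch under assumed hyperparameters, not the code linked in the answer):

```python
import random

def fitness(x):
    return x ** 2          # the function being maximized

def evolve(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    # initial population of scalar "chromosomes"
    pop = [rng.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half of the population
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # variation: mutated copies of the survivors refill the population
        children = [x + rng.gauss(0.0, 0.1) for x in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Since $x^2$ is unbounded, the population drifts to ever larger $|x|$; with a bounded objective or a tournament-style selection, the same loop structure applies.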