2019/01/14
<issue_start>username_0: I was wondering if machine learning algorithms (CNNs?) can be used/trained to pick up on small differences in details between images (such as slight differences in shades of red or other colours, or the presence of small objects between otherwise very similar images), and then classify images based on these differences? If this is a difficult endeavour with our current machine learning algorithms, how can it be solved? Would using more data (more images) help? I would also appreciate it if people could please provide references to research that has focused on this, if possible. I've only just begun learning machine learning, and this is something that I've been wondering from my research.<issue_comment>username_1: [Attentive Recurrent Comparators](https://arxiv.org/pdf/1703.00767.pdf) (2017) by <NAME> et al. is an interesting paper that helps to answer the question you're asking, along with a [blog post](https://medium.com/datadriveninvestor/spot-the-difference-deep-learning-edition-f1a891342e13) that helps to describe it in easier terms. The way it's implemented is actually rather intuitive. If you have ever played a "what is different" game with two images, usually what you'd do is look back and forth between the images to see what the difference is. The network that the researchers created does just that! It looks at one image, remembers important features about that image, then looks at the other image, going back and forth. Upvotes: 4 [selected_answer]<issue_comment>username_2: There are networks built to learn how to differentiate between classes even when they look quite similar. Usually, a [triplet loss](https://towardsdatascience.com/siamese-network-triplet-loss-b4ca82c1aec8) is used in those networks to learn the difference between the target, a positive sample, and a negative one.
For example, such networks are used to perform identity checks with face images: the algorithm learns the differences between different people instead of recognizing individual people. Here are some keywords that are possibly relevant: discriminative function, triplet loss, siamese network, one-shot learning. These papers are interesting: * [FaceNet: A Unified Embedding for Face Recognition and Clustering](http://arxiv.org/abs/1503.03832) * [Dimensionality Reduction by Learning an Invariant Mapping](http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf) Upvotes: 2
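The triplet loss mentioned in this answer can be sketched in a few lines. This is a minimal NumPy illustration, not the FaceNet implementation; the embeddings and the margin value below are made-up toy numbers:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: pull the positive embedding toward the
    anchor and push the negative at least `margin` further away."""
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to negative
    return max(d_pos - d_neg + margin, 0.0)

# Toy embeddings: the positive is close to the anchor, the negative is not.
anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])
negative = np.array([-1.0, 0.0])
print(triplet_loss(anchor, positive, negative))  # 0.0: already well separated
```

In a real siamese network the three embeddings would come from the same trained encoder, and this loss would be minimized over many (anchor, positive, negative) triplets.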
2019/01/14
<issue_start>username_0: How do you distinguish between a complex and a simple model in machine learning? Which parameters control the complexity or simplicity of a model? Is it the number of inputs, or maybe the number of layers? Moreover, when should a simple model be used instead of a complex one, and vice-versa?<issue_comment>username_1: If you want to find a proper architecture for your model, you can use the [NAS](https://en.wikipedia.org/wiki/Neural_architecture_search) (neural architecture search) methods instead of training several naive models and then having to decide which model is more complex or simpler. Some methods used in NAS to find a proper architecture are: 1. NAS with Reinforcement Learning 2. NAS with Evolution 3. NAS with Hill-climbing 4. Multi-objective Neural architecture search Upvotes: 1 <issue_comment>username_2: Consider a continuum of complexity in models. * **Trivial:** $y = x + a$ * **Simple:** $y = x \, \log \, (a x + b) + c$ * **Moderately complex:** A wind turbine under constant wind velocity * **Very complex:** Ray tracing of lit 3-D motion scenes to pixels * **Astronomically complex:** The weather Now consider a continuum regarding the generality or specificity of models. * **Very specific:** The robot for the Mars mission has an exact mechanical topology, materials call-out, and set of mechanical coordinates contained in the CAD files used to machine the robot's parts. * **Somewhat specific:** The formulas guiding the design of an internal combustion engine, which are well known. * **Somewhat general:** The phenomenon is deterministic and the variables and their domains are known. * **Very general:** There's probably some model because it works in nature but we know little more. There are twenty permutations at the above level of granularity. Each one has a purpose in mathematical analysis, applied research, engineering, and monetization. Here are some general correlations between input, output, and layer counts.
* Higher complexity often corresponds to larger layer count. * Higher i/o dimensionality corresponds to greater width of the corresponding i/o layers. * Mapping generality to or from specificity generally requires complexity. Now, to make this answer even less appealing to those who want a formula answer they can memorize, ... * Each artificial network is a model of an arbitrary function before training and a model of a specific function afterward. * Loss functions are models of disparity. * An algorithm is a model of a process created by spreading a recursive definition out in time to map into a model of centralized computation called a CPU. * The recursive definition is a model too. There is almost nothing in science that is not a model except ideas or data that are not yet modeled. Upvotes: 3 [selected_answer]<issue_comment>username_3: In what context are you asking this? It is totally different if you want to perform object detection, regression or, for example, reinforcement learning. For the first case I would say that the main point in choosing a simple vs. a complex model is the size of the training data. If you have 1000 training samples you can't expect a large network to perform better than a simple one. Upvotes: 1 <issue_comment>username_4: In a nutshell, if you already have a number of models, you usually should be able to distinguish (intuitively, if you will) between simpler and more complex ones. E.g. based on the number of inputs and number of layers, as you have already indicated in the question. Then, if a simpler model and a more complex model perform the same task, and the complex model does not perform significantly better than the simpler one, you should use the simpler model. It's your role to decide what difference in performance would be significant, usually based on your use case. It's Occam's razor in practice (<https://en.m.wikipedia.org/wiki/Occam%27s_razor>).
You might learn more practical aspects as part of this free course <https://lagunita.stanford.edu/courses/HumanitiesSciences/StatLearning/Winter2016/about> Upvotes: 1
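The Occam's razor rule described in this answer can be made concrete. The tolerance, model names, and accuracy numbers below are hypothetical, purely for illustration:

```python
def pick_model(simple, complex_, tolerance=0.01):
    """Occam's-razor selection: prefer the simpler model unless the complex
    one beats it by more than `tolerance` in validation accuracy.
    Each model is a (name, n_parameters, val_accuracy) tuple."""
    name_s, params_s, acc_s = simple
    name_c, params_c, acc_c = complex_
    if acc_c - acc_s > tolerance:
        return name_c
    return name_s

# Hypothetical numbers: the 10x larger model is only 0.4 points better,
# which is within tolerance, so the simpler model wins.
print(pick_model(("3-layer MLP", 50_000, 0.912),
                 ("50-layer ResNet", 500_000, 0.916)))  # 3-layer MLP
```

What counts as a significant difference (the `tolerance`) is a per-use-case decision, exactly as the answer says.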
2019/01/14
<issue_start>username_0: I was thinking of something of the sort: 1. Build a program (call this one fake user) that generates lots and lots and lots of data based on the usage of another program (call this one target) using stimuli and response. For example, if the target is a minesweeper, the fake user would play the game a Carl Sagan number of times, as well as try to click all buttons in all sorts of different situations, etc... 2. Run a machine learning program (call this one the copier) designed to evolve code that behaves as similarly as possible to the target. 3. Kablam, you have a "sufficiently nice" open source copy of the target. Is this possible? Is something else possible to achieve the same result, namely, to obtain a "sufficiently nice" open source copy of the original target program?<issue_comment>username_1: Remarkably, more or less the scenario you describe is not only feasible but has already been demonstrated [(detailed explanation and fascinating videos at link)](https://worldmodels.github.io/). However, the fidelity of the copy is currently quite limited: [![enter image description here](https://i.stack.imgur.com/FO0Af.png)](https://i.stack.imgur.com/FO0Af.png) So for now, your copy will be quite low quality. However, there is a big exception to this rule: if the software you are copying is itself based on machine learning, then you can probably make a high-quality copy quite cheaply and easily, [as I and my co-authors explain in this short article.](https://medium.com/@damilare/pirating-ai-800a8da6431b) Interesting question and I'm quite sure that the correct answer will change rapidly over the next few years. Upvotes: 1 <issue_comment>username_2: This is the proposed way to reverse engineer software using AI. * Program fake\_user operates program target\_prog in diverse ways to generate a huge and comprehensive data set.
* The parameters of an artificial network are trained to produce, within specified accuracy and reliability criteria, a behavioral equivalent of target\_prog. Not only is this possible, but it is becoming standard practice for AI projects other than reverse engineering games. There are caveats. * Program target\_prog may be of sufficient complexity to exceed the capabilities of existing network designs and convergence techniques. * The project may lack access to funds and computing resources to complete the generation and training required to achieve reasonable accuracy, with sufficient reliability, in the time allotted. * The expertise of those involved may not be sufficient to produce satisfactory results. * Although the source code is not copied and the parameter state achieved through learning contains equivalent functionality, there is no guarantee that civil liability will not result. Copyright law in one or more jurisdictions may be interpreted as a protection against this kind of copying even though the text of the source code was not copied verbatim. Upvotes: 1 <issue_comment>username_3: A recent paper from January 2023 does this too: "an algorithm that synthesizes the source code of simple 2D video games from a small amount of observed video data" <https://www.basis.ai/blog/autumn/> Original research article: <https://dl.acm.org/doi/10.1145/3571249> Upvotes: 0
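The fake\_user/copier loop described above can be sketched at toy scale. Here a trivial linear function stands in for target\_prog and a least-squares fit stands in for network training; a real target would need an actual neural network and far more data:

```python
import numpy as np

def target_prog(x):
    """Stand-in for the closed-source target: a fixed input -> output map."""
    return 3.0 * x + 1.0

# Step 1: "fake_user" exercises the target to build a stimulus/response dataset.
xs = np.linspace(-5.0, 5.0, 100)
ys = np.array([target_prog(x) for x in xs])

# Step 2: the "copier" fits a model to the observed behavior
# (here just a linear fit via least squares).
A = np.column_stack([xs, np.ones_like(xs)])
(w, b), *_ = np.linalg.lstsq(A, ys, rcond=None)

# Step 3: the copy reproduces the target's behavior on an unseen stimulus.
print(abs(target_prog(2.5) - (w * 2.5 + b)) < 1e-6)  # True
```

The caveats in the answer above correspond directly to step 2: a target more complex than the model family (here, lines) simply cannot be cloned this way.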
2019/01/14
<issue_start>username_0: I was trying to understand the loss function of GANs, but I found a little mismatch between different papers. This is taken from [the original GAN paper](https://arxiv.org/pdf/1406.2661.pdf): > > The adversarial modeling framework is most straightforward to apply when the models are both multilayer perceptrons. To learn the generator's distribution $p\_{g}$ over data $\boldsymbol{x}$, we define a prior on input noise variables $p\_{\boldsymbol{z}}(\boldsymbol{z})$, then represent a mapping to data space as $G\left(\boldsymbol{z} ; \theta\_{g}\right)$, where $G$ is a differentiable function represented by a multilayer perceptron with parameters $\theta\_{g} .$ We also define a second multilayer perceptron $D\left(\boldsymbol{x} ; \theta\_{d}\right)$ that outputs a single scalar. $D(\boldsymbol{x})$ represents the probability that $\boldsymbol{x}$ came from the data rather than $p\_{g}$. We train $D$ to maximize the probability of assigning the correct label to both training examples and samples from $G$. We simultaneously train $G$ to minimize $\log (1-D(G(\boldsymbol{z})))$ : > > > In other words, $D$ and $G$ play the following two-player minimax game with value function $V(G, D)$ : > > > $$ \min \_{G} \max \_{D} V(D, G)=\mathbb{E}\_{\boldsymbol{x} \sim p\_{\text {data }}(\boldsymbol{x})}[\log D(\boldsymbol{x})]+\mathbb{E}\_{\boldsymbol{z} \sim p\_{\boldsymbol{z}}(\boldsymbol{z})}[\log (1-D(G(\boldsymbol{z})))] $$ Equation (1) in this version of [the pix2pix paper](https://arxiv.org/pdf/1611.07004.pdf) > > The objective of a conditional GAN can be expressed as > $$ > \begin{aligned} > \mathcal{L}\_{c G A N}(G, D)=& \mathbb{E}\_{x, y}[\log D(x, y)]+\\ > & \mathbb{E}\_{x, z}[\log (1-D(x, G(x, z))], > \end{aligned} > $$ > where $G$ tries to minimize this objective against an adversarial $D$ that tries to maximize it, i.e. $G^{\*}=$ $\arg \min \_{G} \max \_{D} \mathcal{L}\_{c G A N}(G, D)$. 
> > > To test the importance of conditioning the discriminator, we also compare to an unconditional variant in which the discriminator does not observe $x$: > $$ > \begin{aligned} > \mathcal{L}\_{G A N}(G, D)=& \mathbb{E}\_{y}[\log D(y)]+\\ > & \mathbb{E}\_{x, z}[\log (1-D(G(x, z))] . > \end{aligned} > $$ > > > Putting aside the fact that pix2pix is using a conditional GAN, which introduces a second term $y$, the two formulas quite resemble each other, except that in the pix2pix paper, they try to get the minimax of ${\cal{L}}\_{cGAN}(G, D)$, which is defined to be $E\_{x,y}[...] + E\_{x,z}[...]$, whereas in the original paper, they define $\min\max V(G, D) = E[...] + E[...]$. I am not coming from a good math background, so I am quite confused. I'm not sure where the mistake is, but assuming that $E$ is expectation (correct me if I'm wrong), the version in pix2pix makes more sense to me, although I think it's quite unlikely that Goodfellow could make this mistake in his amazing paper. Maybe there's no mistake at all and it's me who does not understand them correctly.<issue_comment>username_1: What is meant by both papers is that we have two agents (generator and discriminator) playing a game with the value function `V` defined as a sum of the expectations (i.e. an expectation of the outcome value defined as a sum of two terms, or actually a logarithm of a product...). The generator uses a strategy `G` encoded in the parameters of its neural network (`θg`), the discriminator uses a strategy `D` encoded in the parameters of its neural network (`θd`). Our goal is to (hopefully) find such a pair of strategies (a pair of parameter sets `θgmin` and `θdmax`) that produce the minimax value. While trying to find the (`θgmin`, `θdmax`) pair using gradient descent, we actually have two loss functions: one is the loss function for `G`, parameterized by `θg`, another is the loss function for `D`, parameterized by `θd`, and we train them alternately on minibatches.
If you look at the Algorithm 1 in the original paper, the loss function for the discriminator is `-log(D(x; θd)) - log(1 - D(G(z); θd))`, and the loss function for the generator is `log(1 - D(G(z; θg)))` (in both cases, in the original paper, `x` is sampled from the reference data distribution and `z` is sampled from noise): The ideal value for the loss function of the discriminator is 0, otherwise it's greater than 0. The "loss" function of the generator is actually negative, but, for better gradient descent behavior, can be replaced with `-log(D(G(z; θg)))`, which also has the ideal value for the generator at 0. It is impossible to reach zero loss for both generator and discriminator in the same GAN at the same time. However, the idea of the GAN is not to reach zero loss for any of the game agents (this is actually counterproductive), but to use that "double gradient descent" to "converge" the distribution of `G(z)` to the distribution of `x`. Upvotes: 0 <issue_comment>username_2: I'm not sure I understand your question. However, responding to your question in the comments: The difference between the two objectives is that: In an ordinary GAN, we want to push $p(G)$ to be as close as possible to $p(data)$ In a conditional GAN, we have a context $c$. If we imagine for ease of understanding that $c=[1,2,3]$ is a discrete variable where all the data can be categorised under one of these c values, then we want to: push $p(G|c=1)$ as close as possible to $p(data|c=1)$ push $p(G|c=2)$ as close as possible to $p(data|c=2)$ push $p(G|c=3)$ as close as possible to $p(data|c=3)$ Upvotes: -1 <issue_comment>username_3: The question is about a mismatch between the loss function in two papers on GANs. The first paper is [*Generative Adversarial Nets*](https://arxiv.org/pdf/1406.2661.pdf) <NAME> et al., 2014, and the excerpt image in the question is this. > > The adversarial modeling framework is most straightforward to apply when the models are both multilayer perceptrons.
To learn the generator’s distribution $p\_g$ over data $x$, we define a prior on input noise variables $p\_z (z)$, then represent a mapping to data space as $G (z; \theta\_g)$, where $G$ is a differentiable function represented by a multilayer perceptron with parameters $\theta\_g$. We also define a second multilayer perceptron $D (x; \theta\_d)$ that outputs a single scalar. $D (x)$ represents the probability that $x$ came from the data rather than $p\_g$. We train $D$ to maximize the probability of assigning the correct label to both training examples and samples from $G$. We simultaneously train $G$ to minimize $\log (1 − D(G(z)))$: > > > In other words, $D$ and $G$ play the following two-player minimax game with value function $V (G, D)$: > > > $$ \min\_G \, \max\_D V (D, G) = \mathbb{E}\_{x∼p\_{data}(x)} \, [\log \, D(x)] \\ > \quad\quad\quad\quad\quad\quad\quad + \, \mathbb{E}\_{z∼p\_z(z)} \, [\log \, (1 − D(G(z)))] \, \text{.} \quad \text{(1)} $$ > > > The second paper is [*Image-to-Image Translation with Conditional Adversarial Networks*](https://arxiv.org/pdf/1611.07004.pdf), <NAME>-<NAME> <NAME>, 2018, and the excerpt image in the question is this. > > The objective of a conditional GAN can be expressed as > > > $$ \mathcal{L}\_{cGAN} (G, D) = \mathbb{E}\_{x, y} \, [\log D(x, y)] \\ \quad\quad\quad\quad\quad\quad\quad + \mathbb{E}\_{x, z} \, [\log \, (1 − D(x, G(x, z))], \quad \text{(1)} $$ > > > where $G$ tries to minimize this objective against an adversarial $D$ that tries to maximize it, i.e.
> > > $$ G^{∗} = \arg \, \min\_G \, \max\_D \mathcal{L}\_{cGAN} (G, D) \, \text{.} $$ > > > To test the importance of conditioning the discriminator, we also compare to an unconditional variant in which the discriminator does not observe $x$: > > > $$ \mathcal{L}\_{GAN} (G, D) = \mathbb{E}\_y \, [\log \, D(y)] \\ > \quad\quad\quad\quad\quad\quad\quad + \mathbb{E}\_{x, z} \, [\log \, (1 − D(G(x, z))] \, \text{.} \quad \text{(2)} $$ > > > In the above $G$ refers to the generative network, $D$ refers to the discriminative network, and $G^{\*}$ refers to the minimizer with respect to $G$ of the maximum with respect to $D$. As the question author tentatively put forward, $\mathbb{E}$ is the expectation with respect to its subscripts. The question of discrepancy is that the right hand sides do not match between the first paper's equation (1) and the second paper's equation (2), which omits the condition involving $x$. First paper: $$ \mathbb{E}\_{x∼p\_{data}(x)} \, [\log \, D(x)] \\ \quad\quad\quad\quad\quad\quad\quad + \, \mathbb{E}\_{z∼p\_z(z)} \, [\log \, (1 − D(G(z)))] \, \text{.} \quad \text{(1)} $$ Second paper: $$ \mathbb{E}\_y \, [\log \, D(y)] \\ \quad\quad\quad\quad\quad\quad\quad + \mathbb{E}\_{x, z} \, [\log \, (1 − D(G(x, z))] \, \text{.} \quad \text{(2)} $$ The second, later paper further states this. > > GANs are generative models that learn a mapping from random noise vector $z$ to output image $y, G : z \rightarrow y$. In contrast, conditional GANs learn a mapping from observed image $x$ and random noise vector $z$, to $y, G : {x, z} \rightarrow y$. > > > Notice that there is no $y$ in the first paper and the removal of the condition in the second paper corresponds to the removal of $x$ as the first parameter of $D$. This is one of the causes of confusion when comparing the right hand sides. The others are use of variables and degree of explicitness in notation. The tilde $\sim$ means *drawn according to*.
The right hand side in the first paper indicates that the expectation involving $x$ is based on a drawing according to the probability distribution of the data with respect to $x$ and the expectation involving $z$ is based on a drawing according to the probability distribution of $z$ with respect to $z$. The removal of the observation of $x$ from the second right hand term of the second paper's equation (2), which is the first parameter of $G$, the replacement of that equation's $y$ variable with the now freed up $x$ variable, and the acceptance of the abbreviation of the tilde notation used in the first paper then bring both papers into exact agreement. Upvotes: 1
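The two losses of Algorithm 1 discussed in this thread can be written out numerically for scalar discriminator outputs. This is a minimal sketch of the loss values only (not the training loop); the probabilities fed in below are arbitrary examples:

```python
import math

def discriminator_loss(d_real, d_fake):
    """Minimization form of Algorithm 1's discriminator update:
    maximize log D(x) + log(1 - D(G(z))), i.e. minimize the negative."""
    return -math.log(d_real) - math.log(1.0 - d_fake)

def generator_loss(d_fake):
    """Non-saturating generator loss: minimize -log D(G(z)) instead of
    log(1 - D(G(z))), which gives stronger gradients early in training."""
    return -math.log(d_fake)

# d_real = D(x) on a real sample, d_fake = D(G(z)) on a generated one.
print(discriminator_loss(0.999, 0.001))  # near 0: D is winning
print(generator_loss(0.5))               # ln 2: D is fooled half the time
```

Training alternates gradient steps on these two losses, one minibatch at a time, as the first answer above describes.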
2019/01/15
<issue_start>username_0: Is there any way to control the extraction of features? How do I determine which features are being learned during training, i.e. whether relevant information is being learned or not?<issue_comment>username_1: There are methods called "scoring systems" where you give an image scores such as "0.9 stripes, 0.0 red, 0.8 hair, ..." and use those scores to classify objects. It's an older idea, not used to determine if the network is learning. It's not in a standard CNN. To determine if relevant information is being learned or not, it's standard to use the testing accuracy, training accuracy, confusion matrix, or AUC. Determining what exactly a CNN is learning is a complicated research problem that's ongoing. In short - you can't really know. For a basic network, you can tell that it is learning *something* but not what it's actually using to make determinations. Upvotes: 2 <issue_comment>username_2: Yes there is! If a model generalizes well to the test set, we already know that it has found *some* useful features. However, the latent representation of the data may still be "entangled" - a single element of the latent vector may actually encode information about multiple attributes of the input, or a single attribute may be spread across multiple elements. We usually prefer a representation in which the features are represented by the axes of the latent space - a "disentangled" representation. For example, if we were encoding faces, it would be nice to have an axis for smiling/not, another for masculine/feminine, and so on. Pushing models to learn "clean" (disentangled) representations is an active sub-field of machine learning research with practical applications (like interpretability, but also because it makes it easier for "downstream" models to learn their tasks, e.g. a control policy in a reinforcement learning system taking as input a learned representation from a [world model](https://worldmodels.github.io/)). Where to begin?
Start with L2 regularisation to push your network to "spend" those weights wisely (more weights close to zero => sparser latent vector) and work your way up from there. Upvotes: 0 <issue_comment>username_3: So there are pictures of low-level activation maps, and some gradient-based information where you take the derivative of the output with respect to the input and generate a heatmap. I kind of have my doubts on how useful this is in general; IMO it kind of creates a fallacious illusion of understanding. There's some additional research using blurring to figure out the relevant features also, but again I have my doubts. Probably the most useful is generating images by optimizing your class score. You can learn how badly your CNN actually labels things (doing this makes you realize quickly that CNNs are garbage at actually understanding and incredibly easy to trick). Upvotes: 0
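The L2 regularisation suggestion above amounts to adding a weight penalty to the task loss. A minimal sketch; the lambda value, weight arrays, and task loss are arbitrary toy numbers:

```python
import numpy as np

def l2_regularized_loss(task_loss, weights, lam=0.01):
    """Total loss = task loss + lam * sum of squared weights.
    The penalty pushes weights toward zero, which encourages the
    sparser latent vectors mentioned above."""
    return task_loss + lam * sum(np.sum(w ** 2) for w in weights)

# Arbitrary toy numbers: two weight tensors and a task loss of 0.42.
weights = [np.array([[0.5, -0.2], [0.1, 0.0]]), np.array([1.0, -1.0])]
print(round(l2_regularized_loss(0.42, weights), 6))  # 0.443
```

In a deep learning framework the same effect usually comes for free as a "weight decay" option on the optimizer.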
2019/01/16
<issue_start>username_0: What is "bad local minima"? The following papers all mention this expression. 1. [Eliminating all bad Local Minima from Loss Landscapes without even adding an Extra Unit](https://arxiv.org/abs/1901.03909) 2. [Elimination of All Bad Local Minima in Deep Learning](https://arxiv.org/abs/1901.00279) 3. [Adding One Neuron Can Eliminate All Bad Local Minima](https://arxiv.org/abs/1805.08671)<issue_comment>username_1: As mentioned in the abstract of one of these papers, a bad local minimum is a suboptimal local minimum, meaning a local minimum whose loss value is strictly higher than that of a global minimum. Upvotes: 0 <issue_comment>username_2: The adjective *bad* isn't mathematically descriptive. A better term is sub-optimal, which implies the state of learning might appear optimal based on current information but the optimal solution from among all possibilities is not yet located. Consider a graph representing a loss function, one of the names for the measure of disparity between the current learning state and the optimal one. Some papers use the term *error*. In all learning and adaptive control cases, it is the quantification of disparity between current and optimal states. This is a 3D plot, so it only visualizes the loss as a function of two real number parameters. There can be thousands, but the two will suffice to visualize the meaning of local versus global minima. Some may recall this phenomenon from pre-calculus or analytic geometry. If pure gradient descent is used with an initial value to the far left, the local minimum in the loss surface would be located first. Climbing the slope to test the loss value at the global minimum further to the right would not generally occur. The gradient will in most cases cause the next iteration in learning algorithms to travel downhill, thus the term *gradient descent*. This is a simple case: beyond the two parameters visualized here, there can be thousands of local minima in the loss surface.
[![Error Surface Showing How Global Optimum Can be Missed](https://i.stack.imgur.com/2tS9r.png)](https://i.stack.imgur.com/2tS9r.png) There are many strategic approaches to improve the speed and reliability in the search for the global minimum loss. These are just a few. * Descent using gradient, or more exactly, the Jacobian * Possibly further terms of the multidimensional Taylor series, the next in line being related to curvature, the Hessian * Injection of noise, which at an intuitive (and somewhat oversimplified) level is like shaking the graph so a ball representing the current learning state might bounce over a ridge or peak and land (essentially by trial and error) in the global minimum — Simulated annealing is a materials science analogy and involves simulating the injection of Brownian (thermal) motion * Searches from more than one starting position * Parallelism, with or without an intelligent control strategy, to try multiple initial learning states and hyper-parameter settings * Models of the surface based on past learning or theoretically known principles so that the global minimum can be projected as trials are used to tune the parameters of the model Interestingly, the only way to prove that the optimal state is found is to try every possibility by checking each one, which is not feasible in most cases, or to rely on a model from which the global optimum may be determined. Most theoretical frameworks target a particular accuracy, reliability, speed, and minimal input information quantity as part of an AI project, so that no such exhaustive search or model perfection is required. In practical terms, for example, an automated vehicle control system is adequate when the unit testing, alpha functional testing, and eventually beta functional testing all indicate a lower incidence of both injury and loss of property than when humans drive. It is a statistical quality assurance, as in the case of most service and manufacturing businesses.
The graph above was developed for another answer, which has additional information for those interested. * [If digital values are mere estimates, why not return to analog for AI?](https://ai.stackexchange.com/questions/7328/if-digital-values-are-mere-estimates-why-not-return-to-analog-for-ai/7982#7982) Upvotes: 3 [selected_answer]
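The "searches from more than one starting position" strategy above can be demonstrated on a toy 1-D loss surface with one bad local minimum and one global minimum. The function, learning rate, and starting points below are made up purely for illustration:

```python
def f(x):
    """Toy 1-D loss surface: a bad local minimum near x = +1 and the
    global minimum near x = -1 (the 0.3*x term breaks the symmetry)."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    """Analytic derivative of f."""
    return 4.0 * x * (x * x - 1.0) + 0.3

def descend(x, lr=0.01, steps=2000):
    """Plain gradient descent from a single starting point."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Multiple starting positions: some runs get trapped in the bad local
# minimum; we keep whichever endpoint has the lowest loss.
endpoints = [descend(x0) for x0 in (-2.0, 0.5, 2.0)]
best = min(endpoints, key=f)
print(best)  # near -1: the global minimum, which a start at +2 never reaches
```

A run started at +2.0 descends into the bad local minimum near +1 and stays there, which is exactly the trapping behavior the answer's figure illustrates.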
2019/01/16
<issue_start>username_0: For example, I have the following csv: [training.csv](https://gist.github.com/JafferWilson/3ab8ee88f3fc32e78579a1054aac757d) I want to know how I can determine which column will be the best feature for getting the output prediction before I go for machine training. Please do share your responses<issue_comment>username_1: You should know your data 100%. That means knowing what each of your columns and rows represents (e.g. temperature column, humidity, rows representing days), the value units (e.g. Celsius or Fahrenheit?), accuracy, value format (strings or numbers). You may need to clean and reorganize the data if necessary to bring them to your desired form (e.g. change the structure, units, aggregating, etc). Then use your logic and experience to decide what columns are necessary. This is in general. I hope someone will give you a more specific answer. Upvotes: 0 <issue_comment>username_2: Though there is no universal method that can be blindly used for all datasets, here is what I usually do: * Fill missing values using interpolation or the mean if missing values are less than 10-15 percent of the number of rows; otherwise drop the column. * Encode categorical data using some kind of encoding, e.g. one hot, etc. * Then normalize/rescale columns. * Now look at the variance in each feature. Usually, features with more variance are more important. * Next, see the correlation among columns. If two columns are highly correlated, you need to keep only one. Upvotes: 2
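The variance and correlation checks from the second answer can be sketched with NumPy. The three columns below are synthetic stand-ins for a real dataset: one informative feature, one near-duplicate of it, and one nearly constant column:

```python
import numpy as np

# Toy table: 3 candidate feature columns.
rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 1.0, 200)                    # informative feature
x2 = x1 * 0.98 + rng.normal(0.0, 0.05, 200)       # near-duplicate of x1
x3 = np.full(200, 5.0) + rng.normal(0.0, 0.001, 200)  # ~constant column

X = np.column_stack([x1, x2, x3])
variances = X.var(axis=0)
corr = np.corrcoef(X, rowvar=False)

print(variances.round(3))    # x3's variance is ~0: a candidate to drop
print(round(corr[0, 1], 2))  # ~1.0: x1 and x2 are redundant, keep one
```

Note that this assumes the columns were already rescaled comparably, as the answer's earlier normalization step requires; raw variance comparisons across differently scaled units are meaningless.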
2019/01/16
<issue_start>username_0: There are many people trying to show how neural networks are still very different from humans, but I fail to see in what way human brains are different from neural models in anything but complexity. The way we learn is similar, the way we process information is similar, the ways we predict outcomes and generate outputs are similar. Give a model enough processing power, enough training samples, and enough time and you can train a human. So, what is the difference between human (brains) and neural networks?<issue_comment>username_1: One incredibly important difference between humans and NNs is that the human brain is the result of billions of years of evolution whereas NNs were partially inspired by looking at the result and thinking "... we could do that" (utmost respect for [Hubel and Wiesel](https://www.youtube.com/watch?v=IOHayh06LJ4)). Human brains (and in fact anything biological really) have an embedded structure to them within the DNA of the animal. [DNA has about 4 MB of data](https://stackoverflow.com/questions/8954571/how-much-memory-would-be-required-to-store-human-dna) and incredibly contains the information of where arms go, where to put sensors and in what density, how to initialize neural structures, the chemical balances that drive neural activation, memory architecture, and learning mechanisms among many many other things. This is phenomenal. Note, the placement of neurons and their connections isn't encoded in DNA; rather, the rules dictating how these connections form are. This is fundamentally different from simply saying "there are 3 conv layers then 2 fully connected layers...". There has been some promising progress at [neural evolution](https://www.nature.com/articles/s42256-018-0006-z#ref-CR53) that I highly recommend checking out. Another important difference is that during "runtime" (lol), human brains (and other biological neural nets) have a multitude of functions beyond the neurons.
Things like glial cells. There are about 3.7 glial cells for every neuron in your body. They are supportive cells in the central nervous system that surround neurons, provide support for and insulation between them, and trim dead neurons. This maintenance is a continuous update for neural structures and allows resources to be utilized most effectively. With fMRIs, neurologists are only beginning to understand how these small changes affect brains. This isn't to say that it's impossible to have an artificial NN that can have the same high-level capabilities as a human. It's just that there is a lot that is missing from our current models. It's like we are trying to replicate the sun with a campfire but heck, they are both warm. Upvotes: 3 [selected_answer]<issue_comment>username_2: **Comparing Unlike Objects** The comparison between a person and an artificial network cannot be made on an equal basis. The former is a composition of many things that the latter is not. Unlike an artificial network sitting in computer memory on a laptop or server, a human being is an organism, from head to toe, living in the biosphere and interacting with other human beings from birth. **Human Training** We have latent intelligence in the zygotes that met to form us and solidified as our genetic code during meiosis, but it is not yet trained. It cannot be until the brain grows from its first cells, directed by the genetic expressions of the brain's metabolic, sensory, cognitive, motor control, and immune structure and function. After nine months of growth, a newborn baby's intelligence is not yet exhibited in motion, language, or behavior other than to suck liquid food. Our intelligence begins to emerge after initial basic behavioral training and does not reach the ability to pass a test indicating academic abilities until the corresponding stages of development in a family structure and components of education are complete.
These are all observations well studied and documented by those in the field of developmental psychology. **Artificial Networks are Not Particularly Neural** An artificial network is a distant and distorted conceptual offspring of a now obsolete model of how neurons behave in networks. Even when the perceptron was first conceived, it was known that neurons reacted to activation from electrical pulses transmitted across synapses from other neurons arranged in complex micro-structures, not by applying an activation function to a vector-matrix product. The parameter matrix at the input of an artificial neuron sums attenuated signals; it does not electro-chemically react to pulses that may only be roughly aligned in time. Since then, imaging and *in vitro* study of neurons have been revealing the complexities of neuro-plasticity (genetically directed morphing of the network topology of neurons), the many varieties of cell types, the groupings of cells to form function geometrically, and the involvement of energy metabolism in the axon. In the human brain, chemical pathways of dozens of compounds regulate function and comprise global and regional states; the secretion, transmission, agonist and antagonist reception, interaction, and uptake of those compounds are under study. There is barely, if at all, an equivalent in the environment of the artificial networks deployed today, although nothing stops us from designing such regulation systems, and some of the most recent work has pushed the envelope in that direction. **Sexual Reproduction** Artificial networks are also not brains inside individuals produced by sexual reproduction, therefore potentially exhibiting in neurological capacity the best of two parents, or the worst. We do not yet spawn artificial networks from genetic algorithms, although that has been thought of and it is likely to be researched again.
**Adjusting the Basis for Comparison** In short, the basis for comparison renders it meaningless. However, with some adjustment based on the above, another, similar comparison can be considered that is meaningful and on a more equal basis. > > What is the difference between a college student and an artificial network that has billions of artificial neurons, well configured and attached to five senses and motor control, integrated inside a humanoid robot that has been nurtured and educated like a member of a family and a community for eighteen years since its initial deployment? > > > We don't know. We can't even simulate such a robotic experience of eighteen years or properly project what might happen with scientific confidence. Many of the AI components of the above are not yet well developed. When they are — and there is no particularly compelling reason to think they cannot be — then we will find out together. **Research that May Provide an Answer** From further cognitive science development, real-time, neuron-level imaging, and work on the genetic expressions out of which brains grow, artificial neuron designs will likely progress beyond perceptrons and the more temporally aware LSTM, B-LSTM, and GRU varieties, and the topologies of neuron arrangements may break from their current Cartesian structural limitations. The neurons in a brain are not arranged in orthogonal rows and columns. They form clusters that exhibit closed-loop feedback at low structural levels. This can be simulated by a B-LSTM type artificial network cell, but any electrical engineer schooled in digital circuit design understands that simulation and realization are miles apart in efficiency. A signal processor can run thousands of times faster than its simulation.
From the development of computer vision, hearing, tactile-motor coordination, olfactory sensing, materials science support, robotic packaging, and miniature power sources far beyond what lithium batteries can produce may come humanoid robots that can learn while interacting. At that time it would probably be easy to find a family that cannot have children and would adopt an artificial child. **Scientific Rigor** Progress in these areas is necessary for such a comparison to be made on a scientific basis, and for the comparison results to be published with confidence and pass peer review by serious researchers not interested in media hype, making the right career moves, or hiking their company's stock prices. Upvotes: 1
2019/01/17
901
3,447
<issue_start>username_0: I've seen the Monte Carlo return $G\_{t}$ being used in REINFORCE and the TD($0$) target $r\_t + \gamma Q(s', a')$ in vanilla actor-critic. However, I've never seen someone use the lambda return $G^{\lambda}\_{t}$ in these situations, nor in any other algorithms. Is there a specific reason for this? Could there be performance improvements if we used $G^{\lambda}\_{t}$?<issue_comment>username_1: That can be done. For example, Chapter 13 of the 2nd edition of Sutton and Barto's Reinforcement Learning book (page 332) has pseudocode for "Actor Critic with Eligibility Traces". It's using $G\_t^{\lambda}$ returns for the critic (value function estimator), but also for the actor's policy gradients. Note that you do not *explicitly* see the $G\_t^{\lambda}$ returns mentioned in the pseudocode. They are being used *implicitly* through eligibility traces, which allow for an efficient online implementation (the "backward view"). --- I do indeed have the impression that such uses are fairly rare in recent research though. I haven't personally played around with policy gradient methods to tell from personal experience why that would be. My guess would be that it is because policy gradient methods are almost always combined with Deep Neural Networks, and **variance** is already a big enough problem in training these things without starting to involve long-trajectory returns. If you use large $\lambda$ with $\lambda$-returns, you get low bias, but high variance. For $\lambda = 1$, you basically get REINFORCE again, which isn't really used much in practice, and has very high variance. For $\lambda = 0$, you just get one-step returns again. Higher values for $\lambda$ (such as $\lambda = 0.8$) tend to work very well in my experience with tabular methods or linear function approximation, but I suspect the variance may simply be too much when using DNNs. 
Note that it is quite popular to use $n$-step returns with a fixed, generally fairly small, $n$ in Deep RL approaches. For instance, I believe the original A3C paper used $5$-step returns, and Rainbow uses $3$-step returns. These often work better in practice than $1$-step returns, but still have reasonably low variance due to using small $n$. Upvotes: 5 [selected_answer]<issue_comment>username_2: Recent actor-critic algorithms *do* use $\lambda$-returns, but they are disguised as something called the [Generalized Advantage Estimator](https://arxiv.org/pdf/1506.02438.pdf) defined as $A^{GAE}\_t = \sum\_{i=0}^{\infty} (\gamma\lambda)^i \delta\_{t+i}$ where $\delta\_t = r\_t + \gamma V(s\_{t+1}) - V(s\_t)$. This turns out to be identically equal to $[G^\lambda\_t - V(s\_t)]$, *i.e.* the $\lambda$-return with a value-function baseline subtracted from it. Theoretically, any actor-critic gradient method could use this quite easily; it was combined with TRPO in the GAE paper, and later used for PPO. Similarly, ACER uses an off-policy variant known as Retrace($\lambda$). For replay methods like DQN or DDPG, it is harder to implement $\lambda$-returns. This is why they have historically defaulted to $n$-step returns as @DennisSoemers mentioned. I recently [published a paper](https://papers.nips.cc/paper/8397-reconciling-returns-with-experience-replay) that describes a way to efficiently combine $\lambda$-returns with experience replay, which I hope will increase the popularity of $\lambda$-returns for these methods. Upvotes: 3
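As a quick numerical check of the identity mentioned above, $A^{GAE}\_t = G^\lambda\_t - V(s\_t)$, the sketch below (plain NumPy, with made-up rewards and value estimates) computes the GAE by its standard backward recursion and the $\lambda$-return by its forward-view recursion, then verifies that they differ exactly by the value baseline:

```python
import numpy as np

def gae(rewards, values, gamma, lam):
    """Generalized Advantage Estimator via the backward recursion
    A_t = delta_t + gamma * lam * A_{t+1}.
    `values` holds V(s_0)..V(s_T); the last entry is the bootstrap value."""
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        adv[t] = running
    return adv

def lambda_returns(rewards, values, gamma, lam):
    """Forward-view lambda-returns via the recursion
    G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1})."""
    T = len(rewards)
    G = np.zeros(T)
    nxt = values[-1]  # bootstrap from V(s_T)
    for t in reversed(range(T)):
        nxt = rewards[t] + gamma * ((1 - lam) * values[t + 1] + lam * nxt)
        G[t] = nxt
    return G

rewards = np.array([1.0, 0.0, -0.5, 2.0])          # made-up trajectory
values = np.array([0.5, 0.4, 0.3, 0.2, 0.0])       # V(s_0)..V(s_T)
adv = gae(rewards, values, gamma=0.99, lam=0.8)
G = lambda_returns(rewards, values, gamma=0.99, lam=0.8)
# A^GAE_t == G^lambda_t - V(s_t), as claimed in the answer
assert np.allclose(adv, G - values[:-1])
```

Setting `lam=0` reduces `adv` to one-step TD errors, and `lam=1` recovers the full Monte Carlo advantage, matching the bias-variance trade-off discussed above.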
2019/01/18
1,203
4,747
<issue_start>username_0: I am working to build a reinforcement learning agent with DQN. The agent would be able to place buy and sell orders for day trading purposes. I am facing a little problem with that project. The question is "how to tell the agent to maximize the profit and avoid transactions where the profit is less than 100$". I want to maximize the profit inside a trading day and avoid placing the pair (limit buy order, limit sell order) if the profit on that transaction is less than 100$. The idea here is to avoid the little noisy movements. Instead, I prefer long beautiful profitable movements. Be aware that I thought of using the "Profit & Loss" as the reward. "I want the minimal profit per transaction to be 100$" ==> It seems this is not something that is enforceable. I can train the agent to maximize profit per transaction, but how large that profit will be cannot be ensured. At the beginning, I wanted to tell the agent: if the profit of a transaction is 50 dollars, I will subtract 100 dollars, so it becomes a penalty of 50 dollars for the agent. I thought it was a great way to tell the agent not to place a limit buy order if it is not sure it will give us a minimal profit of 100$. It seems that all I would be doing there is simply shifting the value of the reward. The agent only cares about maximizing the sum of rewards, not about individual transactions. How do I tell the agent to maximize the profit and avoid transactions where the profit is less than 100$? With that strategy, what guarantees that the agent will never make a buy/sell decision that results in less than 100 dollars profit? Could (sum of rewards - number of transactions \* 100) be a solution?
2019/01/19
3,717
15,067
<issue_start>username_0: I have an electromagnetic sensor and an electromagnetic field emitter. The sensor will read power from the emitter. I want to predict the position of the sensor using the reading. Let me simplify the problem: suppose the sensor and the emitter are in a 1-dimensional world, where there is only position X (not X, Y, Z) and the emitter emits power as a function of distance squared. From the painted image below, you will see that the emitter is drawn as a circle and the sensor is drawn as a cross. [![one emitter](https://i.stack.imgur.com/kuJR8.png)](https://i.stack.imgur.com/kuJR8.png) E.g. if the sensor is 5 meters away from the emitter, the reading you get on the sensor will be 5^2 = 25. So the correct position will be either 0 or 10, because the emitter is at position 5. So, with one emitter, I cannot know the exact position of the sensor. I only know that there is a 50% chance it's at 0, and a 50% chance it's at 10. So if I have two emitters like in the following image: [![two emitters](https://i.stack.imgur.com/piKWZ.png)](https://i.stack.imgur.com/piKWZ.png) I will get two readings. And I can know exactly where the sensor is. If the readings are 25 and 16, I know the sensor is at 10. So from this fact, I want to use 2 emitters to locate the sensor. Now that I've explained the situation, my problems are these: 1. The emitter has a more complicated function of distance. It's not just distance squared, and it also has noise, so I'm trying to model it using machine learning. 2. In some areas, the emitter doesn't work so well. E.g. if you are between 3 and 4 meters away, the emitter will always give you a fixed reading of 9 instead of going from 9 to 16. 3. When I train the machine learning model with 2 inputs, the prediction is very accurate. E.g. if the input is 25,36 then the output will be position 0. But it means that after training, I cannot move the emitters at all.
If I move one of the emitters farther apart, the prediction will break immediately, because the reading will be something like 25,49 when the right emitter moves 1 meter to the right. And the prediction can be anything, because the model has not seen this input pair before. And I cannot afford to train the model on all possible distances between the 2 emitters. 4. The emitters may be slightly non-identical. The difference will be in the scale. E.g. one of the emitters can give a 10% bigger reading. But you can ignore this problem for now. My question is **How do I make the model work when the emitters are allowed to move?** Give me some ideas. Some of my ideas: 1. I think that I have to figure out the position of both emitters relative to each other dynamically. But after knowing the position of both emitters, how do I tell that to the model? 2. I have tried training each emitter separately instead of pairing them as input. But that means there are many positions that cause conflicts: when you get reading=25, the model will predict the average of 0 and 10, because both are valid positions for reading=25. You might suggest training to predict distance instead of position; that would be possible if there were no **problem number 2**. But because there is problem number 2, the prediction between 3 and 4 meters away will be wrong. The model will get 9 as input, and the output will be the average distance of 3.5 meters, or somewhere between 3 and 4 meters. 3. Use the model to predict a position probability density function instead of predicting the position. E.g. when the reading is 9, the model should predict a uniform density function from 3 to 4 meters. And then you can combine the 2 density functions from the 2 readings somehow. But I think it's not going to be that accurate compared to modeling 2 emitters together, because the density function can be quite complicated. We cannot assume a normal distribution or even a uniform distribution. 4.
Use some kind of optimizer to predict the position separately for each emitter, based on the assumption that both predictions must be the same. If the predictions are not the same, the optimizer must try to move the predictions so that they are exactly at the same point. Maybe reinforcement learning, where the actions are "move left", "move right", etc. I told you my ideas so that they might evoke some ideas in you. This is already my best, but it doesn't solve the issue elegantly yet. So ideally, I would want an end-to-end model that is fed the 2 readings and gives me the position even when the emitters are moved. How would I go about that? PS. The emitters are only allowed to move before usage. During usage or prediction, the model can assume that the emitters will not be moved anymore. This allows you to have time to run an emitter position calibration algorithm before usage. Maybe this will be a helpful thing for you to know.<issue_comment>username_1: Model input: * 1 mean scaled input for each emitter * 1 distance value for each distance **Multiple input** You mentioned there is noise. If the noise is constant, i.e. you test it in place A and the values returned are always the same, then it means training in different places. If you place it somewhere and the first reading is different from the second reading, then you need to take a lot of readings and select the mean or median of the readings. The central limit theorem suggests at least 30 readings. This would be the easiest. You could use each sample as an input, which allows the NN to learn to filter out the noise. This makes training longer but is probably better than just taking an average. **Scaled input** I know you said it is not important for now, and that the emitters come in different scales. I would scale the output of the emitter proportionally to its scale so that two emitters of different sizes will produce the same output relative to the sensor no matter how far the sensor is from the emitter.
This function might be a simple linear function or more complex, depending on whether the output of the emitter drops faster for a smaller emitter than for a larger one. **Distance value** You mentioned dynamically calculating the position of an emitter, but it was placed there, so you must know where it is. This means you could use two solutions. One is a coordinate system, which is a little more complex; a simpler solution is a distance vector. There must be a maximum distance at which one could place the emitters. Let's assume this distance is 25. You could normalise the data as any distance / maximum distance. This should be repeated for all unique combinations of emitters, i.e. if you have 2 emitters (A, B), then there is only one distance value; if you have 3 emitters (A, B, C), then 3 distance values, i.e. A-B, A-C, and B-C. A coordinate system is more complex, because using a number to denote a position will apply importance to it; for example, on a grid from 1 to 10 across x and y, an emitter at position 10,10 will have a greater importance than an emitter at 1,1. And if it is at 0,0, without a bias input your result will be 0. **Structure of the NN** Of course the structure of the NN, the data samples, what you use for validation, etc. will all play a role. Perhaps do some research on previous work.
See the following: * [Significant Location Detection & Prediction in Cellular Networks Using Artificial Neural Networks](https://pdfs.semanticscholar.org/5c78/363d64eaa8aa391943aea0a5dbf9ff359754.pdf) * [Mobile Localization Based on Received Signal Strength and Pearson's Correlation Coefficient](https://journals.sagepub.com/doi/full/10.1155/2015/157046) * [Discrete Indoor Three-Dimensional Localization System Based on Neural Networks Using Visible Light Communication](https://pdfs.semanticscholar.org/d578/f13b48ac16e5a0c8421bd7060e148c0a7c36.pdf) Upvotes: 1 <issue_comment>username_2: **Position Detection** In a traditional data acquisition and control scenario, with some assumptions, the relation between sensor signals $s\_i$, emitter drive $\epsilon\_j$, distances $x\_{ij}$, and calibration factors is modelled as follows. $$ \forall \, (i, j) \text{,} \quad \frac {s\_i} {v\_i} = \frac {\epsilon\_j} {v\_j \, x\_{ij}^2} $$ The assumptions include these. * Linear acquisition of magnetic flux signal strength * Linear control of magnetic flux signal strength * Independent readings, either by sequential reading or by use of two distinct emitter frequencies * Dismissal of relativistic phenomena * Single point emission * Single point detection It is correct that, with only a single emitter, the position of the sensor cannot be accurately determined, because the direction from which the signal originates cannot be disambiguated. Two emitters are necessary for reliability. In two-dimensional space, three are necessary, thus the term triangulation. In three-dimensional space, four are necessary. **Less Known Function with Motion and Noise** The more complicated function of distance was not specified, whether the sample rate is tightly controlled was not indicated, and the nature and magnitude of the noise relative to the signal were not provided. It also appears that there is low digital accuracy in the readings.
To model these contingencies, for motion, $j$ shall be the sample index, and $i$ remains the detector number. The data acquisition vector is now a tuple of the reading $r\_{ij}$ and time $t\_j$. The function $f$ may differ from the inverse squared function due to flux conduction curvature, non-point emission and detection, and other secondary effects. The combination of this function and noise $n$, a function of sample time $s\_j$, is made discrete, according to the question, rounding or truncating to the nearest integer (indicated by $\mathcal{I}$). $$ \forall \, (i, j) \text{,} \quad (r\_{ij}, \; t\_j) = \bigg(\mathcal{I}\big(f(\epsilon\_j, x\_{ij}) + n(s\_j)\big), \; s\_j\bigg) $$ There are other benefits to the additional emitter than disambiguation of direction. The impact of noise is reduced as redundancy is added, and there is the calibration issue. **Calibration** High-volume, low-cost electronic parts are not usually calibrated in the factory. Even when they are, the calibration cannot be trusted. Even if calibrated in the factory and then in the lab, the phenomenon of temperature and pressure drift complicates acquisition for passive emitters and transducers. Carefully designed instrumentation and measurement strategy can compensate for component behavioral variance, and redundant emitters and detectors can be used in such designs. Assuming accuracy above that of a mass-produced part is required, the calibration voltages $v\_i$ and $v\_j$ must be determined simultaneously and be consistently either relative to magnetic flux levels at some point or to each other. If the environment cannot be controlled, re-calibration may be periodically necessary so that the calibration will remain representative. > > The emitters may be slightly non-identical. The difference will be in the scale. E.g. one of the emitters can give a 10% bigger reading. But you can ignore this problem for now. > > > Calibration issues should not be deferred until later.
They must be built into the model and tuned as the parameters converge to an optimum during training. Fortunately, since $f$ is unknown and encapsulates calibration factors, addressing calibration up front will not likely frustrate proper analysis. **Drawing of Samples and Aligning Training and Use Distributions** It is important, however, to understand that, when training, the distribution of the training samples must match the distribution of the samples encountered when the training is expected to work. This applies directly to the calibration issue and determines the frequency of re-calibration. In essence, training is calibration. This is not new to the recent machine learning craze. Such was the case with self-adjusting PID controllers in the 1990s. **Addressing Questions in the Ideas Section of the Question** > > When I train the machine learning model with 2 inputs, the prediction is very accurate ... but it means that after training, I cannot move the emitters at all. ... I cannot afford to train the model on all possible distances between the 2 emitters. > > > That is the case if the training samples are not representative or insufficient in number, or if the model $f$ is entirely unknown or not used in the convergence strategy. > > I have to figure out the position of both emitters relative to each other dynamically. But after knowing the position of both emitters, how do I tell that to the model? > > > A model does not know the position of emitters or detectors. A model generalizes these. What you tell the model is what is known for sure about $f$ and $\mathcal{I}$. > > I have tried training each emitter separately instead of pairing them as input. > > > That defies the rule that the training distribution must match the usage distribution. Reliability, accuracy, and speed of convergence will all be damaged by doing that. > > Use the model to predict a position probability density function instead of predicting the position. ...
We cannot assume normal distribution or even uniform distribution. > > > Because of the noise function $n$, the function to be learned is necessarily stochastic, but that is not unusual, and that does not mean that convergence during learning will not occur. It merely means that the loss function cannot be expected to reach zero. It can nonetheless be minimized, even with motion. Because the objects attached to detectors and sensors are physical, have mass, and the forces involved are not nuclear or supernatural, acceleration cannot be either $\infty$ or $- \infty$, thus the vectors do not have the Markov property. If the preparation of training data allows the labeling of the readings and time stamps with reference positions derived from a test fixture using digital encoders with high accuracy, then this project is much more feasible. In such a case, it is the patterns in the time series and their relationship to actual position that are being learned. Then a B-LSTM or GRU type cell for the network layers may be the best choice. > > Maybe reinforcement learning where the actions are "move left", "move right", ... > > > Unless the system being designed is required to produce motion control, reinforcement learning or other adaptive control strategies are not indicated. In either case, since the Markov property is not present in a system that involves physical momentum, a form of learning that requires that property may not be the best control strategy. > > The emitters are only allowed to move before usage. During usage or prediction, the model can assume that the emitters will not be moved anymore. This allows you to have time to run an emitter position calibration algorithm before usage. > > > It is recommended to design the math and fixture used for training as flexibly as possible and then bind variables only after there is no doubt the system is working and various degrees of freedom are superfluous. Upvotes: 2
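To complement the analysis above, here is a rough sketch of the model-based alternative: rather than baking emitter positions into a trained network, treat them as parameters of the reading model (assumed inverse-square here, as in the simplified question) and fit the sensor position by least squares over a grid. The emitter positions, noise level, and grid bounds below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

EMITTERS = np.array([0.0, 9.0])   # hypothetical calibrated emitter positions

def readings(sensor_x, noise_sd=0.05):
    """Simulated readings: distance-squared model plus Gaussian noise."""
    return (sensor_x - EMITTERS) ** 2 + rng.normal(0.0, noise_sd, EMITTERS.shape)

def estimate(obs, grid=np.linspace(-5, 15, 2001)):
    """Least-squares fit: pick the grid position whose predicted readings
    best match the observed ones. Works for any emitter placement, since
    EMITTERS enters only as a parameter, not as training data."""
    pred = (grid[:, None] - EMITTERS[None, :]) ** 2   # model predictions
    err = ((pred - obs[None, :]) ** 2).sum(axis=1)    # squared residuals
    return grid[np.argmin(err)]

true_x = 4.0
est = estimate(readings(true_x))
assert abs(est - true_x) < 0.1
```

Because the emitter positions are parameters of `estimate` rather than something the model was trained on, moving an emitter before usage only means updating `EMITTERS` after the calibration step, not retraining.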
2019/01/22
361
1,326
<issue_start>username_0: [![enter image description here](https://i.stack.imgur.com/thmqC.png)](https://i.stack.imgur.com/thmqC.png) They only mention in the paper that the position embeddings are learned, which is different from what was done in ELMo. ELMo paper - <https://arxiv.org/pdf/1802.05365.pdf> BERT paper - <https://arxiv.org/pdf/1810.04805.pdf><issue_comment>username_1: These embeddings are nothing more than token embeddings. You just randomly initialize them, then use gradient descent to train them, just like what you do with token embeddings. Upvotes: 3 [selected_answer]<issue_comment>username_2: Sentences (for tasks such as NLI, which take two sentences as input) are differentiated in two ways in BERT: * First, a `[SEP]` token is put between them * Second, a learned embedding $E\_A$ is added to every token of the first sentence, and another learned vector $E\_B$ to every token of the second one That is, there are just **two** possible "segment embeddings": $E\_A$ and $E\_B$. Positional embeddings are learned vectors for every possible position between 0 and 512-1. Transformers don't have a sequential nature like recurrent neural networks do, so some information about the order of the input is needed; if you disregard this, your output will be permutation-invariant.
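As a toy sketch of how the three embedding tables combine (the sizes below are made up; BERT's real hidden size is 768 with a vocabulary of roughly 30k tokens), the token, position, and segment embeddings are summed element-wise per token, and all three tables are trained by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, MAX_POS, DIM = 100, 512, 8          # toy sizes, not BERT's real ones
tok_emb = rng.normal(size=(VOCAB, DIM))    # learned token embeddings
pos_emb = rng.normal(size=(MAX_POS, DIM))  # learned position embeddings
seg_emb = rng.normal(size=(2, DIM))        # the two segment vectors E_A, E_B

def embed(token_ids, segment_ids):
    """BERT-style input: element-wise sum of the three embeddings
    (added, not concatenated), giving one vector per token."""
    n = len(token_ids)
    return (tok_emb[token_ids]
            + pos_emb[np.arange(n)]
            + seg_emb[segment_ids])

x = embed([5, 17, 3, 9], [0, 0, 1, 1])  # two toy "sentences": A A | B B
assert x.shape == (4, DIM)
```

Since all three tables are ordinary parameter matrices, backpropagation updates the position and segment rows exactly as it updates token rows, which is the point of the accepted answer.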
2019/01/24
2,043
9,158
<issue_start>username_0: Imagine that I have an artificial neural network with a single hidden layer and that I am using ReLU as my activation function. If by chance I initialize my bias and my weights in such a form that: $$ X \* W + B < 0 $$ for every input **x** in **X**, then the partial derivative of the loss function with respect to W will always be 0! In a setup like the above, where the derivative is 0, is it true that an NN won't learn anything? If true (the NN won't learn anything), can I also assume that once the gradient reaches the value 0 for a given weight, that weight won't ever be updated?<issue_comment>username_1: > > In a setup like the above, where the derivative is 0, is it true that an NN won't learn anything? > > > There are a couple of adjustments to gradients that *might* apply if you do this in a standard framework: * Momentum may cause weights to continue changing if any recent gradients were non-zero. This is typically implemented as a rolling mean of recent gradients. * Weight decay (aka L2 weight regularisation) is often implemented as an additional gradient term and may adjust weights down even in the absence of signals from prediction errors. If either of these extensions to basic gradient descent is active, or anything similar, then it is possible for the neural network to move out of the stationary zone that you have created after a few steps and then continue learning. Otherwise, yes, it is correct that the neural network's weights would not change at all through gradient descent, and the NN would remain unchanged for any of your input values. Your careful initialisation of biases and weights will have created a system that is unable to learn from the given data. This is a known problem with ReLU activation, and can happen to some percentage of artificial neurons during training with normal start conditions.
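The dead-ReLU situation in the question is easy to reproduce numerically. In the sketch below (all numbers made up), the weights are chosen so that every pre-activation is negative, and the hand-computed gradient of a squared loss with respect to `W` comes out exactly zero:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

X = np.array([[1.0, 2.0], [0.5, 1.5]])  # two inputs, two features
W = -np.ones((2, 3))                     # chosen so X @ W + b < 0 everywhere
b = np.array([-0.1, -0.1, -0.1])
w_out = np.ones(3)                       # fixed output layer for simplicity
y = np.array([1.0, 0.0])

z = X @ W + b              # all entries negative -> ReLU outputs 0
h = relu(z)
pred = h @ w_out
loss = ((pred - y) ** 2).mean()

# Backprop by hand: dL/dW = X^T @ (d_h * relu'(z)), and relu'(z) = 0 here
d_pred = 2 * (pred - y) / len(y)
d_h = np.outer(d_pred, w_out)
d_z = d_h * (z > 0)        # zero everywhere: the dead-ReLU condition
grad_W = X.T @ d_z
assert np.all(z < 0)
assert np.allclose(grad_W, 0.0)
```

Adding momentum or weight decay, as discussed in the answer, would still produce non-zero parameter updates even though this gradient term stays zero.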
Other activation functions such as sigmoid have similar problems - although the gradient is never zero in many of these, it can be arbitrarily low, so it is possible for parameters to get into a state where learning is so slow that the NN, whilst technically learning *something* on each iteration, is effectively stuck. It is not always easy to tell the difference between these unwanted states of a NN and the goal of finding a useful minimum error. Upvotes: 3 [selected_answer]<issue_comment>username_2: **Learning and Zero Derivatives** Artificial networks are designed so that they can learn even when the partial derivative of a single activation function is zero. They can also be designed to continue learning when the derivative of the loss function is zero too. This resilience to a vanishing feedback signal amplitude, by design, determines some of how calculus results are employed in the learning algorithm, hardware acceleration circuitry, or both. By learning behavior is meant the behavior of the changes to the parameters of the network as learning occurs. For many of the activation functions used today, the derivative of the activation function is never exactly 0, but there are such cases. These are examples of when the evaluation of the derivative of the activation function is effectively zero. * All the time for a binary step activation function, which is why it is usually only used for the very last layer of a network to discretize the output * When the input of a ReLU activation function is negative, which is the case given in the question * When the granularity of the IEEE representation of the number can no longer support the smallness of the absolute value of the number upon evaluation * When the loss is zero **Nearing Zero** This last condition can easily occur if the result of the loss function output is so close to zero that the digital products of that number, during propagation, round to zero in the floating point hardware.
Even if not zero, the number can be so small that learning slows to an untenable speed. The learning process either oscillates, in many cases chaotically, because of rounding phenomena, or finds a static state and remains there. Again, this does not necessarily require a zero partial in the Jacobian. **A Familiar Analogy** The cognitive equivalent that helps intuition in understanding this, but is not at all a great and across-the-board accurate analogy, is the mental concept of doubt. The advantages of various directions of change or action to produce change are no longer clear. This is a rough analogy that some can connect to when considering what it means when the gradient is vanishing. When looking at gradient in historical context, where gradient is the slope of a surface in a location where gravity defines which direction is down, a vanishing gradient is a place where no direction seems to be downhill. **Flat in One Dimension by Design** In the question, where an inner layer[2] is a ReLU activation function, the evaluation of the partial derivative of the loss function with respect to the parameter being adjusted will always be zero if its input is negative. However, this is by design and is one of the reasons ReLU trains fast. When the signal is negative going into the layer at that particular cell, it is thrifty to ignore it. The other cells upstream are then altered through other paths around the deactivated cell with the zero partial. A neuroscientist might smile at the oversimplification, but this is like a missing synapse between two adjacent cells in the brain. > > In a setup like the above where the derivative is 0, is it true that a [network] won't learn anything? > > > It is false. Learning will stop if all the derivatives in a layer are zero and no other device, such as curvature, momentum, regularization, or other devices controlled by hyper-parameters, is employed.
Even so, zeros across the layer would only affect the adjustment of upstream layers, layers closer to the input. Downstream, convergence activity may continue in such a case. Zero and close-to-zero values (as well as near overflows) are kinds of saturation conditions, and these are studied carefully in artificial network research, but a single cell with a zero partial will not stop learning and may, in specific cases, ensure its completion and the adequacy of its result. **Some Calculus** In mathematical terms, if the Jacobian has a zero in one position, the others may remain active indicators of proper adjustment magnitude and direction for the individual parameters. If the Hessian is used, or various types of momentum, regularization, or other techniques are employed, zeros across the Jacobian will probably not block upstream learning, which is part of the reason why they are used. **An Analogy for Momentum** The analogy can again be employed to clarify momentum as a principle, with the caveat that it is again an oversimplification. Beliefs have momentum, so when a belief exists and all other indications of direction for the next step are unsure, most will base their next step upon their beliefs. This is how all organisms with a brain tend to work and why mouse traps and spider webs can catch. Without viable feedback from which learning can occur, the organism will act based on the momentum in its DNA or the networks of its brain. This is usually beneficial, but can lead to loss in specific cases. Gradients are not foolproof either. The problem of local minima can render pure gradient descent dysfunctional as well. **An Analogy for Curvature** Curvature (as when the Hessian is employed) requires a different analogy. If a smart, blind person is on a dry, flat surface, thirsty, and needing water, and the gradient is flat, they may feel with their foot or cane for some indication of curvature.
If some down-curving feature of the surface is found, that may guide the person to water in more cases than a random step. As hardware acceleration improves, the Hessian, which is too computationally heavy for CPU execution in most cases, may emerge as standard practice. Mathematically, this is simply moving from two terms of the Taylor series expansion in multivariate space to three terms. From a mechanics perspective, it is the inclusion of acceleration in addition to velocity. --- **Footnotes** [1] Loss or any of these functions that drive behavior in AI systems: error, disparity, value, benefit, optimality, and others of similar evaluative nature. [2] Inner layers in artificial networks are often called hidden layers, but that is not technically correct. Even though the encapsulation of a neural network may hide signals in inner layers from the programmer, it is a low-level software design feature to do that, and a bad one. One can usually, and definitely should be able to, monitor those signals and produce statistics on them. They are not hidden in the mathematics, as in some kind of mathematical mystery, and the only difference between them and the output layer is that the output layer is intended to drive or influence something outside the network. Upvotes: 1 <issue_comment>username_3: People often place a batchnorm layer before ReLU. That effectively prevents the problem you have described. Upvotes: 1
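The momentum and weight-decay caveats in the answers above are easy to check numerically. Below is a minimal sketch (plain Python with made-up numbers, not from the original thread) of a single ReLU unit whose pre-activation is negative for every input: plain SGD leaves the weight frozen, while an L2 weight-decay term still moves it.

```python
# One ReLU unit y = relu(w*x + b) with squared-error loss against target t.
# The data below keep w*x + b < 0 for every input, so the ReLU gradient is 0.
def relu(z):
    return max(0.0, z)

def grad_w(w, b, x, t):
    z = w * x + b
    # dL/dw = 2*(y - t) * relu'(z) * x, with relu'(z) = 1 if z > 0 else 0
    return 2.0 * (relu(z) - t) * (1.0 if z > 0 else 0.0) * x

w0, b, lr = -1.0, -0.5, 0.1
data = [(1.0, 1.0), (2.0, 3.0)]          # every (x, t) gives w*x + b < 0

# Plain SGD: the weight never moves.
w_sgd = w0
for _ in range(100):
    for x, t in data:
        w_sgd -= lr * grad_w(w_sgd, b, x, t)

# SGD with L2 weight decay: the decay term still shrinks the weight,
# even though the data-driven gradient stays zero the whole time.
w_wd, decay = w0, 0.01
for _ in range(100):
    for x, t in data:
        w_wd -= lr * (grad_w(w_wd, b, x, t) + decay * w_wd)

print(w_sgd, round(w_wd, 4))   # w_sgd is still -1.0; w_wd has drifted toward 0
```

In this tiny example the bias keeps the pre-activation negative throughout, so only the decay term acts; in a real network, such drift (or momentum from earlier non-zero gradients) can eventually push a unit back into the active region, which is the escape route described above.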
2019/01/24
817
3,075
<issue_start>username_0: I am curious if it is possible to do so. For example, if I supply * $[0, 1, 2, 3, 4, 5]$, the model should return "natural number sequence", * $[1,3,5,7,9,11]$, it should return "natural number with step of $2$", * $[1,1,2,3,5]$, it should return "Fibonacci numbers", * $[1,4,9,16,25]$, it should return "square natural number" and so on.<issue_comment>username_1: Those all fit into a single quadratic, auto-correlated model. $$ x\_0 = a \\ x\_i = b i^2 + c x\_{i-1} + d i + e $$ The sequences can be curve fitted producing a set of $n$ perfect fits of the form $(a, b, c, d, e)$ given the above model. A rules engine given the correct parameterized rules can produce the most desirable verbal description from among the $n$ fits in the set. The rules can also be prioritized by a simple feed forward network trained to simulate the most natural selection of string descriptions from any set of fits where $n > 1$. This will work well for the examples in the question and many more, however, if the sequence $\{1, 4, 1, 5, 9\}$ is fed into the system, it will produce some weird description based on the quadratic, auto-correlated model it was given rather than, "digits of $\pi$ to the right of the decimal place." The only way to produce the most common response a university freshman math student would produce would be to extend the boundaries of AI engineering first. For example, once an AI system is developed that can handle natural language and cognition like a child, several of them can be separately trained in a simulation of primary and secondary school mathematics. The median response for each sequence given to the class of AI students (class made up of artificial students studying math, not class of humans studying AI) will then be a reasonable prediction of what human university freshmen would produce as a median response. 
Upvotes: 1 <issue_comment>username_2: This can be framed as a classification problem where a model is supervised on a dataset containing finite-length number sequences $x^{(i)}\_1, \cdots, x^{(i)}\_n$ and the sequence name $y\_i$. For example, the dataset could look like this: * `([0,1,2,3,4], 0)` * `([1,3,5,7,9], 1)` * `([1,1,2,3,5], 2)` * `([1,4,9,16,25], 3)` where the numbers on the right are integer representations of the sequence name. Given intuitive sequences, plentiful training data, and training examples of reasonable length, this problem would not be that difficult to solve. Sequence models from deep learning, such as recurrent networks (LSTM, GRU) or temporal convolutional networks, are well-suited for tasks such as this one. Of course, this is only possible within certain constraints. The models are only good at what they are trained to do, so it would be impossible to use them to infer whether a sequence skips by 2 or 3 without having that information explicitly present in the training data. It would be interesting to see whether unsupervised models could detect this sort of information, although I don't think this work has been done at present. Upvotes: -1
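As a quick check of the first answer's claim, the quadratic, auto-correlated model $x\_i = b i^2 + c x\_{i-1} + d i + e$ can be fitted to each example sequence with ordinary least squares. This is only a sketch (it assumes `numpy` is available; the `fit` helper is invented here, not taken from the thread):

```python
import numpy as np

def fit(seq):
    # Fit x_i = b*i**2 + c*x_{i-1} + d*i + e for i >= 1 (x_0 = a is given).
    rows = [[i**2, seq[i - 1], i, 1.0] for i in range(1, len(seq))]
    targets = [seq[i] for i in range(1, len(seq))]
    coef = np.linalg.lstsq(np.array(rows, dtype=float),
                           np.array(targets, dtype=float), rcond=None)[0]
    return coef  # (b, c, d, e)

for seq in ([0, 1, 2, 3, 4, 5], [1, 3, 5, 7, 9, 11],
            [1, 1, 2, 3, 5], [1, 4, 9, 16, 25]):
    b, c, d, e = fit(seq)
    # Re-generate the sequence from the fitted coefficients.
    pred = [float(seq[0])]
    for i in range(1, len(seq)):
        pred.append(b * i**2 + c * pred[-1] + d * i + e)
    print(seq, "->", [round(p, 4) for p in pred])
```

All four example sequences are reproduced exactly by some coefficient vector, as the answer claims, whereas the digits-of-π counterexample from the answer would not carry the intended meaning.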
2019/01/25
1,737
7,466
<issue_start>username_0: I'm quite new to the field of computer vision and was wondering what the purpose of bounding boxes in object detection is. Obviously, it shows where the detected object is, and a classifier can only classify one object per image, but my questions are: 1. If I don't need to know 'where' an object is (or objects are) and am just interested in the existence of them and how many there are, is it possible to just get rid of the bounding boxes? 2. If not, how do bounding boxes help detect objects? From what I have figured, a network (if using a neural network architecture) predicts the coordinates of the bounding boxes if there is something in the feature map. Doesn't this mean that the detector already knows where the object is (at least briefly)? So, continuing from question 1, if I'm not interested in the exact location, would training for bounding boxes be irrelevant? 3. Finally, in architectures like YOLO, it seems that they predict the probability of each class on each grid (e.g. 7 x 7 for YOLO v1). What would be the purpose of bounding boxes in this architecture other than that it shows exactly where the object is? Obviously, the class has already been predicted, so I'm guessing that it doesn't help classify better.<issue_comment>username_1: In principle, you could train the model to output a sigmoid map of coarse object positions (0 -> no object, 1 -> an object center is located here). The map could be subjected to non-maximum suppression, and such a model could be trained end-to-end. That would be possible, if that's what you are asking. Upvotes: -1 <issue_comment>username_2: A bounding box is a rectangle superimposed over an image within which all important features of a particular object are expected to reside.
Its purpose is to reduce the range of search for those object features and thereby conserve computing resources: allocation of memory, processors, cores, processing time, some other resource, or a combination of them. For instance, when a convolution kernel is used, the bounding box can significantly limit the range of the travel for the kernel in relation to the input frame. When an object is in the forefront of a scene and a surface of that object is faced front with respect to the camera, edge detection leads directly to that surface's outlines, which lead to object extent in the optical focal plane. When edges of object surfaces are partly obscured, the potential visual recognition value of modelling the object, depth of field, stereoscopy, or extrapolation of spin and trajectory increases to make up for the obscurity. > > A classifier can only classify one object per image > > > A collection of objects is an object, and the objects in the collection or statistics about them can be characterized mathematically as attributes of the collection object. A classifier dealing with such a case can produce a multidimensional classification of that collection object, the dimensions of which can correspond to the objects in the collection. Because of that case, the statement is false. > > 1) If I don't need to know 'where' an object is (or objects are) and am just interested in the existence of them and how many there are, is it possible to just get rid of the bounding boxes? > > > If you have sufficient resources or patience to process portions of the frame that don't contain the objects, yes. Questions (2) and (3) are already addressed above, but let's look at them in that context. > > 2.a) If not, how do bounding boxes help detect objects? > > > It helps by fulfilling its purpose, to reduce the range of the search.
If, by a thrifty method, a bounding shape of any type can be created, then the narrowing of focus can be used to reduce the computing burden on the less thrifty method by eliminating pixels that are not necessary to peruse with more resource-consuming-per-pixel methods. These less thrifty methods may be necessary to recognize surfaces, motion, and obscured edges and reflections so that the detection of object trajectory can be obtained with reliability. That these thrifty mechanisms to find the region of focus and these less thrifty mechanisms to use that information and then determine activity at higher levels of abstraction are artificial networks of this type or that, or use algorithms of this type or that, is not relevant yet. First understand the need to reduce computing cost in AI, which is a pervasive concept for anything more complex than tic-tac-toe, and then consider how bounding boxes help the AI engineer and the stakeholders of the engineering project to procure technology that is viable in the market. > > 2.b) From what I have figured, a network (if using a neural network architecture) predicts the coordinates of the bounding boxes if there is something in the feature map. Doesn't this mean that the detector already knows where the object is (at least briefly)? > > > 2.c) So continuing from question 1, if I'm not interested in the exact location, would training for bounding boxes be irrelevant? > > > Cognition is something AI seeks to simulate, and many hope to have robots like in the movies that can help out and be invaluable friends, like TARS in the Nolan brothers' 2014 film Interstellar. We're not there. The network knows nothing. It can train a complex connection between an input signal through a series of attenuation matrices and activation functions to produce an output signal statistically consistent with its loss function, value function, or some other criteria.
The inner layers of an artificial net may, if not directed to do so, produce something equivalent to a bounding region only if velocity of convergence is present as a factor in its loss or value function. Otherwise there is nothing in the Jacobian leading convergence to reduce its own time to completion. Therefore, the process may complete, but not as well as if cognition steps in and decides that the bounding region will be found first and then used to reduce the total burden of mechanical (arithmetic) operations to find the desired output signal as a function of input signal. > > 3) Finally, in architectures like YOLO, it seems that they predict the probability of each class on each grid (e.g. 7 x 7 for YOLO v1). What would be the purpose of bounding boxes in this architecture other than that it shows exactly where the object is? Obviously, the class has already been predicted so I'm guessing that it doesn't help classify better. > > > Reading the section in [*A Real-Time Chinese Traffic Sign Detection Algorithm Based on Modified YOLOv2*](https://www.mdpi.com/1999-4893/10/4/127/pdf), <NAME>, <NAME>, <NAME>, <NAME>, 2017, may help further comprehension of these principles and their almost universal role in AI, especially the text around their statement, "The Network Architecture of YOLO v2 YOLO employs a single neural network to predict bounding boxes and class probabilities directly from full images in one inference. It divides the input image into S × S grids." This way you can see the use of these principles in the achievement of specific research goals. Other such applications can be found by simply reading the article full text available on the right side of [an academic search for yolo algorithm](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C10&q=YOLO+algorithm) and using ctrl-f to find the word bound. Upvotes: 3 [selected_answer]
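The "reduce the range of search" argument in the accepted answer can be made concrete with a back-of-the-envelope count of convolution-kernel placements. The frame, box, and kernel sizes below are made-up numbers for illustration, not taken from the answer:

```python
# Count of valid placements of a k x k kernel over an h x w region.
def placements(h, w, k):
    return (h - k + 1) * (w - k + 1)

frame_h, frame_w, kernel = 1080, 1920, 7   # hypothetical full frame
box_h, box_w = 200, 300                    # hypothetical bounding box

full = placements(frame_h, frame_w, kernel)
boxed = placements(box_h, box_w, kernel)
print(full, boxed, round(full / boxed, 1))
```

With these numbers, restricting the kernel to the box cuts the number of placements by roughly a factor of 36; that saving is what makes per-pixel-expensive methods affordable inside the box.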
2019/01/25
1,443
5,638
<issue_start>username_0: I'm reading the book *Artificial Intelligence: A Modern Approach* (by <NAME> and <NAME>). However, I don't understand the difference between search and planning. I was more confused when I saw that some search problems can be formulated as planning problems. My professor explained to me in a confusing way that the real difference is that search uses a heuristic function, but my book says that planning uses a heuristic too (in chapter 10.2.3). I read [this Stack Overflow post](https://stackoverflow.com/q/10282476/3924118) that says, in a certain way, what I'm saying. So, what is the difference between search and planning?<issue_comment>username_1: ### What is planning? **Planning** (aka [*automated planning* or *AI planning*](https://en.wikipedia.org/wiki/Automated_planning_and_scheduling)) is the process of **searching** for a **plan**, which is a sequence of actions that bring the agent/world from an initial state to one or more goal states, or a **policy**, which is a function from states to actions (or probability distributions over actions). Planning is not just the use of a search algorithm (e.g. a state-space search algorithm, such as A\*) to find the plan, but planning also involves the modelling of a problem by describing it in some way, e.g. with an **[action language](https://ep.liu.se/ea/cis/1998/016/cis98016-revised.pdf)**, such as [PDDL](https://en.wikipedia.org/wiki/Planning_Domain_Definition_Language) or [ADL](https://en.wikipedia.org/wiki/Action_description_language), or even with propositional logic. There are different *planning problems*, such as * [**classical planning**](https://users.aalto.fi/%7Erintanj1/jussi/planning.html) * planning in **Markov Decision Processes** (which are mathematical models that describe stochastic environments/problems, i.e. environments where an action $a$ may not always lead to the same outcome) * [**temporal planning**](https://users.aalto.fi/%7Erintanj1/jussi/temporalplanning.html) (i.e.
when multiple actions can be taken at the same time and/or their duration may be different/variable) * [hierarchical vs non-hierarchical planning](https://urresearch.rochester.edu/fileDownloadForInstitutionalItem.action?itemId=5342&itemFileId=8258) There are different *planning approaches*, such as * planning by [**state-space search**](https://en.wikipedia.org/wiki/State_space_search) (with or without heuristics), i.e. formulate the planning problem so that you can apply a state-space search algorithm (examples of state-space search algorithms are A\*, IDA\*, WA\*, DFS or BFS); this is probably the approach to planning that makes people confuse planning with search (which typically implicitly refers to state-space search) * planning by [**SAT**](https://users.aalto.fi/%7Erintanj1/jussi/satplan.html), i.e. formulate the planning problem so that you can use a SAT solver to solve it (e.g. [SATPLAN](https://www.cs.cornell.edu/selman/papers/pdf/92.ecai.satplan.pdf)) * planning by solving **constraint satisfaction problems** (similar to SAT) * [planning by "symbolic" search methods](https://users.aalto.fi/%7Erintanj1/jussi/planning.html) (such as Binary Decision Diagrams) There are many different *planning algorithms* or *planners* (of course, each of these corresponds to some approach or is used to solve a particular planning problem), such as * [**policy iteration**](http://incompleteideas.net/book/first/ebook/node43.html) in the context of **Markov decision processes** * [GraphPlan](https://www.cs.cmu.edu/%7Eavrim/graphplan.html) * STRIPS (which stands for Stanford Research Institute Problem Solver, but note that STRIPS also and maybe more commonly refers to the corresponding description language used to describe the planning problem) ### What is search? In the context of AI, when people use the word **search**, they can either implicitly refer to **state-space search** (with algorithms like A\*) or, more vaguely/generally, to any type of search, e.g.
gradient descent can also be viewed as a search algorithm, typically, in a continuous state space (rather than discrete state space), e.g. the space of real-valued vectors of parameters for a certain model (e.g. a neural network). ### What is the difference between planning and search? The difference should more or less be clear from my first paragraph above that describes planning: **search is used for planning**, but the planning process/study also includes other things, such as describing the problem with an action language, such as PDDL. You can also view planning as an application of search algorithms of different types. In any case, planning is about searching for plans or policies. You should probably check [this course](https://open.ed.ac.uk/artificial-intelligence-planning/) for more info. This definition of (automated) planning is consistent with [our usual notion of planning](https://en.wikipedia.org/wiki/Planning). Upvotes: 2 <issue_comment>username_2: Terminology is often vague, and if this will be a test item for school, I would check my response with the teacher before accepting it. The key difference is prior knowledge. Search - when one searches, they explore without prior knowledge. This would likely be a stochastic process of trial and error. There is no expectation about state transitions. An example is exploring an abandoned house. You have no idea what you'll find next. Plan - you have a model of state transitions and can make predictions. One-step or multi-step prediction capabilities allow you to predict an outcome and therefore choose an outcome from several options. Upvotes: -1
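The "planning by state-space search" idea from the first answer can be shown in a few lines: given a model of state transitions (the "prior knowledge" the second answer mentions), a plain breadth-first search returns a plan, i.e. a sequence of actions from the initial state to the goal. The toy domain below is invented for illustration:

```python
from collections import deque

# Invented toy domain: states are strings; each action is a named,
# deterministic transition.
ACTIONS = {
    "at_home": [("drive", "at_work"), ("walk", "at_park")],
    "at_park": [("walk", "at_home")],
    "at_work": [("drive", "at_home"), ("elevator", "at_desk")],
    "at_desk": [],
}

def plan(start, goal):
    # Breadth-first state-space search: returns a shortest plan, or None.
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for name, nxt in ACTIONS[state]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [name]))
    return None

print(plan("at_home", "at_desk"))   # ['drive', 'elevator']
```

A full planner would additionally include the modelling step the answer emphasizes — e.g. deriving a transition table like `ACTIONS` from a PDDL-style domain description rather than writing it by hand.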
2019/01/24
1,086
4,418
<issue_start>username_0: In [Open AI's actor-critic](https://github.com/pytorch/examples/blob/master/reinforcement_learning/actor_critic.py) and in [Open AI's REINFORCE](https://github.com/pytorch/examples/blob/master/reinforcement_learning/reinforce.py), the rewards are being normalized like so ``` rewards = (rewards - rewards.mean()) / (rewards.std() + eps) ``` *on every episode individually*. This is probably the [baseline reduction](https://qr.ae/TWGVk2), but [I'm not entirely sure why they divide by the standard deviation of the rewards](https://ai.stackexchange.com/q/10194/2444)? Assuming this is the baseline reduction, why is this done *per episode*? What if one episode yields rewards in the (absolute, not normalized) range of $[0, 1]$, and the next episode yields rewards in the range of $[100, 200]$? This method seems to ignore the absolute difference between the episodes' rewards.<issue_comment>username_1: We subtract the mean from the values and divide by the standard deviation to get data with a mean of zero and a variance of one. The range of values per episode does not matter; it will always end up with zero mean and variance of one in all cases. If the range is bigger ([100, 200]), then the deviation will be bigger as well than for a smaller range ([0, 1]), so we will end up dividing by a bigger number. Upvotes: 2 <issue_comment>username_2: The "trick" of subtracting a (state-dependent) baseline from the $Q(s, a)$ term in policy gradients to reduce variance (which is what is described in your "baseline reduction" link) is a different trick from the modifications to the rewards that you are asking about. The baseline subtraction trick for variance reduction does not appear to be present in the code you linked to. The thing that your question is about appears to be standardization of rewards, as described in [username_1\_'s answer](https://ai.stackexchange.com/a/10200/1641), to put all the observed rewards in a similar range of values.
Such a standardization procedure inherently requires division by the standard deviation, so... that answers that part of your question. As for why they are doing this on a per-episode-basis... I think you're right, **in the general case** this seems like a bad idea. If there are rare events with extremely high rewards that only occur in some episodes, and the majority of episodes only experience common events with lower-scale rewards... yes, this trick will likely mess up training. **In the specific case of the CartPole environment** (which is the only environment used in these two examples), this is not a concern. In this implementation of the CartPole environment, the agent simply receives a reward with a value of exactly $1$ for every single time step in which it manages to "survive". The `rewards` list in the example code is in my opinion poorly-named, because it actually contains **discounted returns** for the different time-steps, which look like: $G\_T = \sum\_{t=0}^{T} \gamma^t R\_t$, where all the individual $R\_t$ values are equal to $1$ **in this particular environment**. These kinds of values tend to be in a fairly consistent range (especially if the policy used to generate them also only moves slowly), so the standardization that they do may be relatively safe, and may improve learning stability and/or speed (by making sure there are always roughly as many actions for which the probability gets increased as there are actions for which the probability gets decreased, and possibly by making hyperparameters easier to tune). It does not seem to me like this trick would generalize well to many other environments, and personally I think it shouldn't be included in such a tutorial / example. --- **Note**: I'm quite sure that a per-episode subtraction of the mean returns would be a valid, albeit possibly unusual, [baseline for variance reduction](https://spinningup.openai.com/en/latest/spinningup/rl_intro3.html#baselines-in-policy-gradients). 
It's the subsequent division by standard deviation that seems particularly problematic to me in terms of generalizing to many different environments. Upvotes: 4 [selected_answer]<issue_comment>username_3: This question is discussed in detail, in the following NeurIPS 2016 paper by <NAME>: [Learning values across many orders of magnitude](https://arxiv.org/pdf/1602.07714.pdf). They also give experimental results over the Atari domain. Upvotes: 3
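The accepted answer's point — that per-episode standardization discards the absolute scale difference between episodes — can be seen directly. A small sketch using Python's `statistics` module instead of PyTorch (both use the sample standard deviation here; the episode values are made up):

```python
from statistics import mean, stdev

def standardize(rewards, eps=1e-8):
    m, s = mean(rewards), stdev(rewards)
    return [(r - m) / (s + eps) for r in rewards]

ep_small = [0.1, 0.4, 0.7, 1.0]          # rewards roughly in [0, 1]
ep_large = [100.0, 130.0, 160.0, 190.0]  # rewards roughly in [100, 200]

# Both episodes map to the same standardized values.
print([round(v, 4) for v in standardize(ep_small)])
print([round(v, 4) for v in standardize(ep_large)])
```

The two printed lists are identical, which is precisely the information loss the question worries about: after standardization, nothing distinguishes a low-reward episode from a high-reward one.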
2019/01/26
1,024
4,466
<issue_start>username_0: I am trying to make a personal ML project where my objective is using a photo from an invoice, for instance, a Walmart invoice, classify it as being a Walmart invoice and extract the total amount spent. I would then save this information in a relational database and infer some statistics about my spendings. The goal would be to classify invoices not only from Walmart but from the most frequent shops where I spend money and then extract the total amount spent. I already do this process manually, I insert my spendings in a relational database. I have a bunch of photos from different invoices that I have recorded over the past year for this purpose (training a model). What algorithms would you guys recommend? From my point of view, I think that I need some natural language processing to extract the total amount spent and maybe a convolutional neural network to classify the invoice as being from a specific store? Thanks!
2019/01/26
449
2,043
<issue_start>username_0: I am using [Open AI's](https://github.com/pytorch/examples/blob/master/reinforcement_learning/actor_critic.py) code to do a RL task on an environment that I built myself. I tried some network architectures, and they all converge, faster or slower on CartPole. On my environment, the reward seems not to converge, and keeps flickering forever. I suspect the neural network is too small, but I want to confirm my belief before going the route of researching the architecture. How can I confirm that the architecture is the problem and not anything else in a neural network reinforcement learning task?<issue_comment>username_1: Check the function loss. ======================== It might be that your environment is impossible to learn. However, most likely the network simply can't handle it. By measuring the loss during the learning stage, if you find it is always very high and does not decrease, it's a strong indication this might be the issue. Because the network is too simple, when you optimize for some states, you ruin others. There is not formal way to find out if this is the case, but since the same algorithm works elsewhere, it's either a problem of your environment, or of the network. Upvotes: 1 <issue_comment>username_2: Loss not decreasing during the training does not signify that network don't train. It may mean network is in the exploration mode. I have observed that situation - accumulated reward growing steadily, which mean network is training well, but loss is not decreasing. If you already know solution to some (simpler or other) version of your environment, you can train network in supervised manner to reproduce that solution. If network unable reproduce existing solution that is strong indication that network is too small or otherwise not good. The other reason for accumulated reward oscillation could be network overfitting on latest training samples. 
In that case, a bigger replay buffer or a slower update of the target network (if a target network is used) may help. Upvotes: 0
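The supervised sanity check suggested by username_2 could be sketched like this (a toy illustration only, not the OP's actual environment; the data, architecture, and hyperparameters here are all made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "known solution": a mapping from states to optimal actions.
X = rng.normal(size=(256, 4))                       # toy states
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)[:, None]  # known best action

def bce(p):
    """Binary cross-entropy of predictions p against labels y."""
    return float(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))

def supervised_check(hidden, steps=2000, lr=0.5):
    """Train a tiny MLP to reproduce the known solution; return first/last loss."""
    W1 = rng.normal(scale=0.5, size=(4, hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    losses = []
    for _ in range(steps):
        h = np.tanh(X @ W1)                 # hidden activations
        p = 1 / (1 + np.exp(-(h @ W2)))     # predicted action probability
        losses.append(bce(p))
        err = (p - y) / len(X)              # gradient of BCE w.r.t. logits
        gW2 = h.T @ err
        gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2))
        W2 -= lr * gW2
        W1 -= lr * gW1
    return losses[0], losses[-1]

first, last = supervised_check(hidden=16)
# If `last` is not clearly below `first`, the architecture (or training
# setup) cannot even reproduce a known solution -- a red flag.
```

If the loss refuses to drop even on a problem with a known answer, the architecture itself is the prime suspect.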
2019/01/27
611
2,517
<issue_start>username_0: In an MLP with ReLU activation functions after each hidden layer (except the final one), let's say the final layer should output positive and negative values. With ReLU intermediary activations, this is still possible because the final layer, despite taking in only positive inputs, can combine them to produce negative values. However, would using leaky ReLU allow faster convergence, because you can pass negative values as input to the final layer instead of waiting until the final layer to make things negative?<issue_comment>username_1: In short, yes, leaky ReLU helps with faster convergence if your output requires both positive and negative values. But the catch is that you need to tune the negative slope of leaky ReLU, which is a hyperparameter, to get better accuracy. Upvotes: 2 <issue_comment>username_2: Both the output and the gradient at the negative part of leaky ReLU are 100 times lower than at the positive part. I doubt that they have any significant impact on the training direction and/or on the final output of a trained model unless the model is severely underfitting. Upvotes: 0 <issue_comment>username_3: The answer depends on a case-by-case basis. It may so happen that a dataset performs very well with ReLU but takes more iterations to converge with leaky ReLUs or PReLUs, or vice versa. There are 2 arguments to consider here: * ReLU is the **most non-linear** among all the ReLU variants, and by this not-so-mathematical term I mean that it has the steepest drop in slope at 0, compared to any other type of modified ReLU. * ReLUs omit negative values, which can be a significant problem in the context of data normalisation. As this [video](https://www.youtube.com/watch?v=wEoyxE0GP2M&list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv&index=6) `(~10:00)` from Stanford explains, data normalisation is necessary in the context of the **`signs`** of weight updates, so we can very roughly say that any form of leaky ReLU somewhat normalises the data.
So, theoretically speaking (this might not be mathematically rigorous), if (and only if) all the inputs have a positive correlation to the output (input increases, output also increases), ReLU should work very well and converge faster, whereas if there is negative correlation as well, then leaky ReLUs might work better. The point is that, unless someone gives a definitive mathematical account of what goes on inside a NN while it is being trained, it's hard to tell which will work well and which will not, other than by intuition. Upvotes: 2
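For reference, a minimal numerical sketch of the two activations being compared (the 0.01 negative slope is just a common default, not a recommendation):

```python
import numpy as np

def relu(x):
    # Discards negative inputs entirely: zero output, zero gradient.
    return np.maximum(0.0, x)

def leaky_relu(x, slope=0.01):
    # Keeps a scaled negative signal (and a nonzero gradient);
    # `slope` is the tunable negative-slope hyperparameter.
    return np.where(x >= 0, x, slope * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
# relu(x)       -> negative entries are mapped to 0
# leaky_relu(x) -> negative entries survive, scaled down 100x by default
```

This makes concrete why a leaky ReLU can feed negative values forward while a plain ReLU cannot.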
2019/01/28
883
4,095
<issue_start>username_0: Can someone explain what is the process of learning? What does it mean to learn something?<issue_comment>username_1: That is a wonderfully fundamental question. > > Learning is the use of a system to change another system so that, instead of doing what it did before, which may have been nothing, it does something else. > > > In the human brain, the system is the way that genetic expression caused the directed mutability of that brain so that human intentions and responses to external stimuli would alter. The direction of alteration is based on incentives and deterrents. In classes of children, there are things a teacher may do or a culture in the school that motivate learning. Misbehavior is deterred through other mechanisms. In that case, the first system is the educational process and the second system is the brain's ability to use what it was taught. Even without classes, children learn things that create comfort, so comfort is the internal incentive. It is wired into us as a kind of teacher to dislike groin moisture, so we learn to use bathroom receptacles. It is wired into us to like praise from parents and teachers, so we tend to perform to get it. Things like that are complex losses and rewards and primary systems to incentivize learning. Learning in artificial networks is not nearly as complex as in human brains. In some cases the artificial results are better. In other cases the abilities of the human brain cannot be approached by artificial networks yet. The functioning of an artificial network often begins arbitrary and completely useless, but it's a parametric function, meaning the function can be changed by changing numbers called parameters. Each time the function is used to process an example drawn from a data set, the result is evaluated and the evaluation is used to modify the parameters. Care is taken to not over-modify the parameters, otherwise confusion can occur. 
Mathematicians call this type of system confusion *chaos*. Repeating this carefully directed and moderated process ideally leads to something called *convergence*. The learning system is set up so that the result of convergence is a set of parameters that minimize losses, maximize gains, or both. Sometimes initial learning is not enough and there are other related things to do later. * Adjusting learned behavior to adapt to new conditions or information * Unlearning things that no longer produce benefit so that new learning can replace them * Relearning things that had faded from memory * Developing greater confidence in what was learned so that unlearning requires greater impetus There are additional technical terms for the above concepts, more details that can be grasped, many categories of learning system types, varieties within those, and the mathematics that was used as a foundation to construct all this and make it work. Because the question was fundamental, those details and technicalities were omitted. Upvotes: 2 <issue_comment>username_2: Turing had much difficulty explaining how a computer could learn, and in his 1950 paper asked, How could a machine learn? Its possible behavior is completely defined by its rules of operation (program), whatever the machine's history (past, present, future) might be. His proposed solution was ephemerally valid rules, whatever they might be - and he doesn't say. So learning can be understood as the machine acquiring new behaviors but not by the behaviors being programmed into the machine by a human. Perhaps a better way to look at this is causally. A human can define the causation (possible behavior) of a computer by programming it into the machine. But this is a case of the human using their knowledge to define how the machine will react to situations. Learning is the case where the machine itself acquires new behaviors or possible behaviors (not by a human using their knowledge of the world).
And any such alleged intrinsic learning can easily be tested. If it helps the machine survive in a complex and hostile world, then it really is learning. Upvotes: 0
2019/01/29
1,536
6,268
<issue_start>username_0: By reading the abstract of [Neural Networks and Statistical Models](https://people.orie.cornell.edu/davidr/or474/nn_sas.pdf) paper it would seem that ANNs are statistical models. In contrast [Machine Learning is not just glorified Statistics](https://towardsdatascience.com/no-machine-learning-is-not-just-glorified-statistics-26d3952234e3). I am looking for a more concise/summarized answer with focus on ANNs.<issue_comment>username_1: What is a statistical model? ---------------------------- According to <NAME> (in the book [Statistical Models](https://fac.ksu.edu.sa/sites/default/files/statistical_models.pdf)), a statistical model is a probability distribution constructed to enable inferences to be drawn or decisions made from data. The probability distribution represents the variability of the data. Are neural networks statistical models? --------------------------------------- Do neural networks construct or represent a probability distribution that enables inferences to be drawn or decisions made from data? ### MLP for binary classification For example, a multi-layer perceptron (MLP) trained to solve a binary classification task can be thought of as model of the probability distribution $\mathbb{P}(y \mid x; \theta)$. In fact, there are many examples of MLPs with a softmax or sigmoid function as the activation function of the output layer in order to produce a probability or a probability vector. However, it's important to note that, although many neural networks produce a probability or a probability vector, a probability distribution is not the same thing. A probability alone does not describe a full probability distribution and different distributions are defined by different parameters (e.g. a Bernoulli is defined by $p$, while a Gaussian by $\mu$ and $\sigma$). However, for example, if you make your neural network produce a probability, i.e. 
model $\mathbb{P}(y = 1 \mid x; \theta)$, at least in the case of binary classification, you could obviously derive the probability of the other label as follows: $\mathbb{P}(y = 0 \mid x; \theta) = 1 - \mathbb{P}(y = 1 \mid x; \theta)$. In any case, in this example, you only need the parameter $p = \mathbb{P}(y = 1 \mid x; \theta)$ to define the associated [Bernoulli distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution). So, these neural networks (for binary classification) that model and learn some probability distribution given the data in order to make inferences or predictions *could* be considered statistical models. However, note that, once the weights of the neural network are fixed, given the same input, they always produce the same output. ### Generative models Variational auto-encoders (VAEs) construct a probability distribution (e.g. a Gaussian or $\mathbb{P}(x)$ that represents the probability distribution over images, if you want to generate images), so VAEs *can* be considered statistical models. ### Bayesian neural networks There are also [Bayesian neural networks](http://proceedings.mlr.press/v37/blundell15.html), which are neural networks that maintain a probability distribution (usually, a Gaussian) for each unit (or neuron) of the neural network, rather than only a point estimate. BNNs *can* thus also be considered statistical models. ### Perceptron The perceptron may be considered a "statistical model", in the sense that it learns from data, but it doesn't produce any probability vector or distribution, i.e. it is not a probabilistic model/classifier. Conclusion ---------- So, whether or not a neural network is a statistical model depends on your definition of a statistical model and which machine learning models you would consider neural networks. If you are interested in more formal definitions of a statistical model, take a look at [this paper](http://www.stat.uchicago.edu/%7Epmcc/pubs/AOS023.pdf). 
Parametric vs non-parametric ---------------------------- Statistical models are often also divided into parametric and non-parametric models. Neural networks are often classified as non-parametric because they make fewer assumptions than e.g. linear regression models (which are parametric) and are typically more generally applicable, but I will not dwell on this topic. Upvotes: 3 [selected_answer]<issue_comment>username_2: According to Wikipedia: > > A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data (and similar data from a larger population). A statistical model represents, often in considerably idealized form, the data-generating process. > > > Answer to your question: to build any neural network model, we assume the training, test, and validation data come from a probability distribution. So, if you produce a neural network model based on statistical data, then the network is a statistical model. Moreover, a neural network's cost function is generally that of a parametric model, and parametric models are statistical models. Please look at Goodfellow's Deep Learning book chapter [Deep Feedforward Networks](https://www.deeplearningbook.org/contents/mlp.html), pages 174 and 175. From Goodfellow's book > > Fortunately, the cost functions for neural networks are more or less the same as those for other parametric models, such as linear models. In most cases, our [parametric model](https://en.wikipedia.org/wiki/Parametric_model) defines a distribution $p(y \mid x; \theta)$ and we simply use the principle of maximum likelihood. > > > In conclusion, ANNs (e.g. MLP, CNN, etc.) are statistical models. Upvotes: 3 <issue_comment>username_3: A dataset can be thought of as a set of ordered pairs $\subset R^f \times R^l$, where $f$ is the feature dimension and $l$ the label dimension. Ordered pairs give rise to a statistical model (i.e.
a function from $R^f$ to a probability distribution over $R^l$) Then this statistical model is turned into function $R^f \mapsto R^l$ by taking the argmax. A neural network (trained on a dataset) is a compression of this statistical model composed with argmax. (think of how much smaller the GPT3 weights are than the dataset). So they are approximations to statistical models composed with argmax. Upvotes: 2
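The binary-classification point from the accepted answer can be made concrete with a minimal sketch (the logit value here is an arbitrary stand-in for a trained network's output):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Hypothetical output of the network's final unit before the sigmoid.
logit = 0.7

# p fully parameterizes a Bernoulli distribution over the label y:
p_y1 = sigmoid(logit)   # P(y = 1 | x; theta)
p_y0 = 1 - p_y1         # P(y = 0 | x; theta), derived as in the answer
```

The single parameter `p_y1` is enough to define the whole Bernoulli distribution, which is the sense in which such a classifier models a probability distribution rather than just emitting a number.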
2019/01/29
939
3,701
<issue_start>username_0: I'm trying to train a neural network to evaluate chess positions: whether white (0.0) or black (1.0) would win. Currently, the input consists of 4 bits per chess square (piece id 0 - 12, equals 64\*4 inputs). Factors like castling are being ignored for now. Also, all training samples are random positions from popular games where it's white's turn, and the desired output is the outcome of the game (0.0, 0.5, 1.0). Are my input values the right choice? How many hidden layers / neurons per layer should be used, and what's the best learning rate? What type of NN and which activation function would you recommend for this project?<issue_comment>username_1: Guessing the winner from a chess position is difficult for classification. In chess, even if you start from the same position, you can get different results depending on the players' actions. So, I recommend you use Temporal Difference (TD) learning, the driving concept behind [Reinforcement Learning](https://en.wikipedia.org/wiki/Reinforcement_learning). Some methods in Reinforcement Learning still use a neural net, but not for predicting the winner. The prediction in Q-learning, a popular Reinforcement Learning algorithm, estimates the "value" of choosing a certain action while in a certain position for a player. From those values, a player can choose the best action for the current position. The following references might interest you: * [Reinforcement learning for chess](https://arxiv.org/pdf/1509.01549.pdf) * [Deep reinforcement learning](https://neuro.cs.ut.ee/demystifying-deep-reinforcement-learning/) * [Predicting the Outcome of a Chess Game by Statistical and Machine Learning techniques](https://upcommons.upc.edu/bitstream/handle/2117/106389/119749.pdf?sequence=1&isAllowed=y) Upvotes: 1 <issue_comment>username_2: Easy ones first: * Activations are going to be RELU all the way down, until your final softmax layer (win probability?)
(because empirically, RELU does great on most problems, except when your model is an RNN, where it makes the gradient explode, or in the final layer of a regression model - numerical stability etc.). * You probably want to structure this with *some* layers of convolutions with max pooling between them, then 1 or 2 fully connected (FC) layers near the end (because if you only have FC layers, you probably won't have enough data to train them). * Well worth trying some 1D convolutions (which cleverly combine channels from convolutions created by previous layers). * Learning rate: take the SGD default to begin with, then tune later. Problem dependent, I'm afraid! The returns to tuning can be large though (my kids twigged to this quite quickly when playing with a toy problem). Now the hard bit - encoding your input: * Categorical encoding using a single integer per board position could cause your model some grief (it is an input that "looks like" a real number, but of course it isn't, and values that seem numerically close may represent pieces with radically different abilities - perhaps the code for King is 1 and Queen is 2 and Bishop is 3, but all those pieces have such different attributes). * I would strongly consider thinking of each piece/player combo as a "colour channel" - a 64 cell grid where each value is either zero or one (so, one channel for White's pawns, another for White's knights, and so on). Finally, those labels of yours: do think about what you'll do with *drawn* games - perhaps you have 3 possible outcomes, not 2? We would all be fascinated to hear how you get on - I hope you'll write your work up in some form (and that you'll come back and complain/praise our advice, as appropriate!). Upvotes: 3 [selected_answer]
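The "one channel per piece/player combo" suggestion from the accepted answer could be sketched as follows (the piece letters and board coordinates here are illustrative assumptions):

```python
import numpy as np

# 12 binary 8x8 planes: 6 piece types x 2 colours
# (uppercase = white, lowercase = black).
PIECES = ["P", "N", "B", "R", "Q", "K", "p", "n", "b", "r", "q", "k"]

def encode(board):
    """board: 8x8 list of piece letters, '.' for an empty square.

    Returns a (12, 8, 8) float32 tensor with one 'colour channel'
    per piece/player combination."""
    planes = np.zeros((12, 8, 8), dtype=np.float32)
    for r in range(8):
        for c in range(8):
            piece = board[r][c]
            if piece != ".":
                planes[PIECES.index(piece), r, c] = 1.0
    return planes

# Toy position: a lone white king.
board = [["."] * 8 for _ in range(8)]
board[0][4] = "K"
x = encode(board)
```

Each plane is zero or one everywhere, so numerically close input values never conflate pieces with radically different abilities, which is exactly the pitfall of the single-integer encoding.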
2019/01/30
1,654
7,040
<issue_start>username_0: From what I understand, Monte Carlo Tree Search Algorithm is a solution algorithm for model free reinforcement learning (RL). Model-free RL means the agent doesn't know the transition and reward model. Thus, the only way for it to know which next state it will observe and which reward it will get is to actually perform an action. My question is: if that is the case, then how come the agent knows which state it will observe during the rollout, since rollout is just a simulation, and the agent never actually perform that action ? (It never really **interacts** with the environment: e.g. it never really moves the piece in a Go game during a rollout or look-ahead, and thus cannot observe anything.) As I understand it, it can only assume it observes anything without actually interacting with the environment (during simulation) if it knows the transition model. The same argument goes for the rewards during rollout/simulation. In this case, doesnt rollout in Monte Carlo Tree Search algorithm suggests that the agent knows the transition model and reward model and thus a solution algorithm for model **based** reinforcement learning and not model **free** reinforcement learning ? \*\* It makes sense in AlphaGo, since the agent is trained to estimate what it would observe, but the MCTS (without policy and value network) method assumes that the agent knows what it would observe even though no additional training is included.
The alternative model, used by other RL techniques such as Value Iteration, is a *distribution* model which provides the full distribution of probabilities for rewards and next states. > > if that is the case, then how come the agent knows which state it will observe during the rollout, since rollout is just a simulation, and the agent never actually perform that action ? > > > It is not the case. The agent knows what it will observe during the simulation because the simulation *is* a sampling model. > > In this case, doesnt rollout in Monte Carlo Tree Search algorithm suggests that the agent knows the transition model and reward model and thus a solution algorithm for model based reinforcement learning and not model free reinforcement learning ? > > > Yes. This is how most planning algorithms work. The simulation *can* be driven purely by sampling from previous experience, which is how Dyna-Q works\*. I would still consider that a model-based approach, and its success depends a lot on how well such a model can be learned. In many cases, variance due to errors in the learned model adversely affects learning. So MCTS works best in environments that can be accurately sampled, because they are strongly rules-driven. For example, board games. --- \* Functionally DynaQ is almost identical to experience replay. So much so, that whether you consider it a planning algorithm or experience replay added to basic Q learning is more a matter of how you present the design of the learning agent - e.g. perhaps a designer wants to focus on the model-learning side more, so has code that explicitly represents the learned model. Upvotes: 2 <issue_comment>username_2: Whether or not MCTS is even a Reinforcement Learning algorithm at all [may be up for debate](https://ai.stackexchange.com/q/7589/1641), but let's assume that we view it as an RL algorithm here. 
**For practical purposes, MCTS really should be considered to be a Model-Based method.** Below, I'm going to describe how you could view it as a **Model-Free RL approach** in some way... and then wrap back to why that viewpoint isn't really often useful in practice. --- More specifically, following [this paper](https://jair.org/index.php/jair/article/view/11099/26289), we'll think of an MCTS search process as a value-based RL algorithm (it learns estimates of a value function, very much like Sarsa, $Q$-learning, etc.), which limits itself to learning values for the states that it chooses to represent by nodes in the search tree (this set of states that it chooses to represent gradually grows over time during the search process). Unlike traditional RL approaches, such an MCTS process doesn't really result in a policy or an exhaustive / generalizable value function estimator that can be extracted after the "training" process and re-used in many different states afterwards. We normally play a move after running MCTS, and then discard everything and start over again for the next move (maybe we'll keep a relevant part of the search tree and reuse that, but that's a minor detail... we certainly won't be able to re-use our search results in another match/game/episode). The **MCTS search process** itself can be viewed as a **Model-Free RL approach**; every iteration of the search can be viewed as an actual episode of an "agent" that is collecting experience in a model-free manner in a "real" environment (but not as real as the game for which we're running the complete search process), where this "internal agent" first follows the Selection Policy for a while (e.g. UCB1), and then a Play-out policy for the remainder of the episode (e.g. uniform random). This "internal" agent "inside" the MCTS iterations could be viewed as learning from a model-free RL process. 
The **main problem with this view in practice** is that, because MCTS "decides" to have a laser-like focus on a relatively small subset of states (around the root node), this process really only leads to something useful being learned for that state in the root node (and possibly some of the closest children/grandchildren/etc.). We don't really learn something that can easily be re-used in the future in MCTS. **What this means in practice is that we have to be able to re-run the complete "Reinforcement Learning process" (or search) whenever we need to make a decision** (i.e. every turn in a turn-based game). That is feasible if you have a simulator, or **model** of the environment, in which you can do the learning... but then we really get back to actually have a **model-based** approach. --- Fun fact: if you like to take the viewpoint of MCTS as a Model-Free RL approach, you could also turn that into a Model-Based approach again by incorporating additional forms of planning/search "inside" the MCTS iterations. For example, you can [run little instances of MiniMax inside every MCTS iteration](https://ai.stackexchange.com/q/9504/1641), and I suppose that would turn the approach into a Model-Based approach again even in this viewpoint. Upvotes: 3
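The distinction between a sampling model and a distribution model, as described in the answers above, can be sketched with a toy environment (all details here are hypothetical):

```python
import random

# A seeded RNG so the sketch is reproducible; shared across calls.
_rng = random.Random(0)

def sample_step(state, action):
    """Sampling model, as MCTS needs: given (state, action), return ONE
    sampled (next_state, reward), drawn with the environment's true
    probabilities. Toy dynamics: the action succeeds 80% of the time."""
    if _rng.random() < 0.8:
        return state + action, 1.0   # intended transition
    return state, 0.0                # action "slips", no reward

def distribution_step(state, action):
    """Distribution model, as Value Iteration needs: the full list of
    ((next_state, reward), probability) outcomes."""
    return [
        ((state + action, 1.0), 0.8),
        ((state, 0.0), 0.2),
    ]
```

An MCTS rollout only ever calls something shaped like `sample_step`; it never needs the explicit outcome probabilities, but it does need the sampler itself, which is why the approach is model-based.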
2019/01/30
1,860
7,382
<issue_start>username_0: ***Status***: For a few weeks now, I have been working on a Double DQN agent for the `PongDeterministic-v4` environment, which you can find [here](https://github.com/hridayns/Reinforcement-Learning/tree/master/Atari). A single training run lasts for about 7-8 million timesteps (about 7000 episodes) and takes me about 2 days, on Google Collab (K80 Tesla GPU and 13 GB RAM). At first, I thought this was normal because I saw a lot of posts talking about how DQNs take a long time to train for Atari games. ***Revelation***: But then after cloning the OpenAI baselines [repo](https://github.com/openai/baselines), I tried running `python -m baselines.run --alg=deepq --env=PongNoFrameskip-v4` and this took about 500 episodes and an hour or 2 to converge to a nice score of +18, without breaking a sweat. Now I'm convinced that I'm doing something terribly wrong but I don't know what exactly. ***Investigation***: After going through the DQN baseline [code](https://github.com/openai/baselines/tree/master/baselines/deepq) by OpenAI, I was able to note a few differences: * I use the `PongDeterministic-v4` environment but they use the `PongNoFrameskip-v4` environment * I thought a larger **replay buffer size** was important, so I struggled (with the memory optimization) to ensure it was set to **70000** but they set it to a mere **10000**, and still got amazing results. * I am using a normal **Double DQN**, but they seem to be using a **Dueling Double DQN**. ***Results/Conclusion*** I have my doubts about such a huge increase in performance with just these few changes. So I know there is probably something wrong with my existing implementation. Can someone **point me in the right direction**? Any sort of help will be appreciated. Thanks!
2019/01/31
637
2,645
<issue_start>username_0: I'm doing a bachelor thesis on traffic sign detection using a single-shot detector called YOLO. These single-shot detectors can perform detection of objects in an image, and so they have a specific way of training, i.e. training on full images. That's quite a problem for me, because the biggest real dataset with full traffic sign images is the [Belgian one](https://btsd.ethz.ch/shareddata/) with 9000 images in 210 classes, which is unfortunately not enough to train a good detector. To overcome this problem, I've created [DatasetGenerator](https://github.com/kocica/DatasetGenerator), which does quite a good job of generating synthetic datasets, as you can see in the [results directory](https://github.com/kocica/DatasetGenerator/tree/master/results). Recently I came across GANs, which can (among other things) generate or extend an existing dataset, and I would like to use these networks to compare with my dataset generator. I've tried [this introduction to GANs](https://medium.com/ai-society/gans-from-scratch-1-a-deep-introduction-with-code-in-pytorch-and-tensorflow-cb03cdcdba0f) successfully. The problem is that it's unsupervised learning, so there are no annotations. That means it's able to extend my dataset of traffic signs, but the generated dataset won't be annotated at all, which is a problem. So my question is: is there any way to use GANs to extend my dataset of full traffic sign images with annotations of the traffic sign class and position? Actually, the class is not important, because I can handle each class separately, but what matters is the position of the traffic sign in the generated image.<issue_comment>username_1: You could add the desired traffic sign location to the latent vector and then arrange that the generator incurs a loss if the traffic sign is not at the right place in the generated image.
Upvotes: 0 <issue_comment>username_2: I think you'll [enjoy this work from Apple](https://machinelearning.apple.com/2017/07/07/GAN.html) on improving the realism of synthetic images. Essentially what you need to do is generate a synthetic image *then* have your GAN *modify* the synthetic image so that 1) a discriminator thinks it is real while also 2) not changing the gross structure of the image very much (so the traffic sign doesn't move) - yes, this loss function is going to take a little work! Making synthetic data realistic enough to allow models to generalize successfully in the real world is a very active and exciting area of research, not least with respect to robotics, and so the work you are doing now should make you very attractive indeed to the right employer. Upvotes: 3 [selected_answer]
2019/01/31
684
2,722
<issue_start>username_0: I recently came across a paper on [Deep Ranking](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/42945.pdf). I was wondering whether this could be used to classify book covers as book titles. (For example, if I had a picture of the cover of the second HP book, the classifier would return Harry Potter and the Chamber of Secrets.) For example, say I have a dataset of book covers along with the book titles in text. Could that data set be used for this deep ranking algorithm, or is there a much better way to approach my problem? I'm quite new to this whole thing, and this is one of my first projects in this field. What I'm trying to create is a mobile app where people can take a picture of a book cover, have an algorithm/neural net classify the title of the book, and then have some other algorithm connect that to the book's Goodreads page. Thanks for the help!<issue_comment>username_1: I read some papers that talk about this; you can take a look at them, maybe they will help you. [Deep Neural Network Learns to Judge Books by Their Covers](https://www.technologyreview.com/s/602807/deep-neural-network-learns-to-judge-books-by-their-covers/) [Classification of Book Genres By Cover and Title](http://cs229.stanford.edu/proj2015/127_report.pdf) Upvotes: 1 <issue_comment>username_2: > > Could that data set be used for this deep ranking algorithm? > > > Yes you can! I think there are at least two approaches for this task: 1. First, solve it using image classification. If you want to use deep learning, you can use a Deep Convolutional Neural Network to create a classifier that decides whether an image is the HP book or not. You can read the papers mentioned in username_1's answer. But the problem is that you need a very large dataset: to make a good classifier you can't just provide one image per book, so if you have a thousand book titles (or more) you need to train your model on a huge dataset. 2. Use image similarity or content-based image retrieval (CBIR). There is a [good discussion on Stack Overflow](https://stackoverflow.com/questions/843972/image-comparison-fast-algorithm) about this topic; there are a lot of techniques, including Deep Ranking, perceptual hashing, and others. One difference is that Deep Ranking uses less feature engineering than the others. In my opinion, using an image similarity technique is a better approach than using image classification (they are also compared in the Deep Ranking paper) because some methods are faster and don't require a lot of data. You can also read [another simple reference on image similarity](https://towardsdatascience.com/building-a-similar-images-finder-without-any-training-f69c0db900b5). Upvotes: 0
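One of the techniques mentioned above, perceptual hashing, is simple enough to sketch directly. A minimal average-hash illustration (assuming covers are already loaded as 2-D grayscale numpy arrays; real libraries also handle resizing and color conversion):

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Downscale a grayscale image by block-averaging, then threshold
    each cell against the mean to produce a binary fingerprint."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    # crop so the image divides evenly into hash_size x hash_size blocks
    img = img[:bh * hash_size, :bw * hash_size]
    small = img.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming_distance(h1, h2):
    return int(np.count_nonzero(h1 != h2))
```

Near-duplicate covers then have a small Hamming distance between their 64-bit fingerprints, while unrelated covers typically differ in many bits.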
2019/02/01
725
2,911
<issue_start>username_0: I am working on a project where the Neural Network weights must be quantized on 8 or 16 bits for an embedded platform, thus I will lose some precision. Since our platform does not have floating-point arithmetic, we need to quantize the weights. By quantizing I mean taking the max absolute value of the weights and dividing it by the maximum signed number representable on 8 or 16 bits; this operation gives us a quantization factor $(qf)$. The final quantized weights are then integer$(value / qf)$. If my weights are very sparse and have a very bad distribution, I lose more precision. For example, to the left here is the distribution of weights for one layer, and to the right is the distribution of weights after I added the kurtosis and skew measures of the weights to the loss function; it improved the shape of the distribution a bit while keeping the same accuracy, even a bit higher.[![enter image description here](https://i.stack.imgur.com/XPQOG.png)](https://i.stack.imgur.com/XPQOG.png) Does anybody have any other suggestions? Has anyone tackled this problem before?
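For reference, a minimal numpy sketch of the symmetric scheme described in the question (the convention here is that the integer code is round(w / qf) and the weight is recovered as code * qf; it assumes at least one non-zero weight):

```python
import numpy as np

def quantize_symmetric(weights, bits=8):
    """Map float weights to signed integer codes in [-(2**(bits-1)-1), 2**(bits-1)-1]."""
    int_max = 2 ** (bits - 1) - 1          # 127 for 8 bits, 32767 for 16
    qf = np.abs(weights).max() / int_max   # quantization factor from the question
    codes = np.round(weights / qf).astype(np.int32)
    return codes, qf

def dequantize(codes, qf):
    # recover approximate float weights from integer codes
    return codes * qf
```

The worst-case round-trip error per weight is qf/2, which is exactly why outliers that inflate max|w| (a bad, heavy-tailed distribution) cost precision for all the other weights.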
2019/02/01
574
2,235
<issue_start>username_0: Suppose we have a data set consisting of the columns > > TransactionId, CardNo, TransactionDate > > > How can we calculate the customer purchase interval (e.g., customer A purchased on Jan 1st, purchased again after 10 days, and then again after 15 days), and how can we predict the next visit of customer A by analyzing the purchase intervals of customer A? Any help will be appreciated.
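A minimal sketch of the interval computation plus a naive next-visit baseline (pure Python; the column values below are hypothetical, and the mean-interval prediction is only a simple baseline, not a fitted model):

```python
from datetime import date, timedelta
from collections import defaultdict

def purchase_intervals(transactions):
    """transactions: iterable of (transaction_id, card_no, transaction_date).
    Returns {card_no: [gaps in days between consecutive purchases]}."""
    by_card = defaultdict(list)
    for _, card, d in transactions:
        by_card[card].append(d)
    intervals = {}
    for card, dates in by_card.items():
        dates.sort()
        intervals[card] = [(b - a).days for a, b in zip(dates, dates[1:])]
    return intervals

def predict_next_visit(dates, intervals):
    """Naive baseline: last purchase date plus the mean interval."""
    mean_gap = sum(intervals) / len(intervals)
    return max(dates) + timedelta(days=round(mean_gap))
```

A more serious predictor would replace the mean with a regression or survival model over the interval sequence, but the gap computation stays the same.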
2019/02/01
1,950
6,685
<issue_start>username_0: In [experience replay](https://datascience.stackexchange.com/q/20535/10640), the update rule follows the loss: > > $$ > L\_i(\theta\_i) = \mathbb{E}\_{(s\_t, a\_t, r\_t, s\_{t+1}) \sim U(D)} \left[ \left(r\_t + \gamma \max\_{a\_{t+1}} Q(s\_{t+1}, a\_{t+1}; \theta\_i^-) - Q(s\_t, a\_t; \theta\_i)\right)^2 \right] > $$ > > > I can't get my head around the order of calculation of the terms in that equation: An experience element is > > $(s\_t, a\_t, r\_t, s\_{t+1} )$ > > > where > > $s\_t$ is the state at time $t$ > > > $a\_t$ is the action taken from $s\_t$ at time $t$ > > > $r\_t$ is the reward received by taking that action from $s\_t$ at time > $t$ > > > $s\_{t+1}$ is the next state > > > In [the on policy case](https://qr.ae/pNb5Z7), **as I understand it**, Q of the equation above is the same Q, which is the only approximator. As I understand the algorithm, at time $t$ we save an experience > > $(s\_t, a\_t, r\_t, s\_{t+1} )$. > > > Then, later, at time $t+x$, we attempt to learn from that experience. However, at the time of saving the experience, $Q(s\_t,a\_t)$ was something different than at the time of attempting to learn from that experience, because the parameters $\theta$ of $Q$ have since changed. This could actually be written as > > $Q\_t(s\_t, a\_t) \neq Q\_{t+x}(s\_t,a\_t)$ > > > Because the Q value is different, I don't see how the reward signal at time $t$ is of any relevance for $Q\_{t+x}(s\_t,a\_t)$ at $t+x$, the time of learning. Also, it is likely that following a policy that is derived from $Q\_t$ would lead to $a\_{t}$, whereas following a policy that is derived from $Q\_{t+x}$ would not. I don't see in the experience replay algorithm that the Q value $Q\_t(s\_t, a\_t)$ is saved, so I must assume that is not. 
**Why does calculating the Q value again at a later time make sense FOR THE SAME SAVED REWARD AND ACTION?**<issue_comment>username_1: Once the algorithm reaches a point where a reward (or penalty) is gained, a slowly decaying amount of that reward is given to the Q-value of whichever action was derived from a particular state for each time step back through the game (or however far back you want to apply that reward). You will have stored the state and Q-values for each time step during the game, allowing you to update the appropriate Q-values with the new reward. These state / updated Q-values (from many runs) become your training data for the next iteration of your training algorithm. Upvotes: 0 <issue_comment>username_2: > > Because the Q value is different, I don't see how the reward signal at time $t$ is of any relevance for $Q\_{t+x}(s\_t,a\_t)$ at $t+x$, the time of learning. > > > The $r\_t$ value for any single step is not dependent on $Q$ or the current policy. It is purely dependent on $(s\_t,a\_t)$. That means you can use it in the Q update equation to calculate *new* TD targets, combining your knowledge of the old reward signal with the value functions of the latest policy. The value $r\_t + \gamma \max\_{a\_{t+1}} Q(s\_{t+1}, a\_{t+1}; \theta\_i^-)$ is the TD target in the loss function you show. There are many possible variations to calculate the TD target value in RL, with different properties. The one you are using is *biased* towards initial random values unrelated to the problem, and also deliberately biased towards older Q values to prevent runaway feedback. It is also low variance compared to other possibilities, and simpler to calculate. > > Also, it is likely that following a policy which is derived from $Q\_t$ would lead to $a\_{t}$, whereas following a policy which is derived from $Q\_{t+x}$ would not. > > > Yes that is correct.
However, you don't care about that for a single step; you just care about calculating a better estimate for $Q(s,a)$ regardless of whether you would choose $a$ in state $s$ with the current policy. That is what an action value measures - the utility of taking action $a$ in state $s$ and *thereafter* following a given policy. This is a strength of experience replay, that you are constantly refining your estimates of off-policy action values to determine which is the best. This does become more of an issue when you want to use longer trajectories in the update step (which you may want to do to reduce bias of your TD target estimate). A series of steps in history $s\_t,a\_t,r\_t ... s\_{t+1}, a\_{t+1},r\_{t+1}...s\_{t+n}$ may not have the same chance of occurring under the latest policy as it did when it was stored in memory. For the first step $s\_t,a\_t$ again you don't care if it is one you would currently take, because the point is to refine your estimate of that action value. However, if you want to use $r\_{t+1}, r\_{t+2}$ etc. plus $s\_{t+n}$ to create a TD target, then you have to care whether your current policy and the one used to populate the history table would be different. It is a problem if you want to use more sophisticated estimates of the TD target that use multiple sample steps along with experience replay. There are some approaches you can take to allow for this, such as *importance sampling*. For a single-step update mechanism you don't need to worry about it. > > I don't see in the experience replay algorithm that the Q value $Q\_t(s\_t, a\_t)$ is saved, so I must assume that it is not. > > > This is correct. You must *re-calculate* the TD target from a more up-to-date policy to get better estimates of the action value. With experience replay, you are not interested in collecting a history of values of Q. Instead you are interested in the history of state transitions and rewards.
> > Why does calculating the Q value again at a later time make sense FOR THE SAME SAVED REWARD AND ACTION? > > > Because it *will* change, due to the learning process summarising effects of state transitions. As a toy example, consider a maze solver with traps (negative rewards) and treasures (positive rewards). At one point in history, the agent finds itself in a location and its policy told it to move into a trap on the next step. The agent would initially score the location and the steps leading up to it with negative Q values. Later it discovers through exploration that there is also some treasure if it takes a different turning towards the end of the same series of steps. With experience replay, and re-calculating Q values each time, it can figure out which section of that path should be converted to high scores, because it leads to treasure as well as the trap; now that the agent has better estimates of value there, it has a better policy for the end of the path. Upvotes: 3 [selected_answer]
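The key point — that the TD target is re-computed with the *current* Q each time a stored transition is replayed — can be made concrete with a tiny tabular sketch (not the DQN from the question, just the same one-step update rule with a replay memory):

```python
import random
from collections import defaultdict

def replay_update(Q, memory, alpha=0.1, gamma=0.9, batch_size=4, actions=(0, 1)):
    """Sample stored (s, a, r, s_next) transitions and apply the one-step
    TD update. The target r + gamma * max_a' Q(s_next, a') is re-computed
    with the *current* Q, even though the transition was stored earlier."""
    for s, a, r, s_next in random.sample(memory, min(batch_size, len(memory))):
        td_target = r + gamma * max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (td_target - Q[(s, a)])
```

The stored reward r stays valid across replays because it depends only on (s, a) and the environment dynamics, not on whichever policy was being followed when it was recorded.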
2019/02/02
946
3,484
<issue_start>username_0: I am reading [BayesChess: A computer chess program based on Bayesian networks](https://core.ac.uk/download/pdf/143454908.pdf) (<NAME>; 2008) It is a chess-playing engine using Bayesian networks. The following is mentioned about the heuristic function in section 3. > > Here the heuristic is defined in terms of 838 parameters. > > > There are 5 parameters indicating the value of each piece (pawn, > queen, rook, knight, and bishop -the king is not evaluated, as it must > always be on the board), 1 parameter for controlling whether the king > is under check, 64 parameters for evaluating the location of each > piece on each square on the board (i.e., a total of 786 parameters, > corresponding to 64 squares × 6 pieces each colour × 2 colours) and > finally 64 more parameters that are used to evaluate the position of > the king on the board during the endgame. > > > The above sentence contains the parameters used by the heuristic function. But I didn't find the actual definition. What is the actual definition of the heuristic function?<issue_comment>username_1: Heuristic just means that it is hand constructed by a human. Suppose the value of each piece is given an initial value. That is a heuristic because it was just defined by a human. But if the Bayesian process is going to modify that initial value given by the human, that would be a non-heuristic thing. So there is no function definition to search for, except to find what the initial parameters given by the human are. Upvotes: 0 <issue_comment>username_2: Heuristics can be understood as rules. Typically heuristics are thought of as problem-specific strategies. [Expert systems](https://en.wikipedia.org/wiki/Expert_system) were an early form of AI that utilized rules-based decisions. In a game-playing context, heuristics can be [pure strategies](https://en.wikipedia.org/wiki/Strategy_(game_theory)#Pure_and_mixed_strategies).
A heuristic function would be one that includes some predefined decision rules. Russell and Norvig have a nice chapter on informed (heuristic) search strategies. > > Heuristic functions are the most common form in which additional knowledge of the > problem is imparted to the search algorithm. > > Artificial Intelligence: A Modern Approach; 3.5 [pdf](https://faculty.psau.edu.sa/filedownload/doc-7-pdf-a154ffbcec538a4161a406abf62f5b76-original.pdf) > > > h(n) as opposed to f(n) or g(n) Upvotes: 2 <issue_comment>username_3: By far the most common form of heuristic evaluation function for Chess-playing (or, really, any game-playing) agents is the simple linear function. At least when we're talking about handcrafted features that's the case; of course, all the hype with Deep Neural Networks in more recent years is different. So, when it's not specified in a paper like this exactly what their heuristic evaluation function looks like, you can relatively safely assume it's just a linear function. By a linear function, I mean that you have vectors of features $\boldsymbol{\phi}(s)$ for your states $s$, and a vector of weights $\boldsymbol{\theta}$, and the evaluation $f(s)$ of a state $s$ is simply given by the dot product (summing up all the multiplications of feature values with their corresponding weights): $$f(s) = \boldsymbol{\phi}(s)^{\top} \boldsymbol{\theta} = \sum\_i \phi\_i(s) \times \theta\_i,$$ where the subscript $i$ indicates taking the $i^{th}$ element of a vector. Upvotes: 2
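As a tiny numeric illustration of such a linear evaluation (the features and weights below are made up for the example — not the actual 838 BayesChess parameters):

```python
import numpy as np

# phi(s): hypothetical feature vector for one position, e.g. piece-count
# differences (pawns, knights, bishops, rooks, queens) from White's view.
phi = np.array([1.0, 0.0, -1.0, 0.0, 0.0])

# theta: hand-tuned (or learned) weights, e.g. the classic piece values.
theta = np.array([1.0, 3.0, 3.0, 5.0, 9.0])

f = phi @ theta  # evaluation f(s) = phi(s) . theta
print(f)         # -2.0: up a pawn but down a bishop
```

Learning such a function, as BayesChess does, amounts to adjusting the entries of theta while the feature extraction stays fixed.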
2019/02/02
1,281
4,609
<issue_start>username_0: In my country, the Expert System class is mandatory if you want to take the AI specialization in most universities. In class, I learned how to make a rule-based system, [forward chaining](https://en.wikipedia.org/wiki/Forward_chaining), [backward chaining](https://en.wikipedia.org/wiki/Backward_chaining), [Prolog](https://en.wikipedia.org/wiki/Prolog), etc. However, I have read somewhere on the web that [expert systems](https://en.wikipedia.org/wiki/Expert_system) are no longer used. Is that true? If yes, why? If not, where are they being used? With the rise of machine learning, they may not be as used as before, but is there any industry or company that still uses them today? Please, provide some references to support your claims.<issue_comment>username_1: I would say Expert Systems is still being taught. For instance, if you look at some of the open courses like [MIT's](https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/lecture-videos/lecture-3-reasoning-goal-trees-and-rule-based-expert-systems/), there are still lectures on it. Also, looking at the [CLIPS](http://www.clipsrules.net/AboutCLIPS.html) documentation, you will find a couple of examples of usage from 2005. What I suspect is that Expert Systems are now embedded within "normal systems" in practice. Hence it may be difficult to distinguish them from systems used on a daily basis for diagnostics, etc., and they are not as popular as before. Upvotes: 4 [selected_answer]<issue_comment>username_2: > > Are there companies that still use expert systems? > > > There are still some expert system inference engines available in open source form, in particular [CLIPS rules](http://www.clipsrules.net/). A specialization of your question could be: what companies are using CLIPS in 2020?
I don't have any ideas, even if I did try in <https://github.com/bstarynk/clips-rules-gcc> And the [RefPerSys](http://refpersys.org/) project is right now (in November 2020) discussing the idea of incorporating such rules in it. Read of course quickly <NAME>'s blog on <http://bootstrappingartificialintelligence.fr/WordPress3/> and his last book (describing the design of an ambitious symbolic artificial intelligence system -CAIA- with expert system ideas) *Artificial Beings The Conscience of a Conscious Machine* ([ISBN 9 781848211018](https://rads.stackoverflow.com/amzn/click/com/1848211015)) - his CAIA system is on <https://github.com/bstarynk/caia-pitrat> but there is absolutely no documentation, since <NAME> passed away in October 2019. His CAIA system was capable of generating all the 500KLOC of its C code from some kind of expert system rules (whose design is described in Jacques Pitrat's books and papers). I am not a native English speaker (since I am French) but **I heard that expert systems are called** (in 2020) **[business rule management systems](https://en.wikipedia.org/wiki/Business_rule_management_system)**. I heard that major banks in France (maybe [BNP](https://en.wikipedia.org/wiki/BNP_Paribas) or [Société Générale](https://en.wikipedia.org/wiki/Soci%C3%A9t%C3%A9_G%C3%A9n%C3%A9rale)) are using such systems to decide whether to give some loan or credit to persons and companies (in particular for people buying their flat - or a brand new automobile - with a bank credit and debt lasting dozens of years). The French banking system is very opaque: you won't be able to understand their internal software, and banks are not publishing any document about the design of their software. At most they would publish the *name* of their AI systems, but nothing public about the software design.
According to rumors, [Lexifi](https://www.lexifi.com/) or [Yseop](https://www.yseop.com/) might use some kind of very proprietary expert system technology and sell services with them. But their software tools are closed source and very proprietary. Regarding expert system for games, see also recent papers by [Professor <NAME>](https://www.lamsade.dauphine.fr/%7Ecazenave/). He did use some kind of expert system technology for games. My guess is that large Internet companies like Google or Amazon are using expert system technology inside their internal software (e.g. [search engines](https://en.wikipedia.org/wiki/Search_engine)). [IBM Watson](https://en.wikipedia.org/wiki/Watson_(computer)) is rumored to use them also. BTW, [GNU make](https://www.gnu.org/software/make/) might be considered as some very crude expert system engine driving building of software artifact from source code. Upvotes: 2
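The forward chaining that these rule engines (CLIPS, business rule management systems) are built on can be sketched in a few lines — the facts and rules below are hypothetical loan-approval examples, not any bank's real ruleset:

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules (premises -> conclusion) until no new fact
    can be derived (a fixpoint). Facts here are plain strings."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [(("steady_income", "low_debt"), "creditworthy"),
         (("creditworthy", "down_payment"), "approve_loan")]
print(forward_chain({"steady_income", "low_debt", "down_payment"}, rules))
```

Real engines like CLIPS add an agenda, conflict resolution, and pattern matching (the Rete algorithm) on top of this basic loop.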
2019/02/05
1,239
4,532
<issue_start>username_0: In a neural network, each neuron represents some part of the input. For example, in the case of a MNIST digit, consider the stem of the number 9. Each neuron in the NN represents some part of this digit. 1. What determines which neuron will represent which part of the digit? 2. Is it possible that if we pass in the same input multiple times, each neuron can represent different parts of the digit? 3. How is this related to the back-propagation algorithm and chain rule? Is it the case that, before training the neural network, each neuron doesn't really represent anything of the input, and, as training proceeds, neurons start to represent some part of the input?
2019/02/07
715
3,136
<issue_start>username_0: I have two Machine Learning models (I use LSTM) that give different results on the validation set (~100 data samples): * Model A: Accuracy: ~91%, Loss: ~0.01 * Model B: Accuracy: ~83%, Loss: ~0.003 The size and the speed of both models are almost the same. So, which model should I choose?<issue_comment>username_1: You should choose model A. The loss is just a differentiable proxy for accuracy. That said, the situation should be examined in more detail. If the higher loss is due to the data term, examine the data which produce high loss and check for presence of overfitting or incorrect labels. If the higher loss is due to a regularizer then reducing the regularization factor may further improve the results. Upvotes: 3 <issue_comment>username_2: It depends on your application! Imagine a binary classifier that is always very "confident" - it always assigns P=100% to Class A and 0% to Class B, or *vice versa* (sometimes wrong, never uncertain!). Now imagine a "humble" model that is perhaps fractionally less accurate, but whose probabilities are actually meaningful (when it says "Class A with probability 70%" it is wrong 30% of the time). In your case, both losses are quite small, so we probably prefer the more accurate one. Upvotes: 1 <issue_comment>username_3: You should note that both your results are consistent with a "true" probability of 87% accuracy, and your measurement of a difference between these models is not statistically significant. With an 87% accuracy applied at random, there is an approx. 14% chance of getting the two extremes of accuracy you have observed by chance, if samples are chosen randomly from the target population and models are different enough to make errors effectively at random. This last assertion is usually not true though, so you can relax a little - that is, unless you took different random slices for cross-validation in each case.
100 test cases is not really enough to discern small differences between models. I would suggest using k-fold cross-validation in order to reduce errors in your accuracy and loss estimates. Also, it is critical to check that the cross-validation split was identical in both cases here. If you have used auto-splitting with a standard tool and not set the appropriate RNG seed, then you may have got a different set each time, and your results are just showing you variance due to the validation split which could completely swamp any differences between the models. However, assuming the exact same dataset was used each time, and it was representative sample of your target population, then on average you should expect the one with the best metric to have the highest chance of *being* the best model. What you should really do is decide which metric to base the choice on *in advance of the experiment*. The metric should match some business goal for the model. Now you are trying to choose after the fact, you should go back to the reason you created the model in the first place and see if you can identify the correct metric. It might not be either accuracy or loss. Upvotes: 3 [selected_answer]
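username_3's significance point can be sanity-checked with a few lines of pure Python (exact binomial tails under the pooled ~87% rate; the precise tail convention changes the number, but not the conclusion that the gap is not significant at the usual 5% level):

```python
from math import comb

def binom_pmf(k, n, p):
    # exact binomial probability of k successes in n trials
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 100, 0.87  # 100 validation samples, pooled accuracy
# Probability of observing >= 91 or <= 83 correct by chance alone.
p_tail = (sum(binom_pmf(k, n, p) for k in range(91, n + 1)) +
          sum(binom_pmf(k, n, p) for k in range(0, 84)))
print(round(p_tail, 3))
```

The same computation with k-fold cross-validation estimates in place of a single 100-sample split would shrink the uncertainty accordingly.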
2019/02/07
2,315
8,509
<issue_start>username_0: In this [video](https://www.youtube.com/watch?v=Jw3ZnWFjDfM), the lecturer states that $R(s)$, $R(s, a)$ and $R(s, a, s')$ are equivalent representations of the reward function. Intuitively, this is the case, according to the same lecturer, because $s$ can be made to represent the state and the action. Furthermore, apparently, the Markov decision process would change depending on whether we use one representation or the other. I am looking for a formal proof that shows that these representations are equivalent. Moreover, how exactly would the Markov decision process change if we use one representation over the other? Finally, when should we use one representation over the other and why are there three representations? I suppose it is because one representation may be more convenient than another in certain cases: which cases? How do you decide which representation to use?<issue_comment>username_1: In general the different reward functions $R(s)$, $R(s, a)$ and $R(s, a, s')$ are *not* equivalent mathematically, so you will not find any formal proof. It is possible for the functions to resolve to the same value in a specific MDP, if, for instance, you use $R(s, a, s')$ and the value returned only depends on $s$, then $R(s, a, s') = R(s)$. This is not true in general, but as the reward functions are often under your control, it can be the case quite often. For instance, in scenarios where the agent's goal is to reach some pre-defined state, as in the grid world example from the video, then there is no difference between $R(s, a, s')$ or $R(s)$. Given that is the case, for those example problems you may as well use $R(s)$, as it simplifies the expressions that you need to calculate for algorithms like Q-learning. 
I think the lecturer did not mean "equivalent" in the mathematical sense, but in the sense that future lectures will use one of the functions, and a lot of what you will learn is going to be much the same as if you had used a different reward function. > > Finally, when should we use one representation over the other and why are there three representations? > > > Typically, I don't use any of those representations by default. I tend to use Sutton & Barto's $p(s', r|s, a)$ notation for combined state transitions and rewards. That expression returns probability of transitioning to state $s'$ and receiving reward $r$ when starting in state $s$ and taking action $a$. For discrete actions, you can re-write the expectation of the different functions $R$ in terms of this function as follows: $$\mathbb{E}[R(s)] = \sum\_{a \in \mathcal{A}(s)}\sum\_{s' \in \mathcal{S}}\sum\_{r \in {R}}rp(s', r|s, a)\qquad\*$$ $$\mathbb{E}[R(s,a)] = \sum\_{s' \in \mathcal{S}}\sum\_{r \in {R}}rp(s', r|s, a)$$ $$\mathbb{E}[R(s,a, s')] = \sum\_{r \in {R}}r\frac{p(s', r|s, a)}{p(s'|s, a)}$$ I think this is one way to see how the functions in the video are closely related. Which one would you use? Depends on what you are doing. If you want to simplify an equation or code, then use the simplest version of the reward function that fits with the reward scheme you set up for the goals of the problem. For instance, if there is one goal state to exit a maze, and an episode ends as soon as this happens, then you don't care how you got to that state or what the previous state was, and can use $R(s)$ In practice, what happens if you use a different reward function is that you need to pay attention to where it appears in things like the Bellman equation for theoretical treatments. When you get to implement model-free methods like Q-learning, $R(s)$ or its variants don't really appear except in the theory. --- \* This is not technically correct in all cases. 
The assumption I have made is that $R(s)$ is a reward granted at the point of *leaving* state $s$, and is independent of how the state is left and where the agent ends up next. If this was a fixed reward for *entering* state $s$, regardless of how, then it could be written around $R(s')$ as follows: $$\mathbb{E}[R(s')] = \sum\_{s \in \mathcal{S}}\sum\_{a \in \mathcal{A}(s)}\sum\_{r \in {R}}rp(s', r|s, a)$$ i.e. by summing all the rewards that end up at $s'$ Upvotes: 3 <issue_comment>username_2: Let $R(s)$ denote a probability distribution over rewards that our agent may get in some MDP as a reward for entering a state $s$. The easiest case is to demonstrate that we can also choose to write this as $R(s, a)$ or $R(s, a, s')$: simply take $\forall a: R(s, a) = R(s)$, or $\forall a \forall s': R(s, a, s') = R(s)$, as also described in [Neil's answer](https://ai.stackexchange.com/a/10444/1641). --- Let $R(s, a)$ denote a probability distribution over rewards that our agent may get as a reward for executing action $a$ in state $s$. The easy case of demonstrating equivalence to $R(s, a, s')$ is already handled above, but can we also construct an MDP in which we only use the $R(s)$ notation? The easiest way I can think of to do so (may not be the cleanest way) would be to construct a new MDP with a bunch of "dummy" states $z(s, a)$, such that executing action $a$ in state $s$ in the original MDP deterministically leads to a dummy state $z(s, a)$ in the new MDP. Note that I write $z(s, a)$ to make the connection back to the original MDP explicit, but this is a completely independent MDP and you should view it as just a state "$z$". Then, the reward distribution $R(s, a)$ that was associated with the state-action pair $(s, a)$ in the original MDP can be written as $R(z(s, a))$, which is now only a function of the state in the new MDP. 
In this dummy state $z(s, a)$ in the new MDP, every possible action $\alpha$ should have exactly the same transition probabilities towards new states $s'$ as the original transition probabilities for executing $a$ in $s$ back in the original MDP. This guarantees that the same policy has the same probabilities of reaching certain states in both MDPs; only in our new MDP the agent is forced to transition through these dummy states in between. If you also have a discount factor $\gamma$ in the original MDP, I guess you should use a discount factor $\sqrt{\gamma}$ in the new MDP, because every step in the original MDP requires two steps (one step into a dummy state, and one step out of it again) in the new MDP. --- The final case for $R(s, a, s')$ could be done in a very similar way, but it would get even more complicated to write out formally. The intuition would be the same though. Above, we pretty much "baked" state-action pairs $(s, a)$ from the original MDP into additional dummy states, such that in the new MDP we have states that "carry the same amount of information" as a full state-action pair in the original MDP. For the $R(s, a, s')$ case, you'd need to devise an even uglier solution with even more information "baked into" dummy states, such that you can treat full $(s, a, s')$ triples as single $z(s, a, s')$ states in the new MDP. --- > > Finally, when should we use one representation over the other and why are there three representations? I suppose it is because one representation may be more convenient than another in certain cases: which cases? How do you decide which representation to use? > > > I would recommend always using the **simplest representation** that happens to be **sufficient** to describe how the rewards work in your environment in a natural way. 
For example, if you have a two-player zero-sum game where terminal game states give a reward of $-1$, $0$, or $1$ for losses, draws, or wins, it is sufficient to use the $R(s)$ notation; the reward depends on the terminal game state reached, not on how it was reached. Another example would be a maze with a specific goal position, as described in Neil's answer. You could use the more complex $R(s, a)$ or $R(s, a, s')$ notations... but there wouldn't be much of a point in doing so really. If you have an environment in which the reached state and the played action both have influence on the reward distribution, then it's much more sensible to just use the $R(s, a)$ notation rather than trying to define a massively overcomplicated MDP with dummy states as I've tried to do above. An example would be... let's say we're playing a quiz, where $s$ denotes the current question, and different actions $a$ are different answers that the agent can give. Then it's natural to model the problem as an MDP where $R(s, a)$ is only positive if $a$ is the correct answer to the question $s$. Upvotes: 3
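To make the notation above concrete, here is a small sketch of computing $\mathbb{E}[R(s,a)]$ from a tabulated $p(s', r|s, a)$; the two-state transition table is invented purely for illustration:

```python
# p maps (s, a) -> list of (s_next, reward, probability) triples.
# Toy two-state MDP, invented purely for illustration.
p = {
    ("s0", "left"):  [("s0", 0.0, 0.8), ("s1", 1.0, 0.2)],
    ("s0", "right"): [("s1", 1.0, 1.0)],
}

def expected_reward(s, a):
    """E[R(s, a)] = sum over (s', r) of r * p(s', r | s, a)."""
    return sum(r * prob for _, r, prob in p[(s, a)])
```

So $\mathbb{E}[R(s\_0, \text{left})] = 0 \cdot 0.8 + 1 \cdot 0.2 = 0.2$; marginalising over $a$ or conditioning on $s'$ in the same way gives the other two forms of the reward function.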
2019/02/07
813
3,291
<issue_start>username_0: I have a dataset which I have loaded as a data frame in Python. It consists of 21392 rows (the data instances, each row is one sample) and 1972 columns (the features). The last column, i.e. column 1972, has string-type labels (14 different categories of target labels). I would like to use a CNN to classify the data in this case and predict the target labels using the available features. This is a somewhat unconventional approach, though it seems possible. However, I am very confused about what the methodology should be, as I could not find any sample code/pseudocode on using a CNN for classifying non-image data, either in Tensorflow or Keras. Any help in this regard will be highly appreciated. Cheers!<issue_comment>username_1: You can use a CNN on any data, but it's **recommended** to use a CNN only on data that have spatial features (it might still work on data that doesn't have spatial features, see DuttaA's comment below). For example, in an image, the connection between pixels in some area gives you another feature (e.g. an edge) instead of a feature from one pixel (e.g. a color). So, as long as you can shape your data, and your data have spatial features, you can use a CNN. For text classification, there are connections between characters (that form words), so you can use a CNN for [text classification at the character level](https://papers.nips.cc/paper/5782-character-level-convolutional-networks-for-text-classification.pdf). For speech recognition, there is also a connection between frequencies in one frame and those in the previous and next frames, so you can also use a [CNN for speech recognition](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/CNN_ASLPTrans2-14.pdf). If your data have spatial features, just reshape them into a 1D array (for example, for text) or a 2D array (for example, for audio).
Tensorflow's functions [conv1d](https://www.tensorflow.org/api_docs/python/tf/nn/conv1d) and [conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d) are general functions that can be used on any data. They treat the data as an array of floating-point values, not as image/audio/text. But if your data doesn't have spatial features (for example, your features are `price`, `salary`, `status_marriage`, etc.), I think you don't need a CNN, and using a CNN won't help. Upvotes: 4 [selected_answer]<issue_comment>username_2: Convolutional models are the method of choice when your problem is translation invariant (or covariant). In image classification, the image should be classified into class 'cow' if a cow is present in any part of the image. In text classification, different orders of phrases and sentences result in related meanings. In speech recognition, the same syllable is used at different places to build different words. In your problem, you should check whether some subsequences of your 1971 feature columns give rise to the same meaning although they are located at different places within the sample. If the answer is positive, then convolutional layers are likely going to improve the performance. Upvotes: 3 <issue_comment>username_3: The DeepInsight method has been used for converting tabular data into corresponding images, which are then processed by a CNN. Here is the link: <https://alok-ai-lab.github.io/DeepInsight/> Upvotes: 1
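For readers unfamiliar with what `conv1d` actually computes on a row of features, here is the bare operation written out in plain Python (a sketch of the math, not the TensorFlow API). The same small kernel slides along the sequence, so it can only respond to *local* patterns, which is exactly why spatial structure matters:

```python
def conv1d(x, kernel):
    """Valid (no padding) 1D cross-correlation: the core of a conv layer."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

# An edge-detector-like kernel [-1, 1] responds only where adjacent
# feature values change -- a purely local, position-independent pattern.
features = [1, 1, 1, 5, 5]
response = conv1d(features, [-1, 1])  # -> [0, 0, 4, 0]
```

If your 1971 columns had no such local structure (e.g. `price` next to `status_marriage`), these local responses would be meaningless, which is the point made in the answers above.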
2019/02/09
1,825
6,210
<issue_start>username_0: I am studying a knowledge base (KB) from the book "Artificial Intelligence: A Modern Approach" (by <NAME> and <NAME>) and from [this series of slides](https://www.cs.princeton.edu/courses/archive/fall16/cos402/lectures/402-lec12.pdf). A formula is satisfiable if there is some assignment to the variables that makes the formula evaluate to true. For example, if we have the boolean formula $A \land B$, then the assignments $A=\text{true}$ and $B=\text{true}$ make it satisfiable. Right? But what does it mean for a KB to be consistent? The definition (given at slide 14 of [this series of slides](https://www.cs.princeton.edu/courses/archive/fall16/cos402/lectures/402-lec12.pdf)) is: > > a KB is consistent with formula $f$ if $M(KB \cup \{ f \})$ is non-empty (there is a world in which KB is true and $f$ is also true). > > > Can anyone explain this part to me with an example?<issue_comment>username_1: I will first recapitulate the key concepts which you need to know in order to understand the answer to your question (which will be very simple, because I will just try to clarify what is given as a "definition"). In logic, a **formula** is e.g. $f$, $\lnot f$, $f \land g$, where $f$ can be e.g. the **proposition** (or *variable*) "today it will rain". So, in a (propositional) formula, you have *propositions*, i.e. sentences like "today it will rain", and logical **connectives**, i.e. symbols like $\land$ (i.e. logical AND), which logically *connect* these sentences. The propositions like "today it will rain" can often be denoted by a single (capital) letter like $P$. $f \land g$ is the combination of two formulae (where *formulae* is the plural of *formula*). So, for example, suppose that $f$ is composed of the propositions "today it will rain" (denoted by $P$) *or* "my friend will visit me" (denoted by $Q$) and $g$ is defined as "I will play with my friend" (denoted by $S$). Then the formula $f \land g = (P \lor Q) \land S$.
In general, you can combine formulae in any logically appropriate way. In this context, a **model** is an *assignment* to each variable in a formula. For example, suppose $f = P \lor Q$, then $w = \{ P=0, Q = 1\}$ is a *model* for $f$, that is, each variable (e.g. $P$) is assigned either "true" ($1$) or "false" ($0$) but *not* both. (Note that [the word *model* may be used to refer to different concepts depending on the context](https://ai.stackexchange.com/a/25460/2444); again, in this context, you can simply think of a model as *an assignment of values to the variables in a formula*.) Suppose now we define $I(f, w)$ to be a *function that receives the formula $f$ and the model $w$ as input, and $I$ returns either "true" ($1$) or "false" ($0$)*. In other words, $I$ is a function that *automatically* tells us if $f$ is evaluated to true or false given the assignment $w$. You can now define $M(f)$ to be a *set of assignments (or models) to the formula $f$ such that $f$ is true*. So, $M$ is a set and not just an assignment (or model). This set can be empty, it can contain one assignment or it can contain any number of assignments: it depends on the formula $f$: in some cases, $M$ is empty and, in other cases, it may contain say $n$ valid assignments to $f$, where by "valid" I mean that these assignments make $f$ evaluate to "true". For example, suppose we have formula $f = A \land \lnot A$. Then you can try to assign any value to $A$, but $f$ will never evaluate to true. In that case, $M(f)$ is an empty set, because there is no assignment to the variables (or propositions) of $f$ which make $f$ evaluate to true. A **knowledge base** is a set of formulae $\text{KB} = \{ f\_1, f\_2, \dots, f\_n \}$. So, for example, $f\_2 = $ "today it will rain" and $f\_3 = $ "I will go to school AND I will have lunch". We can now define $M(\text{KB})$ to be the *set of assignments to the formulae in the knowledge base $\text{KB}$ such that all formulae are true*. 
If you think of the formulae in $KB$ as "facts", $M(\text{KB})$ is the set of assignments to these formulae in $KB$ such that these facts hold or are true. In this context, we then say that a particular knowledge base (i.e., a set of formulae as defined above), denoted by $\text{KB}$, is *consistent with formula $f$ if $M(\text{KB} \cup \{ f \})$ is a non-empty set*, where $\cup$ means the *union* operation between sets: note that (as we defined it above) $\text{KB}$ is a set, and $\{ f \}$ means that we are making a set out of the formula $f$, so we are indeed performing a *union* operation on sets. So, what does it mean for a knowledge base to be consistent? First of all, the consistency of a knowledge base $\text{KB}$ is defined *with respect to* another formula $f$. Recall that a knowledge base is a set of formulae, so we are defining the consistency of a set of formulae with respect to another formula. When, then, is a knowledge base $\text{KB}$ consistent with a formula $f$? When $M(\text{KB} \cup \{ f \})$ is a non-empty set. Recall that $M$ is the set of assignments to the variables in its input such that its input evaluates to true. So, $\text{KB}$ is consistent with $f$ when there is a set of assignments of values to the formulae in $\text{KB}$ and an assignment of values to the variables in $f$ such that *both* $\text{KB}$ and $f$ are true. In other words, $\text{KB}$ is consistent with $f$ when all formulae in $\text{KB}$ and $f$ can all be true *at the same time*. Upvotes: 3 [selected_answer]<issue_comment>username_2: Here is a (very) brief [wikipedia article on consistency in KB's](https://en.wikipedia.org/wiki/Consistency_(knowledge_bases)), which should answer your question. A KB is consistent if it does not contain any contradictions, i.e. $\lnot a$ and $a$ are not both derivable from it. Which is pretty much common sense if you think about it.
If I have a formula $f$, for example "A is a trout $\land$ A lays eggs", and my KB contains "fish lay eggs" and "a trout is a fish", then, if $f$ is true, i.e. trout do lay eggs, that formula is consistent with my KB, which states that trout are fish and that fish lay eggs. Edit: for a more formalised version, see username_1's answer. Upvotes: 0
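For small propositional examples, the definition of consistency ($M(\text{KB} \cup \{f\})$ being non-empty) can be checked mechanically by enumerating all assignments. A brute-force sketch, with formulas represented as Boolean functions of an assignment dict:

```python
from itertools import product

def models(formulas, variables):
    """All assignments under which every formula is True, i.e. M(KB)."""
    result = []
    for values in product([False, True], repeat=len(variables)):
        w = dict(zip(variables, values))  # one candidate "world"
        if all(f(w) for f in formulas):
            result.append(w)
    return result

def consistent(kb, f, variables):
    """KB is consistent with f iff M(KB ∪ {f}) is non-empty."""
    return len(models(kb + [f], variables)) > 0

# KB = { P ∨ Q }; f = ¬P. The world {P: False, Q: True} satisfies both,
# so this KB is consistent with f.
kb = [lambda w: w["P"] or w["Q"]]
```

This enumeration is exponential in the number of variables, which is fine for illustrating the definition but not for real knowledge bases.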
2019/02/11
547
1,469
<issue_start>username_0: In the diagram below, there are three variables: `X3` is a function of (depends on) `X1` and `X2`, `X2` also depends on `X1`. More specifically, `X3 = f(X1, X2)` and `X2 = g(X1)`. Therefore, `X3 = f(X1, g(X1))`. [![enter image description here](https://i.stack.imgur.com/Av2hx.png)](https://i.stack.imgur.com/Av2hx.png) If the probabilistic distribution of `X1` is known, is it possible to derive the probabilistic distribution of `X3`?<issue_comment>username_1: No, it is not possible. We could derive the most probable $x\_3$ by calculating the maximum likelihood: $x^\*\_3=\underset{x\_3}{\arg\max} p(x\_1,x\_2|x\_3)$. We are unable to calculate this as you only stated that there is a correlation, but we don't know what it looks like. Upvotes: 0 <issue_comment>username_2: Yes you can, provided you know about $f$ and $g$. The expression $X3 = f(X1, g(X1))$ can be written as $X3 = h(X1)$ where $h$ takes into account both $f$ and $g$. After this, finding the PDF is simple by differentiating the CDF: $$ F\_{X3} (x3) = P(X3 \leq x3) = P(h(X1) \leq x3) = P(X1 \leq h^{-1}(x3))$$ $$ \frac {d F\_{X3} (x3)}{dx3} = \frac {d P(X1 \leq h^{-1}(x3))}{dx3} = f\_{X3}(x3)$$ NOTE: The conventions followed are the same as used in the field of Probability (take care of the function inversion step in non-monotonic cases). Check these [lectures](https://www.youtube.com/watch?v=X-AzW70e2M0&list=PLUl4u3cNGP60hI9ATjSFgLZpbNJ7myAg6&index=115). Upvotes: 1
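When $h$ is hard to invert analytically (or non-monotonic), the distribution of $X3$ can also be approximated by Monte Carlo sampling. A sketch with invented choices $g(x) = x^2$, $f(x\_1, x\_2) = x\_1 + x\_2$ and $X1 \sim \text{Uniform}(0, 1)$:

```python
import random

def sample_x3(n, seed=0):
    """Draw n samples of X3 = f(X1, g(X1)) with X1 ~ Uniform(0, 1),
    g(x) = x**2 and f(x1, x2) = x1 + x2 (both chosen only for
    illustration). The empirical distribution of the returned samples
    approximates the distribution of X3."""
    rng = random.Random(seed)
    g = lambda x: x * x
    f = lambda x1, x2: x1 + x2
    return [f(x1, g(x1)) for x1 in (rng.random() for _ in range(n))]
```

Here $\mathbb{E}[X3] = \mathbb{E}[X1] + \mathbb{E}[X1^2] = 1/2 + 1/3 \approx 0.833$, which the sample mean approaches as $n$ grows.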
2019/02/11
459
1,377
<issue_start>username_0: What is the difference between automatic transcription and automatic speech recognition? Are they the same? Is my following interpretation correct? *Automatic transcription*: it converts the speech to text by looking at the whole spoken input *Automatic speech recognition*: it converts the speech to text by looking into word by word choices
2019/02/12
1,848
7,205
<issue_start>username_0: From [the reinforcement learning book section 13.3](http://incompleteideas.net/book/bookdraft2017nov5.pdf): [![enter image description here](https://i.stack.imgur.com/Ex4a5.png)](https://i.stack.imgur.com/Ex4a5.png) Using pytorch, I need to calculate a loss, and then the gradient is calculated internally. How do I obtain the loss from equations which are stated in the form of an iterative update with respect to the gradient? In this case: $\theta \leftarrow \theta + \alpha\gamma^tG\nabla\_{\theta}\ln\pi(A\_t|S\_t,\theta)$ What would be the loss? And in general, what would be the loss if the update rule were $\theta \leftarrow \theta + \alpha C\nabla\_{\theta}g(x|\theta)$ for some general (differentiable) function $g$ parameterized by theta?<issue_comment>username_1: You can find an implementation of the REINFORCE algorithm (as defined in your question) in PyTorch at the following URL: <https://github.com/JamesChuanggg/pytorch-REINFORCE/>. First of all, I would like to note that a policy can be represented or implemented as a neural network, where the input is the state (you are currently in) and the output is a "probability distribution over the actions you can take from that state received as input". In the Python module <https://github.com/JamesChuanggg/pytorch-REINFORCE/blob/master/reinforce_discrete.py>, the policy is defined as a neural network with 2 linear layers, where the first linear layer is followed by a ReLU activation function, whereas the second is followed by a soft-max. In that same Python module, the author also defines another class called `REINFORCE`, which creates a `Policy` object (in the `__init__` method) and defines it as a property of that class. The class `REINFORCE` also defines two methods `select_action` and `update_parameters`. These two methods are called from the [`main.py`](https://github.com/JamesChuanggg/pytorch-REINFORCE/blob/master/main.py) module, where the main loop of the `REINFORCE` algorithm is implemented.
In that same main loop, the author declares lists `entropies`, `log_probs` and `rewards`. Note that these lists are re-initialized at every episode. A "log\_prob" and an "entropy" are returned from the `select_action` method, whereas a "reward" is returned from the environment after having executed one environment step. The environment is provided by the OpenAI's Gym library. The lists `entropies`, `log_probs` and `rewards` are then used to update the parameters, i.e. they are used by the method `update_parameters` defined in the class `REINFORCE`. Let's now look more closely at what these methods, `select_action` and `update_parameters`, actually do. `select_action` first calls the `forward` method of the class `Policy`, which returns the output of the forward pass of the NN (i.e. the output of the soft-max layer), so it returns the probabilities of selecting each of the available actions (from the state given as input). It then selects the probability associated with the first action (I guess it picks the probability associated with the action with the highest probability), denoted by `prob` (in the source code). Essentially, what I've described so far regarding this `select_action` method is the computation of $\pi(A\_t \mid S\_t, \theta)$ (as shown in the pseudocode of your question). Afterwards, in the same method `select_action`, the author also computes the log of that probability I've just mentioned above (i.e. the one associated with the action with the highest probability, i.e. the log of `prob`), denoted by `log_prob`. In that same method, the entropy (as defined in [this answer](https://datascience.stackexchange.com/a/20301/10640)) is calculated. In reality, the author calculates the entropy using only one distribution (instead of two): more specifically, the entropy is calculated as follows `entropy = -(probs*probs.log()).sum()`.
In fact, the entropy loss function usually requires the ground-truth labels (as explained in the answer I linked you to above), but, in this case, we do not have ground-truth labels (given that we are performing RL and not supervised learning). Nonetheless, I can't really tell you why the entropy is calculated like this, in this case. Finally, the method `select_action` then returns `action[0], log_prob, entropy`. First of all, I would like to note that the method `update_parameters` is called only at the end of each episode (in the `main.py` module). In that same method, a variable called `loss` is first initialized to zero. In that method, we then iterate over the list of rewards for the current episode. Inside that loop of the `update_parameters` method, the return `R` is calculated. `R` is also multiplied by $\gamma$. On each time step, the loss is then calculated as follows ``` loss = loss - (log_probs[i]*(Variable(R).expand_as(log_probs[i])).cuda()).sum() - (0.0001*entropies[i].cuda()).sum() ``` The `loss` is updated by subtracting from the previous `loss` the quantity ``` (log_probs[i]*(Variable(R).expand_as(log_probs[i])).cuda()).sum() - (0.0001*entropies[i].cuda()).sum() ``` where `log_probs` are the log probabilities calculated in the `select_action` method. `log_probs` is the part $\log \pi(A\_t \mid S\_t, \theta)$ of the update rule of your pseudocode. `log_probs` are then multiplied by the return `R`. We then sum the result of this multiplication over all elements of the vector. We then subtract the entropies multiplied by 0.0001 from this result. I can't really tell you why the author decided to implement the loss in this way. I would need to think about it a little more. The following article may also be useful: <https://pytorch.org/docs/stable/distributions.html>. Upvotes: 3 [selected_answer]<issue_comment>username_2: Chiming in because I had the same question and stumbled across your post.
It seems like the general version of your question still has not been answered. In general, a well-formed gradient update rule is all you need to be able to train the network. We are thinking of converting to a "loss function" because that is the typical flow in the structure that pytorch provides for training networks via their autograd framework. But the pytorch loss function is really just meant to be the last in a list of nested operations over which we are going to compute gradients. So if you already know your gradient update rule, the easiest way to write it down as a pytorch "loss function" is as the integral of your gradient update rule with respect to the network's output -- then pytorch will compute your gradient update rule for you automatically. So in your example, the loss function would be: $-\gamma^tG \ln\pi(A\_t | S\_t, \theta\_t)$, as already noted by @Brale and others. The minus sign is there because pytorch optimizers will typically carry out minimization, but your gradient update rule is intended to maximize. Hopefully this explanation is helpful to others that are still learning pytorch! Also, I would highly recommend that anyone just getting started with pytorch read their latest autograd tutorial [here](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py) Upvotes: 0
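Putting the two answers together, the scalar being minimised over an episode is $-\sum\_t \gamma^t G\_t \log\pi(A\_t|S\_t,\theta)$. Here is that arithmetic written out in plain Python (in real PyTorch code the log-probabilities would be autograd tensors, so minimising this scalar reproduces the update rule):

```python
def reinforce_loss(rewards, log_probs, gamma):
    """Scalar whose minimisation performs the REINFORCE update:
    L = -sum_t gamma^t * G_t * log pi(A_t | S_t, theta), where
    G_t = sum_{k >= t} gamma^(k - t) * r_k is the return from step t."""
    T = len(rewards)
    returns, G = [0.0] * T, 0.0
    for t in reversed(range(T)):  # accumulate returns backwards
        G = rewards[t] + gamma * G
        returns[t] = G
    return -sum((gamma ** t) * returns[t] * log_probs[t] for t in range(T))
```

For a two-step episode with `rewards = [1, 1]`, `gamma = 0.5` and `log_probs = [-1.0, -2.0]`, the returns are $G\_1 = 1$ and $G\_0 = 1.5$, so the loss is $-((1)(1.5)(-1) + (0.5)(1)(-2)) = 2.5$.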
2019/02/12
639
2,621
<issue_start>username_0: I am a beginner, just started studying around NLP, specifically various language models. So far, my understanding is that - the goal is to understand/produce natural language. So far the methods I have studied speak about correlation of words, using correct combination to make a meaningful sentence. I also have the sense that the language modeling does not really care about the punctuation marks (or did I miss it?) Thus I am curious is there a way they can classify sentence types such as Declarative, Imperative, Interrogative or Exclamatory?<issue_comment>username_1: You can generally identify the mood of a verb by looking at grammatical structures; you don't need any language model for it. The three major moods in English are declarative, interrogative, and imperative. Assuming English is the language you will be working with, here are some questions: * Does he like coffee? * Is this a piece of chocolate? * When did you go there? * How is that possible? * Have you got any cheese? Apart from the obvious marker '?', these examples all start with either an auxiliary verb or a *wh*-word, so are fairly easy to recognise with a simple lookup. The one exception I can think of is an imperative (*Do be quiet, please*), where the *do* is followed by a further verb, which wouldn't be the case with a question. Imperatives: * Go to school now! * Eat up your vegetables! * Do shut up. * Have a go at it! These start with a main verb in the base form, or with an auxiliary followed by a verb/not a pronoun. Once you identified all interrogative and imperative sentences, all the remaining ones should be declarative. So, all you would need is a small list of auxiliary verb, pronouns, and *wh*-words, and with a bit of simple string matching you should get most of the way there. Undoubtedly there will be some exceptions, but there shouldn't be too many of them. 
In other languages there will be similar structures; or there might be explicit markers (e.g. in Hawai'ian an imperative starts with the marker 'e', as in *Hele 'oe ma ka hale* "you go to the house" vs *E hele 'oe ma ka hale* "Go to the house!") Upvotes: 1 <issue_comment>username_2: If you can classify words then you can easily classify sentences. One of the interesting problems you can solve then is »what are allowed sentence forms?« How can you classify words? By searching for features that are common between them. These features are any properties that hold for a word, such as which other words it appears with in the same sentence, what their spatial relation is, and how frequently it appears. Upvotes: 0
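The lookup-based heuristics from the first answer can be sketched in a few lines; the word lists below are deliberately tiny and purely illustrative, so a real system would need fuller lists and exception handling:

```python
AUX = {"do", "does", "did", "is", "are", "was", "were", "have", "has",
       "can", "could", "will", "would", "should", "may", "might"}
WH = {"who", "what", "when", "where", "why", "how", "which"}
BASE_VERBS = {"go", "eat", "shut", "have", "be", "stop", "listen"}  # toy list

def sentence_mood(sentence):
    """Very rough mood classifier following the heuristics above."""
    words = sentence.lower().rstrip("?!. ").split()
    if not words:
        return "declarative"
    first = words[0]
    if sentence.rstrip().endswith("?") or first in WH:
        return "interrogative"
    if first in BASE_VERBS:
        return "imperative"                       # "Go to school now!"
    if first in AUX and len(words) > 1 and words[1] in BASE_VERBS:
        return "imperative"                       # "Do shut up."
    if first in AUX:
        return "interrogative"                    # aux + subject, missing '?'
    return "declarative"
```

Note the ordering of the checks: the aux-followed-by-verb imperative case ("Do shut up") must be tested before the plain aux-initial question case, exactly as the answer describes.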
2019/02/13
258
1,214
<issue_start>username_0: For example, you train on dataset 1 with an adaptive optimizer like Adam. Should you reload the learning schedule, etc., from the end of training on dataset 1 when attempting transfer to dataset 2? Why or why not?<issue_comment>username_1: When doing transfer learning it makes sense to have different update policies for "inherited" parameters and the "new" parameters. "Inherited" parameters are pre-trained on dataset1 and they typically form the front end of the deep model. The "new" parameters are trained from scratch and they typically produce the desired predictions on dataset2. It would be sensible to restart the learning schedule for the "new" parameters. However, most often we would avoid doing that for "inherited" parameters in order to avoid catastrophic forgetting. Upvotes: 2 <issue_comment>username_2: Adam adapts its effective per-parameter step sizes over time, based on accumulated gradient statistics. When you change to the new training data, you want to reset this optimizer state (and any learning-rate schedule). But Adam might not be the best choice for the second round of training - it can make big changes to the inherited weights, which prevents the transfer of previous learning. It can be good to switch to simple SGD for the second round. Upvotes: 1
2019/02/13
681
2,740
<issue_start>username_0: I am purchasing a Titan RTX GPU. Everything seems fine with that except its float32 & float64 performance, which seems lower vis-a-vis some of its counterparts. I wanted to understand whether the single-precision and double-precision performance of a GPU affects deep learning training or efficiency. We work mostly with images, however not limited to that.<issue_comment>username_1: First off I would like to post this comprehensive [blog](http://timdettmers.com/2018/11/05/which-gpu-for-deep-learning/) which makes a comparison between all kinds of NVIDIA GPUs. The most popular deep learning library, TensorFlow, by default uses 32-bit floating-point precision. This choice is made because it serves two purposes: * Lower memory requirements * Faster calculations 64-bit is only marginally better than 32-bit, in that very small gradient values will also be propagated to the earliest layers. But the trade-off of that gain in performance vs (the time for calculations + memory requirements + time for running through so many epochs so that those small gradients actually do something) is not worth it. There are state-of-the-art CNN architectures which inject gradient signals at intermediate layers (auxiliary classifiers) and have very good performance. So overall, 32-bit performance is the one which should really matter for deep learning, unless you are doing a very, very high-precision job (which would still hardly matter, as any small difference due to a 64-bit representation is literally erased by any kind of softmax or sigmoid). So 64-bit might increase your classification accuracy by $\ll 1 \%$, and this will only become significant over very large datasets. As far as raw specs go, in comparison to the 2080Ti, the TITAN RTX will perform better in fp64 (as its memory is double that of the 2080Ti and it has higher clock speeds, BW, etc.), but a more practical approach would be to use two 2080Ti's coupled together, giving much better performance for the price. **Side Note:** Good GPUs require good CPUs.
It is difficult to tell whether a given CPU will bottleneck a GPU, as it entirely depends on how the training is being performed (whether the data is fully loaded into the GPU before training occurs, or continuous feeding from the CPU takes place). Here are a few links explaining the problem: [CPU and GPU Bottleneck: A Detailed Explanation](https://www.wepc.com/tips/cpu-gpu-bottleneck/) [A Full Hardware Guide to Deep Learning](http://timdettmers.com/2018/12/16/deep-learning-hardware-guide/) Upvotes: 3 [selected_answer]<issue_comment>username_2: Deep models are very tolerant of arithmetic underflow. You can expect negligible differences in prediction accuracy between FP32 and FP16 models. Check this [paper](https://arxiv.org/pdf/1710.03740.pdf) for concrete results. Upvotes: 1
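A quick, self-contained illustration of why precision width matters mostly for *accumulations* (which is why the mixed-precision paper linked above keeps FP32 master copies of the weights): summing many gradient-sized values in fp16 stalls once the addend falls below the running sum's rounding step, while fp32 and fp64 stay close to the true value. This is a toy NumPy sketch, not a benchmark of any GPU:

```python
import numpy as np

# Sum 100,000 copies of a small "gradient-sized" value; the true sum is 10.0.
vals = np.full(100_000, 1e-4)

s16 = np.float16(0.0)
for v in vals.astype(np.float16):
    s16 = np.float16(s16 + v)        # fp16 running sum stalls far below 10.0

s32 = vals.astype(np.float32).sum()  # close to 10.0
s64 = vals.astype(np.float64).sum()  # essentially exact

print(float(s16), float(s32), float(s64))
```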
2019/02/13
2,284
6,254
<issue_start>username_0: In the paper [Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor](https://arxiv.org/pdf/1801.01290.pdf), they define the loss function for the policy network as $$ J\_\pi(\phi)=\mathbb E\_{s\_t\sim \mathcal D}\left[D\_{KL}\left(\pi\_\phi(\cdot|s\_t)\Big\Vert {\exp(Q\_\theta(s\_t,\cdot))\over Z\_\theta(s\_t)}\right)\right] $$ Applying the reparameterization trick, let $a\_t=f\_\phi(\epsilon\_t;s\_t)$, then the objective can be rewritten as $$ J\_\pi(\phi)=\mathbb E\_{s\_t\sim \mathcal D, \epsilon \sim\mathcal N}[\log \pi\_\phi(f\_\phi(\epsilon\_t;s\_t)|s\_t)-Q\_\theta(s\_t,f\_\phi(\epsilon\_t;s\_t))] $$ They compute the gradient of the above objective as follows $$ \nabla\_\phi J\_\pi(\phi)=\nabla\_\phi\log\pi\_\phi(a\_t|s\_t)+(\nabla\_{a\_t}\log\pi\_\phi(a\_t|s\_t)-\nabla\_{a\_t}Q(s\_t,a\_t))\nabla\_\phi f\_\phi(\epsilon\_t;s\_t) $$ The thing that confuses me is the first term in the gradient: where does it come from? To the best of my knowledge, the second large term is already the gradient we need, so why do they add the first term?<issue_comment>username_1: I'll give it a go here and try to answer your question. I'm not sure if this is entirely correct, so if someone thinks that it isn't, please correct me. I'll disregard the expectation here to make things simpler. First, note that the policy $\pi$ depends on the parameter vector $\phi$ and the function $f\_\phi(\epsilon\_t;s\_t)$, and the value function $Q$ depends on the parameter vector $\theta$ and the same function $f\_\phi(\epsilon\_t;s\_t)$. Also, one important thing that the authors mention in the paper and you didn't mention is that this solution is an approximate gradient, not the true gradient.
Our goal is to calculate the gradient of the objective function $J\_\pi$ with respect to $\phi$, so disregarding the expectation we have: $\nabla\_\phi J\_\pi (\phi) = \nabla\_\phi \log\pi(\phi,f\_\phi (\epsilon\_t;s\_t)) - \nabla\_\phi Q(s\_t,\theta,f\_\phi (\epsilon\_t;s\_t))$ Let's see the gradient of the first term on the right-hand side. To get the full gradient we need to calculate the derivative w.r.t. both variables, $\phi$ and $f\_\phi (\epsilon\_t;s\_t)$, so we have: $\nabla\_\phi \log\pi(\phi,f\_\phi (\epsilon\_t;s\_t)) = \frac {\partial \log\pi(\phi,f\_\phi (\epsilon\_t;s\_t))}{\partial \phi} + \frac{\partial \log\pi(\phi,f\_\phi (\epsilon\_t;s\_t))}{\partial f\_\phi(\epsilon\_t;s\_t)} \frac{\partial f\_\phi(\epsilon\_t;s\_t)}{\partial \phi}$ This is where the approximation comes in: they replace $f\_\phi (\epsilon\_t;s\_t)$ with $a\_t$ in some places and we have: $\nabla\_\phi \log\pi(\phi,f\_\phi (\epsilon\_t;s\_t)) \approx \frac {\partial \log\pi(\phi,a\_t)}{\partial \phi} + \frac{\partial \log\pi(\phi,a\_t)}{\partial a\_t} \frac{\partial f\_\phi(\epsilon\_t;s\_t)}{\partial \phi}$ $\nabla\_\phi \log\pi(\phi,f\_\phi (\epsilon\_t;s\_t)) \approx \nabla\_\phi \log\pi(\phi,a\_t) + \nabla\_{a\_t} \log\pi(\phi,a\_t) \nabla\_\phi f\_\phi (\epsilon\_t;s\_t)$ For the second term in the first expression on the right-hand side we have: $\nabla\_\phi Q(s\_t,\theta,f\_\phi (\epsilon\_t;s\_t)) = \frac {\partial Q(s\_t,\theta,f\_\phi (\epsilon\_t;s\_t))}{\partial \phi} + \frac{\partial Q(s\_t,\theta,f\_\phi (\epsilon\_t;s\_t))}{\partial f\_\phi(\epsilon\_t;s\_t)} \frac{\partial f\_\phi(\epsilon\_t;s\_t)}{\partial \phi}$ $\nabla\_\phi Q(s\_t,\theta,f\_\phi (\epsilon\_t;s\_t)) \approx \frac {\partial Q(s\_t,\theta,a\_t)}{\partial \phi} + \frac{\partial Q(s\_t,\theta,a\_t)}{\partial a\_t} \frac{\partial f\_\phi(\epsilon\_t;s\_t)}{\partial \phi}$ The first term on the right-hand side is 0 because $Q$ does not depend on $\phi$, so we have: $\nabla\_\phi Q(s\_t,\theta,f\_\phi (\epsilon\_t;s\_t)) \approx
\nabla\_{a\_t}Q(s\_t, \theta,a\_t)\nabla\_\phi f\_\phi(\epsilon\_t;s\_t)$ Now you add things up and you get the final result. Upvotes: 3 [selected_answer]<issue_comment>username_2: This is meant more as a comment on the previous answer. I also originally thought that $$ \nabla\_{\theta}\log \pi\_{\theta}(f\_{\theta}(\varepsilon, s)\mid s) = \nabla\_{a}\log\pi\_{\theta}(a\mid s)\vert\_{a=f\_{\theta}(\varepsilon,s)}\nabla\_{\theta}f\_{\theta}(\varepsilon, s), $$ instead of $$ \nabla\_{\theta}\log \pi\_{\theta}(f\_{\theta}(\varepsilon, s)\mid s) = \nabla\_{a}\log\pi\_{\theta}(a\mid s)\vert\_{a=f\_{\theta}(\varepsilon,s)}\nabla\_{\theta}f\_{\theta}(\varepsilon, s) + \nabla\_{\theta}\log\pi\_{\theta}(a\mid s)\vert\_{a=f\_{\theta}(\varepsilon, s)}. $$ The following is certainly less elegant, but I hope that it gives some additional intuition for why we need to take the gradient with respect to $a$ and $\theta$. For simplicity, I will assume that $a$ is one-dimensional, but the same argument would apply for higher dimensions. In the SAC paper, they assume that $\pi\_{\theta}$ is a Gaussian distribution $\mathcal{N}(\mu\_{\theta}(s), \sigma\_{\theta}(s))$. Therefore: $$ \log\pi\_{\theta}(a\mid s)=-\frac{1}{2}\log(2\pi) - \log\sigma\_{\theta}(s)-\frac{(a-\mu\_{\theta}(s))^2}{2\sigma\_{\theta}(s)^2}.
$$ Then the gradient becomes: \begin{align} \nabla\_{\theta}\log \pi\_{\theta}(f\_{\theta}(\varepsilon, s)\mid s)&=-\frac{\nabla\_{\theta}\sigma\_{\theta}(s)}{\sigma\_{\theta}(s)}-\frac{(f\_{\theta}(\varepsilon, s)-\mu\_{\theta}(s))(\nabla\_{\theta}f\_{\theta}(\varepsilon,s)-\nabla\_{\theta}\mu\_{\theta}(s))}{\sigma\_{\theta}(s)^2} \\&+\frac{(f\_{\theta}(\varepsilon, s)-\mu\_{\theta}(s))^2\nabla\_{\theta}\sigma\_{\theta}(s)}{\sigma\_{\theta}(s)^3}.\end{align} Let us now calculate the terms on the right-hand side: $$ \nabla\_{a}\log\pi\_{\theta}(a\mid s)\vert\_{a=f\_{\theta}(\varepsilon,s)}=-\frac{f\_{\theta}(\varepsilon,s)-\mu\_{\theta}(s)}{\sigma\_{\theta}(s)^2} $$ and \begin{align}\nabla\_{\theta}\log\pi\_{\theta}(a\mid s)\vert\_{a=f\_{\theta}(\varepsilon, s)}&=-\frac{\nabla\_{\theta}\sigma\_{\theta}(s)}{\sigma\_{\theta}(s)}+\frac{(f\_{\theta}(\varepsilon, s)-\mu\_{\theta}(s))\nabla\_{\theta}\mu\_{\theta}(s)}{\sigma\_{\theta}(s)^2}\\ &+\frac{(f\_{\theta}(\varepsilon, s)-\mu\_{\theta}(s))^2\nabla\_{\theta}\sigma\_{\theta}(s)}{\sigma\_{\theta}(s)^3},\end{align} which proves the equality. For the Q-function, we apply the chain rule as usual, since $Q$ does not depend on $\theta$. Upvotes: 1
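The algebra above can also be checked numerically. The sketch below is my own toy check, not from the paper: it uses a 1-D Gaussian policy with parameters $\theta = (\mu, \log\sigma)$ and $f_\theta(\varepsilon) = \mu + \sigma\varepsilon$, and verifies by finite differences that the total derivative of $\log\pi_\theta(f_\theta(\varepsilon))$ really is the sum of the direct $\theta$-gradient and the reparameterized path term:

```python
import numpy as np

def f(theta, eps):
    # Reparameterized sample: a = mu + sigma * eps, with theta = (mu, log_sigma).
    mu, log_sigma = theta
    return mu + np.exp(log_sigma) * eps

def log_pi(theta, a):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return -0.5 * np.log(2 * np.pi) - log_sigma - (a - mu) ** 2 / (2 * sigma ** 2)

def total_grad_numeric(theta, eps, h=1e-6):
    # d/dtheta of log pi_theta(f_theta(eps)) by central finite differences.
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += h
        tm[i] -= h
        g[i] = (log_pi(tp, f(tp, eps)) - log_pi(tm, f(tm, eps))) / (2 * h)
    return g

def analytic_grad(theta, eps):
    # Direct theta-gradient at fixed a, plus (d log pi / d a) * (d f / d theta).
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    a = f(theta, eps)
    d_mu = (a - mu) / sigma ** 2                   # d log pi / d mu at fixed a
    d_ls = -1.0 + (a - mu) ** 2 / sigma ** 2       # d log pi / d log_sigma at fixed a
    d_a = -(a - mu) / sigma ** 2                   # d log pi / d a
    grad_f = np.array([1.0, sigma * eps])          # d f / d (mu, log_sigma)
    return np.array([d_mu, d_ls]) + d_a * grad_f

theta = np.array([0.3, -0.2])
eps = 0.7
print(total_grad_numeric(theta, eps), analytic_grad(theta, eps))
```

(For this particular Gaussian the total gradient collapses to $(0, -1)$, since $\log\pi_\theta(f_\theta(\varepsilon)) = -\tfrac{1}{2}\log 2\pi - \log\sigma - \varepsilon^2/2$ depends on $\theta$ only through $-\log\sigma$; the path term exactly cancels the $(a-\mu)$-dependent parts of the direct term.)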
2019/02/14
1,664
5,205
<issue_start>username_0: In reinforcement learning, we often define two functions, the *state-value function* $$V^\pi(s) = \mathbb{E}\_{\pi} \left[\sum\_{k=0}^{\infty} \gamma^{k}R\_{t+k+1} \Bigg| S\_t=s \right]$$ and the *state-action-value function* $$Q^\pi(s,a) = \mathbb{E}\_{\pi}\left[\sum\_{k=0}^{\infty} \gamma^{k}R\_{t+k+1}\Bigg|S\_t=s, A\_t=a \right]$$ where $\mathbb{E}\_{\pi}$ means that these functions are defined as the [expectation](https://en.wikipedia.org/wiki/Expected_value) with respect to a fixed policy $\pi$ of what is often called the *return*, $\sum\_{k=0}^{\infty} \gamma^{k}R\_{t+k+1}$, where $\gamma$ is a *discount factor* and $R\_{t+k+1}$ is the *reward* received from the environment (while the agent interacts with it) from time $t$ onwards. So, both the $V$ and $Q$ functions are defined as expectations of the return (or the *cumulative future discounted reward*), but these expectations have different "conditions" (or are conditioned on different variables). The $V$ function is the expectation (with respect to a fixed policy $\pi$) of the return given that the current state (the state at time $t$) is $s$. The $Q$ function is the expectation (with respect to a fixed policy $\pi$) of the return conditioned on the fact that the current state the agent is in is $s$ and the action the agent takes at $s$ is $a$. Furthermore, the *Bellman optimality equation* for $V^\*$ (the optimal value function) can be expressed as the Bellman optimality equation for $Q^{\pi^\*}$ (the optimal state-action value function associated with the optimal policy $\pi^\*$) as follows $$ V^\*(s) = \max\_{a \in \mathcal{A}(s)} Q^{\pi^\*}(s, a) $$ This is actually shown (or proved) at page 76 of the book "Reinforcement Learning: An Introduction" (1st edition) by <NAME> and <NAME>. Are there any other functions, apart from the $V$ and $Q$ functions defined above, in the RL context? If so, how are they related? 
For example, I've heard of the "advantage" or "continuation" functions. How are these functions related to the $V$ and $Q$ functions? When should one be used as opposed to the other? Note that I'm *not just* asking about the "advantage" or "continuation" functions, but, if possible, any existing function that is used in RL that is similar (in purpose) to these mentioned functions, and how they are related to each other.<issue_comment>username_1: **Advantage function**: $A(s,a) = Q(s,a) - V(s)$ More interesting is the **General Value Function (GVF)**, the expected sum of the (discounted) future values of some arbitrary signal, not necessarily reward. It is therefore a generalization of value function $V(s)$. The GVF is defined on page 459 of the 2nd edition of [Sutton and Barto's RL book](http://incompleteideas.net/book/the-book-2nd.html) as $$v\_{\pi,\gamma,C}(s) =\mathbb{E}\left[\left.\sum\_{k=t}^\infty\left(\prod\_{i=t+1}^k \gamma(S\_i)\right)C\_{k+1}\right\rvert S\_t=s, A\_{t:\infty}\sim\pi\right]$$ where $C\_t \in \mathbb{R}$ is the signal being summed over time. $\gamma(S\_t)$ is a function $\gamma: \cal{S}\to[0,1]$ allowing the discount rate to depend upon the state. Sutton and Barto call it the termination function. [Some](http://reinforcement-learning.ml/papers/pgmrl2018_sherstan.pdf) call it the continuation function. Also of note are the **differential value functions**. These are used in the continuing, undiscounted setting. Because there is no discounting, the expected sum of future rewards is unbounded. Instead, we optimize the expected differential reward $R\_{t+1}-r(\pi)$, where $r(\pi)$ is the average reward under policy $\pi$. $$v\_{\pi,\,diff}(s) = \sum\_a \pi(a|s) \sum\_{s',r} p(s',r|s,a)\left[r-r(\pi)+ v\_{\pi,\,diff}(s')\right]$$ $$v\_{\*,\,diff}(s) = \max\_a \sum\_{s',r} p(s',r|s,a)\left[r-\max\_\pi r(\pi)+ v\_{\*,\,diff}(s')\right]$$ The differential value functions assume that a single fixed value of $r(\pi)$ exists. 
That is, they assume the MDP is "ergodic." See section 10.3 of Sutton and Barto for details. Upvotes: 4 [selected_answer]<issue_comment>username_2: There is also the simpler **action value function** $$q\_\*(a) = \mathbb{E} \left[ R\_t \mid A\_t = a\right],$$ which we try to approximate when solving *context-free* **bandit problems**. You can also similarly define the action value function for *contextual* bandit problems by also conditioning on the context (rather than just on the action). See chapter 2 of the book [Reinforcement Learning: An Introduction](http://incompleteideas.net/book/RLbook2020.pdf#page=48) (2nd edition) by Barto and Sutton for more details. There are also the **afterstate value functions**, $v(s')$. Check [this question](https://ai.stackexchange.com/q/24816/2444) and [section 6.8](http://incompleteideas.net/book/RLbook2020.pdf#page=158) of the just cited book. Moreover, there's the **state-action-goal value function**, $q(s, a, g)$, as described in the paper [Hindsight Experience Replay](https://papers.nips.cc/paper/2017/file/453fadbd8a1a3af50a9df4df899537b5-Paper.pdf) (2017), although I am not sure how we can mathematically define it (i.e. as a function of other value functions or as a Bellman equation). Upvotes: 0
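As a small concrete illustration of the advantage function mentioned above: with tabular values, $V^\pi(s) = \sum_a \pi(a|s)Q^\pi(s,a)$, so the advantage $A^\pi(s,a) = Q^\pi(s,a) - V^\pi(s)$ always averages to zero under the policy. The numbers below are made up purely for the sketch:

```python
import numpy as np

# Made-up tabular Q-values and policy for a 2-state, 3-action problem.
Q = np.array([[1.0, 2.0, 0.5],    # Q[s, a]
              [0.0, 1.5, 1.0]])
pi = np.array([[0.2, 0.5, 0.3],   # pi[a | s], rows sum to 1
               [0.4, 0.4, 0.2]])

V = (pi * Q).sum(axis=1)          # V(s) = E_{a ~ pi}[Q(s, a)]
A = Q - V[:, None]                # A(s, a) = Q(s, a) - V(s)

# The expected advantage under the policy is zero in every state.
print(V, (pi * A).sum(axis=1))
```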
2019/02/15
1,077
4,462
<issue_start>username_0: Is the optimal policy always stochastic (that is, a map from states to a probability distribution over actions) if the environment is also stochastic? Intuitively, if the environment is *deterministic* (that is, if the agent is in a state $s$ and takes action $a$, then the next state $s'$ is always the same, no matter which time step), then the optimal policy should also be deterministic (that is, it should be a map from states to actions, and not to a probability distribution over actions).<issue_comment>username_1: > > Is the optimal policy always stochastic (that is, a map from states to a probability distribution over actions) if the environment is also stochastic? > > > No. An optimal policy is generally deterministic unless: * Important state information is missing (a POMDP). For example, in a map where the agent is not allowed to know its exact location or remember previous states, and the state it is given is not enough to disambiguate between locations. If the goal is to get to a specific end location, the optimal policy may include some random moves in order to avoid becoming stuck. Note that the environment in this case could be deterministic (from the perspective of someone who can see the whole state), but still lead to requiring a stochastic policy to solve it. * There is some kind of minimax game theory scenario, where a deterministic policy can be punished by the environment or another agent. Think scissors/paper/stone or prisoner's dilemma. > > Intuitively, if the environment is deterministic (that is, if the agent is in a state $s$ and takes action $a$, then the next state $s'$ is always the same, no matter which time step), then the optimal policy should also be deterministic (that is, it should be a map from states to actions, and not to a probability distribution over actions).
> > > That seems reasonable, but you can take that intuition further with any method based on a value function: *If you have found an optimal value function, then acting greedily with respect to it **is** the optimal policy.* The above statement is just a natural language re-statement of the Bellman optimality equation: $$v^\*(s) = \text{max}\_a \sum\_{r,s'}p(r,s'|s,a)(r+\gamma v^\*(s'))$$ i.e. the optimal values are obtained when always choosing the action that maximises reward plus discounted value of next step. The $\text{max}\_a$ operation is deterministic (if necessary you can break ties for max value deterministically with e.g. an ordered list of actions). Therefore, any environment that can be modelled by a MDP and solved by a value-based method (e.g. value iteration, Q-learning) has an optimal policy which is deterministic. It is possible in such an environment that the optimal solution may not be stochastic at all (i.e. if you add any randomness to the deterministic optimal policy, the policy will become strictly worse). However, when there are ties for maximum value for one or more actions in one or more states then there are multiple equivalent optimal and deterministic policies. You may construct a stochastic policy that mixes these in any combination, and it will also be optimal. Upvotes: 4 <issue_comment>username_2: I would say no. For example, consider the [multi-armed bandit problem](https://en.wikipedia.org/wiki/Multi-armed_bandit). So, you have $n$ arms which all have a probability of giving you a reward (1 point, for example), $p\_i$, $i$ being between 1 and $n$. This is a simple stochastic environment: this is a one state environment, but it is still an environment. But obviously the optimal policy is to choose the arm with the highest $p\_i$. So this is not a stochastic policy. 
Obviously, if you are in an environment where you play against another agent (a game theory setting), your optimal policy will certainly be stochastic (think of a poker game, for example). Upvotes: 3 <issue_comment>username_3: I'm thinking of a probability landscape, in which you find yourself as an actor, with various unknown peaks and troughs. A good deterministic approach is always likely to lead you to the nearest local optimum, but not necessarily to the global optimum. To find the global optimum, something like an MCMC algorithm would allow you to stochastically accept a temporarily worse outcome in order to escape from a local optimum and find the global optimum. My intuition is that in a stochastic environment this would also be true. Upvotes: 0
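The value-iteration argument in the first answer is easy to see in code: even with a randomly generated *stochastic* transition kernel, the policy extracted at the end is a plain deterministic argmax. The MDP below is made up (random numbers), purely to illustrate the point:

```python
import numpy as np

# Tiny made-up stochastic MDP: random transition kernel and rewards.
n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)

P = rng.random((n_actions, n_states, n_states))  # P[a, s, s']
P /= P.sum(axis=2, keepdims=True)                # rows are distributions
R = rng.random((n_states, n_actions))            # R[s, a]

# Value iteration: V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ].
V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * np.einsum('asn,n->sa', P, V)
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)  # deterministic: exactly one action per state
print(policy)
```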
2019/02/15
2,189
9,048
<issue_start>username_0: This is not meant to be negative or a joke, but rather looking for a productive solution on AI development, engineering and its impact on human life: Lately with my Google searches, the AI model keeps auto-filling the ending of my searches with: “...in Vietnamese” And “...in a Vietnamese home” The issue is I have never searched for that, but because of my last name the model is creating this context. The other issue is that I’m a halfy and my dad is actually third generation; I grew up mainstream American and don’t even speak Vietnamese. I’m not even sure what a Vietnamese home means. My buddy is in a similar situation, being South Asian, and noticed the same exact thing, more so with YouTube recommended videos. We already have enough issues in the US with racism, projections of who others expect us to be based on any number of things, stereotyping and putting people in boxes to limit them - I truly believe AI is adding to the problem, not helping. How can we fix this? Moreover, how can we use AI to bring out people's true selves and talents, and empower and free them to create their lives how they like? There is huge potential here to harness AI in ways that can bring us more freedom, joy and beauty, so people can be the whole of themselves and who they really are. Then meet people's needs, wishes, dreams and hopes. Give them shoulders to stand on to create their reality, not live someone else's projection of themselves.
Your suspicion that it has something to do with Google's service detecting your heritage seems plausible. The whole thing is based around statistical inference. At no point does any machine "know" what Vietnamese - or in fact any of the words in your query - actually *means*. This is a weakness of pretty much all core NLP work in AI, and is called the [grounding problem](https://en.wikipedia.org/wiki/Symbol_grounding_problem). This is why, for instance, samples of computer-generated text produce such surreal and comic material. The rules of grammar are followed, but semantics and longer-term coherence are a mess. Commercial chatbot systems work around this with a lot of bespoke coding around some subject area, such as booking tickets, shopping, etc. These smaller domains are possible for human developers to "police", connecting them back to reality, and avoiding the open-ended nature of the whole of human language. Search engine text autocomplete, however, cannot realistically use this approach. Your best bets are probably: * Wait it out. The service will improve. Whatever language use statistics are at work here are likely to change over time. Your own normal use of the system without using the suggestions will be part of that data stream of corrections. * Send a complaint to Google. Someone, somewhere in Google will care about these results, and view them as errors to be fixed. Neither of these approaches guarantees results in any time frame, sadly. > > We already have enough issues in the US with racism, projections of who others expect us to be based on any number of things, stereotyping and putting people in boxes to limit them - I truly believe AI is adding to the problem, not helping. > > > You are not alone in having these worries. The statistics-driven nature of machine learning algorithms and use of "big data" to train them means that machines are exposing bias and prejudice that are long buried in our language.
These biases are picked up by machinery and then used by companies that don't necessarily want to reflect those attitudes. A similar example occurs in natural language processing models with word embeddings. A very interesting feature of LSTM neural networks that learn statistical language models is that you can look at [word embeddings](https://towardsdatascience.com/introduction-to-word-embedding-and-word2vec-652d0c2060fa), mathematical representations of words, and do "word math": $$W(king) - W(man) + W(woman) \approx W(queen)$$ $$W(he) - W(male) + W(female) \approx W(she)$$ This is very cool, and implies that the learned embeddings really are capturing semantics up to some depth. However, the same model can produce results like this: $$W(doctor) - W(male) + W(female) \approx W(nurse)$$ This doesn't reflect modern sensibilities of gender equality. There is obviously a deep-seated reason for this, as it has appeared from non-prejudiced statistical analysis of billions of words of text from all sorts of sources. Regardless of this, though, engineers responsible for these systems would prefer that their models did not have these flaws.
* Add the desired model behaviour into the training and assessment routines of the AI. For instance, in your case, there are possibly some users of Google's system who would prefer to read articles in Vietnamese, or have English translated into Vietnamese, and are finding it awkward that the default assumption is that everything should be presented in English. These users don't necessarily need to use the search text for this, but presumably are for some reason. A reasonable approach is to figure out how their needs could be met without spamming "in Vietnamese" on the end of every autocomplete suggestion, and perhaps in general move suggestions to localise searches by cultural differences out of autocomplete and into a different part of the system. For the case of gender bias in NLP systems, [Andrew Ng's Coursera course on RNNs](https://www.coursera.org/lecture/nlp-sequence-models/debiasing-word-embeddings-zHASj) shows how this can be achieved using the embeddings themselves. Essentially it can be done by identifying a bias direction from a set of words (e.g. "he/she", "male/female"), and removing deviations in that direction for most other words, preserving it only for words where it is inherently OK to reflect the differences (such as "king" and "queen" for gender bias). Each case of unwanted bias, though, needs to be discovered by people, and oversight of this is a political and social issue, not primarily a technical one. Upvotes: 5 [selected_answer]<issue_comment>username_2: The key, I think, is teaching the algorithm by providing better data. The only thing an AI can use is the data available to it. Figuring out whatever it can is not bias, as it's based on objective facts. If it knows 98% of Nguyens are interested in X, knowing nothing else about you personally, showing you X might be good. If you consistently click on downvote/not interested, etc. buttons on the site, your personal data will override the default, and you won't see X anymore.
As a user you could give better reviews for better results, and as a developer you can give better ways to get this: by logging what you click on, search, and showing "not interested/interested/upvote/downvote/like" etc. buttons. Note that I'm using youtube from different, unlinked machines/browsers, and I get different suggestions from all of these, probably because I've trained the AI with different data. You can also use services with less intrusive data collection, e.g. duckduckgo, bitchute, etc. Upvotes: 2 <issue_comment>username_3: Another fallacy that appears common to most search engines is that anything a person searches on is an aspect of their own identity. I once searched on walk-in tubs for a very elderly relative, and was followed all over the web by ads for aids for the infirm elderly. Users who recognize that Google uses their searches to build their profile can alter their searches accordingly. It's also fun to mess with Google's model. Try searching on "dragon images" and see how fast Google and advertisers decide you are a teenage female. Have fun with it. Do your best to turn Google's model of you into self-contradictory garbage. Upvotes: 2
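The embedding-debiasing recipe described in the accepted answer (identify a bias direction, then remove each word vector's component along it) can be sketched in a few lines. The 3-d vectors here are made up for illustration; real embeddings have hundreds of dimensions:

```python
import numpy as np

# Toy word vectors (made up). In a real system these come from a trained model.
he = np.array([1.0, 0.2, 0.1])
she = np.array([-1.0, 0.2, 0.1])
doctor = np.array([0.3, 0.8, 0.5])

g = he - she
g = g / np.linalg.norm(g)          # unit-length bias direction

# Neutralize: subtract the projection of the vector onto the bias direction.
doctor_debiased = doctor - np.dot(doctor, g) * g
print(np.dot(doctor_debiased, g))  # no bias-direction component remains
```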
2019/02/16
1,898
8,385
<issue_start>username_0: This is an excerpt taken from [Sutton and Barto](https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf) (pg. 3): > > Another key feature of reinforcement learning is that it explicitly considers the whole problem of a goal-directed agent interacting with an uncertain environment. This is in contrast with many approaches that address subproblems without addressing how they might fit into a larger picture. For example, we have mentioned that much of machine learning research is concerned with supervised learning without explicitly specifying how such an ability would finally be useful. Other researchers have developed theories of planning with general goals, but without considering planning's role in real-time decision-making, or the question of where the predictive models necessary for planning would come from. Although these approaches have yielded many useful results, their focus on > isolated subproblems is a significant limitation. > > > I have an idea of supervised learning (SL), but what exactly does the author mean by *planning*? And how is the RL approach different from planning and SL? (Illustration with an example would be nice).<issue_comment>username_1: The concept of "planning" is not just related to RL. In general (as the name suggests), planning consists in creating a "plan" which you will use to reach a "goal". The goal depends on the context or problem. For example, in robotics, you can use a "planning algorithm" (e.g. Dijkstra's algorithm) in order to find the path between two points on a map (given e.g. the map as a graph). In RL, planning usually refers to the use of a model of the environment in order to find a policy that hopefully will help the agent to behave optimally (that is, obtain the highest amount of return or "future cumulative discounted reward"). In RL, the problem (or environment) is usually represented as a Markov Decision Process (MDP). 
The "model" of the environment (or MDP) refers to the *transition probability distribution* (and *reward function*) associated with the MDP. If the transition model (and reward function) is known, you can use an algorithm that exploits it to (directly or indirectly) find a policy. This is the usual meaning of planning in RL. A common planning algorithm in RL is e.g. *value iteration* (which is a dynamic programming algorithm). > > Other researchers have developed theories of planning with general goals, but without considering planning's role in real-time decision- making, or the question of where the predictive models necessary for planning would come from. > > > Planing is often performed "offline", that is, you "plan" before executing. While you're executing the "plan", you often do not change it. However, often this is not desirable, given that you might need to change the plan because the environment might also have changed. Furthermore, the authors also point out that planning algorithms often have a few limitations: in the case of RL, a "model" of the environment is required to plan. > > For example, we have mentioned that much of machine learning research is concerned with supervised learning without explicitly specifying how such an ability would finally be useful. > > > I think the authors simply want to say that supervised learning is usually used to solve specific problems. The solutions to supervised problems often are not directly applicable to other problems, so this makes them limited. > > Another key feature of reinforcement learning is that it explicitly considers the whole problem of a goal-directed agent interacting with an uncertain environment. > > > In RL, there is the explicit notion of a "goal": there is an agent that interacts with an environment in order to achieve its goal. The goal is often to maximize the "return" (or "future cumulative discounted reward", or, simply, the reward in the long run). 
> > How is RL different from planning and supervised learning? > > > RL and planning (in RL) are quite related. In RL, the problem is similar to the one in planning (in RL). However, in RL, the transition model and reward function of the MDP (which represents the environment) are usually unknown. Therefore, the only way of finding or estimating an optimal policy that will allow the agent to (near-optimally) behave in this environment is to interact with the environment and gather some info regarding its "dynamics". RL and supervised learning (SL) are quite different. In SL, there isn't usually the explicit concept of "agent" or "environment" (and their interaction), even though it might be possible to describe supervised learning in that way (see [this question](https://ai.stackexchange.com/q/14167/2444)). In supervised learning, during the training or learning phase, a set of inputs and the associated expected outputs is often provided. Then the "objective" is to find a map between inputs and outputs, which generalizes to inputs (and corresponding outputs) that have not been observed during the learning phase. In RL, there isn't such a set of inputs and associated expected outputs. In RL, there is just a scalar signal emitted by the environment, at each time step, which roughly indicates how well the agent is currently performing. However, the goal of the agent is not just to obtain rewards, but to behave optimally (in the long run). In short, in RL, there is the explicit notion of agent, environment and goal, and the reward is the only signal which tells the agent how well it is performing, but the reward does not tell the agent which actions it should take at each time step. In supervised learning, the objective is to find a function that maps inputs to the corresponding outputs, and this function is learned by providing explicit examples of such mappings during the training phase. 
There are some RL algorithms (like the temporal-difference ones), which could roughly be thought of as self-supervised learning algorithms, where the agent learns from itself (or from the experience it has gained by interacting with the environment). However, even in these cases, the actions that the agent needs to take are not explicitly taught. Upvotes: 5 [selected_answer]<issue_comment>username_2: [Automated planning](https://en.wikipedia.org/wiki/Automated_planning_and_scheduling) is: > > Automated planning and scheduling, sometimes denoted as simply AI Planning,[1](https://en.wikipedia.org/wiki/Automated_planning_and_scheduling) is a branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory. > > > In known environments with available models, planning can be done offline. Solutions can be found and evaluated prior to execution. In dynamically unknown environments, the strategy often needs to be revised online. Models and policies must be adapted. Solutions usually resort to iterative trial and error processes commonly seen in artificial intelligence. These include dynamic programming, **reinforcement learning** and combinatorial optimization. Languages used to describe planning and scheduling are often called action languages. > > > In other words, planning is finding a strategy or sequence of actions to get from the start state to the goal state. As noted above, one of the solutions to a planning problem could be RL (depending on the problem). Hence, an MDP is a specific case of a planning problem, and planning is the more general notion.
For the difference between RL and supervised learning, see [this post](https://datascience.stackexchange.com/a/38902/27665): > > The main difference is to do with how "correct" or optimal results are learned: > > > * In Supervised Learning, the learning model is presented with an input and desired output. It learns *by example*. > * In Reinforcement Learning, the learning *agent* is presented with an environment and must guess correct output. Whilst it receives feedback on how good its guess was, it is never told the correct output (and in addition the feedback may be delayed). It learns *by exploration*, or trial and error. > > > Upvotes: 0
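To make the contrast above concrete, here is a tiny, illustrative-only Python sketch (the dataset, the two-action environment, and the reward values are all made up): in supervised learning the correct outputs are given explicitly, while in RL the agent only ever sees a scalar reward after acting.

```python
import random

# Supervised learning: explicit (input, correct output) pairs are given.
dataset = [(0, 0), (1, 2), (2, 4)]  # examples of the target mapping y = 2x

def sl_loss(w):
    # fit quality of the candidate map x -> w * x against the given answers
    return sum((w * x - y) ** 2 for x, y in dataset)

# Reinforcement learning: no correct action is ever given, only a reward.
def environment_step(action):
    return 1.0 if action == "right" else 0.0  # scalar reward signal

rng = random.Random(0)
values = {"left": 0.0, "right": 0.0}
for _ in range(100):  # trial-and-error interaction with the environment
    action = rng.choice(["left", "right"])
    reward = environment_step(action)
    values[action] += 0.1 * (reward - values[action])  # running estimate

print(sl_loss(2) == 0, values["right"] > values["left"])  # True True
```

The SL learner is told the right answers directly; the RL learner discovers that "right" is better purely from accumulated reward, without ever being told which action was correct.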
2019/02/16
417
1,759
<issue_start>username_0: I am trying to understand how RNNs are used for sequence modelling. A [tutorial here](https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html) mentions that, if you want to translate, say, a sentence from English to French, you can use an encoder-decoder setup as described there. However, what if you want to do sequence-to-sequence modelling where your inputs and outputs are in the same domain and you just want to predict the next outputs of a sequence? For example, suppose I want to use sequence modelling to learn the sine function. Say I have 20 y-coordinates from $y = sin(x)$ at 20 evenly spaced out x-coordinates, and I want to predict the next 10 or so y-coordinates. Would I use an encoder-decoder setup here?<issue_comment>username_1: You don't need an encoder-decoder here. When using seq2seq learning for text (for example, for translation), you need an encoder-decoder to encode the words into numeric vectors and decode the vectors back into words. For your numerical case, therefore, you don't need an encoder or decoder to train the RNN. Upvotes: 3 [selected_answer]<issue_comment>username_2: Why would you need an encoder and decoder here? This is not a use case for them. You would use an encoder-decoder if you wanted to convert one sequence into another sequence. You can use simple seq2seq learning for the purposes below: 1. Sequence Prediction 2. Sequence Classification 3. Sequence Generation 4. Sequence-to-Sequence Prediction For your problem, simple sequence prediction is enough: sequence prediction attempts to predict elements of a sequence on the basis of the preceding elements. <https://machinelearningmastery.com/sequence-prediction/> Upvotes: 1
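As a concrete illustration of the point above, here is a small NumPy sketch (the window size and sample count are arbitrary choices) of framing the sine problem as plain next-step prediction: each training input is a window of past y-values and the target is the following y-value, so input and output live in the same domain and no encoder-decoder is needed.

```python
import numpy as np

def make_windows(series, window):
    """Turn a 1-D series into (past-window, next-value) training pairs."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])  # the last `window` y-coordinates
        y.append(series[i + window])    # the y-coordinate to predict
    return np.array(X), np.array(y)

t = np.linspace(0, 4 * np.pi, 200)
X, y = make_windows(np.sin(t), window=20)
print(X.shape, y.shape)  # (180, 20) (180,)
```

Any sequence model (a simple RNN or LSTM, or even linear regression) can then be trained on these (X, y) pairs; multi-step forecasts come from feeding each prediction back into the window.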
2019/02/16
420
1,922
<issue_start>username_0: In reinforcement learning (RL), there are model-based and model-free algorithms. In short, model-based algorithms use a transition model $p(s' \mid s, a)$ and the reward function $r(s, a)$, even though they do not necessarily compute (or estimate) them. On the other hand, model-free algorithms do not use such a transition model or reward function, but they directly estimate a value function or policy by interacting with the environment, which allows the agent to infer the dynamics of the environment. Given that model-based RL algorithms do not necessarily estimate or compute the transition model or reward function, in the case these are unknown, how can they be computed or estimated (so that they can be used by the model-based algorithms)? In general, what are examples of algorithms that can be used to estimate the transition model and reward function of the environment (represented as either an MDP, POMDP, etc.)?
2019/02/16
1,974
8,094
<issue_start>username_0: What is self-supervised learning in machine learning? How is it different from supervised learning?<issue_comment>username_1: Introduction ------------ The term *self-supervised learning* (SSL) has been used (sometimes differently) in different contexts and fields, such as representation learning [[1](https://arxiv.org/pdf/1708.07860.pdf)], neural networks, robotics [[2](https://arxiv.org/pdf/1809.07207.pdf)], natural language processing, and reinforcement learning. In all cases, the basic idea is to **automatically** generate some kind of supervisory signal to solve some task (typically, to learn representations of the data or to automatically label a dataset). I will describe what SSL means more specifically in three contexts: representation learning, neural networks and robotics. Representation learning ----------------------- The term self-supervised learning has been widely used to refer to techniques that do not use human-annotated datasets to learn (visual) representations of the data (i.e. representation learning). ### Example In [[1](https://arxiv.org/pdf/1708.07860.pdf)], two patches are randomly selected and cropped from an unlabelled image and the goal is to predict the relative position of the two patches. Of course, we have the relative position of the two patches once you have chosen them (i.e. we can keep track of their centers), so, in this case, this is the automatically generated supervisory signal. The idea is that, to solve this task (known as a **pretext** or **auxiliary** task in the literature [[3](https://arxiv.org/pdf/1905.01235.pdf), [4](https://arxiv.org/pdf/1505.05192.pdf), [5](https://arxiv.org/pdf/1803.07728.pdf), [6](https://arxiv.org/pdf/1901.09005.pdf)]), the neural network needs to learn features in the images. These learned representations can then be used to solve the so-called **downstream** tasks, i.e. the tasks you are interested in (e.g. object detection or semantic segmentation). 
So, you first learn representations of the data (by SSL pre-training), then you can transfer these learned representations to solve a task that you actually want to solve, and you can do this by fine-tuning the neural network that contains the learned representations on a labeled (but smaller) dataset, i.e. you can use SSL for transfer learning. This example is similar to the example given [in this other answer](https://ai.stackexchange.com/a/13749/2444). ### Neural networks Some neural networks, for example, **autoencoders** (AE) [[7](https://link.springer.com/chapter/10.1007/978-3-319-46672-9_10)] are sometimes called self-supervised learning tools. In fact, you can train AEs without images that have been manually labeled by a human. More concretely, consider a de-noising AE, whose goal is to reconstruct the original image when given a noisy version of it. During training, you actually have the original image, given that you have a dataset of uncorrupted images and you just corrupt these images with some noise, so you can calculate some kind of distance between the original image and the reconstruction of the noisy one, where the original image is the supervisory signal. In this sense, AEs are self-supervised learning tools, but it's more common to say that AEs are unsupervised learning tools, so SSL has also been used to refer to unsupervised learning techniques. Robotics -------- In [[2](https://arxiv.org/pdf/1809.07207.pdf)], the training data is **automatically** but approximately labeled by finding and exploiting the relations or correlations between inputs coming from different sensor modalities (and this technique is called SSL by the authors). So, as opposed to representation learning or auto-encoders, in this case, an actual labeled dataset is produced automatically. 
### Example Consider a robot that is equipped with a *proximity sensor* (which is a *short-range* sensor capable of detecting objects in front of the robot at short distances) and a *camera* (which is a *long-range* sensor, but which does not provide a direct way of detecting objects). You can also assume that this robot is capable of performing [odometry](https://en.wikipedia.org/wiki/Odometry). An example of such a robot is [Mighty Thymio](https://github.com/jeguzzi/mighty-thymio). Consider now the task of detecting objects in front of the robot at longer ranges than the range the proximity sensor allows. In general, we could train a CNN to achieve that. However, to train such a CNN, in supervised learning, we would first need a labelled dataset, which contains labelled images (or videos), where the labels could e.g. be "object in the image" or "no object in the image". In supervised learning, this dataset would need to be manually labelled by a human, which clearly would require a lot of work. To overcome this issue, we can use a self-supervised learning approach. In this example, the basic idea is to associate the output of the proximity sensors at a time step $t' > t$ with the output of the camera at time step $t$ (a smaller time step than $t'$). More specifically, suppose that the robot is *initially* at coordinates $(x, y)$ (on the plane), at time step $t$. At this point, we still do not have enough info to label the output of the camera (at the same time step $t$). Suppose now that, at time $t'$, the robot is at position $(x', y')$. At time step $t'$, the output of the proximity sensor will e.g. be "object in front of the robot" or "no object in front of the robot". Without loss of generality, suppose that the output of the proximity sensor at $t' > t$ is "no object in front of the robot"; then the label associated with the output of the camera (an image frame) at time $t$ will be "no object in front of the robot". 
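A minimal, hypothetical sketch of this auto-labelling scheme (the function name and data layout are invented for illustration): each camera frame at time $t$ is paired with the proximity-sensor reading observed `delay` steps later, producing a labeled dataset with no human in the loop.

```python
def auto_label(camera_frames, proximity_labels, delay):
    """Label each frame at time t with the proximity reading at t + delay."""
    return [
        (camera_frames[t], proximity_labels[t + delay])
        for t in range(len(camera_frames) - delay)
    ]

frames = ["frame0", "frame1", "frame2", "frame3"]
readings = ["no object", "no object", "object", "object"]
print(auto_label(frames, readings, delay=2))
# [('frame0', 'object'), ('frame1', 'object')]
```

In a real robot, `delay` would be derived from odometry (how far the robot travels between the two time steps) rather than a fixed step count.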
Upvotes: 8 [selected_answer]<issue_comment>username_2: Self-supervised visual recognition is often applied to representation learning. Here, we first learn features on unlabeled data (representation learning), and then learn the real model on features extracted from the labeled data. This especially makes sense when we have a lot of unlabeled data and little labeled data. The features can be learned by solving so-called *pretext* tasks. Examples of pretext tasks are predicting the rotation of a jittered image, recognizing jittered instances of the same image, or predicting the spatial relationship of image patches. A nice overview and interesting results can be found in [this](https://arxiv.org/pdf/1901.09005.pdf) recent paper. Upvotes: 3 <issue_comment>username_3: Self-supervised learning is when you use some parts of the samples as labels for a task that requires a good degree of comprehension to be solved. I'll emphasize these two key points before giving an example: * *Labels are extracted from the sample*, so they can be generated automatically, with some very simple algorithm (maybe just random selection). * *The task requires understanding*. This means that, in order to predict the output, the model has to extract some good patterns from the data, generating in the process a good representation. A very common case for self-supervised learning takes place in natural language processing, when you need to solve a task but have little labeled data. In such cases, you need to learn a good representation or language model, so you take sentences and give your network self-supervision tasks like these: * Ask the network to predict the next word in a sentence (which you know because you took it away). * Mask a word and ask the network to predict which word goes there (which you know because you had to mask it). * Replace a word with a random one (that probably doesn't make sense) and ask the network which word is wrong. 
As you can see, these tasks are fairly simple to formulate and the labels are part of the same sample, but they require a certain understanding of the context to be solved. And it's always like this: alter your data in some way, generating the label in the process, and ask the model something related to that transformation. If the task requires enough understanding of the data, you'll have success. Upvotes: 4
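The masking task described above can be set up in a few lines of ordinary Python; the label is generated automatically from the sentence itself (the mask token and the random choice of position are arbitrary here):

```python
import random

def make_masked_example(sentence, mask_token="[MASK]", seed=None):
    """Mask one word; the masked-out word itself becomes the training label."""
    rng = random.Random(seed)
    words = sentence.split()
    i = rng.randrange(len(words))            # position chosen automatically
    masked = words[:i] + [mask_token] + words[i + 1:]
    return " ".join(masked), words[i]        # (model input, label)

masked_text, label = make_masked_example("the cat sat on the mat", seed=0)
print(masked_text, "->", label)
```

No human annotation is involved: every raw sentence yields as many (input, label) pairs as you care to generate.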
2019/02/17
1,021
3,948
<issue_start>username_0: I'm working on my own implementation of the NEAT algorithm, based on the original 2002 paper called "[Efficient Reinforcement Learning through Evolving Neural Network Topologies](http://nn.cs.utexas.edu/downloads/papers/stanley.gecco02_1.pdf)" (by <NAME> and <NAME>). The way the algorithm is designed, it may generate loops in the connections of the hidden nodes, which will obviously cause difficulties in calculating the output. I have searched and came across two types of approaches. One group, like this [example](https://ai.stackexchange.com/q/6231/2444), claims that the values should be calculated like a time series, as usually seen in RNNs, and that the circular nodes should use "old" values as their "current" output. But this seems wrong, since the training data is not always ordered and the previous value has nothing to do with the current one. A second group, like this [example](https://ai.stackexchange.com/q/6329/2444), claims that the structure should be pruned with some method to avoid loops and cycles. This approach, apart from being really expensive, is also against the core idea of the algorithm: deleting connections like this may interfere with later structural changes. I myself have so far tried setting the unknown forward values to 0, which hides the connection (as whatever weight it has will have no effect on the result), but this has also failed, for two reasons: my networks get big quickly, destroying the "smallest network required" idea, and the results are not good either. What is the correct approach?<issue_comment>username_1: You can use a feed-forward-style network, so that every node except the output nodes outputs to a higher node. This will eliminate connection loops. Upvotes: 1 <issue_comment>username_2: NEAT does not enforce a feed-forward structure, and it does not take any special action to avoid loops. The network is evaluated in a non-recursive manner. The only non-deterministic loop the evaluation has is the loop for activating all the outputs. 
Pseudocode is something like this:

```
until all the outputs are active:
    for all non-sensor nodes:
        activate node: sum the input
    for all non-sensor and active nodes:
        calculate the output
```

NOTE1: You can use a defensive mechanism (like a counter) to avoid an infinite loop. NOTE2: When summing the input, only the outputs of nodes that were evaluated/calculated at least once are considered; otherwise, their outputs are assumed to be zero. This is the note from the author of NEAT about identifying loops:

```
Note that checking networks for loops in general is not necessary
and therefore I stopped writing this function

bool Network::integrity()
```

Upvotes: 2 <issue_comment>username_3: This is what I do: during evaluation, I add visited counters to the nodes and keep the paths. If we go [1i-5h-7h-4o-5h], 5h is visited twice, so I detect the loop [5h-7h-4o-5h] and mark the [4o-5h] connection as a loop. Then I let it continue evaluating: [1i-5h-7h-4o-5h-7h-4o], and 4o can't go to 5h again because that connection was marked as a loop. This way, you don't avoid or ignore loops; you add them to your evaluation as a kind of memory. This is consistent with the Stanley 2002 paper, Introduction, paragraph 1:

```
In addition, memory is easily represented through recurrent connections
in neural networks, making NE a natural choice for learning
non-Markovian tasks (Gomez and Miikkulainen, 1999, 2002).
```

He references recurrent connections being used as "memory". Page 122, Figure 8 subtext:

```
A NEAT solution to the DPNV problem. This clever solution works by
taking the derivative of the difference in pole angles. Using the
recurrent connection to itself, the single hidden node determines
whether the poles are falling away or towards each other.
```

So this recurrent connection should not be avoided; in fact, it is required for the given solution. This is why I evaluate them once. Upvotes: 0
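For reference, here is a rough Python sketch (not the original NEAT code; the data layout and identity activation are invented for brevity) of the activation loop described in the answers above: a node only becomes active once it has received input from an already-active node, not-yet-active inputs contribute zero, and a pass counter guards against infinite loops, so a recurrent connection simply reuses the value from the previous pass.

```python
def evaluate(nodes, connections, max_passes=20):
    """nodes: {id: {"type": "sensor"|"hidden"|"output",
                    "value": float, "active": bool}}
       connections: iterable of (src_id, dst_id, weight) tuples."""
    for _ in range(max_passes):                # defensive counter (NOTE1)
        sums, got_input = {}, set()
        for src, dst, w in connections:
            if nodes[dst]["type"] == "sensor":
                continue
            sums.setdefault(dst, 0.0)
            if nodes[src]["active"]:           # inactive inputs count as 0 (NOTE2)
                sums[dst] += w * nodes[src]["value"]
                got_input.add(dst)
        for nid in got_input:                  # "calculate the output"
            nodes[nid]["value"] = sums[nid]    # identity activation for brevity
            nodes[nid]["active"] = True
        if all(n["active"] for n in nodes.values() if n["type"] == "output"):
            break
    return {nid: n["value"] for nid, n in nodes.items()
            if n["type"] == "output"}

nodes = {
    "s": {"type": "sensor", "value": 1.0, "active": True},
    "h": {"type": "hidden", "value": 0.0, "active": False},
    "o": {"type": "output", "value": 0.0, "active": False},
}
print(evaluate(nodes, [("s", "h", 2.0), ("h", "o", 0.5)]))  # {'o': 1.0}
```

With a recurrent connection added (say `("o", "h", 1.0)`), the same loop still terminates: on each pass the destination just consumes the source's value from the previous pass, which is exactly the "memory" behaviour discussed above.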
2019/02/17
3,079
13,074
<issue_start>username_0: The "AI Singularity" or "[Technological Singularity](https://en.wikipedia.org/wiki/Technological_singularity)" is a vague term that roughly seems to refer to the idea of: 1. Humans can design algorithms 2. Humans can improve algorithms 3. Eventually algorithms we design might end up being as good as humans at designing and improving algorithms 4. This might lead to these algorithms designing better versions of themselves, eventually becoming far more intelligent than humans. This improvement would continue to grow at an increasing rate until we reach a "singularity" where an AI is capable of making technological progress at a rate far faster than we could ever imagine Also known as an [Intelligence Explosion](https://wiki.lesswrong.com/wiki/Intelligence_explosion). This rough idea has been heavily debated as to its feasibility, how fast it'll take (if it does happen), etc. However I'm not aware of any formal definitions of the concept of "singularity". Are there any? If not, do we have close approximations? I have seen [AIXI](https://en.wikipedia.org/wiki/AIXI) and the [Gödel machine](https://en.wikipedia.org/wiki/G%C3%B6del_machine), but these both require some "reward signal" — it is unclear to me what reward signal one should choose to bring about a singularity, or really how those models are even relevant here. Because even if we had an oracle that can solve any formal problem given to it, it's unclear to me how we could use that to cause a singularity to happen [(see this question for more discussion on that note)](https://ai.stackexchange.com/questions/10643/is-a-very-powerful-oracle-sufficient-to-trigger-the-ai-singularity?noredirect=1&lq=1).<issue_comment>username_1: Here is one idea. I'll start with a more specific "mathematical singularity", defined as an algorithm that can do the following in N hours or less (for all $N >= 1$): 1. 
State equivalent versions (up to notational differences) of all mathematical theorems/conjectures that humans will read and understand in the N\*20 years after 2018 that can be stated formally in [Metamath](http://us.metamath.org/) (this is an arbitrary choice, but Metamath is general enough to include quantum logic and extensions of ZFC, so it seems like a decent place to start. Feel free to use Coq, Isabelle, Lean, etc. instead if you prefer), assuming those humans never have access to a "mathematical singularity"-capable algorithm and their mathematical community continues living and functioning intellectually in a manner similar in capacity to how it did in 2018 2. Of those problems, provide correct proofs (these may not be readable, that's ok) of all of those that will be solved by those humans in N\*20 years. This of course does not fully capture all mathematical progress that humans will make in those years: a big component missing is "readable proofs" and concepts that can't be captured in Metamath. But it is something that is theoretically formal. I know that this doesn't include any "continual improvement"; what I am referring to here is simply a threshold such that, when an algorithm passes it, I think it is sufficiently powerful to be considered "intelligent enough" to have reached close to singularity levels of intelligence. Feel free to adjust the (20 years) constant in your head to match your preferred threshold. I'm not going to accept this answer because it is lacking "continual improvement", but I brought it up because, if we can't figure out how to define it mathematically, perhaps simply having "sufficient criteria" in various domains could be a good start. **Edit**: I suppose that the singularity typically involves an assumption of the development of an intelligence that is superior to human society. 
This implies that it is capable of at least doing the things that our society does, so there is probably a good argument to be made here that "proof accessibility" and "method teachability" are vital to this problem. I mean, if we think of the current state of the field of calculus, it has gone from an arcane topic only understood by a few field experts to being readily accessible and teachable to high school students. While that didn't require proving any new major mathematical theorems, one could argue that much of our technological progress didn't come until the advanced mathematical machinery that had been developed (calculus) became accessible to a wide range of people. I was going to make an argument about how "the difference is that computers can learn quicker: they can read through massive proofs very quickly". But I suppose that depends on the architecture of whatever kind of "thing" is achieving the singularity. I.e., here is a (non-exhaustive) list of two possible outcomes: * There is only one "mind" that is achieving all of this. In that case, that mind has all the knowledge it needs and it doesn't need to teach anyone to progress further, so this point is sorta irrelevant. However, I can still see an argument for "teachability" if we want to utilize this vast amount of knowledge the AI has gained in human society, if possible. * There is a simulated "society" of virtual minds that are interacting with each other, that, together, achieve the mathematical singularity. If a single "mind" in this "society" isn't able to easily use and understand the work done by another mind, then the point of "teachability" is very important to prevent individual minds from having to continually reinvent the wheel, so to speak. Without our biological limitations, these digital minds may have very different "teaching" methods, but I think here is the ideal additional requirement for a "mathematical singularity": 3. 
These proofs must be (eventually, perhaps not until spending quite a bit of time) accessible to a graduate mathematician, via providing pdf textbooks (or other similar teaching materials) that cover the same material that human mathematical textbooks would have covered after N\*20 years, in a way that is accessible to the typical graduate mathematician. However, we have now lost some formality in this: textbooks usually contain lots of exposition and analogies that are difficult to formally measure and may not even be relevant for the AI. Here is an alternate option that is not as good, but still close: 3. The algorithm must present its results in a form that can be used by any other algorithm that can also achieve the "mathematical singularity" to "skip ahead" to N\*20 years, and then immediately continue progress from there. However, this criterion has a trivial exploit: an algorithm might as well just provide a 'save state' and a 'program' to run that save state. Conceivably, any algorithm that can achieve the mathematical singularity is at least capable of executing code, so providing a 'save state' and 'program' passes this criterion without making it at all accessible (the caveat here is if it uses some sort of model of computation that requires special hardware, such as quantum computing or black hole computing, to prevent slowdown, but that's beside the point). I think I prefer this alternative: 3. These proofs must be similar in length to the (formalized versions of) proofs the human academic community would have made in those 20\*N years. "Length" is tricky here: it is possible to prove a very difficult theorem very succinctly by simply referencing a very powerful lemma. 
But here is one example metric: $$length(Proof) = lengthInSymbols(Proof)+\sum\_{symbol \in Proof} \frac{length(symbol)}{numberOfTimesUsedInOtherProofs(symbol)}$$ Where "Other Proofs" is the set of all proofs read and understood by humans in those N\*20 years, and "symbols" refers to things such as "Green's Theorem" or "$\in$". Hopefully the idea is apparent here: if something is used frequently in many proofs, it is a "common technique" that isn't vital to that proof, and thus doesn't contribute as much to the "length" of that proof. Finding a potentially more suitable metric here seems like a much more tractable problem than defining the mathematical singularity itself, and I suspect this is studied more elsewhere, so I'll leave it at this for now. Upvotes: 1 <issue_comment>username_2: I would approach it from a direction different from @username_1, though there seems to be nothing wrong with her answer. IMHO, when AI * A: is sufficiently general, * B: is able to direct its own evolution, * C: has control (whether direct or indirect) of all the resources needed to grow and evolve, and * D: has the *goal* of growing and evolving to solve a big, important problem, then the "singularity" will have been reached. "Sufficiently general" means that its growth is not limited by the code that initially defines it: it can re-write its own code (through "offspring in a sandbox", or directly; it doesn't matter which). Genetic programming is currently clumsy but is indeed sufficiently general. C is something that would most likely need to be given to it, so nobody will be able to unplug it. D is easy. I'd like to see an effort in the Futurist and AI communities to choose such goals. This isn't a mathematical definition; it's something more mundane; but I think it's to the point. Upvotes: 0 <issue_comment>username_1: I found someone who has done this! 
You can hear a good explanation in <NAME>'s answer to [this question](https://www.youtube.com/watch?v=E1AxVXt2Gv4&feature=youtu.be&t=3924) about rewards given to AIXI. He describes a work that seems to be referring to this paper: [Universal Knowledge-Seeking Agents for Stochastic Environments](http://www.hutter1.net/publ/ksaprob.pdf) I'll edit this answer later with a full explanation of the approach, but essentially the idea is that you use an AIXI model that does optimal reinforcement learning, giving it a reward that is based on information gained (phrased in a careful way to avoid a few common pitfalls). As a result, it learns to choose actions that give it the most information possible to predict the impacts of its actions. This results in a "scientist"-like behaviour, and you could imagine it doing things like turning the entire earth into a supercollider to better understand some physics laws, if it decides that is the best approach for gaining maximum information. It would probably also do plenty of very unethical psychology experiments, for example, if it ended up deciding that human actions were important to predict and understand. It's not a "safe" singularity in that sense, but that's okay; I didn't require that. It's at least a formal definition. It requires doing some uncomputable things, but the hope of future research is that we can make close enough approximations to those uncomputable things to be good enough anyway. I feel this theory is lacking any explanation of how feasible such a task is, since it uses uncomputable agents, so I won't accept it yet, but it's the best answer I've seen so far. And I'll be watching future research closely to see if they can get a better handle on feasibility; there seems to be quite a bit of work that has gone on in finding computable approximations to AIXI. 
The reason I care about feasibility is that it is very relevant for mathematically deciding how plausible something like an "intelligence explosion" actually is. So, if a theory doesn't talk about feasibility, it is missing out on a big piece of this question. Still, this theory seems hopeful. For example, maybe there are fundamental computational limits to maximizing some reward functions, and we can prove that even certain levels of approximations for this reward function aren't computable. That would be a really interesting negative result. In general, I think the idea of using reinforcement learning, then choosing a reward function that tries to capture something intrinsic (such as "curiosity"), is a very good approach to trying to formally define the singularity. I look forward to seeing other potential reward functions defined in the future; I don't expect this to be the only one. Upvotes: 2 <issue_comment>username_3: I believe that mathematical theorems are social constructions which are formalised by virtue of rigorous proofs facilitated by an academic peer review process; in other words, I am not a mathematical Platonist. You ask: “Can we define the AI singularity mathematically?” I personally see no reason why the so-called AI singularity cannot be defined in mathematical terms. Let us pretend that a gifted mathematician is able to provide a convincing and logically consistent set of rigorous proofs for a mathematical conjecture pertaining to the AI singularity. The aforementioned mathematician then submits the paper to a highly prestigious mathematics journal, such as [Journal of the American Mathematical Society](https://www.ams.org/publications/journals/journalsframework/jams). If this paper passes the stringent academic peer review process for publication, then this would constitute peer acceptance that the AI singularity can indeed be defined in mathematical terms. Upvotes: 0
2019/02/18
695
2,873
<issue_start>username_0: I am generating images that consist of points, where the object's location is where the most overlap of points occurs. [![Generated image example](https://i.stack.imgur.com/7FyHl.png)](https://i.stack.imgur.com/7FyHl.png) In this example, the object location is $(25, 51)$. I am trying to train a model that just finds the location, as I don't care about the classification of the object. Additionally, the shape of the overlapping points where the object is located never changes and will always be that shape. **What is a good model for this objective?** Many of the potential models I've been looking at (CNN, YOLO, and R-CNN) are more concerned with classification than location. Should I search the image for the overlapping dots, create a bounding box around them, and then retrieve the box's coordinates?<issue_comment>username_1: Neural networks are not only used for classification but also for **regression**. It seems that a CNN would be a good solution for this problem, with 2 output neurons, each of them providing a number within the range of your frame. Upvotes: 0 <issue_comment>username_2: > > What is a good model for this objective? > > > I will try to give another perspective: solve it without a machine learning model. Your problem is to find the point where the most dots overlap. If the image above is the kind of image you use in your case, you can solve it directly by applying some computer vision algorithms. 1. Try to create binary images based on the colors of the dots. If you are not sure which colors appear in your image, you can first list the unique pixel colors that are not black or white. So, if there are four colors, you need to generate four different binary images. Create a simple condition or a complex one, for example:

```
if pixel[i,j]=red then pixel[i,j]=white else pixel[i,j]=black
```

2. 
Get its location by searching for "the white" over your image, or [use a blob detection method](https://www.learnopencv.com/blob-detection-using-opencv-python-c/) (it'll be a little bit tricky if the actual images have different axis scales). You can save it as a list of coordinates for each color. 3. What happens if you can't see a dot fully because it's overlapping with another dot? Find the pattern. In your image, dots appear with a certain pattern. If you can find two consecutive dots horizontally and vertically, you can predict the positions of all your dots. 4. Find the position with the most overlap from your list. **Pros** * The result may be more accurate than using a machine learning model * Faster: you don't need to train it first **Cons** * Finding the dots' locations in images with different axes will be difficult, but it's still solvable * It'll be difficult to predict the pattern if many dots are missing because they are overlapped by other dots Upvotes: 1
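A hedged NumPy sketch of steps 1, 2, and 4 above (the color list, tolerance, and neighbourhood radius are assumptions, not values from the question): extract the per-color pixel coordinates, then pick the position where the most of those points concentrate.

```python
import numpy as np

def color_coords(image, color, tol=10):
    """Steps 1-2: binary mask for one color, returned as (row, col) pixels."""
    mask = np.all(np.abs(image.astype(int) - np.array(color)) <= tol, axis=-1)
    return np.argwhere(mask)

def densest_point(point_lists, radius=2.0):
    """Step 4: the point with the most neighbouring points within `radius`."""
    pts = np.vstack(point_lists)
    counts = [np.sum(np.linalg.norm(pts - p, axis=1) <= radius) for p in pts]
    return tuple(int(v) for v in pts[int(np.argmax(counts))])

# tiny synthetic example: red and blue dots clustered near (5, 5)
img = np.zeros((20, 20, 3), dtype=np.uint8)
img[5, 5] = img[5, 6] = img[10, 10] = (255, 0, 0)   # red pixels
img[6, 5] = img[6, 6] = (0, 0, 255)                 # blue pixels
points = [color_coords(img, (255, 0, 0)), color_coords(img, (0, 0, 255))]
print(densest_point(points))  # (5, 5)
```

The pattern-completion idea of step 3 (inferring hidden dots from the grid spacing) is not shown here; it would run between `color_coords` and `densest_point`.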
2019/02/18
<issue_start>username_0: I asked a question a while ago here and since then I've been solving the issues within my code but I have just one question... This is the formula for updating the Q-Matrix in Q-Learning: $$Q(s\_t, a\_t) = Q(s\_t, a\_t) + \alpha \times (R+Q(s\_{t+1}, max\_a)-Q(s\_t, a\_t))$$ However, I saw a Q-Learning example that uses a different formula, which I'm applying to my own problem and I'm getting good results: $$Q(s\_t, a\_t) = R(s\_t,a\_t) + \alpha \times Q(s\_{t+1}, max\_a)$$ Is this valid?<issue_comment>username_1: No, your second statement does not correctly implement the Q-learning update rule, which the first statement correctly implements. Upvotes: 2 <issue_comment>username_2: Your second code snippet is equivalent to this: $$Q\_{k+1}(s,a) \leftarrow r + \alpha \text{max}\_{a'} Q\_k(s', a')$$ This looks like a simplified Value Iteration update to me, where you have incorrectly switched $\alpha$ (the learning rate) for $\gamma$ (the discount rate). The full Value Iteration update based on action values looks like this: $$Q\_{k+1}(s,a) \leftarrow \sum\_{r,s'} p(r,s'|s,a)(r + \gamma \text{max}\_{a'} Q\_k(s', a'))$$ This is almost the same as your equation when you have a deterministic environment (so you can directly predict single values $r$ and $s'$ from $s, a$) As such, it will sort of work with certain assumptions: * You want a specific discount rate, or don't particularly care about predicting values, just finding a close-to-optimal policy * The environment is deterministic The further away you are from those assumptions, the worse fit the simpler update method will be to your problem. It is definitely not Q-learning either way. Upvotes: 1
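For reference, the standard tabular update that the accepted answer defends can be written as a short runnable sketch. Note that it includes an explicit discount factor `gamma`, which the question's first formula leaves out (i.e. it implicitly uses $\gamma = 1$); the integer state/action encoding here is an illustrative assumption:

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step: move Q(s,a) toward the TD target by a fraction alpha."""
    td_target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])
    return Q[(s, a)]

# The question's second formula would instead overwrite Q(s,a) with
# R + alpha * max_a Q(s', a), discarding the old estimate entirely,
# which is why it is not the Q-learning update.

Q = defaultdict(float)  # unseen (state, action) pairs start at 0
q_update(Q, s=0, a=0, r=1.0, s_next=1, actions=[0, 1])
```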
2019/02/18
<issue_start>username_0: In a neural network, the number of neurons in the hidden layer corresponds to the complexity of the model generated to map the inputs to output(s). More neurons create a more complex function (and thus the ability to model more nuanced decision boundaries) than a hidden layer with fewer nodes. But what of the hidden layers? What do more hidden layers correspond to in terms of the model generated?<issue_comment>username_1: More hidden layers will just multiply the possibilities among the neurons, including the solutions from the previous hidden layers. (I will edit this once I am at home and provide you with a good link I found some time ago) Meanwhile maybe this will help you <https://stats.stackexchange.com/questions/63152/what-does-the-hidden-layer-in-a-neural-network-compute> Upvotes: 1 <issue_comment>username_2: It was proven that a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions (see the [Universal approximation theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem)). More layers can't improve something that can already do "everything". But adding more layers reduces the number of necessary neurons, and reduces the computing power needed for the network as well. Upvotes: 1 <issue_comment>username_3: This is a very interesting question. I believe this [post](http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/) and this [post](http://username_3.com/post/neural-networks-2/) (written by me) address your question well. However, it deserves an explanation here. **1. Fully connected networks** The more layers you add, the more "nonlinear" your network becomes.
For instance, in the case of the [two spirals problem](https://www.gwern.net/docs/ai/1988-lang.pdf), which requires a "highly nonlinear separation", the [first known architecture](https://www.gwern.net/docs/ai/1988-lang.pdf) to solve the problem was pretty advanced for that time: it had 3 hidden layers and skip-connections (a very early *ResNet*, in 1988). Back then, computing was way less powerful and training methods with momentum were not known. Nevertheless, thanks to the multilayer architecture, the problem was solved. [Here](http://username_3.com/post/neural-networks-2/), however, I was able to train a single-hidden-layer network to solve the spirals problem using Adam. **2. Convolutional nets (CNNs)** An interesting special case of neural networks are CNNs. They restrict the architecture of the first layers, known as *convolutional* layers, so that there is a much smaller number of trainable parameters due to weight sharing. As we have learned from computer vision, moving towards the final CNN layers, their receptive fields become larger. That means that the subsequent CNN layers "see" more than their predecessors. Conceptually, the first CNN layers can recognize simpler features such as edges and textures, whereas the final CNN layers contain information about more abstract objects such as trees or faces. **3. Recurrent nets (RNNs)** RNNs are networks with layers which receive some of their outputs as inputs. Technically, a single recurrent layer is equivalent to an infinite (or at least large) number of ordinary layers. Thanks to that recurrence, RNNs retain an internal state (memory). Therefore, it is much more difficult to answer your question in the case of recurrent nets. What is known is that, due to their memory, RNNs are more like programs, and thus are in principle more complex than other neural networks. Please let me know if you find an answer to your question in the last case.
To conclude, a higher number of hidden layers may help to structure a neural network. Thanks to recent developments such as [ResNets](https://arxiv.org/abs/1512.03385) and [backpropagation through time](https://en.wikipedia.org/wiki/Backpropagation_through_time), it is possible to train neural networks with a large number of hidden layers. Upvotes: 3 [selected_answer]
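As a rough, purely illustrative sketch of the claim that adding layers can reduce the number of necessary neurons, one can compare parameter counts of a wide shallow MLP and a narrower deep one. The layer widths below are arbitrary, and a smaller parameter count does not by itself guarantee equal approximation quality:

```python
def mlp_params(layer_sizes):
    """Total number of weights and biases in a fully connected net."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# One wide hidden layer vs. three narrow ones (widths chosen arbitrarily).
shallow = mlp_params([100, 4096, 10])
deep = mlp_params([100, 128, 128, 128, 10])
assert deep < shallow  # the deeper net here has far fewer parameters
```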
2019/02/18
<issue_start>username_0: In some newer robotics literature, the term system identification is used with a specific meaning. The idea is not to use a fixed model, but to create the model on the fly, so it amounts to model-free system identification. A short remark for everyone who doesn't know what the idea is: system identification means creating a prediction model, better known as a forward numerical simulation. The model takes the input and calculates the outcome. It's not exactly the same as a physics engine, but both operate with a model in the loop that generates the output in real time. But what is policy learning? Somewhere, I've read that policy learning is equal to online system identification. Is that correct? If yes, it doesn't make much sense, because reinforcement learning has the goal of learning a policy. A policy is something that controls the robot. But if the aim is to do system identification, then the policy is equal to the prediction model. Perhaps somebody can clear up the confusion about the different terms ... **Example** Q-learning is a good example of reinforcement learning. The idea is to construct a Q-table, and this table controls the robot movements. But if online system identification is equal to policy learning, and this is equal to Q-learning, then the Q-table doesn't contain the servo signals for the robot, but only provides a prediction of the system. That means the Q-table is equal to a Box2D physics engine which can say what x/y coordinates the robot will have. This kind of interpretation doesn't make much sense. Or does it make sense, and the definition of a policy is quite different?<issue_comment>username_1: From the book *Reinforcement Learning, An Introduction (R. Sutton, A. Barto)*: > > The term **system identification** is used in adaptive control for what we call **model-learning** (e.g., Goodwin and Sin, 1984; Ljung and Söderström, 1983; Young, 1984).
> > > **Model-learning** refers to the act of learning the model (environment). Reinforcement Learning can be divided into two types: 1. **Model-based** - first we build a model of an environment and then do the control. 2. **Model-free** - we do not try to model the behaviour of the environment. **Policy learning** is the act of learning an optimal policy. You can do it in two ways: 1. **On-policy learning** - learn about the policy $π$ by sampling from the same policy. 2. **Off-policy learning** - learn about the policy $π$ from experience sampled from some other policy (e.g. watching a different agent playing a game). Upvotes: 4 [selected_answer]<issue_comment>username_2: System identification and policy learning are two completely different aspects of a system. System identification is basically finding out the transfer functions, the hardware parameters, and the relationships and nature of behavior for the different components that determine the results when acted upon by a control signal. Generally, it is the hardware manufacturers who have all the configuration details in their datasheets, and these are either used as direct system parameters or used to derive others. Online system identification is the process of determining the set of parameters not from already available measurements but by using data coming through in real time. Policy learning is the process of correlating actions to results and discerning which actions are good or bad. Policy learning is about determining the control strategy that shall produce desired results given all the circumstances. SI is like curve fitting and determining the equation of the curve (already knowing the polynomial degree, because you need to know the structure of the parameters you are trying to estimate) on already available data, while policy learning uses a closed-loop system to repeatedly update your control signals until you find one that satisfies your performance and operational desires.
In a robotics context, a robot manipulator is supposed to have the mass (H), Coriolis (C) and gravity (G) matrices that define the dynamics of the system, basically relating the physics of the robot to the applied torque on the joints and the tip, as shown in the equation below. Online parameter identification would mean using the torque, the known structure of the HCG matrices (H is an n x n matrix, n being the DOF, and so on) and the dynamic equation, and then determining the numerical values. Similar online parameter identification is also done for friction components like static and Coulomb friction and the coefficients of viscous friction. The least-squares method is often used for this. $$\mathbf{H(q)\ddot{q}}(t) + \mathbf{C(q, \dot{q})\dot{q}}(t) + \mathbf{B\dot{q} + g(q) = \tau}$$ Policy learning in terms of RL is basically learning the set of actions that will produce desired good behavior. Q-learning is model-free learning, so there is no predicted behavior to be obtained based on the inputs. Here, the inputs are simulated, results are obtained, and they are given a degree of belief (positive & negative and high & low rewards) depending upon what part of the desired result they produce and by how much. Over time, the policy learned is finally the sequence of actions that should be run to get to the desired result. The Q-table does not have anything to do with system identification, which is a modeling step; it is rather part of the control step. So, for the arm, the learned policy would be which joints, in what sequence, should be actuated to what angles to complete a pick-and-place task. Upvotes: 2
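The least-squares identification mentioned above can be sketched on a toy 1-DOF joint with dynamics $\tau = m\ddot{q} + b\dot{q}$. This is a hypothetical simplification of the full HCG equation, with made-up true parameters and noise-free synthetic measurements, not a real identification pipeline:

```python
import numpy as np

# Hypothetical 1-DOF joint: tau = m * qdd + b * qd, with made-up true values.
rng = np.random.default_rng(0)
m_true, b_true = 2.0, 0.5
qd = rng.uniform(-1.0, 1.0, 200)     # measured joint velocities
qdd = rng.uniform(-1.0, 1.0, 200)    # measured joint accelerations
tau = m_true * qdd + b_true * qd     # applied torques (noise-free here)

# The dynamics are linear in the unknown parameters [m, b], so
# identification reduces to an ordinary least-squares problem.
Phi = np.column_stack([qdd, qd])              # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, tau, rcond=None)
m_hat, b_hat = theta
```

With noisy measurements the same code would return the least-squares estimate rather than the exact parameters.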
2019/02/18
<issue_start>username_0: Fuzzy logic is typically used in control theory and engineering applications, but is it connected fundamentally to classification systems? Once I have a trained neural network (multiple inputs, one output), I have a nonlinear function that will turn a set of inputs into a number that estimates how close my set of given inputs is to the training set. Since my output number characterizes "closeness" to the training set as a continuous number, isn't this inherently some kind of fuzzy classifier? Is there a deep connection here in the logic, or am I missing something?<issue_comment>username_1: They are unrelated. There is a possibility of interpreting fuzzy values as probabilities, but strictly speaking they are different: fuzzy values are *vague*, while probabilities reflect *likelihood* (see the [Wikipedia entry for Fuzzy Logic](https://en.wikipedia.org/wiki/Fuzzy_logic)). While rolling a particular number on a six-sided die has a probability of $1 \over 6$, a roll can actually only ever have one outcome. A fuzzy value "quite old" can simultaneously be a member of a number of fuzzy sets with different degrees of membership, e.g. "young" with 0.001, "adolescent" with 0.1, "old" with 0.4, "ancient" with 0.7. Unless it is "defuzzified", it is simultaneously contained in all the sets. Defuzzification is a way of interpreting the result of a series of fuzzy operations and finding the set that best matches, but it is not a clearly defined process such as picking a random number according to a set of probabilities (or rolling the die). I am not sure that the sum of all fuzzy set membership values of any given fuzzy value has to add up to 1.0; whereas this condition has to hold for probabilities. [EDIT: to clarify - probabilities are not a set; I refer here to all possible outcomes of a random event which have a certain probability of being realised.
The sum of all possible event probabilities has to be 1.0] One alternative interpretation for your application could be the *confidence* that the input set is identical to the training set. This could be a fuzzy value if you wanted to do something else with it, e.g. by combining it with other fuzzy variables. Upvotes: 2 <issue_comment>username_2: They're pretty much the same thing - in that the underlying logic of neural networks is fuzzy. A neural network will take a variety of valued inputs, give them different weight in relation to each other, and arrive at a decision which normally also has a value. Nowhere in that process is there anything like the sequences of either-or decisions which characterize non-fuzzy mathematics, almost all of computer programming, and digital electronics. Back in the 1980s there was a debate about what AI would eventually look like - some researchers tried to program 'common sense' with huge bivalent decision trees, while others used neural networks, which pretty soon found their way into a multitude of electronic devices. Obviously the underlying logic of the latter approach is radically different from the former, even if neural nets are built on top of bivalent electronics. However, the use of the term 'fuzzy logic' seems to have been downplayed since the 80s, perhaps because colloquially it sometimes implies uncertainty. This is a shame, because it offers a more accurate way to model complex situations. Upvotes: 0
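The membership-degree discussion in the first answer can be made concrete with a small sketch. The triangular membership functions and age categories here are made up for illustration; the point is that one value can belong to several fuzzy sets at once, with degrees that need not sum to 1:

```python
def triangular(a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x = b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Made-up age categories; the exact shapes are arbitrary.
young = triangular(0, 15, 35)
old = triangular(25, 60, 90)

age = 30
degrees = (young(age), old(age))
# `age` is simultaneously somewhat "young" and somewhat "old";
# unlike probabilities over mutually exclusive outcomes, the
# membership degrees do not have to add up to 1.
```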
2019/02/19
<issue_start>username_0: Some (stock market) traders have the ability to produce a high percentage of winning trades (80%+, positive return) over years. I had the chance to look into real money trades of two such traders and I also got trading instructions from them for research. Now the interesting part is that if you strictly follow their rules then you usually end up with more losers than winners in the long run. But after a while you get some kind of subconscious "feeling" for winners, which also shows in the results. I assume that this "feeling" is a hidden function which can be modeled. My question is: Is there work on how to model such "gut feeling" and subconscious knowledge by means of machine learning (especially with little training data)? Is there relevant literature about this topic? Regards,<issue_comment>username_1: You could perhaps model gut feeling or subconscious bias as a prior in a Bayesian context, and then try to learn from the data how much to modify/moderate the bias in each individual case. I think there is another issue with the problem you outlined. We might expect it to be normal to see more losers than winners in the long run: trading is a zero-sum game where the house always takes its cut. The trick to being a successful trading algorithm seems to be to make the losers small (cut them early) and the winners big (let them run). Upvotes: 1 <issue_comment>username_2: It sounds like supervised learning from not-so-many samples. [Representation/metric learning](https://stats.stackexchange.com/questions/393288/learning-useful-semantic-representations-of-data) could be of some help. There are two books by <NAME> that help quite a bit to make the subconscious conscious: "Understanding Price Action" and "Forex Price Action Scalping". Despite the sales-driven titles, this is the best non-scam attempt to formalize discretionary trading I have ever seen. (Though one will become a one-trick pony at most; on the other hand, that's enough.)
If you google a bit, you'll find a dropbox with countless price charts commented by the author. Upvotes: 0
2019/02/20
<issue_start>username_0: I trained a neural network on the [UNSW-NB15 dataset](https://www.unsw.adfa.edu.au/unsw-canberra-cyber/cybersecurity/ADFA-NB15-Datasets/), but, during training, I am getting spikes in the loss function. The algorithm sees each part of this UNSW dataset a single time. The loss function is plotted after every batch. [![enter image description here](https://i.stack.imgur.com/yoGv8.png)](https://i.stack.imgur.com/yoGv8.png) For other datasets, I don't experience this problem. I've tried different optimizers and loss functions, but this problem remains with this dataset. I'm using the `fit_generator()` function from Keras. Has anyone experienced this problem using Keras with this function?<issue_comment>username_1: The spikes could be caused by many things: insufficient model capacity, incorrect labels, buggy input parsing, ... Finding out the culprit requires some detective work. For instance, you could apply the learned model to the whole train set and manually examine the datapoints which result in the highest loss. Alternatively, you could compare the learning outcomes of different models (both weaker and stronger). Upvotes: 1 <issue_comment>username_2: <NAME> explains this in great detail in his [Deep Learning Course](https://www.coursera.org/lecture/deep-neural-network/understanding-mini-batch-gradient-descent-lBXu8), as shown in the image below. He also points out some corner cases that can cause this problem: 1. some mislabeled examples in the dataset. 2. the size of your mini-batch (try changing it, or compare with batch gradient descent). [![enter image description here](https://i.stack.imgur.com/EGqOx.png)](https://i.stack.imgur.com/EGqOx.png) Upvotes: 0
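The detective work suggested in the first answer, ranking training points by their loss, can be sketched framework-agnostically once you have each sample's predicted probability for its true class. The function name and toy probabilities below are illustrative assumptions, not Keras API:

```python
import numpy as np

def top_k_loss_indices(p_true_class, k=3, eps=1e-12):
    """Indices of the k samples with the highest cross-entropy loss.

    `p_true_class` holds, for each sample, the model's predicted
    probability of that sample's true class. Inspecting the worst
    samples often reveals mislabeled or badly parsed datapoints.
    """
    per_sample_loss = -np.log(np.clip(p_true_class, eps, 1.0))
    return np.argsort(per_sample_loss)[::-1][:k]

# Toy probabilities: the model is most wrong about samples 1 and 4.
worst = top_k_loss_indices([0.9, 0.01, 0.8, 0.95, 0.05], k=2)
```

In a Keras workflow, the per-class probabilities would come from `model.predict` on the training set.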
2019/02/20
<issue_start>username_0: This question is related to [What does "stationary" mean in the context of reinforcement learning?](https://ai.stackexchange.com/q/7640/2444), but I have a more specific question to clarify the difference between a non-stationary policy and a state that includes time. My understanding is that, in general, a stationary policy is a policy that doesn't change. My first (probably incorrect) interpretation of that was that it meant that the state shouldn't contain time. For example, in the case of a game, we could encode time as the current turn, which increases every time the agent takes an action. However, I think even if we include the turn in the state, the policy is still stationary so long as sending the same state (including turn) to the policy produces the same action (in the case of a deterministic policy) or the same probability distribution (stochastic policy). I believe the notion of stationarity assumes an additional implicit background state that counts the number of times we have evaluated the policy, so a more precise way to think about a policy (I'll use a deterministic policy for simplicity) would be: $$ \pi : \mathbb{N} \times S \rightarrow \mathbb{N} \times A $$ $$ \pi : (i, s\_t) \mapsto (i + 1, a\_t) $$ instead of $\pi : S \rightarrow A$. So, here is the question: **Is it true that a stationary policy must satisfy this condition?** $$ \forall i, j \in \mathbb{N}, s \in S, \pi (i, s) = \pi(j, s) $$ In other words, the policy must output the same result no matter when we evaluate it (either the ith or jth time). Even if the state $S$ contains a counter of the turn, the policy would still be stationary because for the same state (including turn), no matter how many times you evaluate it, it will return the same thing. Correct? As a final note, I want to contrast the difference between a state that includes time, with the background state I called $i$ in my definition of $\pi$.
For example, when we run an episode of 3 steps, the state $S$ will contain 0, 1, 2, and the background counter of policy evaluations $i$ will also be set to 2. Once we reset the environment to evaluate the policy again, the turn, which we store in the state, will go back to 0, but the background number of evaluations won't reset and it will be 3. My understanding is that this reset is when we could see the non-stationarity of the policy in action. If we get a different result here it's a non-stationary policy, and if we get the same result it's a stationary policy, and such a property is independent of whether or not we include the turn in the state. Correct?<issue_comment>username_1: > > So, here is the question: **Is it true that a non-stationary policy must satisfy this condition?** > > > $$ \forall i, j \in \mathbb{N}, s \in S, \pi (i, s) = \pi(j, s) $$ > > > With your custom notation (which certainly isn't common, but seems reasonable)... I assume you meant to say that a **stationary** policy must satisfy that condition, rather than that a **non-stationary** policy must satisfy that condition. In that case, yeah, that seems correct to me. A stationary policy would satisfy that condition, and a non-stationary one wouldn't. --- Wrapping back to the more usual notation, where $\pi(S, A)$ denotes a probability of selecting $A$ in $S$ (which also still covers the case of a deterministic policy, which would simply assign a probability of $1$ to a single action, and $0$ to all others)... I think it's still interesting to consider the case where we decide to "bake" a time-step counter into the state representation.
With this notation, for two different time steps $t$ and $t'$, such that $t \neq t'$, I'd say that a policy $\pi$ is **stationary** if and only if: $$\pi(S\_t, A\_t) = \pi(S\_{t'}, A\_{t'}) \text{ if } S\_t = S\_{t'} \wedge A\_t = A\_{t'}.$$ Note that if we decide to include $t$ in the state-representation, the case that $S\_t = S\_{t'}$ with $t \neq t'$ will actually **never hold within the same episode**: states at different time steps will always automatically be different from each other if the time step is one of the "features" encoded in the state. So, within a single episode a policy with a "time-aware" state representation will always automatically be stationary because there cannot be any repeating states. Of course, if you start looking across multiple different episodes, this can change; this is what you're doing when you write: > > Once we reset the environment to evaluate the policy again, the turn, which we store in the state, will go back to 0, but the background number of evaluations won't reset and it will be 3. > > > If you choose to **also** embed that "episode counter" into the state representation, you would no longer have any state repetitions at all across different episodes (I don't think doing this would ever be a good idea though).
I think it might be useful to differentiate between the learning phase and the "inference" (or "behaviour") phase, even though these two might interleave in the RL context. That is, you might be using a policy (to behave in the real world) even though you're still attempting to find the best policy. I am not just referring to on-policy algorithms: in general, you might be using a sub-optimal policy to behave in the real world while your learning algorithm, like Q-learning, is attempting to find the best policy. During the learning phase, the policy will keep changing over time (anyway), because you still haven't found the optimal policy. So, you could call the policy derived from the $Q$ values, during the $Q$-learning algorithm, a non-stationary policy, because it keeps changing (because the $Q$ values also keep changing). However, it is often the case that these are considered different policies associated with different approximations of the $Q$ function. You could also call a policy that changes in response to changes of the (dynamics of the) environment a non-stationary policy. In this case, you would call such an environment a non-stationary environment (because e.g. its transition model might keep changing over time). The problem that you describe when you compare the state $i$ with states that encode time only arises because of your definition of state $i$. You can also reset $i$ when you reset the environment. Upvotes: 1
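The distinction discussed above can be made concrete with two toy deterministic policies: one that ignores how often it has been called, and one that keeps the background evaluation counter $i$ from the question. This is a sketch for illustration, not standard RL library code:

```python
class StationaryPolicy:
    """Always maps the same state to the same action, however often it is queried."""
    def __init__(self, table):
        self.table = table

    def __call__(self, state):
        return self.table[state]


class NonStationaryPolicy:
    """Keeps an internal call counter i, so pi(i, s) may differ from pi(j, s)."""
    def __init__(self, table):
        self.table = table
        self.i = 0

    def __call__(self, state):
        action = self.table[(self.i % 2, state)]
        self.i += 1  # the implicit background counter from the question
        return action


stat = StationaryPolicy({"s0": "left"})
nonstat = NonStationaryPolicy({(0, "s0"): "left", (1, "s0"): "right"})
```

Querying `stat` repeatedly with `"s0"` always yields the same action; `nonstat` alternates even though the state is identical.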
2019/02/20
<issue_start>username_0: I am confused about how neural networks weigh different features or inputs. Consider this example. I have 3 features/inputs: an image, a dollar amount, and a rating. However, since one feature is an image, I need to represent it with very high dimensionality, for example with $128 \times 128 = 16384$ pixel values. (I am just using 'image' as an example, my question holds for any feature that needs a high-dimensional representation: word counts, one-hot encodings, etc.) Will the $16384$ 'features' representing the image completely overwhelm the other 2 features that are the dollar amount and rating? Ideally, I would think the network would consider each of the three true features relatively equally. Would this issue naturally resolve itself in the training process? Would training become a much more difficult task?<issue_comment>username_1: As stated in your example, the three features are: an image, a price, a rating. Now, you want to build a model that uses all of these features, and the simplest way to do this is to feed them directly into the neural network, but it's inefficient and fundamentally flawed, due to the following reasons: * In the first dense layer, the neural network will try to combine raw pixel values linearly with price and rating, which will produce features that are meaningless for inference. * It could perform well just by optimizing the cost function, but the model performance will be nowhere near as good as with a good architecture. So, the neural network doesn't care if the data is a raw pixel value, price, or rating: it would just optimize it to produce the desired output. That is why it is necessary to design a suitable architecture for the given problem. Possible architecture for your given example: 1. Separate your raw features, i.e. pixel values, and high-level data, i.e. price and rating 2. Stack 2-3 dense layers for raw features (to find complex patterns in images) 3.
Stack 1-2 dense layers for high-level features 4. Combine them in a final dense layer If you want to de-emphasize the importance of the image, just connect the first dense layer (16,384 units) to another layer having fewer connections, say 1024, and have more connections from the high-level data, say 2048. So, again, here's the possible architecture: 1. Raw features -> dense layer (16384) -> dense layer (1024) 2. High-level features -> dense layer (2048) 3. Combine 1 and 2 with another dense layer Upvotes: 2 [selected_answer]<issue_comment>username_2: The answer by username_1 is correct. Your results could be further improved i) by extracting image features with a convolutional (instead of fully connected) architecture, and ii) by exploiting transfer learning. To exploit transfer learning you i) pick some widely used model, e.g. ResNet-18, ii) initialize it with ImageNet-pretrained parameters, iii) replace its fully connected layer (the one that produces the 1000-D softmax input) with your own randomly initialized fully connected layer. If you are interested, have a look at the detailed [instructions](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html). Upvotes: 0
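A shape-level sketch of the two-branch architecture above, using plain numpy with random (untrained) weights just to show how the branch widths from the answer fit together; a real implementation would build and train these layers in a deep learning framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Branch 1: raw pixels 16384 -> 1024 (the text suggests 2-3 dense layers; one here for brevity).
W_img = rng.normal(scale=0.01, size=(16384, 1024))
# Branch 2: price + rating (2 features) -> 2048.
W_tab = rng.normal(scale=0.1, size=(2, 2048))
# Combining layer: concatenated 1024 + 2048 features -> 1 output.
W_out = rng.normal(scale=0.01, size=(1024 + 2048, 1))

def forward(pixels, price_rating):
    h_img = relu(pixels @ W_img)          # (batch, 1024)
    h_tab = relu(price_rating @ W_tab)    # (batch, 2048)
    h = np.concatenate([h_img, h_tab], axis=1)
    return h @ W_out                      # (batch, 1)

out = forward(rng.random((4, 16384)), rng.random((4, 2)))
```

Note how the tabular branch gets a wider representation (2048) than the image branch (1024), which is the answer's way of de-emphasizing the raw pixels.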
2019/02/21
<issue_start>username_0: How to detect liveness of face using face landmark points? I am getting face landmarks from android camera frames. And I want to detect liveness using these landmark points. How to tell if a human is making a specific movement that can be useful for liveness detection?
2019/02/22
<issue_start>username_0: I've been looking at various bounding box algorithms, like the three versions of RCNN, SSD and YOLO, and I have noticed that not even the original papers include pseudocode for their algorithms. I have built a CNN classifier and I am attempting to incorporate bounding box regression, though I am having difficulties in implementation. I was wondering if anyone can whip up some pseudocode for any bounding box classifier, or a link to one (I was unsuccessful in my search), to aid my endeavor. Note: I do know that there are many pre-built and pre-trained versions of these object classifiers that I can download from various sources, but I am interested in building it myself.<issue_comment>username_1: In General ========== Each of those projects has open source code on github that you can look at. If you do some quick googling you'll find the software for these basic regressors exists for different deep learning frameworks. Usually these types of projects only include pseudo-code for the custom or complicated layers that are involved in the detector. There isn't much of a need to include pseudo-code for a simple convolution because it's a well-established operation. I just included a few links here, but if you look around there's a bunch of implementations in different frameworks that you'll find. SSD === <https://github.com/amdegroot/ssd.pytorch> <https://github.com/weiliu89/caffe/tree/ssd> FasterRCNN (Better version of RCNN) =================================== <https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN> YOLO V3 ======= V3 -- <https://itnext.io/implementing-yolo-v3-in-tensorflow-tf-slim-c3c55ff59dbe> V1 -- <https://github.com/hizhangp/yolo_tensorflow> Aside ===== As an aside, unless you're trying to do novel research or fit a specific function, usually these detectors work pretty well out of the box.
Running a base level detector and then modifying it for your application is usually a safer route unless you have a deep understanding of how these networks work. Upvotes: 0 <issue_comment>username_2: The minimal algorithm for convolution in $\mathbb{R}^2$ is a four-dimensional iteration.

```
for all vertical kernel positions
    for all horizontal kernel positions
        initialize the value at the output position to the bias
        for all vertical positions in the kernel
            for all horizontal positions in the kernel
                add the product of the input value and the kernel value to the value at the output position
```

In $\mathbb{R}^n$ it is a $2n$ dimensional iteration following this pattern. The minimal algorithm for regression of bounding boxes orthogonal with respect to the image grid (no tilting) is this.

```
until number of boxes reaches max
    make first guess of two coordinates
    until number of guesses reaches max or matching criteria is met
        evaluate guess
        remember guess and guess results
        improve on guess based on evaluation results and possibly injected randomness, excluding locations already covered
        if some intermediate criteria is met
            change the nature of the guessing, evaluation, and improving as is appropriate for the criteria match (this covers approaches that have multiple phases)
    if no guess matched criteria
        break
```

That's approaching concepts from the top down. When approaching from the other direction, reverse engineer the best code. In the case of RCNN, it is inadvisable to seek implementations following the first paper expressing the approach. Reading the first paper may be helpful to get the gist of the approach, but reverse engineer the best one, which, in this case, may be [*Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks*](https://arxiv.org/pdf/1506.01497.pdf), <NAME>, <NAME>, <NAME>, and <NAME>, 2016. Study the implementation they pushed to git at <https://github.com/rbgirshick/py-faster-rcnn/tree/master/>. The algorithm is in lib/fast\_rcnn.
The reason this algorithm isn't spelled out in their paper, or any paper from the first on down through the lineage to their paper, is simple.

* The pseudo-code above is universal across all convolutions and all bounding box regressions, so that doesn't need to be restated with each approach.
* The main features of an approach like RCNN, SSD, or YOLO are not algorithmic. They are algebraic expressions of the guess, the evaluation, the improvement upon the guess, and the test for the criteria.
* The use of objects and functional programming makes the implementation more readable, so it can be easier to read the implementation than to read a huge chunk of the above pseudo-code with all the algebra and test branches plugged in.
* For the above reasons, it is rare that pseudo-code would be used prior to the implementation when the paper is written.
* The return on investment of reverse engineering from code to pseudo-code is only sufficient motivation if one is going to improve the algorithm and write another paper, and on the way to finishing the prior paper's pseudo-code, the new paper and the new code get finished first.

Since the author of this question seems interested in writing their own code, it may be reasonable to assume the same author may be interested in thinking their own thoughts, so I'll add this. None of these algorithms perform object recognition. Recognition has to do with cognition, and these approaches do not even touch upon cognitive processing, another branch of AI not related to convolution and probably not closely related to formal regression either. Additionally, bounding boxes are not the way animal vision systems work. Early gestalt experiments in vision indicate a complete independence of human vision from rectilinear formalities. In lay terms, humans and other organisms with vision systems don't have any conception of Cartesian coordinates. We can still read books if they are tilted slightly relative to the plane passing through our eyes.
We don't zoom or tilt in Cartesian coordinates. It may not be necessary to comprehend these facts to create an automated vehicle driving system that produces a better safety record than average human drivers, but that is only because humans don't set that bar very high and because cars roll in the plane of the road. These facts are indeed necessary in aeronautic systems used in military applications, where nothing is particularly Cartesian and the meaning of horizontal and vertical is ambiguous. For that reason, it is unlikely that bounding boxes will be the edge of vision technology for very long. If one wishes to transcend current mediocrity, consider bounding circles with fuzzy boundaries, which would be more like the systems that evolved over millions of biological iterations. If the computer hardware is poorly fit to radial processing, design new hardware in which radial processing is native and in which Cartesian coordinates may be foreign and cumbersome. Regarding the classifier, the classifier papers do generally include the algorithm, so those can be found by doing an academic search for the original paper describing the classifier being used. Upvotes: 2
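The convolution pseudo-code earlier in this thread can be made concrete with a short runnable sketch. This is a naive illustrative implementation (no padding, stride 1), not the code any of the linked detectors actually use:

```python
import numpy as np

def conv2d(image, kernel, bias=0.0):
    """Naive 2-D convolution (cross-correlation form): for each output
    position, start from the bias and accumulate input*kernel products."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):                 # all vertical kernel positions
        for j in range(ow):             # all horizontal kernel positions
            acc = bias                  # initialize output value to the bias
            for ki in range(kh):        # all vertical positions in the kernel
                for kj in range(kw):    # all horizontal positions in the kernel
                    acc += image[i + ki, j + kj] * kernel[ki, kj]
            out[i, j] = acc
    return out
```

Convolving a 3x3 image of ones with a 2x2 kernel of ones and zero bias, for example, yields a 2x2 output where every entry is 4.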
2019/02/22
<issue_start>username_0: I came across these 2 algorithms, but I cannot understand the difference between the two, both in terms of implementation and intuitively. So, what difference does the second point in both slides refer to? [![enter image description here](https://i.stack.imgur.com/kHuKM.png)](https://i.stack.imgur.com/kHuKM.png) [![enter image description here](https://i.stack.imgur.com/dOhO5.png)](https://i.stack.imgur.com/dOhO5.png)<issue_comment>username_1: The first-visit and the every-visit Monte-Carlo (MC) algorithms are both used to solve the ***prediction* problem** (also called the "evaluation problem"), that is, the problem of estimating the value function associated with a given (as input to the algorithms) fixed (that is, it does not change during the execution of the algorithm) policy, denoted by $\pi$. In general, even if we are given the policy $\pi$, we are not necessarily able to find the exact corresponding value function, so these two algorithms are used to **estimate** the value function associated with $\pi$. Intuitively, we care about the value function associated with $\pi$ because we might want or need to know "how good it is to be in a certain state", if the agent behaves in the environment according to the policy $\pi$. For simplicity, assume that the value function is the state value function (but it could also be e.g. the state-action value function), denoted by $v\_\pi(s)$, where $v\_\pi(s)$ is the *expected return* (or, in other words, *expected cumulative future discounted reward*), starting from state $s$ (at some time step $t$) and then following (after time step $t$) the given policy $\pi$. Formally, $v\_\pi(s) = \mathbb{E}\_\pi [ G\_t \mid S\_t = s ]$, where $G\_t = \sum\_{k=0}^\infty \gamma^k R\_{t+k+1}$ is the **return** (after time step $t$).
In the case of MC algorithms, $G\_t$ is often defined as $\sum\_{k=0}^{T-t-1} R\_{t+k+1}$, where $T \in \mathbb{N}^+$ is the last time step of the episode, that is, the sum goes up to the final time step of the episode, $T$. This is because MC algorithms, in this context, often assume that the problem can be naturally split into **episodes** and each episode proceeds in a discrete number of **time steps** (from $t=0$ to $t=T$). As I defined it here, the return, in the case of MC algorithms, is only associated with a single episode (that is, it is the return of one episode). However, in general, the expected return can be different from one episode to the other, but, for simplicity, we will assume that the expected return (of all states) is the same for all episodes. To recapitulate, the first-visit and every-visit MC (prediction) algorithms are used to estimate $v\_\pi(s)$, for all states $s \in \mathcal{S}$. To do that, at every episode, these two algorithms use $\pi$ to behave in the environment, so as to obtain some knowledge of the environment in the form of sequences of states, actions and rewards. This knowledge is then used to estimate $v\_\pi(s)$. *How is this knowledge used in order to estimate $v\_\pi$?* Let us have a look at the pseudocode of these two algorithms. [![enter image description here](https://i.stack.imgur.com/Q8YCg.png)](https://i.stack.imgur.com/Q8YCg.png) $N(s)$ is a "counter" variable that counts the number of times we visit state $s$ throughout the entire algorithm (i.e. from episode one to $num\\_episodes$). $\text{Returns(s)}$ is a list of (undiscounted) returns for state $s$. I think it is more useful for you to read the pseudocode (which should be easily translatable to actual code) and understand what it does rather than having it explained in words.
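As a rough illustration (my own translation, not the book's exact pseudocode), a first-visit MC prediction loop can be sketched in Python, assuming each episode is given as a list of $(S\_t, R\_{t+1})$ pairs obtained by following $\pi$:

```python
from collections import defaultdict

def first_visit_mc(episodes, gamma=1.0):
    """Estimate v_pi from episodes, where each episode is a list of
    (state, reward) pairs: reward is R_{t+1}, received after leaving state."""
    returns_sum = defaultdict(float)
    visit_count = defaultdict(int)
    for episode in episodes:
        # remember the first time step at which each state occurs
        first_visit = {}
        for t, (s, _) in enumerate(episode):
            first_visit.setdefault(s, t)
        g = 0.0
        for t in reversed(range(len(episode))):   # process the episode backwards
            s, r = episode[t]
            g = gamma * g + r                     # iterative return computation
            if first_visit[s] == t:               # record G only at the first visit
                returns_sum[s] += g
                visit_count[s] += 1
    return {s: returns_sum[s] / visit_count[s] for s in returns_sum}
```

The every-visit variant is obtained by deleting the first-visit check, so that $G$ is recorded at every occurrence of the state.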
Anyway, the basic idea (of both algorithms) is to generate trajectories (of states, actions and rewards) at each episode, keep track of the returns (for each state) and the number of visits (of each state), and then, at the end of all episodes, average these returns (for all states). This average of the returns should be an approximation of the expected return (which is what we wanted to estimate). The differences between the two algorithms are highlighted in $\color{red}{\text{red}}$. The part "*If state $S\_t$ is **not** in the sequence $S\_0, S\_1, \dots, S\_{t-1}$*" means that the associated block of code will be executed only if $S\_t$ is **not** part of the sequence of states that were visited (in the episode sequence generated with $\pi$) before the time step $t$. In other words, that block of code will be executed only if it is the first time we encounter $S\_t$ in the sequence of states, actions and rewards: $S\_0, A\_0, R\_1, S\_1, A\_1, R\_2 \ldots, S\_{T-1}, A\_{T-1}, R\_T$ (which can collectively be called the "episode sequence"), with respect to the time step and not the way the episode sequence is processed. Note that a certain state $s$ might appear more than once in $S\_0, A\_0, R\_1, S\_1, A\_1, R\_2 \ldots, S\_{T-1}, A\_{T-1}, R\_T$: for example, $S\_3 = s$ and $S\_5 = s$. Do not get confused by the fact that, within each episode, we proceed from the time step $T-1$ to time step $t = 0$, that is, we process the "episode sequence" backwards. We are doing that only to more conveniently compute the returns (given that the returns are *iteratively* computed as follows: $G \leftarrow \gamma G + R\_{t+1}$). So, intuitively, in the first-visit MC, we only update $\text{Returns}(S\_t)$ (that is, the list of returns for state $S\_t$, that is, the state of the episode at time step $t$) the first time we encounter $S\_t$ in that same episode (or trajectory).
In the every-visit MC, we update the list of returns for the state $S\_t$ every time we encounter $S\_t$ in that same episode. For more info regarding these two algorithms (for example, the convergence properties), have a look at section 5.1 (on page 92) of the book "[Reinforcement Learning: An Introduction](http://incompleteideas.net/book/bookdraft2018mar21.pdf)" (2nd edition), by <NAME> and <NAME>. Upvotes: 6 [selected_answer]<issue_comment>username_2: For anyone coming across this question who wants a very intuitive understanding of first- and every-visit Monte Carlo, look at the answer given in the link provided here: <https://amp.reddit.com/r/reinforcementlearning/comments/9zkdjb/d_help_need_in_understanding_monte_carlo_first/> After looking at that intuition, you can come back and look at username_1's answer provided above. Hope this helps anyone struggling with this idea. Upvotes: 1
2019/02/23
<issue_start>username_0: In reinforcement learning, successive states (actions and rewards) can be correlated. An [experience replay](http://www.incompleteideas.net/lin-92.pdf) buffer was used, in the [DQN architecture](https://web.stanford.edu/class/psych209/Readings/MnihEtAlHassibis15NatureControlDeepRL.pdf), to avoid training the neural network (NN), which represents the $Q$ function, with correlated (or non-independent) data. In statistics, the i.i.d. (independently and identically distributed) assumption is often made. See e.g. [this question](https://stats.stackexchange.com/q/213464/82135). [This](https://qr.ae/TUfiQ2) is another related question. In the case of humans, if consecutive data points are correlated, we may learn slowly (because the differences between those consecutive data points are not sufficient to infer more about the associated distribution). Mathematically, why exactly do (feed-forward) neural networks (or multi-layer perceptrons) require i.i.d. data (when being trained)? Is this only because [we use back-propagation to train NNs](https://www.ijcai.org/Proceedings/07/Papers/121.pdf)? If yes, why would back-propagation require i.i.d. data? Or is it actually the optimisation algorithm (like gradient descent) that requires i.i.d. data? Back-propagation is just the algorithm used to compute the gradients (which are e.g. used by GD to update the weights), so I think that back-propagation isn't really the problem. When using recurrent neural networks (RNNs), we apparently do not make this assumption, given that we expect consecutive data points to be highly correlated. So, why do feed-forward NNs require the i.i.d. assumption but not RNNs? I'm looking for a rigorous answer (ideally, a proof) and not just the intuition behind it.
If there is a paper that answers this question, you can simply link us to it.<issue_comment>username_1: There is an assumption behind the theory of training a neural network, that also applies to many other supervised learning methods, that a training sample is representative of the data set as a whole - that it has been sampled fairly from the population that the learning algorithm has been set up to approximate. The term i.i.d. stands for "independent and identically distributed". If you pick trajectories from RL, then the sampling (per individual\* record as used to train with) is not independent. Even if you choose the start of the trajectory randomly, you will have made one random choice, and then your remaining choices are made according to the trajectory - technically they are chosen by the policy and environment dynamics over a single step, which is usually *not* enough to make the second, third etc. steps from a trajectory fully independent random samples from the training population. To make the sampled observations independent, you should make a new random choice, and to be identically distributed that choice has to be made fairly over the whole dataset. If a subset of samples drawn for a mini-batch (or as consecutive individual samples) is correlated for an online algorithm like gradient descent, it causes a problem. For the algorithm to converge towards a globally optimal solution, it needs the errors, and the gradients they generate, to be unbiased samples of the "true" function gradients across the loss function. Correlated data from a trajectory does not do this at all - it exhibits a strong sampling bias, causing weight updates to move consistently in the wrong direction. You can demonstrate this effect by trying to learn any simple function like $y = x^2$ with $x$ from $-1$ to $+1$.
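A minimal sketch of such an experiment (my own toy setup, not from the answer itself: a tiny numpy MLP trained by per-sample SGD, with identical initialisation for both runs so that only the presentation order differs):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200).reshape(-1, 1)
y = x ** 2

def train(order, epochs=20, lr=0.05, hidden=16):
    """One-hidden-layer MLP trained by plain per-sample SGD,
    visiting the training pairs in the given order."""
    r = np.random.default_rng(1)                  # same init for both runs
    w1, b1 = r.normal(0, 0.5, (1, hidden)), np.zeros(hidden)
    w2, b2 = r.normal(0, 0.5, (hidden, 1)), np.zeros(1)
    for _ in range(epochs):
        for i in order:
            h = np.tanh(x[i] @ w1 + b1)
            err = (h @ w2 + b2) - y[i]            # dLoss/dPred for 0.5*err^2
            gh = (err @ w2.T) * (1 - h ** 2)      # backprop through tanh
            w2 -= lr * np.outer(h, err)
            b2 -= lr * err
            w1 -= lr * np.outer(x[i], gh)
            b1 -= lr * gh
    pred = np.tanh(x @ w1 + b1) @ w2 + b2
    return float(np.mean((pred - y) ** 2))

mse_sorted = train(np.arange(len(x)))             # serial: x from -1 to +1
mse_shuffled = train(rng.permutation(len(x)))     # i.i.d.-style random order
print(mse_sorted, mse_shuffled)
```

With settings like these, the shuffled ordering typically ends with a much lower MSE than the serial sweep from $-1$ to $+1$, illustrating the sampling-bias problem described above.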
Training a NN with randomly selected $x,y$ pairs gives far better results than training it serially with ~2000 records starting $(-1.000, 1.000) (-0.999, 0.998) (-0.998, 0.996)$ etc.

> So, why do feed-forward NNs require the i.i.d. assumption but not RNNs?

RNNs do need i.i.d. data, but the "unit" of sampling here is each sequence.

---

\* By "individually" I mean in terms of how they are used in the training process. A "unit" here is the smallest set of records that leads to a complete measure of loss in your optimisation. It is ok to draw longer trajectories at random if your smallest unit of measurable error is from a trajectory, as in Monte Carlo or TD($\lambda$) for RL, or for RNNs. There can be subtle problems with this - the trajectories still have to be i.i.d., as in they represent fair samples from the assumed population of trajectories. If you are working at the level of longer trajectories, typically this reduces systemic bias: in RL it reduces bias from initial conditions due to the bootstrap process. For RNNs I am less sure what this would mean, but suspect there is some equivalent. Typically, it also increases variance, meaning you may need more training samples. There is a bias/variance trade-off, which is why setting $\lambda$ somewhere between 0 and 1 in TD($\lambda$) is often the optimal choice. Note that in RL this is a different source of bias and variance than the one discussed when considering the number of parameters and regularisation in supervised learning. Upvotes: 4 <issue_comment>username_2: Suppose that we have some optimization criterion $J(x)$, which we aim to optimize (maybe maximize, maybe minimize), and which we can compute for a single example $x$. In an "ideal world", where we have no restrictions on computation time and memory, we would generally want to run training algorithms on the complete "ground truth" population.
For example, if we're training a model (may be a DNN, but may also be some other kind of Machine Learning model), we'd ideally train it on the complete population of all "real-world" images that have ever been produced, ever will be produced, or ever could be produced (and, of course, all with accurate labels). We could write our full optimization criterion as: $$\sum\_{x \in \mathcal{P}} J(x),$$ where $\mathcal{P}$ denotes the complete population. If we have a model that we would like to train using gradient descent (such as a Neural Network), this means we have to compute the gradient with respect to some trainable parameters $\theta$: $$\nabla\_{\theta} \sum\_{x \in \mathcal{P}} J(x).$$ --- In practice, we do not have this ideal scenario, we do not have access to the complete population $\mathcal{P}$. Often, we find ourselves approximating the population with a (hopefully rather large) dataset $\mathcal{D}$. This could be a collection of images like ImageNet, or in (Deep) Reinforcement Learning it could be a large experience replay buffer. **If the dataset $\mathcal{D}$ is an accurate representation of the complete population's distribution**, we can estimate the gradient of the objective we ultimately care about (computed over the complete population) by the gradient computed over the dataset $\mathcal{D}$: $$\nabla\_{\theta} \sum\_{x \in \mathcal{P}} J(x) \approx \nabla\_{\theta} \sum\_{x \in \mathcal{D}} J(x).$$ Collecting such a dataset rather than the complete population is often actually feasible in practice. If we can afford to compute the objective/gradient over such a complete dataset, that's great. **Note that in such a case, the data can already be viewed as being i.i.d.: it's the best approximation we have of a single, complete distribution (the population distribution)**. --- However, computing the objective / gradient over a large complete dataset is often still prohibitively expensive in terms of computation time. 
This is one of the reasons (not the only one) why we often use minibatches $B$. Then, we approximate the gradient of the objective of the dataset (which is itself an approximation of the gradient for the complete population) by computing it only over a minibatch $B$: $$\nabla\_{\theta} \sum\_{x \in \mathcal{P}} J(x) \approx \nabla\_{\theta} \sum\_{x \in \mathcal{D}} J(x) \approx \nabla\_{\theta} \sum\_{x \in \mathcal{B}} J(x).$$ If we're repeatedly (over many, many training iterations) going to use such an approximation to take gradient descent steps, it is crucial that the gradient computed over this minibatch is actually an accurate approximation of the "true" gradient over the complete population; **if it's not an accurate approximation, we're optimizing the wrong objective!** In my opinion, this means that we have an even stronger requirement for our minibatch than just wanting it to be identically distributed. It's not sufficient for our minibatch to be sampled from any arbitrary identical distribution. **We want the instances in our minibatch to be sampled from one very specific identical distribution; the dataset/population distribution! If they're not all sampled from that particular distribution, they're an unreliable approximation of the objective we truly care about**. --- Now, you may wonder if it wouldn't be possible for multiple biased, "unrepresentative" minibatches with "opposite" biases to "cancel each other out" if they're used in different, subsequent gradient descent steps. Couldn't we first run a bunch of gradient descent steps on minibatches that only contain images of dogs, and afterwards "cancel out" any errors by also running a bunch of updates on minibatches containing only images of cats? If you're lucky, it might work sometimes, but it's unreliable. 
One problem is that the first iterations using only dog images may cause you to end up in a poor area of the "parameter space", which you would never have ended up in if you had used a correct mix of cat and dog images all along. Escaping that poor area may be much more difficult than simply ensuring you never reach it. Another problem is that it would be extremely difficult to find the "correct" number of gradient descent updates to run with cat images. Run too few, and you're still stuck recognising only dogs. Run too many, and you may forget how to recognise dogs at all and only start recognising cats. Things like momentum in more sophisticated optimizers will exacerbate this issue.

---

Note that I don't think the requirement for i.i.d. batches is necessarily unique to gradient descent. Other learning techniques may have the same requirement (maybe for similar reasons, maybe for different reasons). Upvotes: 3 <issue_comment>username_3: Short answer
------------
One reason why we assume/require i.i.d. data is that **it simplifies the computations**. More specifically, if we assume the samples to be i.i.d., their joint probability simplifies to a product of marginal probabilities.

Long answer
-----------
In a dataset $D$, suppose we have $n$ samples. We define their joint probability (i.e. the probability of these samples occurring at the same time) as follows $$P(z\_1, z\_2, \dots, z\_n) \tag{1}\label{1}$$ For instance, suppose each $z\_i$ is binary (i.e. can take one of two values, e.g. $0$ or $1$). Then, to define the probability distribution over all possible values of all $z\_i$, we need to compute $2^n$ probabilities (which corresponds to all combinations of the values of all the $z\_i$s). More importantly, if the samples are correlated, this probability must be calculated as a product of *conditional* probabilities ([by definition](https://en.wikipedia.org/wiki/Joint_probability_distribution)).
However, if the samples are i.i.d., then the joint probability in equation (\ref{1}) can be computed as a product of marginal probabilities $$P(z\_1, \dots, z\_n)=\prod\_{i}P(z\_i) \tag{2}\label{2}$$ which is typically *simpler* than calculating with conditional probabilities, because marginal probabilities are easier to compute.

### Example: binary cross-entropy

In the case of a binary classification problem, we assume we have a labelled dataset $D = \{(x\_i, y\_i) \}$, where $y\_i$ is the binary label ($0$ or $1$) for the corresponding input $x\_i$. So, we can define our likelihood function, parametrized by the parameters $w$, as follows $$\ell(w) = P(y\_1, y\_2, \dots, y\_n \mid x\_1, x\_2, \dots, x\_n; w) \tag{3}\label{3}$$ If we assume $(x\_i, y\_i)$ to be independent of $(x\_j, y\_j)$, for all $i \neq j$, then this joint probability (\ref{3}) of the labels given the inputs can also be written as a product of the marginals of the labels given the inputs. $$\ell(w) = \prod\_i P(y\_i \mid x\_i; w) \tag{4}\label{4}$$ For numerical stability (because sums are more stable than products of small numbers), rather than considering the likelihood, we can consider the *log-likelihood*, which is just the logarithm of $\ell$. However, this transforms the product in (\ref{4}) into a sum (note that this is just a rule of logarithms!). $$\log \ell(w) = \sum\_i \log P(y\_i \mid x\_i; w) \tag{5}\label{5}$$ We can also do this because the logarithm is a strictly increasing function, so the maxima/minima of $\ell$ are attained at the same parameters as the maxima/minima of $\log \ell$. So, now, the goal is to find the parameters $w$ of the log-likelihood such that the probability of the samples is maximized. Equivalently, rather than maximizing the log-likelihood, we can minimize its negative (this is what is usually done in practice!), which leads to what people call the cross-entropy function (which is just the negative log-likelihood).
$$\text{CE}(w) = - \log \ell(w)$$ So, minimizing the cross-entropy $\text{CE}(w)$ is exactly the same thing as maximizing $\log \ell(w)$. Given that the labels $y\_i$ are binary, we can assume that $P(y\_i \mid x\_i; w)$ in (\ref{5}) is a Bernoulli distribution, so we can write the probability of a label $y\_i$ (whether it is $1$ or $0$) as follows $$P(y\_i \mid x\_i; w)=\hat{p}^{y\_i} (1-\hat{p})^{(1-y\_i)},$$ where $\hat{p}$ is the output of the neural network $f(x\_i; w) = \hat{p}$ when fed with $x\_i$, and $\hat{p}$ is just an estimate of the parameter $p$ of the Bernoulli distribution we try to learn. So, now, our cross-entropy $\text{CE}(w)$ can be written as follows $$\text{CE}(w) = -\sum\_i \log \left( \hat{p}^{y\_i} (1-\hat{p})^{(1-y\_i)} \right) $$ So, now, we just need to estimate $w$ to produce $\hat{p}$. So, basically, we have avoided the computation of conditional probabilities between the labels of different samples. Upvotes: 1
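As a quick numerical sanity check of this derivation (with made-up labels and network outputs), the cross-entropy computed as a sum is exactly the negative log of the product of per-sample Bernoulli probabilities from equation (4):

```python
import numpy as np

y = np.array([1, 0, 1, 1])                 # binary labels y_i
p_hat = np.array([0.9, 0.2, 0.7, 0.6])     # network outputs f(x_i; w)

# per-sample Bernoulli probabilities P(y_i | x_i; w)
probs = p_hat ** y * (1 - p_hat) ** (1 - y)

likelihood = np.prod(probs)                # eq. (4): product of marginals
cross_entropy = -np.sum(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))

# the two formulations agree: CE(w) = -log l(w)
assert np.isclose(cross_entropy, -np.log(likelihood))
```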
2019/02/26
<issue_start>username_0: I have just found the paper and documentation about GAN 2.0, the new face creator from Nvidia. On the website <https://thispersondoesnotexist.com/> they have used this approach to create realistic faces. Unfortunately, the website does not exist anymore. Is there another webpage demonstrating the new face creator from Nvidia?<issue_comment>username_1: There is a Youtube video: [Nvidia - AI for Generating High-Resolution Images - <NAME>](https://www.youtube.com/watch?v=zrVDfX9XFpE) I don't know about webpages. Upvotes: 0 <issue_comment>username_2: You can find a [decent sized collection on archive.org](https://web.archive.org/web/*/thispersondoesnotexist.com). Just browse through the snapshots and they'll contain a few images. It probably doesn't contain every single one, but it has quite a decent set to start from. :) Here's one I found: [![enter image description here](https://i.stack.imgur.com/9h6jSb.jpg)](https://i.stack.imgur.com/9h6jSb.jpg) Upvotes: 1
2019/02/27
<issue_start>username_0: I have already implemented a relatively simple DQN on Pacman. Now I would like to clearly understand the difference between a DQN and the techniques used by AlphaGo Zero/AlphaZero, and I couldn't find a place where the features of both approaches are compared. Also, sometimes, when reading through blogs, I believe different terms might in fact be the same mathematical tool, which adds to the difficulty of clearly understanding the differences. For example, variations of DQN, e.g. Double DQN, also use two networks, like AlphaZero. Does anyone have a good reference regarding this question, be it a book or an online resource?<issue_comment>username_1: DQN and AlphaZero do not share much in terms of implementation. However, they are based on the same Reinforcement Learning (RL) theoretical framework. If you understand terms like MDP, reward, return, value, policy, then these are interchangeable between DQN and AlphaZero. When it comes to implementation, and what each part of the system is doing, then this is less interchangeable. For instance, the two networks you have read about in AlphaZero are the policy network and the value network, whilst double DQN alternates between two value networks. Probably the best resource that summarises both DQN and AlphaZero, and explains how they extend the basic RL framework in different ways, is Sutton & Barto's [Reinforcement Learning: An Introduction (second edition)](http://www.incompleteideas.net/book/RLbook2018.pdf) - Chapter 16 sections 5 and 6 cover the designs of DQN Atari, AlphaGo and AlphaZero in some depth.
In brief:

DQN Atari
---------

* Is model-free
* Uses an action value estimator for $Q(s,a)$ values, based on a Convolutional Neural Network (CNN)
* Uses experience replay and a temporarily frozen target network to stabilise the learning process
* Uses a variety of tricks to simplify and standardise the state description and reward structure so that the exact same design and hyperparameters work across multiple games, demonstrating that it is a general learner.

AlphaZero
---------

* Is model-based (although *some* of the learning is technically model-free, based on samples of play)
* Uses a policy network (estimating $\pi(a|s)$) and a state value network (estimating $V(s)$), based on CNNs. In practice, for efficiency, the NNs for these share many layers and parameters, so how many "networks" there are depends on how you want to count them.
  + The earlier AlphaGo version had 4 separate networks, 3 variations of policy network - used during play at different stages of planning - and one value network.
* Is designed around self-play
* Uses Monte Carlo Tree Search (MCTS) as part of estimating returns - MCTS is a planning algorithm critical to AlphaZero's success, and there is no equivalent component in DQN

Upvotes: 4 [selected_answer]<issue_comment>username_2: You can actually combine an AlphaZero-like approach with DQN: [A\* + DQN](https://github.com/imagry/aleph_star) Upvotes: 1
2019/02/27
<issue_start>username_0: In the context of a Deep Q-Network (DQN), a target network is usually utilized. The target network is a slowly changing network whose update rate is a hyperparameter. This includes both replacing it entirely every $N$ iterations and updating it slowly at every iteration. Since the rate is hard to fine-tune manually, is there an alternative technique that can eliminate the use of the target network, or at least make it less susceptible to the changing rate?<issue_comment>username_1: *I have done some research and would like to share.* **Generally, to eliminate the use of the target network, one needs to show that training would be *stable* under off-policy semi-gradient.** There are two approaches that might work:

1. Experience reweighting
2. Constrained optimization

**Experience reweighting**

Probably the simplest idea is to use the importance sampling ratio ([Precup 2001](https://www.researchgate.net/publication/2371944)) to multiply each sample. This would correct the off-policy sample distribution to the on-policy distribution. It has been shown ([Sutton 2016](http://jmlr.org/papers/volume17/14-488/14-488.pdf)) that on-policy samples lead to stability for semi-gradient methods. However, this line of work that corrects for the sample distribution has high variance and does not work well in practice. Another line of work, which aims to partially correct the distribution only to the extent that is provably stable, is called **Emphatic TD** ([Sutton 2016](http://jmlr.org/papers/volume17/14-488/14-488.pdf)). The distribution is still mostly off-policy, but it is proved to be stable under linear function approximation.

**Constrained optimization**

The general wisdom comes from the fact that updating a value would also alter the target value, since they share the same set of parameters. This problem is called over-generalization.
To reduce this, [Durugkar](http://www.cs.utexas.edu/~pstone/Papers/bib2html-links/NIPS17-ishand.pdf) suggests that constraining the target Q-value to be steady after a parameter update helps reduce divergence. [Achiam 2019](https://arxiv.org/abs/1903.08894) provides a good overview of the problem, and suggests a way to make sure that the update is a non-expansion (which should prevent divergence). **Sidenote** Some works have shown convergence under off-policy samples, but only in the tabular case. It takes much more to show that an algorithm is stable under function approximation (even a linear one). Works like:

* **Q($\lambda$)** ([Harutyunyan 2016](https://arxiv.org/abs/1602.04951)), which augments the value function with a correction term
* **Retrace($\lambda$)** ([Munos 2016](http://arxiv.org/abs/1606.02647)), which extends the former by truncating the importance sampling ratio

are shown to work *only* in the tabular case. Upvotes: 3 <issue_comment>username_2: Never tried it, but there are a couple of approaches which may or may not help: a distributional DQN (not C51, the other one; can't find the reference right now, typing from phone) with several output heads, chosen randomly; or multiple agents learning from each other at random, with some regularizer to prevent collapse to the same net. Both approaches essentially try to "hide" or smear the target. Upvotes: 1
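For concreteness, the two update schemes mentioned in the question - full replacement every $N$ steps versus a slow "Polyak" update at every step - can be sketched with numpy arrays standing in for network parameters:

```python
import numpy as np

# toy stand-ins for the online and target network parameters
online = [np.array([1.0, 2.0]), np.array([3.0])]
target = [np.zeros(2), np.zeros(1)]

def hard_update(target, online):
    """Replacement update: copy the online weights (done every N steps)."""
    for t, o in zip(target, online):
        t[...] = o

def soft_update(target, online, tau=0.01):
    """Polyak update: move the target a small step toward the online net
    (done every step): target <- (1 - tau) * target + tau * online."""
    for t, o in zip(target, online):
        t[...] = (1.0 - tau) * t + tau * o

soft_update(target, online)   # target moves 1% of the way toward online
hard_update(target, online)   # target becomes an exact copy of online
```

A soft update with a small $\tau$ trades the sudden shifts of hard replacement for a continuously but slowly moving target; note that $\tau$ plays the same hard-to-tune role as $N$.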
2019/02/27
446
1,507
<issue_start>username_0: I am trying to solve part b of exercise 3.6 (page 113) from the book [Artificial Intelligence: A Modern Approach](https://faculty.psau.edu.sa/filedownload/doc-7-pdf-a154ffbcec538a4161a406abf62f5b76-original.pdf). More specifically, I need to give a complete problem formulation (that is precise enough to be implemented) for the following problem. > > A 3-foot-tall monkey is in a room where some bananas are suspended from the 8-foot ceiling. He would like to get the bananas. The room contains two stackable, movable, climbable 3-foot-high crates. > > > Give the initial state, goal test, successor function, and cost function for each of the following. Choose a formulation that is precise enough to be implemented. > > ><issue_comment>username_1: * Initial state: initial position of the monkey. * Possible actions + climb on the crate, + get down the crate, + move the crate from one spot to another, + stack one crate on another, + walk from one spot to another, + grab bananas (if standing on the crate) * Goal test: did the monkey get the bananas? * Cost function: the number of actions completed Upvotes: 1 <issue_comment>username_2: Initial state: the monkey, the suspended bananas, and two crates in the room. Goal test: the monkey has the bananas. Successor function: jump on a crate; jump off a crate; push a crate from one spot to another; walk from one spot to another; grab the bananas (if standing on a crate). Cost function: number of actions. Upvotes: 0
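A minimal sketch of a state representation precise enough to implement, building on the answers above. All names are mine, positions are discretised into spot indices, and I assume the monkey must stand on both stacked crates under the bananas to reach them.

```python
from dataclasses import dataclass

BANANA_SPOT = 3   # spot index directly under the bananas (assumed layout)

@dataclass(frozen=True)
class State:
    monkey_at: int        # spot index of the monkey
    monkey_height: int    # 0 = floor, 1 = on one crate, 2 = on two stacked crates
    crates: tuple         # ((spot, stack_level), (spot, stack_level)) for the two crates
    has_bananas: bool

def goal_test(s: State) -> bool:
    return s.has_bananas

def can_grab(s: State) -> bool:
    # Assumption: the monkey needs both crates stacked under the bananas.
    return s.monkey_at == BANANA_SPOT and s.monkey_height == 2

s = State(monkey_at=BANANA_SPOT, monkey_height=2,
          crates=((BANANA_SPOT, 0), (BANANA_SPOT, 1)), has_bananas=False)
print(can_grab(s), goal_test(s))  # True False
```

The successor function would enumerate the listed actions (walk, push, climb, stack, grab) as transitions between such `State` values, with unit cost per action.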
2019/02/27
599
2,234
<issue_start>username_0: Suppose I want to build a neural network regression model that takes one input and returns one output. Here's the training data: ``` 0.1 => 0.1 0.2 => 0.2 0.1 => -0.1 ``` You will see that there are two inputs `0.1` that map to different output values, `0.1` and `-0.1`. So what will happen with most machine learning models is that they will predict the average when `0.1` is fed to the model. E.g. the output for `0.1` will be `(0.1 + (-0.1))/2 = 0`. But this average answer of `0` is incorrect. I want the model to tell me that the input is ambiguous/insufficient to infer the output. Ideally, the model would report it as a form of confidence. **How do I report predictability confidence from the input?** The application that I find very useful in many areas is that I could then later ask the model to show me inputs that are easy to predict and inputs that are ambiguous. This would let me collect data that makes sense. One way I know is to train the model, then check the error on each training example; if it's high, it probably means that the input is ambiguous. But if you know any other papers or better techniques, I would appreciate knowing about them!<issue_comment>username_1: [Predicting with confidence: the best machine learning idea you never heard of](https://scottlocklin.wordpress.com/2016/12/05/predicting-with-confidence-the-best-machine-learning-idea-you-never-heard-of/) by <NAME> might provide you an idea. > > The name of this basket of ideas is “conformal prediction.” > > > Upvotes: 1 <issue_comment>username_2: Another specific way to do this, if one uses a neural network: use a dropout layer in your network and, instead of scaling the activations at test time, sample the activations (just like at training time) and predict multiple times for a given input, then look at the distribution of your outputs. Intuitively this would add a "probabilistic, Bayesian effect" to your neural network. 
I think this method was first proposed in [Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning](https://arxiv.org/pdf/1506.02142.pdf) which is called *Monte Carlo Dropout*. Upvotes: 2
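The Monte Carlo Dropout idea described above can be sketched with plain NumPy. This is a toy one-hidden-layer network with random (untrained) weights, purely to illustrate the mechanics: dropout is kept on at prediction time, and the spread of repeated stochastic predictions serves as an uncertainty signal.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(1, 32))   # toy untrained weights, for illustration only
W2 = rng.normal(size=(32, 1))

def predict(x, drop_p=0.5):
    """One stochastic forward pass with dropout kept ON at test time."""
    h = np.maximum(0.0, x @ W1)           # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p   # sample a fresh dropout mask
    h = h * mask / (1.0 - drop_p)         # inverted-dropout scaling
    return (h @ W2).item()

x = np.array([[0.1]])
samples = [predict(x) for _ in range(200)]
mean, std = np.mean(samples), np.std(samples)
# A large std relative to the mean flags an input the model is unsure about.
print(mean, std)
```

For the ambiguous-input problem in the question, one would compare this spread across inputs: inputs with conflicting labels should tend to show a wider predictive distribution after training.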
2019/02/27
409
1,590
<issue_start>username_0: I hope this question is not too broad or general. I have a very large set of images all of which contain text (some have more, some less). All of them have been tagged as containing, say, English text or Korean. I wonder if convolutional neural networks would be a good approach to classify these images as containing English vs. Korean. Or is there any existing literature/method that does this already. Crucially though, I am not interested in "understanding" the text, so this is not an NLP task but, I suppose, a task of classifying orthographies in the images.
2019/02/28
1,903
3,939
<issue_start>username_0: I have a Bayesian network, which has the following data: $P(S) = 0.07$ $P(A) = 0.01$ $P(F \mid S,A) = 1.0$ $P(F \mid S, \lnot A) = 0.7$ $P(F \mid \lnot S, A) = 0.9$ $P(F \mid \lnot S, \lnot A) = 0.1$ And I'm asked to get $P(F \mid S)$. Is it possible? How can I deduce it?<issue_comment>username_1: I do not think you can compute $P(F \mid S=s)$ only using your given probabilities (and no further independence assumption between your random variables). First of all, note that $P(F \mid S=s)$ is the probability of $F$ (being equal to one of the values that $F$ can attain), given that the value of $S$ is $s$. Note also that $P(F \mid S)$ is a shorthand for $P(F \mid S=s)$. In general, by the *law of total probability*, we have \begin{align} P(A) &= P(A, B) + P(A, B^c) \\ &= P(A|B)P(B) + P(A|B^c)P(B^c) \end{align} where $B^c$ is the complement of $B$ (in case $A$ and $B$ are sets or events). So, in your specific case, we have \begin{align} P(F \mid S) &= P(F \mid S, A)P(A \mid S) + P(F \mid S, \lnot A)P(\lnot A \mid S) \\ &= P(F \mid S, A) \frac{P(A, S)}{P(S)} + P(F \mid S, \lnot A)\frac{P(\lnot A, S)}{P(S)} \\ &= \frac{P(F, S, A)}{P(A, S)} \frac{P(A, S)}{P(S)} + \frac{P(F,S, \lnot A)}{P(\lnot A, S)}\frac{P(\lnot A, S)}{P(S)} \\ &= \frac{P(F, S, A)}{P(S)} + \frac{P(F,S, \lnot A)}{P(S)} \\ &= \frac{1}{P(S)} (P(F, S, A) + P(F,S, \lnot A) ) \\ &= \frac{P(F, S)}{P(S)} \end{align} We have $P(F \mid S, A), P(S)$ and $P(F \mid S, \lnot A)$, but we do not have $P(A, S)$, $P(F, S, A)$ or $P(F, S)$ (and I think we cannot retrieve them from your given probabilities). (I will be happy to be corrected, if this conclusion is wrong. Maybe I'm not seeing another way of computing it now.) Upvotes: 0 <issue_comment>username_2: I believe you can deduce it. 
Using the product rule: $p(x,y) = p(x\mid y)p(y)$ we have: $P(F\mid S) = \frac{P(F,S)}{P(S)}$ we have $P(S)$ and we do not have $P(F,S)$, but we can use the addition rule: $p(x) = \sum\limits\_y p(x, y)$ $P(F, S) = P(F,S,A) + P(F,S,\lnot A)$ the first term on the right side, using the product rule, is: $P(F,S,A) = P(F\mid S,A)P(S,A)$ I assume that $S$ and $A$ are independent, so we have: $P(F,S,A) = P(F\mid S, A)P(S)P(A)$ for the second term on the right side, following the same logic, we have: $P(F,S,\lnot A) = P(F\mid S, \lnot A)P(S)P(\lnot A)$ we have all the needed members so in the end we have: $P(F\mid S) = \frac{P(F\mid S, A)P(S)P(A) + P(F\mid S, \lnot A)P(S)P(\lnot A)}{P(S)}$ Upvotes: 0 <issue_comment>username_3: This is a bit of a puzzle but you can compute a reasonably narrow limit even without knowing whether or not $P(S,A) = P(S) P(A)$. Start with the contingency table relating $P(S, A)$, $P(S,\neg A)$, $P(\neg S, A)$, $P(\neg S,\neg A)$ to $P(S)$ and $P(A)$ : $$\begin{array}{cc|c} P( S,A)& P(\neg S,A) & P(A) \\ P(S,\neg A)& P(\neg S,\neg A) & P(\neg A) \\ \hline P(S)& P(\neg S) & 1 \\ \end{array} \quad \rightarrow \quad \begin{array}{cc|c} x& 0.01-x & 0.01 \\ 0.07-x& 0.92+x & 0.99 \\ \hline 0.07& 0.93 & 1 \\ \end{array}$$ note that the cells must be between zero and one, thus $0 \leq x \leq 0.01$ --- Then with \begin{array}{rcrl}P(F|S) &= & & P(F|S,A) P(A|S) + P(F| S,\neg A) P(\neg A|S)\\ &=&& 1.0 \frac{x}{0.07} + 0.7 \frac{0.07-x}{0.07} \\ & = && 0.7 + \frac{30}{7} x \end{array} you get $$0.7 \leq P(F|S) \leq 0.743$$ --- To solve $P(F|S)$ exactly you need to narrow down $x$ more precisely. Possibilities are: * If you know $P(F)$ then you could use $$\begin{array}{rcrl}P(F) &= & & P(F|S,A) P(S,A) \\ && +& P(F|\neg S,A) P(\neg S,A) \\ &&+& P(F| S,\neg A) P(S, \neg A)\\&&+&P(F|\neg S,\neg A) P(\neg S,\neg A)\\ &=&& 1.0 (x) + 0.7 (0.07 - x) + 0.9 (0.01-x)+0.1(0.92+x) \\ & = && 0.15-0.5 x \end{array}$$ * If you know that $S \perp\!\!\!\perp A$ then you can use $x = P(S,A) = P(S)P(A) = 0.0007$ Upvotes: 1
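The bound $0.7 \leq P(F|S) \leq 0.743$ derived above can be checked numerically. This is a quick sketch; variable names are mine, and $x = P(S,A)$ is the single unknown ranging over $[0, 0.01]$.

```python
# Given probabilities from the question.
P_S, P_A = 0.07, 0.01
P_F_given_SA, P_F_given_SnotA = 1.0, 0.7

def p_f_given_s(x):
    # P(F, S) = P(F|S,A) P(S,A) + P(F|S,notA) P(S,notA), with x = P(S, A)
    return (P_F_given_SA * x + P_F_given_SnotA * (P_S - x)) / P_S

lo, hi = p_f_given_s(0.0), p_f_given_s(0.01)
print(lo, hi)                   # the two ends of the bound: 0.7 and ~0.743
print(p_f_given_s(P_S * P_A))   # value under independence of S and A: 0.703
```

The endpoints match the derivation, and the independence assumption ($x = 0.0007$) lands inside the interval, consistent with username_2's answer.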
2019/03/01
369
1,414
<issue_start>username_0: I was reading an article on [Medium](https://chatbotsjournal.com/why-i-should-should-not-invest-in-chatbot-df952cbabf10) and wanted to make it clear whether a bot created on IBM Watson is an intelligent one or unintelligent. > > Simply put, there are 2 types of chatbots — unintelligent ones that act using predefined conversation flows (algorithms) written by developers building them and intelligent ones that use machine learning to interact with users. > > ><issue_comment>username_1: A quick glance at the [IBM Watson wikipedia page](https://en.wikipedia.org/wiki/Watson_(computer)) reveals that it does indeed use machine learning. Watson is a complex computing system that uses a variety of cutting edge techniques and concepts such as natural language processing and machine learning. Upvotes: 0 <issue_comment>username_2: Yes, it does and at many parts of the solution. For one of the core components - intent detection - Intento did [a benchmark comparing IBM Watson and other similar products](https://www.slideshare.net/KonstantinSavenkov/nlu-intent-detection-benchmark-by-intento-august-2017). Outside of intent detection, there are other areas where AI techniques help - e.g. disambiguation, bootstrapping a bot from chat logs etc. Specifically for IBM Watson, you can learn more [here](https://www.ibm.com/cloud/watson-assistant/features/). Upvotes: 3 [selected_answer]
2019/03/01
1,726
7,048
<issue_start>username_0: The following plot shows error function output based on system weights. Two equal local minima are marked with green pointers. Note that the red dots are not related to the question. Does the right one generalize better compared to the left one? My assumption is that, for the right minimum, if the weights change the overall error increases less than at the left minimum. Would this somehow mean the system does better generalization if the right one is chosen as the optimum minimum? [![enter image description here](https://i.stack.imgur.com/pDEFF.png)](https://i.stack.imgur.com/pDEFF.png)<issue_comment>username_1: I don't think that the concept of [generalization](https://en.wikipedia.org/wiki/Generalization_error) is (directly) related to the "shape" of the function close to the point where it attains a minimum. The concept of generalisation refers to when a trained model is able to "[perform](https://stats.stackexchange.com/q/385312/82135) well" on unseen data (that is, data not seen during the training phase). If a trained model does not generalise well, then it might have ["overfitted" or "underfitted"](https://en.wikipedia.org/wiki/Overfitting). Overfitting means that the model performs well on the training data, but not on other data. In other words, in the case of overfitting, the model learns the structure of the training data, but not of the whole population (of interest), or it learns the "noise" (that is, the data which is not part of the population of interest) present in the training data. This also implies that the training data is not a "good" sample of the population, that is, it doesn't "summarise" well all characteristics of the population. In practice, this is often the case (for large populations). Underfitting occurs when the model is not even able to learn enough about the training data during the training phase. 
For example, underfitting occurs when you try to train a linear model but the data does not have a linear relationship. So, being able to generalise (or not) depends on the model (including the number of parameters), but also on the data you are given. > > Does the right one generalize better compared to the left one? > > > The shape of the function will only be useful during the training phase. More specifically, in this case, you might reach one of the two minima faster than the other (depending also on the optimisation algorithms, model, etc., that you use). I would like to note that, in machine learning, we often minimise a function of functions (which in mathematics is called a "[functional](https://en.wikipedia.org/wiki/Functional_(mathematics))"). Why is that? For "simplicity", consider a simple neural network (NN) model (e.g. a multi-layer perceptron). We usually train such a NN using gradient descent (or one of its variants) by minimising a function (e.g. the mean squared error). Essentially, when we train such a NN, we want to find the function (which is represented by the parameters of the NN) that minimises e.g. the MSE. In this case, the mean squared error (MSE) is the functional we attempt to minimise. Note that the MSE is a function of the parameters of the current NN and that a NN is a model that represents a function. Let's go back to your question. If, during the training phase, you get one minimum rather than the other, you will get different NNs (say $A$ and $B$). Note, again, that the point where our MSE function attains one of its minima, in this context of training a NN (and often, in general, in machine learning), is a function (and not just a scalar). 
I know that, at the beginning (and if you're not a math guy), it might be difficult to think of functions as points (actually, more precisely, we should call them "vectors") that minimise other functions, but this concept exists and actually underlies a great amount of machine learning techniques (e.g. training NNs using back-propagation by minimising a cost function). *So, why do I think that we can't say much about the generalisation ability of NNs $A$ and $B$?* Suppose that we have access, after the training phase, to both NNs $A$ and $B$ (that is, the ones that correspond to the "points" where your function attains the two (visible) minima). $A$ and $B$ will perform equally well on the training data (or even validation data). We might then conclude that both generalise in the same way. But this might be a wrong conclusion because both the training and the validation data (as I mentioned above) might not be good samples of the population. So, we can't really say which NN (or "minimum"), $A$ or $B$, generalises better. They might generalise more or less in the same way (according to their performance e.g. on the [validation dataset](https://en.wikipedia.org/wiki/Training,_validation,_and_test_sets)) or not (because the validation dataset is not a good sample of the population). To conclude, the concept of generalisation is a little bit more complex than function minimisation, because it is also related to data. In machine learning, we often minimise a **functional** (a function whose arguments are functions). If your functional attains the same minimum at two different points (actually, functions), during training, one might be faster to reach than the other. However, if you have two NNs that represent these two minima, we can't say much about their generalisation ability. Essentially, we can just hope that the validation dataset is a good sample of the population. 
Upvotes: 2 <issue_comment>username_2: In general I agree with @username_1's answer; nevertheless, sticking strictly to this specific question, I'd like to share some speculations: * what the author of the question provides us with is the Loss Function Shape, so I'll try to use the full information here to compare the 2 minima * looking at the LF steepness we observe the Left LM is in a steeper region than the Right LM: how can this be interpreted? * I'd interpret the LF Steepness as a Measure of Parametrization Stability: in fact, perturbing the parametrization slightly has a bigger effect on the Left LM's observed performance than on the Right LM's * when the NN is running "in production" typically the parametrization is fixed (in most typical applications, weights are not changed outside the training phase) so you should not be concerned about parametrization stability; however, I'm one of those believing the [Flat Minima - Schmidhuber 1997](http://www.bioinf.jku.at/publications/older/3304.pdf) idea that Local Minima Flatness is connected to the generalization property However it is important to observe this is still a very open and interesting question, as in [Sharp Minima Can Generalize For Deep Nets](https://arxiv.org/abs/1703.04933) Dinh et al. 2017 demonstrated it is not just about flatness: reparametrization of the NN model, though preserving minima locations, changes their shape, so sharp minima might be transformed into flat ones without affecting network performance Upvotes: 3 [selected_answer]
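The perturbation-stability point above can be illustrated with a toy sketch. The two loss wells, their curvatures, and the perturbation scale are all invented for illustration: a sharp minimum suffers a much larger average loss increase under the same random weight perturbation than a flat one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy loss wells with equal minimum value but different curvature.
sharp = lambda w: 10.0 * (w - 1.0) ** 2   # sharp minimum at w = 1
flat  = lambda w: 0.1 * (w + 1.0) ** 2    # flat minimum at w = -1

def sensitivity(loss, w_min, scale=0.1, n=10_000):
    """Average loss increase under Gaussian weight perturbations."""
    eps = rng.normal(0.0, scale, size=n)
    return float(np.mean(loss(w_min + eps) - loss(w_min)))

s_sharp = sensitivity(sharp, 1.0)
s_flat = sensitivity(flat, -1.0)
print(s_sharp, s_flat)  # the sharp well degrades roughly 100x more here
```

This only quantifies stability to parameter noise, not generalisation itself; as the Dinh et al. reference notes, reparametrization can change this measure without changing the network's behaviour.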
2019/03/02
792
3,059
<issue_start>username_0: I just read about the concept of a [parse tree](https://en.wikipedia.org/wiki/Parse_tree). In my understanding, a valid parse tree of a sentence needs to be validated by a linguistic expert. So, I concluded, a sentence only has one parse tree. But, is that correct? Is it possible a sentence has more than one valid parse tree (e.g. constituency-based)?<issue_comment>username_1: Grammars in NLP basically correspond to [Context-free Grammars (CFG)](https://en.wikipedia.org/wiki/Context-free_grammar) in formal language theory. And, in case the CFG corresponding to the NLP task is [ambiguous](https://en.wikipedia.org/wiki/Ambiguous_grammar), then a single sentence can have multiple parse trees (each corresponding to a different *derivation*). Hence, it depends on the grammar whether there can be more than one valid parse tree. Upvotes: 1 <issue_comment>username_2: > > But, is that correct? Is it possible a sentence has more than one valid parse tree (e.g. constituency-based)? > > > The fact that a single sequence of words can be parsed in different ways depending on context (or "grounding") is a common basis of miscommunication, misunderstanding, innuendo and jokes. One classic NLP-related "joke" (around longer than modern AI and NLP) is: > > Time flies like an arrow. > > > Fruit flies like a banana. > > > There are actually [several valid parse trees](https://en.wikipedia.org/wiki/Time_flies_like_an_arrow;_fruit_flies_like_a_banana) for even these simple sentences. Which ones come "naturally" will depend on context - anecdotally I only half got the joke when I was younger, because I did not know there were such things as *[fruit flies](https://entomology.ca.uky.edu/ef621)*, so I was partly confused by the literal (but still validly parsed, and *somewhat* funny) meaning that all fruit can fly about as well as a banana does. 
Analysing these kinds of ambiguous sentences leads to the [grounding problem](http://www.scholarpedia.org/article/Symbol_grounding_problem) - the fact that without some referent for symbols, a grammar is devoid of meaning, even if you know the rules and can construct valid sequences. For instance, the above joke works partly because the nature of time, when referred to in a particular way (singular noun, not as a possession or property of another object), leads to a well-known metaphorical reading of the first sentence. A statistical ML parser could get both sentences correct through training on many relevant examples (or trivially by including the examples themselves with correct parse trees). This has not solved the grounding problem, but may be of practical use for any machine required to handle natural language input and map it to some task. I did check a while ago though, and most part-of-speech taggers in Python's NLTK get both sentences wrong - I suspect because resolving sentences like those above and AI "getting language jokes" is not a high priority compared to more practical uses for chatbots/summarisers, etc. Upvotes: 3 [selected_answer]
2019/03/03
702
2,870
<issue_start>username_0: So I was wondering why I have only encountered the square loss function, also known as MSE. The only nice property of MSE I am so far aware of is its convexity. But then all functions of the form $x^{2n}$, where $n$ is an integer, belong to the same family. My question is: what makes MSE the most suitable candidate among this entire family of curves? Why are other curves in the family not used, even though their steeper slopes (for $x > 1$) might result in better optimisation? Here is a picture of what I mean, where red is $x^4$ and green is $x^2$: [![enter image description here](https://i.stack.imgur.com/muwB1.gif)](https://i.stack.imgur.com/muwB1.gif)<issue_comment>username_1: There is another variant of MSE: you can employ the absolute value of the difference between your hypothesis and the expected output. `MSE` and the absolute-difference version each have an interpretation. The interpretation of `MSE` is that you have the areas of squares which at first are large, but after training the predicted outputs and real outputs become similar to each other, which means the areas of the squares have been diminished. For the absolute value, the interpretation is that the difference you are going to reduce is a length, and after training you diminish the length of the errors. I don't say what you've said is not possible. It is possible and it does have an interpretation, maybe to diminish the volume of hypercubes, but the point is that the differentiation of such cost functions is a bit more complex and it may lead to update rules which take longer to calculate. Upvotes: -1 <issue_comment>username_2: I can comment on several properties of MSE and related losses. As you mentioned, MSE (aka $l\_2$-loss) is convex, which is a great property in optimization, in which one can find a single global optimum. MSE is used in linear and non-linear least squares problems, which form the basis of many widely used statistical methods. 
I would imagine the math and implementation would be more difficult if one used a higher-order loss (e.g. $x^4$), and that would also prove to be futile, because MSE already possesses great statistical and optimization properties on its own. Another important aspect: one wouldn't use higher-order loss functions in regression because they would be extremely sensitive to outliers. MSE on its own already weighs outliers much more than the $l\_1$-loss would! And in real-world data there is always noise and outliers present. In comparison, the $l\_1$ loss is more difficult to optimize, one reason being that it's not differentiable at zero. Other interesting losses you might want to read about are the [$l\_0$ and $l\_\infty$ losses](https://medium.com/@montjoile/l0-norm-l1-norm-l2-norm-l-infinity-norm-7a7d18a4f40c), all of which have their own trade-offs in an optimization sense. Upvotes: 2 [selected_answer]
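The outlier-sensitivity argument above can be illustrated numerically. The residuals below are invented: one outlier among small errors, and we measure what share of the total loss the outlier contributes under $l\_1$, $l\_2$, and a quartic loss.

```python
import numpy as np

# Residuals from a hypothetical fit; the last point is an outlier.
residuals = np.array([0.1, -0.2, 0.05, -0.1, 5.0])

for name, loss in [("MAE (l1)", np.abs(residuals)),
                   ("MSE (l2)", residuals ** 2),
                   ("quartic (x^4)", residuals ** 4)]:
    contrib = loss[-1] / loss.sum()  # share of total loss due to the outlier
    print(f"{name}: outlier contributes {contrib:.1%} of the loss")
```

The outlier's share of the loss grows with the power: roughly 92% under $l\_1$, 99.8% under $l\_2$, and essentially 100% under $x^4$, so higher powers let a single bad point dominate the gradient entirely.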
2019/03/04
744
3,029
<issue_start>username_0: The (discrete and continuous) Fourier transform (FT) is used in signal processing in order to convert a signal (or function) in a certain domain (e.g. the time domain) to another domain (e.g., frequency domain). There are several resources on the web that attempt to explain the FT at different levels of complexity. See e.g. [this answer](https://math.stackexchange.com/a/11114/168764) or [this](https://www.youtube.com/watch?v=mkGsMWi_j4Q) and [this](https://www.youtube.com/watch?v=spUNpyF58BY) Youtube videos. What are examples of (real-world) applications of the Fourier transform to AI? I am looking for answers that explain the reason behind the use of the FT in the given application. I suppose that there are several applications of the FT to e.g. ML (data analysis) and robotics. I am looking for specific examples.
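One concrete example of the FT in ML pipelines is feature extraction: turning a raw time-domain signal into frequency-domain features before feeding a classifier. This is a sketch; the signal and the choice of feature (dominant frequency) are purely illustrative.

```python
import numpy as np

fs = 100.0                            # sampling rate in Hz
t = np.arange(0, 1, 1 / fs)           # one second of samples
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

spectrum = np.abs(np.fft.rfft(signal))         # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), 1 / fs)   # frequency of each bin

dominant = freqs[np.argmax(spectrum)]
print(dominant)   # 5.0 -- the strongest component, usable as a feature
```

The rationale is that periodic structure (vibration, speech, EEG, seasonality) is often nearly invisible to a model in the time domain but becomes a few well-separated peaks in the frequency domain, which makes downstream classification or regression far easier.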
2019/03/05
491
1,920
<issue_start>username_0: It is possible that the view of what is impressive enough in computer behavior to be called intelligence changes with each decade as we adjust to what capabilities are made available in products and services.<issue_comment>username_1: My sense is that they would, based on a high-level take of [Babbage](https://en.wikipedia.org/wiki/Charles_Babbage) and [Lovelace](https://en.wikipedia.org/wiki/Ada_Lovelace)'s view of the potential capability of the "[analytic engine](https://en.wikipedia.org/wiki/Analytical_Engine)". If Babbage's Tic-Tac-Toe machine had been built, I am sure that would have been regarded as machine intelligence. [Nimatron](https://history-computer.com/ModernComputer/Relays/Condon.html) ([Edward Condon](https://en.wikipedia.org/wiki/Edward_Condon)) may have been the first game AI, and the capability seems similar to what Babbage was envisioning. Certainly the bogus "[Turk](https://en.wikipedia.org/wiki/The_Turk)" chess-playing hoax was considered a machine intelligence. Conventional software could connote a form of automation, and I think any form of automation, particularly where the operations are "under the covers", would have been considered a form of intelligence. --- *I think the current idea of only regarding "strong statistical AI" (Machine Learning) as AI is inherently flawed because of the concept of utility. Intelligence is a spectrum, being a relative measure of problem solving strength, and artificial merely connotes a thing intentionally or skillfully constructed. The Russell & Norvig definition seems to hew to this viewpoint.* Upvotes: 2 <issue_comment>username_2: They would probably have followed the same sequence we do: * be amazed at the capabilities, * ask how it is done, * wonder whether this is really intelligence and (or) point out our narrow the performance was, * require more next time to be impressed again. Upvotes: 3
2019/03/05
1,769
6,223
<issue_start>username_0: I wanted to clarify the term 'acting greedily'. What does it mean? Does it correspond to the immediate reward, future reward or both combined? I want to know the actions that will be taken in 2 cases: * $v\_\pi(s)$ is known and $R\_s$ is also known (only). * $q\_{\pi}(s, a)$ is known and $R\_s^a$ is also known (only).<issue_comment>username_1: In general, a greedy "action" is an action that would lead to an immediate "benefit". For example, [Dijkstra's algorithm](https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm) can be considered a greedy algorithm because at every step it selects the node with the smallest "estimate" to the initial (or starting) node. In reinforcement learning, a greedy action often refers to an action that would lead to the highest immediate **reward** (*disregarding possible future rewards*). However, a greedy action can also mean the action that would lead to the highest possible **return** (that is, the greedy action can also be considered an action that takes into account not just immediate rewards but also future ones). In your case, I think that the "greedy action" can mean different things, depending on whether you use the reward function or the value functions, that is, you can act greedily with respect to the reward function or the value functions. I would like to note that you are using a different notation for the reward function for each of the two value functions, but this does not need to be the case. So, your reward function might be expressed as $R\_s^a$ even if you use $v\_\pi(s)$. I will use the notation $R\_s^a$ for simplicity of the explanations. So, if you have access to the reward function for a given state and action, $R^a\_s = r(s, a)$, then the greedy action (with respect to the reward function $r$) would just be the action from state $s$ with the highest **reward**. 
So, formally, we can define it as $a\_\text{greedy} = \arg \max\_a r(s, a)$ (both in the case of the state or state-action value functions: it does not matter if you have one or the other value function). In other words, if you have access to the reward function (in that form), you can act greedily from any state without needing to access the value functions: you have a "model" of the rewards that you will obtain. If you have $q\_\pi(s, a)$ (that is, the state-action value function for a *fixed* policy $\pi$), then, at time step $t$, the greedy action (with respect to $q\_\pi(s, a)$) from state $s$ is $a\_\text{greedy} = \arg \max\_{a}q\_\pi(s, a)$. If you then take action $a\_\text{greedy}$ in the environment, you would obtain the highest discounted future reward (that is, the **return**), according to $q\_\pi(s, a)$, which might actually not be the highest *possible* return from $s$, because $q\_\pi(s, a)$ might not be the optimal state-action value function. If $q\_\pi(s, a) = q\_{\pi^\*}(s, a)$ (that is, if you have the optimal state-action value function), then, if you execute $a\_\text{greedy}$ in the environment, you will theoretically obtain the highest possible return from $s$. If you had the optimal value function (the value function associated with the optimal policy to act in your environment), then the following equation holds: $v\_\*(s) = \max\_{a} q\_{\pi^\*}(s, a)$. So, in that case, $a\_\text{greedy} = \arg \max\_{a}q\_{\pi^\*}(s, a)$ would also be the greedy action if you had $v\_\*(s)$. If you only have $v\_\pi(s)$ (without e.g. the Q function), I don't think you can act greedily (that is, there is no way of knowing which action is the greedy action from $s$ by just having the value of state $s$: this is actually why we often estimate the Q functions for "control", i.e. acting in the environment). Upvotes: 2 <issue_comment>username_2: In RL, the phrase "acting greedily" is usually short for "acting greedily with respect to the value function". 
Greedy local optimisation turns up in other contexts, and it is common to specify what metric is being maximised or minimised. The value function is most often the discounted sum of expected future reward, and also the metric used when defining a policy as "acting greedily". It is possible to define a policy as acting greedily with respect to immediate expected reward, but this is not the norm. As a special case, when the discount factor $\gamma = 0$, this is the same as acting greedily with respect to the value function. When rewards are sparse (a common situation in RL), acting greedily with respect to expected immediate reward is not very useful. There is not enough information to make a decision. > > I want to know the actions that will be taken in 2 cases: > > > * $v\_\pi(s)$ is known and $R\_s$ is also known (only). > > > To act greedily in RL, you would use the value function $v\_\pi(s')$ - the value function of the next states. To do so, you need to know the environment dynamics - to go with the notation $R\_s$ you should also know the transition matrix $P\_{ss'}^a$ - the probability of transitioning from $s$ to $s'$ whilst taking action $a$: $$\pi'(s) = \text{argmax}\_a \sum\_{s'} P\_{ss'}^a(R\_{s} + \gamma v\_{\pi}(s'))$$ Notes: * This assumes $R\_{s}$ is your immediate reward for *leaving* state $s$. Substitute $R\_{s'}$ if the reward matrix is for entering state $s'$ instead. * The policy gained from acting greedily with respect to $v\_\pi(s)$ is *not* $\pi$, it is (a usually improved) policy $\pi'$ > > * $q\_{\pi}(s, a)$ is known and $R\_s^a$ is also known (only). 
> > > To act greedily in RL, you would use the value function $q\_{\pi}(s, a)$, and this is much simpler: $$\pi'(s) = \text{argmax}\_a q\_{\pi}(s, a)$$ Again, the policy gained from acting greedily with respect to $q\_\pi(s,a)$ is *not* $\pi$, it is (a usually improved) policy $\pi'$ Upvotes: 3 [selected_answer]<issue_comment>username_3: Acting greedily means that the search is not forward thinking and limits its decisions solely on immediate return. It is not quite the same as what is meant in human social contexts in that greed in that context can involve forward thinking strategies that sacrifice short term losses for long term gain. In the typical machine search lingo, greed is myopic (short-sighted). Upvotes: 1
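To make the two cases above concrete, here is a small NumPy sketch (toy states, actions and values, purely illustrative) of acting greedily with respect to $q_\pi(s, a)$ directly, and with respect to $v_\pi(s)$ via a one-step lookahead that needs the model:

```python
import numpy as np

# Hypothetical toy Q-table: 3 states, 2 actions (values are illustrative only)
q = np.array([[1.0, 2.0],
              [0.5, 0.1],
              [3.0, 2.0]])

def greedy_action_q(q, s):
    """Greedy action w.r.t. the state-action value function: argmax_a q(s, a)."""
    return int(np.argmax(q[s]))

def greedy_action_v(v, P, r_s, gamma):
    """One-step-lookahead greedy action w.r.t. a state value function v.

    P: array of shape (n_actions, n_states) with transition probabilities
    P[a, s'] from the current state; r_s: immediate reward for leaving the
    current state; gamma: discount factor.
    Implements argmax_a sum_s' P[a, s'] * (r_s + gamma * v[s'])."""
    return int(np.argmax(r_s + gamma * (P @ v)))
```

Note that the $v$-based version needs the environment dynamics ($P$ and the reward), while the $q$-based version does not, which is exactly the point made in the answers above.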
2019/03/05
267
1,025
<issue_start>username_0: I'm looking to build from scratch an implementation of the [wake-sleep algorithm](https://www.cs.toronto.edu/~hinton/absps/ws.pdf), for unsupervised learning with neural networks. I plan on doing this in Python in order to better understand how it works. In order to facilitate my task, I was wondering if anyone could point me to an existing (open-source) implementation of this concept. I'm also looking for articles or, in general, resources that could facilitate this task.<issue_comment>username_1: I don't know if you are looking for something in a library, but I've found [this](https://github.com/jackklys/wake-sleep-modifications/blob/master/wakesleep/ws.py) in a public Github (I've not checked deeply if it fits for you). I hope that's what you're looking for. Upvotes: 1 <issue_comment>username_2: I found the following detailed and well documented [Python notebook](https://github.com/vadim0x60/wakesleep/blob/master/WakeSleep.ipynb), which uses only NumPy. Upvotes: 3 [selected_answer]
2019/03/07
652
2,794
<issue_start>username_0: I'm studying how SPP (Spatial Pyramid Pooling) works. SPP was invented to tackle the fixed input image size constraint in CNNs. According to the original paper <https://arxiv.org/pdf/1406.4729.pdf>, the authors say: > > convolutional layers do not require a fixed image size and can > generate feature maps of any sizes. On the other hand, the > fully-connected layers need to have fixed size/length input by their > definition. Hence, the fixed size constraint comes only from the > fully-connected layers, which exist at a deeper stage of the network. > > > Why does a fully connected layer only accept a fixed input size (but convolutional layers don't)? What's the real reason behind this definition?<issue_comment>username_1: A convolutional layer is a layer where you slide a kernel or filter (which you can think of as a small square matrix of weights, which need to be learned during the learning phase) over the input. In practice, when you need to slide this kernel, you will often need to specify the "padding" (around the input) and "stride" (with which you convolve the kernel on the input), in order to obtain the desired output (size). So, even if you receive inputs of different sizes, you can change these values, like the padding or the stride, in order to produce a valid output (size). In this sense, I think, we can say that convolutional layers accept inputs of (almost) any size. The number of feature maps does not depend on the kernel or input (sizes). The number of feature maps is determined by the number of different kernels that you will use to slide over the input. If you have $K$ different kernels, then you will have $K$ different feature maps. The number of kernels is often a hyper-parameter, so you can change it (as you please). A fully connected (FC) layer requires a fixed input size by design. The programmer decides the number of input units (or neurons) that the FC layer will have. 
This hyper-parameter often does not change during the learning phase. So, yes, FC layers often accept inputs of fixed size (also because they do not adopt techniques like "padding"). Upvotes: 2 [selected_answer]<issue_comment>username_2: It doesn't have to be so. A fully connected layer can be considered a convolutional layer with an input image of 1 pixel and a spatial kernel size of 1 pixel. So a 1-pixel-kernel convolutional layer is effectively the same as a fully connected layer attached to each pixel. That is the idea behind "Fully Convolutional Networks". If you want a "true" 1-pixel fully connected layer after a convolutional layer (with variable input size), all you have to do is put an average pooling layer (or another type of pooling layer) before the fully connected layer. That way the fully connected layer can accept a variable input size. Upvotes: 0
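To illustrate why the constraint is one-sided, here is a small sketch (toy sizes, illustrative only): the standard convolution output-size formula is defined for any input size, while a fully connected layer is a fixed matrix multiply whose weight matrix pins the input length:

```python
import numpy as np

def conv_output_size(in_size, kernel, stride=1, padding=0):
    # Standard convolution output-size formula: floor((W - K + 2P) / S) + 1
    return (in_size - kernel + 2 * padding) // stride + 1

# A 3x3 kernel with stride 1 and padding 1 preserves any spatial size:
sizes = [conv_output_size(w, kernel=3, stride=1, padding=1) for w in (28, 32, 100)]

# A fully connected layer is a fixed matrix multiply: W has shape
# (n_out, n_in), so it only accepts input vectors of length n_in.
W = np.random.randn(10, 784)
y = W @ np.random.randn(784)   # works: input length matches n_in = 784
# W @ np.random.randn(1024)    # would raise a shape-mismatch error
```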
2019/03/07
577
2,533
<issue_start>username_0: I have series of sensors (around 4k) and each sensor will measure the amplitudes at each point. Suppose I train the neural network with a sufficient set of 4k values (N \* 4k shape). The machine will find a pattern in the series of values. If the values stray away from the pattern (that is anomaly) it can detect the point and will be able to say that anomaly is in the 'X'th sensor. Is this possible? If so what kind of neural network should I use?
2019/03/07
595
1,998
<issue_start>username_0: > > Exercise 3.5 The equations in Section 3.1 are for the continuing > case and need to be modified (very slightly) to apply to episodic > tasks. Show that you know the modifications needed by giving the > modified version of (3.3). > > > $\displaystyle\sum\_{s^{\prime} \in S} \displaystyle\sum\_{r \in R} p(s^{\prime}, r | s,a) = 1$ , for all $s\in S, a \in A(s)$ (3.3) Is it just about final states? So for $s \in S$ when $S$ is not final?<issue_comment>username_1: > > Is it just about final states? So for $s \in S$ when $S$ is not final? > > > You are thinking the right way, but to represent what you mean you don't need to write out "when $s$ is not final" - although that would be fine (and is used in some places), there is a more concise way of saying that given to you by the book. As this is a formal exercise from the book, I don't want to write out an answer that could be cut&paste for all students. Instead I suggest you take a look at the notations section at the beginning of the book, and find how Sutton & Barto use different set labels for all states including terminal states, and all states excluding terminal states. Also, check carefully which of those sets needs to be summed over. Upvotes: 3 [selected_answer]<issue_comment>username_2: I found myself going in circles for a while, so to clarify username_1's answer: at the beginning of the book, $S$ means "set of non-terminal states" and $S^+$ means "set of all states, including the terminal ones". $$\sum\_{s^{\prime} \in S} \sum\_{r \in R} p(s^{\prime}, r | s,a) = 1, \forall s \in S, a \in A(s) \tag{3.3}$$ That said, in eq. 3.3, when we require that $\forall s \in S$, we say that once in a terminal state, the formula does not apply (which is obvious because no action is ever available in a terminal state by definition). It does not, however, constrain the probability of how to "get" into a terminal state, and that is the key to answering the question. Upvotes: 0
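As a quick numerical illustration of property (3.3), here is a hypothetical tabular dynamics function $p(s', r \mid s, a)$ (toy values only) together with the check that it sums to $1$ over all next states and rewards, for every non-terminal $s$ and $a \in A(s)$:

```python
# Hypothetical MDP dynamics: p[(s, a)] maps (s', r) -> probability.
# State 2 is terminal, so it has no entries (no actions available there).
p = {
    (0, 'left'):  {(0, 0.0): 0.1, (1, 0.0): 0.9},
    (0, 'right'): {(1, 0.0): 0.5, (2, 1.0): 0.5},
    (1, 'left'):  {(0, 0.0): 1.0},
    (1, 'right'): {(2, 1.0): 1.0},
}

# Equation (3.3): for every non-terminal s and every a in A(s),
# the sum over s' and r of p(s', r | s, a) must equal 1.
checks = {sa: abs(sum(dist.values()) - 1.0) < 1e-12 for sa, dist in p.items()}
```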
2019/03/08
329
1,408
<issue_start>username_0: I am working on MLP neural networks, using supervised learning (2-class and multi-class classification problems). For the hidden layers, I am using $\tanh$ (which produces an output in the range $[-1, 1]$) and for the output layer a softmax (which gives the probability distribution between $0$ and $1$). As I am working with supervised learning, should my target outputs be between $0$ and $1$, or between $-1$ and $1$ (because of the $\tanh$ function), or does it not matter? The loss function is quadratic (MSE).<issue_comment>username_1: For this particular classification problem, I would recommend using a softmax function, whose output range is [0,1]. The sum of all outputs should be 1, so an advantage of using a softmax function is that you get a percentage of how confident the network is in this classification. Side note: As DuttaA has commented, cross entropy loss is a better loss function than the quadratic mean squared error. Upvotes: 1 <issue_comment>username_2: Your targets should be in the same range as your output function, otherwise your loss function won't be accurate. With supervised learning you're trying to reduce the loss of your output against your targets, so in this case your targets should be the true/optimal probability distribution for that set of input data. I'm from the midwest so obligatory "can't compare apples to oranges" here ;) Upvotes: 0
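A small NumPy sketch of the point made in the answers: with a softmax output layer, one-hot targets in $[0, 1]$ live on the same scale as the network's output (the logits here are toy values, purely illustrative):

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)        # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])   # hypothetical network outputs
probs = softmax(logits)              # all in [0, 1], summing to 1

# One-hot target: same [0, 1] range as the softmax output
target = np.array([1.0, 0.0, 0.0])

mse = np.mean((probs - target) ** 2)             # quadratic (MSE) loss
cross_entropy = -np.sum(target * np.log(probs))  # usually preferred here
```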
2019/03/08
239
1,043
<issue_start>username_0: Is there a way to run C64, Nintendo Game Boy, or other platforms in an OpenAI Gym-like environment? It seems that we can run only Atari games.
2019/03/12
3,005
11,906
<issue_start>username_0: What is geometric deep learning (GDL)? Here are a few sub-questions * How is it different from deep learning? * Why do we need GDL? * What are some applications of GDL?<issue_comment>username_1: The article [Geometric deep learning: going beyond Euclidean data](https://arxiv.org/abs/1611.08097) (by <NAME>, <NAME>, <NAME>, <NAME>, <NAME>) provides an overview of this relatively new sub-field of deep learning. It answers all the questions asked above (and more). If you are familiar with deep learning, graphs, linear algebra and calculus, you should be able to follow this article. *What is geometric deep learning (GDL)*? This article describes GDL as follows > > Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. > > > So, the inputs to these GDL models are graphs (or representations of graphs), or, in general, any [non-Euclidean data](https://ai.stackexchange.com/q/11226/2444). To be more concrete, the input to these models (e.g. [graph neural networks](https://ai.stackexchange.com/q/11169/2444)) are e.g. feature vectors associated with the nodes of the graphs and matrices which describe the graph structure (e.g. the [adjacency matrix](https://en.wikipedia.org/wiki/Adjacency_matrix) of the graphs). *Why are e.g. graphs non-Euclidean data?* A graph is a non-Euclidean structure because e.g. distances between nodes are not well defined. Yes, you can have graphs with weights associated with the edges, but not all graphs have this property. *What classes of problems does GDL address?* In GDL, there are two classes of problems that are often tackled: 1. characterise the structure of the data (e.g. of a graph) 2. analyse functions defined on a given non-Euclidean domain These classes of problems are related, given that the structure of the graph imposes certain properties on the functions that can be defined on it. 
Furthermore, these properties of these functions can also convey information about the structure of the graph. *What are applications of GDL?* An example of an application where this type of data (graphs) arises is in the context of social networks, where each user can be associated with a vertex of the social graph and the characteristics (or features) of each user (e.g. number of friends) can be represented as a feature vector (which can then be associated with the corresponding vertex of a graph). In this context, the goal might e.g. be to determine different groups of users in the social network (i.e. clustering). *Why can't we simply use deep learning methods (like CNNs) when the data is non-Euclidean?* There are several problems that arise when dealing with non-Euclidean data. For example, operations like convolution are not (usually) defined on non-Euclidean data. More concretely, the relative position of nodes is not defined on graphs (but this would be required to perform the usual convolution operation): in other words, it is meaningless to talk about a vertex that is e.g. on the left of another vertex. In practice, it means that we can't simply use the usual CNN when we are given non-Euclidean data. There have been attempts to generalise the convolution operation to graphs (or to approximate it). The field is still quite new, so there will certainly be new developments and breakthroughs. Upvotes: 4 [selected_answer]<issue_comment>username_2: To complete the [first answer](https://ai.stackexchange.com/a/11201/22176) that is rather graph oriented, I will write a little about deep learning on manifolds, which is quite general in terms of GDL thanks to the nature of manifolds. 
Note that the description of GDL through the explanation of what are DL on graphs and manifolds, in opposition to DL on euclidean domains, comes from the 2017 paper [Geometric deep learning: going beyond Euclidean data](https://arxiv.org/abs/1611.08097) (this paper is excellent at clarifying both the intuition and the mathematics of what I'm writing). 1. In case you don't know what a manifold is -------------------------------------------- As the previously cited paper puts it: > > Roughly, a manifold is a space that is locally Euclidean. One of the > simplest examples is a spherical surface modeling our planet: around a > point, it seems to be planar, which has led generations of people to > believe in the flatness of the Earth. Formally speaking, a > (differentiable) d-dimensional manifold X is a topological space where > each point x has a neighborhood that is topologically equivalent > (homeomorphic) to a d-dimensional Euclidean space, called the tangent > space. > > > Good other not-so-technical [explanation](https://stats.stackexchange.com/a/289488/241224) on stats.stackexchange Other [Wikipedia examples](https://en.wikipedia.org/wiki/Manifold#Motivating_examples) to develop not too abstract understanding Very shortly put, it's an interesting mathematical set on which to work (different kinds exist, see papers at the end of this answer for DL related manifolds uses). By work, you can typically understand that you *constrain the neural net parameters to the manifold* you chose (e.g. training with parameters constrained on a hypersphere, among the [geomstats paper](https://arxiv.org/abs/1805.08308) examples). Your data can also be represented thanks to a practical manifold. For example, you can choose to work on images and videos by representing the samples using Symmetric Positive Definite (SPD) matrices (see this [paper](https://arxiv.org/abs/1605.06182)), the space of SPD matrices being a manifold itself. 2. Why bother learning on manifolds ? 
------------------------------------- Defining a clearer/better adapted set (understand that it's a sort of constraint!) on which to learn parameters and features can make it simpler to formally understand what your model is doing, and can lead to better results. I see it as a part of the effort of deep learning formalization. One could say **you're looking for the best information geometry for your task**, the one that best captures the desirable data distribution properties. To develop this intuition, consider the [solar system analogy for manifold learning of this Kaggle kernel](https://www.kaggle.com/apapiu/manifold-learning-and-autoencoders): > > Perhaps a good analogy here is that of a solar system: the surface of our planets are the manifolds we're interested in, one for each digit. Now say you're on the surface of the earth which is a 2-manifold and you start moving in a random direction (let's assume gravity doesn't exist and you can go through solid objects). If you don't understand the structure of earth you'll quickly find yourself in space or inside the earth. But if you instead move within the local earth (say spherical) coordinates you will stay on the surface and get to see all the cool stuff. > > > This analogy reminds us of the spherical surface planet model from [Bronstein's paper](https://arxiv.org/abs/1611.08097) already quoted above. This paper also describes a typical case for which manifolds are interesting: where graphs (the other example of GDL/DL on non-Euclidean data) are better at handling data from social or sensor networks, manifolds are good at modeling 3D objects endowed with properties like color texture in computer vision. 3. 
Regarding deep neural networks on manifolds ---------------------------------------------- I would advise reading the [geomstats](https://github.com/geomstats/geomstats) [associated paper](https://arxiv.org/abs/1805.08308), which does a great job at showing what it is and how it can be used, along with example codes (e.g. MNIST on hyperspheres manifold example code [here](https://github.com/geomstats/applications/blob/master/deep_learning/mnist_hypersphere.py)). This library implements manifolds and associated metrics on Keras. The choice of metrics is essential to understand the point of working on manifolds: it's because you need to work on an adapted mathematical set (*ie* with the right properties) with an adapted distance definition (*so that the measure actually means something when considering the problem you're trying to solve*) that you switch to working on manifolds. If you want to dive in the details and examples of deep learning on manifolds here are some papers: * [A Riemannian Network for SPD Matrix Learning](https://arxiv.org/abs/1608.04233): new backpropagation to learn SPD matrices on Riemannian manifolds * [Learning a Robust Representation via a Deep Network on Symmetric Positive Definite Manifolds](https://arxiv.org/abs/1711.06540): using SPD matrices to aggregate convolutional features 4. Why *Riemannian* manifolds ? ------------------------------- *TL;DR: you need a metric to do machine learning* (otherwise, how could you evaluate how much you actually learned !) Still based on [Bronstein's paper](https://arxiv.org/abs/1611.08097): > > On each tangent space, we define an inner product [...]. This inner > product is called a Riemannian metric in differential geometry and > allows performing local measurements of angles, distances, and > volumes. A manifold equipped with a metric is called a Riemannian > manifold. > > > 5. What's the relation between a Riemannian manifold and a Euclidean space ? 
---------------------------------------------------------------------------- Still based on [Bronstein's paper](https://arxiv.org/abs/1611.08097): > > a Riemannian manifold can be realized as a subset of a Euclidean space > (in which case it is said to be *embedded* in that space) by using the > structure of the Euclidean space to induce a Riemannian metric. > > > I leave the details to the paper, otherwise this answer will never end. 6. Answers to questions in comments ----------------------------------- Will only answer once I think I've found a relatively well-argued answer, so won't answer everything at once. * Isn't manifold learning just a way of dimensionality reduction? I don't think so, it isn't *just that*. I haven't seen any dimensional reduction constraint (yet ?) in the papers I've read (cf. geomstats again). In the [hypersphere/MNIST geomstats code example](https://github.com/geomstats/applications/blob/master/deep_learning/mnist_hypersphere.py), you can see the chosen manifold dimension `hypersphere_dimension = 17`. Since we're working with MNIST data I guess this would mean a dimension reduction in this particular case. I admit I would need to check exactly what that dimension implies on the neural net architecture, I haven't discussed my understanding of this yet. Disclaimer ---------- I'm still developing a more rigorous mathematical understanding of manifolds, and shall update this post to make additional necessary clarifications: exactly what can be considered as a manifold in a traditional deep learning context, why do we use the word manifold when speaking about the hidden state of auto-encoders (see the previously cited Kaggle kernel that quotes Goodfellow's book on this). All of this if the perfectly clear answer doesn't show up here before ! 
Upvotes: 4 <issue_comment>username_3: I didn't read the paper in depth, but one example of where assumptions of Euclidean space are made in the design of the networks is with ConvNets in image processing. Specifically, Euclidean spaces are translationally invariant, meaning that $d(a,b) = d(a+c,b+c)$. Each convolution layer iterates over the image with a certain amount of stride, which guarantees a certain amount of translational invariance in exchange for a smaller size of network parameters. ConvNets would not naively work for a dataset which comes from, say, a graph, because it's not clear how to stride over it, as a possible example. Upvotes: 0
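As a minimal, concrete taste of the graph side of GDL, here is a NumPy sketch of one round of mean neighbourhood aggregation on a toy graph — the basic operation that graph neural networks build on (the graph and its features are toy values, purely illustrative):

```python
import numpy as np

# Toy undirected graph: 3 nodes, edges 0-1 and 1-2 (adjacency matrix)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1.0], [2.0], [3.0]])   # one scalar feature per node

A_hat = A + np.eye(3)                 # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
H = D_inv @ A_hat @ X                 # each node averages its neighbourhood
```

Note there is no notion of "left" or "right" neighbour here — aggregation is over an unordered neighbourhood, which is exactly why the usual sliding-window convolution does not carry over directly.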
2019/03/13
3,043
12,138
<issue_start>username_0: I am making an AI model to predict monthly retail sales of a motorcycle spare parts shop. For that to be possible I have to first create a dataset. The problem I am facing is: what features should the dataset have? I already did some research on some other datasets, but I still want to know specifically what features it should have other than Date, Product Name, Quantity, Net amount, Gross amount.
Upvotes: 4 <issue_comment>username_3: I didn't read the paper in depth, but one example of where assumptions of Euclidean space are made in the design of the networks are with ConvNets in image processing. Specifically, Euclidean spaces are transformationally invariant, meaning that $d(a,b) = d(a+c,b+c)$. Each convolution layer iterates over the image with a certain amount of stride, which guarantees a certain amount of translational invariance in exchange for a smaller size of network parameters. ConvNets would not naively work for a dataset which comes from, say, a graph, because its not clear how to stride over it, as a possible example. Upvotes: 0
2019/03/14
3,019
12,012
<issue_start>username_0: I need to create a model which will find suspicious entries or anomalies in a network, whose characteristics or features are the `asset_id`, `user_id`, the IP accessed from, and `time_stamp`. Which unsupervised anomaly detection algorithms or models should I use to solve this task?<issue_comment>username_1: The article [Geometric deep learning: going beyond Euclidean data](https://arxiv.org/abs/1611.08097) (by <NAME>, <NAME>, <NAME>, <NAME>, <NAME>) provides an overview of this relatively new sub-field of deep learning. It answers all the questions asked above (and more). If you are familiar with deep learning, graphs, linear algebra and calculus, you should be able to follow this article. *What is geometric deep learning (GDL)*? This article describes GDL as follows: > > Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. > > > So, the inputs to these GDL models are graphs (or representations of graphs), or, in general, any [non-Euclidean data](https://ai.stackexchange.com/q/11226/2444). To be more concrete, the inputs to these models (e.g. [graph neural networks](https://ai.stackexchange.com/q/11169/2444)) are e.g. feature vectors associated with the nodes of the graphs and matrices which describe the graph structure (e.g. the [adjacency matrix](https://en.wikipedia.org/wiki/Adjacency_matrix) of the graphs). *What classes of problems does GDL address?* In GDL, there are two classes of problems that are often tackled: 1. characterise the structure of the data (e.g. of a graph) 2.
analyse functions defined on a given non-Euclidean domain. These classes of problems are related, given that the structure of the graph imposes certain properties on the functions that can be defined on it. Furthermore, the properties of these functions can also convey information about the structure of the graph. *What are applications of GDL?* An example of an application where this type of data (graphs) arises is in the context of social networks, where each user can be associated with a vertex of the social graph and the characteristics (or features) of each user (e.g. number of friends) can be represented as a feature vector (which can then be associated with the corresponding vertex of a graph). In this context, the goal might e.g. be to determine different groups of users in the social network (i.e. clustering). *Why can't we simply use deep learning methods (like CNNs) when the data is non-Euclidean?* There are several problems that arise when dealing with non-Euclidean data. For example, operations like convolution are not (usually) defined on non-Euclidean data. More concretely, the relative position of nodes is not defined on graphs (but this would be required to perform the usual convolution operation): in other words, it is meaningless to talk about a vertex that is e.g. on the left of another vertex. In practice, it means that we can't simply use the usual CNN when we are given non-Euclidean data. There have been attempts to generalise the convolution operation to graphs (or to approximate it). The field is still quite new, so there will certainly be new developments and breakthroughs. Upvotes: 4 [selected_answer]<issue_comment>username_2: To complete the [first answer](https://ai.stackexchange.com/a/11201/22176), which is rather graph-oriented, I will write a little about deep learning on manifolds, which is quite general in terms of GDL thanks to the nature of manifolds.
Note that the description of GDL through the explanation of what DL on graphs and manifolds is, as opposed to DL on Euclidean domains, comes from the 2017 paper [Geometric deep learning: going beyond Euclidean data](https://arxiv.org/abs/1611.08097) (this paper is excellent at clarifying both the intuition and the mathematics of what I'm writing). 1. In case you don't know what a manifold is -------------------------------------------- As the previously cited paper puts it: > > Roughly, a manifold is a space that is locally Euclidean. One of the > simplest examples is a spherical surface modeling our planet: around a > point, it seems to be planar, which has led generations of people to > believe in the flatness of the Earth. Formally speaking, a > (differentiable) d-dimensional manifold X is a topological space where > each point x has a neighborhood that is topologically equivalent > (homeomorphic) to a d-dimensional Euclidean space, called the tangent > space. > > > Another good, not-so-technical [explanation](https://stats.stackexchange.com/a/289488/241224) can be found on stats.stackexchange, and the [Wikipedia examples](https://en.wikipedia.org/wiki/Manifold#Motivating_examples) help develop a not-too-abstract understanding. Very shortly put, a manifold is an interesting mathematical set to work on (different kinds exist; see the papers at the end of this answer for DL-related uses of manifolds). By "work on", you can typically understand that you *constrain the neural net parameters to the manifold* you chose (e.g. training with parameters constrained to a hypersphere, one of the [geomstats paper](https://arxiv.org/abs/1805.08308) examples). Your data can also be represented by means of a practical manifold. For example, you can choose to work on images and videos by representing the samples as Symmetric Positive Definite (SPD) matrices (see this [paper](https://arxiv.org/abs/1605.06182)), the space of SPD matrices being a manifold itself. 2. Why bother learning on manifolds?
------------------------------------- Defining a clearer/better-adapted set (understand that it's a sort of constraint!) on which to learn parameters and features can make it simpler to formally understand what your model is doing, and can lead to better results. I see it as part of the effort of deep learning formalization. One could say **you're looking for the best information geometry for your task**, the one that best captures the desirable data distribution properties. To develop this intuition, consider the [solar system analogy for manifold learning of this Kaggle kernel](https://www.kaggle.com/apapiu/manifold-learning-and-autoencoders): > > Perhaps a good analogy here is that of a solar system: the surface of our planets are the manifolds we're interested in, one for each digit. Now say you're on the surface of the earth which is a 2-manifold and you start moving in a random direction (let's assume gravity doesn't exist and you can go through solid objects). If you don't understand the structure of earth you'll quickly find yourself in space or inside the earth. But if you instead move within the local earth (say spherical) coordinates you will stay on the surface and get to see all the cool stuff. > > > This analogy reminds us of the spherical surface planet model from [Bronstein's paper](https://arxiv.org/abs/1611.08097) already quoted above. This paper also describes a typical case for which manifolds are interesting: while graphs (the other example of GDL/DL on non-Euclidean data) are better at handling data from social or sensor networks, manifolds are good at modeling 3D objects endowed with properties like color texture in computer vision. 3.
Regarding deep neural networks on manifolds ---------------------------------------------- I would advise reading the [geomstats](https://github.com/geomstats/geomstats) [associated paper](https://arxiv.org/abs/1805.08308), which does a great job at showing what it is and how it can be used, along with example code (e.g. the MNIST-on-a-hypersphere-manifold example code [here](https://github.com/geomstats/applications/blob/master/deep_learning/mnist_hypersphere.py)). This library implements manifolds and associated metrics on Keras. The choice of metric is essential to understand the point of working on manifolds: it's because you need to work on an adapted mathematical set (*i.e.* with the right properties) with an adapted distance definition (*so that the measure actually means something when considering the problem you're trying to solve*) that you switch to working on manifolds. If you want to dive into the details and examples of deep learning on manifolds, here are some papers: * [A Riemannian Network for SPD Matrix Learning](https://arxiv.org/abs/1608.04233): a new backpropagation to learn SPD matrices on Riemannian manifolds * [Learning a Robust Representation via a Deep Network on Symmetric Positive Definite Manifolds](https://arxiv.org/abs/1711.06540): using SPD matrices to aggregate convolutional features 4. Why *Riemannian* manifolds? ------------------------------- *TL;DR: you need a metric to do machine learning* (otherwise, how could you evaluate how much you actually learned!) Still based on [Bronstein's paper](https://arxiv.org/abs/1611.08097): > > On each tangent space, we define an inner product [...]. This inner > product is called a Riemannian metric in differential geometry and > allows performing local measurements of angles, distances, and > volumes. A manifold equipped with a metric is called a Riemannian > manifold. > > > 5. What's the relation between a Riemannian manifold and a Euclidean space?
---------------------------------------------------------------------------- Still based on [Bronstein's paper](https://arxiv.org/abs/1611.08097): > > a Riemannian manifold can be realized as a subset of a Euclidean space > (in which case it is said to be *embedded* in that space) by using the > structure of the Euclidean space to induce a Riemannian metric. > > > I leave the details to the paper, otherwise this answer will never end. 6. Answers to questions in comments ----------------------------------- I'll only answer once I think I've found a relatively well-argued answer, so I won't answer everything at once. * Isn't manifold learning just a way of dimensionality reduction? I don't think so, it isn't *just that*. I haven't seen any dimensionality-reduction constraint (yet?) in the papers I've read (cf. geomstats again). In the [hypersphere/MNIST geomstats code example](https://github.com/geomstats/applications/blob/master/deep_learning/mnist_hypersphere.py), you can see the chosen manifold dimension `hypersphere_dimension = 17`. Since we're working with MNIST data, I guess this would mean a dimensionality reduction in this particular case. I admit I would need to check exactly what that dimension implies for the neural net architecture; I haven't discussed my understanding of this yet. Disclaimer ---------- I'm still developing a more rigorous mathematical understanding of manifolds, and shall update this post to make additional necessary clarifications: exactly what can be considered a manifold in a traditional deep learning context, and why we use the word manifold when speaking about the hidden state of auto-encoders (see the previously cited Kaggle kernel that quotes Goodfellow's book on this). All of this if a perfectly clear answer doesn't show up here before!
Upvotes: 4 <issue_comment>username_3: I didn't read the paper in depth, but one example of where assumptions of Euclidean space are made in the design of networks is with ConvNets in image processing. Specifically, Euclidean metrics are translation invariant, meaning that $d(a,b) = d(a+c,b+c)$. Each convolution layer iterates over the image with a certain stride, which guarantees a certain amount of translational invariance in exchange for a smaller number of network parameters. As a possible converse example, ConvNets would not naively work for a dataset which comes from, say, a graph, because it's not clear how to stride over it. Upvotes: 0
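To make the translation-invariance point above concrete, here is a minimal NumPy sketch (an illustration, not code from the paper) showing that a 1-D convolution commutes with shifts: convolving a shifted signal gives the shifted feature map. Circular (wrap-around) boundary handling is assumed so that the identity holds exactly; the signal and kernel values are arbitrary.

```python
import numpy as np

def circular_conv(x, k):
    # circular (wrap-around) 1-D correlation: out[i] = sum_j x[(i + j) % n] * k[j]
    n = len(x)
    return np.array([sum(x[(i + j) % n] * k[j] for j in range(len(k)))
                     for i in range(n)])

x = np.array([0., 1., 3., 1., 0., 0., 0., 0.])  # a small "signal" (e.g. one image row)
k = np.array([1., -1.])                          # a simple edge-detecting kernel
shift = 3

# shifting the input, then convolving == convolving, then shifting the output
lhs = circular_conv(np.roll(x, shift), k)
rhs = np.roll(circular_conv(x, k), shift)
assert np.allclose(lhs, rhs)
```

On a graph there is no analogue of `np.roll` (no canonical notion of "shifting" the nodes), which is one way to see why this construction does not carry over naively.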
2019/03/14
441
1,597
<issue_start>username_0: I read top articles on Google Search about Deep Q-Learning: * <https://medium.freecodecamp.org/an-introduction-to-deep-q-learning-lets-play-doom-54d02d8017d8> * <https://skymind.ai/wiki/deep-reinforcement-learning> * <https://neuro.cs.ut.ee/demystifying-deep-reinforcement-learning/> * [Q-Learning page on wikipedia](https://en.wikipedia.org/wiki/Q-learning#Deep_Q-learning)\* and then I noticed that they all use a CNN as the approximator. If deep learning has a broader definition than just CNNs, can we still use the term "Deep Q-Learning" for our model if we don't use a CNN? Or is there a more appropriate term for that kind of Q-Learning model — for example, if my model only uses deep fully-connected layers? \*it doesn't explicitly say that Deep RL means CNNs on RL, but it uses DeepMind's model (which uses a CNN) as an example of Deep Q-Learning<issue_comment>username_1: No. DQN and other deep RL methods work well with fully connected layers. Here's an implementation of DQN which doesn't use CNNs: [github.com/keon/deep-q-learning/blob/master/dqn.py](http://github.com/keon/deep-q-learning/blob/master/dqn.py) DeepMind mostly uses CNNs because they use images as the input state, since they wanted to evaluate the performance of their methods against human performance. Human performance is easy to measure in games with images as the input state, and that's why CNN-based methods are so prominent in RL now. Upvotes: 4 [selected_answer]<issue_comment>username_2: The approximator can be any artificial neural network architecture, including deep fully-connected networks. Upvotes: -1
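To illustrate the accepted answer, here is a minimal NumPy sketch of a Q-network built only from fully-connected layers. All sizes, the random weights, and the toy transition are illustrative assumptions (not from the linked implementation), and the gradient/backprop step is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: 4-dimensional state, 2 discrete actions.
W1, b1 = rng.normal(scale=0.1, size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.1, size=(16, 2)), np.zeros(2)

def q_values(state):
    # A fully-connected network: state -> ReLU hidden layer -> one Q-value per action.
    h = np.maximum(0.0, state @ W1 + b1)
    return h @ W2 + b2

# One Q-learning target, exactly as in tabular Q-learning,
# with the network as the function approximator (no CNN anywhere).
state, next_state = rng.normal(size=4), rng.normal(size=4)
reward, gamma = 1.0, 0.99
target = reward + gamma * np.max(q_values(next_state))
```

Since nothing here depends on convolutions, "Deep Q-Learning" applies just as well to this fully-connected approximator.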
2019/03/14
191
785
<issue_start>username_0: I am not new to AI and did some work for a few months, but I am completely new to text-to-audio. Yes, I used text-to-audio tools a decade back... but I would like to know where exactly we stand in terms of text-to-audio today. I already did some research; it seems like the traditional way of doing text-to-audio is fading away and speech cloning seems to be emerging, but my impression of this might be completely wrong. What are the current open-source text-to-audio libraries?<issue_comment>username_1: There is an open-source one by Mozilla, called TTS (the name "Deep Voice" actually refers to Baidu's line of TTS models). Note that PocketSphinx, another Python library often mentioned alongside it, is a speech *recognition* engine rather than text-to-speech. Upvotes: 0 <issue_comment>username_2: [**voice-builder**](https://github.com/google/voice-builder) is an open-source text-to-speech (TTS) voice building tool from Google. Upvotes: 1
2019/03/14
2,960
11,530
<issue_start>username_0: What is non-Euclidean data? Here are some sub-questions: * Where does this type of data arise? I have come across this term in the context of geometric deep learning and graph neural networks. * Apparently, graphs and manifolds are non-Euclidean data. Why exactly is that the case? * What is the difference between non-Euclidean and Euclidean data? * What would a dataset of non-Euclidean data look like?<issue_comment>username_1: Non-Euclidean geometry can be generally boiled down to the phrase > > the shortest path between 2 points isn't necessarily a straight line. > > > Or, put in a way that lends itself very much to machine learning, > > things that are similar to each other are not necessarily close if one uses Euclidean distance as a metric (aka the [triangle inequality](https://en.wikipedia.org/wiki/Triangle_inequality) doesn't hold). > > > You mention graphs and manifolds as being non-Euclidean, but, really, the majority of problems being worked on don't have Euclidean data. Take the below images for example: Clearly, 2 of the images are more similar to each other than the third one is, but if we looked at the pixels alone, the Euclidean distance between the pixel values doesn't represent this similarity. [![2 good boys and a rad hampster](https://i.stack.imgur.com/P6mH2.png)](https://i.stack.imgur.com/P6mH2.png) If there were a function, $F(\text{image})$, that mapped images to a space of values where similar images produced values that were closer together, we could better understand the data, infer some statistics about the distributions, and make predictions on data we have yet to see. This is what classic techniques of image recognition have done, and it's also what modern machine learning is doing: taking data and mapping it to a space such that the [triangle inequality](https://en.wikipedia.org/wiki/Triangle_inequality) holds. Let's look at a more concrete example, some points I drew in MSPaint.
On the left is some space that we are interested in, where points have 2 classes (red or blue). Even though there are points that are close to each other, they may have different colors/classes. Ideally, we could have a function that converts these points to some space where we can draw a line to separate these 2 classes. In general, there would be many lines, or hyper-planes in dimensions > 3, but the goal is to transform the data so that it will be "linearly separable". [![Some points I drew in MSPaint.](https://i.stack.imgur.com/S81eT.png)](https://i.stack.imgur.com/S81eT.png) To conclude, non-Euclidean data is everywhere. Upvotes: 3 <issue_comment>username_2: I presume this question was prompted by the paper [Geometric deep learning: going beyond Euclidean data](https://arxiv.org/abs/1611.08097) (2017). If we look at its abstract: > > Many scientific fields study **data with an underlying structure that is a non-Euclidean space.** Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric > data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques. In particular, we would like to use **deep neural networks**, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural language processing, and audio analysis. However, > these tools **have been most successful on data with an underlying Euclidean or grid-like structure,** and in cases where the invariances of these structures are built into networks used to model them. > > > We see that the authors use the term "non-Euclidean data" to refer to data whose underlying structure is non-Euclidean.
Since Euclidean spaces are prototypically defined by $\mathbb{R}^n$ (for some dimension $n$), 'Euclidean data' is data which is sensibly modelled as being plotted in $n$-dimensional linear space, for example image files (where the $x$ and $y$ coordinates refer to the location of each pixel, and the $z$ coordinate refers to its colour/intensity). However, some data does not map neatly into $\mathbb{R}^n$, for example, a social network modelled by a graph. You can of course [embed the physical shape of a graph in 3-d space](https://math.stackexchange.com/questions/584171/show-that-every-graph-can-be-embedded-in-mathbbr3), but you will lose information such as the quality of edges, or the values associated with nodes, or the directionality of edges, and there isn't an obvious sensible way of mapping these attributes to higher-dimensional Euclidean space. And depending on the specific embedding, you may introduce spurious correlations (e.g. two unconnected nodes appearing closer to each other in the embedding than to nodes they are connected to). Methods such as [Graph Neural Networks](https://ai.stackexchange.com/questions/11169/what-is-a-graph-neural-network/20626#20626) seek to adapt existing Machine Learning technologies to directly process non-Euclidean structured data as input, so that this (possibly useful) information is not lost in transforming the data into a Euclidean input as required by existing techniques. Upvotes: 5 [selected_answer]<issue_comment>username_3: As far as I understand, working in a non-Euclidean space doesn't impose ordinality or hierarchy among the features, in contrast to data formed in a Euclidean space. The difference between the two approaches is not remarkable for discriminative tasks like classification. But, for generative modeling, non-Euclidean techniques help in defining the latent manifold space for the given data distribution.
This can further help in traversing the manifold of the data distribution (to generate similar samples from the same underlying manifold), even with $n$ degrees of freedom in the latent space. This is not possible with Euclidean techniques: one cannot fully traverse the manifold, or generate samples from (or outside) it, with only a minimal change in the Euclidean space. More precisely, one can, but it will only produce noisy data. Upvotes: 0 <issue_comment>username_4: It's hard to say because Euclidean space is defined with respect to some kind of metric, so without any clearer exposition on the nature of the data/problem, the phrase itself may or may not be clear. A [metric](https://en.wikipedia.org/wiki/Metric_(mathematics)) $d: A \times A \rightarrow \mathbb{R}$ is a function that defines the distance between any two points in the space with respect to the axioms: 1. two points have zero distance iff they are the same: $d(a,b) = 0 \Leftrightarrow a = b$. 2. symmetry: $d(a,b) = d(b,a)$. 3. triangle inequality: $d(a,b) + d(b,c) \geq d(a,c)$. A Euclidean metric is a metric that also obeys Pythagoras' theorem, or at least: the distance from the origin to some point $(x,y) \in \mathbb{R}^2$ is equal to $\sqrt{x^2 + y^2}.$ You will find that all Euclidean spaces are [isomorphic](https://en.wikipedia.org/wiki/Isomorphism) to $\mathbb{R}^n$, meaning that the two notions are in some sense identical. Any graph/data whose underlying data does not "naturally come" from $\mathbb{R}^n$, or a graph that does not admit a natural embedding in $\mathbb{R}^n$, might not be Euclidean, since $\mathbb{R}^n$ is isomorphic to any Euclidean space. Upvotes: 1 <issue_comment>username_5: > > Where does this type of data arise? > > > In terms of *learning* on non-Euclidean data, the term was probably first coined by <NAME> [here](https://arxiv.org/abs/1611.08097).
Recently, <NAME> published, along with other top authors in the field, a [book](https://arxiv.org/pdf/2104.13478.pdf) about geometric deep learning, which in essence presents a unified mathematical framework for symmetries-based learning, and exemplifies the extension of concepts from classical ML/DL to higher-dimensional geometric domains (chapter 4). > > Apparently, graphs and manifolds are non-Euclidean data. Why exactly is that the case? > > > According to the unified mathematical framework presented in Prof. Bronstein's book, the blueprint of geometric DL consists of an underlying domain $\Omega$ and signals $\mathcal{X}(\Omega)$ and functions $\mathcal{F}(\mathcal{X}(\Omega))$. The domain is the geometric/algebraic **structure** behind any instance of that data type (basically, it shows how its composing features are arranged). Associated with this domain is a symmetry group (all the transformations under which the domain is invariant). The signal is the observable state/value/quantity/features in the points of the domain. The function(s) are the blocks/layers that we want to build as functions of the signals on the domain. Here is where we can inject inductive biases. Translation equivariance is a property of the domain of images (among other groups). Convolutions as functions of a signal on the domain $\mathbb{R}^2$ are translation equivariant, meaning that if you move an object in an image, you would still find the same features (at different locations) when applying the same kernel over the two images. This is the (geometric prior) "inductive bias", as **the function used is aware of the domain, knows its properties, and uses them**. The underlying domain of an image is $\mathbb{R}^2$ (the position of the pixels in the (x, y) plane). In this context, one can say that the signal of an image is simply the colour/intensity of the pixels.
As pointed out in [this](https://ai.stackexchange.com/a/30368/54143) answer, this group admits the Euclidean distance between two points in the plane as $d(x, y) = \sqrt{\sum\_{i=0}^{1} (x\_i - y\_i)^2}$. In broad terms, non-Euclidean data is data whose underlying domain does not obey Euclidean distance as a metric between points in the domain. For visualization simplicity, think of $\mathbb{Z}^2$ instead, which can be seen as a "grid" of integer-valued points separated by a distance of 1. You can easily "visualize" the distance between the points of the domain. Now let's move to graphs. What is the underlying domain of the graph? It's a set of nodes $\mathcal{V}$ and a set of edges $\mathcal{E}$. There is no such thing as distance in the Euclidean sense between the nodes of a graph. If you ask "What is the distance between points A and B?", the answer is probably a function of the connectivity of the graph (which is part of the domain), **and is not the same for arbitrary points in the graph**. > > How would a dataset of non-Euclidean data look like? > > > Usually, a data point (sample) consists of a domain and a signal. * The domain can be, for example, a weighted adjacency matrix of the graph, which lists the "distances" between nodes, for those that exist. * Features can be either per node, per edge, or both. In a concrete example, the domain can be a road network graph with distances and connectivity between road intersections, and the signal can be per-node features indicating how many cars are in each intersection at a given point in time. Of course, the domain can also be different between data points (e.g. in the PROTEINS dataset, where each protein is a graph of amino acids (nodes) connected by edges, with different structures). A non-Euclidean dataset simply consists of multiple such data points, which may or may not have the same underlying domain (i.e. the same graph structure defined by an adjacency matrix). Upvotes: 1
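To make the road-network example concrete, here is a minimal Python sketch (the tiny graph and the feature values are made-up illustrations): the "distance" between two nodes is computed purely from the connectivity, with no coordinates or Euclidean embedding involved.

```python
from collections import deque

# Domain: connectivity of a tiny road network (a path graph 0-1-2-3).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
# Signal: per-node features, e.g. number of cars at each intersection.
cars = {0: 4, 1: 1, 2: 0, 3: 7}

def hop_distance(adj, src, dst):
    # Breadth-first search: shortest-path (hop) distance between two nodes.
    seen, frontier = {src: 0}, deque([src])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                frontier.append(v)
    return seen[dst]

# The distance depends only on the edge set, not on any position in R^n.
assert hop_distance(adj, 0, 3) == 3
```

Adding or removing a single edge (part of the domain, not the signal) changes these distances, which is exactly why they cannot be read off a fixed Euclidean coordinate system.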
2019/03/14
1,786
5,315
<issue_start>username_0: In certain reinforcement learning (RL) proofs, the operators involved are assumed to be *non-expansive*. For example, on page 6 of the paper [Generalized Markov Decision Processes: Dynamic-programming and Reinforcement-learning Algorithms](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.1316&rep=rep1&type=pdf) (1997), <NAME> and <NAME> state > > When $0 \leq \gamma < 1$ and $\otimes$ and $\oplus$ are non-expansions, the generalized Bellman equations have a unique optimal solution, and therefore, the optimal value function is well defined. > > > On page 7 of the same paper, the authors say that max is non-expansive. Moreover, on page 33, the authors assume $\otimes$ and $\oplus$ are non-expansions. What is a non-expansive operator? Why is the $\max$ (and the $\min$), which is, for example, used in Q-learning, a non-expansive operator?<issue_comment>username_1: In layman's terms, a non-expansive operator is a function that brings points closer together, or at least no further apart. An example of a non-expansive operator is the function $f(x) = x/2$. The two numbers $0$ and $5$ are a distance of $5$ apart. The two output numbers $f(0) = 0$ and $f(5) = 2.5$ are $2.5$ apart (which is smaller than $5$ apart). It is easy to see that $f$ brings everything closer together, except when the two input numbers are the same, in which case the distance between the outputs of the function at those numbers is at least no further apart than the distance between the two input numbers. $\max$ is a two-input function (or $n$-input, but the intuition should be clear from the 2-input case). We can think of max as a function that maps *pairs* of numbers $(x, y)$ to single numbers (picking whichever of $x$ and $y$ is larger). Suppose that we choose to measure the distance between pairs using [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance), and the distance between single numbers using the Euclidean distance as well.
Here's an example: The distance between $(0,0)$ and $(3,3)$ is $\sqrt{3^2 + 3^2} = \sqrt{18}$. The distance between $\max(0,0) = 0$ and $\max(3, 3) = 3$ is $\sqrt{{(\max(0,0) - \max(3, 3))}^2} = \sqrt{9} = 3$. Let's consider the general case. The Euclidean distance between the 2D points $(a, b)$ and $(c, d)$ is $\sqrt{(a-c)^2 + (b-d)^2}$. There are four cases to consider: 1. Suppose that $a\geq b$ and $c \geq d$. In this case, the distance between $\max(a,b)$ and $\max(c,d)$ is just $|a-c|$, which is clearly *at most* $\sqrt{(a-c)^2 + (b-d)^2}$. 2. Suppose that $a\leq b$ and $c \leq d$. In this case, the distance is $|b-d|$, which is also at most the original distance. 3. Suppose that $a\geq b$ but $c \leq d$. Then the distance is $|a-d|$. Suppose that $a > d$. Since $d \geq c$, we have $|a-d| \leq \sqrt{(a-c)^2}$, and hence $|a-d| \leq \sqrt{(a-c)^2 + (b-d)^2}$; a symmetric argument holds for the case $d > a$. 4. If $a\leq b$ but $c \geq d$, we can construct an argument identical to the one for case 3 above. Since max always brings things closer together, or at least no further apart, it is a non-expansive operator. Upvotes: 2 <issue_comment>username_2: Suppose that $X$ and $Y$ are [metric spaces](https://en.wikipedia.org/wiki/Metric_space). A metric space is a set equipped with a *metric*, which is a function that defines the intuitive notion of *distance* between the elements of the set. For example, the set of real numbers, $\mathbb{R}$, equipped with the [metric induced by the absolute value function (which is a norm)](https://math.stackexchange.com/a/1893295/168764). More precisely, the metric $d$ can be defined as $$d(x,y)=|x - y|, \forall x, y \in \mathbb{R} \tag{1}\label{1}.$$ Let $f$ be a function from the metric space $X$ to the metric space $Y$, that is, $f: X \rightarrow Y$.
Then, $f$ is a *non-expansive map* (also known as [metric map](https://en.wikipedia.org/wiki/Metric_map)) *if and only if* $$d\_{Y}(f(x),f(y)) \leq d\_{X}(x,y) \tag{2} \label{2}$$ where the subscript $\_X$ in $d\_X$ means that the metric $d\_X$ is the metric associated with the metric space $X$. Therefore, any function $f$ between two metric spaces that satisfies \ref{2} is a non-expansive operator. To show that the max operator is non-expansive, consider the set of real numbers, $\mathbb{R}$, equipped with the absolute value metric defined in \ref{1}. Then, in this case, $f=\operatorname{max}$, $d(x, y) = |x - y|$ and $X = Y = \mathbb{R}$, so \ref{2} becomes $$|\operatorname{max}(x) - \operatorname{max}(y)| \leq | x - y | \tag{3} \label{3}$$ Given that $\operatorname{max}(x) = x, \forall x$, then \ref{3} trivially holds, that is \begin{align} |\operatorname{max}(x) - \operatorname{max}(y)| &\leq | x - y | \iff \\ |x - y| &\leq | x - y | \iff \\ |x - y| &= | x - y | \tag{4} \label{4} \end{align} For example, suppose that $x=6$ and $y=9$, then \ref{4} becomes \begin{align} |\operatorname{max}(6) - \operatorname{max}(9)| &\leq | 6 - 9 | \iff \\ |6 - 9| &\leq | -3 | \iff \\ |-3| &= 3 \end{align} There are other examples of non-expansive maps. For example, $f(x) = k x$, for $0 \leq k \leq 1$, where $f : \mathbb{R} \rightarrow \mathbb{R}$. See also <https://en.wikipedia.org/wiki/Contraction_mapping> and <https://en.wikipedia.org/wiki/Contraction_(operator_theory)>. Upvotes: 1
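As a quick numerical sanity check (a randomized sketch, not a proof), the non-expansiveness of $\max$ under the Euclidean metric can be verified for random pairs of points:

```python
import math
import random

def max_is_nonexpansive(trials=10000, seed=0):
    """Check |max(a,b) - max(c,d)| <= Euclidean distance between (a,b) and (c,d)."""
    rng = random.Random(seed)
    for _ in range(trials):
        a, b, c, d = (rng.uniform(-10, 10) for _ in range(4))
        dist_in = math.hypot(a - c, b - d)     # distance between the input pairs
        dist_out = abs(max(a, b) - max(c, d))  # distance between the outputs
        if dist_out > dist_in + 1e-12:         # small tolerance for float error
            return False
    return True
```

Every random pair of points satisfies the inequality, in line with the four-case argument given in the first answer.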
2019/03/17
1,823
7,843
<issue_start>username_0: In general, the word "latent" means "hidden" and "to embed" means "to incorporate". In machine learning, the expressions "hidden (or latent) space" and "embedding space" occur in several contexts. More specifically, an [embedding](https://en.wikipedia.org/wiki/Word_embedding) can refer to a vector representation of a word. An [embedding space](https://en.wikipedia.org/wiki/Embedding) can refer to a subspace of a bigger space, so we say that the subspace is *embedded* in the bigger space. The word "latent" comes up in contexts like [hidden Markov models (HMMs)](https://en.wikipedia.org/wiki/Hidden_Markov_model) or [auto-encoders](https://en.wikipedia.org/wiki/Autoencoder). What is the difference between these spaces? In some contexts, do these two expressions refer to the same concept?<issue_comment>username_1: When it comes to normal layman terms "latent space" means it cannot be accessed, thus we have no direct control over it. We can only manipulate it indirectly, while "Embeddings" can be obtained directly. We can use deterministic operations or transformations to convert an object into its corresponding embedding space. There is no marked difference between these 2 terms as far as Machine Learning is concerned. If we look at this famous [paper](https://arxiv.org/pdf/1406.5298.pdf) on Variational Autoencoders, we can see the words has been used interchangeably. More specifically, I would consider the word (in the context of Machine Learning only) **latent** as a more general term than **Embedding**. Embeddings will refer to a more specific object (in context of ML), for example the embedding of $word\_1$ is $embedding\_1$. Whereas, we can use the term latent to describe broader terms like **latent space**, **latent representation**, **latent variables** (latent variables of a word is same as an embedding of a word). 
After digging some more I found somewhat of a formal definition of Latent Variables in [Deep Learning](https://www.deeplearningbook.org/) by Goodfellow: * **Latent Variables** - A latent variable is a random variable that we cannot observe directly. The component identity variable $c$ of the mixture model provides an example. Latent variables may be related to $x$ through the joint distribution, in this case, $P(x, c) = P(x | c)P(c)$. The distribution $P(c)$ over the latent variable and the distribution $P(x | c)$ relating the latent variables to the visible variables determines the shape of the distribution $P(x)$, even though it is possible to describe $P(x)$ without reference to the latent variable. Also, a [paper](https://www.utc.fr/~bordesan/dokuwiki/_media/en/transe_nips13.pdf) cited by Goodfellow while discussing embeddings has the following excerpt: > > Following the success of user/item clustering or matrix factorization techniques in collaborative filtering to represent non-trivial similarities between the connectivity patterns of entities in single relational data, most existing methods for multi-relational data have been designed within the framework of relational learning from latent attributes, as pointed out by; that is, by learning and > operating on latent representations (or embeddings) of the constituents (entities and relationships). > > > So clearly these are somewhat interchangeable terms. But my interpretation would be that embeddings are useful more explicitly (more visible; latent variables are meant to be hidden): we can construct a new data-set from them and use various ML methods on it, whereas latent variables are not useful explicitly (they are part of a bigger problem we are trying to solve). EDIT: In the context of HMMs, the more suitable term is hidden state, not latent space.
Thus, in an HMM (from Wikipedia): The adjective hidden refers to the state sequence through which the model passes, not to the parameters of the model; the model is still referred to as a hidden Markov model even if these parameters are known exactly. Upvotes: 3 <issue_comment>username_2: The expression "latent space" explicitly indicates that the space is associated with the mathematical concept of a hidden (or latent) variable, which cannot be observed directly, but only indirectly. The expression "embedding space" refers to a vector space that represents an original space of inputs (e.g. images or words). For example, in the case of "word embeddings", which are vector representations of words. It can also refer to a latent space because a latent space can also be a space of vectors. However, an embedding space is not necessarily a hidden space. It is just another (vector) representation of another space. These two expressions can be used interchangeably, also because the expression "embedding space" is often not formally defined. Upvotes: 3 <issue_comment>username_3: Embedding vs Latent Space ------------------------- Due to Machine Learning's recent and rapid renaissance, and the fact that it draws from many distinct areas of mathematics, statistics, and computer science, it often has a number of different terms for the same or similar concepts. "Latent space" and "embedding" both refer to an (often lower-dimensional) representation of high-dimensional data: * *Latent space* refers specifically to the space from which the low-dimensional representation is drawn. * *Embedding* refers to the way the low-dimensional data is mapped to ("embedded in") the original higher dimensional space. For example, in this "Swiss roll" data, the 3d data on the left is sensibly modelled as a 2d manifold 'embedded' in 3d space.
The function mapping the 'latent' 2d data to its 3d representation is the *embedding*, and the underlying 2d space itself is the *latent space* (or *embedded space*): [![enter image description here](https://i.stack.imgur.com/ID6dP.png)](https://i.stack.imgur.com/ID6dP.png) Synonyms -------- Depending on the specific impression you wish to give, "embedding" often goes by different terms: | Term | Context | | --- | --- | | dimensionality reduction | combating the "curse of dimensionality" | | feature extraction feature projection [feature embedding](https://ai.stackexchange.com/questions/6556/what-is-feature-embedding-in-the-context-of-convolutional-neural-networks/20697) feature learning representation learning | extracting 'meaningful' features from raw data | | embedding manifold learning latent feature representation | understanding the underlying topology of the data | However this is not a hard-and-fast rule, and they are often completely interchangeable. Upvotes: 6 [selected_answer]<issue_comment>username_4: The words latent space and embedding space are often used interchangeably. However, latent space can more specifically refer to the sample space of a stochastic representation, whereas embedding space more often refers to the space of a deterministic representation. This comes from latent referring to an unobserved random variable, for which we can infer a belief distribution over its plausible values, for example using an encoder network. You can then draw samples of the predicted distribution for further processing. To learn more, you can look into VAEs. Upvotes: 0 <issue_comment>username_5: To give a statistician's answer, the distinction is empirical (embedding) versus theoretical (latent positions). You define a statistical model which has latent positions that you could then try to estimate, given data. 
Or, given data, you might simply find a vector representation of each object of interest in a way that makes sense for the applications considered - and you would call that set of representations an embedding. There's a lot of work going on trying to re-interpret popular embeddings as estimating a compatible model's latent positions, but that's not always straightforward. Upvotes: 1
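To make the distinction concrete, here is a minimal, purely illustrative sketch of the Swiss-roll picture discussed above: the embedding is the map from 2d latent coordinates into 3d, and the latent space is the flat 2d rectangle those coordinates live in.

```python
import math

def embed(u, v):
    """The embedding: map a latent 2d point (u, v) onto a 'Swiss roll' in 3d."""
    return (u * math.cos(u), v, u * math.sin(u))

# The latent space is just a flat 2d rectangle of (u, v) coordinates;
# the embedded data is its image, a 2d manifold living in 3d space.
latent_points = [(1.5 + 0.1 * i, 0.1 * j) for i in range(30) for j in range(10)]
embedded_points = [embed(u, v) for u, v in latent_points]
```

Manifold-learning methods run this in reverse: given only `embedded_points`, they try to recover coordinates equivalent to `latent_points`.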
2019/03/17
333
1,474
<issue_start>username_0: I am trying to understand how weights are actually obtained. What generates them in a neural network? What is the algorithm that gives them certain values?<issue_comment>username_1: Typically, weights are randomly initialized. Then, as the model is [optimized](https://en.wikipedia.org/wiki/Artificial_neural_network#Optimization) for its given task, those weights are steadily made "better" as determined by the network's [loss function](https://en.wikipedia.org/wiki/Loss_function). This is also referred to as "training" the neural network. By far the most popular way of updating weights in a neural net is the [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) algorithm, most simply with [stochastic gradient descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent) (SGD). Essentially, the algorithm determines how much each individual weight contributed to the network's loss. It then updates that weight in the direction that would reduce the loss. I recommend going through <NAME>'s [online book](http://neuralnetworksanddeeplearning.com/) to learn the basics. Upvotes: 4 [selected_answer]<issue_comment>username_2: I agree with @PhilipRaeisghasem: in most architectures, weights are initialized in a random manner. However, some research papers suggest applying a random normal distribution initialization to the weights in the case of Convolutional Neural Networks (for computer vision). Upvotes: 0
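To illustrate the two steps described in the answers (random initialization, then gradient-based updates), here is a toy sketch; the function names and the scale of the initialization are illustrative, not any particular library's API.

```python
import random

def init_weights(n_in, n_out, scale=0.01, seed=0):
    """Random initialization: small Gaussian values, before any training."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, scale) for _ in range(n_in)] for _ in range(n_out)]

def sgd_step(weights, grads, lr=0.1):
    """One SGD update: each weight moves against its loss gradient."""
    return [[w - lr * g for w, g in zip(w_row, g_row)]
            for w_row, g_row in zip(weights, grads)]
```

Backpropagation is what computes `grads`; the update rule itself is this simple.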
2019/03/20
828
3,824
<issue_start>username_0: If we have classified 1000 people's faces, how do we ensure the network tells us when it encounters a new person?<issue_comment>username_1: You can't really make the network tell you if a face is new (unless you actually train a network with that particular purpose, maybe). Ideally, if you feed a new face to a trained network, the output activations will be pretty low in all the possible categories (faces that the network has already seen) i.e. no particular category will signal a high probability. In that case, you can just set a cutoff, such that if the maximum activation is below a certain threshold, it means that the NN encountered a new face. But things are not that easy in practice. If the NN encounters a face it has already seen, but the image is weird (blurry, the face doesn't cover much of the actual image, there are multiple faces in the image etc.) you could get the same output as in the case of a new face. At the same time, for a NN images are just probability distributions, so it could happen that a new face, which to us humans looks nothing like the others, looks pretty similar to an old one from the NN's point of view, hence giving a high activation in a single particular output neuron. Upvotes: 0 <issue_comment>username_2: Specifically for face recognition (and other identification algorithms) there are better approaches than using classifiers directly. Most identity recognition algorithms generate some kind of metric - typically an "embedding" of the original image into an abstract space i.e. a vector of real numbers. The space might be based on real-world biometrics e.g. normalised measurements of eye distance, eyebrow arch etc, which would be trained as a regression algorithm. The problem with this is that it requires a lot of labelled data, and the biometrics are not necessarily good at differentiating between identities. An alternative is to get the neural network to find the best abstract space for identities.
You can do this if you have at least two images of each identity, and using [triplet loss](https://en.wikipedia.org/wiki/Triplet_loss) to train the network - the loss function directly rewards embedding the same identity close and different identity far apart. Once you have an embedding, you no longer directly classify identities using the neural network. Instead, you base identity on distance between measured embedding and stored embeddings. This requires implementing a search function that looks at known embeddings and sorts by distance. > > how do we ensure the network tells us when it encounters a new person? > > > Embeddings don't solve this problem directly, but give a useful heuristic - distance in embedding space. Typically a maximum allowed distance is set as a cutoff to consider an image as showing a new identity. This is a hyperparameter of the model. This is an area that triplet loss helps with, since it is trained to make the distance as large as possible between images that show different identities. If it has generalised well during training, then it should ignore differences due to lighting, pose, makeup etc, but still be able to differentiate similar looking people. As the embedding is approximate, any such system may make mistakes, and needs to be carefully tested. The quality and quantity of training data are important, and it should match images used in production. But that is no different to the pure classifier, which must in addition be re-built and re-trained for every new class added. Whether to use a more basic classifier or something like triplet loss is a question of scale - if the number of identities that need to be tracked is high, or the rate of change in identities is high, then embeddings trained on triplet loss (or similar) are more practical. Upvotes: 1
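A minimal sketch of the two ideas in this answer, the triplet loss on embedding vectors and the distance-plus-threshold rule for flagging a new identity; all names and the threshold value are illustrative.

```python
def sq_dist(a, b):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Zero when the positive is closer than the negative by at least `margin`."""
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

def identify(embedding, known, threshold=0.6):
    """Return the closest known identity, or None if it is too far away (new face)."""
    name, ref = min(known.items(), key=lambda kv: sq_dist(embedding, kv[1]))
    return name if sq_dist(embedding, ref) <= threshold else None
```

The `threshold` here is the hyperparameter mentioned in the answer: the maximum allowed embedding distance before an image is treated as a new identity.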
2019/03/20
1,012
4,458
<issue_start>username_0: I am fine-tuning a VGG16 model on 20 classes with 500k images. I was wondering how you choose the size of the dense layer (the one before the prediction layer, which has a size of 20). I would prefer not to do a grid search, seeing how long it takes to train my model. Also, how many Dense layers should I put after my global average pooling? ``` base_model = keras.applications.VGG16(weights='imagenet', include_top=False) x = base_model.output x = GlobalAveragePooling2D()(x) x = Dense(???, activation='relu')(x) x = Dropout(0.5, name='drop_fc1')(x) prediction_layer = Dense(class_number, activation='softmax')(x) ``` I haven't seen particular rules about how it's done; are there any? Is it linked to the size of the convolution layer?
2019/03/20
1,022
4,519
<issue_start>username_0: So I have been playing around with `neat-python`. I made a program, applying NEAT, to play pinball on the Atari 2600. The code for that can be found in the file `test2.py` [here](https://github.com/niallmandal/atari_pinball_tutorial) Now, based on that, I would like to do the same, but on a 2-player game. I have already set up the environment to play a 2-player game, which is Pong, using OpenAI Retro. What I have no clue how to do is run 2 nets at the same time, on the same observation. The way that `neat-python` works, you get the observation from a single function that goes through each genome and runs the environment. How would you create 2 `eval_genome` functions that can take in the same observation in real time? This means that they train based off of the same images and environments. Help?
2019/03/20
1,187
4,511
<issue_start>username_0: I have read a lot about RL algorithms that update the action-value function at each step with the currently gained reward. The requirement here is that the reward is obtained after each step. I have a case where I have three steps that have to be passed in a specific order. At each step the agent has to make a choice between a range of actions. The actions are specific for each step. To give an example for my problem: I want the algorithm to render a sentence of three words. For the first word the agent may choose a word out of `['I', 'Trees']`, the second word might be `['am', 'are']` and the last word could be chosen from `['nice', 'high']`. After the agent has made its choices, the reward is obtained once for the whole sentence. *Does anyone know which algorithms to use in this kind of problem?* --- To give a bit more detail on what I already tried: I thought that using value iteration would be a reasonable approach to test. My problem here is that I don't know how to assign the discounted reward for the chosen operations. For example, after the last choice I get a reward of 0.9. But how do I update the value for the first action (choosing out of I and Trees in my example)?<issue_comment>username_1: You don't need to have a reward on every single timestep; a reward at the end is enough. Reinforcement learning can deal with the temporal credit assignment problem; all algorithms are designed to work with it. It's enough to define a reward at the end where you, for example, give a reward of 1 if the sentence is satisfactory or -1 if it isn't. Regular tabular Q-learning would easily solve the toy problem that you gave as an example. Upvotes: 2 <issue_comment>username_2: You are describing a straightforward Markov Decision Process that could be solved by almost any Reinforcement Learning algorithm. > > I have read a lot about RL algorithms that update the action-value function at each step with the currently gained reward.
The requirement here is that the reward is obtained after each step. > > > This is true. However, the reward value can be zero, and it is quite common to have most states and actions result in zero reward with only some specific ones returning a non-zero value that makes a difference to the goals of the agent. When this is more extreme - e.g. only one non-zero reward per thousand steps - this is referred to as "sparse rewards" and can be a problem to handle well. Your three-steps-then-reward situation is nowhere near this, and not an issue at all. > > I thought that using value iteration would be a reasonable approach to test. My problem here is that I don't know how to assign the discounted reward for the chosen operations. > > > Value iteration should be fine for your test problem, providing the number of choices for words is not too high. The way you assign discounted rewards is to use the Bellman equation as an update: $$v(s) \leftarrow \text{max}\_{a}[\sum\_{s',r} p(s',r|s,a)(r + \gamma v(s'))]$$ For a deterministic environment you can simplify the sum, as $p(s',r|s, a)$ will be 1 for a specific combination of $s', r$. It doesn't matter that $r = 0$ for the first two steps. The Bellman equation links time steps, by connecting $v(s)$ and $v(s')$. So, over many repetitions of value iteration's main loop, the episode-end rewards get copied - with the discount applied - to their predecessor states. Very quickly in your case you will end up with values for the start states. > > For example after the last choice I get a reward of 0.9. But how do I update the value for the first action (choosing out of I and Trees in my example)? > > > You don't do it directly in a single step. What happens is that repeating the value iteration loop copies the best values to their predecessor states, one possible time step at a time.
On the first loop through all states, you will run an update something like: $$v(\text{'I'}) \leftarrow 0 + \gamma v(\text{'I am'})$$ and $v(\text{'I am'}) = 0$ initially, so it will learn nothing useful in this first loop. However, it will also learn in the same loop: $$v(\text{'I am'}) \leftarrow 0 + \gamma v(\text{'I am nice'})$$ So assuming $\gamma = 0.9$ and $v(\text{'I am nice'}) = 0.9$, and that "I am high" scores less than 0.9, then it will set $v(\text{'I am'}) = 0.81$. On the next loop through states in value iteration, it will then set $v(\text{'I'}) = 0.729$ (assuming "I am" beats "I are" for maximum value). Upvotes: 3 [selected_answer]
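The updates above can be run directly as tabular value iteration on the toy sentence problem. This sketch assumes the reward of 0.9 is paid only for the full sentence "I am nice" and uses $\gamma = 0.9$:

```python
from itertools import product

GAMMA = 0.9
WORDS = [('I', 'Trees'), ('am', 'are'), ('nice', 'high')]

def reward(sentence):
    # assumed terminal reward: only the full sentence "I am nice" scores
    return 0.9 if sentence == ('I', 'am', 'nice') else 0.0

# States are partial sentences of length 0..3; terminal states hold the reward.
states = [tuple(s) for k in range(4) for s in product(*WORDS[:k])]
V = {s: reward(s) for s in states}

for _ in range(3):  # three sweeps propagate values back to the start state
    for s in states:
        if len(s) < 3:
            V[s] = max(GAMMA * V[s + (w,)] for w in WORDS[len(s)])
```

After the sweeps, `V[('I', 'am')]` is 0.81 and `V[('I',)]` is 0.729 (= 0.9 x 0.81), showing how the end-of-episode reward is discounted back to the earlier choices.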
2019/03/21
1,311
5,440
<issue_start>username_0: I want to solve a problem using Reinforcement Learning on a 20x20 board. An agent (a mouse) has to get the highest possible rewards as fast as possible by collecting cheeses, of which there are 10 in total. The agent has a fixed amount of time for solving this problem, namely 500 steps per game. The problem is that the cheeses will be randomly assigned to the fields on the board; the agent does know, however, where the cheeses are located. Is there any way this could be solved using only Reinforcement Learning (and not training for an absurd amount of time)? Or is the only way to solve this problem to use algorithms like the A\*-algorithm? I've tried many different (deep) Q-learning models, but it always failed miserably. Edit: I could not get any meaningful behavior after 6 hours of learning while using a GTX 950M. Maybe my implementation was off, but I don't think so.<issue_comment>username_1: Yes, you can use RL for this. The trick is to include the location of the cheese as part of the state description. So as well as up to 400 states for the mouse location, you have (very roughly) $400^{10}$ possible cheese locations, meaning you have $400^{11}$ states in total. So you are going to want some function approximation if you want to use RL - you would probably train using a convolutional neural network, with an "image" of the board including mouse and cheese positions, plus DQN to select actions. Viewed like this, a game where the mouse tries to get the cheese in minimal time seems on the surface a lot simpler than many titles in the Atari game environments, which DQN has been shown to solve well for many games. I would probably use two image channels - one for mouse location and one for cheese location. A third channel perhaps for walls/obstacles if there are any. > > Or is the only way to solve this problem to use algorithms like the A\*-algorithm?
> > > A\* plus some kind of sequence optimisation like a travelling salesman problem (TSP) solver would probably be optimal if you have been presented with the problem and asked to solve it any way you want. With only 11 locations to resolve - mouse start plus 10 cheese locations - you can brute force the movement combinations in a few seconds on a modern CPU, so that part may not be particularly exciting (whilst TSP solvers can get more involved and interesting). The interesting thing about RL is *how* it will solve the problem. RL is a learning algorithm - the purpose of implementing it is to see what it takes for the machine to gain knowledge towards a solution. Whilst A\* and combinatorial optimisers are where *you* have knowledge of how to solve the problem and do so as optimally as possible based on a higher level analysis. The chances are high that an A\*/optimiser solution would be more robust, quicker to code, and quicker to run than an RL solution. There is nothing inherently wrong with either approach, if all you want to do is solve the problem at hand. It depends on your goals for why you are bothering with the problem in the first place. You could even combine A\* and RL if you really wanted to. A\* to find the paths, then RL to decide the best sequence using the paths as part of the input to the CNN. The A\* analysis of routes would likely help the RL stage a lot - add them as one or more additional channels. Upvotes: 3 [selected_answer]<issue_comment>username_2: I assume here the OP is familiar with DQN basics. The "standard" way to solve this problem with Deep RL would be a convolutional DQN. Make a net with 2 to 4 convolutional layers and 1-2 fully connected layers on top. The trick here is how you output Q-values. The input is the board with cheese, without information about the mouse. Instead, the net should output a Q value for every action from every field on the board (every possible position of the mouse); that means you output 20x20x(number\_of\_moves from the field) Q values.
That would make the net quite big, but it is the most reliable way for DQN. For each move from the replay buffer, only one Q value is updated (one gradient produced), using the temporal-difference equation. Because only one value out of 20x20x(number\_of\_moves) is updated per sample, you need a quite big replay buffer and a lot of training. After each episode (complete game), the cheese should be randomly redistributed. Episodes should be mixed up in the replay buffer; training on a single episode is a big no. Hopefully that at least gives you a direction in which to do research/development. Warning: DQN is slow to train, and with such a big action space (20 x 20 x number\_of\_moves) it could require millions or tens of millions of moves, or even more. Alternatively, if you don't want such a big action space, you can use an actor-critic architecture (or policy gradient; actor-critic is a kind of policy gradient). An actor-critic network has a small action space, with only number\_of\_moves outputs. On the downside, the complexity of the method is much higher, and its behavior could be difficult to predict or analyze. However, if the action space is too big, it could be the preferable solution. The practical issues and implementation of actor-critic are too huge an area to go into in depth here. Edit: There is another way with a smaller action space for DQN, but somewhat less reliable and possibly slower: shift the board so that the mouse is in the center and pad the invalid parts with zeros (the size of the new board should be 2x). In that case, only number\_of\_moves outputs are needed. Upvotes: 0
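As a sketch of the "image" input both answers describe, the board can be encoded with one channel per object type before feeding it to a convolutional net; the shapes and names here are illustrative.

```python
def encode_state(mouse, cheeses, size=20):
    """Encode the board as a 2-channel image: channel 0 = mouse, channel 1 = cheese."""
    state = [[[0.0] * size for _ in range(size)] for _ in range(2)]
    mr, mc = mouse
    state[0][mr][mc] = 1.0        # one-hot mouse position
    for r, c in cheeses:
        state[1][r][c] = 1.0      # one cell per remaining cheese
    return state
```

A third channel can be added for walls or obstacles, and the cheese channel is simply rebuilt whenever the positions are re-randomised at the start of an episode.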