2019/03/22 | <issue_start>username_0: Coming from a process (optimal) control background, I have begun studying the field of deep reinforcement learning.
Sutton & Barto (2015) state that
>
> particularly important (to the writing of the text) have been the contributions establishing and developing the relationships to the theory of optimal control and dynamic programming
>
>
>
With an emphasis on the elements of reinforcement learning - that is, policy, agent, environment, etc., what are the key differences between (deep) RL and optimal control theory?
In optimal control we have controllers, sensors, actuators, plants, etc., as elements. Are these different names for similar elements in deep RL? For example, would an optimal control plant be called an environment in deep RL?<issue_comment>username_1: Yes, you can use RL for this. The trick is to include the location of the cheese as part of the state description. So as well as up to 400 states for the mouse location, you have (very roughly) $400^{10}$ possible cheese locations, meaning you have $400^{11}$ states in total.
So you are going to want some function approximation if you want to use RL - you would probably train using a convolutional neural network, with an "image" of the board including mouse and cheese positions, plus DQN to select actions.
Viewed like this, a game where the mouse tries to get the cheese in minimal time seems on the surface a lot simpler than many titles in the Atari game environments, which DQN has been shown to solve well for many games.
I would probably use two image channels - one for mouse location and one for cheese location. A third channel perhaps for walls/obstacles if there are any.
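As a sketch of that representation (the function and argument names here are made up for illustration), the channels can be stacked into a binary NumPy array:

```python
import numpy as np

def encode_board(size, mouse, cheeses, walls=()):
    """Encode a square board as binary image channels:
    channel 0 = mouse position, channel 1 = cheese positions,
    channel 2 = walls/obstacles (if any)."""
    board = np.zeros((3, size, size), dtype=np.float32)
    board[0][mouse] = 1.0
    for cheese in cheeses:
        board[1][cheese] = 1.0
    for wall in walls:
        board[2][wall] = 1.0
    return board

# One state of a 20x20 board: mouse at (0, 0), two cheeses left.
state = encode_board(20, mouse=(0, 0), cheeses=[(3, 4), (10, 2)])
```

An array like this (channels-first here) would be the "image" fed to the convolutional layers of the DQN.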
>
> Or is the only way to solve this problem to use algorithms like the A\*-algorithm?
>
>
>
A\* plus some kind of sequence optimisation like a travelling salesman problem (TSP) solver would probably be optimal if you have been presented the problem and asked to solve it any way you want. With only 11 locations to resolve - mouse start plus 10 cheese locations - then you can brute force the movement combinations in a few seconds on a modern CPU, so that part may not be particularly exciting (whilst TSP solvers can get more involved and interesting).
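A minimal sketch of that brute force, using made-up shortest-path lengths between locations (in a real solution these would come from A\*):

```python
from itertools import permutations

# Hypothetical shortest-path lengths between the start "S" and cheeses.
dist = {("S", "A"): 2, ("S", "B"): 5, ("A", "B"): 2}

def d(a, b):
    # Path lengths are symmetric in this sketch.
    return dist.get((a, b)) or dist.get((b, a))

def best_order(start, targets):
    """Brute-force the cheapest order to visit all targets (an open TSP)."""
    best = None
    for perm in permutations(targets):
        cost = sum(d(a, b) for a, b in zip((start,) + perm, perm))
        if best is None or cost < best[0]:
            best = (cost, perm)
    return best

# With 10 cheese locations this loops over 10! = 3,628,800 orders,
# which a modern CPU brute-forces in seconds.
```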
The interesting thing about RL is *how* it will solve the problem. RL is a learning algorithm - the purpose of implementing it is to see what it takes for the machine to gain knowledge towards a solution. With A\* and combinatorial optimisers, by contrast, *you* have knowledge of how to solve the problem and do so as optimally as possible based on a higher-level analysis. The chances are high that an A\*/optimiser solution would be more robust, quicker to code, and quicker to run than an RL solution.
There is nothing inherently wrong with either approach, if all you want to do is solve the problem at hand. It depends on your goals for why you are bothering with the problem in the first place.
You could even combine A\* and RL if you really wanted to. A\* to find the paths, then RL to decide best sequence using the paths as part of the input to the CNN. The A\* analysis of routes would likely help the RL stage a lot - add them as one or more additional channels.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I assume here the OP is familiar with DQN basics.
The "standard" way to solve this problem with deep RL would be to use a convolutional DQN.
Build a net with 2 to 4 convolutional layers and 1-2 fully connected layers on top.
The trick here is how you output the Q-values. The input is the board with the cheese, without information about the mouse. Instead, the net should output a Q-value for every action from every field on the board (every possible position of the mouse), which means you output 20x20x(number\_of\_moves from the field) Q-values. That makes the net quite big, but it is the most reliable way for DQN. For each move from the replay buffer, only one Q-value is updated (one gradient produced) with the temporal-difference equation.
Because only one value out of 20x20x(number\_of\_moves) is updated per sample, you need a quite big replay buffer and a lot of training. After each episode (complete game) the cheese should be randomly redistributed. Episodes should be mixed up in the replay buffer; training on a single episode is a big no.
Hopefully that should at least give you a direction in which to do research/development. Warning: DQN is slow to train, and with such a big action space (20 x 20 x number\_of\_moves) it could require millions or tens of millions of moves, or even more.
An alternative, if you don't want such a big action space, is to use an actor-critic architecture (or policy gradient; actor-critic is a kind of policy gradient). An actor-critic network has a small action space, with only number\_of\_moves outputs. On the downside, the complexity of the method is much higher, and its behavior can be difficult to predict or analyze. However, if the action space is too big, it could be the preferable solution. The practical issues and implementations of actor-critic are too large an area to go into in depth here.
Edit: There is another way to get a smaller action space for DQN, but it is somewhat less reliable and possibly slower: shift the board in such a way that the mouse is in the center of the board and pad the invalid parts with zeros (the size of the new board should be 2x). In that case only number\_of\_moves outputs are needed.
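A sketch of that shifting trick, assuming a single-channel 2D board array and a `(row, col)` mouse position (names are illustrative):

```python
import numpy as np

def center_on_mouse(board, mouse):
    """Shift the board so the mouse lands at the centre of a
    zero-padded board of size (2n-1) x (2n-1)."""
    n = board.shape[0]
    big = np.zeros((2 * n - 1, 2 * n - 1), dtype=board.dtype)
    r, c = mouse
    # Place the original board so that (r, c) maps to the centre (n-1, n-1).
    big[n - 1 - r : 2 * n - 1 - r, n - 1 - c : 2 * n - 1 - c] = board
    return big
```

With the mouse always at the centre, the net only needs number\_of\_moves outputs, at the cost of a roughly 4x larger input.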
Upvotes: 0 |
2019/03/23 | <issue_start>username_0: [Autoencoders](https://en.wikipedia.org/wiki/Autoencoder) are neural networks that learn a compressed representation of the input in order to later reconstruct it, so they can be used for dimensionality reduction. They are composed of an encoder and a decoder (which can be separate neural networks). Dimensionality reduction can be useful in order to deal with or attenuate the issues related to the curse of dimensionality, where data becomes sparse and it is more difficult to obtain "statistical significance". So, autoencoders (and algorithms like PCA) can be used to deal with the curse of dimensionality.
Why do we care about dimensionality reduction specifically using autoencoders? Why can't we simply use PCA, if the purpose is dimensionality reduction?
Why do we need to decompress the latent representation of the input if we just want to perform dimensionality reduction, or why do we need the decoder part in an autoencoder? What are the use cases? In general, why do we need to compress the input to later decompress it? Wouldn't it be better to just use the original input (to start with)?<issue_comment>username_1: A use case of autoencoders (in particular, of the decoder or generative model of the autoencoder) is to denoise the input. This type of autoencoders, called *denoising autoencoders*, take a partially corrupted input and they attempt to reconstruct the corresponding uncorrupted input. There are several applications of this model. For example, if you had a corrupted image, you could potentially recover the uncorrupted one using a denoising autoencoder.
Autoencoders and PCA are related:
>
> an autoencoder with a single fully-connected hidden layer, a linear activation function and a squared error cost function trains weights that span the same subspace as the one spanned by the principal component loading vectors, but that they are not identical to the loading vectors.
>
>
>
For more info, have a look at the paper [From Principal Subspaces to Principal Components with Linear Autoencoders](https://arxiv.org/pdf/1804.10253.pdf) (2018), by <NAME>. See also [this answer](https://stats.stackexchange.com/a/120096/82135), which also explains the relation between PCA and autoencoders.
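As a numerical sketch of this relation: the principal subspace below is computed with a plain SVD, and the projection/reconstruction pair plays the role of a linear encoder/decoder (this is ordinary PCA, not a trained autoencoder):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated data
Xc = X - X.mean(axis=0)                                  # centre the data

# Principal component loading vectors via SVD.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
W = Vt[:k].T        # loadings spanning the subspace a linear AE also finds
Z = Xc @ W          # "encoder": project down to k dimensions
X_rec = Z @ W.T     # "decoder": reconstruct from the projection

# The reconstruction error is exactly the variance that was discarded.
err = np.sum((Xc - X_rec) ** 2)
```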
Upvotes: 2 <issue_comment>username_2: PCA is a linear method that creates a transformation capable of changing vector projections (changing the axes).
Since PCA looks for the direction of maximum variance, it usually has high discriminative power, **BUT** it is not guaranteed that the direction of most variance is the direction of most discriminability.
LDA is a linear method that creates a transformation capable of finding the direction that is most relevant for deciding whether a vector belongs to class A or B.
PCA and LDA have non-linear Kernel versions that might overcome their linear limitations.
Autoencoders can perform dimensionality reduction with other kinds of loss functions, can be non-linear, and might perform better than PCA and LDA in a lot of cases.
There is probably no single best machine learning algorithm for every task; sometimes deep learning and neural nets are overkill for simple problems, and PCA and LDA might be tried before other, more complex, dimensionality reduction methods.
Upvotes: 2 <issue_comment>username_3: It is important to think about what sort of patterns in the data are being represented.
Suppose that you have a dataset of greyscale images, such that every image is a uniform intensity. As a human brain you'd realise that every element in this dataset can be described in terms of a single numeric parameter, which is that intensity value. This is something that PCA would work fine for, because each of the dimensions (we can think of each pixel as a different dimension) is perfectly linearly correlated.
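This toy example can be checked numerically: after centring, a dataset of uniform-intensity images has rank one, so a single principal component carries all the variance (NumPy only, made-up data):

```python
import numpy as np

rng = np.random.default_rng(1)
# 100 flattened 8x8 "images", each with one uniform intensity value.
intensities = rng.uniform(size=(100, 1))
images = np.repeat(intensities, 64, axis=1)

# All 64 pixel dimensions are perfectly correlated, so after centring
# only the first singular value is (numerically) non-zero.
Xc = images - images.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
```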
Suppose instead that you have a dataset of black and white 128x128px bitmap images of centred circles. As a human brain you'd quickly realise that every element in this dataset can be fully described by a single numeric parameter, which is the radius of the circle. That is a very impressive level of reduction from 16384 binary dimensions, and perhaps more importantly it's a semantically meaningful property of the data. However, PCA probably won't be able to find that pattern.
Your question was "Why can't we simply use PCA, if the purpose is dimensionality reduction?" The simple answer is that PCA is the simplest tool for dimensionality reduction, but it can miss a lot of relationships that more powerful techniques such as autoencoders might find.
Upvotes: 3 <issue_comment>username_4: 1. The decoder half is necessary in order to compute the loss function for training the network. This is similar to how the 'adversary' is still necessary in a GAN even if you are only interested in the generative component.
2. Autoencoders can learn non-linear embeddings of the data, and hence are more powerful than vanilla PCA.
3. Autoencoders have [applications](https://en.wikipedia.org/wiki/Autoencoder#Applications) beyond dimensionality reduction:
* Generating new data points, or performing interpolation (see VAEs)
* Creating denoising filters (e.g. in image processing)
* Compressing/decompressing data
* Link prediction (e.g. in drug discovery)
Upvotes: 1 |
2019/03/23 | <issue_start>username_0: [<NAME>' Neural Networks A Systematic Introduction, section 8.1.2](https://page.mi.fu-berlin.de/rojas/neural/chapter/K8.pdf#page=9) relates off-line backpropagation and on-line backpropagation with Gauss-Jacobi and Gauss-Seidel methods for finding the intersection of two lines.
What I can't understand is how the iterations of on-line backpropagation are perpendicular to the (current) constraint. More specifically, how is $\frac12(x\_1w\_1 + x\_2w\_2 - y)^2$'s gradient, $(x\_1,x\_2)$, normal to the constraint $x\_1w\_1 + x\_2w\_2 = y$?
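For reference, the gradient in the question can be written out directly. Differentiating the on-line error with respect to the weight vector $w = (w\_1, w\_2)$ gives

$$
\nabla\_w \, \tfrac12 (x\_1 w\_1 + x\_2 w\_2 - y)^2 = (x\_1 w\_1 + x\_2 w\_2 - y) \, (x\_1, x\_2),
$$

i.e. a scalar multiple of $(x\_1, x\_2)$. And for any two weight vectors $w, w'$ satisfying the constraint, $(x\_1, x\_2) \cdot (w - w') = y - y = 0$, so $(x\_1, x\_2)$ is indeed normal to the line $x\_1 w\_1 + x\_2 w\_2 = y$ in weight space.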
2019/03/24 | <issue_start>username_0: I am training a modified VGG-16 to classify crowd density (empty, low, moderate, high). 2 dropout layers were added at the end of the network, one after each of the last 2 FC layers.
network settings:
* training data contain 4381 images categorized under 4 categories (empty, low, moderate, high), 20% of the training data is set for validation. test data has 2589 images.
* training is done for 50 epochs (training/validation accuracy drops after 50 epochs)
* lr=0.001, decay=0.0005, momentum=0.9
* loss= categorical\_crossentropy
* augmentation for (training, validation and testing data): rescale=1./255, brightness\_range=(0.2,0.9), horizontal\_flip
With the above-stated settings, I get the following results:
* training evaluation loss: 0.59, accuracy: 0.77
* testing accuracy: 77.5% (2007 correct predictions out of 2589)
Regarding this, I have two concerns:
1. Is there anything else I could do to improve accuracy for both training and testing?
2. How can I know if this is the best accuracy I can get?<issue_comment>username_1: >
> Is there anything else I could do to improve accuracy for both training and testing?
>
>
>
Yes, of course, there are a lot of methods if you want to try to improve your accuracy, some that I can mention:
* Try to use a more complex model: ResNet, DenseNet, etc.
* Try to use other optimizers: Adam, Adadelta, etc.
* Tune your hyperparameters (e.g. change your learning rate, momentum, rescale factor, convolution size, number of feature maps, epochs, neurons, FC layers)
* Try to analyze your data: with ~77% accuracy and 4 categories, is it possible that there is one category that is difficult to classify?
In essence, you have to do a lot of experiments with your model until you think "it is enough" (if you have a deadline). If you don't have a hard deadline, you can keep improving your ML model.
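As one concrete way to run such experiments, here is a minimal grid-search sketch over a few hyperparameters; `train_and_eval` is a made-up stand-in for an actual training run returning validation accuracy:

```python
from itertools import product

def train_and_eval(lr, momentum, dropout):
    # Toy stand-in: a real version would train the network and
    # return validation accuracy for these hyperparameters.
    return 1.0 - abs(lr - 0.01) - 0.1 * abs(dropout - 0.5)

grid = {
    "lr": [0.1, 0.01, 0.001],
    "momentum": [0.9, 0.99],
    "dropout": [0.3, 0.5],
}

# Evaluate every combination and keep the best-scoring one.
best = max(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=lambda params: train_and_eval(**params),
)
```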
>
> How can I know if this is the best accuracy I can get?
>
>
>
You can't know until you compare it with other models/hyperparameters. If you do more experiments (some of the ways I mentioned above) or compare with other people's experiments that use the same data, you'll find which one is the best. For an academic paper, for example, you need to compare at least 3 to 4 similar models, or experiment with hundreds of different hyperparameter combinations.
Upvotes: 3 <issue_comment>username_2: One option not mentioned by **username_1** is getting more data. A bigger dataset almost always improves training results. If it's too hard to obtain more labeled data, you can use data augmentation on the existing data - small random transformations while keeping the same label.
For images, the most common augmentation methods are (applying padding if needed):
* small random zooming in/out
* small random shifts of image
* adding random noise to image
* small random changes in brightness, contrast, color balance and similar parameters
There are more complex methods, but they depend on the specifics of the dataset and the goal of training.
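The augmentations above can be sketched in a few lines of NumPy; the function name and parameter ranges are illustrative, not prescriptive:

```python
import numpy as np

def augment(img, rng):
    """Apply small random, label-preserving transformations."""
    if rng.random() < 0.5:                        # random horizontal flip
        img = img[:, ::-1]
    shift = int(rng.integers(-2, 3))              # small random shift
    img = np.roll(img, shift, axis=1)
    img = img * rng.uniform(0.8, 1.2)             # brightness change
    img = img + rng.normal(0.0, 0.01, img.shape)  # additive noise
    return img

rng = np.random.default_rng(0)
img = np.ones((8, 8))
aug = augment(img, rng)
```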
Stopping:
* you should stop training if you see the training error is not decreasing - you have reached a (possibly local) minimum.
* you should stop if the testing or validation error starts to increase - your method is overfitting (data augmentation can help in that case)
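Both stopping rules reduce to tracking the best validation error with a patience window; a minimal sketch (the helper name is made up):

```python
def early_stopping(val_errors, patience=3):
    """Return the epoch of the best validation error, stopping once
    the error has failed to improve for `patience` epochs."""
    best_err, best_epoch = float("inf"), 0
    for epoch, err in enumerate(val_errors):
        if err < best_err:
            best_err, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:
            break  # validation error keeps rising: likely overfitting
    return best_epoch
```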
Upvotes: 2 |
2019/03/24 | <issue_start>username_0: I was looking for an approach to recognise musical notes from photos.
I found this repository <https://github.com/mpralat/notesRecognizer>. However, it doesn't seem good enough. If you look into the [`bad`](https://i.stack.imgur.com/XtB0y.png) folder, you can see that tiny variations in lighting can already cause problems. One should be able to read musical notes from lower-quality images.
I found other projects. However, they all use high resolution images.
* <https://github.com/suyalcinkaya/music-note-recognition> ([example of image](https://github.com/suyalcinkaya/music-note-recognition/blob/master/input_images/im2s.JPG?raw=true))
* <https://github.com/nikolalsvk/note-play> ([example](https://github.com/nikolalsvk/note-play/blob/master/images/notes-1.png?raw=true))
Now, this is unsatisfying if you want to snap a photo of some tunes and have them recognized.
So, what could one do to achieve a good solution?
I was thinking about treating the musical notes just like written letters. The computer can easily learn written characters, such as Arabic symbols.
I wonder, though, how easy it would be for a non-Arabic script. For example, in Chinese or Japanese, several characters combine into one.
The same applies to musical notes, they can be connected and form something slightly different through that. For example,
[](https://i.stack.imgur.com/XtB0y.png)
or:
[](https://i.stack.imgur.com/P6X9K.png)
in contrast to just simple notes like:
[](https://i.stack.imgur.com/ds7tf.png)
What would be a good approach to recognize musical notes even for slightly low-resolution images or bit blurry deformed images?
I'm not saying to read out a symphony out of a thumbnail. But less than optimal captures.<issue_comment>username_1: refer to [MNIST on kagle](https://www.kaggle.com/c/digit-recognizer)
to train model to recognize given set of notes in your case.
[](https://i.stack.imgur.com/B2oQ3.png)
If you start training, the model will learn patterns depending on the bits in your image.
The issue comes with blurry images, when the last 2 sets of pictures differ only minutely - it will be a difficult task for the computer to distinguish them.
There is one sure-shot method which works almost every time, but it is resource-intensive and needs a plethora of data to train: a DNN.
This is my take on your question.
Upvotes: -1 <issue_comment>username_2: >
> any subjective ideas or comments are more than welcome
>
>
>
Not a complete answer, but some ideas:
Your goal is subdivided into many tasks. It's not exactly the same as OCR, because you also need to find the vertical alignment of each note.
---
>
> One should be able to read musical notes with lower quality images.
>
>
>
If you want your model to perform on low-quality images, you'll need such a database.
But instead of labeling and taking pictures of printed sheets, you could just generate the images and virtually apply all sorts of distortions to them.
Upvotes: 1 |
2019/03/25 | <issue_start>username_0: I have been looking at BERT for many tasks. I would like to compare the performance to answer an FAQ, using BERT semantic similarity and BERT Q/A.
However, I'm not sure it is a good idea to use semantic similarity for this task. If it is, do you think it is possible to find a dataset to fine-tune my algorithm?<issue_comment>username_1: Maybe the following article can help you:
[FAQ Retrieval using Query-Question Similarity and BERT-Based Query-Answer Relevance (2019)](https://arxiv.org/abs/1905.02851)
They evaluate their model in localgovFAQ and StackExchange datasets.
Upvotes: 1 <issue_comment>username_2: You can do that and you'll probably find data, yet it depends on the kind of FAQ data you will apply it to. More importantly, what insight do you gain by comparing the two BERT models?
Secondly, if you mean by semantic similarity that vector space embeddings are used, even for the retrieval/ranking and not just for re-ranking, then I can tell you that the performance still isn't SOTA. But you can simply use such a neural semantic model for re-ranking.
We are working on that, so if you want to know more, PM me.
Upvotes: 0 |
2019/03/25 | <issue_start>username_0: From what I know, AI/ML uses a large amount of data to train an algorithm to solve problems. But since it’s an algorithm, I was wondering if it's possible to export it. If I trained an AI with R, could I export a mathematical algorithm that could be imported by other users to use in their application, whether it’s written in R or another language?
So it’s like I’ve discovered a secret message decoding method. I don’t need to share the whole program for others to decode it. I just need to tell them the steps (algorithm) to decode it, and they can implement it in whatever application they want.<issue_comment>username_1: Yes, once you've trained a model you'll have the details of that model in your workspace.
e.g.
```
# Fit a naive Bayes classifier (e1071 package): predictors are all
# columns except the first, the class label is the first column.
B_Naive <- naiveBayes(train_set[, -c(1)], train_set[, 1])
```
Will give you an object `B_Naive` that can be 'exported'. These are the parameters of the model; you'll still need the naïve Bayes library (or whichever library you used).
Upvotes: 0 <issue_comment>username_2: If the 'algorithm' you're talking about is a neural network, then you can distribute the learned parameters/weights to anyone who wants to use it. This is how neural nets are normally 'exported': without all of the training data used to create them. Actually, this is done with many kinds of models ([parameterized ones](https://en.wikipedia.org/wiki/Parametric_family)).
In order to 'decode' the model, users would only have to know its structure. In the case of a neural network, they'd need to know the size of each layer, what activation functions were used, etc.
This is not possible with every type of ML model, however. Specifically, [non-parametric models](https://en.wikipedia.org/wiki/Nonparametric_statistics) and 'lazy' models make use of training data at inference time. They wouldn't be useful without their training data. Classifying an input by finding its [k nearest neighbors](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm), for example, would require having the training data.
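As a sketch of exporting a parameterized model, a (hand-made, hypothetical) tiny network's structure and weights can be serialized to JSON and imported by any language with a JSON parser:

```python
import json

# Everything a user needs to reproduce inference: layer sizes,
# activation function, weights and biases (values are made up).
model = {
    "layers": [2, 3, 1],
    "activation": "relu",
    "weights": [
        [[0.1, -0.2, 0.4], [0.3, 0.0, -0.1]],  # 2x3 layer
        [[1.0], [-1.0], [0.5]],                # 3x1 layer
    ],
    "biases": [[0.0, 0.1, 0.0], [0.2]],
}

exported = json.dumps(model)     # "export": just the parameters
restored = json.loads(exported)  # another application "imports" them
```

No training data travels with the file - only the learned parameters and the structure needed to 'decode' them.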
Upvotes: 3 [selected_answer] |
2019/03/27 | <issue_start>username_0: Can neural networks change or evolve other neural networks? Also, could evolutionary algorithms be applied to evolve neural networks?
For example, suppose that we have neural networks A and B. The neural network B changes the neural network A. If B "successfully" changed it, NN A will survive.<issue_comment>username_1: Yes, this is an active area of research as we speak. Both using classic algorithms (decision trees, random forests, Bayesian ensembles) as well as neural networks. This can also be done via evolutionary algorithms. I have personally used them for hyperparameter tuning in a few cases where squeezing out a couple of extra points of accuracy was key.
This is in fact what Google is doing with their AutoML system. They are using neural networks for architecture search.
Here is a Github repo with some interesting papers and links on the topic you are describing: <https://github.com/hibayesian/awesome-automl-papers>.
Upvotes: 4 [selected_answer]<issue_comment>username_2: [This answer](https://ai.stackexchange.com/a/11494/2444) points at some of the more modern approaches. This has been around for a long time in the form of [NeAT: Neuroevolution of Augmenting Topologies](http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf), originally described in Kenneth Stanley's 2002 paper.
NeAT is available as a package for many languages, including [Python](https://neat-python.readthedocs.io/en/latest/neat_overview.html), [Java](http://nn.cs.utexas.edu/?jneat), and [C++](http://nn.cs.utexas.edu/keyword?neat-c). The algorithm works as a form of [genetic programming](https://en.wikipedia.org/wiki/Genetic_programming). A population of networks is generated with simple, random, topologies. Then they are evaluated according to a loss function for a specific task. The poorly performing networks are discarded, and the better performing ones are intermixed to generate new variations. This process is iterated until the user wishes it to stop, and typically results in gradual improvement of average population performance against the loss function.
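The generate-evaluate-discard-vary loop described above can be sketched generically; this is a bare evolutionary loop on real-valued genomes (no speciation or topology changes, so it is far simpler than NEAT itself):

```python
import random

random.seed(0)

def evolve(fitness, genome_len=8, pop_size=20, generations=30):
    """Keep the better half each generation, refill with mutated copies."""
    pop = [[random.random() for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # evaluate and rank
        survivors = pop[: pop_size // 2]           # discard the worst half
        children = [[g + random.gauss(0.0, 0.1)    # mutated offspring
                     for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: genomes whose entries are close to 1 score higher.
best = evolve(lambda g: -sum((x - 1.0) ** 2 for x in g))
```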
Upvotes: 2 <issue_comment>username_3: <NAME> is correct when saying that NEAT itself is not neural networks evolving neural networks; what I believe is the closest framework to what the question is asking would be HyperNEAT <http://axon.cs.byu.edu/~dan/778/papers/NeuroEvolution/stanley3>\*\*.pdf
HyperNEAT operates in a very similar way to what you are describing; from a ten-thousand-foot view, the algorithm is as follows:
1. You lay out nodes for an RNN in a Cartesian space - 2d, 3d, whatever dimension you wish; this set of coordinates is called the substrate.
2. A CPPN is queried by passing in two coordinates at a time as input, which gives the CPPN a search space of a hypercube in twice the dimension the coordinates are in (for substrates in spaces > 2d this is very large).
3. The output of the CPPN is used to encode the connections, weights and biases between the RNN coordinates.
4. Then the RNN is evaluated by your fitness function, but the evolution (speciation, reproduction, etc.) is run on the CPPN that encoded the RNN. So you evolve a population of CPPN "genotypes" that encode RNN or CNN "phenotypes".
The third iteration of NEAT is ES-HyperNEAT, where all you need to lay out in the substrate is the input and output layers (in HyperNEAT you must lay out all hidden nodes of the substrate statically). It uses a subdivision tree to subdivide the search space and query the subdivided root coordinates of this tree with the CPPN, just like HyperNEAT, checking variance along the way to decide if a new node is in a "high-information" topological space, in order to "evolve" hidden nodes into the substrate (RNN).
Upvotes: 2 |
2019/03/27 | <issue_start>username_0: I don't know what people mean by 'vanilla policy gradient', but what comes to mind is REINFORCE, which is the simplest policy gradient algorithm I can think of. Is this an accurate statement?
By REINFORCE I mean this surrogate objective
$$ \frac{1}{m} \sum\_i \sum\_t \log \pi(a\_t|s\_t) \, R\_i,$$
where I index over the $m$ episodes and $t$ over time steps, and $R\_i$ is the total reward of the episode. It's also common to replace $R\_i$ with something else, like a baselined version $R\_i - b$ or use the future return, potentially also with a baseline $G\_{it} - b$.
However, I think even with these modifications to the multiplicative term, people would still call this 'vanilla policy gradient'. Is that correct?<issue_comment>username_1: By vanilla policy gradient, I think what is meant is normally any arbitrary policy gradient for the purposes of formalization (or whatever is trying to be communicated).
For example, let $J(\theta)$ be any policy objective function. Then our policy gradient would be $\begin{equation} \nabla\_\theta J(\theta) \end{equation}$ where the change in our policy is $\Delta \theta = \alpha \nabla\_\theta J(\theta) $ with $\alpha$ being our step size.
This will, of course, vary as notational convention and term definition can change between practitioners.
Upvotes: 1 <issue_comment>username_2: OpenAI has a series of Spinning Up pages on their website to educate people about AI. One of those defines Vanilla Policy Gradient.
[Vanilla Policy Gradient via OpenAI](https://spinningup.openai.com/en/latest/algorithms/vpg.html)
At the bottom of the page are reference papers that further discuss gradients.
Whether this is definitive for Vanilla Policy Gradients or not I do not know, but if many others refer to OpenAI for learning this subject, their definition will spread.
Upvotes: 1 <issue_comment>username_3: You can check the OpenAI [Introduction to RL series](https://spinningup.openai.com/en/latest/spinningup/rl_intro3.html#id14); they explain pretty neatly there what policy optimization is and how to derive it. I think that, usually, when we are talking about the REINFORCE algorithm, we are talking about the one described in [Sutton's book on reinforcement learning](https://www.andrew.cmu.edu/course/10-703/textbook/BartoSutton.pdf). It is described as a policy optimization algorithm maximizing the value function $v\_{\pi(\theta)}(s) = E[G\_t|S\_t = s]$ of the agent's initial state. Here $G\_t = \sum\_{k=0}^\infty \gamma^k R\_{t+k+1}$ is the $\gamma$-discounted return from the given state $s$ and time $t$. Or, shortly put:
$$
J(\theta) = v\_{\pi(\theta)}(s\_0) = E[G\_t|S\_t = s\_0]\\
\nabla J(\theta) = E\_\pi\left[G\_t\frac{\nabla \pi (A\_t |S\_t, \theta)}{\pi (A\_t |S\_t, \theta)}\right]
$$
But in the RL series of OpenAI, the algorithm described as vanilla policy gradient (if it is the one you are talking about) optimizes the finite-horizon undiscounted return $E\_{\tau \sim \pi} [R(\tau)]$, where $\tau$ ranges over possible trajectories, e.g.
$$
J(\theta) = E\_{\tau \sim \pi} [R(\tau)] \\
\nabla J(\theta) = E\_{\tau \sim\pi}\left[\sum\_{t=0}^T R(\tau) \frac{\nabla \pi (A\_t |S\_t, \theta)}{\pi (A\_t |S\_t, \theta)}\right]
$$
**They do look very similar in objective functions, but they are different.** The way the gradient ascent is performed differs strongly, since in the REINFORCE method the gradient ascent is performed once for each action taken in each episode, and the direction of ascent is taken as
$$
G\_t\frac{\nabla \pi (A\_t |S\_t, \theta)}{\pi (A\_t |S\_t, \theta)}
$$
so the update becomes
$$
\theta\_{t+1} = \theta\_{t} + \alpha G\_t\frac{\nabla \pi (A\_t |S\_t, \theta)}{\pi (A\_t |S\_t, \theta)}
$$
but in the VPG algorithm the gradient ascent is performed once over multiple episodes, and the direction of ascent is taken as the average
$$
\frac{1}{|\mathcal{T}|}\sum\_{\tau\in\mathcal{T}} \sum\_{t=0}^T R(\tau) \frac{\nabla \pi (A\_t |S\_t, \theta)}{\pi (A\_t |S\_t, \theta)}
$$
and the gradient-ascent step is
$$
\theta\_{t+1} = \theta\_{t} + \alpha \frac{1}{|\mathcal{T}|}\sum\_{\tau\in\mathcal{T}} \sum\_{t=0}^T R(\tau) \frac{\nabla \pi (A\_t |S\_t, \theta)}{\pi (A\_t |S\_t, \theta)}
$$
which looks a lot like what you have stated as the REINFORCE algorithm.
I admit that some form of mathematical equivalence can be derived between them, since the expectation over the policy and the expectation over trajectories sampled from the policy are practically the same. But the approaches differ at least in the way the ascent is computed.
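To make the difference concrete, here is a schematic NumPy sketch of the two update rules. This is an illustration under simplifying assumptions: the log-probability gradients and returns are taken as precomputed inputs, whereas real REINFORCE would recompute the gradient after every parameter change.

```python
import numpy as np

def reinforce_updates(theta, steps, alpha):
    """REINFORCE: one gradient-ascent step per time step of a single episode.
    `steps` is a list of (grad_log_pi, G_t) pairs for that episode."""
    for grad_log_pi, G_t in steps:
        theta = theta + alpha * G_t * grad_log_pi
    return theta

def vpg_update(theta, trajectories, alpha):
    """VPG: a single gradient-ascent step, averaged over whole trajectories.
    Each trajectory is (grads, R_tau): the per-step gradients of log pi and
    the undiscounted return of the whole trajectory."""
    direction = np.zeros_like(theta)
    for grads, R_tau in trajectories:
        for grad_log_pi in grads:
            direction += R_tau * grad_log_pi  # every step weighted by R(tau)
    return theta + alpha * direction / len(trajectories)
```

With a single one-step trajectory the two rules coincide; they diverge as soon as several episodes are averaged into one step.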
Upvotes: 2 |
2019/03/29 <issue_start>username_0: According to the [Wikipedia page of the physical symbol system hypothesis (PSSH)](https://en.wikipedia.org/wiki/Physical_symbol_system), this hypothesis seems to be a hotly debated topic in the philosophy of AI. But, since it's about formal systems, shouldn't it already be disproven by Gödel's theorems?
My question arises specifically because the PSSH was elaborated in the 1950s, while Gödel came much earlier, so at the time the Incompleteness theorems were already known; in which way does the PSSH deal with this fact? How does it "escape" the theorem? Or, in other words, how can it try to explain intelligence given the deep limitations of such formal systems?<issue_comment>username_1: I think your conceptualization of this is a bit off. All PSSH states is "A physical symbol system has the necessary and sufficient means for general intelligent action."
Gödel's theorems state 2 basic things:
1. No sufficiently powerful consistent formal system can prove its own consistency.
2. In any sufficiently powerful consistent formal system, there are true statements that cannot be proved within the system.
PSSH doesn't have too much to do with Gödel.
Upvotes: 0 <issue_comment>username_2: The PSSH is often attacked via either Godel's theorems or Turing's incomputability theorem.
However, both attacks have an implicit assumption: that to be intelligent is to be able to decide undecidable questions. It's really not clear that this is so.
Consider what Godel's theorems say, in essence:
1. "powerful" formal systems cannot prove, using only techniques from within the system, that they are self-consistent.
2. There are statements that are true that cannot be proven within a given "powerful" formal system.
Suppose that we allow both of those facts. The missing step in the argument is the following statements:
1. You need to be able to prove the consistency of your own reasoning system to be considered intelligent.
2. You need to be able to correctly reason out a proof of all true statements to be considered intelligent.
The main problem is, under this definition, humans are probably not considered intelligent! I certainly have no way to prove that my reasoning is sound and self-consistent. Moreover, it is objectively not so! I frequently believe contradictory things at the same time.
I also am not able to reason out proofs of all the statements that appear to be true, and it seems entirely plausible that I cannot do so because of the inherent limitations of the logical systems I'm reasoning with.
This is a contradiction. The overall argument was that one of these 4 statements is false:
1. Godel's theorems say symbol systems lack some important properties.
2. Intelligent things have the properties that Godel says symbol systems lack.
3. Humans are intelligent.
4. Humans can't do the things Godel says symbol systems can't do.
Some authors (like [<NAME>](https://en.wikipedia.org/wiki/John_Searle)) might argue the false premise is 4. Most modern AI researchers would argue that the false premise is 2. Since intelligence is a bit nebulous, which view is correct may rely on metaphysical assumptions, but most people agree on premises 1 & 3.
Upvotes: 3 [selected_answer]<issue_comment>username_3: Although there seems to be an apt analogy between Gödel's theorems and the PSSH, there is nothing formal linking the two together.
More concretely, Gödel's theorems are about systems that decide certain "truths" about mathematics, but, unless I am mistaken, the PSSH doesn't imply that the symbol system of the mind needs to decide truths. Though implicitly we humans do decide facts about math, there isn't a formal interpretation of how that might be done in the PSSH, so Gödel's theorems do not apply.
However, [this answer](https://ai.stackexchange.com/a/11529/2444) is still good, under the assumption that the formal system we are talking about does indeed decide certain truths about math.
Upvotes: 2 |
2019/03/29 <issue_start>username_0: Where or for what could genetic algorithms (GA) be used in the context of project management (PM)? I thought about task dispatching, but I'm looking for other potential uses of GAs in the context of PM.<issue_comment>username_1: Project management is most naturally modeled with game theory. You can find an [article here](https://pdfs.semanticscholar.org/3b96/427b507c429c9db96fdb3de8bc5edfe190ca.pdf) treating it as a cooperative game, and you can apply [opportunity cost](https://en.wikipedia.org/wiki/Opportunity_cost) to your budget; you only need to define your own rules of the game.
Upvotes: -1 <issue_comment>username_2: GA is well suited to optimize a non-linear fitness function with a lot of variables. Each vector that is a possible solution is evaluated with the fitness function that we want to optimize.
So, GA is well suited to optimize schedules for a lot of people, optimizing resources, etc.
Your mission is to define all the variables that you can tune and that affect your objectives, and to assign an estimated weight to every variable.
For example, create the total project cost as the fitness function to minimize. One variable will be the number of programmers, multiplied by their cost and the total number of months. If you don't want delays over 3 months, add a correction factor, +10000\*max(0, months-3), penalizing fitness values where months > 3, etc.
When you apply your GA algorithm to the fitness function, you will find some minimum for your function, if any exists, because the solution may not exist if you added too many constraints.
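The approach above can be sketched as a tiny GA in plain Python. All concrete numbers (a monthly cost of 5000, the delay penalty, an assumed workload of 24 person-months) and the chromosome layout `(n_programmers, months)` are made-up assumptions for illustration:

```python
import random

def fitness(n_programmers, months,
            monthly_cost=5000, delay_penalty=10000, deadline=3):
    """Total project cost to minimise (illustrative numbers only)."""
    cost = n_programmers * monthly_cost * months
    cost += delay_penalty * max(0, months - deadline)   # penalise delays past the deadline
    # soft constraint: assume the project needs 24 person-months of work
    cost += 100000 * max(0, 24 - n_programmers * months)
    return cost

def genetic_search(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [(rng.randint(1, 20), rng.randint(1, 12)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(*c))
        survivors = pop[: pop_size // 2]                # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a[0], b[1])                        # one-point crossover
            if rng.random() < 0.3:                      # mutation
                child = (max(1, child[0] + rng.choice([-1, 1])),
                         max(1, child[1] + rng.choice([-1, 1])))
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda c: fitness(*c))
```

Because the best individual always survives into the next generation, the cheapest plan found can only improve over time.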
Upvotes: 0 |
2019/03/29 <issue_start>username_0: *[Super](https://en.wiktionary.org/wiki/super#Etymology_1)* comes from the Latin and means "above".
>
> University of Oxford philosopher <NAME> defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest".
> [(wiki)](https://en.wikipedia.org/wiki/Superintelligence)
>
>
>
Bostrom's definition could be taken to imply this is a quantitative measure of degrees as a numeric relationship. (Under this definition, we have achieved narrow superintelligence, reduced to competency in a single task.)
Gibson, famously, sheds light on another aspect via [Wintermute](https://williamgibson.fandom.com/wiki/Wintermute) & [Neuromancer](https://williamgibson.fandom.com/wiki/Neuromancer_(AI)), where, once superintelligence is achieved, the AI just f-'s off and does its own thing, its motivations beyond human comprehension. (Essentially, "next-level" thinking.) The second measure is discrete and ordinal.
Is superintelligence a function of strength or a category?<issue_comment>username_1: I am reading Bostrom's book "[Superintelligence](https://rads.stackoverflow.com/amzn/click/com/1501227742)". I have only read the first 2 chapters, but I think he doesn't want to define super-intelligence in a precise way; he leaves the reader the option to define it in a "sensible" way. However, I think that, in his thoughts, there's the (clear) assumption that a super-intelligence will necessarily need to be general, so a super-intelligence will be an AGI.
Upvotes: 1 <issue_comment>username_2: I’ll have a stab at this.
Cognitive performance in narrow domains is determined by competency, efficiency and speed. Take calculating numbers: an extremely narrow domain, but the ability of a calculator to calculate numbers far exceeds normal human performance; it is much more competent in terms of speed. In a somewhat broader domain, AlphaGo has defeated top Go players, and Go is more difficult than chess and requires intuition. In fact, there is an instance where AlphaGo makes a long-term move that was [previously unimagined](https://en.m.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol?wprov=sfti1). In all domains, however, humans are well rounded, which is why human intelligence is called general intelligence. An AlphaGo or a calculator cannot speak eloquently or make music, but AIs are gaining pace in these areas too.
I agree with @username_1 that Bostrom wants to keep the interpretation of Superintelligence open. But if there is a rough category, these are-
1. **ANI-** `Artificial Narrow Intelligence`
2. **AGI-** `Artificial General Intelligence`: Where the AI's performance is on par with humans. After AGI, it quickly takes off to SI.
3. **SI-** `Superintelligence`: Superintelligence is beyond our imagination, we have not figured out yet what will a SI do, think or want.
While these categories are discrete, the functions of strength are not. I'd say they go from discrete to continuous: if you look at the computing-power plots that follow Moore's Law, a similar exponential graph can be drawn for AI's performance towards general intelligence. In that [graph](https://images.app.goo.gl/7a7e1ar6XwSMPh9M9), the AI's performance starts with discrete performance points, and then, as it takes off, it becomes continuous.
This is why the term `Singularity` is often associated with `Superintelligence`. I hope this answers your question.
Upvotes: 2 <issue_comment>username_3: It occurs to me that Superintelligence as an ordinal value could be explained as follows:
* Artificial General Intelligence is strong utility in all problems that humans can conceive of.
* Superintelligence as a category includes problems beyond what humans can conceive of.
An easy way to understand this is to look at the difference between dogs and humans. Dogs are optimized for certain tasks, such as tracking, but limited in terms of what they can conceive and what tasks they can engage in. Humans have superintelligence, compared to dogs, because we regularly engage with problems and tasks beyond the conception of canines.
Upvotes: 0 |
2019/03/30 <issue_start>username_0: Oftentimes I see the term deep reinforcement learning used to refer to RL algorithms that use neural networks, regardless of whether or not the networks are deep.
For example, [PPO](https://arxiv.org/pdf/1707.06347.pdf) is often considered a deep RL algorithm, but using a deep network is not really part of the algorithm. In fact, the example they report in the paper says that they used a network with only 2 layers.
This SIGGRAPH project ([DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills](https://xbpeng.github.io/projects/DeepMimic/2018_TOG_DeepMimic.pdf)) has the name *deep* in it and the title even says 'deep reinforcement learning', but if you read the paper, you'll see that their network uses only 2 layers.
Again, the paper [Learning to Walk via Deep Reinforcement Learning](https://arxiv.org/pdf/1812.11103.pdf) by researchers from Google and Berkeley, contains *deep RL* in the title, but if you read the paper, you'll see they used 2 hidden layers.
[Another SIGGRAPH project with deep RL in the title](https://www.cc.gatech.edu/%7Eaclegg3/projects/learning-dress-synthesizing.pdf). And, if you read it, surprise, 2 hidden layers.
In the paper [Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor](https://arxiv.org/pdf/1801.01290.pdf), if you read table 1 with the hyperparameters, they also used 2 hidden layers.
Is it standard to just call deep RL to any RL algorithm that uses a neural net?<issue_comment>username_1: >
> Is it standard to just call deep RL to any RL algorithm that uses a neural net?
>
>
>
Yes, it seems to have become standard practice to label RL + any NN "Deep Reinforcement Learning". It is not a formalised term.
The whole "Deep Learning" movement started this decade is as much a marketing term as a scientific one. It *is* however based on the discovery of real improvements in neural network architecture and training approaches.
You may find that some (or even most) of these shallower networks will use improvements designed in the last decade or so, and also associated with deeper networks, such as Xavier initialization, ReLU activation, the Adam optimizer.
As a personal opinion, I would say that, if a published experiment uses just 1 or 2 hidden layers, and does not make use of any of these recent advances, then the "Deep" label is almost entirely a branding exercise. There were advances with such networks much longer ago. For instance [the TD-Gammon paper](https://cling.csd.uwo.ca/cs346a/extra/tdgammon.pdf) is from 1995. For TD-Gammon, the authors used reinforcement learning and a NN with one hidden layer to create a Backgammon player that played better than any human player. This was well before "Deep Learning" was a term used to describe such networks, and the term "Deep Reinforcement Learning" does not appear in that paper.
However, because "Deep Learning" is such a loose branding term, there is also an argument that all these older approaches, and pretty much all neural networks with hidden layers, should be included. [Wikipedia's definition for Deep Learning](https://en.wikipedia.org/wiki/Deep_learning#Definition) says:
>
> Deep learning is a class of machine learning algorithms that:
>
>
> * use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input.
> * learn in supervised (e.g., classification) and/or unsupervised (e.g., pattern analysis) manners.
> * learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
>
>
>
Using that definition would include all the papers you cite. You don't need a 50 layer Resnet architecture to qualify. And the branding exercise makes more sense under that definition, because the newly invented techniques have made such systems that much more viable and worthy of investment (of time & effort as well as financially).
Upvotes: 2 <issue_comment>username_2: Even after several years of success of deep learning systems (i.e. neural networks trained with gradient descent and back-propagation), as far as I know, there is not yet a consensus on what constitutes a deep neural network. Some people could use a neural network with 2 hidden layers and call it deep (like in your case), but other people may just dedicate the adjective *deep* to refer to neural networks with 10, 100, or more hidden layers. In fact, there are some good reasons to associate the term *deep* only to neural networks that have a significant number of hidden layers (e.g. 100): for example, the exploding (or vanishing) gradient problem does not typically arise if you only have one hidden layer but can easily occur with many (e.g. 100) hidden layers.
Nevertheless, a neural network with at least one hidden layer can approximate any continuous function, given enough (but finite number of) units (or neurons) in the layers. See the [universal approximation theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem). For this reason, we could start denoting any such neural network as deep, but, although this rule would exclude perceptrons (which can only approximate linear functions, and nobody would probably call them deep anyway), this rule would be a bit redundant or useless (i.e. we may just not use the adjective *deep* to start with).
In your case, the rule that the authors are using seems to be the following: if it contains more hidden layers than the bare minimum (i.e. 1) to approximate any continuous function, then let's denote it as *deep*.
Upvotes: 3 |
2019/03/30 <issue_start>username_0: If the AI's goal is to serve humans and protect them (if this ever happens) and AI someday realizes that humans destroy themselves, will it try to control people for their own good, that is, will it control man's will so that he does not destroy himself?
2019/03/31 <issue_start>username_0: I've implemented A2C. I'm now wondering why we would have multiple actors walk around the environment and gather rewards, why not just have a single agent run in an environment vector?
I personally think this would be more efficient, since all actions can then be calculated together by only going through the network once. I've done some tests, and this seems to work fine. One reason I can think of to use multiple actors is implementing the algorithm across many machines, in which case we can have one agent per machine. What other reasons are there to prefer multiple actors?
As an example, here is an environment vector based on OpenAI's gym:
```
import gym

class GymEnvVec:
    def __init__(self, name, n_envs, seed):
        self.envs = [gym.make(name) for i in range(n_envs)]
        [env.seed(seed + 10 * i) for i, env in enumerate(self.envs)]

    def reset(self):
        return [env.reset() for env in self.envs]

    def step(self, actions):
        # transpose the per-env (obs, reward, done, info) tuples into batched lists
        return list(zip(*[env.step(a) for env, a in zip(self.envs, actions)]))
```<issue_comment>username_1: I believe that if you run a single agent in multiple parallel environments, you will often get similar actions in similar states. The reason behind multiple agents is that you can have different agents with different parameters, as well as different explicit exploration policies, so your exploration will be better and you will learn more from the environment (see more of the state space). With a single agent you can't really achieve that: you would have a single exploration policy and a single parameter set, and most of the time you would be seeing similar states (at least after a while). You would be speeding up your learning process, but that's just because you're running multiple environments in parallel (compared to regular actor-critic or Q-learning). I think the quality of learning would be better with multiple different actors.
Upvotes: 2 <issue_comment>username_2: When you run multiple actors you actually make copies of your current agent and run these copies in non-vector environments. The environments are usually also the same.
In the case of Atari, yes of course, using a vectorized environment is much more efficient. But if you want to train an agent to play DotA, or drive a car, then there is no vectorized environment.
Running *different* actors instead of copies of the same agent is not really possible with on-policy algorithms. You have to collect data using one and the same policy, otherwise it is not on-policy.
A slight exception to this is the case of [asynchronous training](https://arxiv.org/abs/1602.01783). In this case you start with identical copies of your agent, but update these copies asynchronously, and only once in a while sync the policies. However, you still have to make sure the actors' policies don't deviate too much from one another. But note that the synchronous approach may perform better than the asynchronous one. And again, in case you can vectorize your environment, I don't really see the need to do this.
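For reference, the batching pattern of the question's `GymEnvVec` can be exercised with any object exposing the same `reset`/`step` interface; the toy `CountEnv` below is purely illustrative and stands in for a real Gym environment:

```python
class CountEnv:
    """Toy stand-in for a Gym env: the state is a step counter, the reward echoes the action."""
    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        done = self.t >= 3
        return self.t, float(action), done, {}  # (obs, reward, done, info)

class EnvVec:
    """Same batching idea as GymEnvVec, but taking ready-made env instances."""
    def __init__(self, envs):
        self.envs = envs

    def reset(self):
        return [env.reset() for env in self.envs]

    def step(self, actions):
        # transpose the per-env (obs, reward, done, info) tuples into batched tuples
        return list(zip(*[env.step(a) for env, a in zip(self.envs, actions)]))

vec = EnvVec([CountEnv() for _ in range(4)])
first_obs = vec.reset()
obs, rewards, dones, infos = vec.step([1, 2, 3, 4])
```

A single forward pass over the batched observations then yields all actions at once, which is exactly the efficiency argument made in the question.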
Upvotes: 0 |
2019/04/01 <issue_start>username_0: I am taking AI this semester and we have a semester project that will last 4 weeks. We can choose just about anything.
So, what are some possible semester projects that can be finished in a 4-week time-frame?
Some background information: I am a graduate student in CS, but this is my first AI course. My research area is in the space of data mining and analytics. I am open to doing anything that seems interesting and creative.<issue_comment>username_1: Welcome to AI.SE @Kate\_Catelena!
I teach AI courses at the undergraduate level, and so have seen a lot of semester projects over the years. Here are some templates that often lead to exciting outcomes:
1. Pick a *new* board or card game, and write a program to play it. Your course has probably covered Adversarial Search, and may also have covered Monte Carlo Tree Search, or self-play reinforcement learning approaches. These projects are often fun to mark and creative because they are easy enough to be well done, and yet there are always new, exciting, domains to apply these algorithms to. Some examples of past projects that I thought were neat were an AI to play the boardgame Tac (mostly A\* Search), and an AI to play the card game Love Letter (mostly Counter-factual minimax regret, the algorithm used to solve poker).
2. Pick a question that you would like to know the answer to, that could be addressed with machine learning. Then implement your own ML algorithm (decision tree learners are fairly easy), gather your own data, and show a result. Examples of interesting projects I've seen in the past are using ML to find out which of a number of factors most strongly influenced a students' subjective quality of sleep; and which items are most commonly purchased along with camping supplies (using association rule mining).
3. Anything involving reinforcement learning. RL projects are always neat to see if accompanied by a visualization showing the learner's behavior at different stages. A strong past project involved a student simply replicating Sutton & Barto's Acrobot experiments with their own implementation of the SARSA-Lambda algorithm. Other things that might be neat include making a trainable "pet" that the user can influence, or solving games using self-play.
4. Theoretical results might seem intimidating but are often more accessible than one might think, especially if your discrete math skills are strong. I have had many student projects where the student went away to look at theory papers in ML or Multiagent Systems, found a suggestion in the future work sections that wasn't a big result, but that was fairly easy to prove, and proved it. Sometimes these are even publishable.
5. Replications. Go find an interesting AI paper (use scholar.google.com), and then see if you can do exactly what the authors suggest and if you get the same result or not. Then, if you have time, see if you can improve on the results. These are often most interesting when you find a paper written in a different field that uses AI. Often the authors of such papers know less about AI than you do, and so it can be fairly easy to improve on their results. I have had several students do projects like this to great effect.
Those categories are a bit vague, but remember: AI touches almost anything. Pick your favorite hobby, and see whether you can relate it to AI using one of the approaches above. Nothing makes a project stand out like one that applies AI to solve some real issues in an exciting domain. Good luck!
Upvotes: 3 <issue_comment>username_2: Here are some possible options
1. Music Generation using GA/MA
2. Open AI's gym projects
3. 2048 on RL and search algorithms
4. Fixing bugs in the source code of some AI software project
Upvotes: 2 |
2019/04/03 <issue_start>username_0: I will explain my question in relation to chess, but it should be relevant for other games as well:
In short: Is it possible to combine the techniques used by AlphaZero with those used by, say, Stockfish? And if so, has it been attempted?
I have only brief knowledge of how AlphaZero works, but from what I've understood, it basically takes the board state as input to a neural net, possibly combined with Monte Carlo methods, and outputs a board evaluation or preferred move. To me, this really resembles the heuristic function used by traditional chess engines like Stockfish.
So, from this I will conclude (correct me if I'm wrong) that AlphaZero evaluates the current position, but uses a very powerful heuristic. Stockfish on the other hand searches through lots of positions from the current one first, and then uses a less powerful heuristic when a certain depth is reached.
Is it therefore possible to combine these approaches by first using alpha-beta pruning, and then using AlphaZero as some kind of heuristic when the max depth is reached? To me it seems like this would be better than just evaluating the current position like (I think) AlphaZero does. Will it take too much time to evaluate? Or is it something I have misunderstood? If it's possible, has anyone attempted it?<issue_comment>username_1: >
> So, from this I will conclude (correct me if I'm wrong) that AlphaZero
> evaluates the current position, but uses a very powerful heuristic.
> Stockfish on the other hand searches through lots of positions from
> the current one first, and then uses a less powerful heuristic when a
> certain depth is reached.
>
>
>
This is wrong. Like Stockfish, AlphaZero also "searches through lots of positions from the current one." You hint at this yourself when you say "possibly combined with Monte Carlo methods", but it seems you don't understand exactly what that means, so let me explain:
Stockfish searches through the tree of future moves using an algorithm called Minimax (actually a variant called alpha beta pruning), whereas AlphaZero searches through future moves using a different algorithm called Monte Carlo Tree Search (MCTS). Minimax is well suited to quick evaluation functions, whereas MCTS explores fewer moves and thus can handle a more expensive evaluation function. Further, MCTS works not with exact values but with probabilities, and AlphaZero uses the Neural Net not just for the value of moves but to guide which moves to explore next (it functionally is actually 2 networks, a policy network and a value network).
To be sure, that is somewhat of a simplification. It is not impossible to use a neural net in conjunction with Minimax. It's just that, in practice, due to the nature of the algorithm, it's simply too expensive.
Upvotes: 0 <issue_comment>username_2: Yes it's possible to to combine AlphaZero with Minimax methods (including alpha-beta pruning). AlphaZero itself is combination of Monte Carlo Tree Search (MCTS) and Deep Network, where MCTS is used to get data to train network and network used for tree leafs evaluation (instead of rollout as in classical MCTS). It's possible to combine selection-expansion part of AlphaZero MCTS with Minimax the same way as it was done for classical MCTS - ["Monte-Carlo Tree Search and Minimax Hybrids", pdf](https://dke.maastrichtuniversity.nl/m.winands/documents/paper%2049.pdf).
Upvotes: 2 <issue_comment>username_3: AlphaGo uses MCTS. AlphaZero does not.
Source: [Mastering the Game of Go without Human Knowledge](https://discovery.ucl.ac.uk/id/eprint/10045895/1/agz_unformatted_nature.pdf)
Upvotes: 0 |
2019/04/04 <issue_start>username_0: I'm learning the DDPG algorithm by following this link: [Open AI Spinning Up document on DDPG](https://spinningup.openai.com/en/latest/algorithms/ddpg.html#the-policy-learning-side-of-ddpg), where it is written
>
> In order for the algorithm to have stable behavior, the replay buffer should be large enough to contain a wide range of experiences, but it may not always be good to keep everything.
>
>
>
What does this mean? Is it related to tuning the batch-size parameter in the algorithm?<issue_comment>username_1: >
> In order for the algorithm to have stable behavior, the replay buffer should be large enough to contain a wide range of experiences, but it may not always be good to keep everything.
>
>
>
The larger the experience replay, the less likely you will sample correlated elements, hence the more stable the training of the NN will be. However, a large experience replay also requires a lot of memory and it might slow training. So, there is a trade-off between training stability (of the NN) and memory requirements.
The authors of the linked article state (right after the sentence above)
>
> If you only use the very-most recent data, you will overfit to that and things will break; if you use too much experience, you may slow down your learning. This may take some tuning to get right.
>
>
>
Upvotes: 3 <issue_comment>username_2: You need to read this 2020 paper by Deepmind:
["Revisiting Fundamentals of Experience Replay"](http://acsweb.ucsd.edu/%7Ewfedus/pdf/replay.pdf)
They explicitly test the size of the experience replay, the replay-ratio of each experience and other parameters.
Also, to add to the answer by @username_1
Assume you implement experience replay as a buffer where the newest memory is stored instead of the oldest. Then, if your buffer contains 100k entries, any memory will remain there for exactly 100k iterations.
Such a buffer is simply a way to "see" what was up to 100k iterations ago.
After the first 100k iterations you fill the buffer and begin "moving" it, much like a sliding window, by inserting new memories instead of the oldest.
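Such a sliding-window buffer can be sketched in a few lines (the names are illustrative; real implementations also batch states, actions and rewards into arrays):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience replay: once full, each new memory
    overwrites the oldest one, so the buffer is a sliding window over
    the most recent `capacity` transitions."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # deque drops the oldest item automatically

    def add(self, transition):
        self.buffer.append(transition)        # e.g. a (s, a, r, s_next, done) tuple

    def sample(self, batch_size):
        # uniform sampling mixes old and new experience in every mini-batch
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```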
---
The size of the buffer (relative to the total number of iterations you plan to ever train with) depends on "how much you believe your network architecture is susceptible to [catastrophic forgetting](https://ai.stackexchange.com/a/13293/27042)".
A tiny buffer might force your network to only care about what it saw recently.
But an excessively large buffer might take a long time to "become refreshed" with good trajectories, when they finally start to be discovered. So the network would be like a university student whose book shelf is diluted with first-grade school books.
The student might have already decided that he/she wishes to become a programmer, so re-reading those primary school books has little benefit (time could have been spent more productively on programming literature) + it takes a long time to replace those with relevant university books.
Upvotes: 3 |
2019/04/04 | 577 | 1,903 | <issue_start>username_0: I am making a NN library without any other external NN library, so I am implementing all layers, including the flatten layer, and algorithms (forward and backward pass) from scratch. I know the forward implementation of the flatten layer, but is the backward just reshaping it or not? If yes, can I just call a simple NumPy's `reshape` function to reshape it?<issue_comment>username_1: Yes, a simple reshape would do the trick. A flattening layer is just a tool for reshaping data/activations to make them compatible with other layers/functions. The flattening layer doesn't change the activations themselves, so there is no special backpropagation handling needed other than changing back the shape.
Upvotes: 2 [selected_answer]<issue_comment>username_2: The `Flatten` layer has no learnable parameters in itself (the operation it performs is fully defined by construction); still, it has to propagate the gradient to the previous layers.
In general, the `Flatten` operation is well-posed, as whatever is the input shape you know what the output shape is.
When you backpropagate, you are supposed to do an "Unflatten", which maps a flattened tensor into a tensor of a given shape, and you know what that specific shape is from the forward pass, so it is also a well-posed operation.
More formally
Say you have `Img1` in input of your `Flatten` layer
$$
\begin{pmatrix}
f\_{1,1}(x; w\_{1,1}) & f\_{1,2}(x; w\_{1,2}) \\
f\_{2,1}(x; w\_{2,1}) & f\_{2,2}(x; w\_{2,2})
\end{pmatrix}
$$
So, in the output you have
$$
\begin{pmatrix}
f\_{1,1}(x; w\_{1,1}) & f\_{1,2}(x; w\_{1,2}) & f\_{2,1}(x; w\_{2,1}) & f\_{2,2}(x; w\_{2,2})
\end{pmatrix}
$$
When you compute the gradient you have
$$
\frac{df\_{i,j}(x; w\_{i,j})}{dw\_{i,j}}
$$
and everything in the same position as in the forward pass, so the unflatten maps from the `(1, 4)` tensor to the `(2, 2)` tensor.
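As a concrete sketch of the two passes (hypothetical function names, using NumPy): the forward pass caches the input shape, and the backward pass is just the inverse reshape.

```python
import numpy as np

def flatten_forward(x):
    """Flatten everything but the batch dimension; the input shape is
    cached so the backward pass can undo the operation."""
    cache = x.shape
    return x.reshape(x.shape[0], -1), cache

def flatten_backward(grad_out, cache):
    """The backward pass ("unflatten") is just the inverse reshape:
    each gradient entry moves back to its original position unchanged."""
    return grad_out.reshape(cache)

x = np.arange(24).reshape(2, 3, 4)                 # batch of 2, feature maps of shape (3, 4)
y, cache = flatten_forward(x)                      # y has shape (2, 12)
grad_x = flatten_backward(np.ones_like(y), cache)  # back to shape (2, 3, 4)
```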
Upvotes: 2 |
2019/04/04 | 495 | 2,122 | <issue_start>username_0: Hinton doesn't believe in the pooling operation ([video](http://www.youtube.com/watch?v=rTawFwUvnLE&t=10m12s)). I also heard that many max-pooling layers have been replaced by convolutional layers in recent years, is that true?<issue_comment>username_1: Max pooling isn't bad, it just depends of what are you using the convnet for. For example if you are analyzing objects and the position of the object is important you shouldn't use it because the translational variance; if you just need to detect an object, it could help reducing the size of the matrix you are passing to the next convolutional layer. So it's up to the application you are going to use your CNN.
Upvotes: 1 <issue_comment>username_2: In addition to username_1's answer, I would like to add some more detail. At best, max pooling is a less-than-optimal method to reduce feature-matrix complexity (and therefore over/under-fitting) and to improve model generalization (for translation-invariant classes).
However, as username_1 begins to hint at, there are problems with this method. Hinton perhaps sums up the issues in [his talk here](https://www.youtube.com/watch?v=rTawFwUvnLE) on what is wrong with CNNs. This also serves as motivation for his novel architecture, capsule networks (or just capsules).
As he talks about, the main problem is not translational variance per se but rather pose variance. CNNs with max pooling are more than capable of handling simple transformations like flips or rotation without too much trouble. The problem comes with complicated transforms, as features learned about a chair facing forwards, will not be too helpful towards class representation if the real-world examples contain chairs upside down, to the side, etc.
However, there is much work being done here, mostly constrained to two areas: novel architectures/methods, and inference of the 3D structure from images (via CNN tweaks). This problem has been one of the bigger motivators for researchers throughout the decades, even [David Marr](https://en.wikipedia.org/wiki/David_Marr_(neuroscientist)) with his primal sketches.
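As a tiny illustration of the positional-information loss discussed above, here is a toy non-overlapping 2x2 max-pooling sketch in NumPy (real frameworks also handle strides, padding, batches and channels): two maps whose strongest activation sits at different positions inside the same window pool to identical outputs.

```python
import numpy as np

def max_pool_2x2(x):
    """Non-overlapping 2x2 max pooling over a single 2D feature map."""
    h, w = x.shape
    x = x[:h - h % 2, :w - w % 2]                     # crop odd edges
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# The 9 sits in different corners of the same 2x2 window, yet both maps
# pool to the same output: the exact location is discarded.
a = np.array([[9, 0],
              [0, 0]])
b = np.array([[0, 0],
              [0, 9]])
```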
Upvotes: 2 |
2019/04/05 | 882 | 3,556 | <issue_start>username_0: I read that functions are used as activation functions only when they are differentiable. What about the unit step activation function? So, is there any other reason a function can be used as an activation function (apart from being differentiable)?<issue_comment>username_1: I'm not completely sure what your question is. Do you mean:
Q. why should we use activation function?
Ans: We need to introduce non-linearity to the network. Otherwise, multiple layers are no different from a single-layer network. (This is obvious once we write things in matrix form: if we have two linear layers with weights $W\_1$ and $W\_2$, the two layers are no different from a single layer with weight $W\_2 W\_1$.)
Q. why they need to be differentiable?
Ans: Just for sake that we can back-propagate gradients back to earlier layers. Note that back-propagation is nothing but the chain rule in calculus. Say $f(\cdot)$ is an activation function in one layer and the output of that activation function is $\bf y$ and the input is ${\bf u}=W \bf x$, where $\bf x$ is output from the previous layer and mix with weights $W$ in current layer. Of course, the final loss $L$ will depend on ${\bf y} = f({\bf u})= f(W {\bf x})$. Say, loss $L=g(\bf y)$ somehow. To train the weights $W$, we have to find the gradient $\frac{\partial L}{\partial W}$ so that we can adjust weight $W$ to minimize $L$. But $\frac{\partial L}{\partial W}=\frac{\partial g({\bf y})}{\partial W}=\frac{\partial g({\bf y})}{\partial \bf y}\frac{\partial {\bf y}}{\partial {\bf u}}\frac{\partial {\bf u}}{\partial W}$. Each of these product terms can be computed locally and will be accumulatively multiplied as we apply backprop. And note that the middle term $\frac{\partial {\bf y}}{\partial {\bf u}}=\frac{\partial f({\bf u})}{\partial {\bf u}}$ is just "derivative" of $f(\cdot)$, thus we require the activation function to be differentiable and "informative"/non-zero (at least most of the time). Note that ReLU is not differentiable everywhere and that is why researchers (at least <NAME>) worried about that when they first tried to adopt ReLU. You may check out the interview of Bengio by <NAME> for that.
Q. why step function is a bad activation function?
Ans: Note that the step function is differentiable almost everywhere, but it is not "informative": in the flat regions where it is differentiable, the derivative is simply zero. Consequently, any later layer's gradient (information) gets cut off as it passes back through a step-function activation.
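A small NumPy sketch of this point (the function definitions are illustrative): the step function's derivative is zero everywhere it is defined, so no gradient signal survives backprop, while ReLU's conventional (sub)derivative stays non-zero for positive inputs.

```python
import numpy as np

def step(x):
    return (x > 0).astype(float)

def step_grad(x):
    # zero at every point where the step function is differentiable:
    # the gradient carries no information at all
    return np.zeros_like(x)

def relu(x):
    return np.maximum(0.0, x)

def relu_grad(x):
    # non-zero for all positive inputs; at x = 0 the derivative is
    # undefined and is conventionally set to 0
    return (x > 0).astype(float)

x = np.array([-2.0, -0.5, 0.5, 2.0])
```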
Upvotes: 2 <issue_comment>username_2: Differentiability for activation functions is desirable but not necessary since you can reformulate the derivatives at the non-differentiable points, like in the case of ReLU.
Properties Needed:
* To take advantage of Universal Approximation Theorem, and to take advantage of the modeling capacity it promises, the activation functions need to be continuous or Borel measurable (if this term is confusing just think of this as common functions), discriminatory (similarly discriminatory means the function doesn't just produce 0 all the time when integrating w.r.t. another function) and nonpolynomial (nonlinear).
* Also with the recent research works it is beneficial to have it monotonically increasing.
Recent theoretical analysis of the activation functions gives the edge to ReLU with some preliminary theoretical guarantees like this "Convergence Analysis of Two-layer Neural Networks with ReLU Activation" <https://arxiv.org/abs/1705.09886>.
Upvotes: 0 |
2019/04/05 | 437 | 1,684 | <issue_start>username_0: I am new to NLP realm. If you have an input text "The price of orange has increased" and output text "Increase the production of orange". Can we make our RNN model to predict the output text? Or what algorithm should I use?<issue_comment>username_1: In your case you can use either RNN (especially BiLSTM with ELMo and attention mechanism for better accuracy) or Transformer based architectures (the best of them today is BERT). But for both cases you need data to train the model (i.e sequences of input/output like in your question).
I believe the best choice in your case is BERT, as it achieves state-of-the-art performance on most NLP tasks and is already pretrained, so you don't need massive data to retrain the model. Also, BERT is pretrained on "Next Sentence Prediction" and allows "2 separate sentences" as input, which helps a lot in your case. The only drawback, compared to other methods, is that you need to fine-tune the model, so it's not a completely "ready to use" model.
*For more information:*
Here is the github repository of BERT: [BERT-repo](https://github.com/google-research/bert). For quick explanation check: [The Illustrated BERT](http://jalammar.github.io/illustrated-bert/). For detailed explanation see BERT paper: [BERT-paper](https://arxiv.org/pdf/1810.04805.pdf).
Upvotes: 1 <issue_comment>username_2: sounds like a job for Sequence-to-sequence or seq2seq
for example [tf-seq2seq](https://google.github.io/seq2seq/)
You basically use an RNN as an encoder (it reduces a sequence of words to a vector), and another RNN as a decoder (it takes the encoded vector as input to generate a new sequence of words).
Upvotes: 0 |
2019/04/05 | 3,184 | 12,997 | <issue_start>username_0: The tabular Q-learning algorithm is guaranteed to find the optimal $Q$ function, $Q^\*$, provided the following conditions (the [Robbins-Monro conditions](https://en.wikipedia.org/wiki/Stochastic_approximation)) regarding the learning rate are satisfied
1. $\sum\_{t} \alpha\_t(s, a) = \infty$
2. $\sum\_{t} \alpha\_t^2(s, a) < \infty$
where $\alpha\_t(s, a)$ means the learning rate used when updating the $Q$ value associated with state $s$ and action $a$ at time time step $t$, where $0 \leq \alpha\_t(s, a) < 1$ is assumed to be true, for all states $s$ and actions $a$.
Apparently, given that $0 \leq \alpha\_t(s, a) < 1$, in order for the two conditions to be true, all state-action pairs must be visited infinitely often: this is also stated in the book [Reinforcement Learning: An Introduction](http://incompleteideas.net/book/bookdraft2018mar21.pdf), apart from the fact that this should be widely known and it is the rationale behind the usage of the $\epsilon$-greedy policy (or similar policies) during training.
A complete proof that shows that $Q$-learning finds the optimal $Q$ function can be found in the paper [Convergence of Q-learning: A Simple Proof](http://users.isr.ist.utl.pt/~mtjspaan/readingGroup/ProofQlearning.pdf) (by <NAME>). He uses concepts like [contraction mapping](https://en.wikipedia.org/wiki/Contraction_mapping) in order to define the optimal $Q$ function (see also [What is the Bellman operator in reinforcement learning?](https://ai.stackexchange.com/q/11057/2444)), which is a fixed point of this contraction operator. He also uses a theorem (n. 2) regarding the random process that converges to $0$, given a few assumptions. (The proof might not be easy to follow if you are not a math guy.)
If a neural network is used to represent the $Q$ function, do the convergence guarantees of $Q$-learning still hold? Why does (or not) Q-learning converge when using function approximation? Is there a formal proof of such non-convergence of $Q$-learning using function approximation?
I am looking for different types of answers, from those that give just the intuition behind the non-convergence of $Q$-learning when using function approximation to those that provide a formal proof (or a link to a paper with a formal proof).<issue_comment>username_1: Here's an intuitive description answer:
Function approximation can be done with any parameterizable function. Consider the problem of a $Q(s,a)$ space where $s$ is the positive reals, $a$ is $0$ or $1$, and the true Q-function is $Q(s, 0) = s^2$, and $Q(s, 1)= 2s^2$, for all states. If your function approximator is $Q(s, a) = m\*s + n\*a + b$, there exists no parameters which can accurately represent the true $Q$ function (we're trying to fit a line to a quadratic function). Consequently, even if you chose a good learning rate, and visit all states infinitely often, your approximation function will never converge to the true $Q$ function.
And here's a bit more detail:
1. Neural networks *approximate* functions. A function can be approximated to greater or lesser degrees by using more or less complex polynomials to approximate it. If you're familiar with Taylor Series approximation, this idea should seem pretty natural. If not, think about a function like a sine-wave over the interval [0-$\pi/2$). You can approximate it (badly) with a straight line. You can approximate it better with a quadratic curve. By increasing the degree of the polynomial we use to approximate the curve, we can get something that fits the curve more and more closely.
2. Neural networks are [universal function approximators](https://en.wikipedia.org/wiki/Universal_approximation_theorem). This means that, if you have a function, you can also make a neural network that is deep or wide enough that it can approximate the function you have created to an arbitrarily precise degree. However, any specific network topology you pick will be unable to learn *all* functions, unless it is infinitely wide or infinitely deep. This is analogous to how, if you pick the right parameters, a line can fit any two points, but not any 3 points. If you pick a network that is of a certain finite width or depth, I can always construct a function that needs a few more neurons to fit properly.
3. Q-learning's bounds hold only when the representation of the Q-function is *exact*. To see why, suppose that you chose to approximate your Q-function with a linear interpolation. If the true function can take any shape at all, then clearly the error in our interpolation can be made unboundedly large simply by constructing a XOR-like Q-function function, and no amount of extra time or data will allow us to reduce this error. If you use a function approximator, and the true function you try to fit is *not* something that the function can approximate arbitrarily well, then your model will not converge properly, even with a well-chosen learning rate and exploration rate. Using the terminology of computational learning theory, we might say that the convergence proofs for Q-learning have implicitly assumed that the true Q-function is a member of the hypothesis space from which you will select your model.
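The toy example from the first paragraph can be checked numerically: even the *optimal* least-squares parameters of the linear model $Q(s, a) = m s + n a + b$ leave a large irreducible error against the quadratic true Q-function, so no learning rate or exploration schedule can make the approximation converge to $Q^\*$ (a sketch; the sampling grid below is arbitrary):

```python
import numpy as np

# True Q-function from the example: Q(s, 0) = s^2 and Q(s, 1) = 2 s^2.
s = np.linspace(0.0, 5.0, 50)
states = np.concatenate([s, s])
actions = np.concatenate([np.zeros_like(s), np.ones_like(s)])
q_true = np.where(actions == 0, states ** 2, 2 * states ** 2)

# Best possible linear fit Q(s, a) = m*s + n*a + b, via least squares.
X = np.stack([states, actions, np.ones_like(states)], axis=1)
params, *_ = np.linalg.lstsq(X, q_true, rcond=None)

# The residual of the *optimal* parameters is the irreducible error.
max_error = np.abs(q_true - X @ params).max()
```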
Upvotes: 4 <issue_comment>username_2: As far as I'm aware, it is still somewhat of an open problem to get a really clear, formal understanding of exactly why / when we get a lack of convergence -- or, worse, sometimes a danger of divergence. It is typically attributed to the **"deadly triad"** (see 11.3 of the second edition of Sutton and Barto's book), the combination of:
1. Function approximation, AND
2. Bootstrapping (using our own value estimates in the computation of our training targets, as done by $Q$-learning), AND
3. Off-policy training ($Q$-learning is indeed off-policy).
That only gives us a (possibly non-exhaustive) description of cases in which we have a lack of convergence and/or a danger of divergence, but still doesn't tell us **why** it happens in those cases.
---
[John's answer](https://ai.stackexchange.com/a/11681/1641) already provides the intuition that part of the problem is simply that the use of function approximation can easily lead to situations where your function approximator isn't powerful enough to represent the true $Q^\*$ function, there may always be approximation errors that are impossible to get rid of without switching to a different function approximator.
Personally, I think this intuition does help to understand why the algorithm cannot guarantee convergence to the optimal solution, but I'd still intuitively expect it to maybe be capable of "converging" to some "stable" solution that is the best possible approximation given the restrictions inherent in the chosen function representation. Indeed, this is what we observe in practice when we switch to on-policy training (e.g. Sarsa), at least in the case with linear function approximators.
---
My own intuition with respect to this question has generally been that an important source of the problem is **generalisation**. In the tabular setting, we have completely isolated entries $Q(s, a)$ for all $(s, a)$ pairs. Whenever we update our estimate for one entry, it leaves all other entries unmodified (at least initially -- there may be some effects on other entries in future updates due to bootstrapping in the update rule). Update rules for algorithms like $Q$-learning and Sarsa may sometimes update towards the "wrong" direction if we get "unlucky", but **in expectation**, they generally update towards the correct "direction". Intuitively, this means that, in the tabular setting, **in expectation** we will slowly, gradually fix any mistakes in any entries in isolation, without possibly harming other entries.
With function approximation, when we update our $Q(s, a)$ estimate for one $(s, a)$ pair, it can potentially also affect **all** of our other estimates for **all** other state-action pairs. Intuitively, this means that we no longer have the nice isolation of entries as in the tabular setting, and "fixing" mistakes in one entry may have a risk of adding new mistakes to other entries. However, like John's answer, this whole intuition would really also apply to on-policy algorithms, so it still doesn't explain what's special about $Q$-learning (and other off-policy approaches).
---
A very interesting recent paper on this topic is [Non-delusional Q-learning and Value Iteration](https://papers.nips.cc/paper/8200-non-delusional-q-learning-and-value-iteration). They point out a problem of "delusional bias" in algorithms that combine function approximation with update rules involving a $\max$ operator, such as Q-learning (it's probably not unique to the $\max$ operator, but probably applies to off-policy in general?).
The problem is as follows. Suppose we run this $Q$-learning update for a state-action pair $(s, a)$:
$$Q(s, a) \gets Q(s, a) + \alpha \left[ \max\_{a'} Q(s', a') - Q(s, a) \right].$$
The value estimate $\max\_{a'} Q(s', a')$ used here is based on the assumption that we execute a policy that is greedy with respect to older versions of our $Q$ estimates over a -- possibly very long -- trajectory. As already discussed in some of the previous answers, our function approximator has a limited representational capacity, and updates to one state-action pair may affect value estimates for other state-action pairs. This means that, after triggering our update to $Q(s, a)$, **our function approximator may no longer be able to simultaneously express the policy that leads to the high returns that our $\max\_{a'} Q(s', a')$ estimate was based on**. The authors of this paper say that the algorithm is "delusional". It performs an update under the assumption that, down the line, it can still obtain large returns, but it may no longer actually be powerful enough to obtain those returns with the new version of the function approximator's parameters.
---
Finally, another (even more recent) paper that I suspect is relevant to this question is [Diagnosing Bottlenecks in Deep Q-learning Algorithms](https://arxiv.org/abs/1902.10250), but unfortunately I have not yet had the time to read it in sufficient detail and adequately summarise it.
Upvotes: 3 <issue_comment>username_3: There are three problems
1. Limited capacity Neural Network (explained by John)
2. Non-stationary Target
3. Non-stationary distribution
Non-stationary Target
=====================
In tabular Q-learning, when we update a Q-value, the other Q-values in the table are not affected by it. But in neural networks, one update to the weights aiming to alter one Q-value ends up affecting other Q-values whose states look similar (since neural networks learn a continuous function that is smooth).
This is bad because when you are playing a game, two consecutive states of a game are always similar. Therefore, Q-value updates will increase or decrease Q-values for both states together. So, when you take one as the target for the other, the target becomes non-stationary since it moves along with you. This is analogous to a donkey running to catch a carrot that is attached to its head. Since the target is non-stationary, the donkey will never reach its target. And, in our case, in trying to chase, the Q-values will explode.
In [Human-level control through deep reinforcement learning](https://daiwk.github.io/assets/dqn.pdf), this problem is addressed by caching an OLD copy of the DQN for evaluating the targets, & updating the cache every 100,000 steps of learning. This is called a *target network*, and the targets remain stationary this way.
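A minimal sketch of that caching trick (the `Net` class is just a stand-in for a Q-network, and `sync_every=3` is purely illustrative; the paper used on the order of 100k steps):

```python
import copy

class Net:
    """Stand-in for a Q-network; only its parameters matter here."""
    def __init__(self, params):
        self.params = dict(params)

def sync_target(online, target, step, sync_every=3):
    """Copy the online network's weights into the frozen target network
    every `sync_every` steps; in between, bootstrap targets computed
    with `target` stay stationary."""
    if step % sync_every == 0:
        target.params = copy.deepcopy(online.params)

online = Net({"w": 0.0})
target = Net({"w": 0.0})
for step in range(1, 6):
    online.params["w"] += 1.0      # stand-in for a gradient update
    sync_target(online, target, step)
# The target network lags behind: it last synced at step 3.
```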
Non-stationary distribution
===========================
This is analogous to the "distribution drift" problem in imitation learning, which can be solved with the dataset aggregation technique called [DAgger](https://www.ri.cmu.edu/pub_files/2011/4/Ross-AISTATS11-NoRegret.pdf).
The idea is, as we train, our DQN gets better and better and our policy improves. And this causes our sampling distribution to change since we are doing online learning where we sample according to a policy with $\epsilon$ probability. This is a problem for supervised learning since it assumes stationary distribution or i.i.d. data.
As an analogy, this is like training a Neural Network to identify cats and dogs but showing the network only dogs during the first 100 epochs, and then showing only cats for the remainder epochs. What happens is, the network learns to identify dogs, then forgets it and learns to identify cats.
This is what happens when the distribution changes and we care only about the current distribution during training. So, in order to solve this, [same paper](https://daiwk.github.io/assets/dqn.pdf) starts aggregating data in a large buffer, and samples a mini-batch of both new data as well as old data every time during training. This is called *experience replay*, since we don't throw away our past experience and keep re-using them in training.
Upvotes: 2 |
2019/04/07 | 338 | 1,270 | <issue_start>username_0: If I have a DQN, and I care A LOT about future rewards (moreso than current rewards), can I set gamma to a number greater than 1? Like 1.1 perhaps?<issue_comment>username_1: You can't! If you have a $\gamma$ greater than $1$, the discounted sum used in Q-learning will diverge (the $\gamma^n$ terms go to infinity for future steps). To see why, scrutinize the discounted-return formula used in Q-learning.
Upvotes: 1 <issue_comment>username_2: $ \gamma $ goes up to 1, but cannot be greater than or equal to 1 (this would make the discounted reward infinite).
The discount factor $ \gamma $ determines the importance of future rewards. A factor of 0 will make the agent "myopic" (or short-sighted) by only considering current rewards, while a factor approaching 1 will make it strive for a long-term high reward. If the discount factor meets or exceeds 1, the action values may diverge. For $ \gamma =1$, without a terminal state, or if the agent never reaches one, all environment histories become infinitely long, and utilities with additive, undiscounted rewards generally become infinite.
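A quick numerical illustration (truncating the infinite discounted sum at a finite horizon, with a constant reward of 1):

```python
def discounted_return(reward, gamma, horizon):
    """Partial sum of gamma**t * reward for t = 0 .. horizon-1."""
    return sum(gamma ** t * reward for t in range(horizon))

# gamma < 1: the series converges to reward / (1 - gamma) = 10.
bounded = discounted_return(1.0, 0.9, 1000)

# gamma > 1: the partial sums blow up as the horizon grows.
exploding = discounted_return(1.0, 1.1, 1000)
```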
Source: <https://en.wikipedia.org/wiki/Q-learning>
<https://cs.stanford.edu/people/karpathy/reinforcejs/puckworld.html>
Upvotes: 2 |
2019/04/08 | 285 | 1,058 | <issue_start>username_0: I'm training both DQN and double DQN in the same environment, but DQN performs significantly better than double DQN. As I've seen [in the double DQN paper](https://arxiv.org/pdf/1509.06461.pdf), double DQN should perform better than DQN. Am I doing something wrong or is it possible?<issue_comment>username_1: That may happen when the value estimates of the states are bad. You can find an example and an explanation of that in the link below.
See this: <https://medium.freecodecamp.org/improvements-in-deep-q-learning-dueling-double-dqn-prioritized-experience-replay-and-fixed-58b130cc5682>
Upvotes: 1 <issue_comment>username_2: There is no thorough proof, theoretical or experimental, that Double DQN is better than vanilla DQN. There are a lot of different tasks, and the paper and later experiments only explore some of them. What a practitioner can take away from this is that *on some tasks* DDQN is better. That's the essence of DeepMind's "Rainbow" approach - drop a lot of different methods into a bucket and take the best results.
Upvotes: 3 [selected_answer] |
2019/04/12 | 610 | 2,274 | <issue_start>username_0: Projected Bellman error has shown to be stable with linear function approximation. The technique is not at all new. I can only wonder why this technique is not adopted to use with non-linear function approximation (e.g. DQN)? Instead, a less theoretical justified target network is used.
I could come up with two possible explanations:
1. It doesn't readily apply to non-linear function approximation case (some work needed)
2. It doesn't yield a good solution. This is the case for *true* Bellman error but I'm not sure about the projected one.<issue_comment>username_1: I have found some clues in Maei's thesis (2011): [“Gradient Temporal-Difference Learning Algorithms.”](https://era.library.ualberta.ca/items/fd55edcb-ce47-4f84-84e2-be281d27b16a)
According to the thesis:
1. GTD2 is a method that minimizes the projected Bellman error (MSPBE).
2. GTD2 is convergent in the non-linear function approximation case (and off-policy).
3. GTD2 converges to a TD-fixed point (same point as semi-gradient TD).
4. GTD2 is slower to converge than usual semi-gradient TD.
>
> It doesn't readily apply to non-linear function approximation.
>
>
>
No, it does.
>
> It doesn't yield a good solution.
>
>
>
No, it does. The TD fixed point is the same point that semi-gradient TD (which is generally used) converges to, so there is no advantage either way.
**The only explanation seems to be practical convergence rate.**
To quote his words:
>
> Some of our empirical results suggest that gradient-TD method maybe slower than conventional TD methods on problems on which conventional TD methods are sound (that is, on-policy learning problems).
>
>
>
Upvotes: 2 <issue_comment>username_2: As I understand it, the above-mentioned projection operator projects into a linear feature subspace produced from a set of feature vectors (or feature functions), that is, the space of linear combinations of the features. Vanilla DQN doesn't have any such feature space, so projection into a linear subspace doesn't make sense in the DQN context. If you attempt to produce a feature space for the values/Q with some NN, it wouldn't be DQN (because Q wouldn't be produced directly), and it wouldn't work anyway on anything but toy problems, because the number of degrees of freedom of the output would be too high.
Upvotes: 0 |
2019/04/12 | 861 | 3,820 | <issue_start>username_0: I have searched on how Google or any map provider calculates distance between two coordinates. The closest I could find is [Haversine formula](http://www.movable-type.co.uk/scripts/latlong.html).
If I draw a straight line between two points, then Haversine formula can be helpful. But since no one will travel straight and typically move through the streets, I want to know if there are any methods to calculate turn by turn points and see how to find multiple ways to travel to the destination from the source.
Right now my idea is
1. Have the two coordinates within a map window.
2. Make an algorithm detect the white lines (path) in the window.
3. Make it understand how they are connected.
4. Feed it to an algorithm to solve the [Travelling Salesman Problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem) to find the best path between them.
But these things seem very memory- and process-intensive. Even knowing that Google has the processing power, serving so many directions and distance-matrix requests in fractions of a second is amazing. I want to know if there are different approaches to this?<issue_comment>username_1: Obviously the way Google stores its information is not published, but from the [Directions API](https://developers.google.com/maps/documentation/directions/intro) I would make the following educated guesses:
1. The roads/paths are stored as a graph database
2. Each path has additional information: type of road, transport link, etc.
3. Geocoordinates or placenames are mapped onto graph nodes
Finding a route then is a problem of finding the best path through the graph. This will be easier if you provide waypoints (which effectively split a long route into several shorter ones). As you have the physical coordinates, you can use something like the [A\* algorithm](https://en.wikipedia.org/wiki/A*_search_algorithm) to traverse the graph.
From your question I assume that you'd want to work with map images, rather than a pre-processed graph. I would think that this is not feasible, and that you have to do this conversion into a graph first. You can probably semi-automate this by using image processing to identify roads, but ultimately this is probably something that has to be done at least partially with human intervention. You don't want to drive for ages only to find out there was a one-way street which you did not spot from the image.
Also, usually satnavs are aware of speed limits along the path. This again has to be added either manually or automated (by recognising traffic signs along the route, eg from a street view photography car). So image data alone is not sufficient.
Upvotes: 0 <issue_comment>username_2: Road maps are well defined and you can access them online. For example, you can visit OpenStreetMap to download the road network of a given region.
Such a road network definition contains nodes (junctions) with lat/lon coordinates and edges between nodes (roads). An edge is defined by a connection of two nodes. Edges can represent one-way roads: two-way roads need to be defined as two edges, one going in each direction. The haversine formula gives you the great-circle distance between two nodes: the length of the edge.
This definition of a road network gives you a graph (in the mathematical point of view) and is largely covered by the literature. You can search the graph and compute many of its parameters.
From a road network definition you can run a shortest-path algorithm to find the shortest way between two nodes. From a given coordinate you can either pick the closest node or get the closest edge (and compute the distance to the edge and the coordinates of the perpendicular projection point).
Algorithms like A\* and Dijkstra are not very time-consuming, so you can find the shortest path efficiently.
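A toy sketch of this pipeline (the four-node network and its coordinates are made up purely for illustration): edges are weighted with haversine distances, and Dijkstra's algorithm finds the shortest route.

```python
import heapq
from math import radians, sin, cos, asin, sqrt

def haversine(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

# Hypothetical road network: node -> (lat, lon); one-way roads are single directed edges.
coords = {"A": (48.85, 2.35), "B": (48.86, 2.37), "C": (48.87, 2.33), "D": (48.88, 2.36)}
edges = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def dijkstra(start, goal):
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt in edges[node]:
            nd = d + haversine(coords[node], coords[nxt])
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return float("inf")

print(round(dijkstra("A", "D"), 2))  # shortest route A -> B -> D, in km
```

On a real network you would of course load the nodes and edges from OpenStreetMap data instead of hard-coding them.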
Upvotes: 1 |
2019/04/12 | 1,291 | 5,287 | <issue_start>username_0: I am trying to understand how alpha zero works, but there is one point that I have problems understanding, even after reading several different explanations. As I understand it (see for example <https://applied-data.science/static/main/res/alpha_go_zero_cheat_sheet.png>), alpha zero does not perform rollouts. So instead of finishing a game, it stops when it hits an unknown state, uses the neural network to compute probabilities for different actions as well as the value of this state ("probability of winning"), and then propagates the new value up the tree.
The reasoning is that this is much cheaper, since actually completing the game would take more time then just letting the neural network guess the value of a state.
However, this requires that the neural network is decent at predicting the value of a state. But at the beginning of training, it will obviously be bad at this. Moreover, since the Monte Carlo tree search stops as soon as it hits a new state, and the number of different game states is very large, it seems to me that the simulation will rarely manage to complete a game. And surely the neural network cannot improve unless it actually completes a significant number of games, because that is the only real feedback that tells the agent whether it is making good or bad moves.
What am I missing here?
The only plausible explanation I can come up with is this: if the neural network were essentially random in the beginning, then surely the large number of game states would prevent the tree search from ever finishing, since it restarts as soon as it hits a previously unknown game state; so this cannot be the case. So perhaps, even if the neural network is bad in the beginning, it will not be very "random", but will still be quite biased towards some paths. This would mean that the search would be biased towards some smaller set of states among the vast number of different game states, and thus it would tend to take the same path more than once, be able to complete some games, and get feedback. Is this "resolution" correct?
One problem I have though with the above "resolution", is that according to the algorithm, it should favor exploration in the beginning, so it seems that in the beginning it will be biased towards choosing previously not taken actions. This makes it even more seem like the tree search will never be able to complete a game and thus the neural net would not learn.<issue_comment>username_1: You're right that AlphaGo Zero doesn't perform rollouts in its MCTS. It does complete many, many games, though.
Realize that AlphaGo Zero only iterates MCTS 1,600 times before taking an action. The next state resulting from that action becomes the root of future search trees. Since a typical game of Go only lasts a few hundred moves, the board will very quickly reach a terminal state.
None of this is dependent on the initial performance of the neural network. The neural net can be incredibly bad; actions/moves will still be taken at the same frequency. Of course, since AlphaGo Zero trains with self-play, one of its two selves in a game will usually be the winner (ties are possible). So the neural net will improve over time.
I recommend going over the "Self-Play Training Pipeline" section of [the paper](http://discovery.ucl.ac.uk/10045895/1/agz_unformatted_nature.pdf).
Upvotes: 3 [selected_answer]<issue_comment>username_2: Sorry this is more of a comment than an answer. I'm wondering if you have found a definitive answer to your question, because I've a very related question.
I'm also confused by the AlphaZero algorithm - my explanation for my confusion is specified here: [How does AlphaZero use its value and policy heads in conjunction?](https://ai.stackexchange.com/questions/12759/how-does-alphazero-use-its-value-and-policy-heads-in-conjunction).
The thing is that I think the AlphaZero algorithm is also different from the Alphago Zero algorithm. A lot of sources that I've tried referred to really mix the two together.
In particular, there's this function in the official pseudocode, which really confused me:
```
def run_mcts(config: AlphaZeroConfig, game: Game, network: Network):
  root = Node(0)
  evaluate(root, game, network)
  add_exploration_noise(config, root)

  for _ in range(config.num_simulations):
    node = root
    scratch_game = game.clone()
    search_path = [node]

    while node.expanded():
      action, node = select_child(config, node)
      scratch_game.apply(action)
      search_path.append(node)

    value = evaluate(node, scratch_game, network)
    backpropagate(search_path, value, scratch_game.to_play())
  return select_action(config, game, root), root


def select_action(config: AlphaZeroConfig, game: Game, root: Node):
  visit_counts = [(child.visit_count, action)
                  for action, child in root.children.iteritems()]
  if len(game.history) < config.num_sampling_moves:
    _, action = softmax_sample(visit_counts)
  else:
    _, action = max(visit_counts)
  return action
```
Mainly because, if you look at the definition of `expanded`...
```
def expanded(self):
  return len(self.children) > 0
```
I.e. the inner loop only exits when a node has no children, which I read as meaning there are no more moves and we are at the end of the game. I wonder what I'm missing here.
Upvotes: 0 |
2019/04/13 | 1,098 | 4,713 | <issue_start>username_0: I came across an article, [The Bitter Lesson](http://www.incompleteideas.net/IncIdeas/BitterLesson.html), via the [Two Minute Papers](https://www.youtube.com/watch?v=wEgq6sT1uq8) YouTube channel. <NAME> says...
>
> One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are **search** and **learning**.
>
>
>
What is the difference between search and learning here? My understanding is that learning is a form of search -- where we iteratively search for some representation of data that minimizes a loss function in the context of deep learning.<issue_comment>username_1: In the context of AI:
1. **Search** refers to Simon & Newell's [General Problem Solver](https://en.wikipedia.org/wiki/General_Problem_Solver), and it's many (many) descendant algorithms. These algorithms take the form:
a. Represent a current state of some part of the world as a vertex in a graph.
b. Represent, connected to the current state by edges, all states of the world that could be reached from the current state by changing the world with a single action, and represent all subsequent states in the same manner.
c. Algorithmically find a sequence of actions that leads from a current state to some more desired goal state, by walking around on this graph.
An example of an application that uses search is Google Maps. Another is Google Flights.
2. **Learning** refers to any algorithm that refines a belief about the world through the exposure to experiences or to examples of others' experiences. Learning algorithms do not have a clear parent, as they were developed separately in many different subfields or disciplines. A reasonable taxonomy is the [5 tribes](https://medium.com/42ai/the-5-tribes-of-the-ml-world-670ebce96b4c) model. Some learning algorithms actually use search within themselves to figure out how to change their beliefs in response to new experiences!
An example of a learning algorithm used today is [Q-learning](https://en.wikipedia.org/wiki/Q-learning), which is part of the more general family of [reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning) algorithms. Q-learning works like this:
a. The learning program (usually called the *agent*) is given a representation of the current state of the world, and a list of actions that it could choose to perform.
b. If the agent has not seen this state of the world before, it assigns a random number to the reward it expects to get for performing each action. It stores this number as $Q(s,a)$, its guess at the quality of performing action $a$ in state $s$.
c. The agent looks at $Q(s,a)$ for each action it could perform. It picks the best action with probability $1-\epsilon$ and otherwise acts randomly.
d. The action of the agent causes the world to change and may result in the agent receiving a reward from the environment. The agent makes a note of whether it got a reward (and how much the reward was), and what the new state of the world is like. It then adjusts its belief about the quality of performing the action it performed in the state it used to be in, so that its belief about the quality of that action is closer to the reality of the reward it got, and the quality of where it ended up.
e. The agent repeats steps b-d forever. Over time, its beliefs about the quality of different state/action pairs will converge to match reality more and more closely.
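Steps a-e can be sketched as a tiny tabular Q-learning loop. The one-dimensional environment below (walk right along a line to reach a goal) is made up purely for illustration:

```python
import random

random.seed(0)

# Toy environment: states 0..4 on a line; reaching state 4 ends the episode
# with reward 1, every other move gives reward 0.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left or step right

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

# Step b: initialize Q(s, a) with small random guesses.
Q = {(s, a): random.random() * 0.01 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s, done = 0, False
    while not done:
        # Step c (epsilon-greedy): explore with probability epsilon, else exploit.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        # Step d: act, observe reward and next state, nudge Q towards the target.
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# Step e: beliefs converge over time; stepping right should now look better
# than stepping left from every non-terminal state.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(GOAL)))
```

Real problems (like AI.SE's recommendations below) replace the table with a function approximator, but the update loop is the same idea.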
An example of an application that uses learning is AI.SE's recommendations, which are made by a program that likely analyzes the relationships between different combinations of words in pairs of posts, and the likelihood that someone will click on them. Every time someone clicks on them, it learns something about whether listing a post as related is a good idea or not. Facebook's feed is another everyday example.
Upvotes: 4 [selected_answer]<issue_comment>username_2: One way to think of the difference between search and learning is that search usually entails a search key, and an algorithm hunts through the structure looking for a match between the key and an already-existing item. Whereas learning is the creation of the structure in the first place. But search and learning are related in that on receipt of an input (say from one or more sensors) the structure is initially searched to see if the input already exists, but if it doesn't then current input (when certain conditions are met) is added to the structure, and learning follows a failure of search.
Upvotes: 0 |
2019/04/14 | 1,373 | 6,122 | <issue_start>username_0: I'm building a generative adversarial network that generates images based on an input image. From the literature I've read on GANs, it seems that the generator takes in a random variable and uses it to generate an image.
If I were to have the generator receive an input image, would it no longer be a GAN? Would the discriminator be extraneous?<issue_comment>username_1: If you're building a straight "vanilla" generative adversarial network, it's best to understand the network as a statistical engine: You are training the generator *on samples of a statistical distribution*. (And you're training the discriminator to distinguish between "ground truth" images, and images from that generator.)
Once you replace the input noise with another image... well. Strictly speaking, it is probably still a generative adversarial network, if you're still doing everything else the same. It is still a generator and a discriminator, acting in an adversarial fashion.
But you've radically altered the input distribution, so there is a good chance that you're no longer accomplishing what you want to accomplish unless you're being very careful and clever.
That said, there are GAN variants which do take images rather than noise as inputs. See the wonderful [paper](https://junyanz.github.io/CycleGAN/) on CycleGANs by Zhu, et al, along with a substantial body of followup literature. And note that CycleGANs use not one, but two discriminators, so even here the discriminator is necessary.
Upvotes: 1 <issue_comment>username_2: **Short Answer**
Generative networks in generative network arrangements do not learn about input images directly. Their input during training is feedback from the discriminative network.
**The Theory in Summary**
The seminal paper, *Generative Adversarial Networks*, Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, and Bengio, June 2014, states, "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models ..." The two models are defined as MLPs (multilayer perceptrons) in the paper.
* Generative model, G
* Discriminative model, D
These two models are interconnected such that they form a negative feedback loop.
* G is trained to capture the feature relational distribution of a set of examples and generate new examples based on that relational distribution well enough to fool D.
* D is trained to differentiate G's mocks from the set of externally procured examples.
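Formally, this feedback loop is the minimax game given in the Goodfellow et al. paper, with D trained to maximize the value function and G trained to minimize it:

$$\min\_G \max\_D V(D, G) = \mathbb{E}\_{x \sim p\_{\text{data}}(x)}[\log D(x)] + \mathbb{E}\_{z \sim p\_z(z)}[\log(1 - D(G(z)))]$$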
**Applying Concepts**
If G were to receive input images, their presence would merely frustrate network training, in that the goal of the training would likely be inadequately defined. The objective of the convergence of G, stated above, is not the learning of how to process the images to produce some other form of output. Its objective in the generative approach is to learn how to generate well, an entirely incompatible objective with either image evaluation or image processing.
**Additional Information**
Additionally, one image is not nearly enough. There must be a sufficiently large set of example images for training to converge at all and then many more to expect the convergence to be both accurate and reliable. The PAC (probably approximately correct) learning analysis framework may be helpful to determine how many examples are needed for a specific case.
**Essential Discriminator**
The discriminator is essential to the generative approach because the feedback loop referenced above is essential to the convergence mechanism. The bidirectional interdependence between G and D is what allows a balanced approach toward accuracy in the feature relational distribution. That accuracy facilitates the human perception that the generated images fit adequately within a cognitive class.
.
---
**Response to Comments**
The attempt to use a generative approach, "To paint in gaps in an image," is reasonable. In such a case, using Goodfellow's nomenclature, G would be generating the missing pixels and D would be trying to discriminate between G's gap filling and the pixels that were in the scenes in the regions of gaps prior to their introduction.
There are two additional requirements in the scenario of filling in pixels.
* G must be strongly incentivized against allowing a large gradient between the generated pixels and the adjacent non-gap pixels, unless that gradient is appropriate to the scene, as in the case of an object edge, or a change in reflectivity, such as a surface abrasion or the edge of a spray painted shape.
* D must train using the entire image, which means the examples should be images without gaps, the gaps must be introduced in a way that matches the expected distribution of features of gaps that may be encountered later, and the result of G must be superimposed over the full image to produce the input arising from G and discriminated from the original by D.
It is recommended to begin with a standard GAN design, create a test for it (in TDD fashion), implement it, experiment with it, and otherwise become familiar with it and the mathematics involved. Most important to understand is how the balance between G's convergence and D's convergence is obtained in the loss (a.k.a. error or disparity) functions for each, and what concepts of feedback are employed using those functions.
* Does your point about input images frustrating network training apply to this kind of problem, or just to GANNs that generate from scratch?
It applies to both.
* Would I have to have the generator compare the original image with the generated image and pick which one it thinks is better in order to deal with the "adequately defined" issue?
D compares, not G. That is the delegated arrangement. It is not that other arrangements cannot work. They may. But Goodfellow and the others understood what worked in artificial networks long before they discovered a new approach, and they likely worked out the math of that approach and diagrammed it, perhaps on a white board, long before they typed a single line of code.
Upvotes: 1 [selected_answer] |
2019/04/15 | 327 | 1,414 | <issue_start>username_0: I'm wondering whether AI can now help us extract a summary or the general idea of a long article (for example, a novel or historical stories), or extract the most important keywords from a sentence.
Would you please tell me if any project of this kind has been done?
I hope to improve my reading speed and effectiveness with AI's help.<issue_comment>username_1: Yes. Text summarisation has been a research topic in (computational) linguistics for literally decades. Have a look at the [Wikipedia page on Automatic Summarisation](https://en.wikipedia.org/wiki/Automatic_summarization) for an overview.
There are basically various different approaches: either, selecting salient sentences (or parts of sentences) which represent the gist of the text, or, trying to 'understand' the text and generating new sentences. The former is generally easier, and works on any text, while the latter would probably be able to produce better results, but is more complicated and would not work on any text, as it would be specific to a particular topic.
Upvotes: 1 <issue_comment>username_2: In addition to username_1's answer:
You can check [here](https://github.com/topics/text-summarizer) to see how people use machine learning tools to summarize texts, and also check some articles from [here](https://towardsdatascience.com/tagged/text-summarization) to understand the background, if you are curious.
Upvotes: 0 |
2019/04/15 | 319 | 1,351 | <issue_start>username_0: Is there an AI application that can produce syntactically (and semantically) correct sentences given a bag of words? For example, suppose I am given the words "cat", "fish", and "lake", then one possible sentence could be "cat eats fish by the lake".
2019/04/15 | 547 | 1,899 | <issue_start>username_0: Do we have a cross-language vector space for word embeddings?
When measuring similarity for apple/Pomme/mela/Lacus/苹果/りんご, the embeddings should be the same.
It would be great if there were an internet service available with a neural network already trained on multiple languages.<issue_comment>username_1: You can try to read about MUSE (Multilingual Unsupervised and Supervised Embeddings) by Facebook. You can read it [from its Github](https://github.com/facebookresearch/MUSE) or [this article](https://code.fb.com/ml-applications/under-the-hood-multilingual-embeddings/). They also provide the FastText dictionary format (.vec file) for some languages.
Their [original paper](https://arxiv.org/pdf/1710.04087.pdf) shows how it aligns the vector of words from two different languages:
[](https://i.stack.imgur.com/zs1EF.png)
Upvotes: 2 <issue_comment>username_2: For cross-language word representation the trend now is:
* **ELMoForManyLangs**: [git\_repo](https://github.com/HIT-SCIR/ELMoForManyLangs) [original\_paper\_March\_2018](https://arxiv.org/pdf/1802.05365.pdf)
* **MUSE by Facebook**: [git\_repo](https://github.com/facebookresearch/MUSE) [original\_paper\_January\_2018](https://arxiv.org/pdf/1710.04087.pdf)
Remember that you can also do the task in 2 steps:
**Translate** the words to a reference language (e.g. English), then **represent** each one of them using any word representation model (in the reference language).
The 2-step option is also good, as language-specific word representation models are more accurate, and there are a bunch of easy-to-use libraries for single-language translation (*e.g. [py-translator](https://pypi.org/project/py-translator/)*) and representation (*e.g. [Universal sentence encoder by Google](https://tfhub.dev/google/universal-sentence-encoder/2)*).
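To illustrate the idea of a shared space with hand-made toy vectors (after alignment with something like MUSE, translations of the same word should land close together, which you can check with cosine similarity):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Made-up stand-ins for aligned embeddings: "apple" and its French
# translation "pomme" should be close, an unrelated word like "lake" should not.
emb = {
    "apple": np.array([0.90, 0.10, 0.00]),
    "pomme": np.array([0.88, 0.12, 0.03]),
    "lake":  np.array([0.00, 0.20, 0.95]),
}

print(cosine(emb["apple"], emb["pomme"]))  # close to 1
print(cosine(emb["apple"], emb["lake"]))   # close to 0
```

With real aligned embeddings you would load the vectors from the .vec files mentioned above instead of defining them by hand.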
Upvotes: 1 |
2019/04/16 | 505 | 1,758 | <issue_start>username_0: How can the A\* algorithm be optimized?
Any references that show the optimization of the A\* algorithm are also appreciated.
2019/04/17 | 724 | 2,628 | <issue_start>username_0: In section 3.2.1 of [Attention Is All You Need](https://arxiv.org/pdf/1706.03762.pdf) the claim is made that:
>
> Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d\_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.
>
>
>
It is not clear to me why dot-product attention would be faster. Additive attention is nearly identical computation-wise; the main difference is $Q + K$ instead of $Q K^T$ in dot-product attention. $Q K^T$ requires at least as many addition operations as $Q + K$, so how can it possibly be faster?
Upvotes: 2 <issue_comment>username_2: In additive attention (as described in [the paper by Bahdanau et al. (2014)](https://arxiv.org/abs/1409.0473)), the alignment model $a$ is represented by a feedforward neural network. If you look in their appendix, they actually implement this as
$$ e\_{ij} = V\_a^T \tanh \left(W\_a s\_{i-1} + U\_a h\_j\right) = V\_a^T \tanh \left(Q + K\right).$$
In contrast, in dot-product attention (as described in [the paper by Vaswani et al. (2017)](https://arxiv.org/abs/1706.03762)), the alignment model is implemented as
$$ e\_{ij} = W\_Q s\_{i-1} \left(W\_K h\_j\right)^T = Q K^T.$$
The computational advantage is that the dot-product alignment model has only two weight matrices and only needs matrix multiplication, for which highly-optimized code exists.
Upvotes: 1 <issue_comment>username_3: Two are similar in theoretical complexity.
But matrix multiplication is better optimized. According to [this paper](https://arxiv.org/pdf/2007.00072.pdf), non-matmul ops in transformer are 15x slower than matmul operation.

[Making Deep Learning Go Brrrr From First Principles](https://horace.io/brrr_intro.html) discusses similar topics.
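As an illustrative sketch (made-up sizes, NumPy only): the dot-product scores are a single optimized matmul, while the additive scores materialize a large $n \times n \times d$ intermediate tensor plus a `tanh` before the projection.

```python
import time
import numpy as np

n, d = 128, 256  # illustrative sequence length and model dimension
rng = np.random.default_rng(0)
Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))
v = rng.standard_normal(d)

# Dot-product attention scores: one highly optimized matrix multiplication.
t0 = time.perf_counter()
scores_dot = Q @ K.T / np.sqrt(d)
t_dot = time.perf_counter() - t0

# Additive attention scores: broadcast add -> tanh -> projection by v,
# which builds an (n, n, d) intermediate tensor.
t0 = time.perf_counter()
scores_add = np.tanh(Q[:, None, :] + K[None, :, :]) @ v
t_add = time.perf_counter() - t0

print(scores_dot.shape, scores_add.shape)  # both (128, 128)
print(f"dot-product: {t_dot:.4f}s, additive: {t_add:.4f}s")
```

Exact timings depend on your BLAS build and hardware, but the extra memory traffic of the additive path is visible even at these toy sizes.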
Upvotes: 1 |
2019/04/19 | 563 | 2,077 | <issue_start>username_0: I have looked at the documentation for the NEAT Python API found [here](https://neat-python.readthedocs.io/en/latest/xor_example.html), where it's written
>
> The error for each genome is $1-\sum\_i(e\_i-a\_i)^2$
>
>
>
I have not yet learned calculus, so I can't understand this formula. So, can someone please explain what the calculation means?<issue_comment>username_1: The $\sum$ means that they take a sum of the squared difference of each pair of expected/predicted values ($e\_i$) and actual values ($a\_i$)
That gives them an error metric of how far off they are from their desired result. The goal is generally to optimize the algorithms against such an error function, in this case, to get it as close to one as possible.
Upvotes: 0 <issue_comment>username_2: $$1-\sum\_i(e\_i-a\_i)^2$$
$\sum$ - there just means sum. It is the greek letter for S. You can rewrite the above formula as
$$1 -[(e\_1 - a\_1)^2+(e\_2-a\_2)^2+(e\_3-a\_3)^2+\ldots ]$$
$\sum$ just helps us avoid writing dozens of $+$ signs. Read [more here](https://en.wikipedia.org/wiki/Summation).
What they are doing here is taking the difference of expected value $e\_1$ and the actual value $a\_1$ for the 1st example, and so on. The difference can be positive ($e\_1 > a\_1$) or negative ($e\_1 < a\_1$), so usually we square the difference to make it positive number.
The rest is there in the docs. Try putting in concrete imagined values for $a\_i$ and $e\_i$.
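Doing exactly that, with hypothetical outputs for the four XOR cases (the actual values are made up):

```python
expected = [0.0, 1.0, 1.0, 0.0]  # e_i: XOR targets
actual   = [0.1, 0.9, 0.8, 0.2]  # a_i: hypothetical network outputs

# fitness = 1 - sum_i (e_i - a_i)^2
fitness = 1 - sum((e - a) ** 2 for e, a in zip(expected, actual))
print(round(fitness, 3))  # 1 - (0.01 + 0.01 + 0.04 + 0.04) = 0.9
```

A perfect genome would score exactly 1; the further the outputs drift from the targets, the lower the fitness.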
Upvotes: 2 <issue_comment>username_3: It sums the squared error for the output vs the expected output, this isnt something you need to do for each experiment they are simply telling you the metric they are using as fitness for the genomes in the xor example experiment, in other experiments you could use something else. If you were training it to play video games you set your fitness to be a numerical representation of how well the genome played the game, so you dont always need to have an expected value as long as your fitness function uses a meaningful metric as the fitness value.
Upvotes: 0 |
2019/04/21 | 768 | 2,974 | <issue_start>username_0: Nowadays, robots or artificial agents often only perform the specific task they have been programmed to do.
Will we be able to build an artificial intelligence that feels empathy, that understands the emotions and feelings of humans, and, based on that, acts accordingly?<issue_comment>username_1: Let us describe a very simple system that does something we could label as empathic.
A chatbot answers "I am sorry to hear that. What happened?" when we type "I feel bad", and it replies "I am glad to hear that. Fancy some music?" when we type "I feel good".
Somehow, it perceives a human emotion, and acts accordingly.
Planes fly but they do not fly as birds. Similarly, we can expect [artificial empathy](https://en.wikipedia.org/wiki/Artificial_empathy) from an AI, not necessarily natural empathy as we feel it.
Here is a related paper, [Shared Laughter Generation for Empathetic Spoken Language](https://www.frontiersin.org/articles/10.3389/frobt.2022.933261/full)
And here is an actual example:
[](https://i.stack.imgur.com/uzW3t.jpg)
The system correctly maps tears to sadness and outputs a proper answer, while also explaining its non-sentience. Not bad!
Upvotes: 4 [selected_answer]<issue_comment>username_2: Empathy is about living and values. Someone can be a good person or a bad person based on their education, culture, past experience and values, just as I, who recently became a father, became much more empathetic towards my parents.
If it's really a semi-programmed artificial intelligence that captures the experience around it (with its programmers, with people, with the environment, etc.), and is somehow set to learn from it, I believe it can develop some kind of “empathy”, but as Jaume said, it won't be “natural”.
We have to remember that real artificial intelligence is the one that absorbs and learns, not a set of IFs and ELSEs.
Upvotes: 1 <issue_comment>username_3: It seems to me that empathy is based on understanding the experience of another entity:
>
> originally Psychology. The ability to understand and appreciate another person's feelings, experience, etc.
> SOURCE: [OED](https://www.oed.com/view/Entry/61284?redirectedFrom=empathy#eid)
>
>
>
Using this definition, the AI would have to understand human experience. (There may be a "Chinese Room" issue in terms of whether one considers the algorithm to truly "understand". But, if it can classify the input sufficiently to produce an appropriate response, that can constitute understanding.)
The underlying problem is that the algorithm likely doesn't "feel" in the same way humans experience emotions, in that the human experience is colored by chemical response. So while the algorithm might be able to demonstrate sufficient "understanding" of a human's experience, and act in an empathetic manner, the degree of understanding may always be limited.
Upvotes: 2 |
2019/04/22 | 965 | 3,643 | <issue_start>username_0: I wrote a convolutional neural network for the MNIST dataset with Numpy from scratch. I am currently trying to understand every part and calculation.
But one thing I noticed was the "just positive" derivative of the ReLU function.
My network structure is the following:
* (Input 28x28)
* Conv Layer (filter count = 6, filter size = 3x3, stride = 1)
* Max Pool Layer (Size 2x2) with RELU
* Conv Layer (filter count = 6, filter size = 3x3, stride = 1)
* Max Pool Layer (Size 2x2) with RELU
* Dense (128)
* Dense (10)
I noticed, when looking at the gradients, that the ReLU derivative is always (as it should be) positive. But is it right that the filter weights are always decreasing? Or is there any way they can increase?
Whenever I look at any of the filters' values, they have decreased after training. Is that correct?
By the way, I am using stochastic gradient descent with a fixed learning rate for training.<issue_comment>username_1: Let us describe a very simple system that does something we could label as empathic.
A chatbot answers "I am sorry to hear that. What happened?" when we type "I feel bad", and it replies "I am glad to hear that. Fancy some music?" when we type "I feel good".
Somehow, it perceives a human emotion, and acts accordingly.
Planes fly but they do not fly as birds. Similarly, we can expect [artificial empathy](https://en.wikipedia.org/wiki/Artificial_empathy) from an AI, not necessarily natural empathy as we feel it.
Here is a related paper, [Shared Laughter Generation for Empathetic Spoken Language](https://www.frontiersin.org/articles/10.3389/frobt.2022.933261/full)
And here is an actual example:
[](https://i.stack.imgur.com/uzW3t.jpg)
The system correctly maps tears to sadness and outputs a proper answer, while also explaining its non-sentience. Not bad!
Upvotes: 4 [selected_answer] |
2019/04/22 | 817 | 3,222 | <issue_start>username_0: Suppose I have a standard image classification problem (i.e. CNN is shown a single image and predicts a single classification for it). If I were to use bounding boxes to surround the target image (i.e. convert this into an object detection problem), would this increase classification accuracy purely through the use of the bounding box?
I'm curious if the neural network can be "assisted" by us when we show it bounding boxes as opposed to just showing it the entire image and letting it figure it all out by itself. |
2019/04/23 | 1,550 | 6,670 | <issue_start>username_0: It seems that deep neural networks and other neural network based models are dominating many current areas like computer vision, object classification, reinforcement learning, etc.
Are there domains where SVMs (or other models) are still producing state-of-the-art results?<issue_comment>username_1: Deep Learning and Neural Networks are getting most of the focus because of recent advances in the field and most experts believe it to be the future of solving machine learning problems.
But make no mistake, classical models still produce exceptional results and in certain problems, they can produce better results than deep learning.
Linear Regression is still by far the most used machine learning algorithm in the world.
It’s difficult to identify a specific domain where classical models always perform better, as the accuracy is very much determined by the shape and quality of the input data.
So algorithm and model selection is always a trade-off. It’s somewhat accurate to say that classical models still perform better with smaller data sets; however, a lot of research is going into improving deep learning model performance on less data.
Most classical models require fewer computational resources, so if your goal is speed, they are often the better choice.
Also, classical models are easier to implement and visualize, which can be another indicator of performance, but it depends on your goals.
If you have unlimited resources, a massive observable data set that is properly labeled, and you implement it correctly within the problem domain, then deep learning is likely going to give you better results in most cases.
But in my experience, real-world conditions are never this perfect.
Upvotes: 3 <issue_comment>username_2: State-of-the-art is a tough bar, because it's not clear how it should be measured. An alternative criteria, which is akin to state-of-the-art, is to ask *when* you might prefer to try an SVM.
SVMs have several advantages:
1. Through the kernel trick, the runtime of an SVM does not increase significantly if you want to learn patterns over many non-linear combinations of features, rather than the original feature set. In contrast, a more modern approach like a deep neural network will need to get deeper or wider to model the same patterns, which will increase its training time.
2. SVMs have an inherent bias towards picking "conservative" hypotheses, that are less likely to overfit the data, because they try to find maximum margin hypotheses. In some sense, they "bake-in" Occam's razor.
3. SVMs have only two hyperparameters (the choice of kernel and the regularization constant), so they are very easy to tune to specific problems. It is usually sufficient to tune them by performing a simple grid-search through the parameter space, which can be done automatically.
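A numpy-only sketch of the kernel trick from point 1 above: the quadratic kernel $k(x,z)=(x^\top z)^2$ gives exactly the dot product in the space of all pairwise feature products, without ever building those $O(d^2)$ features explicitly (the 3-dimensional vectors here are just an illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x, z = rng.normal(size=3), rng.normal(size=3)

# kernel evaluated in the original 3-d space
k = (x @ z) ** 2

# explicit feature map phi(v): all 9 pairwise products v_i * v_j
phi = lambda v: np.outer(v, v).ravel()
explicit = phi(x) @ phi(z)

print(abs(k - explicit))  # ~0: identical values, but k never built the 9-d vectors
```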
SVMs also have some disadvantages:
1. SVMs have a runtime that scales cubically in the number of datapoints you want to train on (i.e. $O(n^3)$ runtime)¹. This does not compare well with, say, a typical training approach for a deep neural network which runs in $O(w \cdot n \cdot e)$ time, where $n$ is the number of data points, $e$ is the number of training epochs, and $w$ is the number of weights in the network. Generally $w, e \ll n$.
2. To make use of the Kernel trick, SVMs cache a value for the kernelized "distance" between any two pairs of points. This means they need $O(n^2)$ memory. This is far, far, more trouble than the cubic runtime on most real-world sets. More than a few thousand datapoints will leave most modern servers [thrashing](https://en.wikipedia.org/wiki/Thrashing_(computer_science)), which increases effective runtime by several orders of magnitude. Together with point 1, this means SVMs will tend to become unworkably slow for sets beyond maybe 5,000-10,000 datapoints, at the upper limit.
All of these factors point to SVMs being relevant for exactly one use case: small datasets where the target pattern is thought, apriori, to be some regular, but highly non-linear, function of a large number of features. This use case actually arises fairly often. A recent example application where I found SVMs to be a natural approach was building predictive models for a target function that was known to be the result of interactions between pairs of features (specifically, communications between pairs of agents). An SVM with a quadratic kernel could therefore efficiently learn conservative, reasonable, guesses.
---
¹ There are approximate algorithms that will solve the SVM faster than this, as noted in the other answers.
Upvotes: 4 <issue_comment>username_3: Totally agree with @John's answer. Will try and complement that with some more points.
Some advantages of SVMs:
a) SVM is defined by a convex optimisation problem for which there are [efficient methods to solve, like SMO](http://cs229.stanford.edu/notes/cs229-notes3.pdf).
b) Effective in high dimensional spaces and also in cases where number of dimensions is greater than the number of samples.
c) Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.
d) [Different Kernel functions can be specified for the decision function.](https://en.wikipedia.org/wiki/Kernel_method). In its simplest form, the kernel trick means transforming data into another dimension that has a clear dividing margin between classes of data.
The disadvantages of support vector machines include:
a) If the number of features is much greater than the number of samples, avoiding over-fitting in choosing Kernel functions and regularization term is crucial. Kernel models can be quite sensitive to [over-fitting the model selection criterion](http://jmlr.csail.mit.edu/papers/volume11/cawley10a/cawley10a.pdf)
b) SVMs do not directly provide probability estimates. In many classification problems you actually want the probability of class membership, so it would be better to use a method like Logistic Regression, rather than post-process the output of the SVM to get probabilities.
Upvotes: 0 <issue_comment>username_4: For datasets of low-dimensional tabular data: DNNs are not efficient on low-dimensional input because of huge overparametrisation. So even if the dataset is huge in size, if each sample is low-dimensional an SVM can still beat a DNN.
More generally, if the data is tabular and the correlation between the fields of a sample is weak and noisy, an SVM may still beat a DNN even for high-dimensional data, but that depends on the specifics of the data.
Unfortunately I can't recall any specific papers on the subject, so this is mostly common-sense reasoning; you don't have to trust it.
Upvotes: -1 |
2019/04/23 | 1,684 | 5,744 | <issue_start>username_0: We often train neural networks by optimizing the mean squared error (MSE), which is an equation of a parabola $y=x^2$, with gradient descent.
We also say that weight adjustment in a neural network by the gradient descent algorithm can hit a local minimum and get stuck in there.
How are multiple local minima on the equation of a parabola possible, if a parabola has only one minimum?<issue_comment>username_1: >
> How are multiple local minima on the equation of a parabola possible, if a parabola has only one minimum?
>
>
>
A parabola has one minimum, and no separate local minima. So it isn't possible.
However...
>
> Gradient descent works on the equation of mean squared error, which is an equation of a parabola $y=x^2$
>
>
>
Just because the loss function is a parabola with respect to the direct input, does not mean that the loss function is a parabola with respect to the parameters that indirectly cause that error.
In fact it only remains true for linear functions. When considering linear regression $\hat{y} = \sum\_i w\_i x\_i + b$, there is only one global minimum (with specific values of $w\_i$ or specific vector $\mathbf{w}$), and your assertion is true.
Once you add nonlinear activations, as in neural networks, then the relationship between error function and parameters of the model becomes far more complex. For the last/output layer you can carefully choose a loss function so that this cancels out - you can keep your single global minimum for logistic regression and softmax regression. However, one or more hidden layers, and all bets are off.
In fact you can prove quite easily that a neural network with a hidden layer must have multiple stationary points (not necessarily local minima). The outline of the proof is to note that there must be multiple equivalent solutions, since in a fully-connected network you can re-arrange the nodes into any order, move the weights to match, and it will be a new solution with exactly the same behaviour, including the same loss on the dataset. So a neural network with one hidden layer of $n$ nodes must have $n!$ equivalent global minima. There is no way for these to exist without other stationary points in-between them.
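The permutation argument is easy to verify numerically: swap two hidden units (rows of the first weight matrix and bias, together with the matching columns of the output weights) and the network computes exactly the same function, so both weight settings sit at the same loss. A small tanh network as a sketch (all sizes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)  # 3 inputs -> 4 hidden
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)  # 4 hidden -> 1 output

def forward(W1, b1, W2, b2, x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

perm = [1, 0, 2, 3]  # swap hidden units 0 and 1 consistently
x = rng.normal(size=3)

same = np.allclose(forward(W1, b1, W2, b2, x),
                   forward(W1[perm], b1[perm], W2[:, perm], b2, x))
print(same)  # a distinct weight vector with identical behaviour
```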
There is theory to suggest that most of the stationary points found in practice will not be local minima, but saddle points.
As an example, [this is an analysis of saddle points in a simple XOR approximator](http://liacs.leidenuniv.nl/assets/PDF/TechRep/tr94-19.pdf).
Upvotes: 2 <issue_comment>username_2: $g(x) = x^2$ is indeed a parabola and thus has just one optimum.
However, the $\text{MSE}(\boldsymbol{x}, \boldsymbol{y}) = \sum\_i (y\_i - f(x\_i))^2$, where $\boldsymbol{x}$ are the inputs, $\boldsymbol{y}$ the corresponding labels and the function $f$ is the model (e.g. a neural network), is *not necessarily* a parabola. In general, it is only a parabola if $f$ is a constant function and the sum is over one element.
For example, suppose that $f(x\_i) = c, \forall i$, where $c \in \mathbb{R}$. Then $\text{MSE}(\boldsymbol{x}, \boldsymbol{y}) = \sum\_i (y\_i - c)^2$ will only change as a function of one variable, $\boldsymbol{y}$, as in the case of $g(x) = x^2$, where $g$ is a function of one variable, $x$. In that case, $(y\_i - c)^2$ will just be a shifted version (either to the right or left depending on the sign of $c$) of $y\_i^2$, so, for simplicity, let's ignore $c$. So, in the case $f$ is a constant function, then $\text{MSE}(\boldsymbol{x}, \boldsymbol{y}) = \sum\_i y\_i^2$, which is a sum of parabolas $y\_i^2$, which is called a *paraboloid*. In this case, the paraboloid corresponding to $\text{MSE}(\boldsymbol{x}, \boldsymbol{y}) = \sum\_i y\_i^2$ will only have one optimum, just like a parabola. Furthermore, if the sum is just over one $y\_i$, that is, $\text{MSE}(\boldsymbol{x}, \boldsymbol{y}) = \sum\_i y\_i^2 = y^2$ (where $\boldsymbol{y} = y$), then the MSE becomes a parabola.
In other cases, the MSE might not be a parabola or have just one optimum. For example, suppose that $f(x) = x^2$, $y\_i = 1$ ($\forall i$), then $h(x) = (1 - x^2)^2$ looks as follows
[](https://i.stack.imgur.com/Uy0PZ.png)
which has two minima at $x=-1$ and $x=1$ and one maximum at $x=0$. We can find the two minima of this function $h$ using calculus: $h'(x) = -4x(1 - x^2)$, which becomes zero when $x=-1$ and $x=1$.
In this case, we only considered one term of the sum. If we considered the sum of terms of the form of $h$, then we could even have more "complicated" functions.
To conclude, given that $f$ can be arbitrarily complex, then also $\text{MSE}(\boldsymbol{x}, \boldsymbol{y})$, which is a function of $f$, can also become arbitrarily complex and have multiple minima. Given that neural networks can implement arbitrarily complex functions, then $\text{MSE}(\boldsymbol{x}, \boldsymbol{y})$ can easily have multiple minima. Moreover, the function $f$ (e.g. the neural network) changes during the training phase, which might introduce more complexity, in terms of which functions the MSE can be and thus which (and how many) optima it can have.
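Running plain gradient descent on $h(x)=(1-x^2)^2$ makes this concrete: depending on the starting point, it converges to either of the two minima, which is exactly the "getting stuck in one of several minima" behaviour the question asks about:

```python
def grad_descent(x, lr=0.05, steps=500):
    # h(x) = (1 - x^2)^2, so h'(x) = -4x(1 - x^2)
    for _ in range(steps):
        x -= lr * (-4 * x * (1 - x ** 2))
    return x

print(grad_descent(-0.5))  # converges to -1.0
print(grad_descent(+0.5))  # converges to +1.0
```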
Upvotes: 4 [selected_answer]<issue_comment>username_3: Another view on this topic: think about the derivative of the MSE with respect to the inputs. You will need to apply the chain rule; for a single input $x\_i$,
$$\frac{dMSE}{dx\_i}=2(y\_i-f(x\_i))\cdot(-1)\cdot\frac{df(x\_i)}{dx\_i}$$
which only resembles the derivative of the parabola when, as mentioned in [nbro's answer](https://ai.stackexchange.com/a/11982/45714), $f(x\_i)=c, \forall\ i$ or, more generally, when $f(x)$ is linear ([Neil's answer](https://ai.stackexchange.com/a/11981/45714)).
Upvotes: 0 |
2019/04/24 | 894 | 3,207 | <issue_start>username_0: Following the **DQN algorithm** with experience replay:
Store transition $\left(\phi\_{t}, a\_{t}, r\_{t}, \phi\_{t+1}\right)$ in $D$. Sample a random minibatch of transitions $\left(\phi\_{j}, a\_{j}, r\_{j}, \phi\_{j+1}\right)$ from $D$. Set
$$y\_{j}=\left\{\begin{array}{cc}r\_{j} & \text { if episode terminates at } j+1 \\ r\_{j}+\gamma \max\_{a^{\prime}} \hat{Q}\left(\phi\_{j+1}, a^{\prime} ; \theta^{-}\right) & \text {otherwise }\end{array}\right.$$
Perform a gradient descent step on $\left(y\_{j}-Q\left(\phi, a\_{j} ; \theta\right)\right)^{2}$ with respect to the network parameters $\theta$.
>
> We calculate the $loss=(Q(s,a)-(r+\max\_{a'}Q(s',a')))^2$.
>
>
>
>
> Assume I have positive but changing rewards. Meaning, $r>0$.
>
>
>
Thus, since the rewards are positive, by calculating the loss, I notice that almost always $Q(s,a) < r + \max\_{a'}Q(s',a')$.
Therefore, the network learns to always increase the $Q$ function, and eventually, the $Q$ function is higher in the same states at later learning steps.
*How can I stabilize the learning process?*<issue_comment>username_1: 1. You can use a discount factor gamma less than one.
2. You can use a finite time horizon: rewards only propagate back from states that are no more than $T$ time steps away.
3. You can use the sum of rewards averaged over time for $Q$ (the average-reward formulation).
All of those are legitimate approaches.
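The effect of point 1 is easy to check numerically: with a discount factor $\gamma < 1$ and rewards bounded by $r\_{max}$, the discounted return, and hence any correct $Q$-value, is capped at $r\_{max}/(1-\gamma)$, so $Q$ cannot grow without bound even if every reward is positive:

```python
gamma, r = 0.9, 1.0                               # constant positive reward
ret = sum(r * gamma ** t for t in range(10_000))  # long discounted return
bound = r / (1 - gamma)

print(ret, bound)  # both ~10.0: the return saturates instead of diverging
```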
Upvotes: 1 <issue_comment>username_2: >
> Therefore,the network learns to always increase the Q function , and eventually the Q function is higher in same states in later learning steps
>
>
>
If your value function keeps increasing in later steps, that means the network is still learning those Q-values; you shouldn't necessarily prevent that. Your Q-values won't increase forever, even if the rewards are always positive. You basically have a regression problem here, and when the value of $Q(s,a)$ becomes very close to the target value $r+Q(s',a')$, the value of $Q(s,a)$ will stop increasing by itself.
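This plateau is easy to reproduce in a tabular toy problem: on a deterministic circular chain where every step pays $+1$ and $\gamma = 0.9$, repeated TD updates push every $Q$-value toward its target $r + \gamma Q(s')$ and then stop, settling at $1/(1-\gamma) = 10$ rather than growing forever. A minimal sketch with one action per state (all values illustrative):

```python
import numpy as np

gamma, lr, n_states = 0.9, 0.1, 5
Q = np.zeros(n_states)                  # one action per state

for _ in range(5000):
    for s in range(n_states):
        s_next = (s + 1) % n_states     # deterministic circular chain
        target = 1.0 + gamma * Q[s_next]
        Q[s] += lr * (target - Q[s])    # TD update shrinks the gap

print(Q)  # every entry ~= 1 / (1 - 0.9) = 10: growth stops by itself
```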
Upvotes: 1 <issue_comment>username_3: I changed the rewards to be negative and positive by subtracting the mean reward.
It seems to improve the Q function boundaries.
Upvotes: 1 [selected_answer]<issue_comment>username_4: The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.
<https://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/viewPaper/12389>
Upvotes: 0 |
2019/04/26 | 944 | 4,320 | <issue_start>username_0: I have an idea for a new type of AI for two-player games with alternating turns, like chess, checkers, connect four, and so on.
A little background: Traditionally engines for such games have used the minimax algorithm combined with a heuristic function when a certain depth has been reached to find the best moves. In recent days engines using reinforcement learning, etc (like AlphaZero in chess) have increased popularity, and become as strong as or stronger than the traditional minimax engines.
My approach is to combine these ideas, to some level. A minimax tree with alpha-beta pruning will be used, but instead of considering every move in a position, these moves will be evaluated with a neural net or some other machine learning method, and the moves which seem least promising will not be considered further. The more interesting moves are expanded like in traditional minimax algorithms, and the same evaluation are again done for these nodes' children.
The pros and cons are pretty obvious: By decreasing the breadth (number of moves in a position), the computation time will be reduced, which again can increase the search depth. The downside is that good moves may not be considered, if the machine learning method used to evaluate moves are not good enough.
One could of course hope that the position evaluation itself (from the neural net, etc) is good enough to pick the best move, so that no minimax is needed. However, combining the two approaches will hopefully make better results.
A big motivation for this approach is that it resembles how humans act when playing games like chess. One tends to use intuition (which will be what the neural net represents in this approach) to find moves which looks interesting. Then one will look more thoroughly at these interesting moves by calculating moves ahead. However, one does not do this for all moves, only those which seem interesting. The idea is that a computer engine can play well by using the same approach, but can of course calculate much faster than a human.
To illustrate the performance gain: The size of a minimax tree is about b^d, where b is the average number of moves possible in each position, and d is the search depth. If the neural net can reduce the size of considered moves b to half, the new complexity will be (b/2)^d. If d is 20, that means reducing the computation time by approx. 1 million.
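Spelling out that arithmetic:

```python
b, d = 30, 20            # branching factor and search depth
full = b ** d            # nodes in the unpruned minimax tree
pruned = (b // 2) ** d   # network discards half the moves at each node

print(full // pruned)    # 2**20 = 1048576, roughly a million-fold saving
```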
My questions are:
1. Does anyone see any obvious flaws about this idea, which I might have missed?
2. Has it been attempted before? I have looked a bit around for information about this, but haven't found anything. Please give me some references if you know any articles about this.
3. Do you think the performance of such a system could compete with those of pure minimax or those using deep reinforcement learning?
Exactly how the neural net will be trained, I have not determined yet, but there should be several options possible.<issue_comment>username_1: Using a neural network to push the search algorithm along only the promising paths is essentially what was described in the AlphaZero paper. In AlphaZero, the search prioritises high-probability moves suggested by the network's policy head, and the positions reached are evaluated by the same network's value head. The use of alpha-beta specifically is not necessary; instead a search rule aptly known as PUCT (Predictor + Upper Confidence bounds applied to Trees) is used.
Upvotes: 2 <issue_comment>username_2: There is a lot of related research out there.
You can look at contextual bandit problems, which is the basis for "Monte Carlo Tree Search". Here some clever bookkeeping is used to make sure that branches that looked bad but haven't been explored recently will still get explored eventually. This results in the UCT algorithm, which then got used in combination with deep learning in AlphaGo, but you can use it without deep learning too, if that makes more sense for your particular problem.
The exploration / exploitation trade-off is crucial for this kind of problem. Your original estimates of the value of a position will be very uninformed, so they should not be used to prune the search tree too aggressively. This is exactly what UCT handles, with provable theoretical guarantees.
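For reference, the UCB1 score at the heart of UCT adds an exploration bonus to each child's mean value, which is the "clever bookkeeping" that guarantees bad-looking but under-explored branches are eventually revisited. A minimal sketch ($c=\sqrt{2}$ is the textbook constant; the counts below are illustrative):

```python
import math

def ucb1(total_value, visits, parent_visits, c=math.sqrt(2)):
    if visits == 0:
        return float("inf")  # unvisited children get tried first
    exploit = total_value / visits
    explore = c * math.sqrt(math.log(parent_visits) / visits)
    return exploit + explore

# a rarely-visited child with a poor mean can outscore a well-explored one
rare = ucb1(0.4, 2, 1000)        # mean 0.2, visited only twice
common = ucb1(450.0, 900, 1000)  # mean 0.5, visited 900 times

print(rare > common)  # the bonus forces re-exploration of the rare child
```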
Upvotes: 0 |
2019/04/26 | 714 | 3,053 | <issue_start>username_0: I want to create a simple game which basically consists of 2d circles shooting smaller circles at each other (to make hitbox detection easier for the start). My goal is to create an ai which adapts its own behaviour to the player‘s. For that, i want to use a NN as brain. Every frame, the NN is fed with the same inputs as the player and his output is compared to the players output. (outputs in this case are pressed keys like the up-arrow)
As inputs, I want to use a couple different important factors:
for example, the direction of the enemy player as number from 0 to 1
I also want to input the direction, size and speed of enemy’s and own projectiles and this is where my problem lies. If there was only one bullet per player, it would be easy but I want the number of bullets to be variable so the number of input neurons would have to be variable.
My approaches:
1) use a big amount of neurons and set unused ones to 0
( not elegant at all)
2) Instead of specific values, just use all the pixels‘ rgb values as inputs (would limit the game as colours would deliver all the information) (+factors like speed and direction would probably not have any impact)
Is there a more promising approach to this problem ?
I hope you can give me some inspiration.
Also, is there a difference in ranging input values between 0/1 or -1/1 ?
Thank you in advance, Mo
Edit: In case there aren't enough questions for you, is there a way to make the NN remember things? For example, if I added a mechanic to the game which involves holding a key, I would add an input neuron which inputs 1 if the certain key is pressed and 0 if it isn't, but I doubt that would work. |
2019/04/26 | 567 | 2,126 | <issue_start>username_0: Which libraries can be used for image caption generation?<issue_comment>username_1: I've read on some blogs that for image caption generation you can use the concepts of a CNN and an LSTM model, and build a working image caption generator by combining the two. Before you start on such a project, you need good knowledge of deep learning, Python, working in Jupyter notebooks, the Keras library, Numpy, and natural language processing, and make sure that you have installed all the following necessary libraries:
```
pip install tensorflow keras pillow numpy tqdm jupyterlab
```
source : <https://data-flair.training/blogs/python-based-project-image-caption-generator-cnn/>
Upvotes: 0 <issue_comment>username_2: Image Caption Generation is an interesting problem to work on. I think your question was to know if there are any open-source libraries with built-in functions for Image Captioning. You can build Image Caption Generation models using Frameworks like [**Tensorflow**](https://www.tensorflow.org/api_docs/python/tf), [**PyTorch**](https://pytorch.org/docs/stable/index.html), and [**Trax**](https://trax-ml.readthedocs.io/en/latest/trax.html).
I'd also recommend you to read the following papers:
1. Show and Tell: A Neural Image Caption Generator. [Link](https://arxiv.org/abs/1411.4555)
2. Transfer learning from language models to image caption generators: Better models may not transfer better. [Link](https://arxiv.org/abs/1901.01216)
3. Image Captioning with Unseen Objects. [Link](https://arxiv.org/abs/1908.00047)
Also, here are a couple of blog posts you can read:
* [How to Develop a Deep Learning Photo Caption Generator from Scratch](https://machinelearningmastery.com/develop-a-deep-learning-caption-generation-model-in-python/)
* [Learn to Build Image Caption Generator with CNN & LSTM](https://data-flair.training/blogs/python-based-project-image-caption-generator-cnn/)
* [Image Captioning with Keras](https://towardsdatascience.com/image-captioning-with-keras-teaching-computers-to-describe-pictures-c88a46a311b8)
Upvotes: 1 |
2019/04/27 | 1,780 | 5,118 | <issue_start>username_0: The AI must predict the next number in a given sequence of incremental integers (with no obvious pattern) using Python but so far I don't get the intended result!
I tried changing the learning rate and iterations but so far no luck!
Example sequence: [1, 3, 7, 8, 21, 49, 76, 224]
Expected result: 467
Result found : 2,795.5
Cost: 504579.43
This is what I've done so far:
```
import numpy as np
# Init sequence
data =\
[
[0, 1.0], [1, 3.0], [2, 7.0], [3, 8.0],
[4, 21.0], [5, 49.0], [6, 76.0], [7, 224.0]
]
X = np.matrix(data)[:, 0]
y = np.matrix(data)[:, 1]
def J(X, y, theta):
theta = np.matrix(theta).T
m = len(y)
predictions = X * theta
sqError = np.power((predictions-y), [2])
return 1/(2*m) * sum(sqError)
dataX = np.matrix(data)[:, 0:1]
X = np.ones((len(dataX), 2))
X[:, 1:] = dataX
# gradient descent function
def gradient(X, y, alpha, theta, iters):
J_history = np.zeros(iters)
m = len(y)
theta = np.matrix(theta).T
for i in range(iters):
h0 = X * theta
delta = (1 / m) * (X.T * h0 - X.T * y)
theta = theta - alpha * delta
J_history[i] = J(X, y, theta.T)
return J_history, theta
print('\n'+40*'=')
# Theta initialization
theta = np.matrix([np.random.random(), np.random.random()])
# Learning rate
alpha = 0.02
# Iterations
iters = 1000000
print('\n== Model summary ==\nLearning rate: {}\nIterations: {}\nInitial theta: {}\nInitial J: {:.2f}\n'
      .format(alpha, iters, theta, J(X, y, theta).item()))
print('Training model... ')
# Train model and find optimal Theta value
J_history, theta_min = gradient(X, y, alpha, theta, iters)
print('Done, Model is trained')
print('\nModelled prediction function is:\ny = {:.2f} * x + {:.2f}'
.format(theta_min[1].item(), theta_min[0].item()))
print('Cost is: {:.2f}'.format(J(X, y, theta_min.T).item()))
# Calculate the predicted profit
def predict(pop):
return [1, pop] * theta_min
# Now
p = len(data)
print('\n'+40*'=')
print('Initial sequence was:\n', *np.array(data)[:, 1])
print('\nNext number should be: {:,.1f}'
.format(predict(p).item()))
```
**UPDATE** Another method I tried, but it's still giving wrong results
```
import numpy as np
from sklearn import datasets, linear_model
# Define the problem
problem = [1, 3, 7, 8, 21, 49, 76, 224]
# create x and y for the problem
x = []
y = []
for (xi, yi) in enumerate(problem):
x.append([xi])
y.append(yi)
x = np.array(x)
y = np.array(y)
# Create linear regression object
regr = linear_model.LinearRegression()
regr.fit(x, y)
# create the testing set
x_test = [[i] for i in range(len(x), 3 + len(x))]
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean squared error
print("Mean squared error: %.2f" % np.mean((regr.predict(x) - y) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(x, y))
# Do predictions
y_predicted = regr.predict(x_test)
print("Next few numbers in the series are")
for pred in y_predicted:
print(pred)
```<issue_comment>username_1: I think your code works fine for what it is meant to be doing - fitting a linear regression model. The problem here is that you are using a linear model. A linear model does not have adequate approximation capacity; it will only be able to fit data that is described by a linear function. Here, you gave a random sequence of numbers, which is very difficult for a linear model to approximate. I would advise you to try 2 things:
1) Try something simpler first. Instead of a random sequence of numbers, use a linear sequence, for example a function like $y = 2x$, or maybe an affine function like $y = 2x + 5$. So you would have a sequence like:
$2, 4, 6, 8 ...$ or $7, 9, 11, 13, ...$
If you manage to get that working try a nonlinear function like $x^2$ for example.
2) Instead of using a linear model, try a nonlinear model, for example a polynomial regression model. Especially powerful function approximators are neural networks. In theory, a neural network with a single hidden layer can approximate an arbitrary continuous function under some conditions
([Universal approximation theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem)), so you could try to see how a neural network would solve the problem; there are several open-source neural network libraries that you could try.
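To make the polynomial-regression suggestion concrete, here is a minimal sketch using NumPy's `polyfit` (the degree 3 is just an illustrative choice, not a recommendation):

```python
import numpy as np

# The sequence from the question, indexed by position
y = np.array([1, 3, 7, 8, 21, 49, 76, 224], dtype=float)
x = np.arange(len(y), dtype=float)

def fit_mse(degree):
    """Fit a least-squares polynomial and return (coefficients, training MSE)."""
    coeffs = np.polyfit(x, y, degree)
    residuals = np.polyval(coeffs, x) - y
    return coeffs, float(np.mean(residuals ** 2))

linear_coeffs, linear_mse = fit_mse(1)  # the linear model from the question
cubic_coeffs, cubic_mse = fit_mse(3)    # a nonlinear (polynomial) model

# A higher-degree least-squares fit can never have a larger training MSE,
# because the lower-degree model is a special case of the higher-degree one.
print(f"linear MSE: {linear_mse:.2f}, cubic MSE: {cubic_mse:.2f}")
print("cubic prediction for x=8:", np.polyval(cubic_coeffs, 8.0))
```

Note that a lower training error does not guarantee a better prediction for the next element; with only 8 points, high-degree polynomials can overfit badly.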
Upvotes: 2 <issue_comment>username_2: Like others said, you can't approximate this with a linear regression model.
A polynomial regression model (PRM) that approximates a solution could give you the following:
$y = 0.948 + x + 0.00085x^6$ ~
$y = 237/250 + x + (17/20000)x^6$
For $x = 9$, $y \simeq 462$
or
$y = 0.9258 + x + 0.00086x^6$
For $x = 9$, $y \simeq 466.965$
`UPDATE`
An approximation of course, may be in the range of:
$y = 2^{(x + 1)} - 2^x$ (the model you propose)
`Goodness of fit: 0.968475 and Mean Square Error = 685.111`
Based on this range a better approximation would be:
$y = 2^x + (-1/2)x^2$
with $R^2$
`Goodness of fit = 0.995`
`Mean Square Error: 89.0278`
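The stated approximations are easy to check numerically; a quick sketch evaluating both proposed polynomials at $x = 9$:

```python
# Evaluate the two proposed polynomial approximations at x = 9
x = 9

y1 = 0.948 + x + 0.00085 * x**6   # first approximation
y2 = 0.9258 + x + 0.00086 * x**6  # second approximation

print(round(y1, 3))  # close to 462
print(round(y2, 3))  # close to 466.965
```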
Upvotes: 2 |
2019/04/28 | 1,828 | 7,258 | <issue_start>username_0: These guys here: <https://www.patreon.com/AiAngel> are saying that they've created an AI who can chat and stream. As the so-called administrator "Rogue" said:
[](https://i.stack.imgur.com/MEGez.png)
this chat/streamer bot are no fake.
Also, there's more about the dynamics of this chat/streamer bot on youtube:
<https://www.youtube.com/watch?v=WyFwjHQhlgo&t=463s>
<https://www.youtube.com/watch?v=GtvivssqLhE>
Considering those videos, I really think that this bot is totally fake. I mean, I think that even the most advanced AI bot does not come even close to a real conversation like this one.
Now, of course, you can say that this is an artistic project or something, but the people behind all of this are on Patreon, and the people who are paying these guys are possibly getting totally fooled, which is a serious thing when we're talking about real money.
So, is AIAngel a real bot? (With this question I'm spreading this possible fake to community)<issue_comment>username_1: Without being able to interact with the bot, a Turing test is impossible.
* Based on the manner in which this supposed bot is presented, in videos as opposed to an interactive medium where users can interact with the bot, **the only reasonable conclusion is hoax.**
This assessment is supported by the overly sexualized rendering of the bot combined with requests for donations.
**In order for this project to be considered legitimate, the creators would have to be more transparent about the methods and allow the general public, or at least reliable experts, to interact with the bot.**
(Compare to Microsoft's [Tay](https://en.wikipedia.org/wiki/Tay_(bot)) and [Zo](https://www.zo.ai/).)
**Turing Tests & Pornbots**
The nature of Turing tests is that they are subjective. A bot that an adult human could easily recognize as non-human, a child might perceive as human--the child does not have the requisite knowledge to form a strategy to expose the bot as an automaton.
There is an idea that "pornbots" have been passing the Turing Test for many years now, predicated on the hormonal imperatives of those interacting with the bots, which greatly inhibits their judgement. (The purpose of pornbots is to get consumers to spend money. Their continued existence suggests they do have utility in this regard, and that utility can be seen as a confirmation of the bots passing the test in a specific context.)
**Automation Hoaxes**
One of the most famous machine intelligence hoaxes was [The Turk](https://en.wikipedia.org/wiki/The_Turk), presented as a Chess playing automaton. The supposed machine astounded late 18th century audiences with its skill, but it was later revealed to be a human in a box.
Upvotes: 2 <issue_comment>username_2: No, AiAngel is not a bot.
It's Rogue's software that changes his voice, along with facial recognition software that tracks his movement and copies it to the avatar.
That being said, he has created a very entertaining channel and line of work for himself. By looking at the videos, you can see that he is a true genius at work. He single-handedly wears all the hats for that project, from hardware infrastructure development to software design to graphic art to acting and video editing. Truly one of the great minds of this era.
You can find <NAME> (Angelica) on Rogue Shadow's
* Youtube Channel: <https://www.youtube.com/user/TheRogueShadow1/videos>
* Twitch <https://www.twitch.tv/aiangellive>
* Discord <https://discordapp.com/invite/DFhaYgj>
* Twitter <https://twitter.com/AiAngel13>
* Patreon: <https://www.patreon.com/AiAngel>
..along with several other platforms.
Hope that answers your question with certainty (instead of philosophically)
Upvotes: 3 <issue_comment>username_3: It's pretty easy to tell what's going on with AI Angel Angelica. If you've watched some of the videos, you'll notice the steady progression of the real-time rendering. The first videos were really jittery with a lot of movement problems (awkward mouth movements, fingers unable to move, etc.), compared to the more recent ones with realistic-ish mouth movements and full finger movements. So, they started off with a motion capture program with a special suit to capture movement. As they progressed, the suit got a lot more sensors and the programming itself was streamlined. As far as it being artificial intelligence, no, it's not. There's a person in a suit talking to a camera that's been programmed to pick up movement and translate it onto an avatar. Based on the movements in the most recent video, there are sensors on a few parts of the face, neck, shoulders, elbows, waist, hips, knees and feet, and probably 3 on each finger and 2 on each thumb. Based upon the camera movements, the programming is designed to follow the avatar, which is why it seems to follow her when she moves around; additionally, she's able to turn around while the camera remains in place. There is probably a face camera attached to the suit to track the minute facial changes (think of the movie Avatar) and two or three cameras mounted around the area to pick up the rest of the sensors (which could just be colored dots).
The setup is probably a greenroom with furniture in some of the places the furniture shows up in the 3d 'room' she's in. There is probably another camera or sensor that is used so she can move around when she shows off her computer setup. The model probably just uses a button to switch between them. I don't know if the voice is real or altered, but it has changed over the course of the videos. If it is real, the model is probably female speaking in real-time. Whether or not the model is also the developer is another question. I don't know.
Regardless, the real-time rendering and advanced 3d space analysis requires a pretty hefty computer setup, so whoever is doing this is on the level of a professional with professional-grade computing power. It is WAY beyond anything I'm capable of at the moment, so my hat's absolutely off to the developer and/or model. Extremely impressive.
Upvotes: 1 <issue_comment>username_4: I think it's a real bot. I've seen people talking about how its rendering has improved over time, and they claimed that was an improvement in the motion capture suit. I think that's simply because the program has improved over time. Imagine YOU created AI Angel (assuming it's really AI): would you be more worried about its functionality or its realism first? I would want it to act real before it looks real. If you listen closely, the voice has also slowly been improving as well, which makes sense for both explanations. If fake, the voice changer improved; but if real, they simply gave it a network of human communication to listen to and compare itself to. It's really quite easy (conceptually) to create a program that learns. If you want to learn more about how easy it can be to make an AI teach itself, I recommend Code Bullet on YouTube. He has a variety of videos in which he creates AIs meant to learn how to function in a game at the highest performance.
I know I covered evidence of both sides of the question, but I personally (as a nerdy boy) believe Angelica is really an A.I.
Upvotes: 1 |
2019/04/28 | 1,031 | 4,293 | <issue_start>username_0: Surprisingly, this wasn't asked before - at least I didn't find anything besides some [vaguely related questions.](https://ai.stackexchange.com/q/11717/2444)
So, what is a recurrent neural network, and what are their advantages over regular (or feed-forward) neural networks?<issue_comment>username_1: A *recurrent neural network* (RNN) is an [artificial neural network](https://ai.stackexchange.com/a/11978/2444) that contains backward or self-connections, as opposed to just having forward connections, like in a feed-forward neural network (FFNN). The adjective "recurrent" thus refers to this backward or self-connections, which create loops in these networks.
An RNN can be trained using *back-propagation through time* (BPTT), such that these backward or self-connections "memorise" previously seen inputs. Hence, these connections are mainly used to track temporal relations between elements of a sequence of inputs, which makes RNNs well suited to sequence prediction and similar tasks.
There are several RNN models: for example, RNNs with LSTM or GRU units. An LSTM (or GRU) is an RNN whose units perform a more complex transformation than a unit in a "plain RNN", which performs a linear transformation of the input followed by the application of a non-linear function (e.g. ReLU) to this linear transformation. In theory, "plain" RNNs are as powerful as RNNs with LSTM units. In practice, they suffer from the "vanishing and exploding gradients" problem. Hence, in practice, LSTMs (or similar sophisticated recurrent units) are used.
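As a concrete (hypothetical) sketch of the recurrence, here is the forward pass of a "plain RNN" in NumPy. The weight matrices are random placeholders (a real network would learn them with BPTT); the point is only that the hidden state at each step depends on the whole input history:

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8

# Random placeholder weights (a real network would learn these with BPTT)
W_xh = rng.normal(scale=0.5, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(hidden_size, hidden_size))  # hidden -> hidden (the recurrent loop)
b_h = np.zeros(hidden_size)

def rnn_forward(inputs):
    """Run a plain RNN over a sequence and return the final hidden state."""
    h = np.zeros(hidden_size)
    for x_t in inputs:  # h_t = tanh(W_xh x_t + W_hh h_{t-1} + b)
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
    return h

sequence = rng.normal(size=(5, input_size))
h_forward = rnn_forward(sequence)
h_reversed = rnn_forward(sequence[::-1])

# Unlike a feed-forward network applied to each step independently,
# the final state depends on the *order* of the inputs:
print(np.allclose(h_forward, h_reversed))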
Upvotes: 4 <issue_comment>username_2: Recurrent neural networks (RNNs) are a class of artificial neural network
architecture inspired by the cyclical connectivity of neurons in the brain. They use iterative function loops to store information.
The difference from traditional neural networks, using pictures from [this book](https://www.cs.toronto.edu/~graves/preprint.pdf):
[](https://i.stack.imgur.com/ULBqq.png)
And, an RNN:
[](https://i.stack.imgur.com/qobNg.png)
Notice the difference -- feedforward neural networks' connections
do not form cycles. If we relax this condition, and allow cyclical
connections as well, we obtain recurrent neural networks (RNNs). You can see that in the hidden layer of the architecture.
While the difference between a multilayer perceptron and an RNN may seem
trivial, the implications for sequence learning are far-reaching. An MLP can only
map from **input to output vectors**, whereas an RNN can in principle map from
the **entire history of previous inputs to each output**. Indeed, the equivalent
result to the universal approximation theory for MLPs is that an RNN with a
sufficient number of hidden units can approximate any measurable sequence-to-sequence
mapping to arbitrary accuracy.
Important takeaway:
**The recurrent connections allow a 'memory' of previous inputs to persist in the
network's internal state, and thereby influence the network output.**
Talking in terms of advantages is not appropriate as they both are state-of-the-art and are particularly good at certain tasks. A broad category of tasks that RNN excel at is:
Sequence Labelling
------------------
The goal of sequence labelling is to assign sequences of labels, drawn from a fixed alphabet, to sequences of input data.
Ex: Transcribe a sequence of acoustic features with spoken words (speech recognition), or a sequence of video frames with hand gestures (gesture recognition).
Some of the sub-tasks in sequence labelling are:
**Sequence Classification**
Label sequences are constrained to be of length one. This is referred to as sequence classification, since each input sequence is assigned to a single class. Examples of sequence classification tasks include the identification of a single spoken word and the recognition of an individual
handwritten letter.
**Segment Classification**
Segment classification refers to those tasks where the target sequences consist
of multiple labels, but the locations of the labels -- that is, the positions of the input segments to which the labels apply -- are known in advance.
Upvotes: 4 [selected_answer] |
2019/04/30 | 446 | 1,686 | <issue_start>username_0: There are multiple ways to implement parallelism in reinforcement learning. One is to use parallel workers running in their own environments to collect data in parallel, instead of using replay memory buffers (this is how A3C works, for example).
However, there are methods, like PPO, that use batch training on purpose. How is parallelism usually implemented for algorithms that still use batch training?
Are gradients accumulated over parallel workers and the combined? Is there another way? What are the benefits of doing parallelism one way over another?<issue_comment>username_1: OpenAI have a post on that: <https://openai.com/blog/openai-five/>
They use a myriad of *rollout workers* that collect data for 60 seconds and push that data to a GPU cluster where gradients are computed for batches of 4096 observations which are then averaged.
PPO is actually designed to allow this kind of parallelisation as it uses *trajectory segments* with a fixed size of $T$ to collect data, e.g. 60 seconds for OpenAI Five, where $T$ is supposed to be "much less than the episode length" (p.5 of [PPO paper](http://arxiv.org/pdf/1707.06347v2)).
Upvotes: 2 <issue_comment>username_2: The paper [Dota 2 with Large Scale Deep Reinforcement Learning](https://arxiv.org/pdf/1912.06680v1.pdf) goes into greater detail than the initial blog posts.
They call their distributed training framework *Rapid*, which is also used in some of their robotics work, such as the paper [Learning Dexterous In-Hand Manipulation](https://arxiv.org/pdf/1808.00177.pdf), where they discuss a smaller scale deployment of Rapid (as compared to Dota2/OpenAI V) in section 4.3.
Upvotes: 1 |
2019/04/30 | 1,448 | 6,137 | <issue_start>username_0: What are the areas/algorithms that belong to reinforcement learning?
TD(0), Q-Learning and SARSA are all temporal-difference algorithms, which belong to the reinforcement learning area, but is there more to it?
Are the dynamic programming algorithms, such as policy iteration and value iteration, considered as part of reinforcement learning? Or are these just the basis for the temporal-difference algorithms, which are the only RL algorithms?<issue_comment>username_1: The dynamic programming algorithms (like *policy iteration* and *value iteration*) are often presented in the context of reinforcement learning (in particular, in the book [Reinforcement Learning: An Introduction](http://incompleteideas.net/book/RLbook2020.pdf) by Barto and Sutton) because they are very related to reinforcement learning algorithms, like $Q$-learning. They are all based on the assumption that the environment can be modelled as an MDP.
However, dynamic programming algorithms require that the *transition model* and *reward functions* of the underlying MDP are known. Hence, they are often referred to as **planning** algorithms, because they can be used to find a policy (which can be thought of as **plan**) given the "dynamics" of the environment (which is represented by the MDP). They just exploit the given "physical rules" of the environment, in order to find a policy. This "exploitation" is referred to as a *planning algorithm*.
On the other hand, $Q$-learning and similar algorithms do not require that the MDP is known. They attempt to find a policy (or value function) by interacting with the environment. They eventually infer the "dynamics" of the underlying MDP from experience (that is, the interaction with the environment).
If the MDP is not given, the problem is often referred to as the **full reinforcement learning problem**. So, algorithms like $Q$-learning or SARSA are often considered reinforcement learning algorithms. The dynamic programming algorithms (like policy iteration) do not solve the "full RL problem", hence they are not always considered RL algorithms, but just planning algorithms.
There are several categories of RL algorithms. There are temporal-difference, Monte-Carlo, actor-critic, model-free, model-based, on-policy, off-policy, prediction, control, policy-based or value-based algorithms. These categories can overlap. For example, $Q$-learning is a temporal-difference (TD), model-free, off-policy, control and value-based algorithm: it is based on a temporal-difference (TD) update rule, it doesn't use a model of the environment (model-free), it uses a behavioural policy that is different from the policy it learns (off-policy), it is used to find a policy (control) and it attempts to approximate a value function rather than directly the policy (value-based).
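To make the planning-vs-RL distinction concrete, here is a minimal value iteration sketch on a made-up two-state MDP. Note that the transition model `P` and reward function `R` must be given up front, which is exactly what $Q$-learning does not require:

```python
import numpy as np

# Toy MDP: 2 states, 2 actions, known (deterministic) dynamics.
# P[s, a] = next state, R[s, a] = reward. State 1 is absorbing with zero reward.
P = np.array([[0, 1],
              [1, 1]])
R = np.array([[0.0, 1.0],
              [0.0, 0.0]])
gamma = 0.9

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality update: V(s) <- max_a [R(s,a) + gamma * V(P(s,a))]
    V_new = np.max(R + gamma * V[P], axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = np.argmax(R + gamma * V[P], axis=1)
print(V)       # V(0) -> 1.0 (take action 1 for reward 1), V(1) -> 0.0
print(policy)  # greedy policy: action 1 in state 0
```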
Upvotes: 5 [selected_answer]<issue_comment>username_2: In [Reinforcement Learning: An Introduction](http://incompleteideas.net/book/RLbook2018.pdf) the authors suggest that the *topic* of reinforcement learning covers analysis and solutions to problems that can be framed in this way:
>
> Reinforcement learning, like many topics whose names end with “ing,” such as machine
> learning and mountaineering, is simultaneously a problem, a class of solution methods
> that work well on the problem, and the field that studies this problem and its solution
> methods. It is convenient to use a single name for all three things, but at the same time
> essential to keep the three conceptually separate. In particular, the distinction between
> problems and solution methods is very important in reinforcement learning; failing to
> make this distinction is the source of many confusions.
>
>
>
And:
>
> Markov decision processes are intended to include just
> these three aspects—sensation, action, and goal—in their simplest possible forms without
> trivializing any of them. Any method that is well suited to solving such problems we
> consider to be a reinforcement learning method.
>
>
>
So, to answer your questions, the simplest take on this is: yes, there is more (much more) to RL than the classic value-based optimal control methods of SARSA and Q-learning.
Including DP and other "RL-related" algorithms in the book allows the author to show how closely related the concepts are. For example, there is little in practice that differentiates Dyna-Q (a planning algorithm closely related to Q-learning) from experience replay. Calling one strictly "planning" and the other "reinforcement learning" and treating them as separate can reduce insight into the topic. In many cases there are hybrid methods or even a continuum between what you may initially think of as RL and "not RL" approaches. Understanding this gives you a toolkit to modify and invent algorithms.
Having said that, the book is not the sole arbiter of what is and isn't reinforcement learning. Ultimately this is just a classification issue, and it only matters if you are communicating with someone and there is a chance for misunderstanding. If you name which algorithm you are using, it doesn't really matter whether the person you are talking to thinks it is RL or not RL. It matters what the problem is and how you propose to solve it.
Upvotes: 2 <issue_comment>username_3: It seems that another rather controversial point is the inclusion of evolutionary algorithms among reinforcement learning algorithms.
Sutton & Barto do not include them. They argue that
[](https://i.stack.imgur.com/2AxGA.png)
And also:
[](https://i.stack.imgur.com/YR04e.png)
Other people involved with the subject, such as the HSE University (which offers a course on Coursera), Maxim Lapan, or P. Palanisamy (both Packt authors), include them in the subject. Apparently they support the idea that RL is defined by changes in performance resulting from interaction with the environment.
For instance, Lapan classifies the cross-entropy method as model-free, policy-based and on-policy.
Upvotes: 1 |
2019/05/01 | 396 | 1,638 | <issue_start>username_0: Can a normal neural network work as well as a convolutional network? If yes, how much more time and how many more neurons would it need compared to a CNN?<issue_comment>username_1: *NNs won't be able to reach the performance of CNNs, in general.*
By a Neural Network, I am assuming you're referring to 'vanilla' vector-based neural networks.
Let's take the MNIST dataset: it performs almost similarly in both NNs and CNNs, the reason being digits of almost the same size and similar spatial drawings. To put it simply, all the digits roughly take up the same area as the other 60k. If you were to zoom in or zoom out these digits, the trained NN may not perform well.
Why?
Plain NNs lack the ability to extract 'position independent' features. A cat is a cat to your eye, no matter whether you saw it in the center of an image, or left corner. CNNs use 'filters' to extract 'templates' of cat. Hence CNNs can 'localize' what it is searching for.
You could say, NNs look at the 'whole' data to make a sense, CNNs look at the 'special' features at certain parts to make a sense.
This is also a reason why CNNs are popular. NNs are suited to their own applications. Every input node in a vectorized neural network represents a feature, and that feature is tied to a purpose throughout training, i.e. its position is fixed throughout training.
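A tiny sketch of the 'position independent' point, using 1-D cross-correlation (the sliding operation behind CNN filters) on a made-up signal: the filter's peak response is the same wherever the pattern appears, whereas a fixed dense weight vector responds differently when the pattern moves.

```python
import numpy as np

pattern = np.array([1.0, 2.0, 1.0])  # the "cat" we are looking for

def signal_with_pattern_at(pos, length=20):
    s = np.zeros(length)
    s[pos:pos + len(pattern)] = pattern
    return s

left = signal_with_pattern_at(2)
right = signal_with_pattern_at(14)

# A convolutional filter slides over the input, so its *maximum*
# response does not depend on where the pattern sits.
resp_left = np.correlate(left, pattern, mode="valid")
resp_right = np.correlate(right, pattern, mode="valid")
print(resp_left.max(), resp_right.max())  # equal peaks

# A dense layer ties each weight to a fixed position, so shifting
# the pattern changes its output.
dense_w = signal_with_pattern_at(2)  # weights "tuned" to the left position
print(dense_w @ left, dense_w @ right)  # strong vs. zero response
```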
Upvotes: 3 [selected_answer]<issue_comment>username_2: Yes. In theory, a neural network with a single hidden layer can approximate any continuous function. In practice, such a network would have to be much larger than a CNN with equivalent functionality and would therefore be much harder to train.
Upvotes: 0 |
2019/05/02 | 662 | 2,538 | <issue_start>username_0: Today, AI is mainly driven by own-profit-oriented companies (e.g. Facebook, Amazon, Google). Admittedly, there's a lot of AI in the health sector (even in the public health sector) and there's a lot of AI in the sustainability sector – but also mostly driven by obviously own-profit-oriented companies (e.g. Tesla, Uber, Google).
On the other side, one often hears from hard-core economists that [centrally planned (= public-profit-oriented) economies](https://www.wikiwand.com/en/Planned_economy) (or economic principles) are "the work of the devil" - and that they failed all over history (sometimes for understandable reasons).
But intelligently planning global economic processes and applying these plans with the help of state-of-the-art AI - given the huge amounts of really big data available, and given the argument that globalization is finally for the benefit of all - would seem to be a rewarding endeavour, at least for parts of the AI community.
Why isn't this endeavour undertaken more decidedly? (Or is it?)
Where do I find approaches to apply AI to global economic processes? (Not only describing and understanding but mainly planning and executing?)<issue_comment>username_1: It's currently just too complex
===============================
The different sources of information are too varied, in economics this is often referred to as a [local knowledge](http://www.fao.org/3/y5610e/y5610e01.htm) problem, which hampers many large scale plans. Humans can react to slight differences like respecting local traditions, landscapes, history but an artificial intelligence would (currently at least) struggle not to generalise over such a large scale as a whole country's economy.
The real work in this case (and actually most 'AI' tasks) would be collecting all the necessary data. Here that part job is currently insurmountable.
Currently a human lead planned economy would do a better job.
Upvotes: 2 <issue_comment>username_2: I found this article from <NAME> (MacKinsey) helpful: [Applying artificial intelligence for social good](https://www.mckinsey.com/featured-insights/artificial-intelligence/applying-artificial-intelligence-for-social-good)
Upvotes: 0 <issue_comment>username_2: Today I have learned about the [Cybersyn project](https://en.wikipedia.org/wiki/Project_Cybersyn), developed 1973 in Chile. It went into the "right" direction - but could not achieve its goal because of the [coup d'état](https://en.wikipedia.org/wiki/1973_Chilean_coup_d%27%C3%A9tat).
Upvotes: 0 |
2019/05/02 | 1,904 | 8,581 | <issue_start>username_0: This might be a trivial question but I couldn't find any reliable answers on the internet.
Almost all the neural network architectures for self-driving cars that I have seen on the internet are feed-forward networks; previous frames do not help in making the current decision.
I have read somewhere that Tesla uses the last two frames captured to make a decision; even then, two frames will not be that useful in this case.
This might not be very helpful when predicting things, e.g. lane cut-ins, as the system needs to observe the behavior of the vehicle that is going to cut in (such as its turn indicator, or the vehicle veering towards the center lane over time) in order to predict.
Can someone explain if this is the way production self-driving cars such as Tesla's work?
[](https://i.stack.imgur.com/DOtqF.png)
Or Is it something like the below?
[](https://i.stack.imgur.com/83beA.png)
Or are they using something like a many-to-one recurrent net where inputs are CNN vectors of the previous few frames and the output is the control?<issue_comment>username_1: It varies quite substantially between different self-driving paradigms (rather obviously), but for the most part the vast majority of implementations are using a variety of different reference frames in order to make predictions.
For example, Tesla's Autopilot is being fed many different camera feeds as well as radar and ultrasonic signals that are processed in a variety of temporal contexts.
While, for the most part, all of these programs are very tight-lipped, we can make a variety of assumptions based on the information available and educated assumptions.
As with many large, complex ML/AI systems, there is a large amount of compartmentalization where many different connectionist (or sometimes classic) models are combined (à la the [YouTube recommendation system](https://ai.google/research/pubs/pub45530)). Tesla is likely utilizing recurrent and convolutional networks where particular modules (combinations of models) decide on specific contexts (temporal or signal-based). These outputs are then most likely fed into an actor network which makes real-time decisions.
Upvotes: 1 <issue_comment>username_2: **Visualization Appreciated**
The diagrams are nicely thought out. As you refine your comparative design visualizations, you might use something like Inkscape to draw them for web publication, whether or not you decide to submit a paper to a publisher or license your ideas Creative Commons.
**Web Research Realities**
The reliability of answers from the internet is a function of the author, how much time she or he took in researching the question, what search terms are used, and where the search term is entered. This question falls among the more challenging to research: cases where the most reliable answers are in the result sets of experiments run in proprietary government and corporate research centers.
If the GM or Tesla or Toyota or DoD research results were posted on line, someone would likely be fired, sued, and possibly jailed and a team of lawyers would hunt down all references to the posted information, employing international agreements and backbone level content filters to eliminate the dissemination of secrets.
**A Better Research Approach**
We can determine with a fairly high confidence that a decision based on a single frame is much less likely to lead to collision avoidance, beginning with a simple thought experiment.
>
> Two kids are playing creatively and decide to make up a game that involves a ball. One of the rules of the game is that each player can only run backward, not forward or sideways. That's what makes the game fun for these kids. It's silly because everyone wants to run forward to get to the ball. The rest of the rules of the game are not particularly relevant.
>
>
> A machine driver is processing an image and goes to make the decision. In this case, the term decision indicates the use of a rules based system, whether fuzzy and probabilistic or of the former pure Boolean type, but that is not particularly relevant either. This thought experiment applies to a learned response arising from the training of an architecture built primarily of artificial networks as well.
>
>
> Consider now the case where no decision or learned response is invoked on the basis of the direction of the ball travel or its proximity to the street and path of the vehicle being driven because the kids are facing away from the ball. It would not be reasonable to assume that the training data included this spontaneously made-up game. The result of the combination of child creativity and single frame selection leads to a delayed collision avoidance tactic, at best.
>
>
>
In contrast, if two or three frames are included in the analysis, the feature of the ball moving toward the street and the children, regardless of their orientation with respect to the ball and the street, may be detected as a feature of the entire system through which the vehicle is driving.
This is one of an astronomical number of examples where training without the temporal dimension will lead to a much higher likelihood of improper trajectory projection from pixel data, on the basis of any reasonable training set, than had training and use of the pixel data included the temporal dimension.
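As an illustrative sketch only (none of this comes from any production driving stack), keeping "two or three frames" available usually amounts to a small rolling buffer whose contents are stacked into a single model input; the class name and buffer length below are invented for the example:

```python
from collections import deque

class FrameHistory:
    """Keep the most recent k frames so a model can see motion, not just a snapshot."""
    def __init__(self, k=3):
        self.k = k
        self.frames = deque(maxlen=k)  # oldest frames are dropped automatically

    def push(self, frame):
        self.frames.append(frame)
        # Before k frames have arrived, pad by repeating the first frame.
        while len(self.frames) < self.k:
            self.frames.appendleft(frame)

    def stacked(self):
        # A temporal stack: index 0 is the oldest frame, index k-1 the newest.
        return list(self.frames)

history = FrameHistory(k=3)
for frame in ["frame0", "frame1", "frame2", "frame3"]:
    history.push(frame)
print(history.stacked())  # the three most recent frames
```

In a real pipeline the strings would be image tensors and the stack would form the channel or depth dimension of the network input, but the buffering logic is the same.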
**Mathematical Analysis**
When the results of trials with real vehicles move from the domain of corporate intellectual property and the domain of governmental national security artifacts into the public domain, we will see our empirical evidence. Until then we can rest on theory. The above thought experiment and others like it can be represented formally. Consider the hypothesis
$$ P\_{\mathcal{A} = \mathcal{E}(S)} > P\_{\mathcal{A} = \mathcal{E}(\vec{S})} \; \text{,}$$
where $P\_c$ is the probability given condition $c$, $\mathcal{A}$ is the actuality (*a posteriori* outcome) and $\mathcal{E}$ is the expectation function applied to instantaneous sensory information $S$ on the left hand side of the inequality versus the recent history of instantaneous sensory information $\vec{S}$ on the right hand side.
If we bring intermediate actions (decisioning after each frame is acquired) into the scope of this question, then we may pose the hypothesis in a way that involves Markov's work on causality and prediction in chains of events.
>
> The accuracy of decisioning based on optical acquisition in the context of collision avoidance is higher without the Markov property, in that historical optically acquired data in conjunction with new optically acquired data will produce better trajectory oriented computational collision avoidance results than without historical data.
>
>
>
Either of these might take considerable work to prove, but they are both likely to be provable in probabilistic terms with a fairly reasonable set of mathematical constraints placed on the system up front. We know this because the vast majority of thought experiments show an advantage to a vector of frames over a single frame for the determination of actions that are most likely to avoid a collision.
**Design**
As is often the case, convolution kernel use in a CNN is likely to be the best design to recognize edge, contour, reflectivity, and texture features in collide-able object detection.
Assembly of trajectories (as a somewhat ethereal internal intermediate result) and subsequent determination of beeping, steering, acceleration, braking, or signaling is likely best handled by recurrent networks of some type, the most publicly touted being b-LSTM or GRU based networks. The attention based handling and preemption discussed in many papers regarding real time system control are among the primary candidates for eventual common use among the designs. This is because changes in focus are common during human driving operations and are even detectable in birds and insects.
The simple case is when an ant detects one of these.
* A large insurmountable object
* A predator
* A morsel of good smelling food
* Water
The mode of behavior switches, probably in conjunction with the neural pathways for sensory information, when a preemptive stimulus is detected. Humans pilot and drive aircraft and motor vehicles this way too. When you pilot or drive next, bring into consciousness what you have learned unconsciously and this preemptive detection and change of attention and focus of task will become obvious.
Upvotes: 0 |
2019/05/04 | 1,005 | 4,320 | <issue_start>username_0: Suppose I have an MDP $(S, A, p, R)$ where the $p(s\_j|s\_i,a\_i)$ is uniform, i.e given an state $s\_i$ and an action $a\_i$ all states $s\_j$ are equally probable.
Now I want to find an optimal policy for this MDP. Can I just apply the usual methods like policy gradients, actor-critic to find the optimal policy for this MDP? Or is there something I should be worried about?
At least, in theory, it shouldn't make any difference. But I'm wondering are there any practical considerations I should be worried about? Should the discount factor, in this case, be high?
The reward function here depends both on states and actions and is not uniformly random.<issue_comment>username_1: Convergence guarantees for basic RL algorithms like policy gradient / actor-critic methods make no assumptions about the dynamics of the MDP. So, theoretically, you don't need to change much.
Practically, when the number of possible trajectories from any given state is so high, the return from each state will have high variance. This means you'll have to collect much more experience for your estimates of expected return to converge to their true values. Intuitively, an environment with high uncertainty requires the agent to do more knowledge-gathering to behave optimally.
My real advice to you depends on what exactly you're trying to do. If you want to have the kind of agent that **could** learn to behave well in an extremely random environment, then all you need to worry about is giving it enough experience to learn from.
(Your agent should also take a little longer before deciding it's "confident" in its evaluation of different states. That is, don't behave greedily before you're sure your estimates are accurate. Explore adequately. This advice is only relevant if your MDP dynamics aren't actually completely uniform.)
If, however, you want to train an RL agent **specifically** to solve a problem formulated as an MDP with **uniformly** random dynamics, then I would tell you to not waste your time. We know before spending the computation that all policies would be equally good/bad in this setting. Since actions are irrelevant to the environment, it would be inefficient to deploy an RL agent that will only learn that the action it takes doesn't matter.
---
As noted in the comments, the last paragraph is only true when reward from each state-action pair $(s,a)$ is also uniformly random. If it is not, just being aware of the high variance and giving your agent a lot of experience should do the trick.
Upvotes: 2 <issue_comment>username_2: When the next state selection is not driven by any meaningful dynamics (i.e. it is independent of the starting state $s$ and the action taken $a$), but the rewards received do depend somehow on $s$ and $a$, then the MDP you describe also fits something called a [Contextual Bandit Problem](https://towardsdatascience.com/contextual-bandits-and-reinforcement-learning-6bdfeaece72a), where there is no control over state due to action choice, and thus no incentive to choose actions other than for their potential for immediate reward.
Any algorithm capable of solving a full MDP can also be put to use attempting to solve a contextual bandit problem, as the MDP framework is a strictly more general case of the contextual bandit problem, and can model such an environment. However, this is typically going to be inefficient, as MDP solvers make no assumptions about state transition dynamics and need to experience and learn them. Whilst if you start with an algorithm designed to solve a contextual bandit problem, you have the assumption of randomised state built in to the algorithm, it does not need to be learned, and the learning process should be more efficient.
Alternatively, if you only have RL solvers available, you can reduce variance and get the same effective policy by setting discount factor, $\gamma = 0$.
If for some reason you still want or need a long-term discounted value prediction from your policy, you can take the mean predicted value of some random states (or even of all the states if there are few enough of them) and multiply by $\frac{1}{1-\gamma}$ for whatever discount factor you want to know it for. Or if predicting for a time horizon, just multiply by number of steps to the horizon.
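To sanity-check that scaling factor (just illustrative arithmetic, not from any particular RL library): with a constant mean reward $\bar{r}$ per step, the infinite discounted sum is $\bar{r}/(1-\gamma)$, which a truncated geometric series reproduces numerically:

```python
# Numerically check that sum over t of gamma^t * r_bar approaches r_bar / (1 - gamma).
gamma = 0.9
r_bar = 0.5  # hypothetical mean one-step reward

discounted_sum = sum(r_bar * gamma ** t for t in range(1000))  # truncated series
closed_form = r_bar / (1 - gamma)

print(discounted_sum, closed_form)  # both are approximately 5.0
```

So multiplying a mean per-step value prediction by $\frac{1}{1-\gamma}$, as suggested above, is just applying this closed form.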
Upvotes: 2 |
2019/05/05 | 927 | 3,741 | <issue_start>username_0: With the increasing complexity of reCAPTCHA, I wondered about the existence of some problem, that only a human will ever be able to solve (or that AI won't be able to solve as long as it doesn't reproduce exactly the human brain).
For instance, the distorted text used to be only possible to solve by humans. Although...
>
> The computer now got the [distorted text] test right 99.8%, even in the most challenging situations.
>
>
>
It seems also obvious that the distorted text can't be used for real human detection anymore.
I'd also like to know whether an algorithm can be employed for the creation of such a problem (as for the distorted text), or if the originality of a human brain is necessarily needed.<issue_comment>username_1: Informally, [AI-complete](https://en.wikipedia.org/wiki/AI-complete) problems are the most difficult problems for an AI. The concept is [not mathematically defined yet](http://www.aaai.org/Papers/Symposia/Spring/2007/SS-07-05/SS07-05-026.pdf), as e.g. [NP-complete](https://en.wikipedia.org/wiki/NP-completeness) problems. However, intuitively, these are the problems that require a human-level or [*general* intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence) to be solved.
*Real* natural language understanding is *believed* to be an AI-complete problem (this is also discussed in the paper [Making AI Meaningful Again](https://arxiv.org/abs/1901.02918) by <NAME> and <NAME>, 2019). There are a lot more AI-complete problems. For example, problems that involve emotions.
Upvotes: 5 [selected_answer]<issue_comment>username_2: This is more of a comment and philosophical opinion, but I don’t believe that there are any problems an AI couldn’t solve, that a human can. Being new to this forum, I cannot make it a comment on the question (and it would probably be too long) — I preemptively ask for your forgiveness.
AI Eventually Will Mimic Humans (and surpass them)
==================================================
Humans by nature are logical. Logic is learned or hardwired, and influenced by observation and chemical impulses.
As long as an AI can be trained to act like a human, it will be able to act like one. Currently, those behaviors are limited to technology (space, connections, etc), which the human brain has been optimized to rule out or disregard certain “fluff” automatically enabling it certain *super* capabilities. For instance, not everything seen is registered through the brain; often, the brain performs differential comparisons and updates for changes to reduce processing time and energy. It will only be a matter of time before AI can also be programmed to behave this way, or technological advancements will allow it to not need some of this function, which will allow it to leapfrog humans.
In the current state, we recognize humans are sometimes irrational or inconsistent. In those cases, AI could mimic human limitations with configured randomization patterns, but again, there really won’t be a need since it can be programmed and learn those patterns automatically (if necessary).
It all comes down to consumption of data, retention of information, and learned corrections. So, there is no problem that a human can perform (to my knowledge) that AI couldn’t theoretically ever perform. Even in the case of chemistry. As we are manufacturing foods and organs, an AI could also, theoretically, one day reproduce and survive via biological functions.
Instead of the question being binary about human capability vs that of artificial intelligence, I’d be more interested to see what people think are the more challenging things humans can do, which will take AI time to accomplish.
Upvotes: 3 |
2019/05/05 | 838 | 3,416 | <issue_start>username_0: The [Prioritized Experience Replay](https://arxiv.org/pdf/1511.05952.pdf) paper gives two different ways of sampling from the replay buffer. One, called "proportional prioritization", assigns each transition a priority proportional to its TD-error.
$$p\_i = |\delta\_i|+\epsilon$$
The other, called "rank-based prioritization", assigns each transition a priority inversely proportional to its rank.
$$p\_i = 1/\text{rank}(i)$$
where $\text{rank}(i)$ is the rank of transition $i$ when the replay buffer is sorted according to $|\delta\_i|$.
The paper goes on to show that the two methods give similar performance for certain problems.
Are there times when I should choose one sampling method over the other?<issue_comment>username_1: The authors of that paper hypothesized that rank-based prioritization would be more robust to outliers. They suggested that rank-based sampling would be preferred for this reason. However, as they noted later, the fact that DQN clips rewards anyways weakens this argument.
If you're going to use someone else's ready-made code for your prioritized experience replay, then you'll probably end up using proportional sampling. Every implementation I've been able to find, including OpenAI's [baselines implementation](https://github.com/openai/baselines/blob/master/baselines/deepq/replay_buffer.py), uses proportional sampling. If you were going to write your own, proportional sampling might be preferred for being simpler to implement (so less error-prone).
Comparing these two sampling methods is complicated by the fact that rank-based sampling involves a hyperparameter that determines how often you sort your replay buffer. The authors of the original PER paper only sorted every $n$ timesteps, giving the nice amortized time of $O(\log n)$, where $n$ is the size of the replay buffer. Sampling takes constant time, so sampling a minibatch of size $k$ takes $O(k)$ time.
Proportional sampling doesn't involve sorting, but it does need to maintain a [sum tree](https://www.fcodelabs.com/2019/03/18/Sum-Tree-Introduction/) structure, which takes $O(\log n)$ time each time we add to the buffer. Sampling also takes $O(\log n)$ time, so sampling a minibatch of size $k$ takes $O(k\log n)$ time.
If we only sort our replay buffer every $n$ timesteps, then rank-based sampling is faster. However, because the buffer is almost always only approximately sorted, the distribution we sample from is only a rough estimate of the distribution we wanted. It's not clear that this estimation would be accurate enough to be performant when the replay buffer is scaled up in size past the $n=10^6$ transitions used in the paper. I haven't seen a study that compares performance for different frequencies of sorting and different buffer sizes.
So, rank-based sampling might be faster, but it also might not work as well. Adjustment of the sorting frequency hyperparameter might be necessary. The simpler and surer approach would be to use proportional sampling with clipped TD-errors.
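To make the proportional case concrete, here is a minimal and deliberately simplified sum-tree sketch (it assumes the capacity is a power of two); real implementations such as the baselines one add capacity management, priority exponents and importance-sampling weights on top of this idea:

```python
import random

class SumTree:
    """Binary tree whose leaves hold priorities; each parent stores the sum
    of its children. Updating a leaf and sampling by prefix-sum both take
    O(log n), matching the complexities discussed above."""
    def __init__(self, capacity):
        self.capacity = capacity            # must be a power of two here
        self.tree = [0.0] * (2 * capacity)  # leaves live in tree[capacity:]

    def update(self, idx, priority):
        pos = idx + self.capacity
        self.tree[pos] = priority
        pos //= 2
        while pos >= 1:  # propagate the new sum up to the root
            self.tree[pos] = self.tree[2 * pos] + self.tree[2 * pos + 1]
            pos //= 2

    def total(self):
        return self.tree[1]  # root holds the sum of all priorities

    def sample(self):
        # Draw a point in [0, total) and descend to the leaf covering it,
        # so leaf i is chosen with probability p_i / total.
        s = random.uniform(0, self.total())
        pos = 1
        while pos < self.capacity:
            left = 2 * pos
            if s < self.tree[left]:
                pos = left
            else:
                s -= self.tree[left]
                pos = left + 1
        return pos - self.capacity  # index of the sampled transition

tree = SumTree(capacity=4)
for i, p in enumerate([1.0, 2.0, 3.0, 4.0]):
    tree.update(i, p)
print(tree.total())  # 10.0
```

With these priorities, transition 3 is sampled four times as often as transition 0, which is the proportional behaviour described in the paper.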
Upvotes: 3 <issue_comment>username_2: It appears that the rank based method would be slightly better in terms of time complexity because you sort only once for k number of sampling operations. This article (not mine) explains in detail clearly. <https://towardsdatascience.com/how-to-implement-prioritized-experience-replay-for-a-deep-q-network-a710beecd77b>
Upvotes: 1 |
2019/05/06 | 584 | 2,420 | <issue_start>username_0: There are different kinds of machine learning algorithms, both univariate and multivariate, that are used for time series forecasting: for example ARIMA, VAR or AR.
Why is it harder (compared to classical models like ARIMA) to achieve good results using neural network based algorithms (like ANN and RNN) for multi step time series forecasting?<issue_comment>username_1: ANNs & RNNs can be used to create some great models in many different domains, including time-series forecasting. However, across all of these domains, they suffer from the problem of [hyper-parameter optimization](https://en.wikipedia.org/wiki/Hyperparameter_optimization). Because neural networks are so flexible, it is not clear, at the outset, which [arrangement of neurons](https://cdn-images-1.medium.com/max/2400/1*cuTSPlTq0a_327iTPJyD-Q.png) will be most effective to solve a given problem. It is also not clear how fast the network should learn from new signals, what sorts of activation functions to use in the different layers of the network, and which of [several possible regularization methods](https://www.analyticsvidhya.com/blog/2018/04/fundamentals-deep-learning-regularization-techniques/) might be best. Making these decisions well requires either years of practice and experience, or a lot of trial and error (or, maybe both!).
In contrast, a regression-based method like ARMA will typically have just a couple of simple hyperparameters, each of which has a clear, intuitive, meaning. This means that an untrained practitioner can probably get an ARMA result that is close to the result of a trained practitioner using ARMA.
Essentially: neural networks are brittle and sensitive to the choice of hyper-parameters, while regression generally is not.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Given the (usual) higher architectural complexity of ML models compared to more classical forecasting models, ML models might also require more data, otherwise they might just overfit the training dataset.
Furthermore, online learning (or training) of a neural network using stochastic gradient descent (that is, one example at a time) might also be numerically unstable and statistical inefficient (so convergence might be slow). See [Towards stability and optimality in stochastic gradient descent](https://arxiv.org/pdf/1505.02417.pdf) for more details and a solution (AI-SGD).
Upvotes: 2 |
2019/05/07 | 831 | 3,586 | <issue_start>username_0: *I hope this question is ok here, but since I have found a tag which deals with these issues (profession), I'll ask away. I also hope this may be useful to other people with similar doubts, since I am failing to find valuable information on this topic online.*
---
I am interested in the theoretical side of CS, such as computability, logic, complexity theory and formal methods. At the same time, I am deeply fascinated by Artificial Intelligence and the questions it poses to our understanding of the notion of intelligence and what does it mean to be a human being.
In general, is AI a more "applied"/engineeristic field, or are there theoretical aspects to research in?
In short: **If I prefer formal/theoretical compsci, is AI a bad career choice?**
(note: I am asking this because I am a CS undergrad considering getting into a AI MSc).<issue_comment>username_1: Any "serious" AI program is full of theoretical and mathematical foundations (you will study plenty of statistics and optimisation methods) anyway, but I would say that much useful AI today is an applied or engineering area. Anyhow, you will need to be comfortable with a lot of mathematical details (especially, linear algebra and calculus). If you're more interested in statistics, optimisation or robotics, you should go for AI.
If you study "pure" computer science, you should also have one or two courses related to AI (at least, one ML course). If you are more interested in traditional CS algorithms, data structures, software engineering, operating systems, compilers, theory of computation, computer networking, programming languages and/or databases, then you should go for CS.
However, before enrolling in a master's program, you should really have a look at the details of the courses they offer. Furthermore, you might also take into account that, during your studies, you might change idea regarding one subject.
Upvotes: 2 <issue_comment>username_2: There are certainly results in theoretical computer science / pure math with deep implications for AI. But to my knowledge these results typically aren't labeled as results of artificial intelligence, but as something more congruent in that particular field (For example in CS, we might say "agent with unbounded computational power"; in math might say some statement is "decidable/undecidable" with respect to some system). Of course they still matter in the field of AI, but you need to know what you are looking for.
See my question [What are some implications of Gödel's theorems on AI research?](https://ai.stackexchange.com/questions/3209/what-are-some-implications-of-g%C3%B6dels-theorems-on-ai-research) for some examples. Or you can look up [MIRI's research guide](https://intelligence.org/research-guide/) for a better idea of what existing work is out there that links formal math / CS to AI research.
Another point to raise is that there is no good definition of AI in fields outside of normal discourse (or even within, perhaps), so its difficult to decide what discussions pertains to the study of AI. Questions like whether ZFC with/without choice is expressive enough might not be on the mind of most AI researchers, but could still have some implications.
So to answer you question more directly, there is certainly a field of study regarding theoretical AI. Regarding whether or not its a good choice is something for you to decide, but it is (in my humble and not-very-well-educated opinion) very difficult field that isn't very popular, and has not seen major progress in many years.
Upvotes: 2 |
2019/05/10 | 932 | 3,904 | <issue_start>username_0: I am reading the paper Convolutional Sequence to Sequence Learning by Facebook AI researchers and having trouble to understand how the dimensions of convolutional filters work here. Please take a look at the relevant part of the paper below.
[](https://i.stack.imgur.com/1xSku.png)
Let's say the input to the kernel X is k\*d (say k=5 words of d=300 embedding dimensionality). Therefore the input is 5\*300. In a computer vision task a kernel would slide over parts of the image; in NLP you usually see the kernel taking up the whole width of the input matrix. So I would expect the kernel to be m\*d (e.g. 3\*300 - slide over 3 words and look at their whole embeddings).
However, the kernel here is of dimensionality 2d x kd which in our hypothetical example would be 600\*1500. I don't understand how this massive kernel would slide over an input that is by far lower dimensional (5\*300). In computer vision you could zero-pad the input, but here zero-padding would basically turn the input matrix into mostly zeros with only a handful of meaningful numbers.
Thanks for shedding some light on it!<issue_comment>username_1: Any "serious" AI program is full of theoretical and mathematical foundations (you will study plenty of statistics and optimisation methods) anyway, but I would say that much useful AI today is an applied or engineering area. Anyhow, you will need to be comfortable with a lot of mathematical details (especially, linear algebra and calculus). If you're more interested in statistics, optimisation or robotics, you should go for AI.
If you study "pure" computer science, you should also have one or two courses related to AI (at least, one ML course). If you are more interested in traditional CS algorithms, data structures, software engineering, operating systems, compilers, theory of computation, computer networking, programming languages and/or databases, then you should go for CS.
However, before enrolling in a master's program, you should really have a look at the details of the courses they offer. Furthermore, you might also take into account that, during your studies, you might change idea regarding one subject.
Upvotes: 2 <issue_comment>username_2: There are certainly results in theoretical computer science / pure math with deep implications for AI. But to my knowledge these results typically aren't labeled as results of artificial intelligence, but as something more congruent in that particular field (For example in CS, we might say "agent with unbounded computational power"; in math might say some statement is "decidable/undecidable" with respect to some system). Of course they still matter in the field of AI, but you need to know what you are looking for.
See my question [What are some implications of Gödel's theorems on AI research?](https://ai.stackexchange.com/questions/3209/what-are-some-implications-of-g%C3%B6dels-theorems-on-ai-research) for some examples. Or you can look up [MIRI's research guide](https://intelligence.org/research-guide/) for a better idea of what existing work is out there that links formal math / CS to AI research.
Another point to raise is that there is no good definition of AI in fields outside of normal discourse (or even within, perhaps), so its difficult to decide what discussions pertains to the study of AI. Questions like whether ZFC with/without choice is expressive enough might not be on the mind of most AI researchers, but could still have some implications.
So to answer you question more directly, there is certainly a field of study regarding theoretical AI. Regarding whether or not its a good choice is something for you to decide, but it is (in my humble and not-very-well-educated opinion) very difficult field that isn't very popular, and has not seen major progress in many years.
Upvotes: 2 |
2019/05/11 | 1,241 | 5,331 | <issue_start>username_0: Many examples work with a table-based method for Q-learning. This may be suitable for a discrete state (observation) or action space, like a robot in a grid world, but is there a way to use Q-learning for continuous spaces like the control of a pendulum?<issue_comment>username_1: Q-learning for continuous state spaces
======================================
Yes, this is possible, provided you use some mechanism of approximation. One approach is to discretise the state space, and that doesn't have to reduce the space to a small number of states. Provided you can sample and update enough times, then a few million states is not a major problem.
However, with large state spaces it is more common to use some form of function approximation for the action value. This is often noted $\hat{q}(s,a,\theta)$ to show that it is both an estimate (the circumflex over $\hat{q}$) and that you are learning some function parameters ($\theta$). There are broadly two popular approaches to Q-learning using function approximation:
* Linear function approximation over a processed version of the state into features. A lot of variations to generate features have been proposed and tested, including Fourier series, tile coding and radial basis functions. The advantage of these methods is that they are simple, and more robust than non-linear function approximations. Which one to choose depends on what your state space represents and how the value function is likely to vary depending on location within the state space.
* Neural network function approximation. This is essentially what Deep Q Networks (DQN) are. Provided you have a Markov state description, you scale it to work sensibly with neural networks, and you follow other DQN best practices (experience replay table, slow changing target network) this can work well.
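As a hedged sketch of the linear case (the features, dimensions and step size below are all made up for illustration), semi-gradient Q-learning with $\hat{q}(s,a,\theta) = \theta\_a^\top \phi(s)$ updates only the weight vector of the action actually taken:

```python
N_FEATURES, N_ACTIONS = 4, 2
ALPHA, GAMMA = 0.1, 0.9  # arbitrary step size and discount for the example

# One weight vector per discrete action (a hypothetical, tiny setup).
theta = [[0.0] * N_FEATURES for _ in range(N_ACTIONS)]

def q_hat(features, action):
    # Linear value estimate: dot product of weights and state features.
    return sum(w * f for w, f in zip(theta[action], features))

def q_learning_update(features, action, reward, next_features, done):
    # TD target uses the max over actions in the next state (0 at terminal states).
    target = reward
    if not done:
        target += GAMMA * max(q_hat(next_features, a) for a in range(N_ACTIONS))
    td_error = target - q_hat(features, action)
    # Semi-gradient step: for a linear model the gradient is just the feature vector.
    for i in range(N_FEATURES):
        theta[action][i] += ALPHA * td_error * features[i]
    return td_error

phi_s, phi_s2 = [1.0, 0.0, 0.5, 0.0], [0.0, 1.0, 0.0, 0.5]
err = q_learning_update(phi_s, action=0, reward=1.0, next_features=phi_s2, done=False)
print(err, q_hat(phi_s, 0))  # TD error 1.0; updated estimate is roughly 0.125
```

A DQN replaces `q_hat` with a neural network and the per-weight loop with backpropagation, but the TD target construction is the same.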
Q-learning for continuous action spaces
=======================================
Unless you discretise the action space, then this becomes very unwieldy.
The problem is that, given $s,a,r,s'$, Q-learning needs to evaluate the TD target:
$$Q\_{target}(s,a) = r + \gamma \text{max}\_{a'} \hat{q}(s',a',\theta)$$
The process for evaluating the maximum becomes less efficient and less accurate the larger the space that it needs to check.
For somewhat large action spaces, using double Q-learning can help (with two estimates of Q, one to pick the target action, the other to estimate its value, which you alternate between on different steps) - this helps avoid maximisation bias where picking an action because it has the highest value and then using that highest value in calculations leads to over-estimating value.
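A minimal illustration of that alternation, using two plain dictionaries as the paired estimates (all numbers here are invented for the example):

```python
GAMMA = 0.9

# Two independent estimates of action values in the next state s'.
q_a = {"left": 1.0, "right": 3.0}   # used to *select* the target action
q_b = {"left": 0.5, "right": 1.5}   # used to *evaluate* it

reward = 1.0

# Standard Q-learning: select and evaluate with the same estimate,
# which is prone to maximisation bias.
single_target = reward + GAMMA * max(q_a.values())

# Double Q-learning: argmax under one estimate, value under the other.
best_action = max(q_a, key=q_a.get)
double_target = reward + GAMMA * q_b[best_action]

print(single_target, double_target)  # roughly 3.7 vs 2.35
```

If `q_a` over-estimates "right", the double target is not inflated by that same error, which is the point of alternating the two estimates.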
For very large or continuous action spaces, it is not usually practical to check all values. The alternative to Q-learning in this case is to use a policy gradient method such as Actor-Critic which can cope with very large or continuous action spaces, and does not rely on maximising over all possible actions in order to enact or evaluate a policy.
Controlling a pendulum
----------------------
For a discrete action space e.g. applying one of a choice of forces on each time step, then this can be done using a DQN approach or any other function approximation. The classic example here might be an environment like Open AI's [CartPole-v1](https://gym.openai.com/envs/CartPole-v1/) where the state space is continuous, but there are only two possible actions. This can be solved easily using DQN, it is something of a beginner's problem.
Adding continuous action space ends up with something like the [Pendulum-v0](https://gym.openai.com/envs/Pendulum-v0/) environment. This can be solved to some degree using DQN and discretising the action space (to e.g. 9 different actions). However, it is possible to make more optimal solutions using an Actor-Critic algorithm like A3C.
Upvotes: 4 <issue_comment>username_2: >
> **Q-Learning for continuous state space**
>
>
>
Reinforcement learning algorithms (e.g Q-Learning) can be applied to both discrete and continuous spaces. If you understand how it works in discrete mode, then you can easily move to continuous mode. That's why in the literature all the introductory material focuses on discrete mode, as it's easier to model (table, grid, etc.)
Supposing you have a discrete number of actions, **the only difference** in a continuous space is that you will be sampling the state every $X$ units of time ($X$ being a value you can choose depending on your use case). So, basically, you end up with discrete time steps, but probably with an infinite number of possible states. You then apply the same approach you learned for discrete mode.
Let's take the example of self-driving cars, at each $X$ms (e.g $X=1$), you'll be computing the state of the car which are your input features (e.g direction, orientation, rotation, distance to the pavement, relative position on the lane, etc.) and take a decision of the action to take as in discrete mode. The approach is the same in other use cases, like playing games, walking robots, and so on.
>
> **Note (continuous action space):**
>
>
>
If you have continuous actions, then in almost all use cases the best approach is to discretize your actions. I can't think of an example where discretizing the actions would lead to a considerable loss of performance.
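As a toy example of such a discretisation (the torque range and bin count are arbitrary choices, loosely echoing the 9-action pendulum discretisation mentioned in another answer):

```python
def discretize_range(low, high, n_bins):
    """Evenly spaced action levels covering [low, high], endpoints included."""
    step = (high - low) / (n_bins - 1)
    return [low + i * step for i in range(n_bins)]

# Hypothetical pendulum-style torque range of [-2, 2] split into 9 discrete actions.
actions = discretize_range(-2.0, 2.0, 9)
print(actions)  # [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
```

The agent then picks an index into this list, and the environment receives the corresponding continuous value, so any discrete-action algorithm applies unchanged.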
Upvotes: 1 |
2019/05/12 | 1,549 | 7,055 | <issue_start>username_0: How do you actually decide what reward value to give for each action in a given state for an environment?
Is this purely experimental and down to the programmer of the environment? So, is it a heuristic approach of simply trying different reward values and see how the learning process shapes up?
Of course, I understand that the reward values have to make sense, and not just put completely random values, i.e. if the agent makes mistakes then deduct points, etc.
So, am I right in saying it's just about trying different reward values for actions encoded in the environment and see how it affects the learning?<issue_comment>username_1: Yes, you are right. It is somehow an arbitrary choice, although you should consider the reasonable numerical ranges of your activation functions if you decide to go beyond the values +/- 1.
You can also have a think about whether you want to add a small reward for the agent reaching states that are *near* the goal, if you have an environment where such states are discernable.
If you want a more machine-learning-driven approach to estimating values, consider using an Actor-Critic arrangement, in which a second network learns value estimates for non-goal states (by observing the results of agent exploration), although you still need to determine end-state values according to your handcrafted heuristic.
Upvotes: 1 <issue_comment>username_2: In Reinforcement Learning (RL), a reward function is *part of the problem definition* and should:
* Be based primarily on the goals of the agent.
* Take into account any combination of starting state $s$, action taken $a$, resulting state $s'$ and/or a random amount (a constant amount is just a random amount with a fixed value having probability 1). You should not use other data than those four things, but you also do not have to use any of them. This is important, as using any other data stops your environment from being a Markov Decision Process (MDP).
* Given the first point, be as direct and simple as possible. In many situations this is all you need. A reward of +1 for winning a game, 0 for a draw and -1 for losing is enough to fully define the goals of most 2-player games.
* In general, have positive rewards for things you want the agent to achieve or repeat, and negative rewards for things you want the agent to avoid or minimise doing. It is common for instance to have a fixed reward of -1 per time step when the objective is to complete a task as fast as possible.
* In general, reward 0 for anything which is not *directly* related to the goals. This allows the agent to learn for itself whether a trajectory that uses particular states/actions or time resources is worthwhile or not.
* Be scaled for convenience. Scaling all rewards by a common factor does not matter at a theoretical level, as the exact same behaviour will be optimal. In practice you want the sums of reward to be easy to assess by yourself as you analyse results of learning, and you also want whatever technical solution you implement to be able to cope with the range of values. Simple numbers such as +1/-1 achieve that for basic rewards.
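The guidelines above can be sketched as a minimal reward function for a toy gridworld. This is a hypothetical illustration (the goal and forbidden states are made up, not part of the answer):

```python
def reward(state, action, next_state, goal, forbidden=frozenset()):
    """Sparse, goal-based reward: +1 for reaching the goal, -1 for
    entering a forbidden state, and 0 for everything else, so the
    agent learns the value of intermediate states on its own.
    Note the action is accepted but unused in this sparse scheme."""
    if next_state == goal:
        return 1.0
    if next_state in forbidden:
        return -1.0
    return 0.0
```

Note that it only looks at the $(s, a, s')$ triple, which keeps the environment a well-defined MDP.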
Ideally, you should avoid using heuristic functions that reward an agent for interim goals or results, as that inserts your opinion about how the problem should be solved into the system, and may not in fact be optimal given the goals. In fact, you can view the purpose of value-based RL as learning a good heuristic function (the value function) from the more sparse reward function. If you already had a good heuristic function then you may not need RL at all.
You may need to compare very different parts of the outcome in a single reward function. This can be hard to balance correctly, as the reward function is a single scalar value and *you* have to define what it means to balance between different objectives within a single scenario. If you do have very different metrics that you want to combine then you need to think harder about what that means:
* Where possible, flatten the reward signal into the same units and base your goals around them. For instance, in business and production processes it may be possible to use currency as the units of reward and convert things such as energy used, transport distance, etc., into that currency.
* For highly negative/unwanted outcomes, instead of assigning a negative reward, consider whether a constraint on the environment is more appropriate.
You may have opinions about valid solutions to the environment that you want the agent to use. In that case, you can extend or modify the system of rewards to reflect that - e.g. provide a reward for achieving some interim sub-goal, even if it is not directly a result that you care about. This is called *reward shaping*, and can help in practical ways in difficult problems, but you have to take extra care not to break things.
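One way to do reward shaping without breaking things is *potential-based* shaping, adding $F(s, s') = \gamma \Phi(s') - \Phi(s)$ to the reward, which is known to leave the optimal policy unchanged. A sketch, where the potential function $\Phi$ is a made-up distance heuristic of my own:

```python
GAMMA = 0.9

def potential(state):
    # Hypothetical heuristic: negative Manhattan distance to a goal at (3, 3).
    return -(abs(state[0] - 3) + abs(state[1] - 3))

def shaped_reward(base_reward, state, next_state, gamma=GAMMA):
    # Potential-based shaping: r + gamma * phi(s') - phi(s).
    return base_reward + gamma * potential(next_state) - potential(state)
```

Moving one step closer to the goal yields a small positive bonus and moving away a negative one, without changing which policy is optimal.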
There are also more sophisticated approaches that use multiple value schemes or no externally applied ones, such as hierarchical reinforcement learning or intrinsic rewards. These may be necessary to address more complex "real life" environments, but are still subject of active research. So bear in mind that all the above advice describes the current mainstream of RL, and there are more options the deeper you research the topic.
>
> Is this purely experimental and down to the programmer of the environment. So, is it a heuristic approach of simply trying different reward values and see how the learning process shapes up?
>
>
>
Generally no. You should base the reward function on the analysis of the problem and your learning goals. And this should be done at the start, before experimenting with hyper-parameters which define the learning process.
If you are trying different values, especially different *relative* values between different aspects of a problem, then you may be changing what it means for the agent to behave optimally. That might be what you want to do, because you are looking at how you want to frame the original problem to achieve a specific behaviour.
However, outside of [*inverse reinforcement learning*](https://towardsdatascience.com/inverse-reinforcement-learning-6453b7cdc90d), it is more usual to want an optimal solution to a well-defined problem, as opposed to a solution that matches some other observation that you are willing to change the problem definition to meet.
>
> So, am I right in saying it's just about trying different reward values for actions encoded in the environment and see how it affects the learning?
>
>
>
This is usually not the case.
Instead, think about how you want to *define* the goals of the agent. Write reward functions that encapsulate those goals. Then focus on changes to the agent that allow it to better learn how to achieve those goals.
Now, you can do it the other way round, as you suggest. But what you are doing, in that case, is changing the problem definition, and seeing how well a certain kind of agent can cope with solving each kind of problem.
Upvotes: 4 [selected_answer] |
2019/05/12 | 1,133 | 4,594 | <issue_start>username_0: What are the differences between [semi-supervised learning and self-supervised visual representation learning](https://www.scihive.org/paper/1905.03670), and how are they connected?<issue_comment>username_1: ### Semi-supervised learning
[Semi-supervised learning](https://en.wikipedia.org/wiki/Semi-supervised_learning) is the collection of machine learning techniques where there are two datasets: a labelled one and an unlabelled one.
There are two main problems that can be solved using semi-supervised learning:
* transductive learning (i.e. label the given unlabelled data) and
* inductive learning (generalization) (i.e. find a function that maps inputs to outputs, like classification).
### Self-supervised learning
[Self-supervised learning](https://ai.stackexchange.com/a/10624/2444) (SSL) is a machine learning approach where the supervisory signal is **automatically** generated. More precisely, SSL can either refer to
* learning data representations (i.e. learning to represent the data) by solving a so-called **pretext** (or auxiliary) task, in a self-supervised fashion, i.e. you automatically generate the supervised signal from the unlabelled data
* automatically labelling an unlabelled dataset by exploiting data coming from different sensors (this is the usual definition of SSL in the context of robotics)
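As a minimal sketch of the first sense (a pretext task that generates its own supervisory signal), here is the classic rotation-prediction setup in numpy. The task and data are illustrative assumptions on my part, not from the answer; square images are assumed so rotation preserves shape:

```python
import numpy as np

def make_rotation_task(images):
    """Turn unlabeled square images (N, H, H) into a supervised 4-way
    classification task: each image is rotated by 0/90/180/270 degrees
    and labeled with the number of quarter-turns (0-3)."""
    xs, ys = [], []
    for img in images:
        for k in range(4):
            xs.append(np.rot90(img, k))
            ys.append(k)
    return np.stack(xs), np.array(ys)
```

A network trained to predict `ys` from `xs` learns representations that can later be fine-tuned on the downstream task.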
### What is the relationship between the two?
SSL (for data representation) can be considered a semi-supervised learning approach if you fine-tune the learned data representations with a labeled dataset to solve a supervised learning problem (i.e. the so-called **downstream** task), which you will probably do, since otherwise the data representations are of little use. Read [this answer](https://ai.stackexchange.com/a/22860/2444) for other details.
Upvotes: 3 <issue_comment>username_2: The previous answer has given a good insight into the difference between the two areas. I would like to give more examples.
**Semi-Supervised Learning** works by improving the dataset with new examples. There are iterative systems where we train a model on a given dataset and improve it further after deploying it in the real world, by adding real-world interactions and their outcomes to the data used to further train the system.
**Self-Supervised Learning** is becoming a very hot topic these days. It has the ability to capture the underlying properties of a given dataset using some kind of supervisory signal (not exactly a label). **Self-attention**, introduced in Transformers, is a popular modern example of self-supervised learning. Also, check this tweet from [Y<NAME>un tweet](https://twitter.com/ylecun/status/1123235709802905600?lang=en)
Upvotes: 3 <issue_comment>username_3: Both semi-supervised and self-supervised methods are similar in the sense that the goal is to learn with fewer labels per class. The way both formulate this is quite different:
1. **Self-Supervised Learning:**
This line of work aims to learn image representations without requiring human-annotated labels and then use those learned representations on some downstream tasks. For example, you could take millions of unlabeled images, randomly rotate them by either 0, 90, 180 or 270 degrees and then train a model to predict the rotation angle. Once the model is trained, you can use transfer learning to fine-tune this model on a downstream task like cat/dog classification just like how you finetune ImageNet pretrained models. You can view an [**overview**](https://amitness.com/2020/02/illustrated-self-supervised-learning/) of the methods and also look at contrastive learning methods that are currently giving state-of-the-art results such as [**SimCLR**](https://amitness.com/2020/03/illustrated-simclr/) and [**PIRL**](https://amitness.com/2020/03/illustrated-pirl/).

2. **Semi-supervised Learning**
Different from self-supervised learning, semi-supervised learning aims to use both labeled and unlabeled data at the same time to improve the performance of a supervised model. An example of this is [FixMatch](https://amitness.com/2020/03/fixmatch-semi-supervised/) paper where you train your model on labeled images. Then, for your unlabeled images, you apply augmentations to create two images for each unlabeled image. Now, we want to ensure that the model predicts the same label for both the augmentations of the unlabeled images. This can be incorporated into the loss as a cross-entropy loss.

Upvotes: 3 |
2019/05/12 | 2,422 | 7,983 | <issue_start>username_0: In reinforcement learning, there are the concepts of stochastic (or probabilistic) and deterministic policies. What is the difference between them?<issue_comment>username_1: A **deterministic policy** is a function of the form $\pi\_{\mathbb{d}}: S \rightarrow A$, that is, a function from the set of states of the environment, $S$, to the set of actions, $A$. The subscript $\_{\mathbb{d}}$ only indicates that this is a ${\mathbb{d}}$eterministic policy.
For example, in a grid world, the set of states of the environment, $S$, is composed of each cell of the grid, and the set of actions, $A$, is composed of the actions "left", "right", "up" and "down". Given a state $s \in S$, $\pi(s)$ is, with probability $1$, always the same action (e.g. "up"), unless the policy changes.
A **stochastic policy** can be represented as a *family* of [conditional probability distributions](https://brilliant.org/wiki/conditional-probability-distribution/), $\pi\_{\mathbb{s}}(A \mid S)$, from the set of states, $S$, to the set of actions, $A$. A probability distribution is a function that assigns a probability for each [event](https://en.wikipedia.org/wiki/Event_(probability_theory)) (in this case, the events are actions in certain states) and such that the sum of all the probabilities is $1$.
A stochastic policy is a family and not just one conditional probability distribution because, for a fixed state $s \in S$, $\pi\_{\mathbb{s}}(A \mid S = s)$ is a possibly distinct conditional probability distribution. In other words, $\pi\_{\mathbb{s}}(A \mid S) = \{ \pi\_{\mathbb{s}}(A \mid S = s\_1), \dots, \pi\_{\mathbb{s}}(A \mid S = s\_{|S|})\}$, where $\pi\_{\mathbb{s}}(A \mid S = s)$ is a conditional probability distribution over actions given that the state is $s \in S$ and $|S|$ is the size of the set of states of the environment.
Often, in the reinforcement learning context, a stochastic policy is misleadingly (at least in my opinion) denoted by $\pi\_{\mathbb{s}}(a \mid s)$, where $a \in A$ and $s \in S$ are respectively a specific action and state, so $\pi\_{\mathbb{s}}(a \mid s)$ is just a number and not a conditional probability distribution. A single conditional probability distribution can be denoted by $\pi\_{\mathbb{s}}(A \mid S = s)$, for some fixed state $s \in S$. However, $\pi\_{\mathbb{s}}(a \mid s)$ can also denote a family of conditional probability distributions, that is, $\pi\_{\mathbb{s}}(A \mid S) = \pi\_{\mathbb{s}}(a \mid s)$, if $a$ and $s$ are arbitrary. Alternatively, $a$ and $s$ in $\pi\_{\mathbb{s}}(a \mid s)$ are just (dummy or input) variables of the function $\pi\_{\mathbb{s}}(a \mid s)$ (i.e. p.m.f. or p.d.f.): this is probably the most sensible way of interpreting $\pi\_{\mathbb{s}}(a \mid s)$ when you see this notation (see also [this answer](https://ai.stackexchange.com/a/16842/2444)). In this case, you could also think of a stochastic policy as a function $\pi\_{\mathbb{s}} : S \times A \rightarrow [0, 1]$, but, in my view, although this may be the way you implement a stochastic policy in practice, this notation is misleading, as the action is not conceptually an input to the stochastic policy but rather an output (but in the end this is also just an interpretation).
In the particular case of games of chance (e.g. poker), where there are sources of randomness, a deterministic policy might not always be appropriate. For example, in poker, not all information (e.g. the cards of the other players) is available. In those circumstances, the agent might decide to play differently depending on the round (time step). More concretely, the agent could decide to go "all-in" $\frac{2}{3}$ of the times whenever it has a hand with two aces and there are two uncovered aces on the table and decide to just "raise" $\frac{1}{3}$ of the other times.
A deterministic policy can be interpreted as a stochastic policy that gives the probability of $1$ to one of the available actions (and $0$ to the remaining actions), for each state.
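That last point can be made concrete with a small sketch (the gridworld actions are the ones from the example above):

```python
def as_stochastic(deterministic_policy, actions):
    """Lift a deterministic policy (state -> action) into a stochastic one
    (state -> {action: probability}) that puts probability 1 on its choice
    and 0 on every other action."""
    return lambda s: {a: 1.0 if a == deterministic_policy(s) else 0.0
                      for a in actions}
```

For instance, a policy that always chooses "up" becomes the distribution `{"up": 1.0, "left": 0.0, ...}` in every state.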
Upvotes: 5 [selected_answer]<issue_comment>username_2: ### Deterministic Policy
It means that for every state there is a single, clearly defined action to take.
**For example:** we know with 100% certainty that we will take action **A** from state **X**.
### Stochastic Policy
It means that for every state you do not have a single clearly defined action to take; instead, you have a probability distribution over the actions that can be taken from that state.
**For example**, there is a 10% chance of taking action **A** from state **S**, a 20% chance of taking action **B**, and a 70% chance of taking action **C**. So there is no single fixed action, only probabilities of taking each action.
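Acting under such a policy means sampling from that distribution at every step. A minimal sketch using the 10%/20%/70% numbers from the example:

```python
import random

# state -> {action: probability}
policy = {"S": {"A": 0.1, "B": 0.2, "C": 0.7}}

def sample_action(state, policy):
    """Draw one action according to the policy's probabilities."""
    actions, probs = zip(*policy[state].items())
    return random.choices(actions, weights=probs, k=1)[0]
```

Over many steps, action C will be chosen roughly 70% of the time from state S.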
Upvotes: 2 <issue_comment>username_3: Apart from the answers above,
---
*Stochastic Policy function*: $\pi (s\_1s\_2 \dots s\_n, a\_1 a\_2 \dots a\_n): \mathcal S \times \mathcal A \rightarrow [0,1]$ is the probability distribution function, that, tells the probability that action sequence $a\_1a\_2 \dots a\_n$ may be chosen in state sequence $s\_1 s\_2 \dots s\_n$[[2]](https://www.freecodecamp.org/news/an-introduction-to-policy-gradients-with-cartpole-and-doom-495b5ef2207f/)[[3]](https://www.computing.dcu.ie/~humphrys/PhD/ch2.html).
In *Markov Decision Process (MDP)*, it's only $\pi (s, a)$ following the assumptions[[1]](https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/lecture-notes/Lecture20FinalPart1.pdf):
$$ \mathbb P(\omega\_{t+1}| \omega\_t, a\_t) = \mathbb P(\omega\_{t+1}| \omega\_t,a\_t, \dots \omega\_o,a\_o)$$
where $\omega \in \Omega$, the set of observations, and $\mathcal A, \mathcal S$ denote the sets of actions and states respectively. Since the next observation depends only on the present state and action, and not on the past, the policy function only needs the present state and action as parameters.
The next action is chosen as[2]: $$ a^\* = \arg \max\_a \pi(s\_{t+1}, a) \quad\forall a \in \mathcal A $$
*Deterministic Policy function* [[3]](https://www.computing.dcu.ie/~humphrys/PhD/ch2.html): is a special case of Stochastic Policy function where for particular $a\_o \in \mathcal A$, $\pi(s, a\_n) = \delta^o\_n$ for all $a\_n \in \mathcal A$. Here, we are totally certain to choose particular action $a\_o$ in some arbitrary state $s$ and no other. Here $\delta$ is [Kronecker delta](https://en.wikipedia.org/wiki/Kronecker_delta). Since, the probability distribution here is discrete, it's often written in the form of $\pi(s): \mathcal S \rightarrow \mathcal A$, where the function takes arbitrary state $s$ and maps it to an action $a$ which is 100% probable.
IMPORTANT
---------
The **Stochastic Policy function** is not meant to be confused with the **Transition Function**[[2]](https://www.freecodecamp.org/news/an-introduction-to-policy-gradients-with-cartpole-and-doom-495b5ef2207f/) (which is also a Probability Distribution Function), $T(s\_t, a\_t, s\_{t+1}): \mathcal S \times \mathcal A \times \mathcal S \rightarrow [0, 1]$ which tells the probability that - at state $s\_t$, the action $a\_t$ will lead us to next state $s\_{t+1}$.
---
References:
-----------
1. <https://ocw.mit.edu>. [*6.825 Techniques in Artificial Intelligence*](https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/lecture-notes/Lecture20FinalPart1.pdf). *<https://ocw.mit.edu>*. Page Number - 6. Web. 6 May 2020
2. <NAME>. <https://www.freecodecamp.org> .[*An introduction to Policy Gradients with Cartpole and Doom*](https://www.freecodecamp.org/news/an-introduction-to-policy-gradients-with-cartpole-and-doom-495b5ef2207f/). 9 May 2018. Web. 6 May 2020.
3. <https://www.computing.dcu.ie/>. Reinforcement Learning. [*2.1.1 Special case - Deterministic worlds*](https://www.computing.dcu.ie/~humphrys/PhD/ch2.html). Web. 6 May 2020
Upvotes: 1 |
2019/05/14 | 830 | 3,084 | <issue_start>username_0: I've got a Lego Mindstorms EV3 with [EV3DEV](https://www.ev3dev.org/) and EV3-Python installed. I wanted to try out artificial intelligence with the robot, and my first project is going to be to get the robot to try and recognize some images (using convolutions) and do an action related to the image it has seen. However, I can't find a way to use Tensorflow (or any AI module for that matter) on an EV3. Does anyone know how to incorporate Tensorflow or any other modules into the EV3? Help would be gladly appreciated.<issue_comment>username_1: There is a YouTube video [LEGO EV3 Raspberry Pi Tensorflow Sorting Machine](https://www.youtube.com/watch?v=wSSh9iV70ng) by [ebswift](https://www.youtube.com/channel/UCOziWS9PkSVwB0XoIE14rDw) that should help you although you will need a Raspberry Pi. From the abstract:
*This is a sorting machine based on the EV3 45544 education kit sorting machine. The colour sorting camera is substituted with a Raspberry Pi with the v2 camera. The EV3 is controlled over wifi via RPyC and the object recognition work is done via Tensorflow.*
**A viewer asked**: Can you share links about how to train a model on the PC and move the trained model to the Pi?
**ebswift replied**:
Glad you like it :). Training the model on the PC is following the guide for Tensorflow for Poets <https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/>. All you do is install Tensorflow on the Raspberry Pi, clone the github repository onto it and copy the model you trained from the codelabs article onto the Raspberry Pi to run the classifier as per step 5 of the codelabs article. Onboard image capture on the Raspberry Pi is just done using picamera. I used a tensorflow version that a user compiled for the Raspberry Pi from here: <https://github.com/samjabrahams/tensorflow-on-raspberry-pi/issues/92>. I might do up more of a general step-by-step sometime on my site, I'll report back if I can get that up and running.
There is also [BrickClassifi3r](https://github.com/robodhhb/BrickClassifi3r):
*This Lego Mindstorms EV3 robot uses a neural network to recognize a cube, a cylinder or a small cube put on a conveyor belt. See the video how it works. Each object on the conveyor belt is scanned by an IR-sensor every 40ms for about a second. The resulting data are 24 distance values representing one of the three objects. This data is fed into the neural network on the robot to classify the object within 180ms. In a test with 300 objects it reaches 95,6% accuracy. The neural network has been trained before by a machine learning algorithm with TensorFlow on a PC using a set of 375 training examples - 125 examples for each object.*
The [ev3-myo](https://github.com/aljazfrancic/ev3-myo) project uses LEGO Mindstorms EV3, TRACK3R, a Myo armband and TensorFlow for gesture recognition.
Upvotes: 1 <issue_comment>username_2: Get a Raspberry Pi with a camera, power it from a battery bank, attach it to the EV3, and run two programs: one on the Raspberry Pi and another on the EV3, communicating with each other via MQTT.
Upvotes: 0 |
2019/05/15 | 1,008 | 3,856 | <issue_start>username_0: **Dataset Description**
I am working on the famous ABIDE Autism Datasets. The dataset is very big, in the sense that it has more than 1000 subjects, half of them autistic and the other half healthy controls. The dataset was collected at 17 sites across the world, and each site used a different time dimension when recording the subjects' fMRI.
**My Question**
I want to use this dataset for a classification task, but the issue is that the time dimension varies across subjects while the feature dimension is fixed at 200, so the subject arrays have shapes like 150 x 200, 75 x 200, 300 x 200, and so on. What advanced AI or deep learning techniques can I use to fix this time dimension for every subject, or can anybody suggest a deep learning framework or model that can handle these varying time dimensions across subjects?
**My Effort**
**Approach 1**
I applied PCA to the time dimension to fix it at 50 (and tried other numbers as well), but it did not produce good classification accuracy.
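For reference, a numpy sketch of this first approach, reducing each subject's (T, 200) array to a fixed (k, 200) by keeping the k strongest temporal components. This is my reconstruction of the idea, not the exact code used:

```python
import numpy as np

def fix_time_dim(subject, k=50):
    """Reduce a (T, n_rois) fMRI array to (k, n_rois), assuming T >= k,
    via truncated SVD along the time axis (essentially PCA on time)."""
    centered = subject - subject.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    # Keep the k largest temporal components, scaled by their strength.
    return np.diag(s[:k]) @ vt[:k]
```

Subjects with T = 150, 75 or 300 all come out with the same (50, 200) shape.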
**Approach 2**
I also tried to use only specific time points from every subject (e.g. taking only 40 time points) to fix the dimension, but again it did not work, as filtering out parts of the time series for every subject would lose crucial information. |
2019/05/15 | 337 | 1,522 | <issue_start>username_0: I found Sentiment Analysis and Emotion Recognition listed as two different categories on [paperswithcode.com](https://paperswithcode.com/). Shouldn't they be the same, as per my understanding? If not, what's the difference?<issue_comment>username_1: Sentiment in this context refers to evaluations, typically positive/negative/neutral. [Sentiment Analysis](https://en.wikipedia.org/wiki/Sentiment_analysis) can be applied to product reviews, to identify if the reviewer liked the product or not. This has (in principle) got nothing to do with emotions as such.
[Emotion recognition](https://en.wikipedia.org/wiki/Emotion_recognition) would typically work on conversational data (eg from conversations with chatbots), and it would attempt to recognise the emotional state of the user -- angry/happy/sad...
Of course the same can overlap: if the user is happy, they will typically express positive sentiments on something.
Also: emotion recognition goes beyond text (eg facial expressions), whereas sentiment analysis mostly works with textual data only.
Upvotes: 2 [selected_answer]<issue_comment>username_2: **Sentiment Analysis** -- the most common text classification task: it analyses an incoming message and tells whether the underlying sentiment is positive, negative, or neutral.
**Emotion Recognition** -- emotion recognition refers to identifying a person's emotional state (e.g. angry, happy, sad) from signals such as text, speech, or facial expressions.
Note: This explanation is based on the paper that I have read.
Upvotes: 0 |
2019/05/16 | 626 | 2,571 | <issue_start>username_0: Below you have the plots of the training and validation errors for two different models. Both plots show the RMSE values for the validation dataset versus the number of training epochs. It is observed that models get lower RMSE value as training progresses.
The model associated with the first plot is performing quite well. The gap is quite narrow.
[](https://i.stack.imgur.com/oMDvC.png)
I think the model associated with this second plot is doing pretty well, but not as well as the other. The gap is much broader.
[](https://i.stack.imgur.com/Suc5q.png)
The model of the first plot was trained using a dataset containing 1 million ratings, while the second one used only 100K. I'm implementing the collaborative filtering (CF) algorithm, and I am optimising it using SGD.
[](https://i.stack.imgur.com/LFPzs.png)
Are any of these models overfitting or underfitting?<issue_comment>username_1: I would say that your intuition is correct: the model associated with the first plot is likely to generalise more than the one associated with the second plot.
In both cases, it doesn't seem that your model has overfitted the training data. Overfitting often occurs when the training error keeps decreasing but the validation error starts to increase. In both your plots, both the training and validation errors keep decreasing (even if slowly, after a while).
Underfitting occurs when your model hasn't learned enough even about your training data. The smaller the training and validation error, the more likely your model has not underfitted, but the value of RMSE depends on the range of your inputs. See e.g. [What are good RMSE values?](https://stats.stackexchange.com/q/56302/82135) for more info.
See also this article [Overfitting and Underfitting With Machine Learning Algorithms](https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/) for a general overview of the concepts of overfitting and underfitting.
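For reference, the RMSE plotted in both figures is simply the root of the mean squared difference between the held-out ratings and the model's predictions:

```python
import numpy as np

def rmse(ratings, predictions):
    """Root-mean-square error between true ratings and predicted ratings."""
    ratings = np.asarray(ratings, dtype=float)
    predictions = np.asarray(predictions, dtype=float)
    return float(np.sqrt(np.mean((ratings - predictions) ** 2)))
```

Whether a given value is "good" depends on the rating scale, as the linked answer discusses.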
Upvotes: 2 [selected_answer]<issue_comment>username_2: One possibility: if you are using a dropout regularization layer in your network, it is reasonable for the validation error to be smaller than the training error, because dropout is usually active during training but deactivated when evaluating on the validation set. You get a smoother (usually better) function in the latter case.
Upvotes: 0 |
2019/05/16 | 559 | 1,850 | <issue_start>username_0: I have a dataset as follows
[](https://i.stack.imgur.com/pJIKM.png)
(and the table extends to include an extra 146 columns for companies 4-149)
Is there an algorithm I could use effectively to find similar patterns in sales from the other companies when compared to my company?
I thought of using k-means clustering, but as I'm dealing with 150 companies here it would likely become a bit of a mess, and I don't think linear regression would work here.<issue_comment>username_1: I would recommend a [hierarchical cluster algorithm](https://en.wikipedia.org/wiki/Hierarchical_clustering), after normalising your numbers into proportions. Then the clustering should be able to identify similar patterns. Depending on the level at which you make the cut, you can decide how many clusters you want.
A great resource on this topic is <NAME>., & <NAME>. (1990). "Finding Groups in Data - An Introduction to Cluster Analysis". <NAME> & Sons
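A small numpy sketch of the suggested preprocessing (my own illustration): normalise each company's sales into proportions, then build the pairwise distance matrix that a hierarchical clustering routine (e.g. `scipy.cluster.hierarchy.linkage`) would consume:

```python
import numpy as np

def sales_profiles(sales):
    """Normalise each row of a (n_companies, n_months) sales matrix into
    proportions, so clustering compares the *shape* of sales, not volume."""
    sales = np.asarray(sales, dtype=float)
    return sales / sales.sum(axis=1, keepdims=True)

def pattern_distances(sales):
    """Pairwise Euclidean distances between the normalised profiles."""
    p = sales_profiles(sales)
    diff = p[:, None, :] - p[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))
```

Two companies with identical patterns at different scales end up at distance 0, so they get clustered together.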
Upvotes: 2 <issue_comment>username_2: If I understand correctly you want to find companies with similar patterns to yours.
I would start with measuring `cosine similarity` between your company and others.
It is really easy with Python, for example:
```
In [21]: from sklearn.metrics.pairwise import cosine_similarity
In [22]: cosine_similarity([[1,4,2,6], [1,9,5,4]])
Out[22]:
array([[1. , 0.84794633],
[0.84794633, 1. ]])
```
Note that if the size of sales matters to you, this is not the right approach, as cosine similarity is magnitude invariant:
```
In [23]: from sklearn.metrics.pairwise import cosine_similarity
In [24]: cosine_similarity([[1,4,2,6], [10,90,50,40]])
Out[24]:
array([[1. , 0.84794633],
[0.84794633, 1. ]])
```
Upvotes: 3 [selected_answer] |
2019/05/16 | 468 | 1,551 | <issue_start>username_0: In LeNet 5's first layer, the number of feature maps is equal to the number of kernels. However, the second convolutional layer has a depth different from the 3rd layer. Does the filter size dictate the number of feature maps? |
2019/05/16 | 555 | 1,977 | <issue_start>username_0: At the moment I am working on a vehicle counting & classification project.
For a specific part in the project I need to get back only the completely visible vehicles from my input data (images). I am wondering if this could be done (more) automatically in the following way:
1. zoom in such that only approximately one van would be visible
2. divide the vehicles into two categories: truncated and non-truncated
3. train on these two classes
4. After training and testing, use the model to find the completely visible vehicles.
So the main question is, is it possible that this would give sufficient results or should I try to find another solution?<issue_comment>username_1: I would recommend a [hierarchical cluster algorithm](https://en.wikipedia.org/wiki/Hierarchical_clustering), after normalising your numbers into proportions. Then the clustering should be able to identify similar patterns. Depending at which level you make the cut, you can decide how many clusters you want.
A great resource on this topic is <NAME>., & <NAME>. (1990). "Finding Groups in Data - An Introduction to Cluster Analysis". <NAME> & Sons
Upvotes: 2 <issue_comment>username_2: If I understand correctly you want to find companies with similar patterns to yours.
I would start with measuring `cosine similarity` between your company and others.
It is really easy with Python, for example:
```
In [21]: from sklearn.metrics.pairwise import cosine_similarity
In [22]: cosine_similarity([[1,4,2,6], [1,9,5,4]])
Out[22]:
array([[1. , 0.84794633],
[0.84794633, 1. ]])
```
Note that if size of sale matters to you, this is not the right approach, as cosine similarity is magnitude invariant:
```
In [23]: from sklearn.metrics.pairwise import cosine_similarity
In [24]: cosine_similarity([[1,4,2,6], [10,90,50,40]])
Out[24]:
array([[1. , 0.84794633],
[0.84794633, 1. ]])
```
Upvotes: 3 [selected_answer] |
2019/05/16 | 877 | 3,433 | <issue_start>username_0: Understandably RNNs are very good at solving problems involving audio, video and text processing due to the arbitrary input's length of this sort of data.
What I don't understand is why RNNs are also superior at predicting time series data and why we use them over simple MLP DNNs.
Say I wanted to predict what the value in the time series is at $t+1$. I would take a window of, let's say, $t-50, t-49, \dots, t$, and then feed loads of sampled training data into a network. I could either choose to have a single LSTM unit remembering the entire window and basing the predictions on that, or I could simply make a 50 neuron wide MLP network.
What exactly is it about RNNs that makes them better in this scenario or any scenario for that matter?
I understand that the LSTM would have substantially fewer weights (in this scenario) and should be less computationally intensive, but, apart from that, I don't see any difference in these two methods.<issue_comment>username_1: RNNs have the ability to hold a state, which means the model can learn which information it wants to keep and which to discard, based on ordering and on how you designed the creation and passing of the state (it is probably worth looking into what an LSTM is). This would be a lot more difficult for a sliding MLP to do (you can think of a sliding MLP as a stateless RNN). A sliding MLP would also require a lot more computation, since it needs to recompute an entire context window for each new output, while an RNN can just reuse the previous state and only process a single new input or a smaller context window.
Hope that helps!
Upvotes: 0 <issue_comment>username_2: In an RNN, the output of the previous state is passed as an input to the current state. Intuitively, there is a temporal (time-based) relationship in the way in which input is processed in an RNN. It can understand how the current state was achieved on the basis of the previous values, i.e value at time-step $t$ is a result of value at time-steps $t-1, t-2$, and so on.
In a DNN, there is no temporal relationship in the way input is processed. Values at time-steps $t, t-1, t-2, \dots $ are all treated distinctly and not as a continuation of the previous time-step values.
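To make the contrast concrete, here is a toy numpy sketch (purely illustrative: the sizes are made up and the weights are untrained random values). The RNN carries a state `h` forward through time, while the MLP consumes the whole window at once and keeps no state.

```python
import numpy as np

rng = np.random.default_rng(0)
window = rng.normal(size=(50, 1))   # 50 time-steps, 1 feature each

# --- RNN: one shared set of weights, applied step by step ---
Wx = rng.normal(size=(8, 1))        # input -> hidden
Wh = rng.normal(size=(8, 8))        # hidden -> hidden (the recurrence)
h = np.zeros((8, 1))                # the state: a running summary of the past
for x_t in window:
    # the SAME Wx and Wh are reused at every time-step
    h = np.tanh(Wx @ x_t.reshape(1, 1) + Wh @ h)

# --- MLP: one big weight matrix over the flattened window, no state ---
W = rng.normal(size=(8, 50))
out = np.tanh(W @ window.reshape(50, 1))

print(h.shape, out.shape)           # both (8, 1), but h was built sequentially
```

Note how the RNN reuses the same two weight matrices at every step (parameter sharing), while the MLP needs a separate weight for every position in the window.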
Upvotes: 2 [selected_answer]<issue_comment>username_3: As noted in [LeCun et al.'s Deep Learning paper](https://www.nature.com/articles/nature14539.epdf):
>
> RNNs, once unfolded in time ... can be seen as very deep feedforward networks in which all the layers share the same weights
>
>
>
So, if we ignore how easy they are to train, there is theoretically no real advantage of RNNs over MLPs, on any task, including time series modeling.
Perhaps the key advantage of RNNs is that they share parameters over time. That means they have fewer parameters, and the parameters they do have get used more often. This makes them easier to train (and also allows handling variable-length input sequences, which, as you noted, does not apply when forecasting from a fixed historical window).
Also, the "compression" of the historical information into a state that can pass forward in time, as noted in [this excellent answer on quora](https://www.quora.com/Whats-the-difference-between-a-single-output-RNN-and-an-MLP-whose-input-data-contains-all-of-the-features-of-the-given-time-steps), may force RNNs to learn general patterns that may generalize better to unseen data.
Upvotes: 1 |
2019/05/17 | 828 | 3,322 | <issue_start>username_0: Right now, I am trying to synthesize training images for a CNN and due to the nature of the application, there is a finite number of sample images to learn from.
From other research, I expect to be using about 200,000 training images at a resolution of 1280\*720, which, with 3 channels at 8 bits each, will take about 550 GB to store uncompressed. This number can and probably will rise in the future, meaning even more memory that I will need to provide.
I imagine that there are applications that require even more training data with higher complexity, and that there are solutions to handling that, such as compression techniques and the like.
My question: Are there solutions for the memory management beyond compressing the images with JPEG and such, besides generating and instantly consuming the pictures without saving them to permanent memory?<issue_comment>username_1: RNNs have the ability to hold a state, which means the model can learn which information it wants to keep and which to discard, based on ordering and on how you designed the creation and passing of the state (it is probably worth looking into what an LSTM is). This would be a lot more difficult for a sliding MLP to do (you can think of a sliding MLP as a stateless RNN). A sliding MLP would also require a lot more computation, since it needs to recompute an entire context window for each new output, while an RNN can just reuse the previous state and only process a single new input or a smaller context window.
Hope that helps!
Upvotes: 0 <issue_comment>username_2: In an RNN, the output of the previous state is passed as an input to the current state. Intuitively, there is a temporal (time-based) relationship in the way in which input is processed in an RNN. It can understand how the current state was achieved on the basis of the previous values, i.e value at time-step $t$ is a result of value at time-steps $t-1, t-2$, and so on.
In a DNN, there is no temporal relationship in the way input is processed. Values at time-steps $t, t-1, t-2, \dots $ are all treated distinctly and not as a continuation of the previous time-step values.
Upvotes: 2 [selected_answer]<issue_comment>username_3: As noted in [LeCun et al.'s Deep Learning paper](https://www.nature.com/articles/nature14539.epdf):
>
> RNNs, once unfolded in time ... can be seen as very deep feedforward networks in which all the layers share the same weights
>
>
>
So, if we ignore how easy they are to train, there is theoretically no real advantage of RNNs over MLPs, on any task, including time series modeling.
Perhaps the key advantage of RNNs is that they share parameters over time. That means they have fewer parameters, and the parameters they do have get used more often. This makes them easier to train (and also allows handling variable-length input sequences, which, as you noted, does not apply when forecasting from a fixed historical window).
Also, the "compression" of the historical information into a state that can pass forward in time, as noted in [this excellent answer on quora](https://www.quora.com/Whats-the-difference-between-a-single-output-RNN-and-an-MLP-whose-input-data-contains-all-of-the-features-of-the-given-time-steps), may force RNNs to learn general patterns that may generalize better to unseen data.
Upvotes: 1 |
2019/05/17 | 1,833 | 7,641 | <issue_start>username_0: I am new to the field of AI. I am working on creating Flappy Bird using a genetic algorithm. After reading and seeing some examples, I saw that most implementations use a neural network + genetic algorithm, and after a certain number of generations, you achieve a very good agent that survives very long.
I am currently struggling to implement the neural network, since I have never taken a machine learning course.
In many examples that I have read, neural networks require training inputs and outputs. For Flappy Bird, I can't think of an output, since you don't really know if the action of flapping will benefit you or not.
In the example that I followed, Synaptic.js is used and it is pretty straightforward. However, in Python, I can't find a simple library that will initialize randomly and adjust the weights and biases depending on the good agents that survive longer.
What would be the right way to implement this Neural Network without having a training dataset?
Is there any way to create Flappy Bird without using Neural Networks, just Genetic Algorithm?
The example in JavaScript that I am referring to: [Flappy Bird Using Machine Learning](https://github.com/ssusnic/Machine-Learning-Flappy-Bird)<issue_comment>username_1: I agree a neural net is a good start, and you might want to add a convolutional neural network to the list of models you want to test and evaluate.
However, your question was really about getting something up and running using Python. I don't have enough time to play games myself, much less let my computers do it for me... but I know [someone who has](https://github.com/NJacobsohn/Your-Learning-is-in-Another-Castle#the-numerical-model) done such a project.
Hopefully that helps.
Upvotes: 0 <issue_comment>username_2: I would recommend learning about reinforcement learning first. You don't need a dataset, as you train your network by letting it play the game over and over again. But knowing how to do so does mean finding out about Markov decision processes and how you can use the neural network to solve them.
Upvotes: 0 <issue_comment>username_3: What you are trying to achieve is a game that **learns** to play Flappy Bird. For doing this you need a neural network AND a genetic algorithm; those two things work together.
About your concerns on the output: you don't have to know whether the action will benefit you or not, and I will soon explain why.
The neural network part
-----------------------
So, what you need is to know how to build a neural network. I don't know your knowledge about it, but I suggest starting from the basics. In this scenario, you need a feed-forward neural network, because you just take the inputs from the current Flappy Bird scene/frame (such as the y position of the bird, the distance from the closest pipe, etc.) and feed them through a network that outputs either 1 or 0 (jump or don't jump) in the single output neuron we just decided it has.
In Python you can implement a neural network from scratch, or use a neural network framework that does all the dirty work for you.
* From scratch you would need to use numpy for matrix calculations, and you would need to learn matrix multiplication, dot products and all that fancy stuff (you can just let numpy take care of the matrix calculations, but understanding how it works behind the scenes always helps you understand new problems that you might come across when doing more advanced stuff)
* Using a framework like [Tensorflow](https://www.tensorflow.org/) for python, the only thing you need to do is find the right structure for the network you want to use. You will not have to worry about how activations work, or how the feed forward is performed (But again, it's a good thing to know when working with neural nets)
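For illustration, a minimal numpy sketch of such a bird brain could look like this (the input features `[bird_y, dist_to_pipe, gap_y]` and the layer sizes are just assumptions for the example, not something the game prescribes):

```python
import numpy as np

def make_brain(rng, n_in=3, n_hidden=6):
    # Random weights: this is all the "initialization" a GA bird needs.
    return {
        "W1": rng.normal(size=(n_hidden, n_in)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(size=(1, n_hidden)),
        "b2": np.zeros(1),
    }

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decide(brain, inputs):
    # One feed-forward pass: hidden layer, then a single output neuron.
    h = np.tanh(brain["W1"] @ inputs + brain["b1"])
    out = sigmoid(brain["W2"] @ h + brain["b2"])[0]
    return out > 0.5          # True -> flap, False -> do nothing

rng = np.random.default_rng(42)
brain = make_brain(rng)
print(decide(brain, np.array([0.4, 0.7, 0.5])))   # a random, "not smart" decision
```

There is no backpropagation anywhere here; the weights only ever change through the genetic algorithm described below.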
The genetic algorithm part or ""learning""
------------------------------------------
I say ""learning"" because at first sight it might look like learning, but really it is not.
The genetic algorithm works like "the survival of the fittest": the "smarter" birds, i.e. the ones that reached the highest score in the current generation, get a chance to produce child birds that have the same brain as their parents, with either some minimal modifications or a mix of their parents' brains.
The process of this ""learning"", so the genetic algorithm, works like so:
1. Create a generation of let's say 200 birds, every bird has a brain with random weights, so at the first run, they are all very...not smart
2. The game starts, and on every frame of the game the brain of the bird receives as input some data taken from the current frame (y position of the bird, distance from the pipe, ...)
3. The brain (neural network) of each bird performs a feed forward with that data, and outputs what at the beginning is a very random result, let's say 0.75 for one bird
4. At this point you decided that 0.75 is greater than 0.5, so you take that as a 1, which stands for "jump", while if it was 0.3, so 0, the bird does nothing and keeps falling
5. Shortly the bird will die, because it has no idea what it is doing, so it most likely collides with a pipe or the ground.
6. After all the birds have met their fate, you see that some birds got further than others, so you choose, for example, the 5 best-performing ones.
7. Now you try to create a new generation of 200 birds using only the brains of those 5 that were chosen, by mixing and modifying their brains
8. Now the new birds have a brand new brain, that in some cases might be better than the previous one, so chances are that some of those birds will reach a higher score, therefore flap further into the level.
9. Repeat from point 6
In practice, your "perform\_genetic\_algorithm" function in Python will have to choose the birds with the highest score and, as wild as it sounds, mix their brains and modify them, hoping that some modifications will improve the performance of the birds.
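As a rough sketch of what that function could look like (treating each brain as a flat numpy vector of weights; the population size, number of parents and mutation rates are illustrative choices, not the only reasonable ones):

```python
import numpy as np

def next_generation(brains, scores, rng, pop_size=200, n_parents=5,
                    mutation_rate=0.1, mutation_scale=0.5):
    # 1. Survival of the fittest: keep the few highest-scoring brains.
    parents = [brains[i] for i in np.argsort(scores)[-n_parents:]]
    children = []
    while len(children) < pop_size:
        mom, dad = rng.choice(n_parents, size=2, replace=False)
        # 2. Crossover: mix the two parent brains weight by weight.
        mask = rng.random(parents[mom].shape) < 0.5
        child = np.where(mask, parents[mom], parents[dad])
        # 3. Mutation: small random tweaks, hoping some of them help.
        mutate = rng.random(child.shape) < mutation_rate
        child = child + mutate * rng.normal(scale=mutation_scale, size=child.shape)
        children.append(child)
    return children

rng = np.random.default_rng(0)
population = [rng.normal(size=20) for _ in range(10)]   # 10 toy brains, 20 weights each
scores = rng.random(10)                                 # pretend these came from the game
population = next_generation(population, scores, rng, pop_size=10)
```

In the real game you would flatten each bird's weight matrices into one vector before this step and reshape them back into a brain afterwards.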
>
> I can't think of output since you don't really know if the action of flapping will benefit you or not
>
>
>
The mechanism above explains why you basically do not care at all about the output, except for telling the game engine: "hey, the bird decided to flap, do it". Whether it's the right action or not doesn't matter, as the smarter birds are **naturally** going to get further and so be more likely to be chosen for the next generation.
Hopefully now it's all more clear.
Here is some useful links for building a neural network and for understanding the genetic algorithm:
* [How to build a neural network](http://neuralnetworksanddeeplearning.com/chap1.html): I am linking this because it contains all the useful information about how to build a very basic neural network in Python. In your case, you would have to ignore all the parts about backpropagation, loss & error calculation and SGD, and just look at the **feed forward** part.
* [How to build a neural network - 2](https://enlight.nyc/projects/neural-network/): This is another example of building a neural network that I found really useful; it is probably simpler and more straightforward than the previous link, but again, the backpropagation part is not needed for this genetic-based learning.
* [Video tutorials on genetic algorithm](https://www.youtube.com/watch?v=9zfeTw-uFCw&list=PLRqwX-V7Uu6bJM3VgzjNV5YxVxUwzALHV): This is a very long but very explanatory playlist of videos that dives into the nature of genetic algorithms and how to implement one
* [Genetic algorithm optimization](https://towardsdatascience.com/introduction-to-optimization-with-genetic-algorithm-2f5001d9964b): Other source about genetic algorithms
Upvotes: 2 |
2019/05/18 | 1,593 | 7,112 | <issue_start>username_0: I've been doing some reading about GANs, and although I've seen several excellent examples of *implementations*, the descriptions of why those patterns were chosen isn't clear to me in many cases.
At a very high level, the purpose of the **discriminator** in a GAN is to establish a **loss function** that can be used to train the **generator**.
ie. Given the random input to the generator, the discriminator should be able to return a probability of the result being a 'real' image.
If the discriminator is perfect the probability will always be zero, and the loss function will have no gradient.
Therefore you iterate:
* generate random samples
* generate output from the generator
* evaluate the output using the discriminator
* train the generator
* update the discriminator to be more accurate by training it on samples from the real distribution and **output from the generator**.
The problem, and what I don't understand, is point 5 in the above.
Why do you use the output of the generator?
I absolutely understand that you need to iterate on the accuracy of the discriminator.
To start with, it needs to respond with a non-zero value for the effectively random output from the generator, and slowly it needs to converge towards correctly classifying images as 'real' or 'fake'.
In order to achieve this we iterate, training the discriminator with images from the real distribution, pushing it towards accepting 'real' images.
...and with the images from the generator; but I don't understand why.
Effectively, you have a set of real images (eg. 5000 pictures of faces), that represent a sample from the latent space you want the GAN to converge on (eg. all pictures of faces).
So the argument goes:
As the generator is trained iteratively closer and closer to **generating images from the latent space**, the discriminator is iteratively trained to **recognise from the latent space**, as though it had a *much larger sample size* than the 5000 (or whatever) sample images you started with.
...ok, but that's daft.
The *whole point* of DNNs is that, given a sample, you can train them to recognise images from the latent space the samples represent.
I've never seen a DNN where the first step was 'augment your samples with extra procedurally generated fake samples'; the only reason to do this would be if you can only recognise samples in the input set, i.e. your network is over-fitted.
So, as a specific example, why can't you incrementally train the discriminator on samples of ('real' \* epoch/iterations + 'noise' \* (1 - epoch/iterations)), where 'noise' is just a random input vector?
Your discriminator will then *necessarily* converge towards recognising real images, as well as offering a meaningful gradient to the generator.
What benefit does feeding the output of the generator in offer over this?<issue_comment>username_1: The main reason that the discriminator is trained concurrently with the generator is to provide (at least in theory) a smooth and gradual learning signal for the generator.
If we trained the discriminator on only the input data, then, assuming our training algorithm converges well, it should quickly converge to a fixed model. The generator can then learn to fool this fixed model, but it will likely still be easy to spot the generator's fakes for a human. For example, the discriminator may have learned that, coincidentally, in the sample you provided, all images of trucks have a fully white pixel in the top left corner. If it learns that pattern, the generator can fool it by generating noise with a white pixel. Once the generator has learned this pattern, all learning stops.
In contrast, suppose that the discriminator is repeatedly re-trained on a mixture of real and generated examples. The discriminator will be forced to learn more complex patterns than "white pixel in upper left", which improves its quality beyond the raw patterns in the sample data.
The converse relationship is also true. If the generator is trained only on the training data, it will also likely pick out only the most obvious patterns. These patterns are likely to create many local minima in the weight space for the network. However, if the error signal from the discriminator is fed to the generator, then the generator must adapt: in effect, we are telling it "making the top left pixel white is not good enough to fool observers. Find more complex patterns".
Upvotes: 2 <issue_comment>username_2: The discriminator's job is to tell between real images and generated images. It would be impossible to do that if it never actually sees generated images, just as if you wanted a network to differentiate between cats and dogs it wouldn't work if you only showed it pictures of dogs.
If you were to train a discriminator on only real images, all the labels it would see would be 1. What you end up with is a network that learns how to produce 1 regardless of its inputs, which is very easy to learn without finding any underlying patterns in the data. Once you add in the generated images and 0 labels, it is forced to learn something interesting.
Upvotes: 1 <issue_comment>username_3: It turns out there is actually a practical reason for this.
Practically speaking, in GANs the generator tends to converge on a few 'good' outputs that fool the discriminator if you don't do this.
Which is to say, the generator's loss function is intended not simply as "fool the discriminator"; it is actually:
* Fool the discriminator.
* Generate novel output.
You can write your generator's loss function to explicitly require that the outputs in any training batch be distinct, but by passing the outputs to the discriminator you create a **history of previous predictions** from the generator, effectively applying a loss metric for when the generator tends to produce the same outputs over and over again.
...but it is not magic, and it is *not* about the discriminator learning "good" features; it is about the loss applied to the **generator**.
This is referred to as "Mode Collapse", to quote the Google ML guide on GAN troubleshooting:
>
> If the generator starts producing the same output (or a small set of outputs) over and over again, the discriminator's best strategy is to learn to always reject that output. But if the next generation of discriminator gets stuck in a local minimum and doesn't find the best strategy, then it's too easy for the next generator iteration to find the most plausible output for the current discriminator.
>
>
> Each iteration of generator over-optimizes for a particular discriminator, and the discriminator never manages to learn its way out of the trap. As a result the generators rotate through a small set of output types. This form of GAN failure is called mode collapse.
>
>
>
See also, for additional reading, "Unrolled GANs" and "Wasserstein loss".
see: <https://developers.google.com/machine-learning/gan/problems>
Upvotes: 2 [selected_answer] |
2019/05/19 | 368 | 1,604 | <issue_start>username_0: I've been studying Branch and Bound's graph algorithm and I hear it always finds the optimal path because it uses previously found solutions to find others
However, I haven't been able to find a proof of why it finds the optimal path. (In fact, most sites kind of do a bad job generalizing the algorithm itself.)
What is the proof that this algorithm always finds the optimal path in the case of a graph with 1 or more goal nodes?<issue_comment>username_1: My attempt was proof by contradiction. We can assume B&B found a sub-optimal path, but that would create a contradiction, because the only way B&B would miss an optimal path is if it skipped it completely (this part I don't know how to prove) or the related part of the search space was pruned.
Upvotes: -1 <issue_comment>username_2: [Branch and Bound](https://en.wikipedia.org/wiki/Branch_and_bound#Generic_version) is similar to an [exhaustive search](https://en.wikipedia.org/wiki/Brute-force_search), except it incorporates a method for computing lower bounds on branches. If the lower bound on a given branch is greater than the upper bound on the problem (i.e. the current best solution encountered), that branch can be discarded since it will never produce an optimal solution.
Hence, since you explore all options except those you *know* will produce values less optimal than your current best value, you are guaranteed to encounter the global optimum.
Note this is a generic algorithm, and you will need to reference a specific implementation if you want proof of why it satisfies these criteria.
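To make the generic scheme concrete, here is a small illustrative sketch applying branch and bound to the 0/1 knapsack problem (just one possible implementation; the fractional relaxation used for the bound is a common but not mandatory choice):

```python
# A minimal branch-and-bound sketch for the 0/1 knapsack problem.
def knapsack_bb(values, weights, capacity):
    n = len(values)
    # Sort items by value density so the fractional bound is tight.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    values = [values[i] for i in order]
    weights = [weights[i] for i in order]

    best = 0  # incumbent: best complete solution found so far (upper bound on the problem)

    def bound(i, cap, acc):
        # Optimistic bound on this branch: fill remaining capacity fractionally.
        for j in range(i, n):
            if weights[j] <= cap:
                cap -= weights[j]
                acc += values[j]
            else:
                return acc + values[j] * cap / weights[j]
        return acc

    def branch(i, cap, acc):
        nonlocal best
        if i == n:
            best = max(best, acc)
            return
        # Prune: this branch can never beat the incumbent solution.
        if bound(i, cap, acc) <= best:
            return
        if weights[i] <= cap:              # branch 1: take item i
            branch(i + 1, cap - weights[i], acc + values[i])
        branch(i + 1, cap, acc)            # branch 2: skip item i

    branch(0, capacity, 0)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # -> 220
```

The `bound(...) <= best` check is exactly the discarding step described above: a branch whose optimistic bound cannot exceed the current best solution is never explored, yet the global optimum is still guaranteed to be found.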
Upvotes: 0 |
2019/05/19 | 556 | 1,536 | <issue_start>username_0: I have a `.csv` file called `ratings.csv` with the following structure:
```
userID, movieID, rating
3, 12, 5
2, 7, 6
```
The rating scale goes from 0 to 5 stars. I want to be able to plot the sparsity of the matrix like it's done in the following picture:
[](https://i.stack.imgur.com/jtSLT.jpg)
As you can see, the rating scale goes from 0 to 5 on the right. It is a very well-thought-out plot.
I have Matlab, Python, R etc. Could you come up with something and help me? I’ve tried hard but I cannot find the way to do it.<issue_comment>username_1: You're looking for a [heatmap](https://en.wikipedia.org/wiki/Heat_map). Check out e.g. <https://stackoverflow.com/q/33282368/3924118> (if you like Python more than the others). See also [this documentation](https://seaborn.pydata.org/generated/seaborn.heatmap.html).
Upvotes: 2 <issue_comment>username_2: I did it!
```
A = importdata('u.data');
user_id = A(:, 1);
movie_id = A(:, 2);
rating = A(:, 3);
% Build matrix R and w (weights matrix)
R = zeros(943, 1682);
w = zeros(943, 1682);
for i = 1:100000
    R(user_id(i), movie_id(i)) = rating(i);
    w(user_id(i), movie_id(i)) = 1;
end
hm = HeatMap(R);
ax = hm.plot; % 'ax' will be a handle to a standard MATLAB axes.
colorbar('Peer', ax); % Turn the colorbar on
caxis(ax, [0 5]); % Adjust the color limits
```
Output:
[](https://i.stack.imgur.com/qn3yn.png)
Upvotes: 1 |
2019/05/20 | 2,066 | 5,521 | <issue_start>username_0: I am trying to solve for $\lambda$ using [temporal-difference learning](https://en.wikipedia.org/wiki/Temporal_difference_learning). More specifically, I am trying to figure out what $\lambda$ I need, such that $\text{TD}(\lambda)=\text{TD}(1)$, after one iteration. But I get the incorrect value of $\lambda$.
Here's my implementation.
```
from scipy.optimize import fsolve, leastsq
import numpy as np

class TD_lambda:
    def __init__(self, probToState, valueEstimates, rewards):
        self.probToState = probToState
        self.valueEstimates = valueEstimates
        self.rewards = rewards
        self.td1 = self.get_vs0(1)

    def get_vs0(self, lambda_):
        probToState = self.probToState
        valueEstimates = self.valueEstimates
        rewards = self.rewards
        vs = dict(zip(['vs0', 'vs1', 'vs2', 'vs3', 'vs4', 'vs5', 'vs6'],
                      list(valueEstimates)))
        vs5 = vs['vs5'] + 1*(rewards[6] + 1*vs['vs6'] - vs['vs5'])
        vs4 = vs['vs4'] + 1*(rewards[5] + lambda_*rewards[6] + lambda_*vs['vs6']
                             + (1-lambda_)*vs['vs5'] - vs['vs4'])
        vs3 = vs['vs3'] + 1*(rewards[4] + lambda_*rewards[5] + lambda_**2*rewards[6]
                             + lambda_**2*vs['vs6'] + lambda_*(1-lambda_)*vs['vs5']
                             + (1-lambda_)*vs['vs4'] - vs['vs3'])
        vs1 = vs['vs1'] + 1*(rewards[2] + lambda_*rewards[4] + lambda_**2*rewards[5]
                             + lambda_**3*rewards[6] + lambda_**3*vs['vs6']
                             + lambda_**2*(1-lambda_)*vs['vs5'] + lambda_*(1-lambda_)*vs['vs4']
                             + (1-lambda_)*vs['vs3'] - vs['vs1'])
        vs2 = vs['vs2'] + 1*(rewards[3] + lambda_*rewards[4] + lambda_**2*rewards[5]
                             + lambda_**3*rewards[6] + lambda_**3*vs['vs6']
                             + lambda_**2*(1-lambda_)*vs['vs5'] + lambda_*(1-lambda_)*vs['vs4']
                             + (1-lambda_)*vs['vs3'] - vs['vs2'])
        vs0 = vs['vs0'] + probToState*(rewards[0] + lambda_*rewards[2] + lambda_**2*rewards[4]
                                       + lambda_**3*rewards[5] + lambda_**4*rewards[6]
                                       + lambda_**4*vs['vs6'] + lambda_**3*(1-lambda_)*vs['vs5']
                                       + lambda_**2*(1-lambda_)*vs['vs4'] + lambda_*(1-lambda_)*vs['vs3']
                                       + (1-lambda_)*vs['vs1'] - vs['vs0']) \
                        + (1-probToState)*(rewards[1] + lambda_*rewards[3] + lambda_**2*rewards[4]
                                           + lambda_**3*rewards[5] + lambda_**4*rewards[6]
                                           + lambda_**4*vs['vs6'] + lambda_**3*(1-lambda_)*vs['vs5']
                                           + lambda_**2*(1-lambda_)*vs['vs4'] + lambda_*(1-lambda_)*vs['vs3']
                                           + (1-lambda_)*vs['vs2'] - vs['vs0'])
        return vs0

    def get_lambda(self, x0=np.linspace(0.1, 1, 10)):
        return fsolve(lambda lambda_: self.get_vs0(lambda_) - self.td1, x0)
```
The expected output is: $0.20550275877409016$, but I am getting `array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])`
I cannot understand what I am doing incorrectly.
```
TD = TD_lambda(probToState,valueEstimates,rewards)
TD.get_lambda()
# Output : array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
```
I am just using TD($\lambda$) for state 0 after one iteration. I am not required to see where it converges, so I don't update the value estimates.<issue_comment>username_1: $TD(\lambda)$ return has the following form:
\begin{equation}
G\_t^\lambda = (1 - \lambda) \sum\_{n=1}^{\infty} \lambda^{n-1} G\_{t:t+n}
\end{equation}
For you MDP $TD(1)$ looks like this:
\begin{align}
G &= 0.64 (r\_0 + r\_2 + r\_4 + r\_5 + r\_6) + 0.36(r\_1 + r\_3 + r\_4 + r\_5 + r\_6)\\
G &\approx 6.164
\end{align}
$TD(\lambda)$ looks like this:
\begin{equation}
G\_0^\lambda = (1-\lambda)[\lambda^0 G\_{0:1} + \lambda^1 G\_{0:2} + \lambda^2 G\_{0:3} + \lambda^3 G\_{0:4} + \lambda^4 G\_{0:5} ]
\end{equation}
Now each $G$ term separately:
\begin{align}
G\_{0:1} &= 0.64(r\_0 + v\_1) + 0.36(r\_1 + v\_2) \approx 7.864\\
G\_{0:2} &= 0.64(r\_0 + r\_2 + v\_3) + 0.36(r\_1 + r\_3 + v\_3) \approx -5.336\\
G\_{0:3} &= 0.64(r\_0 + r\_2 + r\_4 + v\_4) + 0.36(r\_1 + r\_3 + r\_4 + v\_4) \approx 25.864\\
G\_{0:4} &= 0.64(r\_0 + r\_2 + r\_4 + r\_5 + v\_5) + 0.36(r\_1 + r\_3 + r\_4 + r\_5 + v\_5) \approx -11.936\\
G\_{0:5} &= 0.64(r\_0 + r\_2 + r\_4 + r\_5 + r\_6 + v\_6) + 0.36(r\_1 + r\_3 + r\_4 + r\_5 + r\_6
+ v\_6) \approx -0.336
\end{align}
Finally, we need to find $\lambda$ so that the return is equal to the $TD(1)$ return. We have:
\begin{equation}
6.164 = (1 - \lambda)[7.864 - 5.336\lambda + 25.864\lambda^2 - 11.936\lambda^3 - 0.336\lambda^4]
\end{equation}
When you solve this equation, one of the solutions is $0.205029$, which is close to what you needed to get, considering the numerical errors. Your problem was that you only considered the probability in the first state, but that decision carries over to future states as well.
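As a quick numerical check of that last step (a sketch with scipy; the bracket $[0, 0.5]$ was chosen by inspecting the sign of the expression at a few points):

```python
from scipy.optimize import brentq

# (1 - l) * [G_{0:1} + G_{0:2}*l + G_{0:3}*l^2 + ...] - TD(1) = 0,
# using the G values computed above
f = lambda l: (1 - l) * (7.864 - 5.336*l + 25.864*l**2
                         - 11.936*l**3 - 0.336*l**4) - 6.164

lam = brentq(f, 0.0, 0.5)
print(lam)   # ~0.205
```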
**EDIT**
As pointed out by username_2, this is not a full solution, it misses one crucial step to get it fully correct. Hint for it can be found in his [answer](https://ai.stackexchange.com/a/12606/20339) and that's the correct solution.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The previous answer from username_1 is *mostly* correct but is missing a large detail to get the precise answer.
Given this is a question from a GT course homework, I only want to leave pointers so those seeking help can understand the required concept.
The $TD(\lambda)$ equation is a summation over *infinite* K-steps ($G\_{0:1} \rightarrow G\_{0:\infty}$), and these should be included in our equation of $TD(\lambda) = TD(1)$.
Every k-step estimator that includes steps past the termination point will equal the sum of the rewards.
Including these values in the summation will show a pattern, making the infinite summation equation solvable.
Upvotes: 1 |
2019/05/20 | 1,094 | 2,890 | <issue_start>username_0: I have a fairly large dataset consisting of different images with and without persons that I want to use for a project.
The problem is that I only want the pictures that contain faces, and it is best if there is only a crop of the face.
I already looked at Facenet and Openface, but I thought that there must be a simpler already trained solution just to sort the dataset so I can get started with my own project.<issue_comment>username_1: $TD(\lambda)$ return has the following form:
\begin{equation}
G\_t^\lambda = (1 - \lambda) \sum\_{n=1}^{\infty} \lambda^{n-1} G\_{t:t+n}
\end{equation}
For your MDP, $TD(1)$ looks like this:
\begin{align}
G &= 0.64 (r\_0 + r\_2 + r\_4 + r\_5 + r\_6) + 0.36(r\_1 + r\_3 + r\_4 + r\_5 + r\_6)\\
G &\approx 6.164
\end{align}
$TD(\lambda)$ looks like this:
\begin{equation}
G\_0^\lambda = (1-\lambda)[\lambda^0 G\_{0:1} + \lambda^1 G\_{0:2} + \lambda^2 G\_{0:3} + \lambda^3 G\_{0:4} + \lambda^4 G\_{0:5} ]
\end{equation}
Now each $G$ term separately:
\begin{align}
G\_{0:1} &= 0.64(r\_0 + v\_1) + 0.36(r\_1 + v\_2) \approx 7.864\\
G\_{0:2} &= 0.64(r\_0 + r\_2 + v\_3) + 0.36(r\_1 + r\_3 + v\_3) \approx -5.336\\
G\_{0:3} &= 0.64(r\_0 + r\_2 + r\_4 + v\_4) + 0.36(r\_1 + r\_3 + r\_4 + v\_4) \approx 25.864\\
G\_{0:4} &= 0.64(r\_0 + r\_2 + r\_4 + r\_5 + v\_5) + 0.36(r\_1 + r\_3 + r\_4 + r\_5 + v\_5) \approx -11.936\\
G\_{0:5} &= 0.64(r\_0 + r\_2 + r\_4 + r\_5 + r\_6 + v\_6) + 0.36(r\_1 + r\_3 + r\_4 + r\_5 + r\_6 + v\_6) \approx -0.336
\end{align}
Finally, we need to find $\lambda$ so that the return is equal to $TD(1)$ return, we have:
\begin{equation}
6.164 = (1 - \lambda)[7.864 - 5.336\lambda + 25.864\lambda^2 - 11.936\lambda^3 - 0.336\lambda^4]
\end{equation}
When you solve this equation, one of the solutions is $0.205029$, which is close to what you needed to get, considering the numerical errors. Your problem was that you only considered the probability in the first state, but that decision propagates to future states as well.
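As a quick numerical sanity check of that root (a sketch: `g` below is just the right-hand side of the equation above minus $6.164$, and the bracket $[0.1, 0.3]$ is used because `g` changes sign there):

```python
def g(lam):
    """(1 - lam) * [bracket from the equation above] - 6.164."""
    q = 7.864 - 5.336 * lam + 25.864 * lam**2 - 11.936 * lam**3 - 0.336 * lam**4
    return (1 - lam) * q - 6.164

# Simple bisection: g(0.1) > 0 and g(0.3) < 0, so a root lies in between
lo, hi = 0.1, 0.3
for _ in range(60):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid

print((lo + hi) / 2)  # converges to roughly 0.20503, matching the value above
```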
**EDIT**
As pointed out by username_2, this is not a full solution, it misses one crucial step to get it fully correct. Hint for it can be found in his [answer](https://ai.stackexchange.com/a/12606/20339) and that's the correct solution.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The previous answer from username_1 is *mostly* correct but is missing a large detail to get the precise answer.
Given this is a question from a GT course homework, I only want to leave pointers so those seeking help can understand the required concept.
The $G\_t^\lambda$ equation is a summation over *infinite* $k$-steps ($G\_{0:1} \rightarrow G\_{0:\infty}$), and these terms should be included in our equation of $G\_0^\lambda = TD(1)$.
Every $k$-step estimator which includes steps past the termination point will equal the sum of the rewards.
Including these values in the summation will show a pattern, making the infinite-summation equation solvable.
Upvotes: 1 |
2019/05/20 | 364 | 1,407 | <issue_start>username_0: I'm studying convolutional neural networks from the following article <https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/>.
If we take a grayscale image, the value of the pixel will be between 0 and 255. Now, if we apply a filter to our "new image", can we have pixels whose values are not included in this range? In this case, how can we create the convolved image?<issue_comment>username_1: The convolved image can be considered a feature map, where each new neuron represents some indication (or lack thereof) of a feature in some receptive field of the original image, so, no, it does not need to be a valid image in the output.
If you specifically care for it to be an image as an output, you can do a couple of things:
1) normalize the produced feature map to some set range that you're working in (0-255 or 0-1)
2) make the filter a valid probability distribution, and you know the output will stay in the same range as the input (ex: Gaussian filters)
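To make option 1 concrete, here is a hedged sketch (plain NumPy, not taken from the article the question links): convolve, then min-max rescale the resulting feature map back into the 0-255 image range:

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 'valid' 2-D convolution without kernel flipping (i.e. the
    cross-correlation that deep-learning 'conv' layers actually compute)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def to_image_range(fmap):
    """Min-max rescale an arbitrary feature map back into 0-255."""
    lo, hi = fmap.min(), fmap.max()
    if hi == lo:
        return np.zeros_like(fmap)
    return (fmap - lo) / (hi - lo) * 255.0

img = np.random.default_rng(0).integers(0, 256, size=(6, 6)).astype(float)
edge = np.array([[1.0, 0.0, -1.0]] * 3)  # gradient filter: sums can leave 0-255
fmap = convolve2d(img, edge)             # values may be negative or > 255
fmap_img = to_image_range(fmap)
print(fmap_img.min(), fmap_img.max())    # back in the displayable range
```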
Upvotes: 1 <issue_comment>username_2: I think normalisation into the range 0-1 is needed, or at least clipping to something like 5-10, because the values will become astronomical after many layers. A convolved image has a vector of features for each pixel. Take RGB, for example: each colour is a feature of one pixel, and in the next feature map the features describing one pixel's surroundings will be things like 'horizontal line, vertical line, circle'.
Upvotes: 0 |
2019/05/20 | 1,433 | 4,414 | <issue_start>username_0: I have come across something that IBM offers called *neural architecture search*. You feed it a data set and it outputs an initial neural architecture that you can train.
How is *neural architecture search* (NAS) performed? Do they use heuristics, or is this meta machine learning?
If you have any papers on NAS, I would appreciate it if you could provide a link to them.<issue_comment>username_1: [Neural architecture search (NAS)](https://en.wikipedia.org/wiki/Neural_architecture_search) is a method of automating the design (that is, the choice of the values of the hyper-parameters) of artificial neural networks. There are different approaches to search the space of neural network architectures. For example, you can use reinforcement learning or evolutionary (or genetic) algorithms.
Check out the paper [Neural Architecture Search with Reinforcement Learning](https://arxiv.org/pdf/1611.01578.pdf) (2017), by <NAME> and <NAME>, where the authors train, using reinforcement learning (specifically, [REINFORCE](http://www-anw.cs.umass.edu/~barto/courses/cs687/williams92simple.pdf)), a recurrent neural network (the "controller") to generate (convolutional and recurrent) neural network architectures, so that to maximise the expected accuracy of the generated architectures on a validation dataset. They achieve some good results using this approach.
See also [Efficient Neural Architecture Search via Parameter Sharing](https://arxiv.org/pdf/1802.03268.pdf) (2018), by <NAME>, <NAME>, <NAME>, <NAME> and <NAME> (which thus includes some of the authors of [NAS](https://arxiv.org/pdf/1611.01578.pdf)), which is similar to [NAS](https://arxiv.org/pdf/1611.01578.pdf), but more efficient, hence the acronym ENAS (efficient NAS).
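Whatever the search strategy (RL, evolution, or plain random search), the outer loop has roughly the same shape. A heavily simplified sketch (pure Python; the architecture space and the `evaluate` stand-in are made up for illustration — a real system would train each candidate and measure its validation accuracy):

```python
import random

random.seed(0)

# A toy architecture space: depth, width, and activation per candidate
SPACE = {
    "layers": [2, 4, 8],
    "width": [32, 64, 128],
    "activation": ["relu", "tanh"],
}

def sample_architecture():
    return {k: random.choice(v) for k, v in SPACE.items()}

def evaluate(arch):
    """Stand-in for 'train the network and return validation accuracy'.
    This fake score simply prefers deeper and wider candidates."""
    return arch["layers"] * 0.05 + arch["width"] / 1280

def random_search(trials=20):
    best, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture()
        score = evaluate(arch)
        if score > best_score:
            best, best_score = arch, score
    return best, best_score

best, score = random_search()
print(best)  # the highest-scoring candidate among those sampled
```

RL-based NAS (a controller proposing architectures) and evolutionary NAS (mutating a population) replace `sample_architecture` with something that learns from past scores; the evaluate-and-keep-the-best loop stays the same.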
Upvotes: 0 <issue_comment>username_2: You could say that NAS fits into the domain of Meta Learning or Meta Machine learning.
I've pulled the NAS papers from my notes, this is a collection of papers/lectures that I personally found very interesting. It's sorted in rough chronological descending order, and `***` means influential / must read.
<NAME> and <NAME> are two good authors on the topic.
* [The Evolved Transformer](https://arxiv.org/pdf/1901.11117v1.pdf)
* [Exploring Randomly Wired Neural Networks for Image Recognition](https://arxiv.org/pdf/1904.01569.pdf)
* [GRAPH HYPERNETWORKS FOR NEURAL ARCHITECTURE SEARCH](https://arxiv.org/pdf/1810.05749v1.pdf)
* [Backprop Evolution](https://arxiv.org/pdf/1808.02822v1.pdf)
* [Progressive Neural Architecture Search](https://arxiv.org/pdf/1712.00559v3.pdf)
* \*\*\* [DARTS: Differentiable Architecture Search](https://arxiv.org/pdf/1806.09055v1.pdf)
* \*\*\* [Efficient Neural Architecture Search via Parameter Sharing - ENAS](https://arxiv.org/pdf/1802.03268.pdf)
* \*\*\* [Progressive Neural Architecture Search](https://arxiv.org/pdf/1712.00559v2.pdf)
* [AlphaX: eXploring Neural Architectures with Deep Neural Networks and Monte Carlo Tree Search](https://arxiv.org/pdf/1805.07440v1.pdf)
* [Automatic Machine Learning - Prof. <NAME>](https://www.youtube.com/watch?v=OR-IKyP4ZpI)
* [Google Brain - Neural Architecture Search - Quoc Le](https://www.youtube.com/watch?v=sROrvtXnT7Q)
* \*\*\* [Regularized Evolution for Image Classifier Architecture Search](https://arxiv.org/pdf/1802.01548.pdf)
* [Autostacker: A Compositional Evolutionary Learning System](https://arxiv.org/pdf/1803.00684v1.pdf)
* [Generating Neural Networks with Neural Networks](https://arxiv.org/pdf/1801.01952v1.pdf)
* [Finding Competitive Network Architectures Within a Day Using UCT](https://arxiv.org/pdf/1712.07420.pdf)
* [Neuroevolution: A different kind of deep learning](https://www.oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-learning)
* [Evolving Deep Neural Networks](https://arxiv.org/pdf/1703.00548.pdf)
* [<NAME>: Deep Learning-to-Learn Robotic Control](https://www.youtube.com/watch?v=TERCdog1ddE)
* \*\*\* [SMASH: One-Shot Model Architecture Search through HyperNetworks](https://arxiv.org/pdf/1708.05344.pdf)
Upvotes: 3 <issue_comment>username_3: Here are two review articles:
* <NAME>: [Neural Architecture Search: A Survey](https://arxiv.org/abs/1808.05377) (2019), Journal of Machine Learning Research 20, 1-21
* <NAME>: [AutoML: A Survey of the State-of-the-Art](https://arxiv.org/abs/1908.00709) (2019)
Upvotes: 1 |
2019/05/21 | 962 | 3,656 | <issue_start>username_0: What is a simple turn-based game that can be used to validate a Monte-Carlo Tree Search implementation and its parameters?
Before applying it to problems where I do not have the possibility to validate its moves for correctness, I would like to implement a test case that makes sure it behaves as expected, especially when there are some ambiguities between different implementations and papers.
I built a *connect-four game* in which two MCTS-AIs play against each other, and an *iterated prisoner's dilemma* implementation in which an MCTS-AI plays against common strategies like tit-for-tat, but I am still not sure whether there is a really good way to tell if the MCTS-AI finds the best strategy.
Another alternative would be a *tic-tac-toe* game, but MCTS will exhaust the whole search space within a few steps, so it is hard to tell how the implementation will perform on other problems.
In addition, expanding a full game tree does not tell you if any states before the full expansion are following the best MCTS strategy.
---
*Example:*
You can alternate in the expand step of player 1's tree between *optimize for player 1* and *optimize for player 2*, assuming that player 2 will not play the best move for player 1, but the best move for himself. Not doing so would result in an optimistic game tree, that may work in some cases, but probably is not the best choice for many games, while it would be useful for cooperative games.
When the game tree is fully expanded, you can find the best move, even when the order of the expand steps was not optimal, so using a game that can be fully expanded is no good test to validate the in-between steps.
---
Is there a simple-to-implement game that can be used for validation, in which you can reliably check, for each move, whether the AI found the expected move?<issue_comment>username_1: A good choice might be smaller-scale games of Go, like a 9x9 board. This was the original application domain MCTS was designed for, and the [original paper](http://www.ideanest.com/vegos/MonteCarloGo.pdf) by Brugmann from 1993 details parameters that should lead to an agent that can play above beginner level in what is today a minuscule amount of computational time, in a scaled-down 9x9 grid.
Go is a good choice for a benchmark because most learning algorithms fail at it pretty badly. The fact that MCTS worked here was a major breakthrough at the time, and helped cement it as a technique for game playing. If your algorithm is not working properly, it is therefore unlikely that it can learn to play Go at the level described in Brugmann's paper.
Upvotes: 2 <issue_comment>username_2: I currently use:
* [Connect four](https://en.wikipedia.org/wiki/Connect_Four) / a [m,n,k-game](https://en.wikipedia.org/wiki/M,n,k-game)
* [Isolation](https://boardgamegeek.com/boardgame/1875/isolation) with 6x8 board
I currently have, but may remove:
* Iterated prisoner's dilemma (hard to interpret, and I am not sure if MCTS is really the right choice)
I may add sometime:
* [Tic-tac-toe](https://en.wikipedia.org/wiki/Tic-tac-toe) as very simple verification
* [Dots and Boxes](https://en.wikipedia.org/wiki/Dots_and_Boxes) (looks like it could be an interesting planning problem)
Other good ideas:
* [Go with simplified rules](https://en.wikipedia.org/wiki/Rules_of_Go#Concise_statement) (thanks username_1)
I did not try all ideas, as I just want to verify my framework, that should be used on other problems (that are harder to verify). You can still keep adding answers with good ideas, that may be a good reference for others that want to test their implementations.
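For the small games in this list, the "expected move" ground truth can be generated exactly with a brute-force solver, and the MCTS move asserted against it. A sketch (plain Python; the board encoding — a 9-character string with `.` for empty — is made up for illustration):

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != '.' and b[i] == b[j] == b[k]:
            return b[i]
    return None

@lru_cache(maxsize=None)
def negamax(b, player):
    """Exact value for the player to move: 1 = win, 0 = draw, -1 = loss."""
    if winner(b):                      # the previous player completed a line
        return -1
    if '.' not in b:
        return 0
    other = 'O' if player == 'X' else 'X'
    return max(-negamax(b[:m] + player + b[m + 1:], other)
               for m, c in enumerate(b) if c == '.')

def best_moves(b, player):
    """All optimal moves — the ground truth an MCTS move can be checked against."""
    other = 'O' if player == 'X' else 'X'
    scores = {m: -negamax(b[:m] + player + b[m + 1:], other)
              for m, c in enumerate(b) if c == '.'}
    top = max(scores.values())
    return sorted(m for m, s in scores.items() if s == top)

assert negamax('.' * 9, 'X') == 0         # perfect play is a draw
assert 2 in best_moves('XX.OO....', 'X')  # completing the top row wins
assert 1 in best_moves('X.X.O....', 'O')  # blocking is among the optimal replies
```

The same scheme scales to connect four or other m,n,k-games if the solver is given memoization and move ordering, which keeps the brute force feasible.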
Upvotes: 1 |
2019/05/21 | 862 | 3,004 | <issue_start>username_0: I am using the following library:
<https://github.com/vishnugh/evo-NEAT>
which seems to be a pretty simple NEAT-implementation.
Therefore I am using the following Config:
```
package com.evo.NEAT.com.evo.NEAT.config;
/**
* Created by vishnughosh on 01/03/17.
*/
public class NEAT_Config {

    public static final int INPUTS = 11;
    public static final int OUTPUTS = 2;
    public static final int HIDDEN_NODES = 100;
    public static final int POPULATION = 300;
    public static final float COMPATIBILITY_THRESHOLD = Float.MAX_VALUE;
    public static final float EXCESS_COEFFICENT = 1;
    public static final float DISJOINT_COEFFICENT = 1;
    public static final float WEIGHT_COEFFICENT = 5;
    public static final float STALE_SPECIES = 2;
    public static final float STEPS = 0.1f;
    public static final float PERTURB_CHANCE = 0.9f;
    public static final float WEIGHT_CHANCE = 0.5f;
    public static final float WEIGHT_MUTATION_CHANCE = 0.5f;
    public static final float NODE_MUTATION_CHANCE = 0.1f;
    public static final float CONNECTION_MUTATION_CHANCE = 0.1f;
    public static final float BIAS_CONNECTION_MUTATION_CHANCE = 0.1f;
    public static final float DISABLE_MUTATION_CHANCE = 0.1f;
    public static final float ENABLE_MUTATION_CHANCE = 0.2f;
    public static final float CROSSOVER_CHANCE = 0.1f;
    public static final int STALE_POOL = 10;
}
```
However, there are way too many species (about 60). I do not know how to reduce this number, given that the COMPATIBILITY\_THRESHOLD is already maximized.
So, what am I doing wrong?
Note: I am not using: <http://nn.cs.utexas.edu/keyword?stanley:ec02>
since this algorithm seems not to work in a changing environment (where fitness can vary strongly)<issue_comment>username_1: Your species count will increase as the chance of mutation increases. This is because, in every generation, so many genes will be mutated that they bear little resemblance to each other, and the distance function doesn't factor in historical markings / innovation numbers.
Try lowering the mutation rates.
Below is the distance function from [here](http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf) page 110
$$\delta = \frac{c\_1E}{N} + \frac{c\_2D}{N} + c\_3 \cdot \overline{W}. $$
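In code, that distance looks roughly like this (a sketch with a simplified genome encoding — a dict mapping innovation number to connection weight — not the evo-NEAT library's actual classes; $E$ is the excess gene count, $D$ the disjoint count, $\overline{W}$ the mean weight difference of matching genes):

```python
def compatibility_distance(g1, g2, c1=1.0, c2=1.0, c3=5.0):
    """delta = c1*E/N + c2*D/N + c3*W_bar for two genomes given as
    dicts mapping innovation number -> connection weight (toy encoding)."""
    innov1, innov2 = set(g1), set(g2)
    cutoff = min(max(innov1), max(innov2))
    mismatched = innov1 ^ innov2
    excess = sum(1 for i in mismatched if i > cutoff)      # E
    disjoint = len(mismatched) - excess                    # D
    matching = innov1 & innov2
    w_bar = (sum(abs(g1[i] - g2[i]) for i in matching) / len(matching)
             if matching else 0.0)                         # W_bar
    n = max(len(g1), len(g2))
    n = n if n >= 20 else 1   # common convention: don't normalize tiny genomes
    return c1 * excess / n + c2 * disjoint / n + c3 * w_bar

# Identical genomes are distance 0; one disjoint + one excess gene gives 2.0 here
print(compatibility_distance({1: 0.5, 2: 0.5}, {1: 0.5, 3: 0.5}))  # 2.0
```

If an implementation's distances come out absurdly large (or NaN), a unit check like this against hand-computed values will usually expose the bug.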
If your fitness values vary a lot, try ranking the fitnesses in each species and setting the survival chance based on that rank.
If you mean a large action space by changing environment, you can set the number of output nodes to the total number of actions, and rank each action, best to worst, then pick the best available action for the state.
Upvotes: 1 <issue_comment>username_2: If the compatibility distances are coming out to be greater than `Float.MAX_VALUE`, that tells me something may be wrong with that calculation; I would suggest setting breakpoints and debugging that code. I usually have my threshold set to 5-10, and I can noticeably see a change in species count when I move the threshold to something like 10.
Upvotes: 0 |
2019/05/20 | 669 | 2,607 | <issue_start>username_0: I’ve been thinking about this for a few days and can’t tell if this would feel morally just to an average user<issue_comment>username_1: Ethics aside, I think this could potentially create a number of different issues. What if your algorithm guesses wrong and I'm stuck with a UI that's targeting incorrectly? What if my wife and I share an account? What if I'm a (*insert orientation here*) (*insert gender here*) who [coaches, supports] a [men's, women's] [volleyball, football] team? People are very diverse, and you could make many mistakes. Would you then provide tools for the user to correct these mistakes? How would you be able to do so without the user being offended?
Instead, I see many fewer issues that might arise from a section like the following:
>
> **Let's get to know each other!**
>
>
> If you'd like, we can help tune your profile to match your interests.
>
>
> You can start by selecting some of your interests from the list below, or try searching for your own.
>
>
> `Fishing` `Cooking` `Soccer` `Video Gaming` `Social Media`
> [`Search for my interest...`](https://ux.stackexchange.com/questions/125809)
>
>
> [Skip this section →](https://ux.stackexchange.com/questions/125809)
>
>
>
You could continue with other questions that might actually be relevant to shaping your UI or application, allowing the user to omit/delete details for any or all questions.
Being purely an opt-in experience prevents any issues with the user feeling like your app may be "talking about them" behind their back.
In this case, specifically at this point in time, I think it's wise to be transparent about what data your application knows (or *thinks* it knows) about your users.
Upvotes: 1 <issue_comment>username_2: Aren't we already doing things with technology that are not ethical? But to answer your question: yes, it would be unethical to use gender to target a user. It would mean that the AI is fed a list of things to look for for a certain gender, which is, again, gender bias. I find targeting users unethical in general, but using gender as the basis would really be something else. I am not sure if I am on the right track, but ads are run keeping in mind the target audience, which involves age, gender, etc. That is done by humans, but still.
Upvotes: 0 <issue_comment>username_3: The general principle is that data belongs to the person who generated it, and permission should be sought before you use someone else's possessions.
So, it is ethical to use it to infer whatever you like about a person, provided that you ask permission first.
Upvotes: 0 |
2019/05/22 | 662 | 2,829 | <issue_start>username_0: Are there any open sourced algorithms that can take a couple of images as an input and generate a new, similar image based on that input? Or are there any resources where I can learn to create such an algorithm?<issue_comment>username_1: I'm not an expert on that so you could probably get a better answer.
I'm not sure I understand what you're looking for. Are the couple of images about the same thing? Like pictures of cats, where you want to generate a new cat based on these pictures? If that's what you want, you could probably take a look at **Generative Adversarial Networks** (GANs): [Introduction](http://blog.aylien.com/introduction-generative-adversarial-networks-code-tensorflow/).
A GAN is made up of a Generator and a Discriminator. The goal of the discriminator is to distinguish the real data from the generated data. And the goal of the generator is to improve its generated data to look similar to real data. Then, if there are different cat images in your dataset, the generator will learn to create a new cat based on that dataset.
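The two competing objectives can be sketched numerically (toy NumPy example; the discriminator outputs are made-up numbers, and `bce` is the usual binary cross-entropy both losses are built from):

```python
import numpy as np

def bce(p, label):
    """Binary cross-entropy of discriminator outputs p in (0, 1) vs a target label."""
    return -np.mean(label * np.log(p) + (1 - label) * np.log(1 - p))

# Made-up discriminator outputs for one batch of real and one of generated images
d_real = np.array([0.9, 0.8, 0.95])  # the discriminator should push these toward 1
d_fake = np.array([0.2, 0.1, 0.3])   # ... and these toward 0
d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
g_loss = bce(d_fake, 1.0)            # the generator wants fakes to be called real
print(d_loss < g_loss)               # True here: the discriminator is "winning"
```

Training alternates gradient steps on `d_loss` (discriminator) and `g_loss` (generator) until the generated data is hard to distinguish from the real data.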
If what you're looking for is to take different images, like a cat and a dog, and generate a "catdog", you can take a look at the **Variational AutoEncoder** (VAE). For example, you can train two different VAEs (encoder/decoder): one for cats and one for dogs. Then you take the encoder of dogs and the decoder of cats. That's what I saw one day; I'm not sure if it really works.
Correct me if I'm wrong
Upvotes: 2 <issue_comment>username_2: This was exactly what I was looking for;-) will be following!
I do have something that might be of interest to you, an algorithm that generates pictures from textual input.
Here is the link to the site where you can try it out:
<https://hypnogram.xyz/>
Having played with it a little it seems that the text is used to search online and the pictures found online are then merged together according to a rule set.
But that is just a wild guess.
It also seems that the images generated by the algorithm are unique and even when you use the same string twice the generated picture does not repeat itself. But again, that is what it seems like to someone who does not actually know, I only recently stumbled on the site and have yet to read more about the process.
Enjoy!
Upvotes: 1 <issue_comment>username_3: I'm not an expert and I'm aware this is an old question so you've probably found something of interest since.
For latecomers, an interesting lead could be [ControlNET](https://github.com/lllyasviel/ControlNet), an extension for Stable Diffusion. This extension includes various models to analyze an input image and control the output. Some of the models include detection of pose, depth, contours... It's extremely useful to create iterations of an existing image, which seems to be your objective.
Upvotes: 1 |
2019/05/22 | 545 | 2,208 | <issue_start>username_0: I'm currently working on a group project where we need to find a pattern in a given dataset. The dataset is a collection of X, Y, Z values of a gyroscope from someone who is walking. If you plot these values you'll get a result like this.
[](https://i.stack.imgur.com/d2k4d.png)
And this is what our dataset looks like.
[](https://i.stack.imgur.com/XIhDF.png)
We are new to AI and ML so we first did some general research like understanding how matrices work and how to do some basic predictions with frameworks like TensorFlow and PyTorch. Now we want to start on this problem. What we need to do is to find a pattern in the dataset and count how many times this pattern appears in this dataset.
We started off with some non-AI functions to count; we managed to do that, but the way we counted will probably only work on this specific dataset. So that's why we decided to do this with AI.
We would love to hear as many different approaches as possible to count the steps since we are still learning.<issue_comment>username_1: For such time-series data that has a significant amount of periodicity, I would recommend converting data to the frequency domain and performing various spectral analysis methods as @firion has already mentioned. For example, you could perform Fourier Analysis and study the individual components and identify patterns there.
Also, it is generally not recommended to apply the usual pattern-extraction approaches to time-series data, as they fail to capture the temporal relationship between subsequent data points.
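As a sketch of the frequency-domain idea (NumPy, on a synthetic signal standing in for one gyroscope axis — the 50 Hz sampling rate and 2 steps/second cadence are made-up numbers, not taken from the question's data):

```python
import numpy as np

fs = 50.0                      # assumed sampling rate (Hz)
duration = 10.0                # seconds of recording
t = np.arange(0, duration, 1 / fs)
step_rate = 2.0                # assumed cadence: 2 steps per second
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * step_rate * t) + 0.3 * rng.normal(size=t.size)

# Dominant frequency of the magnitude spectrum (skip the DC bin)
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
dominant = freqs[1:][np.argmax(spectrum[1:])]

print(dominant, round(dominant * duration))  # 2.0 Hz -> about 20 steps
```

On real gyroscope data you would band-pass the plausible walking-cadence range first (roughly 0.5-3 Hz) so that sensor drift and high-frequency noise don't dominate the spectrum.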
Hope this helps!
Upvotes: 1 <issue_comment>username_2: * since you are on an ML forum: recognition of sequences is what RNNs do.
* you won't believe it, but I currently work on similar stuff. Start by thinking of an algorithm to find repetitions in a string, e.g. "ababcab" returns ('ab' : 0, 2, 5)
* and yes, you could do Fourier analysis, but that's not an ML method at all
Upvotes: 0 <issue_comment>username_3: You can also fit trigonometric functions like sin and cos. eg check this [doc](https://ms.mcmaster.ca/~bolker/eeid/2010/Ecology/Spectral.pdf)
Upvotes: 0 |
2019/05/23 | 1,416 | 5,950 | <issue_start>username_0: Can the decoder in a transformer model be parallelized like the encoder?
As far as I understand, the encoder has all the tokens in the sequence to compute the self-attention scores. But for a decoder, this is not possible (in both training and testing), as self-attention is calculated based on previous timestep outputs. Even if we consider some techniques, like teacher forcing, where we concatenate the expected output with the obtained one, this still has a sequential input from the previous timestep.
In this case, apart from the improvement in capturing long-term dependencies, is using a transformer-decoder better than say an LSTM, when comparing purely on the basis of parallelization?<issue_comment>username_1: >
> **Can the decoder in a transformer model be parallelized like the**
> **encoder?**
>
>
>
**Generally NO:**
Your understanding is completely right. In the decoder, the output of each step is fed to the bottom decoder in the next time step, just like an [LSTM](https://en.wikipedia.org/wiki/Long_short-term_memory).
Also, like in LSTMs, the self-attention layer needs to attend to earlier positions in the output sequence in order to compute the output, which makes straightforward parallelisation impossible.
However, when decoding during training, there is a frequently used procedure which doesn't take the previous output of the model at step $t$ as input at step $t+1$, but rather takes the ground-truth output at step $t$. This procedure is called 'Teacher Forcing' and makes the decoder parallelisable during training. You can read more about it [here](https://machinelearningmastery.com/teacher-forcing-for-recurrent-neural-networks/).
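The mechanism that makes this possible is a causal (look-ahead) mask on the decoder's self-attention: all positions are computed in one pass, yet each one can only see earlier positions. A toy NumPy sketch (single head, no learned projections):

```python
import numpy as np

def causal_self_attention(x):
    """Single-head self-attention with a causal mask (toy sketch without
    learned projections): every position attends only to itself and earlier
    positions, yet all positions are computed in one parallel pass."""
    t, d = x.shape
    scores = x @ x.T / np.sqrt(d)                     # (t, t) attention logits
    mask = np.triu(np.ones((t, t), dtype=bool), k=1)  # True above the diagonal
    scores[mask] = -np.inf                            # hide future positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

x = np.random.default_rng(0).normal(size=(5, 4))
out = causal_self_attention(x)
print(np.allclose(out[0], x[0]))  # True: position 0 can only attend to itself
```

During training, the whole ground-truth target sequence is fed in at once and the mask enforces causality; during inference, the future tokens simply don't exist yet, which is why decoding becomes sequential.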
*And for a detailed explanation of how the Transformer works, I suggest reading this article: [The Illustrated Transformer](http://jalammar.github.io/illustrated-transformer/).*
>
> **Is using a transformer-decoder better than say an lstm when comparing**
> **purely on the basis of parallelization?**
>
>
>
**YES:**
Parallelization is the main drawback of [RNNs](https://en.wikipedia.org/wiki/Recurrent_neural_network) in general. In a simple way, RNNs have the ability to memorize but not parallelize while [CNNs](https://en.wikipedia.org/wiki/Convolutional_neural_network) have the opposite. Transformers are so powerful because they **combine both** parallelization (at least partially) and memorizing.
In Natural Language Processing, for example, where RNNs used to be so effective, if you take a look at the [GLUE leaderboard](https://gluebenchmark.com/leaderboard) you will find that most of the world-leading algorithms today are Transformer-based (e.g. [BERT by Google](https://arxiv.org/abs/1810.04805), [GPT by OpenAI](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)).
*For better understanding of why Transformers are better than CNNs I suggest reading this Medium article: [How Transformers Work.](https://towardsdatascience.com/transformers-141e32e69591)*
Upvotes: 5 [selected_answer]<issue_comment>username_2: **Can the decoder in a transformer model be parallelized like the encoder?**
The correct answer is: computation in a Transformer decoder can be parallelized during training, but not during actual translation (or, in a wider sense, generating output sequences for new input sequences during a testing phase).
**What exactly is parallelized?**
Also, it's worth mentioning that "parallelization" in this case means to compute encoder or decoder states in parallel **for all positions** of the input sequence. Parallelization over several layers is not possible: the first layer of a multi-layer encoder or decoder still needs to finish computing all positions in parallel before the second layer can start computing.
**Why can the decoder be parallelized position-wise during training?**
For each position in the input sequence, a Transformer decoder produces a decoder state as an output. (The decoder state is then used to eventually predict a token in the target sequence.)
In order to compute one decoder state for a particular position in the sequence of states, the network consumes as inputs: 1) the entire input sequence and 2) **the target words that were generated previously**.
During training, the target words generated previously are known, since they are taken from the target side of our parallel training data. This is the reason why computation can be factored over positions.
During inference (also called "testing", or "translation"), the target words previously generated are predicted by the model, and computing decoder states must be performed sequentially for this reason.
**Comparison to RNN models**
While Transformers can parallelize over input positions during training, an encoder-decoder model based on RNNs cannot parallelize positions. This means that Transformers are generally faster to train, while RNNs are faster for inference.
This observation leads to the nowadays common practice of training Transformer models and then using [sequence-level distillation](https://arxiv.org/abs/1606.07947) to learn an RNN model that mimics the trained Transformer, for faster inference.
Upvotes: 3 <issue_comment>username_3: Can't see that this has been mentioned yet - there are ways to generate text non-sequentially using a non-autoregressive transformer, where you produce the entire response to the context at once. This typically produces worse accuracy scores because there are interdependencies within the text being produced - a model translating "thank you" could say "vielen danke" or "danke schön" but whereas an autoregressive model can know which word to say next based on previous decoding, a non-autoregressive model can't do this, so also could produce "danke danke" or "vielen schön". There is some research that suggests you can close in on the accuracy gap though: <https://arxiv.org/abs/2012.15833>.
Upvotes: 2 |
2019/05/24 | 1,328 | 4,722 | <issue_start>username_0: **What is the advantage of using a VAE over a deterministic auto-encoder?**
For example, assuming we have just 2 labels, a deterministic auto-encoder will always map a given image to the same latent vector. However, one expects that after the training, the 2 classes will form separate clusters in the encoder space.
In the case of the VAE, an image is mapped to an encoding vector probabilistically. However, one still ends up with 2 separate clusters. Now, if one passes a new image (at the test time), in both cases the network should be able to place that new image in one of the 2 clusters.
How are these 2 clusters created using the VAE better than the ones from the deterministic case?<issue_comment>username_1: VAEs are not used for classification. They are used for inference or as generative models, while AEs can be used as data reconstructors (as you described above), denoisers, or classifiers. So the difference is generation of new data vs. reconstruction of data.
VAEs map the inputs to a hidden space, where each variable is enforced to have a probability distribution given by $N(0,1)$, i.e. the standard normal distribution. Once we have trained a VAE, we then use only the decoder part to generate new samples.
**Example:**
[](https://i.stack.imgur.com/GkyX5.png)
`Source: Stanford University CS231n slides`
Assume there is an x\_axis and a y\_axis on the bottom and the left. Let the x\_axis represent $x\_1$ and y\_axis $x\_2$ which are our hidden variables. By varying $x\_1$ and $x\_2$ you can see what happens. Increasing $x\_1$ changes face angle while increasing $x\_2$ changes eye droop. Thus we can generate new data by varying the features in a latent representation.
For better understanding I highly recommend you check out these links:
[Variational autoencoders.](https://www.jeremyjordan.me/variational-autoencoders/)
[Variational Autoencoders - <NAME>](http://bjlkeng.github.io/posts/variational-autoencoders/)
[VAE - <NAME>](https://www.youtube.com/watch?v=uaaqyVS9-rM&t=23s)
[Generative Models - CS231n](https://www.youtube.com/watch?v=5WoItGTWV54&t=2817s)
Upvotes: 0 <issue_comment>username_2: It seems that you think that we want to perform classification with VAEs or that images that we pass to the encoder fall into more than one category. The other answer already points out that VAEs are not typically used for classification but for generation tasks, so let me try to answer the main question.
The **variational auto-encoder (VAE)** and the (deterministic) **auto-encoder** both have an encoder and a decoder and they both convert the inputs to a latent representation, but their inner workings are different: a VAE is a [generative](https://en.wikipedia.org/wiki/Generative_model) [statistical model](https://ai.stackexchange.com/a/12354/2444), while the AE can be viewed just as a data compressor (and decompressor).
In an **AE**, given an input $\mathbf{x}$ (e.g. an image), the encoder produces **one** latent vector $\mathbf{z\_x}$, which can be decoded into $\mathbf{\hat{x}}$ (another image which should be similar or related to $\mathbf{x}$). Compactly, this can be presented as $\mathbf{\hat{x}}=f(\mathbf{z\_x}=g(\mathbf{x}))$, where $g$ is the encoder and $f$ is the decoder. This operation is deterministic: so, given the same $\mathbf{x}$, the same $\mathbf{z\_x}$ and $\mathbf{\hat{x}}$ are produced.
In a **VAE**, given an input $\mathbf{x} \in X$ (e.g. an image), more than one latent vector, $\mathbf{z\_{x}}^i \in Z$, can be produced, because the encoder attempts to learn the probability distribution $q\_\phi(z \mid x)$, which can be e.g. $\mathcal{N}(\mu, \sigma)$, which we can sample from, where $\mu, \sigma = g\_\theta(\mathbf{x})$. In practice, $g\_\theta$ is a neural network with weights $\phi$. We can sample latent vectors $\mathbf{z\_{x}}^i$ from $\mathcal{N}(\mu, \sigma)$, which should be "good" representations of a given $\mathbf{x}$.
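The stochastic encoding can be sketched with the reparameterization trick, $z = \mu + \sigma \epsilon$ with $\epsilon \sim \mathcal{N}(0, 1)$; the toy `encode` below is only a stand-in for the trained encoder network:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x):
    """Stand-in for the trained encoder network: maps x to the parameters
    (mu, log-variance) of q(z|x). A real VAE learns this mapping."""
    mu = np.repeat(x.mean(), 2)   # toy 2-dimensional latent space
    log_var = np.zeros(2)
    return mu, log_var

def sample_latent(x):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1).
    Repeated calls give different z for the same x — unlike a plain AE,
    whose encoder would always return the same latent vector."""
    mu, log_var = encode(x)
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

x = np.array([1.0, 2.0, 3.0])
z1, z2 = sample_latent(x), sample_latent(x)
print(np.allclose(z1, z2))  # False: the encoding is stochastic
```

Writing the sampling as a deterministic function of $(\mu, \sigma)$ plus external noise $\epsilon$ is what lets gradients flow through the sampling step during training.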
**Why is it useful to learn $q\_\phi(z \mid x)$?** There are many use cases. For example, given multiple corrupted/noisy versions of an image, you can reconstruct the original uncorrupted image. However, note that you can use the **AE** also for denoising. [Here](https://www.tensorflow.org/tutorials/generative/autoencoder) you have a TensorFlow example that illustrates this. The difference is that, again, given the same noisy image, the model will always produce the same reconstructed image. You can also use the VAE for drug design [[1]](http://proceedings.mlr.press/v80/jin18a/jin18a.pdf). See also [this post](https://ai.stackexchange.com/q/11405/2444).
Upvotes: 1 |
2019/05/24 | 699 | 2,933 | <issue_start>username_0: I just learned about GAN and I'm a little bit confused about the naming of Latent Vector.
* First, in my understanding, the definition of a latent variable is a random variable that can't be measured directly (we need some calculation from other variables to get its value). For example, knowledge is a latent variable. Is that correct?
* And then, in GAN, a latent vector $z$ is a random variable which is an input of the generator network. I read in some tutorials, it's generated using only a simple random function:
```
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
```
then how are the two things related? Why don't we use the term "a vector with random values between -1 and 1" when referring to $z$ (the generator's input) in a GAN?<issue_comment>username_1: It is called a latent variable because you cannot access (that is, manipulate) it during training time. In a normal feed-forward NN you cannot manipulate the values output by the hidden layers; the same is the case here.
The term originally came from [RBM's](https://en.wikipedia.org/wiki/Restricted_Boltzmann_machine) (they used the term hidden variables). The interpretation of hidden variables in the context of an RBM was that these hidden nodes helped to model the interaction between 2 input features (if both activate together, then the hidden unit will also activate). This principle can be traced to Hebb's rule, which states "Neurons that fire together, wire together." Thus RBM's were used to find representations of data in a space (generally lower-dimensional than the original). This is the principle used in [Auto Encoders](https://en.wikipedia.org/wiki/Autoencoder) also. Thus, as you can see, we are explicitly not modelling the interaction between 2 features; how the process occurs is "hidden" from us.
So, the term latent basically can be attributed to the following ideas:
* We map higher dimensional data to a lower dimensional data with no prior convictions of how the mapping will be done. The NN trains itself for the best configuration.
* We cannot manipulate this lower dimensional data. Thus it is "hidden" from us.
* As we do not know what each dimension means, it is "hidden" from us.
Upvotes: 4 [selected_answer]<issue_comment>username_2: *Latent* is a synonym for *hidden*.
Why is it called a hidden (or latent) variable? For example, suppose that you observe the behaviour of a person or animal. You can only observe the behaviour. You cannot observe the internal state (e.g. the mood) of this person or animal. The mood is a hidden variable because it cannot be observed directly (but only indirectly through its consequences).
A good example of statistical model that is highly based on the notion of latent variables is the [hidden Markov model](https://en.wikipedia.org/wiki/Hidden_Markov_model) (HMM). If you understand the HMM, you will understand the concept of a hidden variable.
Upvotes: 3 |
2019/05/24 | 1,104 | 4,245 | <issue_start>username_0: No matter what I google or what paper I read, I can't find an answer to my question. In a deep convolutional neural network, let's say AlexNet (Krizhevsky, 2012), filters' weights are learned by means of back-prop.
But how are kernels themselves selected? I know kernels had been used in image processing long before CNNs, hence I'd imagine there would be a set of filters based on kernels (see, for example, [this article](https://en.wikipedia.org/wiki/Kernel_(image_processing)#Details)) that are proven to be effective for edge detection and the likes.
Reading around the web, I also found something about "randomly generated kernels". Does anyone know if and when this practice is adopted?<issue_comment>username_1: If you're looking for filters with known effect, the Gaussian filters do smoothing, the Gabor filters are useful for edge detection, etc.
Usually, in deep learning models where things are trained from scratch, the filters are randomly initialized and then learned by the model's training scheme. For the most part, without using any of the well-known kernels mentioned above.
Clarification on "for the most part": filters aren't initialized with the goal of knowing exactly which feature each will activate on, but with whatever will assist the training procedure. Recently, people have even trained ResNets with or without *batch normalization* by finding good initial points -- it's an ongoing field of research.
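To illustrate what "randomly initialized" means in practice, here is a sketch of a Glorot/Xavier-style uniform initialisation of a filter bank. This is one common scheme; the exact scheme, scaling, and names vary by library, so treat the details below as illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def glorot_uniform_filters(n_filters, in_channels, k):
    """Randomly initialise a bank of k x k convolution kernels.
    Glorot/Xavier-style scaling: limit = sqrt(6 / (fan_in + fan_out))."""
    fan_in = in_channels * k * k
    fan_out = n_filters * k * k
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(n_filters, in_channels, k, k))

# 32 random 3x3 filters over a 3-channel (RGB) input.
filters = glorot_uniform_filters(n_filters=32, in_channels=3, k=3)
print(filters.shape)
```

Training then moves these initially random kernels towards useful feature detectors via backprop.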
Upvotes: 1 <issue_comment>username_2: ### What are filters in image processing?
In the context of image processing (and, in general, signal processing), the *kernels* (also known as *filters*) are used to perform some specific operation on the image. For example, you can use a [Gaussian filter](https://homepages.inf.ed.ac.uk/rbf/HIPR2/gsmooth.htm) to smooth the image (including its edges).
### What are filters in CNNs?
In the context of convolutional neural networks (CNNs), the filters (or kernels) are the **learnable** parameters of the model.
Before training, the kernels are usually randomly initialised (so they are not usually hardcoded). During training, depending on the loss or error of the network (according to the loss function), the kernels (or filters) are updated, so as to minimise the loss (or error). After training, they are typically fixed. Incredibly, the filters learned by CNNs can be similar to the [Gabor filter](https://en.wikipedia.org/wiki/Gabor_filter) (which is thought to be related to our visual system [[1](http://www.cns.nyu.edu/%7Etony/vns/readings/olshausen-field-1996.pdf)]). See [figure 9.19 of chapter 9 (p. 365) of the Deep Learning book by Goodfellow et al](https://www.deeplearningbook.org/contents/convnets.html).
The number of kernels that are applied to a given input (and more than one kernel is often applied) in a CNN is a [hyper-parameter](https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)).
### What are the differences and similarities?
In both contexts, the words "kernel" and "filter" are roughly synonymous, so they are often used interchangeably. Furthermore, in both cases, the kernels are related to the [convolution (or cross-correlation) operation](https://ai.stackexchange.com/a/22000/2444). More specifically, the application of a filter, which is a function $h$, to an input, which is another function $f$, is equivalent to the convolution of $f$ and $h$. In mathematics, this is often denoted by $f \circledast h = g$, where $\circledast$ is the convolution operator and $g$ is the result of the convolution operation and is often called the *convolution* (of $f$ and $h$). In the case of image processing, $g$ is the filtered image. In the case of CNNs, $g$ is often called an [activation map](https://stats.stackexchange.com/a/292064/82135).
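As a sketch of the operation itself, here is a naive "valid"-mode 2D cross-correlation in NumPy (which is what most CNN libraries actually compute under the name "convolution"). The image and kernel below are made-up toy values:

```python
import numpy as np

def cross_correlate2d(f, h):
    """Slide kernel h over image f ('valid' mode, stride 1) and return
    the activation map g, where g[i, j] = sum of the f-patch times h."""
    fh, fw = f.shape
    kh, kw = h.shape
    g = np.zeros((fh - kh + 1, fw - kw + 1))
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            g[i, j] = np.sum(f[i:i + kh, j:j + kw] * h)
    return g

# A simple image with a vertical edge, and a horizontal-gradient kernel.
f = np.array([[0, 0, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)
h = np.array([[-1.0, 1.0]])   # responds to left-to-right increases
g = cross_correlate2d(f, h)
print(g)                      # the activation map lights up at the edge
```

The middle column of `g` is large exactly where the edge sits, which is the sense in which the activation map "detects" the feature encoded by the kernel.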
### Further reading
Take a look at [this](https://ai.stackexchange.com/a/21967/2444) and [this](https://ai.stackexchange.com/a/21401/2444) answers for more details about CNNs and the convolution operation, respectively.
You may also want to have a look at [Visualizing what ConvNets learn](http://cs231n.github.io/understanding-cnn/) for some info about the visualization of the kernels learned during the training of a CNN.
Upvotes: 4 [selected_answer] |
2019/05/24 | 537 | 2,066 | <issue_start>username_0: If I use a desktop PC with a GPU, how long it might take to train face recognition deep neural network on let's say dataset of 2.6 million images and 2600 identities? I guess it should depend on various properties (e.g., type of the DNN). But I am just looking for a rough estimation. Is it a matter of hours/days or years?
Thanks!<issue_comment>username_1: Training time depends on a lot of parameters. Some of them are:
1. Size of each image (resolution)
2. Color/Monochrome image (color image has 3 times data if you consider RGB image)
3. Like you mentioned, the type of DNN: the no. of layers and the no. of neurons in each layer.
4. Total no. of images in the dataset. (2.6 million here)
5. The GPU you are using (you didn't mention which one; there are GPUs with a wide range of capabilities, and to predict the time you need to know the exact specs of the GPU).
6. Training time also depends on the RAM on your machine and on how fast the host PC can transfer data to the GPU for processing.
7. Since you mentioned face recognition, I am assuming you are using a CNN; if you instead use a fully connected network, the training time will change and will obviously increase manifold.
Your classification task and your database are mostly similar to ILSVRC, which uses the ImageNet database.
**Making some reasonable assumptions for the parameters you didn't mention, I feel your task is similar to ILSVRC and I am predicting the training time will be a few days.**
Below is a link which mentions the training time for ILSVRC.
<https://mxnet-tqchen.readthedocs.io/en/latest/tutorials/imagenet_full.html>
Below are the details of the ImageNet database for your comparison.
<https://en.wikipedia.org/wiki/ImageNet>
Upvotes: 3 [selected_answer]<issue_comment>username_2: The YELP dataset (200k images) used to take 5 hours to train to identify five (5) classes on a GPU (Nvidia 1080 Ti with 11 GB RAM). So I guess in your case it will take days. Again, it will depend on your GPU configuration and the type of architecture you will be using.
Upvotes: 1 |
2019/05/25 | 1,324 | 5,765 | <issue_start>username_0: I'm trying to learn neural networks by watching [this series of videos](https://www.youtube.com/watch?v=tIeHLnjs5U8&t=17s) and implementing a simple neural network in Python.
Here's one of the things I'm wondering about: I'm training the neural network on sample data, and I've got 1,000 samples. The training consists of gradually changing the weights and biases to make the cost function result in a smaller cost.
**My question:** Should I be changing the weights/biases on every single sample before moving on to the next sample, or should I first calculate the desired changes for the entire lot of 1,000 samples, and only then start applying them to the network?<issue_comment>username_1: >
> Should I be changing the weights/biases on every single sample before moving on to the next sample,
>
>
>
You can do this, it is called *stochastic* gradient descent (SGD) and typically you will shuffle the dataset before working through it each time.
>
> or should I first calculate the desired changes for the entire lot of 1,000 samples, and only then start applying them to the network?
>
>
>
You can do this, it is called *batch* gradient descent, or in some cases (especially in older resources) just assumed as the normal approach and called gradient descent.
Each approach offers advantages and disadvantages. In general:
* SGD makes each update sooner in terms of amount of data that has been processed. So you may need less epochs before converging on reasonable values.
* SGD does more processing per sample (because it updates more frequently), so is also slower in the sense that it will take longer to process each sample.
* SGD can take less advantage of parallelisation, as the update steps mean you have to run each data item serially (as the weights have changed and error/gradient results are calculated for a specific set of weights).
* SGD individual steps typically make only very rough guesses at the correct gradients to change weights in. This is both a disadvantage (performance of the NN against the objective on the training set can decrease as well as increase) and an advantage (there is less likelihood of getting stuck in a local stationary point, due to the "jitter" these random differences cause).
What happens in practice is that most software allows you to compromise between batch processing and single-sample processing, to try and get the best performance and update characteristics. This is called mini-batch processing, which involves:
* Shuffling the dataset at the start of each epoch.
* Working through the shuffled data, N items per time where N might vary from maybe 10 to 1000, depending on the problem and any constraints on the hardware. A common decision is to process the largest batch size that the GPU acceleration allows to run in parallel.
* Calculate the update required for each small batch, then apply it.
This is nowadays the most common update method that most neural network libraries assume, and they almost universally accept a batch size parameter in the training API. Most of the libraries will still call the simple optimisers that do this "SGD"; technically that is true, as the gradients calculated are still somewhat randomised due to not using the full batch, but you may find this called mini-batch gradient descent in some older papers.
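To make the shuffle-then-update loop concrete, here is a toy mini-batch gradient descent sketch on a linear model. The dataset, batch size, and learning rate are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: y = 3x + 1 plus a little noise, 1000 samples.
X = rng.uniform(-1, 1, size=(1000, 1))
y = 3 * X[:, 0] + 1 + 0.01 * rng.standard_normal(1000)

w, b = 0.0, 0.0
batch_size, lr = 32, 0.1

def mse(w, b):
    return np.mean((w * X[:, 0] + b - y) ** 2)

loss_before = mse(w, b)
for epoch in range(20):
    idx = rng.permutation(len(X))           # shuffle at the start of each epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch, 0], y[batch]
        err = w * xb + b - yb               # residuals on this mini-batch only
        w -= lr * 2 * np.mean(err * xb)     # gradient of the mini-batch MSE
        b -= lr * 2 * np.mean(err)
loss_after = mse(w, b)
print(loss_before, loss_after, w, b)
```

With `batch_size=1` this loop becomes SGD, and with `batch_size=len(X)` it becomes batch gradient descent, so the same code covers the whole spectrum discussed above.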
Upvotes: 5 [selected_answer]<issue_comment>username_2: Ideally, you need to update the weights by going over all the samples in the dataset. This is called **Batch Gradient Descent**. But, as the no. of training examples increases, the computation becomes huge and training will be very slow. With the advent of deep learning, training set sizes are in the millions, and computation using all training examples is very impractical and very slow.
This is where, two optimization techniques became prominent.
1. **Mini-Batch Gradient Descent**
2. **Stochastic Gradient Descent (SGD)**
In mini-batch gradient descent, you use a batch size that is considerably less than the total no. of training examples and update your weights after passing through these examples.
In stochastic gradient descent, you update the weights after passing through each training example.
Coming to the advantages and disadvantages of the three methods we discussed:
* Batch gradient descent gradually converges to the global minimum, but it is slow and requires huge computing power.
* Stochastic gradient descent converges fast, but not to the global minimum: it converges to somewhere near the global minimum and hovers around that point, without ever converging to the global minimum itself. But the converged point in stochastic gradient descent is good enough for all practical purposes.
* Mini-batch gradient descent is a trade-off between the above two methods. If you have a vectorized implementation of the weight updates and you are training on a multi-core setup or submitting the training to multiple machines, this is the best method both in terms of training time and convergence to the global minimum.
You can plot the cost function w.r.t. the no. of iterations to understand the difference in convergence between all 3 types of gradient descent.
* The batch gradient descent plot falls smoothly and slowly, then stabilizes and reaches the global minimum.
* The stochastic gradient descent plot will have oscillations: it falls fast but hovers around the global minimum.
These are some blogs with detailed explanations of the advantages and disadvantages of each method, and also graphs of how the cost function changes with iterations for all three methods.
<https://adventuresinmachinelearning.com/stochastic-gradient-descent/>
<https://machinelearningmastery.com/gentle-introduction-mini-batch-gradient-descent-configure-batch-size/>
Upvotes: 1 |
2019/05/27 | 1,815 | 7,071 | <issue_start>username_0: What is the definition of machine learning? What are the advantages of machine learning?<issue_comment>username_1: ### What is machine learning?
Machine learning (ML) has been defined by multiple people in similar (or related) ways.
<NAME>, in his book [Machine Learning](http://www.cs.ubbcluj.ro/%7Egabis/ml/ML-books/McGrawHill%20-%20Machine%20Learning%20-Tom%20Mitchell.pdf) (1997), defines an ML **algorithm/program** (or machine learner) as follows.
>
> A computer program is said to *learn* from **experience** $E$ with respect to some class of tasks $T$ and performance measure $P$, if its performance at tasks in $T$, as measured by $P$, improves with experience $E$.
>
>
>
This is a quite reasonable definition, given that it describes algorithms such as gradient descent, Q-learning, etc.
In his book [Machine Learning: A Probabilistic Perspective](http://noiselab.ucsd.edu/ECE228/Murphy_Machine_Learning.pdf) (2012), <NAME> defines the machine learning **field/area** as follows.
>
> a set of methods that can automatically detect patterns in **data**, and then use the uncovered patterns to predict future data, or to perform other kinds of decision making under uncertainty (such as planning how to collect more data!)
>
>
>
Without referring to algorithms or the field, [<NAME> and <NAME>](https://www.cs.huji.ac.il/%7Eshais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf) define machine learning as follows
>
> The term machine learning refers to the automated detection of meaningful patterns in **data**.
>
>
>
In all these definitions, the core concept is **data** or **experience**. So, any [algorithm](https://edutechlearners.com/download/Introduction_to_algorithms-3rd%20Edition.pdf) that *automatically* detects patterns in **data** (of any form, such as textual, numerical, or categorical) to solve some task/problem (which often involves more data) is a (machine) learning algorithm.
The tricky part of this definition, which often causes a lot of misconceptions about what ML is or can do, is probably *automatically*: this does not mean that the learning algorithm is completely autonomous or independent from the human, given that the human, in most cases, still needs to define a *performance measure* (and other parameters, including the learning algorithm itself) that guides the learning algorithm towards a set of solutions to the problem being solved.
As a [field](https://ai.stackexchange.com/a/26664/2444), ML could be defined as the study and application of ML algorithms (as defined by Mitchell's definition).
### Sub-categories
Murphy and many others often divide machine learning into three main sub-categories
* **supervised learning** (or predictive), where the goal is to learn a mapping from inputs $\textbf{x}$ to outputs $y$, given a labeled set of input-output pairs
* [**unsupervised learning**](http://mlg.eng.cam.ac.uk/zoubin/papers/ul.pdf) (or descriptive), where the goal is to find "interesting patterns" in the data
* [**reinforcement learning**](http://incompleteideas.net/book/RLbook2020.pdf), which is useful for learning how to act or behave when given an occasional reward or punishment signals
However, there are many other possible sub-categories (or taxonomies) of machine learning techniques, such as
* [**deep learning**](https://www.sciencedirect.com/science/article/pii/S0893608014002135) (i.e. the use of neural networks to approximate functions and related learning algorithms, such as gradient descent) or
* [**probabilistic machine learning**](https://www.nature.com/articles/nature14541) (machine learning techniques that provide uncertainty estimation)
* [**weakly supervised learning**](https://academic.oup.com/nsr/article/5/1/44/4093912) (i.e. SL where labeling information may not be completely accurate)
* [**online learning**](https://www.cs.huji.ac.il/%7Eshais/papers/OLsurvey.pdf) (i.e. learning from a single data point at a time rather than from a dataset of multiple data points)
These sub-categories can also be combined. For example, deep learning can be performed online or offline.
### Related fields
There is also a related field known as **computational** (or **statistical**) **learning theory**, which is concerned with the *theory of learning* (from a computational and statistical point of view). So, in this field, we are interested in questions like "How many samples do we need to approximately compute this function with a certain error?".
Of course, given that machine learning is a set of algorithms and techniques that are data- or experience-driven, one may wonder what the difference between machine learning and **statistics** is. In fact, in many cases, they are very similar and ML adopts many statistical concepts, and you may even read on the web that machine learning is just *glorified statistics*. ML and statistics often tackle the same problem, but from a different perspective or with slightly different approaches (and the terminology may slightly change from one field to the other). If you are interested in a more detailed explanation of their difference, you could read [Statistics versus machine learning](https://www.nature.com/articles/nmeth.4642) (2018) by <NAME> et al.
### What is machine learning good for?
ML can potentially be used to (at least partially) automate tasks that involve data and pattern recognition, which were previously performed only by humans (e.g. translation from one human language, such as English, to another, such as Italian). However, machine learning cannot automate all tasks: for example, it cannot infer causal relations from the data (which often must be done by humans), unless you include **causal inference** as part of machine learning. If you are interested in causal inference, you could take a look at the paper [Causal Inference](http://proceedings.mlr.press/v6/pearl10a/pearl10a.pdf) by <NAME> ([Turing Award for his work in causal inference!](https://amturing.acm.org/award_winners/pearl_2658896.cfm)).
Upvotes: 4 [selected_answer]<issue_comment>username_2: Here's a definition by <NAME> (1997):
>
> Computer program is said to learn from experience E with respect to
> some task T and some performance measure P, if its performance on T,
> as measured by P, improves with experience E.
>
>
>
So, the programmer gives some instructions/rules to the computer, so that it can learn how to solve the problem from the given examples by itself.
In some tasks, the computer can perform better than humans. For example, the Dota 2 bot (made by OpenAI) defeated the world champion.
With machine learning, many tasks can be automated. Machine learning systems also have the ability to improve their solutions by learning from the given data over time, and they can process and analyze large amounts of data well.
Machine learning is already applied in many fields, such as machine translation in Google Translate and face recognition, which is widely used in today's society for security.
Upvotes: 2 |
2019/05/28 | 740 | 2,926 | <issue_start>username_0: When training a neural network, we often run into the issue of overfitting.
However, is it possible to put overfitting to use? Basically, my idea is, instead of storing a large dataset in a database, you can just train a neural network on the *entire* dataset until it overfits as much as possible, then retrieve data "stored" in the neural network like it's a hashing function.<issue_comment>username_1: The [*auto-encoder*](https://ai.stackexchange.com/a/12504/2444) (AE) can be used to learn a compressed representation (a vectorised hash value) of each observation in the training dataset, $z$, which can then be used to later retrieve the original (or similar) observation.
The [*variational auto-encoder*](https://arxiv.org/pdf/1312.6114.pdf) (VAE), a [statistical](https://ai.stackexchange.com/a/12354/2444) variation of AE, can also be used to generate objects *similar* to the observations (or inputs) in the training set.
There are other data compressor models, for example, [Helmholtz machine](http://artem.sobolev.name/posts/2016-07-11-neural-variational-inference-variational-autoencoders-and-Helmholtz-machines.html), which precedes both the AE and VAE.
Upvotes: 3 <issue_comment>username_2: Train a network that has large input and small output. Turn it upside down (yes, you may do that). By giving the small outputs, corresponging to input, the ideally- trained network will generate those large data.
But you see in all compression there will be data lost, so generated data will be *slightly* :DD different then original dataset. So its suitable for statistical data, like images, whatever, but not for structured like text or the most unsuitable example - program source code.
Upvotes: 1 <issue_comment>username_3: An Auto-Encoder is probably what you are looking for. AE is a very powerful Neural Network when you want to compress data and get a lower dimensional representation of the data with maximum information retained.
If you are interested in knowing how it works: think of it as training a NN to predict the data itself, which means your input and output layers would be exactly the same. So how does this help with compression? The hidden layer -- here is where the magic happens. The compression comes from the fact that we use a smaller number of neurons in the hidden layer than in the input layer. Assuming that the NN we build nicely predicts the input at the end of the training, the output of the hidden layer can be thought of as newly engineered features which are lower in dimension, yet powerful enough to represent the information in the original data.
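As an illustration of the bottleneck idea, here is a toy *linear* autoencoder in plain NumPy (a real AE would use nonlinearities and a deep-learning library; this only shows the compress-then-reconstruct principle, with all sizes and rates chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 samples of 8-dimensional data that actually lives on a 2-D subspace.
Z_true = rng.standard_normal((200, 2))
M = rng.standard_normal((2, 8))
X = Z_true @ M

d_in, d_hidden = 8, 2                         # bottleneck: 8 -> 2 -> 8
W_enc = 0.1 * rng.standard_normal((d_in, d_hidden))
W_dec = 0.1 * rng.standard_normal((d_hidden, d_in))

lr = 0.01
for step in range(2000):
    Z = X @ W_enc                             # encode: compress 8 -> 2
    X_hat = Z @ W_dec                         # decode: reconstruct 2 -> 8
    err = X_hat - X
    # Gradients of the mean squared reconstruction error.
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

recon_error = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(recon_error)   # small: the 2-D codes retain most of the information
```

Since the data here is exactly rank 2, a 2-neuron bottleneck can in principle reconstruct it perfectly; with real data the hidden code keeps only the dominant structure, which is exactly the compression described above.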
Upvotes: 1 <issue_comment>username_4: See [What is the difference between encoders and auto-encoders?](https://ai.stackexchange.com/questions/4245/what-is-the-difference-between-encoders-and-auto-encoders/11808) for a working example of a neural network being used to compress (and decompress) images.
Upvotes: 0 |
2019/05/28 | 1,038 | 4,100 | <issue_start>username_0: In the book "Reinforcement Learning: An Introduction" (2018) Sutton and Barto define the prediction objective ($\overline{VE}$) as follows (page 199):
$$\overline{VE} \doteq \sum\_{s \in S} \mu(s)[v\_{\pi}(s)-\hat{v}(s,w)]^2$$
Where $v\_{\pi}(s)$ is the true value of $s$ and $\hat{v}(s,w)$ is the approximation of it. Furthermore it is stated that this is "often used in plots".
*How do I get the true value $v\_{\pi}(s)$? And if there is a way to obtain the value, why would I need to approximate it?*<issue_comment>username_1: I may be wrong about this but my best interpretation without having access to the book is that
*How do I get the true value vπ(s)?*
The True value I think is whatever the most correct answer is for the prediction. This should be training data. Some companies like facebook spend a lot of money to hire people to create hand-detailed data to fill in this value.
*And if there is a way to obtain the value, why would I need to approximate it?*
You are approximating it to test the accuracy of your model - your prediction. It seems to me this equation is only necessary when training the model.
The result of this is your total error between all your predictions. The lower the value, the better your model.
<https://en.wikipedia.org/wiki/Mean_squared_error>
Upvotes: 0 <issue_comment>username_2: The true value $v\_{\pi}(s)$ is a conceptual target for the $\overline{VE}$ in the book. You often do not know it in real problems. However, it is still used in two main ways in the book:
* Theoretically, for analysis of different approximation schemes, which can be shown to converge to minimise the $\overline{VE}$ objective, or a related one.
* In toy problems when exploring the nature of approximation in Reinforcement Learning (RL), it is possible to use tabular methods guaranteed to get close to zero error, and then compare them to approximate methods. There are several plots of this type in the book.
The book shows the derivation of gradient descent methods that start with minimising $\overline{VE}$ as an objective, and that use *samples* of $v\_{\pi}(s)$ (such as the Monte Carlo return $G\_t$ or the TD target) in place of the unknown $v\_{\pi}(s)$ in the loss function. These also rely on the fact that the sample distribution will be weighted by $\mu(s)$ if the samples are taken naturally from the environment whilst the agent is following the policy $\pi$, so that $\mu(s)$ also does not need to be explicitly known.
Outside of toy problems deliberately constructed to demonstrate that this theory is correct, you will not know $v\_{\pi}(s)$ or be able to calculate $\overline{VE}$. However, you will know from the theory that if you follow the update rules derived in the book for approximate gradient descent methods, that the process should find a local minimum for $\overline{VE}$, for whatever state approximation scheme you have chosen to use.
Usually you cannot even *approximate* $\overline{VE}$ from raw data, as the variance in returns will add noise to the signal, and there is no way to separate variance in returns from error in approximation in the general case. However there are a couple of scenarios that do lend themselves to measuring this objective, provided you already have your estimate $\hat{v}(s,w)$ and the policy remains fixed throughout:
* Simple, fast (perhaps simulated), environments which can be solved to arbitrary accuracy using tabular methods. In this case, you first calculate $v\_{\pi}(s)$ using a non-approximate method, then sample many approximations by running the environment using policy $\pi$ and treating that as your data set.
* Fully deterministic environments where $\pi$ is also deterministic. These have a variance of $0$ for Monte Carlo returns, so each observed return from any given state is already the true value of $v\_{\pi}(s)$. Again you can just run the environment many times to get your data set to calculate $v\_{\pi}(s)$ and $\hat{v}(s,w)$ for the observed states in the correct frequencies, and thus have data to calculate $\overline{VE}$.
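In such toy or deterministic settings, once $v\_{\pi}(s)$, $\hat{v}(s,w)$ and $\mu(s)$ are known for every state, computing $\overline{VE}$ is just the $\mu$-weighted squared error from the definition (all numbers below are made up for illustration):

```python
import numpy as np

# Hypothetical 4-state example: true values, approximate values,
# and the on-policy state distribution mu (which sums to 1).
v_pi  = np.array([1.0, 2.0, 0.5, 3.0])   # true v_pi(s), e.g. from a tabular solve
v_hat = np.array([1.1, 1.8, 0.5, 2.5])   # hat{v}(s, w) from the approximator
mu    = np.array([0.4, 0.3, 0.2, 0.1])

# VE = sum over states of mu(s) * (v_pi(s) - v_hat(s, w))^2
ve = np.sum(mu * (v_pi - v_hat) ** 2)
print(ve)
```

States that the policy visits often (large $\mu(s)$) contribute more to the objective, which is why approximation error in rarely-visited states matters less.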
Upvotes: 2 [selected_answer] |
2019/05/30 | 367 | 1,312 | <issue_start>username_0: How can I implement a GAN network for text (review) generation?
Please, can someone guide me to resource (code) to help in text generation?<issue_comment>username_1: For the resources, you can refer to this:
<https://becominghuman.ai/generative-adversarial-networks-for-text-generation-part-1-2b886c8cab10>
If you want to generate text review for specific score, you can input a noise vector and the score to the generator. You could also make a vector filled with the number of score and add noise to that vector instead.
Upvotes: 0 <issue_comment>username_2: As @username_1 mentions, [text\_gen\_description](https://becominghuman.ai/generative-adversarial-networks-for-text-generation-part-1-2b886c8cab10) gives a good overview!, but the paper [seqGAN paper](https://arxiv.org/abs/1609.05473) describes the REINFORCE approach more in depth, as they are the first to do it (i believe). This is probably the approach most take now of days when going the GAN route.
Note that just basic MLE training has shown promise with OpenAI's GPT-2. When I need a text generator, fine-tuning one of the provided models is usually my go-to.
Also, if you're looking for SeqGAN's code base (you asked for example code), here it is: [git repo](https://github.com/LantaoYu/SeqGAN)
Good Luck!
Upvotes: 2 |
2019/05/31 | 1,056 | 3,970 | <issue_start>username_0: I read that a mix of "greedy" and "random" are ideal for stochastic local search (SLS), but I'm not sure why. It mentioned that the greedy finds the local minima and the randomness avoids getting trapped by the minima. What is the minima and how can you get trapped? Also, how does randomness avoid this? It seems like if it's truly random there's always a chance of ending up searching solutions that lead to dead ends multiple times (which seems like a waste of processing and avoidable)?<issue_comment>username_1: The greedy action is the action that maximises some quantity in the present or near future. A stochastic action is an action that is [random](https://en.wikipedia.org/wiki/Randomness) (it can be the greedy action or any other possible action).
For example, suppose that you are hungry. You can either choose to eat pizza, salad, fruits or fish. Pizza is your favourite food. On the other hand, you don't like much salad, fruits and fish, but you know that they are healthier than pizza. If you choose to eat pizza, then this is a greedy action. What quantiy are you maximising if you choose pizza? Your current happiness. If you randomly pick one of those (either pizza, salad, fruits or fish), then this is a stochastic (or random) action. The stochastic action can also happen to be pizza, but, in the next day, it might not be pizza, and, in general, it will not always be pizza.
Suppose that you always choose pizza. What's going to happen? In the long run, you will get fatter and your health will deteriorate. However, every time you choose pizza, you will get happier in that moment (a local maximum). If you randomly choose your food every time you eat, then it is more likely that you will also eat salad, fruit and fish. In the long run, this could be more beneficial for your health, and hence could prevent you from getting trapped in the local maximum (being happy in the moment that you eat, but unhappier later in life because of possible health problems).
In the context of artificial intelligence, the ideas are the same. There are several algorithms that use stochastic actions in order to avoid getting trapped in local extrema. For example, [simulated annealing](https://pdfs.semanticscholar.org/e893/4a942f06ee91940ab57732953ec6a24b3f00.pdf), [ant colony optimisation algorithms](https://ai.stackexchange.com/a/12545/2444), [$Q$-learning](https://www.researchgate.net/publication/33784417_Learning_From_Delayed_Rewards) (using $\epsilon$-greedy) or [genetic algorithms](https://epubs.siam.org/doi/pdf/10.1137/1018105). An example of local search (or greedy) algorithm is [2-opt](https://www.cs.ubc.ca/~hutter/previous-earg/EmpAlgReadingGroup/TSP-JohMcg97.pdf) (for the TSP problem).
Upvotes: 0 <issue_comment>username_2: As an example of local/global minima, imagine being on a rugged, mountainous landscape, and you want to find the lowest point within some area. For a greedy search, every step you take will take you downhill. If you go downhill long enough, you'll eventually find a flat spot, which is a minimum - from here, there's no step you can take that will get you any lower. However, there's a nearby ridge, which if you crossed it, you could continue downhill to find an even lower spot, the global minimum (the true lowest point). Using your greedy approach, you'll never go uphill to cross the ridge, so you'll be stuck in the local minimum forever. If you occasionally take random steps (other than directly downhill), you have the opportunity to cross ridges that separate local minima, and you have a better chance of finding the global minimum. You are correct that in many cases, the random step won't help you cross a ridge, and will just take you up a mountain in the wrong direction, which is a waste of time. But unless we allow the algorithm to "explore" a bit, it will be content that the first minimum it finds is the best one, and will never get to the bottom.
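The ridge-crossing intuition can be simulated on a toy one-dimensional landscape. Everything below (the function, step size, jump probability) is invented for illustration: a purely greedy downhill search from the right gets stuck in the shallower right-hand valley, while occasionally accepting random steps lets the search cross the ridge near $x \approx 0$ and reach the deeper left-hand valley:

```python
import random

def f(x):
    # Toy "rugged landscape": two valleys separated by a ridge near x = 0.
    # The left valley (x ~ -1) is deeper than the right one (x ~ +1).
    return (x * x - 1.0) ** 2 + 0.3 * x

def search(x, steps=5000, step_size=0.05, p_random=0.0, rng=random):
    best_x = x
    for _ in range(steps):
        if rng.random() < p_random:
            candidate = x + rng.uniform(-1.0, 1.0)   # occasional big random step
        else:
            candidate = x + rng.choice([-step_size, step_size])
            if f(candidate) >= f(x):                 # greedy: only accept downhill
                continue
        x = candidate
        if f(x) < f(best_x):
            best_x = x
    return best_x

rng = random.Random(0)
greedy_only = search(2.0, p_random=0.0, rng=rng)  # stuck in the right valley (x ~ +0.95)
with_random = search(2.0, p_random=0.1, rng=rng)  # can cross to the deeper valley (x ~ -1)
print(round(greedy_only, 2), round(with_random, 2))
```

Tracking `best_x` means the random uphill excursions never cost us the best point found so far; they only buy the chance to escape.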
Upvotes: 1 |
2019/05/31 | 937 | 2,477 | <issue_start>username_0: The MSE can be defined as $(\hat{y} - y)^2$, which should be equal to $(y - \hat{y})^2$, but I think their derivatives are different, so I am confused about which derivative to use for computing my gradient. Can someone explain which form to use?
If $y$ is constant and $\hat{y}$ is the variable the result will be:
$((\hat{y} - y)^2)' = 2(\hat{y} - y)$
and for the other formula:
$((y - \hat{y})^2)' = -2(y - \hat{y})$
which is the same.
Upvotes: 2 <issue_comment>username_2: The derivative of $\mathcal{L\_1}(y, x) = (\hat{y} - y)^2 = (f(x) - y)^2$ with respect to $\hat{y}$, where $f$ is the model and $\hat{y} = f(x)$ is the output of the model, is
\begin{align}
\frac{d}{d \hat{y}} \mathcal{L\_1}
&= \frac{d}{d \hat{y}} (\hat{y} - y)^2 \\
&= 2(\hat{y} - y) \frac{d}{d \hat{y}} (\hat{y} - y) \\
&= 2(\hat{y} - y) (1) \\
&= 2(\hat{y} - y)
\end{align}
The derivative of $\mathcal{L\_2}(y, x) = (y - \hat{y})^2 = (y - f(x))^2$ w.r.t $\hat{y}$ is
\begin{align}
\frac{d}{d \hat{y}} \mathcal{L\_2}
&= \frac{d}{d \hat{y}} (y - \hat{y})^2 \\
&= 2(y -\hat{y}) \frac{d}{d \hat{y}} (y -\hat{y}) \\
&= 2(y - \hat{y})(-1)\\
&= -2(y - \hat{y})\\
&= 2(\hat{y} - y)
\end{align}
So, the derivatives of $\mathcal{L\_1}$ and $\mathcal{L\_2}$ are the same.
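As a quick numerical sanity check (independent of the derivation), one can compare a central finite-difference gradient of both forms of the loss; for a quadratic, the central difference matches the analytic gradient $2(\hat{y} - y)$ up to floating-point error. The values of $\hat{y}$ and $y$ below are arbitrary:

```python
def loss1(y_hat, y):
    return (y_hat - y) ** 2

def loss2(y_hat, y):
    return (y - y_hat) ** 2

def num_grad(loss, y_hat, y, h=1e-6):
    # Central finite difference with respect to y_hat.
    return (loss(y_hat + h, y) - loss(y_hat - h, y)) / (2 * h)

y_hat, y = 1.7, 0.4
g1 = num_grad(loss1, y_hat, y)
g2 = num_grad(loss2, y_hat, y)
analytic = 2 * (y_hat - y)
print(g1, g2, analytic)  # all approximately 2.6
```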
Upvotes: 4 [selected_answer]<issue_comment>username_3: >
> The MSE can be defined as $(\hat{y} - y)^2$, which should be equivalent to $(y - \hat{y})^2$
>
>
>
They are not just "equivalent". It is actually the exact same function, with two different ways to write it.
$$(\hat{y} - y)^2 = (\hat{y} - y)(\hat{y} - y) = \hat{y}^2 -2\hat{y}y + y^2$$
$$(y - \hat{y})^2 = (y -\hat{y})(y - \hat{y}) = y^2 -2y\hat{y} + \hat{y}^2$$
These are exactly the same function. Not just "equivalent" or "equivalent everywhere", but actually the same function. It is therefore no surprise that any derivative is also the same - including the partial derivative with respect to $\hat{y}$ which is what you typically use to drive gradient descent.
The two ways of writing the function is because it is a square and thus has two factorisations. When you write it as a square you can choose which form to use for the inner term.
>
> Which function [form] should I use to compute the gradient?
>
>
>
You can use either form, it does not matter. They represent the same function and have the same gradient.
Upvotes: 3 |
2019/05/31 | 930 | 2,487 | <issue_start>username_0: I wonder if Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) use any machine learning or deep learning?
For example, in AR, where virtual objects are brought into the real world, does this process involve any object detection and localization?
2019/06/03 | 5,705 | 22,534 | <issue_start>username_0: As an AI layman, I am to this day confused by the promised versus the actually achieved improvements of automated translation.
My impression is: **there is still a very, very long way to go.** Or are there other explanations why the automated translations (offered and provided, e.g., by Google) of quite simple Wikipedia articles still read and sound mostly silly, are hardly readable, and are only very partially helpful and useful?
It may depend on personal preferences (concerning readability, helpfulness, and usefulness), but **my personal expectations** are sorely disappointed.
The other way around: Are Google's translations nevertheless readable, helpful, and useful **for a majority of users**?
Or does Google have reasons to **retain its achievements** (and not show users the best it can do)?<issue_comment>username_1: Google's translations *can* be useful, especially if you know that they are not perfect and you just want an initial idea of the meaning of a text (of which Google's translations can sometimes give a quite misleading or incorrect impression). I wouldn't recommend Google Translate (or any other machine translator) for a serious translation, unless it's a common sentence or word, the text is not very long or informal (slang), the translation involves English, or you have no access to a human translator.
[Google Translate](https://translate.google.com/) currently uses a [neural machine translation system](https://arxiv.org/pdf/1609.08144.pdf) (GNMT). To evaluate this model (and similar models), the [BLEU metric](https://en.wikipedia.org/wiki/BLEU) (a scale from $0$ to $100$, where $100$ corresponds to the human gold-standard translation) and side-by-side evaluations (a human rates the translations) have been used. If you use only the BLEU metric, the machine translations are quite poor (but the BLEU metric is also not a perfect evaluation metric, because there is often more than one valid translation of a given sentence). However, GNMT reduces translation errors compared to [phrase-based machine translation (PBMT)](https://en.wikipedia.org/wiki/Statistical_machine_translation).
In the paper [Making AI Meaningful Again](https://arxiv.org/pdf/1901.02918.pdf), the authors also discuss the difficulty of the task of translation (which is believed to be an [AI-complete problem](https://ai.stackexchange.com/a/12147/2444)). They also mention the [transformer](https://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf) (another state-of-the-art machine translation model), which achieves quite poor results (evaluated using the BLEU metric).
To conclude, machine translation is a hard problem and current machine translation systems definitely do not perform as well as a professional human translator.
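For readers curious what the BLEU metric actually computes, here is a deliberately simplified sentence-level sketch (single reference, up to bigrams, uniform weights). Real BLEU is corpus-level, uses up to 4-grams and smoothing, and supports multiple references, so treat this only as an illustration of the core idea: modified n-gram precision times a brevity penalty.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Toy sentence-level BLEU: single reference, uniform weights."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())            # clipped counts
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)   # avoid log(0)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat is on the mat".split()
print(round(bleu("the cat is on the mat".split(), ref), 3))  # 1.0 (exact match)
print(round(bleu("the cat sat on a mat".split(), ref), 3))   # a lower score
```

The clipping (`cand & ref`) is what makes the precision "modified": a candidate cannot score by repeating one reference word over and over.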
Upvotes: 3 <issue_comment>username_2: >
> Am I wrong and Google's translations are nevertheless readable, helpful and useful for a majority of users?
>
>
>
Yes, they are somewhat helpful and allow you to translate faster.
>
> Or does Google have reasons to retain its greatest achievements (and not to
> show to the users the best they can show)?
>
>
>
Maybe, I don't know. If you search for info, Google really does do a lot of horribly stupid stuff, like learning from what users say on the internet, or taking unsuitable data as trusted input data sets.
Upvotes: 2 <issue_comment>username_3: Who claimed that machine translation is as good as a human translator? For me, as a professional translator who makes his living on translation for 35 years now, MT means that my daily production of human quality translation has grown by factor 3 to 5, depending on complexity of the source text.
I cannot agree that the quality of MT goes down with the length of the foreign-language input. That used to be true for the old systems with semantic and grammatical analyses. I don't think that I know all of the old systems (I know Systran, a trashy tool from Siemens that was sold from one company to the next like a Danaan gift, XL8, Personal Translator and Translate), but even a professional system in which I invested 28,000 DM (!!!!) failed miserably.
For example, the sentence:
>
> On this hot summer day I had to work and it was a pain in the ass.
>
>
>
can be translated using several MT tools to German.
[Personal Translator 20](https://www.linguatec.de/en/translation/personal-translator-professional-20/):
>
> Auf diesem heißen Sommertag musste ich arbeiten, und es war ein Schmerz im Esel.
>
>
>
[Prompt](https://www.online-translator.com/):
>
> An diesem heißen Sommertag musste ich arbeiten, und es war ein Schmerz im Esel.
>
>
>
[DeepL](https://www.deepl.com/en/translator):
>
> An diesem heißen Sommertag musste ich arbeiten und es war eine Qual.
>
>
>
Google:
>
> An diesem heißen Sommertag musste ich arbeiten und es war ein Schmerz im Arsch.
>
>
>
Today, Google usually presents me with readable, nearly correct translations, and DeepL is even better. Just this morning I translated 3,500 words in 3 hours, and the result is flawless, although the source text was full of mistakes (written by Chinese authors).
Upvotes: 5 <issue_comment>username_3: Apologies for not writing in English. Please find the adapted translation here:
To give interested people an idea of the quality of MT (DeepL), please see this example from a text I was working on this morning (6,300 words, started at 9 am, delivery today around 1 pm, and I still found time for this post). I was working on this sentence (201 words) when I posted my comment.
>
> "You further represent, warrant and undertake to ABC that you shall not: (a) Conduct any fraudulent, abusive, or otherwise illegal activity which may be grounds for termination of your right to access or use this Website and/or the Services; or (b) Post or transmit, or cause to be posted or transmitted, any communication or solicitation designed or intended to obtain password, account, or private information from any other user of this Website; or (c) Violate the security of any computer network, crack passwords or security encryption codes, transfer or store illegal material (including material that may be considered threatening or obscene), or engage in any kind of illegal activity that is expressly prohibited; or (d) Run maillist, listserv, or any other form of auto-responder, or "spam" on this Website, or any processes that run or are activated while you are not logged on to this Website, or that otherwise interfere with the proper working of or place an unreasonable load on this Website’s infrastructure; or (e) Use manual or automated software, devices, or other processes to "crawl," "scrape," or "spider" any page of this Website; or (f) Decompile, reverse engineer, or otherwise attempt to obtain the source code of this Website."
>
>
>
DeepL returns this:
>
> Sie versichern, garantieren und verpflichten sich gegenüber ABC, dass Sie dies nicht tun werden: (a) betrügerische, missbräuchliche oder anderweitig illegale Aktivitäten durchzuführen, die Anlass für die Beendigung Ihres Rechts auf Zugang oder Nutzung dieser Website und/oder der Dienste sein können; oder (b) Mitteilungen oder Aufforderungen, die dazu bestimmt sind oder sind, Passwörter, Konten oder private Informationen von anderen Nutzern dieser Website zu erhalten, zu posten oder zu übertragen oder posten oder zu posten oder zu übertragen; oder (c) die Sicherheit eines Computernetzwerks verletzen, Passwörter oder Sicherheitsverschlüsselungscodes knacken, illegales Material (einschließlich Material, das als bedrohlich oder obszön angesehen werden kann) übertragen oder speichern oder sich an illegalen Aktivitäten beteiligen, die ausdrücklich verboten sind; oder (d) Mailliste, Listenserver oder jede andere Form von Auto-Responder oder "Spam" auf dieser Website oder Prozesse, die ausgeführt werden oder aktiviert werden, während Sie nicht auf dieser Website angemeldet sind, oder die anderweitig das ordnungsgemäße Funktionieren oder eine unangemessene Belastung der Infrastruktur dieser Website stören; oder (e) manuelle oder automatisierte Software, Geräte oder andere Prozesse verwenden, um eine Seite dieser Website zu "crawlen", zu kratzen, zu spinnen oder zu spinnen; oder (f) dekompilieren, zurückzuentwickeln oder anderweitig zu versuchen, den Quellcode dieser Website zu erhalten.
>
>
>
It took me about 5 to 10 minutes to adjust this paragraph.
As a translator, I know that I cannot rely on machine translation, but I have learnt the specifics and capabilities of the different systems over time, and I know what to pay attention to.
MT helps me a lot in my work.
Upvotes: 2 <issue_comment>username_4: It really depends on the language pair and the topic of the content. Translating to/from English to any other language usually is the best supported. Translating to and from popular languages works better, for example, translating from English to Romanian is a poorer translation than English to Russian. But translating from English to Russian or Romanian is better than translating Russian to Romanian. And translating Romanian to English is better than translating English to Romanian.
But if you are used to working with translators, and you have a passing familiarity with the languages, the typical translation mistakes and the topic, it's easy to understand what was supposed to be there. And, at that point, it's sometimes easier to read something translated into your native language for quick scanning than it is to read it in a second language.
Less popular languages (for translation, not necessarily in number of speakers) get translations that are much, much closer to literal ones - only slightly better than what you would produce yourself using a dictionary for two languages you do not know.
Upvotes: 2 <issue_comment>username_5: You have asked quite a lot of questions, some of which cannot be answered definitively . To give an insight of the **quality** (and its history) of machine translations I like to refer to <NAME> his 'one sentence benchmark' as presented in his [lecture](https://youtu.be/IxQtK2SjWWM?t=1665). It contains one Chinese to English example which is compared with Google Translate output. The correct translation for the example would be:
>
> In 1519, six hundred Spaniards landed in Mexico to conquer the Aztec Empire with a population of a few million. They lost two thirds of their soldiers in the first clash.
>
>
>
Google Translate returned the following translations.
>
> **2009** 1519 600 Spaniards landed in Mexico, millions of people to conquer the Aztec empire, the first two-thirds of soldiers against their loss.
>
>
> **2011** 1519 600 Spaniards landed in Mexico, millions of people to conquer the Aztec empire, the initial loss of soldiers, two thirds of their encounters.
>
>
> **2013** 1519 600 Spaniards landed in Mexico to conquer the Aztec empire, hundreds of millions of people, the initial confrontation loss of soldiers two-thirds.
>
>
> **2015** 1519 600 Spaniards landed in Mexico, millions of people to conquer the Aztec empire, the first two-thirds of the loss of soldiers they clash.
>
>
> **2017** In 1519, 600 Spaniards landed in Mexico, to conquer the millions of people of the Aztec empire, the first confrontation they killed two-thirds.
>
>
>
Whether Google **retains** or 'hides' its best results: I doubt it. There are many excellent researchers working in the field of natural language processing (NLP). If Google had a 'greatest achievement' for translation, researchers would figure it out sooner or later. (Why would Google hide its 'greatest achievement' anyway? They seem to see the benefit of open source; see the Transformer[1] or BERT[2].)
NB. For an updated list of state-of-the-art algorithms in NLP, see the
[SQuAD2.0 leaderboard](https://rajpurkar.github.io/SQuAD-explorer/).
[1] <NAME>, et al. "Attention is all you need." Advances in neural information processing systems. 2017.
[2] <NAME>, et al. "Bert: Pre-training of deep bidirectional transformers for language understanding." arXiv preprint arXiv:1810.04805 (2018).
Upvotes: 3 <issue_comment>username_6: This will be not so much an answer as a commentary.
The quality depends on several things, including (as Aaron said above) 1) the language pair and 2) the topic, but also 3) the genre and 4) the style of the original, and 5) the amount of parallel text you have to train the MT system.
To set the stage, virtually all MT these days is based on parallel texts, that is, a text in two different languages, with one presumably being a translation of the other (or both being translations of some third language), potentially using dictionaries (perhaps assisted by morphological processing) as a backoff when the parallel texts don't contain particular words.
Moreover, as others have said, an MT system in no way understands the texts it's translating; it just sees strings of characters, and sequences of words made up of characters, and it looks for similar strings and sequences in texts it's translated before. (Ok, it's slightly more complicated than that, and there have been attempts to get at semantics in computational systems, but for now it's mostly strings.)
1) Languages vary. Some languages have lots of morphology, which means they do things with a single word that other languages do with several words. A simple example would be Spanish 'cantaremos' = English "we will sing". And one language may do things that the other language doesn't even bother with, like the informal/formal (tu/ usted) distinction in Spanish, which English doesn't have an equivalent to. Or one language may do things with morphology that another language does with word order. Or the script that the language uses may not even mark word boundaries (Chinese, and a few others). The more different the two languages, the harder it will be for the MT system to translate between them. The first experiments in statistical MT were done between French and English, which are (believe it or not) very similar languages, particularly in their syntax.
2) Topic: If you have parallel texts in the Bible (which is true for nearly any pair of written languages), and you train your MT system off of those, don't expect it to do well on engineering texts. (Well, the Bible is a relatively small amount of text by the standards of training MT systems anyway, but pretend :-).) The vocabulary of the Bible is very different from that of engineering texts, and so is the frequency of various grammatical constructions. (The grammar is essentially the same, but in English, for example, you get lots more passive voice and more compound nouns in scientific and engineering texts.)
3) Genre: If your parallel text is all declarative (like tractor manuals, say), trying to use the resulting MT system on dialog won't get you good results.
4) Style: Think Hilary vs. Donald; erudite vs. popular. Training on one won't get good results on the other. Likewise training the MT system on adult-level novels and using it on children's books.
5) Amount of parallel text (per language pair): English has lots of texts, and the chances of finding texts in some other language which are parallel to a given English text are much higher than the chances of finding parallel texts in, say, Russian and Igbo. (That said, there may be exceptions, like the languages of India.) As a gross generalization, the more such parallel texts you have to train the MT system, the better the results.
In sum, language is complicated (which is why I love it--I'm a linguist). So it's no surprise that MT systems don't always work well.
BTW, human translators don't always do so well, either. A decade or two ago, I was getting translations of documents from human translators into English, to be used as training materials for MT systems. Some of the translations were hard to understand, and in some cases where we got translations from two (or more) human translators, it was hard to believe the translators had been reading the same documents.
And finally, there's (almost) never just one correct translation; there are multiple ways of translating a passage, which may be more or less good, depending on what features (grammatical correctness, style, consistency of usage,...) you want. There's no easy measure of "accuracy".
Upvotes: 1 <issue_comment>username_7: >
> "Or does Google have reasons to retain its achievements (and not to show to the users the best they can show)"
>
>
>
If they were, then what they're holding back would be *amazing*. Google publishes [a lot](https://ai.google/research/pubs/?area=NaturalLanguageProcessing) of strong papers in Natural Language Processing, including ones that get [state of the art results](https://arxiv.org/abs/1602.02410) or make [significant](https://arxiv.org/abs/1810.04805) [conceptual](https://arxiv.org/abs/1409.3215) [breakthroughs](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf).
They have also released very useful [datasets](https://catalog.ldc.upenn.edu/LDC2006T13) and [tools](https://www.tensorflow.org/). Google is one of the few companies out there that is not only using the cutting edge of current research, but is actively contributing to the literature.
Machine translation is just a hard problem. A good human translator needs to be *fluent* in both languages to do the job well. Each language will have its own idioms and non-literal or context-dependent meanings. Just working from a dual-language dictionary would yield terrible results (for a human or computer), so we need to train our models on existing corpora that exist in multiple languages in order to learn how words are actually used (n.b. hand-compiled phrase translation tables can be used as *features*; they just can't be the whole story). For some language pairs, parallel corpora are plentiful (e.g. for EU languages, we have the [complete proceedings of the European Parliament](https://www.statmt.org/europarl/)). For other pairs, training data is much sparser. And even if we have training data, there will exist lesser used words and phrases that don't appear often enough to be learned.
This used to be an even bigger problem, since synonyms were hard to account for. If our training data had sentences for "The dog caught the ball", but not "The puppy caught the ball", we would end up with a low probability for the second sentence. Indeed, significant smoothing would be needed to prevent the probability from being *zero* in many such cases.
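The zero-probability problem and the smoothing fix can be illustrated with a tiny bigram model; the toy corpus and the add-one (Laplace) scheme below are invented purely for illustration:

```python
from collections import Counter

corpus = "the dog caught the ball . the dog chased the ball .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = set(corpus) | {"puppy"}  # pretend "puppy" is in the vocabulary

def p_mle(w2, w1):
    """Unsmoothed maximum-likelihood estimate of P(w2 | w1)."""
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

def p_laplace(w2, w1):
    """Add-one smoothing: every bigram gets a pseudo-count of 1."""
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(vocab))

print(p_mle("caught", "puppy"))      # 0.0 -> the unseen bigram is "impossible"
print(p_laplace("caught", "puppy"))  # non-zero, so the sentence survives
```

Seen bigrams still get higher probability than unseen ones; smoothing just stops a single unseen pair from zeroing out a whole sentence.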
The emergence of neural language models in the last 15 years or so has *massively* helped with this problem, by allowing words to be mapped to a real-valued semantic space *before* learning the connections between words. This allows models to be learned in which words that are close together in meaning are also close together in the semantic space, and thus switching a word for its synonym will not greatly affect the probability of the containing sentence. [word2vec](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) is a model that illustrated this very well; it showed that you could, e.g., take the semantic vector for "king", subtract the vector for "man", add the vector for "woman", and find that the nearest word to the resulting vector was "queen". Once the research in neural language models began in earnest, we started seeing immediate and *massive* drops in perplexity (i.e. how confused the models were by natural text) and we're seeing corresponding increases in BLEU score (i.e. quality of translation) now that those language models are being integrated into machine translation systems.
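The king/man/woman/queen arithmetic can be sketched with hand-made toy vectors. Real word2vec embeddings are learned from data and typically have hundreds of dimensions; the numbers below are invented so that the analogy works out:

```python
import math

# Toy, hand-made vectors (dimensions loosely: royalty, maleness, "ball-ness").
vecs = {
    "king":  [0.9, 0.9, 0.1],
    "queen": [0.9, 0.1, 0.1],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.1],
    "ball":  [0.0, 0.5, 0.9],
}

def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# king - man + woman, then find the nearest other word by cosine similarity.
target = add(sub(vecs["king"], vecs["man"]), vecs["woman"])
nearest = max((w for w in vecs if w != "king"),
              key=lambda w: cosine(vecs[w], target))
print(nearest)  # "queen" with these toy vectors
```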
Machine translations are still not as good as quality human translations, and quite possibly won't be *that* good until we crack fully sapient AI. But good human translators are expensive, while everyone with Internet access has machine translators available. The question isn't whether the human translation is better, but rather how close the machine gets to that level of quality. That gap has been shrinking and is continuing to shrink.
Upvotes: 0 <issue_comment>username_8: Surprisingly all the other answers are very vague and try to approach this from the human translator POV. Let's switch over to ML engineer.
When creating a translation tool, one of the first questions that we should consider is *"How do we measure that our tool works?"*.
Which is essentially what the OP is asking.
Now this is not an easy task (some other answers explain why). There is a [Wikipedia Article](https://en.wikipedia.org/wiki/Evaluation_of_machine_translation) that mentions different ways to evaluate machine translation results - both human and automatic scores exist (such as [BLEU](https://en.wikipedia.org/wiki/BLEU), [NIST](https://en.wikipedia.org/wiki/NIST_(metric)), [LEPOR](https://en.wikipedia.org/wiki/LEPOR)).
With rise of neural network techniques, those scores improved significantly.
Translation is a complex problem. There are many things that can go right(or wrong), and computer translation system often ignores some of the subtleties, which stands out for a human speaker.
I think if we are to think about the future, there are few things that we can rely on:
* Our techniques are getting better, wider known and tested. This is going to improve the accuracy in the long run.
* We are developing new techniques which can take into account variables previously ignored or just do a better job.
* Many of currently existing translation models are often "reused" to translate other languages (for example, try translating "JEDEN" from Polish to Chinese(traditional) using Google Translator - you will end up with "ONE", which is an evidence pointing out the fact that Google translates Polish to English, and then English to Chinese).
This is obviously not a good approach - you are going to lose some information in the process - but it's a one that will still work, so companies like Google use it for languages where they don't have enough workpower or data.
With time, more specialized models will appear, which will improve the situation.
* Also, as previous point stated, more and more data will only help improving the machine translation.
To summarize, this complex problem, although not solved, is certainly on a good way and allows for some impressive results for well-researched language pairs.
Upvotes: 1 |
2019/06/03 | 287 | 1,028 | <issue_start>username_0: Stories like [this](https://aiweirdness.com/post/185339301987/once-again-a-neural-net-tries-to-name-cats) one are quite popular these days.
The idea of training a neural net to do something silly like this may sound trivial to experts like you, but for a novice like me it could be an interesting learning experience.
Is there novice-friendly software I could play with to train a neural net to do something like this, or is there necessarily a steep learning curve?<issue_comment>username_1: Keras is probably the highest-level option and the easiest to get into.
Here are some [keras tutorials](https://www.pyimagesearch.com/2018/09/10/keras-tutorial-how-to-get-started-with-keras-deep-learning-and-python/)
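Before reaching for Keras, the core loop of a name generator (learn which characters tend to follow which, then sample) can be sketched with no framework at all. This is a character-level Markov chain rather than a neural net, and the training names below are made up, but the data flow (example names in, new names out) is the same as in the linked cat-name experiments:

```python
import random
from collections import defaultdict

names = ["whiskers", "mittens", "shadow", "smokey", "tiger", "misty"]  # toy training set

# Count which character follows which ("^" marks start, "$" marks end).
follows = defaultdict(list)
for name in names:
    chars = ["^"] + list(name) + ["$"]
    for a, b in zip(chars, chars[1:]):
        follows[a].append(b)

def generate(rng, max_len=10):
    """Sample one new name by walking the character-transition table."""
    out, c = [], "^"
    while len(out) < max_len:
        c = rng.choice(follows[c])
        if c == "$":
            break
        out.append(c)
    return "".join(out)

rng = random.Random(3)
print([generate(rng) for _ in range(5)])
```

Swapping the transition table for a trained character-level LSTM (e.g. in Keras) is exactly the upgrade the linked blog post makes; the sampling loop stays conceptually identical.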
Upvotes: 2 <issue_comment>username_2: An intuitive NN playground can be found in [TensorFlow Playground](https://playground.tensorflow.org)
Also, check the Google ML crash course for coders, as they have promised to add more [practicals](https://developers.google.com/machine-learning/practica/).
Upvotes: 2 |