Columns: date: string (length 10–10) · nb_tokens: int64 (60–629k) · text_size: int64 (234–1.02M) · content: string (length 234–1.02M)
2019/06/03
227
869
<issue_start>username_0: As stated in the universal approximation theorem, a neural network can approximate almost any function. Is there a way to calculate the closed-form (or analytical) expression of the function that a neural network computes/approximates? Or, alternatively, figure out if the function is linear or non-linear?<issue_comment>username_1: Keras is probably the highest-level and easiest to get into. Here are some [keras tutorials](https://www.pyimagesearch.com/2018/09/10/keras-tutorial-how-to-get-started-with-keras-deep-learning-and-python/) Upvotes: 2 <issue_comment>username_2: An intuitive NN playground can be found in [TensorFlow Playground](https://playground.tensorflow.org) Also, check the Google ML crash course for coders, as they promised to add more [practicals](https://developers.google.com/machine-learning/practica/). Upvotes: 2
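As an aside on the question itself (this is not from the answers above): for one special case a closed form does exist. If every activation is linear (the identity), the whole network collapses to a single affine map. A minimal NumPy sketch, with all weights made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer network with *linear* activations:
# f(x) = W2 @ (W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def layer_by_layer(x):
    """Evaluate the network one layer at a time."""
    return W2 @ (W1 @ x + b1) + b2

# The closed-form (analytical) expression collapses to one affine map:
W = W2 @ W1            # effective weight matrix
b = W2 @ b1 + b2       # effective bias

x = rng.normal(size=3)
assert np.allclose(layer_by_layer(x), W @ x + b)
# With non-linear activations (e.g. ReLU) the network is only
# piecewise-affine, so no single closed form exists globally.
```

With ReLU activations, the best one can do is extract the affine map that holds on the region around a given input (the "activation pattern"), which is why the general question has no simple answer.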
2019/06/04
2,063
5,813
<issue_start>username_0: I was reading the following book: <http://neuralnetworksanddeeplearning.com/chap2.html> and towards the end of equation 29, there is a paragraph that explains this: [![enter image description here](https://i.stack.imgur.com/KkBdn.png)](https://i.stack.imgur.com/KkBdn.png) However I am unsure how the equation below is derived: [![enter image description here](https://i.stack.imgur.com/DlOfT.png)](https://i.stack.imgur.com/DlOfT.png)<issue_comment>username_1: I think that Nielsen just wanted to convey the idea of the back-propagation algorithm using that formula, as you can read from the next paragraph "Now, this demon is a good demon...", so I don't think that that partial derivative is mathematically correct, provided the partial derivative is still with respect to $z\_j^l$. $C$ is the cost (or loss) function. $z\_j^l$ is the linear output of neuron $j$ in layer $l$, which is followed by a non-linear function (e.g. sigmoid), denoted by $\sigma$. So, the actual output of neuron $j$ in layer $l$ is $\sigma(z\_j^l)$. 
The partial derivative of the cost function $C$ with respect to this neuron's linear output, $z\_j^l$, is $$\frac{\partial C}{\partial z\_j^l} = \frac{\partial C}{\partial z\_j^l} 1 = \frac{\partial C}{\partial z\_j^l} \frac{\partial z\_j^l}{\partial z\_j^l}.$$ If the linear output of node $j$ in layer $l$ is now $z\_j^l + \Delta z\_j^l$, then the partial derivative with respect to $z\_j^l$ becomes \begin{align} \frac{\partial C}{\partial z\_j^l} &= \frac{\partial C}{\partial (z\_j^l + \Delta z\_j^l)}\frac{\partial (z\_j^l + \Delta z\_j^l)}{\partial z\_j^l} \\ &= \frac{\partial C}{\partial (z\_j^l + \Delta z\_j^l)} \left( \frac{\partial z\_j^l}{\partial z\_j^l} + \frac{\partial \Delta z\_j^l}{\partial z\_j^l} \right) \\ &= \frac{\partial C}{\partial (z\_j^l + \Delta z\_j^l)} \left( 1 + \frac{\partial \Delta z\_j^l}{\partial z\_j^l} \right) \\ &= \frac{\partial C}{\partial (z\_j^l + \Delta z\_j^l)} + \frac{\partial C}{\partial (z\_j^l + \Delta z\_j^l)}\frac{\partial \Delta z\_j^l}{\partial z\_j^l} \\ \end{align} $\Delta z\_j^l$ depends on $z\_j^l$, but it is not specified how. Upvotes: 0 <issue_comment>username_2: I believe he's just saying that: $$ \frac{\partial C}{\partial z\_j^l} \Delta z\_j^l \approx \frac{\partial C}{\partial z\_j^l} \partial z\_j^l \approx \partial C $$ so that the change in the cost function can be arrived at simply for a small enough perturbation $\Delta z\_j^l$. Or, taking that line of approximations backwards, the change in the cost function for a given perturbation is just: $$ \partial C \approx \frac{\partial C}{\partial z\_j^l} \partial z\_j^l \approx \frac{\partial C}{\partial z\_j^l} \Delta z\_j^l $$ Upvotes: 1 <issue_comment>username_3: The derivative of a function ($f(x\_1,x\_2..x\_n)$) w.r.t. one of its variables ($x\_1,x\_2..x\_n$) gives us the rate of change of the function w.r.t. the rate of change of that variable. 
This roughly means: by how much will the function value change if we change the variable by a "unit amount" or $+1$? (We cannot actually use a change of $+1$, as the change needs to be infinitesimally small; this is just a rough explanation.) The image shows a tangent line to a curve or function $f(x)$. The slope of this tangent is given by $\frac{df(x)}{dx}$ at that particular $x$. If you move by a very small amount in the direction of positive $x$, i.e. $x+\delta x$, the change in the value of $f(x) = y$ will almost be the same as the change in the value of the $y$ of the tangent line. [![enter image description here](https://i.stack.imgur.com/MDjXL.jpg)](https://i.stack.imgur.com/MDjXL.jpg) Now, as per the excerpt, the cost function $C$ is a function of $z\_j^l$. Thus, it can be written as $$C = f(z\_j^l, .....).$$ So, $$\frac{\partial C}{\partial z\_j^l}$$ indicates how much $C$ will vary w.r.t. $z\_j^l$, i.e. when $z\_j^l$ is changed by an infinitesimally small amount, and thus the formula: $$\frac{\partial C}{\partial z\_j^l} \Delta z\_j^l$$ where the author assumed $\Delta z\_j^l$ to be very small. This gives the infinitesimal change in $C$, i.e. $\Delta C$, for an infinitesimally small change in $z\_j^l$ or any variable affecting the cost function. This can be derived by series expansion too (given below), but this is an intuitive explanation. An explanation can be given from the **[Taylor Series Theorem](https://users.math.msu.edu/users/magyar/Math133/11.10-Taylor-Series.pdf)**, which states: > > Let $f(x)$ be a function which is analytic at $x = a$. Then we can write $f(x)$ as the following power series, called the Taylor series of $f(x)$ at $x = a$: > > > $$f(x) = f(a) + f'(a)(x-a) + f''(a)\frac{(x-a)^2}{2!} + f'''(a)\frac{(x-a)^3}{3!}... 
$$ Now if we keep the other variables constant and make the cost function $f$ vary only with $z^l\_j$, and if we put $a=z^l\_j$ and $x=z^l\_j + \Delta z^l\_j$, the equation becomes: $$f(z^l\_j + \Delta z^l\_j) = f(z^l\_j) + f'(z^l\_j)(\Delta z^l\_j) + f''(z^l\_j)\frac{(\Delta z^l\_j)^2}{2!} + f'''(z^l\_j)\frac{(\Delta z^l\_j)^3}{3!}... $$ We can ignore the higher-order terms of $\Delta z^l\_j$, since terms containing $\Delta z^l\_j$ with powers greater than 1 will be negligible compared to $\Delta z^l\_j$ with power 1. Thus the equation effectively becomes: $$f(z^l\_j + \Delta z^l\_j) = f(z^l\_j) + f'(z^l\_j)(\Delta z^l\_j)$$ $$f(z^l\_j + \Delta z^l\_j) - f(z^l\_j)= f'(z^l\_j)(\Delta z^l\_j)$$ $$f(z^l\_j + \Delta z^l\_j) - f(z^l\_j)= \frac{\partial f(z^l\_j)}{\partial z^l\_j}(\Delta z^l\_j)$$ where $$f(z^l\_j + \Delta z^l\_j) - f(z^l\_j)$$ can be thought of as $\Delta C$, the change in the cost function for a small change in $z^l\_j$. NOTE: I have glossed over some requirements for a Taylor Series to be convergent. Upvotes: 0
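The first-order approximation discussed in this thread can be checked numerically; a small sketch with a made-up scalar cost $C(z) = z^2$:

```python
# Made-up scalar "cost" C(z) = z^2, so dC/dz = 2z.
def C(z):
    return z ** 2

z = 1.5
dC_dz = 2 * z          # analytical derivative at z
dz = 1e-4              # small perturbation, Delta z

delta_C = C(z + dz) - C(z)   # true change in cost
approx = dC_dz * dz          # first-order (Taylor) estimate

# The two agree up to a second-order error of size dz^2.
assert abs(delta_C - approx) < 1e-6
```

The leftover error is exactly the dropped second-order Taylor term, $dz^2$, which is why the approximation only holds for a small enough perturbation.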
2019/06/04
2,168
6,318
<issue_start>username_0: I need to manually classify thousands of pictures into discrete categories, say, where each picture is to be tagged either A, B, or C. **Edit:** I want to do this work myself, not outsource / crowdsource / whatever online collaborative distributed shenanigans. Also, I'm currently not interested in active learning. Finally, I don't need to label features inside the images (eg. [Sloth](https://cvhci.anthropomatik.kit.edu/~baeuml/projects/a-universal-labeling-tool-for-computer-vision-sloth/)) just file each image as either A, B, or C. Ideally I need a tool that will show me a picture, wait for me to press a single key (0 to 9 or A to Z), save the classification (filename + chosen character) in a simple CSV file in the same directory as the pictures, and show the next picture. Maybe also showing a progress bar for the entire work and ETA estimation. Before I go ahead and code it myself, is there anything like this already available?
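No existing tool is named in this thread, but the loop the asker describes is small. A minimal sketch (all names hypothetical), with the key-reading function injected so any image viewer or GUI toolkit can be wired in:

```python
import csv

def label_images(filenames, get_key, csv_path="labels.csv", allowed="ABC"):
    """For each file, wait for one key and append filename,label to a CSV.

    `get_key(filename)` is a caller-supplied function that displays the
    image and returns the pressed key (hypothetical; wire it to any GUI).
    """
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        for i, name in enumerate(filenames, 1):
            key = get_key(name).upper()
            if key not in allowed:
                continue  # ignore unexpected keys
            writer.writerow([name, key])
            print(f"{i}/{len(filenames)} done")  # crude progress report

# Demonstration with a fake "keyboard" instead of a real key handler:
fake_keys = iter("ABA")
label_images(["img1.png", "img2.png", "img3.png"],
             get_key=lambda name: next(fake_keys),
             csv_path="labels.csv")
```

ETA estimation could be added by timing each iteration and multiplying the running average by the number of remaining files.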
2019/06/05
2,011
5,769
<issue_start>username_0: There was a recent informal question on chat about RTS games suitable for AI benchmarks, and I thought it would be useful to ask a question about them in relation to AI research. Compact is defined as the fewest mechanics, elements, and smallest gameboard that produces a balanced, intractable, strategic game. *(This is important because greater compactness facilitates mathematical analysis.)*
2019/06/06
576
2,080
<issue_start>username_0: As far as I know, an RNN accepts a sequence as input and can produce a sequence as output. Are there neural networks that accept graphs or trees as inputs, so as to represent the relationships between the nodes of the graph or tree?<issue_comment>username_1: There are types of neural networks designed exactly for that purpose. For example, [*graph convolutional networks*](https://arxiv.org/pdf/1609.02907.pdf) (GCN) by <NAME>. The input to the network will be a matrix of size $N \times F$, where $N$ is the number of nodes and $F$ the number of features (for each node). You can then multiply the feature matrix with the adjacency matrix (each node is going to be a weighted sum of its first-degree neighbors). There are a lot of other variations, such as [*diffusion convolutional networks*](https://arxiv.org/pdf/1511.02136.pdf), [*gated graph neural networks*](https://arxiv.org/pdf/1511.05493.pdf), etc. There is a nice survey that describes most of the recent related work in the field, [Graph Neural Networks: A review of methods and applications](https://arxiv.org/pdf/1812.08434.pdf) by <NAME> et al. Upvotes: 1 <issue_comment>username_2: Yes, there are numerous, coming under the umbrella term [**Graph Neural Networks**](https://ai.stackexchange.com/questions/11169/what-is-a-graph-neural-network/20626#20626) (GNN). The most common input structures accepted by these techniques are the adjacency matrix of the graph (optionally accompanied by its node feature matrix and/or edge feature matrix, if the graph has such information). 
[A Comprehensive Survey on Graph Neural Networks](https://arxiv.org/abs/1901.00596), Wu et al (2019) divides GNNs into four subgroups: * Recurrent graph neural networks (RecGNN) * Convolutional graph neural networks (ConvGNN) * Graph autoencoders (GAE) * Spatial-temporal graph neural networks (STGNN) ConvGNNs can themselves be classified by whether they use spectral or spatial methods, and GAEs by whether they are designed for network embedding or graph generation. Upvotes: 2
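The propagation step username_1 describes (node features mixed through the adjacency matrix, then projected by a learned weight matrix) can be sketched in NumPy. The graph, features, and weights below are all made up for illustration:

```python
import numpy as np

# Toy graph: 4 nodes, 3 features per node.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)      # adjacency matrix
H = np.arange(12, dtype=float).reshape(4, 3)   # node feature matrix (N x F)
W = np.ones((3, 2)) * 0.1                      # made-up learned weights

# One GCN layer (Kipf & Welling): H' = relu(D^-1/2 (A + I) D^-1/2 H W)
A_hat = A + np.eye(4)                          # add self-loops
D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
H_next = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

print(H_next.shape)  # (4, 2): each node is now a normalized mix of its neighbours
```

Each output row depends only on the node itself and its first-degree neighbours; stacking layers widens the receptive field one hop at a time.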
2019/06/07
424
1,803
<issue_start>username_0: The question is a little bit broad, but I could not find any concrete explanation anywhere, hence I decided to ask the experts here. I have trained a classifier model for a binary classification task. Now I am trying to fine-tune the model. With different sets of hyperparameters I am getting different sets of accuracy on my train and test set. For example:

```
(1) Train set: 0.99 | Cross-validation set: 0.72
(2) Train set: 0.75 | Cross-validation set: 0.70
(3) Train set: 0.69 | Cross-validation set: 0.69
```

These are approximate numbers. But my point is - for certain sets of hyperparameters I am getting more or less similar CV accuracy, while the accuracy on the training data varies from overfit to not so much overfit. My question is - which of these models will work best on future unseen data? What is the recommendation in this scenario: shall we choose the model with higher training accuracy or lower training accuracy, given that CV accuracy is similar in all cases above (in fact the CV score is better in the overfitted model)?<issue_comment>username_1: Assuming that your cross-validation scores (both on the train set and test set) indicate the model's prediction performance correctly, you should definitely decide which trained model to use based on your **validation accuracy only**, regardless of whether your model is overfitted or not. Upvotes: 0 <issue_comment>username_2: You should only look at the cross-validation score. If this set is large enough, it will give you an accurate prediction of how your model will act on unseen data. Your case is exceptional. The fitted model which is obviously overfitted actually performs better on the cross-validation set. This means in turn that your overfitted model will perform better with unseen data. Upvotes: 2 [selected_answer]
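The selection rule both answers recommend (pick the candidate with the best cross-validation score, ignoring training accuracy) can be sketched without any ML library. The data, the "models", and the fold count are made up; a real setup would refit each model on the training folds rather than use fixed prediction rules:

```python
import numpy as np

rng = np.random.default_rng(0)

def kfold_cv_score(predict_fn, X, y, k=5):
    """Mean accuracy of `predict_fn` over k held-out folds."""
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    scores = [np.mean(predict_fn(X[f]) == y[f]) for f in folds]
    return float(np.mean(scores))

# Toy data: the label is 1 exactly when the first feature is positive.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)

# Two hypothetical candidate models (fixed rules for simplicity):
candidates = {
    "good": lambda X: (X[:, 0] > 0).astype(int),   # matches the true rule
    "bad": lambda X: np.zeros(len(X), dtype=int),  # always predicts 0
}

# Select by CV score only, as the answers advise.
best = max(candidates, key=lambda n: kfold_cv_score(candidates[n], X, y))
print(best)
```

Training accuracy never enters the selection criterion; only the held-out score does.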
2019/06/07
1,837
8,132
<issue_start>username_0: In regression, in order to minimize an error function, a functional form of hypothesis $h$ must be decided upon, and it must be assumed (as far as I'm concerned) that $f$, the true mapping of instance space to target space, must have the same form as $h$ (if $h$ is linear, $f$ should be linear. If $h$ is sinusoidal, $f$ should be sinusoidal. Otherwise the choice of $h$ was poor). However, doesn't this require a priori knowledge of datasets that we are wanting to let computers do on their own in the first place? I thought machine learning was letting machines do the work and have minimal input from the human. Are we not telling the machine what general form $f$ will take and letting the machine using such things as error minimization do the rest? That seems to me to forsake the whole point of machine learning. I thought we were supposed to have the machine work for us by analyzing data after providing a training set. But it seems we're doing a lot of the work for it, looking at the data too and saying "This will be linear. Find the coefficients $m, b$ that fit the data."<issue_comment>username_1: So in a sense you are correct. Using your jargon: linear regression will only "work" if the true function is approximately $y=h(x)=\beta^{T}x+\beta\_0$. The advantages of using this are that it's light, it's convex, and all-around easy, but for a lot of larger problems this won't work. As you said, you want the machine to do the work, and this is (kind of) where deeper models come into play: you allow a learnable featurization and classification/regression. Think about it this way: the result of your regression is most likely linearly associated with some set of features, they just may not be the ones you are interested in (you can actually prove this with any infinitely wide network: [Universal approx Thm](https://en.wikipedia.org/wiki/Universal_approximation_theorem)). 
Unfortunately we can't use an infinite-dimensional model, so we run with these giant over-parametrized models where we hope that a good function can be described by a sub-structure (only recently are we starting to pay attention to how these sub-structures form -- look at this [paper](https://arxiv.org/abs/1803.03635)). But the way you are thinking about it is a large pitfall for many trying to move forward. A lot of ML people nowadays gain success by throwing a function with a lot of parameters at a big-data problem, but you'll see the largest advancements in the field come from a theoretical understanding of the featurization and optimization. I hope this helped. Upvotes: 2 <issue_comment>username_2: It is just a statistical technique that is used in machine learning, and it depends on the nature of the machine learning problem. I think you should look at the relation between statistics and machine learning. These are not the same, but you can see statistical methods within machine learning methods. For your specific problem, there are a lot of optimization techniques in AI (not specifically in machine learning). So, I think you should scrutinize the problem further to find the relation between machine learning, AI, and statistics in this regression example. Upvotes: 0 <issue_comment>username_3: Actually, regression comes under statistical analysis. As you know, many business activities (decision making) rely on the previous trends that can be grabbed from the organization's transaction data. When regression is performed on that organizational data, one can understand what decision can be made. One could even simulate different conditions once the regression line is generated, and to predict unknown cases the decision maker can pass in the numerical values corresponding to certain phenomena in the operation of the organization. **How is regression machine learning?** Let's start from the definition of machine learning. 
> > Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it learn for themselves. > > > Source: <https://www.expertsystem.com/machine-learning-definition/> From the definition it becomes clear that machine learning is about gaining insight into the data without being explicitly programmed. **Doesn't it feel great to know what the previous trends in my business-related transaction data are trying to convey?** > > Please note that, in machine learning algorithms like regression, one is trying to build some relation between the transactional data. > > > **So how is the relation between the data built?** Consider you are in the business of buying and selling houses, and you want to predict the house price according to the latest trend. So what you have got is data on house prices and the features of the houses. > > Feature : house\_area, no\_of\_rooms > > Target (what you want to predict): Price > > > Now, you perform regression on that data and you want to find out what would be the best price for a house whose features are not mentioned in the latest trend's data. Suppose the general regression looks like: > > price = a \* house\_area + b \* no\_of\_rooms + some\_constant > > > So, in some sense, we're just trying to find the best-fit line of the latest trend data with some variables like a, b and some\_constant. Isn't it great to find such a high level of detail from that trend data, to know what the house price would be for data not present in the so-called 'training data'? **Choosing the objective function for the best mapping?** Suppose the relation is non-linear. But how would my algorithm know that? 
In such cases, one can use an artificial neural network, as it can learn to fit non-linear training data too. > > Note: You can learn to simulate non-linear data at: <https://playground.tensorflow.org> > > > Upvotes: 1 <issue_comment>username_4: What you are asking touches upon two very different approaches to machine learning: 1. The empirical approach (many people just call this 'machine learning', and some people like to call it 'algorithmic machine learning') 2. The statistical approach (some people like to call this 'statistical machine learning') The purely empirical approach is very goal-oriented - think discriminative models that are only used for prediction. You really only care about whether the model fits the training + test data well according to whichever metric you've selected. The statistical approach is very process-oriented - you would want to identify the processes that generate the data, the distributions they follow, whether your results are statistically significant, etc. Along this spectrum, most folks fall somewhere into the middle. What you've described is closer to statistical machine learning - to practitioners of the other approach, regression only means that you are trying to predict a continuous target variable (whereas classification would be for a discrete target variable). Then you might poke around the data a bit, fiddle around with features & hyperparameters, and try out a lot of different regression algorithms, going from OLS, SVMs, nearest neighbour regressors, random forests, gradient boosted trees, and maybe even RNNs, etc. In the extreme case, a purist of this approach wouldn't care about the statistics or whatever the underlying distributions are at all, but only whether the final regression gives good results in practice. 
While there are clear risks with this approach (especially when the underlying assumptions of the models fall apart), it can give good results, especially when the practitioner is a good coder and can try out lots of possibilities very quickly, and even produce novel algorithms. The fact is that maths does sometimes lag development of other fields - Fourier analysis for example, and deep neural networks. Another very approximate analogy would be science vs engineering. Upvotes: 0
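The house-price regression sketched in the third answer (price as a linear combination of area, rooms, and a constant) can be fit in closed form with least squares. The data below is made up, generated from a known rule so the recovered coefficients can be checked:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up housing data generated from a known rule plus a little noise:
# price = 3 * area + 10 * rooms + 5 + noise
area = rng.uniform(50, 200, size=100)
rooms = rng.integers(1, 6, size=100)
price = 3 * area + 10 * rooms + 5 + rng.normal(0, 0.1, size=100)

# Design matrix with a column of ones for the constant term.
X = np.column_stack([area, rooms, np.ones_like(area)])
(a, b, c), *_ = np.linalg.lstsq(X, price, rcond=None)

print(round(a, 2), round(b, 2), round(c, 2))  # close to 3, 10, 5
```

This is the sense in which the human chooses the *form* (linear in these features) and the machine finds the coefficients by error minimization.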
2019/06/09
371
1,640
<issue_start>username_0: I have way more unlabeled data than labeled data. Therefore I would like to train an autoencoder using MobileNetV2 as the encoder. Then I will use the pre-trained model for the classification of the labeled data. I think it is rather difficult to "invert" the MobileNet architecture to create a decoder. Therefore, my question is: can I use a different architecture for the decoder, or will this introduce weird artefacts?<issue_comment>username_1: > > can I use a different architecture for the decoder, or will this > introduce weird artifacts? > > > If you are using a [U-net](https://arxiv.org/abs/1505.04597)-like architecture with skip connections from encoding to decoding layers, the outputs of corresponding layers should have the same spatial resolution. There are no other commonly recognized limitations on the decoder architecture for convolutional networks. Upvotes: 2 [selected_answer]<issue_comment>username_2: Other replies are commenting on the skip connections for a U-Net. I believe you want to exclude these skip connections from your auto-encoder. You say you want to use the auto-encoder for unsupervised pretraining, for which you want to pass the data through a bottleneck, so adding skip connections would work against you if you want to use the encoder for a classification task. You ask whether the decoder should 'mirror' the MobileNet encoder. This is actually an interesting one, and I think it could work even if the decoder does not look like the encoder at all. Since you don't need to (and in fact shouldn't) add skip connections, this should be easy to try. Upvotes: 2
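username_2's point, that the decoder need not mirror the encoder, can be demonstrated on a deliberately tiny made-up example: a one-layer linear encoder paired with a structurally different two-layer ReLU decoder still trains. A NumPy sketch with hand-written gradients (everything here is illustrative, not MobileNet-specific):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))  # made-up "unlabeled" data

# Asymmetric autoencoder: linear encoder to a 2-d bottleneck,
# and a *different* two-layer ReLU decoder back to 8-d.
We = rng.normal(scale=0.5, size=(8, 2))    # encoder weights
W1 = rng.normal(scale=0.5, size=(2, 16))   # decoder hidden layer
W2 = rng.normal(scale=0.5, size=(16, 8))   # decoder output layer

lr = 0.05
losses = []
for step in range(300):
    z = X @ We                      # bottleneck code
    h_pre = z @ W1
    h = np.maximum(0, h_pre)        # ReLU
    out = h @ W2                    # reconstruction
    losses.append(float(np.mean((out - X) ** 2)))

    g = 2 * (out - X) / X.size      # dLoss/dout
    gW2 = h.T @ g
    gh = (g @ W2.T) * (h_pre > 0)   # back through the ReLU
    gW1 = z.T @ gh
    gWe = X.T @ (gh @ W1.T)
    We -= lr * gWe; W1 -= lr * gW1; W2 -= lr * gW2

print(losses[0], losses[-1])  # reconstruction loss goes down
```

The mismatch between encoder and decoder shapes causes no problem; gradient descent only cares that the composed map is differentiable.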
2019/06/10
455
1,962
<issue_start>username_0: I was working on CNN. I modified the training procedure on runtime. [![Validation Loss and Validation Accuracy](https://i.stack.imgur.com/0hrgz.png)](https://i.stack.imgur.com/0hrgz.png) As we can see from the validation loss and validation accuracy, the yellow curve does not fluctuate much. The green curve and red curve fluctuate suddenly to higher validation loss and lower validation accuracy, then goes to the lower validation loss and the higher validation accuracy, especially for the green curve. Is it happening because of overfitting or something else? I am asking it because, after fluctuation, the loss decreases to the lowest point, and also the accuracy increases to the highest point. Can anyone tell me why is it happening?
2019/06/10
452
2,120
<issue_start>username_0: In general, how does one make a neural network learn the training data while also forcing it to represent some known structure (e.g., representing a family of functions)? The neural network might find the optimal weights, but those weights might no longer make the layer represent the function I originally intended. For example, suppose I want to create a convolutional layer in the middle of my neural network that is a low-pass filter. In the context of the entire network, however, the layer might cease to be a low-pass filter at the end of training because the backpropagation algorithm found a better optimum. How do I allow the weights to be as optimal as possible, while still maintaining the low-pass characteristics I originally wanted? General tips or pointing to specific literature would be much appreciated.<issue_comment>username_1: Extending @mirror2image's comment, if you have a certain metric that allows you to measure how close the intended layer is to a low-pass filter (something that compares its output with what a low-pass filter would have produced, for example), the simplest way to achieve what you want would be to add a term to your loss function that calculates the value of this metric. This way, each time you do a training step, the network is not only made to output the correct predictions but is also forced to do so while keeping that specific layer's behavior as close to a low-pass filter as possible. This is the most common way of tweaking the behavior of neural networks and is often encountered in research papers. Upvotes: 2 <issue_comment>username_2: The idea of training is that the weights (not the layer, as you wrote) are allowed to "learn" whatever values they "want" within the general network setting.
Actually, I have thought about that too. To have the weights represent what you want, you can first train a shorter network with those weights (1) as the very last layer, so that you have maximum control over them. Once that network is well trained, append the next layers to it and make the learning rate for weights (1) much smaller than for the new weights. Upvotes: 0
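The penalty-term idea in the first answer can be sketched as follows (a minimal example, assuming a 1-D convolution kernel and a hand-picked frequency cutoff; "low-pass" here simply means little spectral energy above the cutoff):

```python
import numpy as np

def lowpass_penalty(kernel, cutoff=0.25, n_fft=64):
    """Fraction of the kernel's (approximate) spectral energy above `cutoff`.

    Adding this term to the task loss nudges training to keep the layer's
    kernel low-pass; the penalty is 0 when the kernel is perfectly low-pass.
    """
    spectrum = np.abs(np.fft.rfft(kernel, n=n_fft)) ** 2
    freqs = np.fft.rfftfreq(n_fft)
    return spectrum[freqs > cutoff].sum() / spectrum.sum()

smooth = np.ones(5) / 5.0                      # moving average: low-pass
sharp = np.array([1.0, -1.0, 1.0, -1.0, 1.0])  # alternating signs: high-pass

penalty_smooth = lowpass_penalty(smooth)
penalty_sharp = lowpass_penalty(sharp)
assert penalty_smooth < 0.2 < penalty_sharp

# During training one would then minimize, e.g.:
# total_loss = task_loss + lam * lowpass_penalty(conv_layer_kernel)
```

The weight `lam` trades off task accuracy against how strictly the layer is held to its low-pass behavior.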
2019/06/12
595
2,602
<issue_start>username_0: Basically, economic decision making is not restricted to mundane finance, the managing of money, but any decision that involves expected utility (some result with some degree of optimality.) * Can Machine Learning algorithms make economic decisions as well as or better than humans? "Like humans" means understanding classes of objects and their interactions, including agents such as other humans. At a fundamental level, there must be some physical representation of an object, leading to usage of an object, leading to management of resources that the objects constitute. This may include the ability to effectively handle semantic data (NLP) because much of the relevant information is communicated in human languages.<issue_comment>username_1: At this time, at least as open source - no. I would guess: * for decision making we need broad input layers of data flows, * we need neural networks with 20,000-200,000 layers, or more complex and dynamic architectures, * we need deep research into how date and time influence historical data flows. What we have at this time: * only sensors - OpenCV and object recognition, NLP tagging, data prediction. So, sensors aren't AI; sensors plus machine learning only capture previous experience. They are not ready for analysing change. Upvotes: 0 <issue_comment>username_2: Consider managing a memory structure as an economic function. (Where to put, and how to manage, the resources constituted by data.) This is something computers can do better and faster than any human. The reason is that the system in which the economic decisions are being made is fully defined. Routing of packages is a similar economic function that computers do much better than humans. These functions haven't been handled by Machine Learning in the past, but, soon after the AlphaGo milestone, Google found an economic application for Machine Learning.
[Google's DeepMind trains AI to cut its energy bills by 40% (Wired)](https://www.wired.co.uk/article/google-deepmind-data-centres-efficiency) So it's entirely context dependent. As models increase in complexity and nuance, utility will be reduced. (In the former case it's a time and space issue related to computational complexity, and in the latter case, often a function of [incomplete information](https://en.wikipedia.org/wiki/Complete_information) or an inability to define parameters.) But as the sophistication of machine learning algorithms increases, and the models continue to be refined, the algorithms will get better and better at managing intractability and incomplete information. Upvotes: 1
2019/06/15
6,318
19,039
<issue_start>username_0: [Explainable artificial intelligence (XAI)](https://en.wikipedia.org/wiki/Explainable_artificial_intelligence) is concerned with the development of techniques that can enhance the interpretability, accountability, and transparency of artificial intelligence and, in particular, machine learning algorithms and models, especially black-box ones, such as artificial neural networks, so that these can also be adopted in areas, like healthcare, where the interpretability and understanding of the results (e.g. classifications) are required. Which XAI techniques are there? If there are many, to avoid making this question too broad, you can just provide a few examples (the most famous or effective ones), and, for people interested in more techniques and details, you can also provide one or more references/surveys/books that go into the details of XAI. The idea of this question is that people could easily find one technique that they could study to understand what XAI really is or how it can be approached.<issue_comment>username_1: There are a few [XAI techniques](https://christophm.github.io/interpretable-ml-book/agnostic.html) that are (partially) agnostic to the model to be interpreted * Layer-wise relevance propagation (LRP), introduced in [On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140) (2015) * Local Interpretable Model-agnostic Explanations (LIME), introduced in ["Why Should I Trust You?" 
Explaining the Predictions of Any Classifier](https://arxiv.org/pdf/1602.04938v1.pdf) (2016) * Model Agnostic Contrastive Explanations Method (MACEM), introduced in [Model Agnostic Contrastive Explanations for Structured Data](https://arxiv.org/pdf/1906.00117.pdf) (2019) There are also ML models that are not considered black boxes and that are thus [more interpretable than black boxes](https://christophm.github.io/interpretable-ml-book/simple.html), such as * linear models (e.g. linear regression) * decision trees * naive Bayes (and, in general, Bayesian networks) For a more complete list of such techniques and models, have a look at the online book [Interpretable Machine Learning: A Guide for Making Black Box Models Explainable](https://christophm.github.io/interpretable-ml-book/index.html), by <NAME>, which attempts to categorise and present the main XAI techniques. Upvotes: 3 <issue_comment>username_2: *Explainable AI* and *model interpretability* are hyper-active and hyper-hot areas of current research (think of holy grail, or something), which have been brought forward lately not least due to the (often tremendous) success of deep learning models in various tasks, plus the necessity of algorithmic fairness & accountability. Here are some state of the art algorithms and approaches, together with implementations and frameworks. --- **Model-agnostic approaches** * LIME: Local Interpretable Model-agnostic Explanations ([paper](https://arxiv.org/abs/1602.04938), [code](https://github.com/marcotcr/lime), [blog post](https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime), [R port](https://cran.r-project.org/web/packages/lime/index.html)) * SHAP: A Unified Approach to Interpreting Model Predictions ([paper](https://arxiv.org/abs/1705.07874), [Python package](https://github.com/slundberg/shap), [R package](https://github.com/redichh/ShapleyR)). 
GPU implementation for tree models by NVIDIA using RAPIDS - GPUTreeShap ([paper](https://arxiv.org/abs/2010.13972), [code](https://github.com/rapidsai/gputreeshap), [blog post](https://developer.nvidia.com/blog/explaining-and-accelerating-machine-learning-for-loan-delinquencies/)) * Anchors: High-Precision Model-Agnostic Explanations ([paper](https://homes.cs.washington.edu/%7Emarcotcr/aaai18.pdf), [authors' Python code](https://github.com/marcotcr/anchor), [Java](https://github.com/viadee/javaAnchorExplainer) implementation) * Diverse Counterfactual Explanations (DiCE) by Microsoft ([paper](https://arxiv.org/abs/1905.07697), [code](https://github.com/microsoft/dice), [blog post](https://www.microsoft.com/en-us/research/blog/open-source-library-provides-explanation-for-machine-learning-through-diverse-counterfactuals/)) * [Black Box Auditing](https://arxiv.org/abs/1602.07043) and [Certifying and Removing Disparate Impact](https://arxiv.org/abs/1412.3756) (authors' [Python code](https://github.com/algofairness/BlackBoxAuditing)) * FairML: Auditing Black-Box Predictive Models, by Cloudera Fast Forward Labs ([blog post](http://blog.fastforwardlabs.com/2017/03/09/fairml-auditing-black-box-predictive-models.html), [paper](https://arxiv.org/abs/1611.04967), [code](https://github.com/adebayoj/fairml)) SHAP seems to enjoy [high popularity](https://twitter.com/christophmolnar/status/1162460768631177216?s=11) among practitioners; the method has firm theoretical foundations on co-operational game theory (Shapley values), and it has in a great degree integrated the LIME approach under a common framework. 
Although model-agnostic, specific & efficient implementations are available for neural networks ([DeepExplainer](https://shap.readthedocs.io/en/latest/generated/shap.DeepExplainer.html)) and tree ensembles ([TreeExplainer](https://shap.readthedocs.io/en/latest/generated/shap.TreeExplainer.html), [paper](https://www.nature.com/articles/s42256-019-0138-9.epdf?shared_access_token=<PASSWORD>h<PASSWORD>8nlgU<PASSWORD>IvZstjQ7Xdc5g%3D%3D)). --- **Neural network approaches** (mostly, but not exclusively, for computer vision models) * The Layer-wise Relevance Propagation (LRP) toolbox for neural networks ([2015 paper @ PLoS ONE](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140), [2016 paper @ JMLR](http://www.jmlr.org/papers/v17/15-618.html), [project page](http://heatmapping.org/), [code](https://github.com/sebastian-lapuschkin/lrp_toolbox), [TF Slim wrapper](https://github.com/VigneshSrinivasan10/interprettensor)) * Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization ([paper](https://arxiv.org/abs/1610.02391), authors' [Torch code](https://github.com/ramprs/grad-cam), [Tensorflow code](https://github.com/Ankush96/grad-cam.tensorflow), [PyTorch code](https://github.com/meliketoy/gradcam.pytorch), yet another [Pytorch implementation](https://github.com/jacobgil/pytorch-grad-cam), Keras [example notebook](http://nbviewer.jupyter.org/github/fchollet/deep-learning-with-python-notebooks/blob/master/5.4-visualizing-what-convnets-learn.ipynb), Coursera [Guided Project](https://www.coursera.org/projects/scene-classification-gradcam)) * Axiom-based Grad-CAM (XGrad-CAM): Towards Accurate Visualization and Explanation of CNNs, a refinement of the existing Grad-CAM method ([paper](https://arxiv.org/abs/2008.02312), [code](https://github.com/Fu0511/XGrad-CAM)) * SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability ([paper](https://arxiv.org/abs/1706.05806), 
[code](https://github.com/google/svcca), [Google blog post](https://ai.googleblog.com/2017/11/interpreting-deep-neural-networks-with.html)) * TCAV: Testing with Concept Activation Vectors ([ICML 2018 paper](https://arxiv.org/abs/1711.11279), [Tensorflow code](https://github.com/tensorflow/tcav)) * Integrated Gradients ([paper](https://arxiv.org/abs/1703.01365), [code](https://github.com/ankurtaly/Integrated-Gradients), [Tensorflow tutorial](https://www.tensorflow.org/tutorials/interpretability/integrated_gradients), [independent implementations](https://paperswithcode.com/paper/axiomatic-attribution-for-deep-networks#code)) * Network Dissection: Quantifying Interpretability of Deep Visual Representations, by MIT CSAIL ([project page](http://netdissect.csail.mit.edu/), [Caffe code](https://github.com/CSAILVision/NetDissect), [PyTorch port](https://github.com/CSAILVision/NetDissect-Lite)) * GAN Dissection: Visualizing and Understanding Generative Adversarial Networks, by MIT CSAIL ([project page](https://gandissect.csail.mit.edu/), with links to paper & code) * Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions ([paper](https://arxiv.org/abs/1811.08011), [code](https://github.com/gudovskiy/e2x)) * Transparecy-by-Design (TbD) networks ([paper](https://arxiv.org/abs/1803.05268), [code](https://github.com/davidmascharka/tbd-nets), [demo](https://mybinder.org/v2/gh/davidmascharka/tbd-nets/binder?filepath=full-vqa-example.ipynb)) * [Distilling a Neural Network Into a Soft Decision Tree](https://arxiv.org/abs/1711.09784), a 2017 paper by <NAME>, with various independent [PyTorch implementations](https://paperswithcode.com/paper/distilling-a-neural-network-into-a-soft) * Understanding Deep Networks via Extremal Perturbations and Smooth Masks ([paper](https://arxiv.org/abs/1910.08485)), implemented in [TorchRay](https://github.com/facebookresearch/torchray) (see below) * Understanding the Role of Individual Units in a Deep Neural Network 
([preprint](https://arxiv.org/abs/2009.05041), [2020 paper @ PNAS](https://www.pnas.org/content/early/2020/08/31/1907375117), [code](https://github.com/davidbau/dissect), [project page](https://dissect.csail.mit.edu/)) * GNNExplainer: Generating Explanations for Graph Neural Networks ([paper](https://proceedings.neurips.cc/paper/2019/file/d80b7040b773199015de6d3b4293c8ff-Paper.pdf), [code](https://github.com/RexYing/gnn-model-explainer)) * Benchmarking Deep Learning Interpretability in Time Series Predictions ([paper](https://proceedings.neurips.cc//paper/2020/file/47a3893cc405396a5c30d91320572d6d-Paper.pdf) @ NeurIPS 2020, [code](https://github.com/ayaabdelsalam91/TS-Interpretability-Benchmark) utilizing [Captum](https://captum.ai)) * Concept Whitening for Interpretable Image Recognition ([paper](https://www.nature.com/articles/s42256-020-00265-z), [preprint](https://arxiv.org/abs/2002.01650), [code](https://github.com/zhiCHEN96/ConceptWhitening)) --- **Libraries & frameworks** As interpretability moves toward the mainstream, there are already frameworks and toolboxes that incorporate more than one of the algorithms and techniques mentioned and linked above; here is a partial list: * The ELI5 Python library ([code](https://github.com/TeamHG-Memex/eli5), [documentation](https://eli5.readthedocs.io/en/latest/)) * DALEX - moDel Agnostic Language for Exploration and eXplanation ([homepage](https://modeloriented.github.io/DALEX/), [code](https://github.com/ModelOriented/DALEX), [JMLR paper](https://www.jmlr.org/papers/volume19/18-416/18-416.pdf)), part of the [DrWhy.AI](https://modeloriented.github.io/DrWhy/) project * The What-If tool by Google, a feature of the open-source TensorBoard web application, which let users analyze an ML model without writing code ([project page](https://pair-code.github.io/what-if-tool/), [code](https://github.com/pair-code/what-if-tool), [blog post](https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html)) * The 
Language Interpretability Tool (LIT) by Google, a visual, interactive model-understanding tool for NLP models ([project page](https://pair-code.github.io/lit/), [code](https://github.com/PAIR-code/lit/), [blog post](https://ai.googleblog.com/2020/11/the-language-interpretability-tool-lit.html)) * Lucid, a collection of infrastructure and tools for research in neural network interpretability by Google ([code](https://github.com/tensorflow/lucid); papers: [Feature Visualization](https://distill.pub/2017/feature-visualization/), [The Building Blocks of Interpretability](https://distill.pub/2018/building-blocks/)) * [TorchRay](https://github.com/facebookresearch/torchray) by Facebook, a PyTorch package implementing several visualization methods for deep CNNs * iNNvestigate Neural Networks ([code](https://github.com/albermax/innvestigate), [JMLR paper](https://jmlr.org/papers/v20/18-540.html)) * tf-explain - interpretability methods as Tensorflow 2.0 callbacks ([code](https://github.com/sicara/tf-explain), [docs](https://tf-explain.readthedocs.io/en/latest/), [blog post](https://blog.sicara.com/tf-explain-interpretability-tensorflow-2-9438b5846e35)) * InterpretML by Microsoft ([homepage](https://interpret.ml/), [code](https://github.com/Microsoft/interpret) still in alpha, [paper](https://arxiv.org/abs/1909.09223)) * Captum by Facebook AI - model interpetability for Pytorch ([homepage](https://captum.ai), [code](https://github.com/pytorch/captum), [intro blog post](https://ai.facebook.com/blog/open-sourcing-captum-a-model-interpretability-library-for-pytorch/)) * Skater, by Oracle ([code](https://github.com/oracle/Skater), [docs](https://oracle.github.io/Skater/)) * Alibi, by SeldonIO ([code](https://github.com/SeldonIO/alibi), [docs](https://docs.seldon.io/projects/alibi/en/stable/)) * AI Explainability 360, commenced by IBM and moved to the Linux Foundation ([homepage](https://ai-explainability-360.org/), [code](https://github.com/Trusted-AI/AIX360), 
[docs](https://aix360.readthedocs.io/en/latest/), [IBM Bluemix](http://aix360.mybluemix.net/), [blog post](https://www.ibm.com/blogs/research/2019/08/ai-explainability-360/)) * Ecco: explaining transformer-based NLP models using interactive visualizations ([homepage](https://www.eccox.io/), [code](https://github.com/jalammar/ecco), [article](https://jalammar.github.io/explaining-transformers/)). * Recipes for Machine Learning Interpretability in H2O Driverless AI ([repo](https://github.com/h2oai/driverlessai-recipes/tree/master/explainers)) --- **Reviews & general papers** * [A Survey of Methods for Explaining Black Box Models](https://dl.acm.org/doi/10.1145/3236009) (2018, ACM Computing Surveys) * [Definitions, methods, and applications in interpretable machine learning](https://www.pnas.org/content/116/44/22071) (2019, PNAS) * [Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead](https://www.nature.com/articles/s42256-019-0048-x) (2019, Nature Machine Intelligence, [preprint](https://arxiv.org/abs/1811.10154)) * [Machine Learning Interpretability: A Survey on Methods and Metrics](https://www.mdpi.com/2079-9292/8/8/832) (2019, Electronics) * [Principles and Practice of Explainable Machine Learning](https://arxiv.org/abs/2009.11698) (2020, preprint) * [Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges](https://arxiv.org/abs/2010.09337) (keynote at [2020 ECML XKDD workshop](https://kdd.isti.cnr.it/xkdd2020/) by <NAME>, [video & slides](https://slideslive.com/38933066/interpretable-machine-learning-state-of-the-art-and-challenges)) * [Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI](https://www.sciencedirect.com/science/article/pii/S1566253519308103) (2020, Information Fusion) * [Counterfactual Explanations for Machine Learning: A Review](https://arxiv.org/abs/2010.10596) (2020, preprint, 
[critique](https://twitter.com/yudapearl/status/1321038352355790848?s=11) by <NAME>) * [Interpretability 2020](https://ff06-2020.fastforwardlabs.com/), an applied research report by Cloudera Fast Forward, updated regularly * [Interpreting Predictions of NLP Models](https://github.com/Eric-Wallace/interpretability-tutorial-emnlp2020) (EMNLP 2020 tutorial) * Explainable NLP Datasets ([site](https://exnlpdatasets.github.io/), [preprint](https://arxiv.org/abs/2102.12060), [highlights](https://twitter.com/sarahwiegreffe/status/1365057786250489858)) * [Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges](https://arxiv.org/abs/2103.11251) --- **eBooks** (available online) * [Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/), by <NAME>, with [R code](https://github.com/christophM/iml) available * [Explanatory Model Analysis](https://pbiecek.github.io/ema/), by DALEX creators <NAME> and <NAME>, with both R & Python code snippets * [An Introduction to Machine Learning Interpretability](https://www.h2o.ai/wp-content/uploads/2019/08/An-Introduction-to-Machine-Learning-Interpretability-Second-Edition.pdf) (2nd ed. 
2019), by H2O --- **Online courses & tutorials** * [Machine Learning Explainability](https://www.kaggle.com/learn/machine-learning-explainability), Kaggle tutorial * [Explainable AI: Scene Classification and GradCam Visualization](https://www.coursera.org/projects/scene-classification-gradcam), Coursera guided project * [Explainable Machine Learning with LIME and H2O in R](http://Explainable%20Machine%20Learning%20with%20LIME%20and%20H2O%20in%20R), Coursera guided project * [Interpretability and Explainability in Machine Learning](https://interpretable-ml-class.github.io/), Harvard COMPSCI 282BR --- **Other resources** * [explained.ai](https://explained.ai/) blog * A [Twitter thread](https://twitter.com/ledell/status/995930308947140608), linking to several interpretation tools available for R * A whole bunch of resources in the [Awesome Machine Learning Interpetability](https://github.com/jphall663/awesome-machine-learning-interpretability) repo * The online comic book (!) [*The Hitchhiker's Guide to Responsible Machine Learning*](https://betaandbit.github.io/RML/#p=1), by the team behind the textbook *Explanatory Model Analysis* and the DALEX package mentioned above ([blog post](https://medium.com/responsibleml/the-hitchhikers-guide-to-responsible-machine-learning-d9179a047319) and [backstage](https://medium.com/responsibleml/backstage-the-hitchhikers-guide-to-responsible-machine-learning-3a8735a0f8bb)) Upvotes: 6 [selected_answer]<issue_comment>username_3: <NAME>'s last book [*Artificial Beings The Conscience of a Conscious Machine*](https://www.kobo.com/us/en/ebook/artificial-beings) has some interesting chapters (§2, §3, §5, §6, §7) related to the questions of explanations and meta-explanations in AI systems. 
It describes a reflexive, "expert system"-like, meta-knowledge-based approach to explainable AI. You could also read his papers [*Implementation of a reflective system*](https://www.sciencedirect.com/science/article/abs/pii/0167739X96000118) (1996) and [*A Step toward an Artificial AI Scientist*](http://jacques.pitrat.pagesperso-orange.fr/A%20Step%20toward%20an%20Artificial%20AI%20Scientist.pdf) online (there could be a typo in it: "pile" is the French word for stack, including the call stack). You might also look into J.Pitrat's [blog](http://bootstrappingartificialintelligence.fr/WordPress3/) and into the ongoing [RefPerSys](http://refpersys.org/) project (as of November 2020). PS. J.Pitrat (born in 1934) passed away in October 2019. French readers could see [this](https://afia.asso.fr/journee-hommage-j-pitrat/). His blog might disappear in a few months. Upvotes: 0
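To make the "model-agnostic" idea behind methods like LIME concrete, here is a toy sketch of the perturb-and-fit scheme at its heart (everything here - the black-box model, the kernel width, the sample count - is hypothetical and chosen for illustration): sample points around the instance, query the black box, and fit a proximity-weighted local linear surrogate whose coefficients serve as feature attributions.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    """A stand-in 'opaque' model: only feature 0 matters."""
    return (X[:, 0] > 0.5).astype(float)

def explain_locally(x, model, n=500, scale=0.3):
    """LIME-style local attribution: perturb x, query the model,
    fit a proximity-weighted linear surrogate, return its slopes."""
    X = x + rng.normal(scale=scale, size=(n, x.size))
    y = model(X)
    w = np.sqrt(np.exp(-np.sum((X - x) ** 2, axis=1)))  # proximity weights
    A = np.c_[np.ones(n), X] * w[:, None]               # weighted design matrix
    coef, *_ = np.linalg.lstsq(A, y * w, rcond=None)
    return coef[1:]                                     # skip the intercept

attr = explain_locally(np.array([0.5, 0.5, 0.5]), black_box)
# The surrogate attributes the prediction mainly to feature 0.
assert abs(attr[0]) > abs(attr[1]) and abs(attr[0]) > abs(attr[2])
```

The real LIME additionally works on interpretable representations (e.g. superpixels or token presence) and uses sparse linear models, but the perturb, weight, and regress loop is the same.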
2019/06/16
459
1,871
<issue_start>username_0: For the purposes of this question, let's suppose that an artificial general intelligence (AGI) is defined as a machine that can successfully perform any intellectual task that a human being can [[1]](https://en.wikipedia.org/wiki/Artificial_general_intelligence). Would an AGI have to be Turing complete?<issue_comment>username_1: My answer is yes, but in a trivial way. The least you would expect from an intelligent agent is that it is able to execute a given Turing machine on a given input. This requires actually no intelligence, just following rules. If, however, you are referring to the capability of predicting whether the Turing machine will terminate on the given input, that is another matter. I don't think it is reasonable to expect such "undecidable" computational power for an agent to be generally intelligent. Upvotes: 1 <issue_comment>username_2: A system is *Turing complete* if it can be used to simulate any Turing machine. Given the Church-Turing thesis (which has not yet been proven), [a human brain can compute any function that a Turing machine can (given enough time and space)](https://plato.stanford.edu/entries/church-turing/#MeanCompCompTuriThes), [but the reverse is not necessarily true, given that the human brain might be able to compute more functions than a Turing machine](http://web.mit.edu/asf/www/PopularScience/Friedman_BrainsAndTuringMachines_2002.pdf). Intuitively, humans are thus Turing complete (even though, to prove this, you need a formal model of the human), that is, given enough time and space, a human can compute anything that a Turing machine can. Hence, an AGI, defined as an AI with human-level intelligence, needs to be Turing complete, otherwise there would be at least one function that a human can calculate but the AGI cannot, which would not make it as general as a human. Upvotes: 2
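Since both answers hinge on what "simulate any Turing machine" means, here is a minimal single-tape Turing machine interpreter as a sketch (the encoding of states, symbols and moves is arbitrary); a system is Turing complete precisely when it can express the equivalent of this loop for every transition table:

```python
def run_tm(transitions, tape, state="q0", halt="qH", max_steps=10_000):
    """Simulate a single-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, written_symbol, move),
    with move in {-1, +1} and "_" as the blank symbol.
    Returns the tape contents once `halt` is reached.
    """
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        state, cells[head], move = transitions[(state, cells.get(head, "_"))]
        head += move
    return "".join(cells[i] for i in sorted(cells))

# A tiny machine that inverts a binary string, then halts on the blank.
flip = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("qH", "_", -1),
}
result = run_tm(flip, "0110").rstrip("_")
assert result == "1001"
```

The `max_steps` cap is only there because, by the halting problem, no general procedure can decide in advance whether a given machine terminates, which is exactly the "undecidable" power the first answer declines to require of an AGI.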
2019/06/16
625
2,615
<issue_start>username_0: Problem Statement ----------------- I've built a classifier to classify a dataset consisting of n samples and four classes of data. To this end, I've used pre-trained VGG-19, pre-trained Alexnet and even LeNet (with cross-entropy loss). However, I just changed the softmax layer's architecture and placed just four neurons for that (because my dataset includes just four classes). Since the dataset classes have a striking resemblance to each other, this classifier was unable to classify them and I was forced to use other methods. During the training section, after some epochs, loss decreased from approximately 7 to approximately 1.2, but there were no changes in accuracy and it was frozen on 25% (random precision). In the best epochs, the accuracy just reached near 27% but it was completely unstable. Question -------- How is it justifiable? If loss reduction means model improvement, why doesn't accuracy increase? How is it possible to the loss decreases near 6 points (approximately from 7 to 1) but nothing happens to accuracy at all?
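One concrete way to see how this can happen: cross-entropy rewards *probability* mass on the true class, while accuracy only looks at the *argmax*. A numerical sketch (the per-sample probabilities are hypothetical, chosen so the numbers roughly match the 7 to 1.2 drop described in the question):

```python
import numpy as np

def cross_entropy(p_true):
    """Per-sample cross-entropy, given the probability assigned to the true class."""
    return -np.log(p_true)

# Early in training the model gives the true class probability 0.001;
# later it gives 0.3, while some wrong class still holds 0.6.
loss_before = cross_entropy(0.001)   # ~6.9
loss_after = cross_entropy(0.3)      # ~1.2

assert loss_after < loss_before      # the loss improved dramatically...
assert 0.3 < 0.6                     # ...but the wrong class still wins argmax,
                                     # so accuracy stays at chance level
```

So a large loss reduction is consistent with frozen accuracy whenever the model moves probability toward the true class without ever making it the top-ranked one.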
2019/06/18
430
1,886
<issue_start>username_0: What are some interesting myths of Artificial Intelligence and what are the facts behind them?<issue_comment>username_1: In artificial intelligence, even though not everyone agrees, a common (and maybe the biggest) myth is that of the [intelligence explosion](https://en.wikipedia.org/wiki/Technological_singularity), which some people claim will happen (without considering physical limits or knowing anything about thermodynamics). Upvotes: 2 <issue_comment>username_2: As artificial intelligence rapidly enters our lives, myths around AI are being fabricated just as rapidly. Before getting into the details, one needs to clear up these myths. **Myth 1: AI will take away our jobs.** **Reality:** AI is not completely different from other technologies. It will not take away jobs, but it will change the way we work and help us increase productivity by removing monotonous work. **Myth 2: Artificial intelligence will take over the world.** **Reality:** In my view, AI controlling the world is not possible unless we give it that power. AI and robots will assist us in our work and help us solve tedious tasks that are difficult for humans. **Myth 3: Intelligent machines can learn on their own.** **Reality:** It may seem that an intelligent machine can learn on its own, but in fact an AI engineer or AI specialist has to develop the algorithm, feed the machine with datasets and instructions, monitor it continuously and, most importantly, update the software regularly. **Myth 4: Artificial intelligence, machine learning and deep learning are all the same.** **Reality:** Not at all. To be clear, machine learning is a part of AI and deep learning is a subset of ML. All three - AI, ML and DL - are different but interrelated. Upvotes: 3
2019/06/18
754
3,015
<issue_start>username_0: I’m looking for some help with my neural network. I’m working on a binary classification with a recurrent neural network that predicts stock movements (up and down). Let’s say I’m studying EUR/USD: I’m using all the data from 2000 to 2017 to train, and I’m trying to predict every day of 2018. The issue I’m dealing with right now is that my program gives me different answers every time I run it, even without changing anything, and I don’t understand why. The accuracy during training from 2000 to 2017 is around 95%, but I’ve noticed another issue. When I train it with 1 new data point every day in 2018, I thought 2 epochs would be enough - if it doesn’t find the right answer the first time, then it knows what the answer is, since the problem is binary - but apparently that doesn’t work. Do you guys have any suggestion to **stabilize** my NN?<issue_comment>username_1: Firstly, the issue that the program gives different answers every time without any changes can be due to a couple of things. * Random initialization of weights and biases. This can be solved by setting a `seed` manually at the start of the program. * Make sure you have set the model to evaluation (testing) mode after training. For some frameworks, this has to be done manually. Secondly, regarding your expected results: to compute a proper accuracy metric, you will have to split your dataset into training and testing data, making sure there is no overlap between them. This might be an issue, as you have stated training on data up to 2017 and then again training on data from 2018. Lastly, don't expect that the model will know that its output is wrong and directly flip it just because it's binary classification. This is not how neural networks work. The model fits the solution better by gradually updating its weights and biases over a number of iterations, so it will take a number of epochs to learn new trends in the 2018 data.
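As a sketch of the seeding step (assuming NumPy and Python's standard library; deep-learning frameworks such as Keras or PyTorch provide analogous seeding calls of their own):

```python
# Sketch: fix all relevant RNG seeds so repeated runs of the same
# script produce identical results. Only NumPy and the stdlib are
# shown; frameworks have additional seeds of their own to set.
import random

import numpy as np

SEED = 42

def seed_everything(seed: int) -> None:
    random.seed(seed)     # Python's built-in RNG
    np.random.seed(seed)  # NumPy's global RNG (used by many ML libraries)

seed_everything(SEED)
a = np.random.rand(3)
seed_everything(SEED)
b = np.random.rand(3)
assert np.allclose(a, b)  # same seed, same "random" draws
```

Call `seed_everything` once at the very top of the script, before the model is built.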
Upvotes: 1 <issue_comment>username_2: This may not directly answer your question, but predicting market movement based on past prices is probably not very sensible. Assuming that future samples are drawn from the same population as the past samples basically violates the foundations of AI/ML and, quite frankly, of statistics. See relevant figure below. As far as accuracy goes, it is all relative. If you have a CPU which is right 0.999 of the time, you have a useless piece of silicon, but if you have 0.501 accuracy on stock market prediction, then you are the richest man in the world. That said, historical stock data is just not a phenomenon that repeats itself based on its own underlying distribution. > > Always remember that when it comes to markets, past performance is *not* a good predictor of future returns—looking in the rear-view mirror is a bad way to drive. Machine learning, on the other hand, is applicable to datasets where the past is a good predictor of the future. > > > — *Deep learning with python*, <NAME> > > > Upvotes: 0
2019/06/19
841
3,447
<issue_start>username_0: Everyone is afraid of losing their job to robots. Will or does artificial intelligence cause mass unemployment?<issue_comment>username_1: Up to a point. Some jobs will IMHO not easily be replaced by robots, others more easily. Some could be, but I hope common sense will prevail and stop that. Manual jobs: fruit picking, warehouse picking, and cooking are some jobs that need really subtle hand control, and precise handling of fragile items. I think those will be harder to automate than eg car factory robots. Customer facing roles: receptionists have to do a wide variety of tasks. While some of the tasks might be aided by AI systems, a good PA or receptionist cannot easily be replaced. Also, many people would much rather interact with a fellow human than with a machine, at least in some situations. Judgments: a lot of jobs require judging a situation, balancing risks, and making 'gut' decisions. While AI systems can do them, I think many would still require human intervention. I for one wouldn't like to be sentenced by a robo-judge, or examined by a robo-doctor. True, humans also make mistakes, but they would hopefully err in less potentially disastrous ways. Administration: again, many tasks can be assisted, which would lead to a reduction in head count, but a general AI is still off the horizon, so you'd still need humans. Creative arts: not likely. Would you want to read computer-generated novels, or look at computer-generated paintings? I could go on... in general: I think AI systems will make a lot of tasks faster and easier, so you need fewer people to do them. Some jobs cannot realistically be done by machines in the near-to-mid future, so overall we need not worry too much. Upvotes: 0 <issue_comment>username_2: The nuanced, boring answer is that it depends on your definition of AI. Most people wouldn't say that the rule-based systems designed in the 70's are AI. 
The amazing leaps in machine learning are almost taken for granted as well (think about how normal speech and facial recognition have become). This is known as the [AI effect](https://en.wikipedia.org/wiki/AI_effect): when we become accustomed to the technology, it loses its 'magical aspect' and is thus no longer labelled as AI. Since AI is so diverse and difficult to define, the question becomes incredibly abstract. Did Siri cause all secretaries to become unemployed? Did TurboTax replace all accountants? Some parts of AI will affect jobs, or even make them redundant, yes. On the other hand, it will give rise to new jobs as well. It is therefore impossible to generalize it as 'AI will cause massive unemployment'. This is not a new phenomenon, however; it has been part of the human economy ever since the industrial revolution (probably even before that, but I am not a historian). The invention of the car crippled the horse-and-wagon industry, but it brought along new jobs as well. Upvotes: 2 <issue_comment>username_3: Yes, but you should be happy about it. It is like saying that the use of machines in industry causes "mass unemployment". Unfortunately, modern tendencies are not even like that: industry does not have so big an interest in full automation because of the huge amount of cheap labour from immigrants. That is the bad factor, not "AI taking your job". Furthermore, a robot that takes your job need not be really smart: it could be just some dumb machine without AI, simply doing what is in its program... Upvotes: 0
2019/06/20
1,048
4,313
<issue_start>username_0: I have the following problem. We have $4$ separate *discrete* inputs, which can take any integer value between $-63$ and $63$. The output is also supposed to be a discrete value between $-63$ and $63$. Another constraint is that the solution should allow for online learning with singular values or mini-batches, as the dataset is too big to load all the training data into memory. I have tried the following method, but the predictions are not good. I created an MLP or feedforward network with $4$ inputs and $127$ outputs. The inputs are being fed without normalization. The number of hidden layers is $4$ with $[8,16,32,64]$ units in each (respectively). So, essentially, this treats the problem like a sequence classification problem. For training, we feed the non-normalized input along with a one-hot encoded vector for that specific value as output. The inference is done the same way: finding the hottest output and returning that as the next number in the sequence.
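The encoding and decoding described in the question can be sketched as follows (the helper names are made up for illustration; values in $[-63, 63]$ map to the $127$ output classes):

```python
import numpy as np

LOW, HIGH = -63, 63
N_CLASSES = HIGH - LOW + 1  # 127 output units, one per discrete value

def to_one_hot(value: int) -> np.ndarray:
    """Map a value in [-63, 63] to a 127-way one-hot target vector."""
    vec = np.zeros(N_CLASSES)
    vec[value - LOW] = 1.0
    return vec

def from_logits(logits: np.ndarray) -> int:
    """Inverse at inference time: the 'hottest' output index, mapped
    back to the value it represents."""
    return int(np.argmax(logits)) + LOW

assert N_CLASSES == 127
assert from_logits(to_one_hot(-63)) == -63
assert from_logits(to_one_hot(63)) == 63
```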
2019/06/21
1,608
6,053
<issue_start>username_0: I recently got an 18-month postdoc position in a math department. It's a position with relatively light teaching duties and a lot of freedom about what type of research I want to do. Previously, I was mostly doing research in probability and combinatorics, but I am thinking of doing a bit more application-oriented work, e.g., AI. (There is also the consideration that there is a good chance that I will not get a tenure-track position at the end of my current position. Learning a bit of AI might be helpful for other career possibilities.) What sort of **mathematical problems** are there in AI that people are working on? From what I have heard, there are people studying * [Deterministic Finite Automaton](https://en.wikipedia.org/wiki/Deterministic_finite_automaton) * [Multi-armed bandit problems](https://en.wikipedia.org/wiki/Multi-armed_bandit) * [Monte Carlo tree search](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search) * [Community detection](https://en.wikipedia.org/wiki/Community_structure) Any other examples?<issue_comment>username_1: In *artificial intelligence* (sometimes called *machine intelligence* or *computational intelligence*), there are several problems that are based on mathematical topics, especially optimization, statistics, probability theory, calculus and linear algebra. [<NAME>](http://www.hutter1.net/) has worked on a mathematical theory for *artificial general intelligence*, called [**AIXI**](https://jan.leike.name/AIXI.html), which is based on several mathematical and computer science concepts, such as reinforcement learning, probability theory (e.g. Bayes' theorem and related topics), [measure theory](http://mathworld.wolfram.com/MeasureTheory.html), [algorithmic information theory](http://www.scholarpedia.org/article/Algorithmic_information_theory) (e.g.
Kolmogorov complexity), optimisation, [Solomonoff induction](https://wiki.lesswrong.com/wiki/Solomonoff_induction), [universal Levin search](http://www.scholarpedia.org/article/Universal_search) and [theory of computation](https://en.wikipedia.org/wiki/Theory_of_computation) (e.g. universal Turing machines). His book [Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability](http://lib.slon.pp.ru/ComputerScience/Sequential%20Decisions%20Based%20On%20Algorithmic%20Probability.pdf), which is a highly technical and mathematical book, describes his theory of optimal Bayesian non-Markov reinforcement learning agents. [Here](https://ai.stackexchange.com/q/21427/2444) I list other similar works. There is also the research field called [**computational learning theory**](http://www.learningtheory.org/), which is devoted to the design and analysis of machine learning algorithms (from a statistical perspective, known as [*statistical learning theory*](https://www.math.arizona.edu/%7Ehzhang/math574m/Read/vapnik.pdf), or from an algorithmic perspective, *algorithmic learning theory*). More precisely, the field focuses on the rigorous study and mathematical analysis of machine learning algorithms, using techniques from fields such as probability theory, statistics, optimization, information theory and geometry. Several people have worked on [computational learning theory](https://ai.stackexchange.com/q/20355/2444), including <NAME> and [<NAME>](https://en.wikipedia.org/wiki/Vladimir_Vapnik). There is also a lot of research effort dedicated to approximations (heuristics) of [combinatorial optimization](https://en.wikipedia.org/wiki/Combinatorial_optimization) and [NP-complete](https://en.wikipedia.org/wiki/NP-completeness) problems, such as [ant colony optimization](http://www.scholarpedia.org/article/Ant_colony_optimization).
There is also some work on [AI-completeness](https://en.wikipedia.org/wiki/AI-complete), but this has not received much attention (compared to the other research areas mentioned above). Upvotes: 5 [selected_answer]<issue_comment>username_2: Most of the math work being done in AI that I'm familiar with is already covered in username_1's answer. One thing that I do not believe is covered yet in that answer is **proving algorithmic equivalence** and/or **deriving equivalent algorithms**. One of my favourite papers on this is [Learning to Predict Independent of Span](https://arxiv.org/abs/1508.04582) by <NAME> and <NAME>. The basic idea is that we may first formulate an algorithm (in math form, for instance some update rules/equations for parameters that we're training) in one way, and then find different update rules/equations (i.e. a different algorithm) for which we can prove that it is equivalent to the first one (i.e. always results in the same output). A typical case where this is useful is if the first algorithm is easy to understand / appeals to our intuition / is more convenient for convergence proofs or other theoretical analysis, and the second algorithm is more efficient (in terms of computation, memory requirements, etc.). Upvotes: 3 <issue_comment>username_3: Specifically for the mathematical apparatus of Neural Networks: [random matrix theory](https://en.wikipedia.org/wiki/Random_matrix). [Non-asymptotic random matrix theory](https://arxiv.org/abs/1011.3027) was used in some [proofs of convergence of gradient descent for Neural Networks](https://arxiv.org/abs/1811.08888), and [high dimensional random landscapes](https://arxiv.org/abs/1806.05294), in connection with the Hessian spectrum, are related to [loss surfaces of Neural Networks](https://arxiv.org/abs/1412.0233).
[Topological data analysis](https://en.wikipedia.org/wiki/Topological_data_analysis) is another area of intense research related to ML and AI, [applied to Neural Networks](https://arxiv.org/abs/1905.12200). There has also been some work on the [Tropical Geometry of Neural Networks](https://arxiv.org/abs/1805.07091). [Homotopy Type Theory](https://en.wikipedia.org/wiki/Homotopy_type_theory) also has a [connection to AI](https://andrewnetwork.github.io/minor/math/2017/07/29/hotti.html). Upvotes: 2
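As a concrete taste of one of the areas listed in the question, here is a minimal epsilon-greedy simulation of a multi-armed bandit (a sketch only, using the standard library; with probability epsilon the agent explores a random arm, otherwise it exploits the arm with the best empirical mean):

```python
import random

def epsilon_greedy(true_means, steps=10000, eps=0.1, seed=0):
    """Minimal epsilon-greedy bandit; returns the average reward."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms     # pulls per arm
    values = [0.0] * n_arms   # empirical mean reward per arm
    total = 0.0
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_arms)  # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = rng.gauss(true_means[arm], 1.0)  # noisy reward
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
        total += reward
    return total / steps

avg = epsilon_greedy([0.1, 0.5, 0.9])  # best arm has true mean 0.9
```

After enough steps, `avg` approaches the best arm's mean (minus the small cost of continued exploration), which is the regret-minimisation trade-off that bandit theory analyses.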
2019/06/21
1,379
6,382
<issue_start>username_0: When describing tensors of higher order, I feel like there is an overloading of the term *dimension*, as it may be used to describe the order of the tensor but also the dimensionality of the... "orders"? Assume one describes the third-order tensor produced by a convolutional layer and wants to refer to its width and height. Do you say spatial *dimensions*? Would you write about the channel *dimension*? Or rather the direction? Saying "spatial order" feels really weird. But staying with dimensions leads to awkward sentences like "*The spatial dimensions are of equal dimensionality.*" (Disclaimer: Obviously you can avoid the issue here by restructuring, but doing this at every occasion does not feel like a satisfactory solution.)<issue_comment>username_1: By definition, tensors can be of any order (usually named differently if the order is less than three). So, I use $d\_i$ to indicate the dimensionality of the $i$th facet. Unless you have three- or four-order tensors in which each facet has a very specific meaning, naming the orders by terms such as *spatial* or *time* would be limiting. Upvotes: 0 <issue_comment>username_2: **Ambiguity in Terms** You are correct that there is something like overloading occurring in tensor terminology in posts and in software libraries. Confusing jargon often appears when those without the mathematical background use mathematical terms. You rarely find this confusion when reading NASA, Cambridge, MIT, or Caltech materials. **Tensors, Rank, Dimensions, and Channels** A tensor is a grouping of dimensions. The grouping typically represents a relation between quantities describing a system. The order (or rank) of the tensor is the number of dimensions in the grouping. When this mathematical idea is applied formally to electrical engineering, chemical systems, biological systems, or computing systems, we can say a tensor is the grouping of the inputs and outputs of that system, abstracting the relation between them.
The order (or rank) of the tensor is the total number of signal channels going to or from the system. The rank is simply the number of inputs plus the number of outputs. That original conception was lost a little when the term tensor began to be applied to grouping inputs in one tensor and grouping outputs in another tensor, without any connection with their relationship, which is not formally correct. Those are simply vectors in mathematics, represented by arrays in computer programs. A labelled data set prepared for machine learning is formally a discrete tensor; however, it does not have a mathematical model applied to it yet. When convergence occurs in an artificial network, the result of training is a set of parameters that, if later applied to new inputs, will predict the outputs based on the net and its trained parameters. The configuration of the network cells has become the model, and the trained network has become a tensor expression approximating the phenomenon that the training data represents. **The Specific Example in the Question** A third-order tensor cannot have only two dimensions, width and height. It must have three dimensions, possibly width, height, and instantaneous value. Width and height are the independent variables, and instantaneous value is the dependent variable. So we have a rank of $1 + 2 = 3$. There is no clear and formal definition of what dimensionality means, so it would be useful to discontinue its use until academic textbooks contain a consistent definition, but that's unlikely to occur. It will likely continue to be used by non-mathematicians somewhat arbitrarily. Therefore, stating that two tensors are of equal dimensionality remains ambiguous. **Domain and Range** The correct terms are domain and range. These terms unambiguously describe the variability of the independent and dependent variables of a relation respectively.
In the case of an array of samples describing a tensor relationship between horizontal position, vertical position, and brightness, the horizontal domain corresponds to the width and the vertical domain corresponds to the height. Consequently, one would not say, "Width and height are the spatial dimensions," or, "width and height are the channel dimensions." It could unambiguously be stated, "Width in this article shall refer to the maximum horizontal pixel position minus its minimum plus one." Without saying something that complex, albeit accurate, this would be a clear statement. > > The fourth convolution layer in the network has a domain of 1280, 768 and a range of IEEE 32 bit floats. > > > It is easily understood and close enough to technically correct to not cause readers with mathematical training to dismiss the writer as uneducated. The equality question is now solved. The equality of domains could be stated clearly and unambiguously. Whether it sounds unusual is not particularly relevant. > > The horizontal and vertical domains of network layers two and three are equal, both being [1280, 768]. > > > (The reason [1280, 768] is often seen is because compilers of many languages, starting with FORTRAN or maybe even earlier, interpret that as a two dimensional domain for a two dimensional array. So that can be used among software people, although [768, 1280] is more likely used in programs since it is a video tradition to traverse left to right and then top to bottom.) **Misuse of the Term Tensor** It is incorrect to say that adding two arrays is an example of tensors manipulated in software. Tensors are more abstract than arrays. If one were to ... * Make symbolic manipulations to reduce a complex tensor equation to a simple one, * Show that two tensor expressions are equal for given domains, or * Determine the correlation of a set of discrete points to a model represented as a tensor equation, ... then one would be performing tensor mathematics.
There are many pieces of software that perform such symbolic manipulations, but assigning a tensor to an input sample in machine learning is not tensor math. Tensor object classes that are not part of a system of symbolic manipulation are simply an abstraction of the typical programming language's array structure, parameterizing rank in the constructor. Such a trivial abstraction is not nearly as deep as symbolic manipulation of tensor expressions. Upvotes: 2 [selected_answer]
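The two senses of "dimension" discussed above can be shown with an array library (NumPy used here as an assumption; other array libraries behave the same way): `ndim` is the order (rank), while `shape` gives the per-axis dimensionality.

```python
import numpy as np

# A third-order tensor stored as an array: height x width x channels.
t = np.zeros((768, 1280, 3))

assert t.ndim == 3                # order (rank): number of axes
assert t.shape == (768, 1280, 3)  # size (domain) of each axis
```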
2019/06/23
1,285
5,928
<issue_start>username_0: I need to develop a convolutional neural network whose inputs are 1-channel images, but I don't know how to do it, given that most libraries use 3-channel images. Should I convert my images to RGB? Is there any way to implement a CNN that receives 1-channel images as input?
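In practice, most libraries simply let you declare a single input channel (e.g. `in_channels=1` for PyTorch's `Conv2d`, or `input_shape=(H, W, 1)` in Keras), so converting to RGB is unnecessary. As a framework-free illustration, here is a convolution over a 1-channel image in plain NumPy (a sketch; the function name is made up):

```python
import numpy as np

def conv2d_single_channel(image, kernel):
    """Valid-mode 2-D convolution (really cross-correlation, as in most
    deep-learning libraries) for a 1-channel image. Nothing about
    convolution requires 3 channels."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)  # a 4x4 grayscale "image"
k = np.ones((3, 3)) / 9.0                       # 3x3 averaging kernel
out = conv2d_single_channel(img, k)             # shape (2, 2)
```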
2019/06/23
547
2,314
<issue_start>username_0: I have a bunch of images from different trucks passing on the road. Here is an example. [![enter image description here](https://i.stack.imgur.com/3mHXL.png)](https://i.stack.imgur.com/3mHXL.png) The truck needs to be at a certain distance from the border of the lane. Some of the trucks are way too close to the border (which you can see on the shoulder of the road). I want to find a way to measure the distance between the truck and the border of the lane and, more importantly, to detect whether a truck is inside its lane. I would like to solve this problem with a deep-learning-based classifier or with image-processing techniques. Painting the ground is also possible if I can train a classification algorithm with painted images.<issue_comment>username_1: One possible approach is to use an algorithm which detects lines (e.g. Hough lines, or any deep neural net trained to detect lanes) and use some threshold range so that we can get the lane and the edges of the truck; after extracting the lines, you can easily find the distance between them. You then need to experiment on a few images to get the threshold distance that you expect the truck to maintain, as the real distance and the distance calculated from images are not the same. --- If you want to classify using deep learning, you may need to preprocess the images before sending them. As it will be very difficult to learn to classify directly from the raw image, you may need to first detect the lanes, then apply a mask, and then send the masked image to your network to make it converge. Upvotes: 2 <issue_comment>username_2: If you know the latitude-longitude of the trucks and of the center, you can do the following. Given the latitude-longitude location of the center and the radius (say R) in which you want to search for the presence of a truck around the center, you can find the latitude-longitude bounds of the space around the center within radius R: [Link1](http://janmatuschek.de/LatitudeLongitudeBoundingCoordinates) You can find a Python implementation here: [Link2](https://github.com/saivarshittha/latitude_longitude_bounds/blob/master/latitude_longitude_bounds.ipynb) Once you know the bounds, you can simply check whether the truck's location falls within the latitude-longitude bounds. Upvotes: 0
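Once lines are extracted (e.g. as endpoint pairs from a Hough transform), the distance step is plain geometry. A minimal sketch (the function name is hypothetical; in practice the endpoints would come from the line detector):

```python
import math

def point_to_line_distance(px, py, x1, y1, x2, y2):
    """Perpendicular distance from point (px, py) to the infinite line
    through (x1, y1) and (x2, y2), e.g. from a truck-edge point to a
    detected lane line."""
    num = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
    den = math.hypot(y2 - y1, x2 - x1)
    return num / den

# Vertical lane line x = 100; truck-edge point at x = 130.
d = point_to_line_distance(130, 50, 100, 0, 100, 200)  # 30.0 pixels
```

Note that this distance is in pixels; converting it to a real-world distance still requires camera calibration or a known reference length in the scene.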
2019/06/24
640
2,519
<issue_start>username_0: Actually, I am a complete beginner, and I've never known what a neural network is. Now I am trying to understand how to design a simple neural network for this problem: I'd like to build a neural network that, after learning, could predict the mark of a passed exam (for example, math). There are factors that influence a mark: * Chosen topic (integral, derivative, series) * Performance (low, medium, high) * Does the student work? (Yes, No, flexible schedule) * Has the student ever taken an additional course? (Yes, No) The output is a mark (*A, B, C, D, E, F*). I don't know whether I should add a few layers between the inputs and the output. [![enter image description here](https://i.stack.imgur.com/ChZiH.png)](https://i.stack.imgur.com/ChZiH.png) Moreover, I have a few results from past years: * (integral, low, Yes, No, **E**) * (integral, medium, Yes, Yes, **B**) * (series, high, No, Yes, **A**) and so on. What else do I need to know to design this NN?
2019/06/27
384
1,684
<issue_start>username_0: Both of them deal with data of graph structure like a network community. Is there a big difference there?
2019/06/28
1,070
4,296
<issue_start>username_0: The [Planning Domain Definition Language (PDDL)](https://en.wikipedia.org/wiki/Planning_Domain_Definition_Language) is known for its capabilities of symbolic planning in the state space. A solver will find a sequence of steps to bring the system from a start state to the goal state. A common example of this is [the monkey-and-banana problem](https://en.wikipedia.org/wiki/Monkey_and_banana_problem). At first, the monkey sits on the ground and, after doing some actions in the scene, the monkey will have reached the banana. The way a PDDL planner works is by analyzing the preconditions and effects of each primitive action. This will answer the question of what happens if a certain action is executed. However, will a PDDL domain description work the other way around as well, not for planning, but for action recognition? I've searched in the literature to get an answer, but all the papers I've found are describing [PDDL only as a planning paradigm](https://www.researchgate.net/publication/220003750_Context_Aware_Approach_for_Activity_Recognition_Based-on_Precondition-Effect_Rules). My idea is to use the given precondition and effects as a parser to identify what the monkey is doing and not what he should do. That means, in the example, the robot ape knows by itself how to reach the banana and the AI system has to monitor the actions. The task is to identify a PDDL action that fits the action by the monkey.<issue_comment>username_1: I don't know of any work on this with respect to PDDL, but this is very similar to a conceptual dependency application called SAM (Script Applier Mechanism). [Conceptual Dependency](https://en.wikipedia.org/wiki/Conceptual_dependency_theory) (CD) models actions using a number of primitives (which could be seen as equivalent to PDDL primitive actions): PTRANS for physical transfer, PROPEL for application of a physical force to an object, GRASP for grasping an object, etc. 
Their number varies around 12 or so, depending on the version of CD. Stories are described by a sequence of primitive acts, which have slots for actor, object, etc. They are supposed to enable a program to draw inferences about what happens. A common problem when trying to understand stories is that often common knowledge about a situation is omitted; the standard example here is going to a restaurant. It is usually assumed that the listener/reader knows what commonly happens when the protagonist enters a restaurant, so that when they leave without paying this is recognised as something unusual. The approach used to solve this problem is to encode such knowledge in scripts, sequences of primitive acts. When triggered, eg by "Manuel went to a restaurant", this script is retrieved, and the following actions are looked for in the script. Anything that is recognised is used to fill gaps in the story, eg sitting down at a table, or looking at a menu. This was the task of the SAM program. Basically you have a sequence of primitive actions, and you try to recognise a more abstract event "going to a restaurant" from that. Obviously you'd need to have a script to recognise, but one could presumably use this to derive a sequence of more generalised events, such as "retrieving an object from a high place", or "standing on top on another object". The theory of using scripts, plans, and goals to describe human reasoning is detailed in <NAME>; <NAME>. (1977). *Scripts, plans, goals and understanding: An inquiry into human knowledge structures.* New Jersey: Erlbaum. ISBN 0-470-99033-3. 
Upvotes: 2 [selected_answer]<issue_comment>username_2: This is theoretically possible for an exhaustive set of sequential, non-concurrent primitive actions: $$ \forall s\_1, s\_2 \left( \exists a \left( \text{possible}(a, s\_1) \land \text{do}(a, s\_1, s\_2) \right) \right) $$ where $s\_1$ is the prior situation, $s\_2$ is the result of doing action $a$ in $s\_1$, and $\text{possible}(a, s\_1)$ is true if it is possible to do an action in a situation (See [Situation Calculus](https://en.wikipedia.org/wiki/Situation_calculus)). So, for a given situation, reduce your search space to those actions that are possible, then reduce your search space again to those that result in the following situation. Upvotes: 0
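The filtering idea in the last answer can be sketched in a few lines of Python (a toy monkey-and-banana domain; the state encoding and all names are my own illustration, not PDDL syntax):

```python
# Toy action recognition by filtering: an action is recognised if its
# precondition holds in s1 and applying its effect to s1 yields s2.
ACTIONS = {
    "walk_to_box": (lambda s: s["monkey"] == "door",
                    lambda s: {**s, "monkey": "box"}),
    "climb_box":   (lambda s: s["monkey"] == "box",
                    lambda s: {**s, "monkey": "on_box"}),
    "grab_banana": (lambda s: s["monkey"] == "on_box",
                    lambda s: {**s, "has_banana": True}),
}

def recognise(s1, s2):
    """Return names of actions whose precondition holds in s1
    and whose effect transforms s1 into s2."""
    return [name for name, (pre, eff) in ACTIONS.items()
            if pre(s1) and eff(s1) == s2]

s1 = {"monkey": "door", "has_banana": False}
s2 = {"monkey": "box", "has_banana": False}
print(recognise(s1, s2))  # ['walk_to_box']
```

With parameterised actions the same scheme applies, except each action schema must also be grounded over the objects in the scene before filtering.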
2019/06/29
846
3,373
<issue_start>username_0: I have implemented an epsilon-greedy Monte Carlo reinforcement learning agent as suggested in [Sutton and Barto's RL book](http://incompleteideas.net/book/the-book-2nd.html) (page 101). As far as I have understood epsilon-greedy agents so far, the exploration has to stop at some point to exploit the gained knowledge. I do not understand how to stop it here, because the policy update is linked to epsilon. So just setting epsilon equal to zero at some point does not seem to make sense to me.
2019/06/30
839
3,314
<issue_start>username_0: At a time step $t$, for a state $S\_{t}$, the return is defined as the discounted cumulative reward from that time step $t$. If an agent is following a policy (which in itself is a probability distribution of choosing a next state $S\_{t+1}$ from $S\_{t}$), the agent wants to find the value at $S\_{t}$ by calculating a sort of "weighted average" of all the returns from $S\_{t}.$ This is called the expected return. Is my understanding correct?
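The quantity described in the question can be written down precisely in Sutton and Barto's standard notation (these are the textbook definitions, not taken from this thread):

```latex
G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots
    = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1},
\qquad
v_\pi(s) = \mathbb{E}_\pi\left[ G_t \mid S_t = s \right]
```

so the value of a state is the expectation of the return $G\_t$ over trajectories generated by following $\pi$ from $s$: the "weighted average" the question mentions, with weights given by the probability of each trajectory. (Strictly, $\pi$ is a distribution over actions given the state; the next state then follows from the environment's transition probabilities.)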
2019/07/01
474
1,986
<issue_start>username_0: I need to cluster my points into an unknown number of clusters, given the minimal Euclidean distance R between two clusters. Any two clusters that are closer than this minimal distance should be merged and treated as one. I could implement a loop starting from two clusters and going up until I observe a pair of clusters that are closer to each other than my minimal distance. The upper boundary of the loop is the number of points we need to cluster. Are there any well-known algorithms and approaches to estimate the approximate number of centroids from the set of points and the required minimal distance between centroids? I am currently using FAISS under Python, but with the right idea I could also implement it in C myself.<issue_comment>username_1: If you look at Kaufman & Rousseeuw (1990), *[Finding Groups in Data](https://onlinelibrary.wiley.com/doi/book/10.1002/9780470316801)*, they describe an algorithm to evaluate the quality of clusters in agglomerative clustering. You run the clustering algorithm with a specific value *k* for the number of clusters you want, and that routine then gives you a score to reflect the cohesion of the clustering. If you then cluster again with a different value for *k*, you will get another score. You repeat this process until you have found a maximum score, and then you have the clustering with the optimum number of clusters. Upvotes: 1 <issue_comment>username_2: Yes, the [silhouette](https://en.wikipedia.org/wiki/Silhouette_(clustering)) method (which is implemented in `sklearn` as [`silhouette_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html)) is commonly used to assess the quality of clusters produced by any clustering algorithm (including $k$-means or any hierarchical clustering algorithm). Roughly, you can compute the silhouette value for different $k$, then you would pick the $k$ with the highest silhouette value. Upvotes: 3 [selected_answer]
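The merge-until-minimum-distance loop sketched in the question can also be written out directly (plain NumPy, centroid-linkage agglomeration; all names are mine, and a real implementation would use FAISS or SciPy's linkage routines instead of this O(n^3) toy):

```python
import numpy as np

def merge_clusters(points, min_dist):
    """Greedily merge clusters whose centroids are closer than min_dist.

    Returns a list of clusters, each a list of point indices.
    Cubic-time toy version, only meant to illustrate the idea.
    """
    clusters = [[i] for i in range(len(points))]
    while True:
        centroids = [points[c].mean(axis=0) for c in clusters]
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = np.linalg.norm(centroids[i] - centroids[j])
                if d < min_dist and (best is None or d < best[0]):
                    best = (d, i, j)
        if best is None:          # no pair closer than min_dist remains
            return clusters
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(len(merge_clusters(pts, min_dist=1.0)))  # 2 clusters remain
```

The number of clusters that survive this loop is exactly the "unknown number of clusters" the question asks about, so it can serve as a cheap estimate of *k* before validating with the silhouette method from the answers.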
2019/07/02
862
3,460
<issue_start>username_0: The basic seq-2-seq model consists of 2 parts: a recurrent encoder that compresses a sequence to a vector and decoder that unrolls the vector into the output sequence: [![enter image description here](https://i.stack.imgur.com/IkEwP.png)](https://i.stack.imgur.com/IkEwP.png) Why is the output, `w`, `x`, `y`, `z` of the decoder used as its input? Shouldn't the hidden state of the RNN from the previous timestamps be enough?<issue_comment>username_1: In the original seq2seq paper, they used two RNN, one for encoding and one for decoding. In the encoder they need to unroll the inputs to capture the time dependency. Now if we want to pass the hidden state from the encoder to the decoder that means that the decoder hidden state shape needs to match the encoder (aka same architecture). Since the architecture is the same, we can not directly generate a sequence of n samples within the decoder without unrolling it and you can not unroll it without an input. Upvotes: 0 <issue_comment>username_2: > > Shouldn't the hidden state of the RNN from the previous timestamps be enough? > > > It is theoretically enough to generate a sequence. However, allowing an input offers a couple of convenient extras: * Training data for output sequences is used twice - once as input (as previous sequence data), once as target (to establish loss metric). This may help training process as the decoder trains both as a decoder to the new sequence type and as a predictive model over the output sequence semi-independently - i.e. weights from input to RNN layer are affected by error gradients separately to weights between previous hidden state and next state, although the two sets of weights together influence output and next state, so are not fully independent over a sequence. 
* By allowing input of sequence so far generated, the decoder can work as a generator, where the next item in the sequence does not need to be the maximum probability item, but can be sampled or have rules applied. This allows for approaches such as [BEAM search](https://medium.com/@dhartidhami/beam-search-in-seq2seq-model-7606d55b21a5), commonly used in machine translation, which maintains several potential outputs, selecting the best one at the end. I have not done the experiment, but I suspect the first item results in faster and better generalisation. The second one is very convenient for natural language generation and similar problems. Upvotes: 1 <issue_comment>username_3: In seq2seq they model the joint distribution of whatever char/word sequence by decomposing it into time-forward conditionals: \begin{align\*} p(w\_1,w\_2, \dots,w\_n) &= p(w\_1)\*p(w\_2|w\_1)\* \dots \*p(w\_n|w\_1, \dots,w\_{n-1}) \\ &= p(w\_1)\*\prod\_{i=2}^{n}p(w\_i|w\_{<i}) \end{align\*} This can be sampled by sampling each of the conditionals in ascending order. So, that's exactly what they're trying to imitate. You want the second output dependent on the **sampled** first output, not its distribution. This is why the hidden state is **NOT** good for modeling this setup, because it is a latent representation of the distribution, not a sample of the distribution. Note: In training, they use the ground truth as input by default, because it is working under the assumption that the model should've predicted the correct word, and, if it didn't, the gradient of the word/char level loss will reflect that (this is called teacher forcing and has a multitude of pitfalls). Upvotes: 2
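username_3's point, that each conditional is sampled given the previously *sampled* tokens (which is why the sample, and not just the hidden state, is fed back in), can be illustrated with a toy autoregressive sampler. The bigram table below stands in for the decoder's conditional distribution; all names and probabilities are my own invention:

```python
import random

# Toy "decoder": next-token distribution conditioned on the last sampled token.
# In a real seq2seq decoder this distribution would be computed by the RNN
# from the hidden state AND the previously sampled token.
COND = {
    "<s>": {"the": 0.9, "a": 0.1},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"</s>": 1.0},
    "dog": {"</s>": 1.0},
}

def sample_sequence(rng, max_len=10):
    seq, tok = [], "<s>"
    for _ in range(max_len):
        dist = COND[tok]
        # sample the next token GIVEN the previously sampled one
        tok = rng.choices(list(dist), weights=list(dist.values()))[0]
        if tok == "</s>":
            break
        seq.append(tok)
    return seq

print(sample_sequence(random.Random(0)))  # a two-token sequence, seed-dependent
```

Feeding back only the hidden state would correspond to conditioning every step on the *distribution* at `<s>` rather than on the concrete token that was drawn, which is exactly the failure mode the answer describes.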
2019/07/03
1,461
6,220
<issue_start>username_0: Let's say I have an adjustable loaded die, and I want to train a neural network to give me the probability of each face, depending on the settings of the loaded die. I can't measure its performance on an individual die roll, since it does not give me a probability. I could batch a lot of rolls to calculate a probability and use this batch as an individual test case, but my problem does not allow this (let's say the settings are complex and randomized between each roll). I have 2 ideas: 1. train it as a classification problem which outputs a confidence, and hope that the confidence will reflect the actual probability. Sometimes the network would output the correct probability and fail the test, but on average it would tend to the correct probability. However, it may require a lot of training and data. 2. batch random rolls together and compare the mean/median/standard deviation of the measured result vs the predictions. It could work, but I don't know a good batch size. Thank you.<issue_comment>username_1: A neural network isn't what you want here. You have a limited number of events and draws from some unknown distribution that you want to recover. In that case, just use the empirical probabilities $p(event\_i) = \frac{\# event\_i}{total\ events}$, which given enough draws will converge to the true probability. Upvotes: 0 <issue_comment>username_2: This approach: > > train it as a classification problem which outputs a confidence, and hope that the confidence will reflect the actual probability. Sometimes the network would output the correct probability and fail the test, but on average it would tend to the correct probability. > > > will work with some limitations. If you use a classifier with softmax activation and multiclass log loss $$\mathcal{L}(\mathbf{\hat{y}},\mathbf{y}) = -\mathbf{y} \cdot \text{log}(\mathbf{\hat{y}})$$ where $\mathbf{\hat{y}}$ is the network output as a vector, and $\mathbf{y}$ is the actual output from an individual sample.
Your input should be the settings of the die. Optimising this loss will converge on approximated probabilities for each discrete output. You can demonstrate this with some simple examples - for instance if you train a network with a single input - a one-hot-encoded die type from the classic D&D dice sets, plus deliberately chosen examples of different results in the right frequencies, you will end up with a classifier that predicts roughly $p=0.25$ for results of 1,2,3,4 for a d4 and $p=0.125$ for results of 1,2,3,4,5,6,7,8 for a d8 So it works mathematically. Whether it works for your situation depends on details. You need enough data samples to cover both the distribution of results under each setting, and any complexities of how that distribution varies with the settings. In the limit of wanting very accurate predictions of probability within a complex space you will need a huge number of samples. You should be able to find a compromise between accuracy and generalisation by trying different levels of regularisation - this will be necessary as over-fitting to input/output sample pairs as seen is going to be a serious problem for a neural network trained on this data. One thing you can do to help a classifier learn probabilities is always take some number of samples with the same settings - e.g. 10 or 100 or 1000 each with the same settings - that should guarantee that the network cannot simply converge to predict high $p$ values for single outputs as seen, as it will have counter-examples to work with. You mention that you have 40 dimensions of settings. Whether this is an issue will depend on how the probability distribution varies based on those settings. However, at minimum you should be thinking in terms of millions of samples for training, or possibly a fast on-demand generator that can generate 1000s of new samples per second to train with. 
You can test accuracy by building histograms using some fixed (and as yet unseen) setting and comparing to NN predictions of probabilities of that setting. Even getting accurate test results is likely to require 1000s of samples. > > However it may require a lot of training and data. > > > If you cannot obtain a very large training set here, then a purely statistical "black box" approach is probably not feasible, regardless of whether you use neural networks, or more raw analysis. What neural networks add is smooth interpolation between different settings values, as a form of approximation. This seems desirable for your problem, as you will never fully explore 40 dimensions of 100 values in the lifetime of the universe - but you need some confidence that minor changes in settings equate to minor changes in probability distributions in most cases. It's OK to have one or two major shifts across the input space, but if the distribution depends in some cryptographic primitive or similar complex high frequency (over space) and high amplitude on the input variables, there is no way to obtain approximations using statistics. The alternative to statistical approaches is to find some way to break open this black box through analysis. No AI system can do that in general at the moment, so you would rely on human ingenuity. Upvotes: 0 <issue_comment>username_3: The starting point is that for a fair dice thrown fairly the p(n) is 1/n where n is the number of sides. You said both and > > there are too many variables (up 40 dimensions with value range 1-100) in input, I don't know how these properties relate and an empirical approach would require too much data. > > > It seems that this problem has 2 solutions: 1. Don't use a neural net and create a 'std' statistical model. It may be possible since you said: > > I know there is some underlying rule that simplify a lot the problem (ie. reduce the actual number of dimension of the input) > > > 1. 
Use a neural network (with softmax at the end) - for a fair die, with enough training data the classifier should arrive at 1/n as the approximating function. The 40 dimensions/settings you mentioned are the inputs. I think a 'basic' neural network with dense layers only could work for your task. Upvotes: 1
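username_2's claim that a softmax classifier trained with log loss converges to the per-setting outcome frequencies can be checked with a tiny NumPy sketch (two one-hot "settings", three faces; the whole setup is my own toy example, not code from the answers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two die "settings" with known loaded-face probabilities.
true_p = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.1, 0.8]])
n = 5000
settings = rng.integers(0, 2, size=n)                     # which die was rolled
faces = np.array([rng.choice(3, p=true_p[s]) for s in settings])

X = np.eye(2)[settings]                                    # one-hot settings
Y = np.eye(3)[faces]                                       # one-hot observed faces

W = np.zeros((2, 3))                                       # softmax regression weights
for _ in range(2000):                                      # gradient descent on log loss
    P = np.exp(X @ W)
    P /= P.sum(axis=1, keepdims=True)                      # softmax
    W -= 0.5 * X.T @ (P - Y) / n                           # cross-entropy gradient

pred = np.exp(W) / np.exp(W).sum(axis=1, keepdims=True)
print(np.round(pred, 2))  # rows approximate the empirical face frequencies
```

Each row of `pred` converges to the empirical frequency of faces for that setting, which is exactly the sense in which a softmax classifier trained on single rolls recovers probabilities "on average".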
2019/07/06
432
1,687
<issue_start>username_0: Most image classifiers like Inception-v3 accept images of about size 299 x 299 x 3 as input. In this particular case, I cannot resize the image and lose resolution. Is there an easy solution for dealing with this rather than retraining the model? (Particularly in tensorflow)<issue_comment>username_1: My suggestion is to transform the resolution of all images in equal proportion. You can use this Python code: ``` from PIL import Image import os import argparse def rescale_images(directory, size): for img in os.listdir(directory): path = os.path.join(directory, img) im = Image.open(path) im_resized = im.resize(tuple(size), Image.ANTIALIAS) im_resized.save(path) if __name__ == '__main__': parser = argparse.ArgumentParser(description="Rescale images") parser.add_argument('-d', '--directory', type=str, required=True, help='Directory containing the images') parser.add_argument('-s', '--size', type=int, nargs=2, required=True, metavar=('width', 'height'), help='Image size') args = parser.parse_args() rescale_images(args.directory, args.size) # save this python code as transform_image_resolution.py # run it from the command line with # python transform_image_resolution.py -d images/ -s 800 600 ``` Upvotes: 2 <issue_comment>username_2: My guess: If you add a few CNN layers before the input of the given model and train only those layers while keeping the given model's parameters frozen, you might get a better result. Essentially, these few extra layers would "transform" your input image into the appropriate shape, but with more accuracy, since it's trained and not hard-coded. Upvotes: 2
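username_2's idea of inserting layers that map an arbitrary input to the shape the frozen model expects is, in spirit, what adaptive (global) average pooling does. A rough NumPy illustration of pooling an arbitrary H x W image down to a fixed grid (a sketch of the concept only, not actual TensorFlow code):

```python
import numpy as np

def adaptive_avg_pool2d(img, out_h, out_w):
    """Average-pool an (H, W, C) array down to (out_h, out_w, C),
    whatever the input H and W happen to be."""
    rows = np.array_split(np.arange(img.shape[0]), out_h)
    cols = np.array_split(np.arange(img.shape[1]), out_w)
    out = np.empty((out_h, out_w, img.shape[2]))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            # mean over each (possibly unequal) spatial tile
            out[i, j] = img[np.ix_(r, c)].mean(axis=(0, 1))
    return out

img = np.arange(10 * 8 * 3, dtype=float).reshape(10, 8, 3)
pooled = adaptive_avg_pool2d(img, 4, 4)
print(pooled.shape)  # (4, 4, 3)
```

In a trained network the analogous layer sits between the convolutional feature extractor and the dense head, which is why fully convolutional models with global pooling can accept inputs of any resolution.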
2019/07/07
2,075
8,435
<issue_start>username_0: I listened to a talk by a panel consisting of two influential Chinese scientists: [<NAME>](https://sites.google.com/site/gangwang6/) and [<NAME>](https://www.crunchbase.com/person/kai-yu#section-overview) and others. When being asked about the biggest bottleneck of the development of artificial intelligence in the near future (3 to 5 years), <NAME>, who has a background in the hardware industry, said that hardware would be the essential problem and we should pay most of our attention to that. He gave us two examples: 1. In the early development of the computer, we compare our machines by their chips; 2. ML/DL which is very popular these years would be almost impossible if not empowered by Nvidia's GPU. The fundamental algorithms existed already in the 1980s and 1990s, but AI went through 3 AI winters and was not empirical until we can train models with GPU boosted mega servers. Then Dr. Wang commented to his opinions that we should also develop software systems because we cannot build an automatic car even if we have combined all GPUs and computation in the world together. Then, as usual, my mind wandered off and I started thinking that what if those who can operate supercomputers in the 1980s and 1990s utilized the then-existing neural network algorithms and train them with tons of scientific data? Some people at that time can obviously attempt to build the AI systems we are building now. But why did AI/ML/DL become a hot topic and become empirical until decades later? Is it only a matter of hardware, software, and data?<issue_comment>username_1: GPUs were ideal for AI boom because: * They hit the right time AI has been researched for a LONG time. Almost half a century. However, that was all exploration of how algorithms would work and look. 
When NVIDIA saw that AI was about to go mainstream, they looked at their GPUs and realized that the huge parallel processing power, with relative ease of programming, was ideal for the era to come. Many other people realized that too. * GPUs are sort of general-purpose accelerators GPGPU is the concept of using GPU parallel processing for general tasks. You can accelerate graphics, or make your algorithm utilize the 1000s of cores available on a GPU. That makes the GPU an awesome target for all kinds of use cases, including AI. Given that they are already available and are not too hard to program, it's an ideal choice for accelerating AI algorithms. Upvotes: 2 <issue_comment>username_2: There are a lot of factors behind the boom of the AI industry. What many people miss, though, is that the boom has mostly been in the Machine Learning part of AI. This can be attributed to various simple reasons, along with comparisons to earlier times: * **Mathematics**: The maths behind ML algorithms is pretty simple and has been known for a long time (whether it would work or not was not known, though). In earlier times it was not possible to implement algorithms which require high precision of numbers, to be calculated on a chip, in an acceptable amount of time. One of the main arithmetic operations, division of numbers, still takes a lot of cycles in modern processors. Older processors were magnitudes slower than modern processors (more than 100x); this bottleneck made it impossible to train sophisticated models on contemporary processors. * **Precision**: Precision in calculations is an important factor in ML algorithms. 32-bit precision in processors was made in the 80's and was probably commercially available in the late 90's ([x86](https://en.wikipedia.org/wiki/Intel_8086)), but it was still far slower than current processors.
This resulted in scientists improvising on the precision front: the most basic Perceptron Learning Algorithm, invented in the 1960's to train a classifier, uses only $1$'s and $0$'s, so it is basically a binary classifier. It was run on [special computers](https://en.wikipedia.org/wiki/Perceptron). It is interesting to note, though, that we have come full circle and Google is now using [TPU's](https://en.wikipedia.org/wiki/Tensor_processing_unit) with 8-16 bit precision to implement ML models with great success. * **Parallelization**: The concept of parallelization of matrix operations is nothing new. It was only when we started to see Deep Learning as just a set of matrix operations that we realized it can be easily parallelized on massively parallel GPU's. Still, if your ML algorithm is not inherently parallel, it hardly matters whether you use a CPU or a GPU (e.g. RNN's). * **Data**: Probably the biggest cause of the ML boom. The Internet has provided opportunities to collect huge amounts of data from users and also make it available to interested parties. Since an ML algorithm is just a function approximator based on data, data is the single most important thing in an ML algorithm. The more the data, the better the performance of your model. * **Cost**: The cost of training an ML model has gone down significantly. So using a supercomputer to train a model might be fine, but was it worth it? Supercomputers, unlike normal PC's, are tremendously resource-hungry in terms of cooling, space, etc. A recent [article](https://www.technologyreview.com/s/613630/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/) in MIT Technology Review points out the carbon footprint of training a Deep Learning model (a sub-branch of ML). It is quite a good indicator of why it would have been infeasible to train on supercomputers in earlier times (considering modern processors consume much less power and give higher speeds).
Although I am not sure, I think earlier supercomputers were specialised in "parallel + very high precision computing" (required for weather, astronomy, military applications, etc.), and the "very high precision" part is overkill in a Machine Learning scenario. Another important aspect is that nowadays everyone has access to powerful computers. Thus, anyone can build new ML models, re-train pre-existing models, modify models, etc. This was simply not possible in earlier times. All these factors have led to a huge surge of interest in ML and have caused the boom we are seeing today. Also check out this [question](https://ai.stackexchange.com/questions/7328/if-digital-values-are-mere-estimates-why-not-return-to-analog-for-ai/) on how we are moving beyond digital processors. Upvotes: 5 [selected_answer]<issue_comment>username_3: > > The year 2012 is also generally considered the start of the “deep learning revolution”. The term > “deep learning” refers to a branch of ML that is based on neural networks with many layers (hence the term “deep”). Although this basic technology had been around for many years, it was in 2012 when [KSH12] used deep neural networks (DNNs) to win the ImageNet image classification challenge by such a large margin that it caught the attention of the wider community. Related advances on other hard problems, such as speech recognition, appeared around the same time (see e.g., [Cir+10; Cir+11; Hin+12]). These breakthroughs were enabled by advances in **hardware technology** (in particular, the repurposing of fast graphics processing units (GPUs) from video games to ML), **data collection technology** (in particular, the use of crowd sourcing tools, such as Amazon’s Mechanical Turk platform, to collect large labeled datasets, such as ImageNet), as well as various **new algorithmic ideas**.
> > > Reference: [Probabilistic Machine Learning: An Introduction](https://probml.github.io/pml-book/book1.html) Upvotes: 0 <issue_comment>username_4: Machine learning has been around since the 1960's. They had computers that were less intelligent than the palm-pilots of the 1990's, and they did machine learning. What most folks call "Machine Learning" is deep neural networks like those that started getting competitive at vision-related tasks in the early 2010's (teens). The GPGPU hardware on video cards (I had friends using the Amiga bit-blitter for the task in the early 90's, @TadeuszWestawic) provided a relative jump in performance; the software to drive that hardware, like the methods from Hinton, was mature and effective; and then some of the networks started doing amazing things. Personally, I consider Giraffe (Lai, 2015) and AlexNet (Krizhevsky, 2012) to have been watershed moments that motivated the business folks to put resources into the field. Upvotes: 1
2019/07/07
978
4,008
<issue_start>username_0: Dietterich, who introduced the [taxi environment](https://arxiv.org/pdf/cs/9905014.pdf) (see p. 9), states the following: In total there “are 500 [distinct] possible states: 25 squares, 5 locations for the passenger (counting the four starting locations and the taxi), and 4 destinations” (Dietterich, 2000, p. 9). However, in my opinion there are only 25 (grid) \* 4 (locations) \* 2 (passenger in car) = 200 different states, because for the agent it should be the same task to go to a certain point, regardless of whether it's on its way to pick up or to drop-off. Only the action at the destination is different, which would be stored as a binary flag (passenger in car or not). Why does Dietterich come up with 500 states?<issue_comment>username_1: This is more of a combinatorics than AI question, but regardless, the full state information for the environment is: $(taxi \space position, passenger \space position, destination \space position)$ There are 25 possible taxi positions, 5 passenger positions and 4 destination positions making it $25 \cdot 5 \cdot 4 = 500$, so the paper is correct. You are also correct but you divided 1 objective into 2 objectives and you have 2 separate policies, a pickup policy and a dropoff policy. So your state information would be for each policy: $(taxi \space position, destination \space position)$ There are 25 possible taxi positions and 4 possible destination positions making it $25 \cdot 4 = 100$. You have 2 policies so you have $200$ states. **EDIT** Actually in the second case, I think you could get away with only 1 policy where you would simply change the destination position once you pick up the passenger, so you don't need 2 separate policies and you would have only $100$ states. Upvotes: 3 [selected_answer]<issue_comment>username_2: This . . .
might seem logical/intuitive to a person understanding the task, but it is not mathematically correct. The agent cannot "merge" states because they involve the same behaviour. It *must* count differences in state as the combinations are presented. Critically, heading towards the passenger location or heading towards the goal location are not in any way similar to the agent, unless you manipulate the state to make them so\*. Eventually the taxi will learn very similar navigation behaviour for picking up and dropping off a passenger. However, using a basic RL agent it learns these very much separately, and must re-learn the navigation rules independently for each combination of passenger and goal location. An agent that learned navigation within the environment, and then combined it into different tasks might be an example of hierarchical reinforcement learning, transfer learning, or curriculum learning. These are more sophisticated learning approaches, but it is quite interesting that even very basic RL problems can demonstrate a use for higher level abstractions. Most agents used on the taxi problem don't do this though, as 500 states is really very easy to "brute force" using the simplest algorithms. --- \* You could modify the state representation to rationalise the task and make it have less states, similar to your suggestion. For instance, have one "target" location which could either be pickup or drop off, and a boolean "carrying passenger" state component. That would indeed reduce the number of states. However, that has involved you as the problem designer simplifying the problem to make it easier for the agent. Given that this is a toy problem designed as a benchmark to see how different agents perform, by doing that you subvert the purpose of the environment. If you were creating an agent to work on a harder real world problem though, it might be a very good idea to look for symmetries and ways to simplify state representation which would speed up learning. 
Upvotes: 2
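The 500-state arithmetic discussed above can be checked with a short enumeration. Below is a sketch in plain Python (the tuple layout mirrors the state description in the accepted answer; the `encode` helper is only an illustration, not the environment's actual API):

```python
from itertools import product

# Full state: (taxi position, passenger position, destination position).
# 25 taxi squares, 5 passenger locations (4 depots + "in taxi"), 4 destinations.
full_states = list(product(range(25), range(5), range(4)))
assert len(full_states) == 25 * 5 * 4 == 500

# The reduced formulation from the question: (taxi position, target position,
# passenger-in-taxi flag) -> 25 * 4 * 2 = 200 states.
reduced_states = list(product(range(25), range(4), [False, True]))
assert len(reduced_states) == 200

# A state can be packed into a single integer index, e.g.:
def encode(taxi, passenger, destination):
    """Map a (taxi, passenger, destination) triple to an index in [0, 500)."""
    return (taxi * 5 + passenger) * 4 + destination

indices = {encode(t, p, d) for (t, p, d) in full_states}
assert len(indices) == 500  # the encoding is a bijection onto 0..499
print("full:", len(full_states), "reduced:", len(reduced_states))
```

The distinct-index check confirms that every combination of the three components really is a separate state, which is exactly the point of the accepted answer.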
2019/07/08
714
2,783
<issue_start>username_0: Is there any precedent for using a neuroevolution algorithm, like NEAT, as a way of getting to an initialization of weights for a network that can then be fine-tuned with gradient descent and back-propagation? I wonder if this may be a faster way of getting to a global minimum before starting a descent to a local minimum using backpropagation with a large set of input parameters.<issue_comment>username_1: The paper [The Comparison and Combination of Genetic and Gradient Descent Learning in Recurrent Neural Networks: An Application to Speech Phoneme Classification](https://pdfs.semanticscholar.org/5071/d5f447a99f7e4d522f07f7752ca1435e13c9.pdf) (2007), by <NAME> and <NAME>, uses genetic algorithms to train a recurrent neural network and then uses gradient descent to fine tune the trained model. The paper [Evolutionary Stochastic Gradient Descent for Optimization of Deep Neural Networks](https://arxiv.org/pdf/1810.06773.pdf) (2018), by <NAME>, <NAME>, <NAME> and <NAME>, also combines evolutionary algorithms and gradient descent, but, in this case, they alternate between a gradient descent step and an evolution step. This is an example of an *evolutionary stochastic gradient descent* (ESGD) method, as opposed to a [population-based training (PBT)](https://arxiv.org/pdf/1711.09846.pdf) method, which uses only evolutionary algorithms to train neural networks. Upvotes: 3 [selected_answer]<issue_comment>username_2: Yes, it can. In addition to the papers that username_1 linked to above, Uber's AI research team has a very interesting combination of SGD and neuroevolution which they have dubbed "safe mutations". In the algorithm, each genome undergoes a bit of SGD to improve its fitness before the speciation, elitism, and reproduction processes. I imagine this has the effect of searching for genomes which are well suited for SGD optimization, and in my opinion really does provide the best of both worlds.
Here is the link to the paper: <https://arxiv.org/abs/1712.06563>. What I think would be cool for this combination of the two would be its use in conjunction with the ES-HyperNEAT/HyperNEAT neuroevolution algorithms, in which a small-genome CPPN encodes large phenotype RNNs using the RNN's substrate (its structure represented with Cartesian coordinates) as the CPPN's input. If a small amount of SGD is used on the RNNs to improve fitness, then what you end up with is a CPPN that is evolved to encode very general RNNs that can then be optimized to specific domains via SGD. I like this because then your neuroevolution doesn't occur on a massive RNN, and you can create CPPNs that recognize the general problem you wish to solve if you're clever with your fitness evaluation. Upvotes: 1
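To make the two-stage idea discussed in this thread concrete, here is a small self-contained sketch. It is not NEAT itself: a greedy (1+1) mutation hill-climber stands in for the evolutionary phase, and a finite-difference gradient phase with a backtracking step size stands in for back-propagation. All names, sizes, and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: fit y = sin(x) with a tiny 1-8-1 tanh network.
X = np.linspace(-3, 3, 64)
Y = np.sin(X)

def unpack(theta):
    # theta holds: 8 input weights, 8 hidden biases, 8 output weights, 1 bias.
    return theta[:8], theta[8:16], theta[16:24], theta[24]

def predict(theta, x):
    w1, b1, w2, b2 = unpack(theta)
    h = np.tanh(np.outer(x, w1) + b1)   # (n, 8) hidden activations
    return h @ w2 + b2                  # (n,) outputs

def loss(theta):
    return float(np.mean((predict(theta, X) - Y) ** 2))

# --- Phase 1: "neuroevolution" (greedy mutation hill-climbing) ---
theta = rng.normal(scale=0.5, size=25)
init_loss = loss(theta)
for _ in range(300):
    candidate = theta + rng.normal(scale=0.1, size=theta.shape)
    if loss(candidate) < loss(theta):   # keep only improving mutations
        theta = candidate
evo_loss = loss(theta)

# --- Phase 2: gradient descent fine-tuning (finite-difference gradient,
# with a backtracking step size so the loss never increases) ---
def numeric_grad(theta, eps=1e-5):
    g = np.zeros_like(theta)
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (loss(theta + d) - loss(theta - d)) / (2 * eps)
    return g

lr = 0.1
for _ in range(200):
    step = theta - lr * numeric_grad(theta)
    if loss(step) < loss(theta):
        theta = step
    else:
        lr *= 0.5                       # backtrack instead of overshooting

final_loss = loss(theta)
print(init_loss, evo_loss, final_loss)
assert evo_loss <= init_loss            # evolution only accepts improvements
assert final_loss <= evo_loss           # fine-tuning only accepts improvements
```

Both phases accept a parameter update only when it lowers the loss, so the printed losses are non-increasing by construction; a real NEAT implementation would of course also evolve the topology, not just the weights.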
2019/07/08
2,487
10,281
<issue_start>username_0: Let's consider this example: > > It's John's birthday, let's buy him a kite. > > > We humans most likely would say the kite is a birthday gift, if asked why it's being bought; and we refer to this reasoning as *common sense*. Why do we need this in artificially intelligent agents? I think it could cause a plethora of problems, since a lot of our human errors are caused by these vague assumptions. Imagine an AI ignoring doing certain things because it assumes it has already been done by someone else (or another AI), using its common sense. Wouldn't that bring human errors into AI systems?<issue_comment>username_1: [Commonsense knowledge](https://en.wikipedia.org/wiki/Commonsense_knowledge_(artificial_intelligence)) is the collection of premises that everyone, in a certain context (hence common sense knowledge might be a function of the context), takes for granted. There would exist a lot of miscommunication between a human and an AI if the AI did not possess common sense knowledge. Therefore, commonsense knowledge is fundamental to [*human-AI interaction*](https://en.wikipedia.org/wiki/Human%E2%80%93computer_interaction). There are also premises that every human takes for granted independently of the country, culture, or, in general, context. For example, every human (almost since its birth) has a mechanism for reasoning about naive physics, such as space, time, and physical interactions. If an AI does not possess this knowledge, then it cannot perform the tasks that require this knowledge. Any task that requires a machine to have common sense knowledge (of an *average* human) is believed to be [AI-complete](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.369.7112), that is, it requires human-level (or general) intelligence. See section D of [AI-Complete, AI-Hard, or AI-Easy – Classification of Problems in AI](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.369.7112) (2012) by <NAME>. 
Of course, the problems that arise while humans communicate because of different assumptions or premises might also arise between humans and AIs (that possess commonsense knowledge). Upvotes: 5 [selected_answer]<issue_comment>username_2: We need this kind of common sense knowledge if we want to get computers to *understand* human language. It's easy for a computer program to analyse the grammatical structure of the example you give, but in order to understand its meaning we need to know the possible contexts, which is what you refer to as "common sense" here. This was emphasised a lot in Roger Schank *et al.*'s work on computer understanding of stories, and led to a lot of research into knowledge representation, scripts, plans, and goals. One example from Schank's work is *Mary was hungry. She picked up a Michelin Guide.* -- this seems like a non-sequitur: if you are hungry, why pick up a book? Until you realise that it is a restaurant guide, and that Mary is presumably planning to go to a restaurant to eat. If you know that going to a restaurant is a potential solution to the problem of being hungry, then you have no problem understanding this story fragment. Any story needs common sense to be understood, because no story is completely explicit. Common things are "understood" and aren't explicitly mentioned. Stories relate to human experience, and a story that would make everything explicit would probably read like a computer program. You also need common sense to understand how characters in a story behave, and how they are affected by what is happening. Again, this is very subjective, but it is necessary. Some common sense might be generally applicable, other aspects of it won't be. It's a complex issue, which is why researchers have struggled with it for at least half a century of AI research. Of course this would introduce "human errors" into an AI system. All this is very subjective and culture-specific.
Going to a restaurant in the USA is different from going to one in France -- this is why going abroad can be a challenge. And my reading of a story will probably be different from yours. But if you want to simulate human intelligence, you cannot do that without potential human "errors". Upvotes: 3 <issue_comment>username_3: I'll answer this question in several parts: > > Why do AGI systems need to have common sense? > > > Humans in the wild reason and communicate using common sense more than they do with strict logic; you can see this by noting that it is easier to appeal to someone's emotion than logic. So any system that seeks to replicate human cognition (as in AGI) should also replicate this tendency to use common sense. More simply put, we'd wish that our AGI system can speak to us in common sense language simply because that is what we understand best (otherwise we wouldn't understand our friendly AGI, would we?). Obtuse theory and strict logic might technically be correct, but don't appeal to our understanding. > > Isn't the goal of AGI to create the most cognitively advanced system? Why should the "most perfect" AGI system need to deal with such imperfections and impreciseness present in common sense? > > > First, it might only *appear* to be the case that common sense logic is "irrational". Perhaps there is a consistent mathematical way to model common sense such that all the subtleties of common sense are represented in a rigorous fashion. Second, the early study of Artificial Intelligence started in the study of cognitive science, where researchers tried to replicate "algorithms of the mind", or more precisely: decidable procedures which replicated human thought. To that extent then, the study of AI isn't to create the "most supreme cognitive agent" but to merely replicate human thought/behavior. Once we can replicate human behavior we can perhaps try to create something super-human by giving it more computational power, but that is not guaranteed.
> > I still don't see why common sense is needed in AGI systems. Isn't AGI about being the most intelligent and powerful computational system? Why should it care about or conform to the limits of human understanding, which requires common sense? > > > Perhaps then you have a bit of a misaligned understanding of what AGI entails. AGI doesn't mean unbounded computational power (physically impossible due to physical constraints on computation such as [Bremermann's limit](https://en.wikipedia.org/wiki/Bremermann%27s_limit)) or unbounded intelligence (perhaps physically impossible due to the prior constraint). It usually just means artificial "general intelligence", general meaning broad and common. Considerations about unbounded agents are studied in more detail in fields such as theoretical computer science (type theory I believe), decision theory, and perhaps even set theory, where we are able to pose questions about agents with unbounded computational power. We might say that there are questions even an AGI system with unbounded power can't answer due to the [Halting Problem](https://en.wikipedia.org/wiki/Halting_problem), but only if the assumptions on those fields map onto the structure of the given AGI, which might not be true. For a better understanding of what AGI might entail and its goals, I might recommend two books: Artificial Intelligence: The Very Idea by <NAME> for a more pragmatic approach (as pragmatic as AI philosophy can be), and On the Origin of Objects by <NAME> for a more philosophically inclined approach. As a fun aside, the collection of Zen koans, The Gateless Gate, includes the following passage: (quoted and edited from [wikipedia](https://en.wikipedia.org/wiki/Mu_(negative)#The_Mu-koan)) > > A monk asked Zhaozhou, a Chinese Zen master, "Has a dog Buddha-nature or not?" Zhaozhou answered, "Wú" > > > Wú (無) translates to "none", "nonesuch", or "nothing", which can be interpreted as to avoid answering either yes or no.
This enlightened individual doesn't seek to strictly answer every question, but just to respond in a way that makes sense. It doesn't really matter whether the dog has Buddha-nature or not (whatever Buddha-nature means), so the master defaults to absolving the question rather than resolving it. Upvotes: 2 <issue_comment>username_4: Is this common sense, or is this natural language understanding? It's been said that natural language understanding is one of the hardest AI tasks. This is one of the examples showing why. The first part of the sentence is related to the second part; that's how sentences work. Now the relevant question is *how* the two parts are related. There are a few standard relations that we encounter, for instance a temporal order. In this specific example, the nature of the relation is closer to a cause-and-effect. You see this effect when we insert a word to make this relation explicit: > > It's John's birthday, *so* let's buy him a kite. > or > Let's buy John a kite, *because* it's his birthday. > > > This is a technique for humans to make these implicit relations explicit. Now, as curiousdannii notes, you also need the cultural knowledge to understand how a birthday can be a cause for a present. No amount of common sense helps with that. Upvotes: 1 <issue_comment>username_5: Perhaps it would help to give an example of what can go wrong without common sense: At the start of the novel "The Two Faces of Tomorrow" by <NAME>, a construction supervisor on the Moon files a request with an automated system, asking that a particular large piece of construction equipment be delivered to his site as soon as possible. The system replies that it will arrive in twenty minutes. Twenty minutes later, the supervisor is killed as the equipment crashes into his construction site. The system had determined that the fastest way to deliver the equipment to that site was to mount it on a mass-driver and launch it at the site.
Had the system in question been given common sense, it would have inferred additional unstated constraints on the query, such as 'the equipment should arrive intact', 'the arrival of the equipment should not cause damage or loss of life', and so on. (the rest of the novel describes an experiment designed to produce a new system with common sense) Upvotes: 2
2019/07/10
1,610
6,541
<issue_start>username_0: Imagine you show a neural network a picture of a lion 100 times and label it with "dangerous", so it learns that lions are dangerous. Now imagine that previously you have shown it millions of images of lions and alternatively labeled it as "dangerous" and "not dangerous", such that the probability of a lion being dangerous is 50%. But those last 100 times have pushed the neural network into being very confident in regarding the lion as "dangerous", thus ignoring the last million lessons. Therefore, it seems there is a flaw in neural networks, in that they can change their mind too quickly based on recent evidence. Especially if that previous evidence was in the middle. Is there a neural network model that keeps track of how much evidence it has seen? (Or would this be equivalent to letting the learning rate decrease by $1/T$ where $T$ is the number of trials?)<issue_comment>username_1: Yes, the problem of *forgetting* older training examples is a characteristic of Neural Networks. I wouldn't call it a "flaw" though, because it helps them be more adaptive and allows for interesting applications such as transfer learning (if a network remembered old training too well, fine-tuning it to new data would be meaningless). In practice what you want to do is to mix the training examples for *dangerous* and *not dangerous* so that the network doesn't see one category in the beginning and the other at the end. A standard training procedure would work like this: ``` for e in epochs: shuffle dataset for x_batch, y_batch in dataset: train neural_network on x_batch, y_batch ``` Note that the shuffle at every epoch guarantees that the network won't see the same training examples in the same order every epoch and that the classes will be mixed. Now to answer your question, yes decreasing the learning rate would make the network less prone to *forgetting* its previous training, but how would this work in a non-online setting?
In order for a network to converge it needs multiple epochs of training (i.e. seeing each sample in the dataset many times). Upvotes: 5 <issue_comment>username_2: Yes, indeed, neural networks are very prone to [catastrophic forgetting (or interference)](https://www.sciencedirect.com/science/article/pii/S0079742108605368?via%3Dihub). Currently, this problem is often ignored because neural networks are mainly trained offline (sometimes called *batch training*), where this problem does not often arise, and not [online or incrementally](https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2016-19.pdf), which is [fundamental to the development of *artificial general intelligence*](https://nips.cc/Conferences/2018/Schedule?showEvent=10910). There are some people who work on *continual lifelong learning* in neural networks, which attempts to adapt neural networks to continual lifelong learning, that is, the ability of a model to learn from a stream of data continually, so that it does not completely forget previously acquired knowledge while learning new information. See, for example, the paper [Continual lifelong learning with neural networks: A review](https://www.sciencedirect.com/science/article/pii/S0893608019300231) (2019), by German I. Parisi et al., which summarises the problems and existing solutions related to catastrophic forgetting of neural networks. Upvotes: 7 [selected_answer]<issue_comment>username_3: What you are describing sounds like it could be a deliberate case of [fine-tuning](http://wiki.fast.ai/index.php/Fine_tuning). There is a fundamental assumption that makes [minibatch gradient descent](https://adventuresinmachinelearning.com/stochastic-gradient-descent/) work for learning problems: It is assumed that any batch or temporal window of consecutive batches forms a decent approximation of the true *global* gradient of the error function with respect to any parameterization of the model.
If the error surface itself is moving in a big way, that would thwart the purposes of gradient descent--since gradient descent is a local refinement algorithm, all bets are off when you suddenly change the underlying distribution. In the example you cited, catastrophic forgetting seems like it would be an after-effect of having "forgotten" data points previously seen, and is either a symptom of the distribution having changed, or of under-representation in the data of some important phenomenon, such that it is rarely seen relative to its importance. Experience replay from reinforcement learning is a relevant concept that transfers well to this domain. Here is [a paper](https://arxiv.org/abs/1811.11682) that explores this concept with respect to catastrophic forgetting. As long as sampling represents the true gradients sufficiently well (look at [training sample balancing](https://towardsdatascience.com/handling-imbalanced-datasets-in-deep-learning-f48407a0e758) for this) and the model has enough parameters, the catastrophic forgetting problem is unlikely to occur. In randomly shuffled datasets with replacement, it is most likely to occur where datapoints of a particular class are so rare that they are unlikely to be included for a long time during training, effectively fine-tuning the model to a different problem until a matching sample is seen again. Upvotes: 3 <issue_comment>username_4: Maybe in theory, but not in practice. The thing is, you seem to consider only chronological/sequential training. And there are two ways to view this issue: 1. online learning -> then it is a feature of the method 2. offline learning -> it does not happen thanks to several order randomizations --- 1. Online-Training or [Online Machine Learning](https://en.wikipedia.org/wiki/Online_machine_learning). Using the [Vowpal Wabbit library](https://en.wikipedia.org/wiki/Vowpal_Wabbit).
It is a **feature** (not an issue, as you consider it) of this library to adapt chronologically to the input it is fed with. I insist: it is a feature to adapt chronologically. It is desirable that, when you start by only telling it that lions are dangerous, it adapts accordingly. --- 2. Offline-Training In my personal experience, I have used only **randomized** subsets of my input data as the training set. And this randomization is **crucial**. Randomization happens, namely: * during the training of the neural network, where each epoch generally randomizes the dataset order * during cross-validation, where randomization is used as a way to evaluate a robust model that generalises well and does not overfit Upvotes: 1
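The asker's $1/T$ intuition can be checked directly on a one-parameter version of the lion example. The sketch below (plain Python, purely illustrative) estimates $P(\text{dangerous})$ with the SGD-style update $p \leftarrow p + \eta\,(y - p)$: with a constant $\eta$ the last 100 labels dominate, while with $\eta_t = 1/t$ the update computes exactly the running mean of everything seen, so the estimate barely moves:

```python
# Label stream: 10,000 balanced examples (dangerous / not dangerous,
# alternating), followed by 100 consecutive "dangerous" labels.
labels = [i % 2 for i in range(10_000)] + [1] * 100

# Constant learning rate: recent evidence dominates (forgetting).
p_const = 0.5
for y in labels:
    p_const += 0.1 * (y - p_const)

# Decaying learning rate eta_t = 1/t: the update is exactly a running
# mean of all labels, so the model "remembers" how much it has seen.
p_decay = 0.0
for t, y in enumerate(labels, start=1):
    p_decay += (1.0 / t) * (y - p_decay)

print(f"constant lr: {p_const:.3f}, 1/t lr: {p_decay:.3f}")
assert p_const > 0.9          # the last 100 "dangerous" labels dominate
assert 0.45 < p_decay < 0.56  # stays near the overall mean (~0.505)
```

This also illustrates the trade-off raised in the answers above: the $1/t$ schedule remembers everything but can no longer adapt if the world genuinely changes, which is exactly why forgetting can be a feature in the online setting.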
2019/07/11
2,401
8,253
<issue_start>username_0: The [Wikipedia article for the universal approximation](https://en.wikipedia.org/wiki/Universal_approximation_theorem) theorem cites a version of the universal approximation theorem for Lebesgue-measurable functions from [this conference paper](https://arxiv.org/pdf/1709.02540.pdf). However, the paper does not include the proofs of the theorem. Does anybody know where the proof can be found?<issue_comment>username_1: There are multiple papers on the topic because there have been multiple attempts to prove that neural networks are universal (i.e. they can **approximate** any **continuous** function) from slightly different perspectives and using slightly different assumptions (e.g. assuming that certain activation functions are used). Note that these proofs tell you that neural networks can **approximate** any **continuous** function, but they do not tell you **exactly** how you need to train your neural network so that it approximates your desired function. Moreover, most papers on the topic are quite technical and mathematical, so, if you do not have a solid knowledge of approximation theory and related fields, they may be difficult to read and understand. Nonetheless, below there are some links to some possibly useful articles and papers. The article [A visual proof that neural nets can compute any function](http://neuralnetworksanddeeplearning.com/chap4.html) (by <NAME>) should give you some intuition behind the *universality* of neural networks, so this is probably the first article you should read. Then you should probably read the paper [Approximation by Superpositions of a Sigmoidal Function](https://link.springer.com/content/pdf/10.1007/BF02551274.pdf) (1989), by <NAME>, who proves that **multi-layer perceptrons** (i.e. feed-forward neural networks with at least one hidden layer) can approximate any [continuous function](https://en.wikipedia.org/wiki/Continuous_function). 
However, he assumes that the neural network uses sigmoid activation functions, which, nowadays, have been replaced in many scenarios by ReLU activation functions. Other works (e.g. [[1]](https://archive.nyu.edu/jspui/bitstream/2451/14329/1/IS-92-13.pdf), [[2]](https://pinkus.net.technion.ac.il/files/2021/02/acta.pdf)) showed that you don't necessarily need sigmoid activation functions, but not every activation function preserves universality: for example, neural networks with polynomial activation functions are not universal. The universality property (i.e. the ability to approximate any continuous function) has also been proved in the case of **convolutional neural networks**. For example, see [Universality of Deep Convolutional Neural Networks](https://www.sciencedirect.com/science/article/abs/pii/S1063520318302045) (2020), by <NAME>, which shows that convolutional neural networks can approximate any continuous function to an arbitrary accuracy when the depth of the neural network is large enough. See also [Refinement and Universal Approximation via Sparsely Connected ReLU Convolution Nets](https://ieeexplore.ieee.org/document/9126188) (by <NAME> et al., 2020). See also page 632 of [Recurrent Neural Networks Are Universal Approximators](https://link.springer.com/chapter/10.1007/11840817_66) (2006), by Schäfer et al., which shows that **recurrent neural networks** are universal function approximators. See also [On the computational power of neural nets](http://binds.cs.umass.edu/papers/1992_Siegelmann_COLT.pdf) (1992, COLT) by Siegelmann and Sontag. [This answer](https://stats.stackexchange.com/a/221142/82135) could also be useful.
For **graph neural networks**, see [Universal Function Approximation on Graphs](https://proceedings.neurips.cc//paper/2020/hash/e4acb4c86de9d2d9a41364f93951028d-Abstract.html) (by <NAME>, 2020, NeurIPS) Upvotes: 6 [selected_answer]<issue_comment>username_2: Just wanted to add that the new text *Deep Learning Architectures: A Mathematical Approach* mentions this result, but I'm not sure if it gives a proof. It does mention an improved result by Hanin (<http://arxiv.org/abs/1708.02691>) for which I think it does give at least a partial proof. The original paper by Hanin seems to omit some proofs as well, but the published version (<https://www.mdpi.com/2227-7390/7/10/992/htm>) may be more complete. Upvotes: 0 <issue_comment>username_3: "Modern" Guarantees for [Feed-Forward Neural Networks](https://en.wikipedia.org/wiki/Feedforward_neural_network) ---------------------------------------------------------------------------------------------------------------- My answer will complement username_1's above, which gave a very nice overview of universal approximation theorems for different types of commonly used architectures, by focusing on recent developments specifically for feed-forward networks. I'll try to emphasize [depth](https://ai.stackexchange.com/questions/1742/what-is-the-difference-between-machine-learning-and-deep-learning) over breadth (sometimes called width) as much as possible. *Enjoy!* --- **Part 1: Universal Approximation** *Here I've listed a few recent universal approximation results that come to mind.* Remember, *universal approximation* asks if feed-forward networks (or some other architecture type) can *approximate* any (in this case continuous) function to arbitrary accuracy (I'll focus on the *[uniformly on compacts](https://math.stackexchange.com/questions/560978/what-is-topology-of-compact-convergence)* sense). Let me mention that there are two types of guarantees: *quantitative ones* and *qualitative ones*.
The latter are akin to [Hornik's results (Neural Networks - 1989)](https://www.sciencedirect.com/science/article/pii/0893608089900208), which simply state that some neural networks can approximate a given (continuous) function to arbitrary precision. The former type of guarantee quantifies the number of parameters required for a neural network to actually perform the approximation, and is akin to the breakthrough results of [Barron's (now) classical paper (IEEE - 1993)](https://ieeexplore.ieee.org/document/256500). 1. **Shallow Case:** If you want quantitative results only for *shallow* networks, then [<NAME> and <NAME> (Neural Networks - 2020)](https://www.sciencedirect.com/science/article/pii/S0893608020301891#b9) will do the trick (note: the authors deal with the Sobolev case, but you get the continuous case immediately via the [Sobolev-Morrey embedding theorem](https://mathoverflow.net/questions/127259/sharpness-of-the-sobolev-embedding-theorem)). 2. **Deep (not narrow) [ReLU](https://ai.stackexchange.com/questions/6468/why-do-we-prefer-relu-over-linear-activation-functions) Case:** If you want a quantitative proof for deep networks (but not too narrow) with the ReLU activation function, then [Dmitry Yarotsky's result (COLT - 2018)](http://proceedings.mlr.press/v75/yarotsky18a/yarotsky18a.pdf) will do the trick! 3. **Deep and Narrow:** To the best of my knowledge, the first quantitative proof for deep and narrow neural networks with general input and output spaces has recently appeared here: [https://arxiv.org/abs/2101.05390 (preprint - 2021)](https://arxiv.org/abs/2101.05390).
The article is a constructive version of [<NAME> and <NAME>'s recent deep and narrow universal approximation theorem (COLT - 2020)](http://proceedings.mlr.press/v125/kidger20a.html) (qualitative) for functions from $\mathbb{R}^p$ to $\mathbb{R}^m$ and [<NAME> and <NAME>'s recent Non-Euclidean Universal Approximation Theorem (NeurIPS - 2020).](https://proceedings.neurips.cc/paper/2020/file/786ab8c4d7ee758f80d57e65582e609d-Paper.pdf) --- **Part 2: Memory Capacity** *A related concept is that of* **"memory capacity of a deep neural network"**. These results seek to quantify the number of parameters needed for a deep network to learn (exactly) the assignment of some input data $\{x\_n\}\_{n=1}^N$ to some output data $\{y\_n\}\_{n=1}^N$. For example; you may want to take a look here: 4. **Memory Capacity of Deep ReLU networks:** <NAME>'s very recent publication [Memory Capacity of Neural Networks with Threshold and Rectified Linear Unit Activations - (SIAM's SIMODS 2020)](https://epubs.siam.org/doi/pdf/10.1137/20M1314884) Upvotes: 3
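As a small, concrete companion to the qualitative statements above (and not a reproduction of any of the cited quantitative bounds), a one-hidden-layer ReLU network can represent any piecewise-linear interpolant exactly, which already gives uniform approximation of a continuous function on a compact interval. The construction and the target $f(x) = x^2$ below are illustrative choices, not taken from the cited papers:

```python
def relu(z):
    return max(0.0, z)

def build_relu_net(f, n):
    """One-hidden-layer ReLU network that interpolates f at the
    n + 1 equally spaced knots 0, 1/n, ..., 1 (piecewise linear)."""
    knots = [i / n for i in range(n + 1)]
    slopes = [(f(knots[i + 1]) - f(knots[i])) * n for i in range(n)]
    # hidden unit i turns on at knots[i]; its weight is the slope change there
    weights = [slopes[0]] + [slopes[i] - slopes[i - 1] for i in range(1, n)]
    bias = f(0.0)

    def net(x):
        return bias + sum(w * relu(x - k) for w, k in zip(weights, knots))

    return net

target = lambda x: x * x
net = build_relu_net(target, 10)
err = max(abs(net(i / 1000) - target(i / 1000)) for i in range(1001))
# err equals h^2/4 = 0.0025 here (linear-interpolation error with f'' = 2)
```

Doubling the number of knots roughly quarters the error, matching the $h^2$ behaviour of piecewise-linear interpolation.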
2019/07/14
568
2,417
<issue_start>username_0: I have a continuous state space, and a continuous action space. The way I understand it, I can build a policy network which takes as input a continuous state vector and outputs both the mean vector and the covariance matrix of the action-distribution. To get a valid action I then sample from that distribution. However, when trying to implement such a network, I get the error message that the parts of my output layer which I want to be the covariance matrix are singular/not positive-semi-definite. How can I fix this? I tried different activation-functions and initializations for the last layer, but once in a while I run into the same problem again. How can I enforce that my network outputs a valid covariance matrix?<issue_comment>username_1: Usually it is assumed that there is no correlation between different actions, so the covariance matrix will be zero everywhere except on the main diagonal. The diagonal will represent the variances of the actions. A diagonal covariance matrix will be positive semidefinite if all values on the diagonal are $\geq$ 0, so you need to ensure that the output of the final layer is $\geq$ 0, which can be done with a ReLU activation, for example. Upvotes: 3 [selected_answer]<issue_comment>username_2: @username_1's answer is correct, it is common practice in a multitude of models to learn a representation of an independent multivariate normal, but don't let that stop you from pushing the envelope for your needs. You can actually learn a dependent form as well. Normally the independent form is done by learning the means and standard deviations: you can obtain a draw $z \sim N(\mu, Diag(\sigma^2))$ via $z = \mu + \sigma \epsilon$, where $\epsilon$ is drawn from a unit normal. But you can actually achieve a similar trick for a generalized multivariate normal distribution: let's assume you're trying to learn $N(\mu, \Sigma)$ where $\Sigma$ is the covariance matrix.
So what you would do is learn the [Cholesky factor](https://en.wikipedia.org/wiki/Cholesky_decomposition) $A$: since $\Sigma = AA^T$, you can now draw through the reparametrization trick: $z = \mu + A\epsilon$. Upvotes: 1 <issue_comment>username_3: @username_2, if $\Sigma$ is a covariance matrix, then there exists an $A$ s.t. $\Sigma = AA^T$. But if we generate $A$ with the model and calculate $\Sigma = AA^T$, the Cholesky decomposition will not necessarily succeed. Upvotes: 0
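A minimal numerical sketch of the reparametrisation described above, in pure Python for a 2-D action space (the function names, the softplus choice, and the 2-D sizes are illustrative; a real policy network would emit `raw_diag` and `raw_offdiag` as its outputs). The model parametrises a lower-triangular $A$ with a strictly positive diagonal, so $\Sigma = AA^T$ is a valid covariance matrix by construction, and samples are drawn as $z = \mu + A\epsilon$:

```python
import math
import random

def softplus(r):
    # keeps the diagonal of A strictly positive
    return math.log1p(math.exp(r))

def chol_factor(raw_diag, raw_offdiag):
    """Lower-triangular A for a 2-D Gaussian; Sigma = A A^T is then
    symmetric positive definite by construction."""
    d0, d1 = softplus(raw_diag[0]), softplus(raw_diag[1])
    return [[d0, 0.0], [raw_offdiag, d1]]

def sample_mvn(mu, raw_diag, raw_offdiag, eps=None):
    """Reparametrised draw z = mu + A eps."""
    A = chol_factor(raw_diag, raw_offdiag)
    if eps is None:
        eps = [random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)]
    return [mu[i] + A[i][0] * eps[0] + A[i][1] * eps[1] for i in range(2)]

def covariance(raw_diag, raw_offdiag):
    """The implied Sigma = A A^T, for inspection."""
    A = chol_factor(raw_diag, raw_offdiag)
    return [[sum(A[i][k] * A[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

In PyTorch, for instance, the same idea is exposed through the `scale_tril` argument of `torch.distributions.MultivariateNormal`, which expects exactly such a lower-triangular factor.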
2019/07/15
710
2,450
<issue_start>username_0: From a subjective perspective, the number of documentaries about the subject of Artificial Intelligence and robotics is small. It seems that the topic is hard to visualize for the audience, and in most cases the assumption is that the recipient isn't familiar with computers at all. I've found the following documentaries: * The Computer Chronicles - Artificial Intelligence (1985) * The Machine That Changed the World (1991), Episode IV, The Thinking Machine * Robots Rising (1998) * Rodney's Robot Revolution (2008) The subjective impression is that the quality of the films in the 1980s was higher than in modern documentaries, and in 50% of the documentaries <NAME> is the host. Are more documentaries available which can be recommended to watch? **Focus on non-fictional documentaries** Some fictional movies were already mentioned in a different [post](https://ai.stackexchange.com/questions/8844/what-are-the-most-instructive-movies-about-artificial-intelligence). For example Colossus: The Forbin Project (1970), Blade Runner (1982) or A.I. Artificial Intelligence (2001). They are based on fictional characters which don't exist, and the presented robots are running with a [Hollywood OS](http://wiki.c2.com/?HollywoodOs). This question is only about nonfictional motion pictures.<issue_comment>username_1: The documentary **[Plug & Pray](https://en.wikipedia.org/wiki/Plug_%26_Pray)** (2010), directed and written by <NAME>, with main protagonists <NAME> (the creator of [ELIZA](https://en.wikipedia.org/wiki/ELIZA)) and the futurist <NAME>, is about the promise, problems and ethics of artificial intelligence and robotics. This documentary won several awards, including the Bavarian Film Award 2010 for "best documentary". Here's the [official trailer](https://www.youtube.com/watch?v=ecPEkG2Pclg) of the movie.
Upvotes: 3 [selected_answer]<issue_comment>username_2: [AlphaGo](https://www.rottentomatoes.com/m/alphago) (2017) is quite a good watch: it is a documentary about the AlphaGo program and how DeepMind developed it, including the help they had, and it doesn't get too technical. You can watch the trailer [here](https://www.youtube.com/watch?v=8tq1C8spV_g). Another documentary, which isn't exactly about AI but is an interesting watch (or at least was when it came out), is [The Human Face of Big Data](https://www.youtube.com/watch?v=OV1y6ZUV_Q4). Upvotes: 2
2019/07/15
772
3,205
<issue_start>username_0: The definition of *deterministic environment* I am familiar with goes as follows: > > The next state of the agent depends only on the current state and the action chosen by the agent. > > > By exclusion, everything else would be a *stochastic environment*. However, what about environments where the next state depends deterministically on the history of previous states and actions chosen? Are such environments also considered *deterministic*? Are they very uncommon, and hence just ignored, or should I include them into my working definition of *deterministic environment*?<issue_comment>username_1: It depends on the information provided in the state of the system. In theory, the *history* can be an element of the state, in which case, by the definition you provided: > > The next state of the agent depends only on the current state and the action chosen by the agent. > > > it is a deterministic environment. On the other hand, assume the state has no information about the history, in which case at every point you only know its current status and nothing about where it was previously. In this case, it is a stochastic environment, because you can define a distribution with greater than 0 entropy/uncertainty over possible next states. Upvotes: 0 <issue_comment>username_2: A Markov environment is not about deterministic or stochastic. "Depends only on the current state and your action" does not mean you know what will happen (deterministic). We can have Markov + deterministic, Markov + stochastic, non-Markov + deterministic, and non-Markov + stochastic. The definition you have is not a definition of deterministic. It is a definition of the Markov property. Refer to Wikipedia. > > A stochastic process has the Markov property if the conditional > probability distribution of future states of the process (conditional > on both past and present values) depends only upon the present state; > that is, **given the present, the future does not depend on the past**.
A > process with this property is said to be Markovian or a Markov > process. The most famous Markov process is a Markov chain. Brownian > motion is another well-known Markov process. > > > The Markov property is assumed mostly in stochastic problems. Brownian motion is the motion of molecules of ink in water, and it is used to model the movement of a stock price, which is stochastic. Deterministic means that when you are in the same state and choose the same action, your next state will always be the same. Stochastic means that even if you are in the same state and choose the same action, your next state can be different than the previous time. Example) You toss a coin and roll a die. Every time you roll the die, you get as many pennies as the number rolled. If the coin comes up heads, you get a chance to roll the die twice next time. Your state can be (money collected so far, coin heads/tails from the previous time). In this problem, your next state will not be affected by the past. The only thing you need to know is the current state: the money you got, and heads or tails. It has a **Markov** process/environment. However, it is still **stochastic** because you don't know what the next state will be. Upvotes: 2
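The coin-and-die example above can be written out as a tiny environment step. Everything needed to advance is the current state `(money, previous_coin_was_heads)`, which is what makes it Markov, while the coin and die keep it stochastic. A sketch (reading "pennies as many" as: each die roll pays its face value in pennies):

```python
import random

def step(state, rng):
    """One step of the coin-and-die environment: state is
    (money, previous_coin_was_heads); no earlier history is needed."""
    money, prev_heads = state
    rolls = 2 if prev_heads else 1          # heads earned an extra roll
    money += sum(rng.randint(1, 6) for _ in range(rolls))
    heads = rng.random() < 0.5              # toss the coin for next time
    return (money, heads)
```

From a given state the next state is reproducible only if the random source is fixed; from the state alone it is a draw from a distribution: Markov and stochastic at the same time.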
2019/07/16
814
3,217
<issue_start>username_0: Tay was a chatbot, who learned from Twitter users. > > Microsoft's AI fam from the internet that's got zero chill. The more you talk the smarter Tay gets. — Twitter tagline. > > > Microsoft trained the AI to have a basic ability to communicate, and taught it a few jokes from hired comedians before setting it loose to learn from its conversations. This was a mistake. But why did Tay go so wrong? Was this an example of [catastrophic forgetting](https://ai.stackexchange.com/q/13289/125), where short, recent trends override large, less recent training, or was it something else entirely?
2019/07/19
790
3,169
<issue_start>username_0: I've heard that [prediction is equivalent to data compression](https://www.reddit.com/r/artificial/comments/5r4vwa/data_compression_as_the_key_to_agi/). Is there a way to take a compression function and use it to create an AI that predicts?<issue_comment>username_1: The way some (not all) compression algorithms work is that they encode frequent events in a short code, and rarer events with a longer code. Overall you save more space by encoding the common elements than you need to expend coding the rare ones. One example of this is a [Huffman code](https://en.wikipedia.org/wiki/Huffman_coding), which uses a variable-length encoding based on the frequency of the items. You can use a compression algorithm for prediction if it encodes more than one event at a time. For example, word pairs rather than individual words. Each word pair will have a code, and the common word pairs (eg *of the*) will have shorter codes than the ones which are less common (eg *of three*). For prediction, select all the word pairs that start with your known sequence (eg *of*). Now select from that list the pair with the shortest code (which is more common), so in this example *of* would more likely be followed by *the* rather than *three*. After that, repeat the process with the next word, so look for pairs that begin with *the*. All you need is the compression 'code book' which is produced during the compression process -- it's essentially a model of the data you compressed. This also works for longer sequences than pairs, of course. If you want to know more about the topic, I can recommend *Managing Gigabytes* by Witten, Moffat, and Bell. Great book on compression techniques. Upvotes: 2 <issue_comment>username_2: Although this does not strictly answer your question (but it is at least very related), <NAME> has some interesting ideas about compression and how it relates to artificial intelligence, prediction, curiosity, etc.
For example, have a look at this paper [Simple Algorithmic Theory of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes](http://people.idsia.ch/~juergen/sice2009.pdf), whose abstract states > > In this summary of previous work, I argue that data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively more "beautiful". Curiosity is the desire to create or discover more non-random, non-arbitrary, "truly novel", regular data that allows for compression progress because its regularity was not yet known. This drive maximizes "interestingness", the first derivative of subjective beauty > or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and recent artificial systems. > > > See also his interesting talk [<NAME>: The Algorithmic Principle Beyond Curiosity and Creativity](https://www.youtube.com/watch?v=h7F5sCLIbKQ) on YouTube. Upvotes: 0
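username_1's word-pair scheme can be sketched directly with counts instead of codes: in a Huffman-style code book the most frequent pair gets the shortest code, so "pick the shortest code" is the same as "pick the most frequent continuation". A toy version (the corpus is invented for illustration):

```python
from collections import Counter

def build_pair_model(words):
    """The 'code book': counts of adjacent word pairs; a higher count
    would correspond to a shorter Huffman code."""
    return Counter(zip(words, words[1:]))

def predict_next(model, word):
    """Among pairs starting with `word`, pick the most frequent
    continuation (i.e. the one that would have the shortest code)."""
    candidates = {b: c for (a, b), c in model.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

corpus = "the cat sat on the mat and the cat ran".split()
model = build_pair_model(corpus)
# predict_next(model, "the") -> "cat", since ("the", "cat") occurs twice
```

The same loop extends to longer sequences by counting triples or longer n-grams instead of pairs, as the answer notes.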
2019/07/19
913
3,246
<issue_start>username_0: To me, most ANN/RNN related articles don't tell me actually how the network is implemented. I know that in the ANN you'll have multiple neurons, activation function, weights, etc. But, how do you, actually, in each neuron, convert the input to the output? Putting activation function aside, is the neuron simply doing $\text{input}\*a+b=\text{output}$, and trying to find the correct $a$ and $b$? If it's true, then how about where you have two neurons and their output ($c$ and $d$) is pointing to one neuron? Do you first multiply $c$ and $d$ then feed it in as input?<issue_comment>username_1: The basic calculation for a single neuron is of the form $$\sigma\left(\sum\_{i} x\_i w\_i \right),$$ where $x\_i$ are the inputs to the neuron, $w\_i$ are the neuron-specific weights for *every single* input, and $\sigma$ is the pre-specified activation function. In your terms, and disregarding the activation function, the calculation would turn out to be $$c\,a\_c + d\,a\_d + b$$ Note that the bias term $b$ is just a weight that gets multiplied by the input $1$, thus it appears to have no input. If you want to develop a further understanding for this, you should try to get familiar with matrix and vector notations and the basic linear algebra that underlies feed-forward neural networks.
See below: [![enter image description here](https://i.stack.imgur.com/vkQkI.png)](https://i.stack.imgur.com/vkQkI.png) So I'm fairly sure it is as you suspected: at each neuron, you take the **sum** of the inputs of the previous layer multiplied by the weight that connects that specific input to said neuron, where each input has its own unique weight for every one of its outgoing connections. With a bias, you would do the exact same math as shown in the above image, but once you find the final value (0.2, -0.15, 0.16 and -0.075; the output layer doesn't have a bias) you would add the bias to the total value. So see below for an example including a bias: [![enter image description here](https://i.stack.imgur.com/Tyxsy.png)](https://i.stack.imgur.com/Tyxsy.png) **NOTE** I did not update the outputs at each layer to include the bias because I can't be bothered redrawing this in paint. Just know that the final values for all the nodes with the brown bias haven't carried over to the next layer. Then, if you were to include an activation function, you would finally take your value and put it through it. So, including the biases, looking at node 1 of layer 2, it would be (let's pretend your activation function is a sigmoid): ``` sigmoid((0.4*0.5)+0.2) ``` and for layer 3 node 2: ``` sigmoid(((0.6*0.2)+(1.3*-0.15))-0.4) ``` That is how you would do a forward pass of a simple neural network. Upvotes: 3 [selected_answer]
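The two worked expressions at the end of that answer can be checked with a few lines; a minimal sketch (the sigmoid is the standard logistic function, and the weights and biases are the ones appearing in those expressions):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus bias, passed through the activation."""
    return sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias)

# layer 2, node 1 from the example: sigmoid((0.4*0.5) + 0.2)
a = neuron([0.4], [0.5], 0.2)
# layer 3, node 2 from the example: sigmoid((0.6*0.2) + (1.3*-0.15) - 0.4)
b = neuron([0.6, 1.3], [0.2, -0.15], -0.4)
```

A full forward pass is just this computation repeated for every neuron in every layer, each layer consuming the previous layer's outputs.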
2019/07/19
1,026
3,789
<issue_start>username_0: Say I have a ML model which is not very costly to train. It has around say 5 hyperparameters. One way to select best hyperparameters would be to keep all the other hyperparameters fixed and train the model by changing only one hyperparameter within a certain range. For the sake of mathematical convenience, we assume for the hyperparameter $h^1$, keeping all other hyperparameters fixed to their initial values, the model performs best when $h^1\_{low} < h^1 < h^1\_{high}$ (which we found out by running the model on a huge range of $h^1$). Now we fix $h^1$ to one of the best values and tune $h^2$ the same way, where $h^1$ is chosen and the rest of the hyperparameters are again fixed on their initial values. My question is: Does this method find the best hyperparameter choices for the model? I know if the hyperparameters are independent, then this definitely does find the best solution, but in a general case, what is the general theory around this? (NOTE: I am not asking about the problem of choosing hyperparameters, but I am asking about the aforementioned approach of choosing hyperparameters)
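The coordinate-wise procedure described in the question (tune $h^1$ over a grid with everything else fixed, fix it, then tune $h^2$, and so on) can be sketched in a few lines. On a toy objective with an interaction term it also shows why a single pass need not reach the joint optimum; the objective and grids here are invented purely for illustration:

```python
def coordinate_search(score, grids, start):
    """Tune one hyperparameter at a time over its grid, holding the
    others fixed at their current values (one pass, as in the question)."""
    setting = dict(start)
    for name, grid in grids.items():
        setting[name] = max(grid, key=lambda v: score({**setting, name: v}))
    return setting

# toy score with a strong interaction between h1 and h2 (invented)
def score(h):
    return 5 * h["h1"] * h["h2"] - h["h1"] - h["h2"]

grids = {"h1": [0, 1], "h2": [0, 1]}
result = coordinate_search(score, grids, {"h1": 0, "h2": 0})
# result stays at {"h1": 0, "h2": 0} (score 0), while the joint optimum
# {"h1": 1, "h2": 1} scores 3: one coordinate pass can miss interactions
```

With independent (non-interacting) hyperparameters the interaction term vanishes and the single pass does recover the grid optimum, matching the intuition stated in the question.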
2019/07/19
1,657
6,378
<issue_start>username_0: I am unsure about the following parts of the architecture and mechanics of convolution layers in CNNs. Possibly, this is implementation-dependent though. **First question:** Say I have 2 convolution layers with 10 filters each and the dimension of my input tensors is $n \times m \times 1$ (so, grayscale images for example). Passing this input to the first convolution layer results in 10 feature maps (10 matrices of $n \times m$, if we use padding), each produced by a different filter. Now, what does actually happen when this is passed to the second convolution layer? Are all 10 feature maps passed as one big $m \times n \times 10$ tensor or are the overlapping cells of the 10 feature maps averaged and a $m \times n \times 1$ tensor is passed to the next convolution layer? The former would result in an explosion of feature maps with an increasing number of convolution layers and the spatial complexity would be in $\mathcal{O}\left((nm)^k\right)$, where $k$ is the number of chained convolution layers. Averaging the feature maps before passing them to the next layer would keep the complexity linear. So, which is it? Or are both possibilities commonly used? **Second question** (with two sub questions)**:** a) This is a similar question. If I have an input volume of $n \times m \times 3$ (e.g. RGB images) and I have again 2 convolution layers with 10 filters, does each convolution layer have in actuality 30 filters? So 10 sets of 3 filters, one for each channel? Or do I have in fact only 10 filters and the filters are applied to all 3 channels? b) This is the same question as question (1) but for channels: Once I have convolved a filter (consisting of three channel filters? (a)) over the input tensor I end up with 3 feature maps. One for each channel. What do I do with these? Do I average them component-wise with each other?
Or do I keep them separate until I have convolved all 10 filters across the input and THEN average the 10 feature maps of each channel? Or do I average all 30 feature maps of all three channels? Or do I just pass on 30 feature maps to the next convolution layer, which in turn knows which of these feature maps belong to which channel? Quite a few possibilities... None of the sources I consulted makes this explicit. Maybe because it depends on the individual implementation. Anyway, would be great if somebody could clear this confusion up a little!<issue_comment>username_1: Answers: 1. Generally it's the former. The next layer would learn, at each filter, how to merge the channels of the previous layer; that is why in a 2D convolution each filter's kernel is a 3-dimensional tensor. But the number of parameters is $nmc\_ic\_{i+1}$ at the $i^{th}$ layer (this is ignoring bias). If we assume all channels are $O(c)$, then the spatial complexity becomes $O(knmc^2)$ where $k$ is the number of layers. 2. *a)* The first convolutional layer would have a kernel of size (w,h,3,10) where w and h are the kernel sizes of the 2d convolution (often 3 in practice). So there are 10 filters of size (w,h,3), but the # of parameters is w\*h\*30 (once again for ease, ignoring bias). The second layer though, since it is working on a layer with 10 channels, will have kernel (w,h,10,10). *b)* I think you need to go back and look at what a convolution does (in your setting specifically a 2D convolution). Each **filter** works on every **channel** of the previous layer. Each **channel** of the last convolutional layer refers to a **single** filter that convolved over the entire previous layer.
This pixel has $C$ different values, where $C$ is the number of channels. In order to produce the result of a single filter, the layer takes the weighted sum of these $C$ pixels. It does this by actually having $C$ weights, multiplying the pixel values with their corresponding weights and summing them together. ### Example Let's say you have an $n \times m \times 10$ tensor as an input to a convolutional layer with $1$ filter and a $3 \times 3$ kernel. For creating its output the layer has $3 \times 3 \times 10 = 90$ weights, i.e. a different $3 \times 3$ kernel for each of the $10$ input channels. To create its output, the layer performs the convolution operation **separately** on each of the input channels (each with its corresponding weight matrix) and this way creates $10$ feature maps which are **summed** together. Now imagine the layer has $20$ filters instead of $1$. Nothing changes, just that the same procedure is done $20$ times with $20$ different sets of weights. So the total number of weights in the layer in this case is $3 \times 3 \times 10 \times 20 = 1800$. ### To answer your questions... **(1)** You have a grayscale image $n \times m \times 1$ and it passes through the first convolution layer (which has $10$ filters). This layer will perform the convolution operation on the input image $10$ times **independently** and produce $10$ feature maps, i.e. an output tensor of $n \times m \times 10$. Now this is fed into the second convolution layer, which again has $10$ filters. **For each** filter the layer will perform the convolution operation on the $10$ input feature maps independently and will sum the corresponding pixels together to form an $n \times m \times 1$ feature map. It will perform this procedure $10$ times and generate an output tensor of $n \times m \times 10$. **(2)** Yes, each layer actually has $30$ filters, a total of $10$ for each of the R, G, B channels.
The layer just sums the results of the R, G, B channels to produce a single feature map. For the second part, if I got it right, it's pretty much what you said, but it sums the maps instead of averaging them. ### Notes I'd recommend checking [Stanford's CS231 notes on convolution layers](http://cs231n.github.io/convolutional-networks/), which explain it in much detail and also have numerical examples to confirm if you've got it right. You could also check [this answer](https://stats.stackexchange.com/a/335327/119015) for more details. Upvotes: 3 [selected_answer]
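The channel bookkeeping in both answers can be verified with a tiny direct implementation: each filter carries one $kh \times kw$ kernel per input channel, the per-channel results are summed into a single feature map, and the weight count (ignoring bias) is $kh \cdot kw \cdot C_{in} \cdot C_{out}$, matching the $3 \times 3 \times 10 \times 20 = 1800$ example above. A minimal pure-Python sketch:

```python
def conv2d(image, kernels):
    """Naive 2-D convolution, stride 1, no padding.
    image: [C_in][H][W]; kernels: [C_out][C_in][kh][kw].
    Each output channel sums its per-input-channel convolutions."""
    c_in, H, W = len(image), len(image[0]), len(image[0][0])
    kh, kw = len(kernels[0][0]), len(kernels[0][0][0])
    return [[[sum(k[c][i][j] * image[c][y + i][x + j]
                  for c in range(c_in)
                  for i in range(kh)
                  for j in range(kw))
              for x in range(W - kw + 1)]
             for y in range(H - kh + 1)]
            for k in kernels]

def n_weights(kh, kw, c_in, c_out):
    # one kh x kw kernel per (input channel, filter) pair, bias ignored
    return kh * kw * c_in * c_out
```

For instance, a filter of 1x1 kernels that are all ones reduces to summing the input channels pixel-wise, which is exactly the "summed together" step described in the second answer.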
2019/07/23
1,162
4,318
<issue_start>username_0: Why do people use the $PReLU$ activation? $PReLU[x] = ReLU[x] + ReLU[p\*x]$ with the parameter $p$ typically being a small negative number. If a fully connected layer is followed by a at least two element $ReLU$ layer then the combined layers together are capable of emulating exactly the $PReLU$, so why is it necessary? Am I missing something?<issue_comment>username_1: Answers: 1. Generally its the former. The next layer would learn at each filter how to merge the channels of the previous layer, that is why in a 2D convolution the kernel is a 3-dimensional tensor. But the number of parameters is $nmc\_ic\_{i+1}$ at the $i^{th}$ layer (this is ignoring bias). lets assume all channels are $O(c)$ then the spatial complexity becomes $O(knmc^2)$ where $k$ is the number of layers. 2. *a)* The first convolutional filter would have kernel size of (w,h,3,10) where w and h are the kernel sizes of the 2d convolution (often in practice is 3). So there are 10 filters of size (w,h,3), but the # of parameters is w\*h\*30 (once again for ease, ignoring bias). The second layer though, since it is working on a layer with 10 channels, will have kernel (w,h,10,10). *b)* I think you need to go back and look at what a convolution does (in your setting specifically a 2D convolution). Each **filter** works on every **channel** of the previous layer. Each **channel** of the last convolutional layer refers to a **single** filter that convolved over the entire previous layer to that. Upvotes: 2 <issue_comment>username_2: ### tl;dr It helps to think that the *channels* dimension of a convolutional layer works like a **fully connected** layer (i.e. the layer computes the weighted sum over all channels). ### For a single pixel... Let's consider a single pixel (e.g. the top left pixel). This pixel has $C$ different values, where $C$ are the number of channels. In order to produce the result of a single filter, the layer takes the weighted sum of these $C$ pixels. 
It does this by actually having $C$ weights, multiplying the pixel values with their corresponding weights and summing them together. ### Example Let's say you have an $n \times m \times 10$ tensor as an input to a convolutional layer with $1$ filter and a $3 \times 3$ kernel. For creating its output the layer has $3 \times 3 \times 10 = 90$ weights, i.e. a different $3 \times 3$ kernel for each of the $10$ input channels. To create its output, the layer performs the convolution operation **separately** on each of the input channels (each with its corresponding weight matrix) and creates this way $10$ feature maps which are **summed** together. Now imagine the layer has $20$ filters instead of $1$. Nothing changes, just that the same procedure is done $20$ times with $20$ different sets of weights So the total number of weights in the layer in this case is $3 \times 3 \times 10 \times 20 = 1800$. ### To answer your questions... **(1)** You have a grayscale image $n \times m \times 1$ and it passes through the first convolution layer (which has $10$ filters). This layer will perform the convolution operation on the input image $10$ times **independently** and produce $10$ feature maps, i.e. an output tensor of $n \times m \times 10$. Now this in fed into the second convolution layer, which again has $10$ filters. **For each** filter the layer will perform the convolution operation on the $10$ input feature maps independently and will sum the corresponding pixels together to form a $n \times m \times 1$ feature map. It will perform this procedure $10$ times and generate an output tensor of $n \times m \times 10$. **(2)** Yes, each layer actually has $30$ filters, a total of $10$ for each of the R, G, B channels. The layer just sums the results of the R, G, B channels to produce a single feature map. 
For the second part, if I got it right, it's pretty much what you said, but it sums the maps instead of averaging them. ### Notes I'd recommend checking [Stanford's CS231 notes on convolution layers](http://cs231n.github.io/convolutional-networks/), which explain it in much detail and also have numerical examples to confirm if you've got it right. You could also check [this answer](https://stats.stackexchange.com/a/335327/119015) for more details. Upvotes: 3 [selected_answer]
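The weight counts worked out in the example above can be sanity-checked with a one-line helper (illustrative code only, not from the original answer; biases are ignored, as in the answer):

```python
def conv2d_weight_count(kh, kw, c_in, c_out):
    """Weights in a 2D convolution layer, ignoring biases:
    one kh x kw kernel per (input channel, filter) pair."""
    return kh * kw * c_in * c_out

# 1 filter with a 3x3 kernel on a 10-channel input: 90 weights
print(conv2d_weight_count(3, 3, 10, 1))   # 90
# 20 filters on a 10-channel input: 1800 weights
print(conv2d_weight_count(3, 3, 10, 20))  # 1800
```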
2019/07/23
898
3,484
<issue_start>username_0: In [this answer](https://ai.stackexchange.com/a/13543/2444) to the question [Is an optimization algorithm equivalent to a neural network?](https://ai.stackexchange.com/q/13524/2444), the author stated that, in theory, there is some recurrent neural network that implements a given optimization algorithm. If so, then can we optimize the optimization algorithm?<issue_comment>username_1: First, you need to consider what are the "parameters" of this "optimization algorithm" that you want to "optimize". Let's take the most simple case, a SGD without momentum. The update rule for this optimizer is: $$ w\_{t+1} \leftarrow w\_{t} - a \cdot \nabla\_{w\_{t}} J(w\_t) = w\_{t} - a \cdot g\_t $$ where $w\_t$ are the weights at iteration $t$, $J$ is the cost function, $g\_t = \nabla\_{w\_{t}} J(w\_t)$ are the gradients of the cost function w.r.t $w\_t$ and $a$ is the learning rate. An optimization algorithm accepts as its input the weights and their gradients and returns the update. So we could write the above equation as: $$ w\_{t+1} \leftarrow w\_{t} - SGD(w\_t, g\_t) $$ The same is true for all optimization algorithms (e.g. Adam, RMSprop, etc.). Now our initial question was what are the **parameters of the optimizer**, which we want to optimize. In the simple case of the SGD, the sole parameter of the optimizer is the **learning rate**. The question that arises at this point is *can we optimize the learning rate of the optimizer during training?* Or more practically, can we compute this derivative? $$ \frac{\partial J(w\_t)}{\partial a} $$ This idea was explored in [this paper](https://arxiv.org/pdf/1703.04782.pdf), where they coin this technique "hypergradient descent". I suggest you take a look. Upvotes: 4 [selected_answer]<issue_comment>username_2: We usually optimize **with respect to** something. For example, you can train a neural network to locate cats in an image. 
This operation of locating cats in an image can be thought of as a function: given an image, a neural network can be trained to return the position of the cat in the image. In this sense, we can optimize a neural network with respect to this task. However, if a neural network represents an optimization algorithm, then, if you change it a little bit, it will no longer be the same optimization algorithm: it might be another optimization algorithm or some other different algorithm. For example, most optimization algorithms that are used to train neural networks (like Adam) are a variation of gradient descent (GD). If you think that Adam performs better than GD, then you could say that Adam is an optimization of GD. So, Adam performs better than GD with respect to something. Possibly, GD also performs better than Adam with respect to something else. Of course, this is a little bit of a stretch. Upvotes: 2 <issue_comment>username_3: It does not seem very useful to apply a local minima search (such as SGD) to another local minima search. Existing successful solutions combine global minima search techniques with local minima search. For example, it's beneficial to combine simulated annealing with SGD to optimize its learning rate and/or Nesterov momentum. In this case, you don't even need to spawn a population of SGD optimizers. But you can also try population-based algorithms like evolutionary programming. The idea of optimizing optimizers is very curious, but it's rather useful to try it on global optimization algorithms. Upvotes: 0
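To make the hypergradient idea from the accepted answer concrete, here is a toy one-parameter sketch (my own illustration, not the paper's implementation). Since $w\_t = w\_{t-1} - a \cdot g\_{t-1}$, we have $\partial J(w\_t)/\partial a = -g\_t \cdot g\_{t-1}$, so the learning rate itself can be updated by gradient descent:

```python
def hypergradient_sgd(grad, w, a=0.01, beta=1e-4, steps=100):
    """SGD that also adapts its own learning rate a.
    dJ(w_t)/da = -g_t * g_{t-1}, so we move a against that gradient."""
    g_prev = 0.0
    for _ in range(steps):
        g = grad(w)
        a += beta * g * g_prev  # hypergradient update of the learning rate
        w -= a * g              # ordinary SGD update of the parameter
        g_prev = g
    return w, a

# Minimise J(w) = w^2 (gradient 2w) starting from w = 5: the learning
# rate grows from 0.01 while consecutive gradients keep pointing the
# same way, so convergence speeds up.
w, a = hypergradient_sgd(lambda w: 2 * w, 5.0)
```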
2019/07/26
1,359
5,783
<issue_start>username_0: I have a very outdated idea about how NLP tasks are carried out by normal RNNs, LSTMs/GRUs, word2vec, etc. to basically generate some hidden form of the sentence understood by the machine. One of the things I have noticed is that in general researchers are interested in generating the context of the sentence, but oftentimes ignore punctuation marks, which is one of the most important aspects for generating context. For example: > > “Most of the time, travellers worry about their luggage.” > > > “Most of the time travellers worry about their luggage” > > > [Source](https://cybertext.wordpress.com/2012/11/22/a-light-hearted-look-at-how-punctuation-can-change-meaning/) Like this, there exist probably 4 important punctuation marks `.,?` and `!`. Yet, I have not seen any significant tutorials/blogs on them. It is also interesting to note that punctuation marks don't have a meaning (quite important, since most language models try to map a word to a numerical value/meaning); they are more of a 'delimiter'. So what is the current theory or perspective on this? And why is it ignored?<issue_comment>username_1: Language models almost always map every word to an embedding. There are many embedding algorithms, with most of them having interpolation properties, i.e. if $E(word)$ represents the embedding of a word, then $E(king)-E(male)+E(female) \sim E(queen)$. The smoother the interpolation properties, the better the model understands the word; these properties don't exactly make much sense when it comes to delimiters. Yet, there are instances where a delimiter embedding is learned (it always has an embedding). When using these, first all punctuation in the text is converted to one specific word, say 'dlmt', and the embedding algorithm learns an embedding for this word, treating it as it would any other word. This maintains interpolation properties, where the delimiter is understood to be a word that is used to split context.
I have observed that delimiters such as question marks or exclamation marks at the end of a sentence are also understood to be breaks in context; in these cases, the model learns whether the statement is a question just from the context given by the words and the stops in the sentence. Upvotes: 1 <issue_comment>username_2: You are right. Approaches that map words solely to meaning do fail in this regard. Nonetheless, Word2Vec and GloVe have shown wonderful downstream results. This in itself may indicate that most of the time, punctuation's addition can be interpolated. But as you provided, there are cases where this just is not true! Nowadays I would say most models actually use almost **NO** preprocessing. This is surprising, but it's due to the rise in power of learnable, reversible tokenizations. Some examples of these include [byte pair encoding (bpe)](https://en.wikipedia.org/wiki/Byte_pair_encoding) and the [sentence piece model (spm)](https://arxiv.org/abs/1808.06226). State-of-the-art NLP generally relies on these. Examples include [BERT](https://arxiv.org/abs/1810.04805) and [GPT2](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf), which are general-purpose pretrained language models. Their ability to parse and *understand* (I use this word loosely) a wide variety of phrasing, spelling and more can be partially due to the freedom in the preprocessing. **Takeaway:** You can achieve good results by using preprocessing in a manner that will eliminate information but keep the meat and bones that you are interested in (but this requires domain knowledge paired with optimization experience), but the field seems to be gearing towards models that are more inclusive, more transferable, and don't have the problems you mention by design.
Upvotes: 2 <issue_comment>username_3: This is a somewhat provocative view, so be warned (and please don't down-vote this if you feel provoked by it!): In the "old days", when information retrieval (IR) was one of the main tasks in NLP, several categories of words were ignored as *stopwords*: conjunctions, determiners, prepositions, etc. These function words do not carry meaning themselves, but organise the structure of sentences. Most IR algorithms worked on frequencies of individual words, and as function words are very frequent (*of* and *the* are the two most frequent English words) and don't mean anything by themselves, they were ignored. This kept the index files small and didn't seem to influence the results. However, if you want to analyse sentences themselves, they are rather important. They are also useful for all sorts of other tasks where you are looking at sequences of words (e.g. part-of-speech tagging based on context). Similarly for word embeddings: without function words you'd not have any meaningful context to work with. So, increasingly you would not ignore function words anymore. My suspicion is that punctuation is now in the 'stopword position': it's not too clear how it influences meaning, and is often inconsistent or redundant (obviously not in all cases). So you can probably treat it as 'noise' and get away with it for most applications. For example, looking at meanings of words, it probably doesn't matter that much whether the sentence they occurred in was a question or an exclamation. By removing punctuation (maybe apart from sentence-terminators), your model is a bit smaller and you don't lose much. Since punctuation is purely a property of written language, we can generally get away without it, as we do in speech. A text without punctuation might be harder to read, because we're not used to it, but don't forget that some writing systems (Chinese, Egyptian hieroglyphics, ...)
don't even have spaces between words — and people can still use them without problems. Upvotes: 1
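As a minimal illustration of the 'dlmt' normalization described in the first answer (the token name and the punctuation set here are arbitrary choices):

```python
import re

def normalize_delimiters(text, token="dlmt"):
    """Replace every punctuation mark with one placeholder word, so an
    embedding algorithm can learn a single 'delimiter' vector for it."""
    return re.sub(r"[.,!?;:]", f" {token} ", text).split()

print(normalize_delimiters("Most of the time, travellers worry about their luggage."))
# ['Most', 'of', 'the', 'time', 'dlmt', 'travellers', 'worry',
#  'about', 'their', 'luggage', 'dlmt']
```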
2019/07/26
1,298
5,655
<issue_start>username_0: I'm very new to AI. I read somewhere that AI can be used to create GUI UI/UX design. That has fascinated me for a long time. But, since I'm very new here, I don't have any idea how it can happen. The usual steps to create the UI Design are: * Create Grids. * Draw Buttons/Text/Boxes/Borders/styles. * Choose Color Schemes. * Follow CRAP Principle (Contrast, Repetition, Alignment, Proximity) I wonder how AI algorithms can help with that. I know a bit of neural networking and the closest I can think of is the following two methods (Supervised Learning). 1. Draw grids manually and train the Software manually to learn proper styles until it becomes capable of giving modern results and design its own design language. 2. Take a list of a few websites (for example) from the internet and let the software learn and explore the source code and CSS style sheets and learn and program neurons manually until it becomes capable of making its own unique styles.
2019/07/29
1,411
5,369
<issue_start>username_0: I would like to incrementally train my model with my current dataset and [I asked this question on Github](https://github.com/tensorflow/models/issues/7200#issuecomment-510850230) about it (I'm using SSD MobileNet v1). Someone there told me about [**learning without forgetting**](https://arxiv.org/abs/1606.09282). I'm now confused between *learning without forgetting* and *transfer learning*. How do they differ from each other? My initial problem, i.e. what I'm trying to achieve (mentioned in the Github issue), is the following. I have trained my dataset on the `ssd_mobilenet_v1_coco` model. I'm getting continuous incremental data. Right now, my dataset is very limited. What I want to achieve is *incremental training*, i.e. as soon as I get new data, I can further train my already trained model and I don't have to retrain everything: 1. Save trained model $M\_t$ 2. Get new data $D\_{t+1}$ 3. Train $M\_t$ on $D\_{t+1}$ to produce $M\_{t+1}$ 4. Let $t = t+1$, then go back to $1$ How do I perform this incremental training/learning? Should I use LwF or transfer learning?<issue_comment>username_1: > > What I want to achieve is incremental training. So, as soon as I get new data, I can further train my already trained model and I don't have to retrain everything. > > > Learning without forgetting is one of the methods for multitask learning. If your model was trained to solve problem A and some time later you need it to solve a new problem B without forgetting problem A (i.e. the model is still good at solving problem A), then you need this. Transfer learning is a method to use a trained model to solve another task (and it may forget the original task). For example, you use a model that was originally trained to classify cats or dogs for a new task of classifying goats or cows. You use this in the hope of speeding up your training process. If your new data has the same task as the old data, you don't need to use a multitask learning method.
For example: * if your model was trained with 50 images to detect an apple in an image, and then you get 100 new images to detect an apple, then you just need to continue your training (incremental learning). In this case, you need to save the latest parameters of your model after training (latest learning rate value, epoch, etc.); if you have them, then you just need to run your training again (continue the epochs). * if your model was trained with 100 images to detect an apple in an image, and then you get 100 new images to train your model to detect an orange and **you don't care if your model will give a bad result when detecting an apple**, then you can use transfer learning. You may freeze the first few layers as an "extractor" and initialize a new layer at the end. * if your model was trained with 100 images to detect an apple in an image, and then you get 100 new images to detect an orange and your model must be good at detecting both an apple and an orange in an image, then you use multitask learning. The easiest method is to train your model with the apple+orange images, but you can also use another approach like the one proposed in the Learning without Forgetting paper. Upvotes: 2 <issue_comment>username_2: Learning without Forgetting (LwF) is an incremental learning (sometimes also called continual or lifelong learning) technique for neural networks, which is a machine learning technique that attempts to avoid [catastrophic forgetting](https://ai.stackexchange.com/a/13293/2444). There are several incremental learning approaches. LwF is an incremental learning approach based on the concept of [*regularization*](https://arxiv.org/pdf/1710.10686.pdf). In section 3.2 of the paper [Continual lifelong learning with neural networks: A review](https://www.sciencedirect.com/science/article/pii/S0893608019300231) (2019), by Parisi et al., other regularisation-based continual learning techniques are described.
LwF could be seen as a combination of [distillation networks](https://arxiv.org/abs/1503.02531) and [fine-tuning](https://arxiv.org/abs/1311.2524), which refers to the re-training with a low learning rate (which is a very rudimentary technique to avoid catastrophically forgetting the previously learned knowledge) an already trained model $\mathcal{M}$ with new and (usually) more specific dataset, $\mathcal{D}\_{\text{new}}$, with respect to the dataset, $\mathcal{D}\_{\text{old}}$, with which you originally trained the given model $\mathcal{M}$. LwF, as opposed to other continual learning techniques, only uses the new data, so it assumes that past data (used to pre-train the network) is unavailable. The paper [Learning without Forgetting](https://arxiv.org/abs/1606.09282) goes into the details of the technique and it also describes the concepts of *feature extraction*, *fine tuning* and *multitask learning*, which are related to incremental learning techniques. *What is the difference between LwF and transfer learning?* LwF is a combination of distillation networks and fine-tuning, which is a [transfer learning](https://www.cse.ust.hk/~qyang/Docs/2009/tkde_transfer_learning.pdf) technique, which is a special case of incremental learning, where the old and new tasks are different, while, in general, in incremental learning, the old and new tasks can also be the same (which is called [domain adaptation](https://arxiv.org/abs/1606.09282)). Upvotes: 3
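As a rough sketch of the distillation term that LwF adds to the new-task loss (a simplification: the paper records the old network's outputs on the new data and penalises the new network for drifting away from them; the temperature $T$ and the weighting of this term are hyperparameters):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax."""
    exps = [math.exp(x / T) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(old_logits, new_logits, T=2.0):
    """Cross-entropy between the old model's softened outputs (used as
    targets) and the new model's softened outputs: the 'don't forget'
    term. It is smallest when the new model has not drifted at all."""
    p_old = softmax(old_logits, T)
    p_new = softmax(new_logits, T)
    return -sum(po * math.log(pn) for po, pn in zip(p_old, p_new))

# An LwF-style total objective would look roughly like:
#   loss = new_task_loss + lam * distillation_loss(old_out, new_out)
```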
2019/07/29
1,222
4,761
<issue_start>username_0: I am working with [the Inception ResNet V2 model](https://ai.googleblog.com/2016/08/improving-inception-and-image.html), pre-trained with ImageNet, for face recognition. However, I'm confused about what the exact output of the feature extraction layer (i.e. the layer just before the fully connected layer) of Inception ResNet V2 is. Can someone clarify exactly this? (By the way, if you know some resource that explains Inception ResNet V2 clearly, let me know).
2019/07/29
783
3,389
<issue_start>username_0: Suppose we have a data set with $4,000$ labeled examples. The outcome variable is trinary (three possible categorical values). Suppose the accuracy of a given model is "bad" (e.g. less than $50 \%$). > > **Question.** Should you try different traditional machine learning models (e.g. multinomial logistic regression, random forests, XGBoost, > etc.), get more data, or try various deep learning models like > convolutional neural networks or recurrent neural networks? > > > If the purpose is to minimize time and effort in collecting training data, would deep learning models be a viable option over traditional machine learning models in this case?<issue_comment>username_1: To know if your model needs more training data, try to plot out "learning curves", which are based on an increasing size of the training set. Basically, you calculate training and validation accuracy metrics for 1, 2, 3, 4, 5, ..., m training samples. The size of the validation set can be kept constant. If the accuracy is still rising when your data set is fully used, then you need more training data. Upvotes: 2 <issue_comment>username_2: Accuracy can sometimes be a very coarse metric. When it is applied to three-class problems, people often take the class label with maximum predicted probability and predict that. The probabilities of the individual labels are ignored. I'd recommend that as well as accuracy you calculate sensitivity and specificity for each class and the area under the ROC curve. For both of these, you can take a 1 vs rest (i.e. class 1 vs classes 2 & 3, class 2 vs classes 1 & 3, etc.) approach to calculating the metrics. Even if your accuracy is under 50%, the model may still be predicting at least one class well, so I recommend doing more analysis before making a decision.
I also recommend comparing your model (if it is deep learning-based) to a traditional type of model, as deep learning usually works best with big data and with such a small dataset as you have, there may be no benefit to using deep learning (unless you are leveraging transfer learning). Upvotes: 2 <issue_comment>username_3: If data collection is expensive, it is better to first try to improve your model. You say your accuracy is bad, but have you tried using better performance metrics? A confusion matrix could help. Another potential problem is that your data may be imbalanced. What if your model is performing badly because, for example, there aren't enough samples from class 2? Now you know that to improve your model you need more class 2 samples. This can be acquired by collecting more data, or by other class imbalance methods, e.g. SMOTE. You can also check whether the performance is bad only on the test set. Is the model overfitting? Or does it perform badly on the training set too? That would be underfitting. 4000 samples should be enough for a neural network, but not a very large deep learning model, maybe a smaller one. Testing other machine learning models on your data is definitely a good idea. Not only can you provide your model with a benchmark, but you also may find a better model. If you think your data is good enough and your model is adequate, you can try hyperparameter optimization. This can be quite useful for getting the most out of neural networks, but it is trial and error, and it can take some time to find the best hyperparameters. Upvotes: 0
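The per-class sensitivity and specificity suggested above can be computed one-vs-rest directly from a confusion matrix; here is a dependency-free sketch (the example matrix is made up):

```python
def one_vs_rest_metrics(confusion):
    """Per-class (sensitivity, specificity) from a square confusion
    matrix, with rows = true class and columns = predicted class."""
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    metrics = {}
    for c in range(n):
        tp = confusion[c][c]
        fn = sum(confusion[c]) - tp
        fp = sum(confusion[r][c] for r in range(n)) - tp
        tn = total - tp - fn - fp
        metrics[c] = (tp / (tp + fn), tn / (tn + fp))
    return metrics

# A trinary problem where class 2 is predicted poorly: overall accuracy
# is (50+40+20)/180 ~ 0.61, but class 2's sensitivity is only ~0.33.
cm = [[50, 5, 5],
      [10, 40, 10],
      [20, 20, 20]]
print(one_vs_rest_metrics(cm))
```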
2019/07/30
1,105
4,833
<issue_start>username_0: When executing MCTS' expansion phase, where you create a number of child nodes, select one of the numbers, and simulate from that child, how can you efficiently and unbiasedly decide which child(ren) to generate? One strategy is to always generate all possible children. I believe that [this answer](https://ai.stackexchange.com/a/12913/2444) says that AlphaZero always generates all possible ($\sim 300$) children. If it were expensive to compute the children or if there were many of them, this might not be efficient. One strategy is to generate a lazy stream of possible children. That is, generate one child and a promise to generate the rest. You could then randomly select one by flipping a coin: heads you take the first child, tails you keep going. This is clearly biased in favor of children earlier in the stream. Another strategy is to compute how many $N$ children there are and provide a function to generate child $X < N$ (of type Nat -> State). You could then randomly select one by choosing uniformly in the range $[0, N)$. This may be harder to implement than the previous version because computing the number of children may be as hard as computing the children themselves. Alternatively, you could compute an upper-bound on the number of children and the function is partial (of type Nat -> Maybe State), but you'd be doing something like rejection sampling. I believe that if the number of iterations of MCTS remaining, $X\_t$, is larger than the number of children, $N$, then it doesn't matter what you do, because you'll find this node again the next iteration and expand one of the children. This seems to suggest that the only time it matters is when $X\_t < N$ and in situations like AlphaZero, $N$ is so much smaller than $X\_0$, that this basically never matters. 
In cases where $X\_0$ and $N$ are of similar size, then it seems like the number of iterations really needs to be changed into something like an amount of time and sometimes you spend your time doing playouts while other times you spend your time computing children. Have I thought about this correctly?<issue_comment>username_1: To know if your model needs more training data, try to plot out "learning curves", that are based on increasing size of the training set. Basically, you calculate training and validation accuracy metrics for 1, 2, 3, 4, 5, ..., m training samples. Size of validation set may be constant over time. If the accuracy is still rising when your data set is fully used, then you need more training data. Upvotes: 2 <issue_comment>username_2: Accuracy can sometimes be a very coarse metric. When it is applied to three class problems, people often take the class label with maximum predicted probability and predict that. The probabilities of the individual labels are ignored. I'd recommend that as well as accuracy you calculate sensitivity and specificity for each class and the area under the ROC curve. For both of these, you can take a 1 vs rest (i.e. class 1 vs classes 2 & 3, class 2 vs classes 1 & 3 etc) approach to calculating the metrics. Even if your accuracy is under 50%, the model may still be predicting at least one class well, so I recommend doing more analysis before making a decision. I also recommend comparing your model (if it is deep learning-based) to a traditional type of model, as deep learning usually works best with big data and with such a small dataset as you have, there may be no benefit to using deep learning (unless you are leveraging transfer learning). Upvotes: 2 <issue_comment>username_3: If data collection is expensive, it is better to first try to improve your model. You say your accuracy is bad, but have you using tried better performance metrics? A confusion matrix could help. 
Another potential problem is that your data may be imbalanced. What if your model is performing badly because, for example, there aren't enough samples from class 2? In that case, you know that to improve your model you need more class 2 samples. These can be acquired by collecting more data, or by other class-imbalance methods, e.g. SMOTE. You can also check whether the performance is bad only on the test set. Is the model overfitting? Or does it perform badly on the training set too? That would be underfitting. 4000 samples should be enough for a neural network, but not for a very large deep learning model; maybe a smaller one. Testing other machine learning models on your data is definitely a good idea. Not only can you provide your model with a benchmark, but you also may find a better model. If you think your data is good enough and your model is adequate, you can try hyperparameter optimization. This can be quite useful for getting the most out of neural networks, but it is trial and error, and it can take some time to find the best hyperparameters. Upvotes: 0
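As a hedged sketch: the lazy-stream strategy from the question above can actually be made uniform without computing $N$ up front, using reservoir sampling (with a reservoir of size 1). The `children` iterator below is a hypothetical stand-in for a generator of successor states; note that this still generates every child once, so it only saves the separate counting pass, not the generation itself.

```python
import random

def sample_child_uniform(children, rng):
    """Pick one element uniformly from a stream of unknown length
    (reservoir sampling with a reservoir of size 1)."""
    chosen = None
    for n, child in enumerate(children, start=1):
        # keep the n-th child with probability 1/n; after the whole
        # stream, every child was chosen with probability 1/n_total
        if rng.randrange(n) == 0:
            chosen = child
    return chosen

# toy check that the selection is (approximately) uniform
rng = random.Random(0)
counts = {c: 0 for c in range(5)}
for _ in range(10000):
    counts[sample_child_uniform(iter(range(5)), rng)] += 1
```

This removes the bias toward early children that the coin-flipping scheme has, at the cost of materializing the whole stream once per selection.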
2019/07/30
584
2,461
<issue_start>username_0: Many games have multiple paths to the same states. What is the appropriate way to deal with this in MCTS? If the state appears once in the tree, but with multiple parents, then it seems to be difficult to define back propagation: do we only propagate back along the path that got us there "this" time? Or do we incorporate the information everywhere? Or maybe along the "first" path? If the state appears once in the tree, but with only one parent, then we ignored one of the paths, but it doesn't matter because by definition this is the same state? If the state appears twice in the tree, aren't we wasting a lot of resources thinking about it multiple times?<issue_comment>username_1: A node in a tree must have a single parent; otherwise it violates the definition of a tree. Also, the way I look at it, there are no "same" states when you do MCTS, because you are keeping the history of how you got there. So the second time you visit the "same" state, it'll have a different history path and a single parent. Upvotes: 1 <issue_comment>username_2: > > If the state appears twice in the tree, aren't we wasting a lot of resources thinking about it multiple times? > > > You're right. Precisely the same problem was also noticed decades before MCTS existed, in the classic minimax-style tree search algorithms (alpha-beta search, etc.) that were used in games before MCTS. The solution is also mostly the same: **transposition tables**. In the case of MCTS, the statistics used by the algorithm that are normally associated with nodes (or their incoming edges) may instead be stored in entries of a transposition table. I mean stuff like visit counts and sums (or averages) of backpropagated scores. A brief description of how it would work, and references to more extensive relevant literature, can be found in subsubsection 5.2.4 of the [well-known 2012 Survey paper on MCTS](http://mcts.ai/pubs/mcts-survey-master.pdf).
This does require that you can efficiently (incrementally) compute hash values for the states you encounter, which may not always be easy (it should usually be possible, but it depends on the details of your problem domain). Use of transpositions in MCTS is also not always guaranteed to actually improve performance. It does come with computational overhead, and in games where transpositions are very rare, it may be more efficient to simply ignore them and use the regular tree structure. Upvotes: 4 [selected_answer]
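A rough sketch of the statistics-in-a-table idea from the accepted answer (not code from the survey): visit counts and value sums are keyed by state rather than by tree path, so different move orders reaching the same position share one entry. The order-independent `frozenset` key is a toy stand-in for a real incremental hash such as Zobrist hashing.

```python
from collections import defaultdict

class TranspositionTable:
    """MCTS statistics keyed by state rather than by tree path."""

    def __init__(self):
        self.entries = defaultdict(lambda: {"visits": 0, "value_sum": 0.0})

    @staticmethod
    def key(moves_played):
        # toy canonical key: two move orders reaching the same position
        # collapse to the same entry (a real engine would hash the position)
        return frozenset(moves_played)

    def backpropagate(self, moves_played, value):
        entry = self.entries[self.key(moves_played)]
        entry["visits"] += 1
        entry["value_sum"] += value

    def mean_value(self, moves_played):
        entry = self.entries[self.key(moves_played)]
        return entry["value_sum"] / entry["visits"] if entry["visits"] else 0.0

tt = TranspositionTable()
tt.backpropagate(["a", "b"], 1.0)  # state reached via move a, then b
tt.backpropagate(["b", "a"], 0.0)  # same state, different path: same entry
```

Both backups land in a single shared entry, which is exactly the resource saving the answer describes.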
2019/07/31
874
3,315
<issue_start>username_0: I am new to convolutional neural networks, and I am learning 3D convolution. What I could understand is that 2D convolution gives us relationships between low-level features in the X-Y dimension, while the 3D convolution helps detect low-level features and relationships between them in all the 3 dimensions. Consider a CNN employing 2D convolutional layers to recognize handwritten digits. If a digit, say 5, was written in different colors: [![enter image description here](https://i.stack.imgur.com/3Ajlf.jpg)](https://i.stack.imgur.com/3Ajlf.jpg) Would a strictly 2D CNN perform poorly (since they belong to different channels in the z-dimension)? Also, are there practical well-known neural nets that employ 3D convolution?<issue_comment>username_1: 3D convolutions should be used when you want to extract spatial features from your input on 3 dimensions. For computer vision, they are typically used on **volumetric images**, which are 3D. Some examples are [classifying 3D rendered images](http://graphics.stanford.edu/projects/3dcnn/) and [medical image segmentation](https://arxiv.org/pdf/1606.06650.pdf). Upvotes: 3 <issue_comment>username_2: 3D convolutions are used when you want to extract features in 3 dimensions or establish a relationship between 3 dimensions. Essentially, it's the same as 2D convolutions, but the kernel movement is now 3-dimensional, causing a better capture of dependencies within the 3 dimensions and a difference in output dimensions post convolution. The kernel of the 3D convolution will move in 3 dimensions if the kernel's depth is less than the feature map's depth. ![this](https://miro.medium.com/max/700/1*wUVVgZnzBwYKgQyTBK_5sg.png) On the other hand, 2-D convolutions on 3-D data mean that the kernel will traverse in 2-D only. This happens when the feature map's depth is the same as the kernel's depth (channels).
![here](https://miro.medium.com/max/700/1*K_OsaLGJ7I8cGlPvvWYJdg.png) Some use cases for better understanding are * MRI scans where the relationship between a stack of images is to be understood; * low-level feature extractor for spatio-temporal data, like videos for gesture recognition, weather forecasting, etc. (3-D CNN's are used as low level feature extractors only over multiple short intervals, as 3D CNN's fail to capture long term spatio-temporal dependencies - for more on that check out [ConvLSTM](https://papers.nips.cc/paper/5955-convolutional-lstm-network-a-machine-learning-approach-for-precipitation-nowcasting.pdf) or an alternate perspective [here](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6387284/).) Most CNN models that learn from video data almost always have 3D CNN as a low level feature extractor. In the example you have mentioned above regarding the number 5 - 2D convolutions would probably perform better, as you're treating every channel intensity as an aggregate of the information it holds, meaning the learning would almost be the same as it would on a black and white image. Using 3D convolution for this, on the other hand, would cause learning of relationships between the channels which do not exist in this case! (Also 3D convolutions on an image with depth 3 would require a very uncommon kernel to be used, especially for the use case) Upvotes: 5 [selected_answer]
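The shape behaviour described in this answer can be sketched with plain "valid" (no padding, stride 1) convolution arithmetic; `conv_output_shape` below is a hand-rolled helper, not a call into any framework.

```python
def conv_output_shape(input_shape, kernel_shape, stride=1):
    """Output shape of a 'valid' (no padding) convolution: the kernel
    slides along every dimension in which it is smaller than the input."""
    return tuple((i - k) // stride + 1
                 for i, k in zip(input_shape, kernel_shape))

# 2D conv on a 28x28 RGB image: a 3x3 kernel spanning all 3 channels
# collapses the depth dimension to 1, so the output is effectively 2D.
shape_2d = conv_output_shape((28, 28, 3), (3, 3, 3))

# 3D conv on a 16-frame 28x28 volume: the kernel's depth (3) is smaller
# than the input's depth (16), so the kernel also slides along depth and
# the output keeps a (reduced) depth dimension.
shape_3d = conv_output_shape((16, 28, 28), (3, 3, 3))
```

The first case matches the "depth collapses" picture above; the second keeps a depth axis, which is what makes the convolution genuinely 3D.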
2019/07/31
811
3,021
<issue_start>username_0: I am building a model which predicts angles as output. What are the different kinds of outputs that can be used to predict angles? For example, 1. output the angle in radians * cyclic nature of the angles is not captured * output might be outside $\left[-\pi, \pi \right)$ 2. output the sine and the cosine of the angle * outputs might not satisfy $\sin^2 \theta + \cos^2 \theta = 1$ What are the pros and cons of different methods?
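A hedged sketch of option 2 from the question (sine/cosine outputs): `atan2` is the natural decoder, since it maps any (sin, cos)-like pair back into $(-\pi, \pi]$ and tolerates outputs that drift off the unit circle (it is scale-invariant). The function names here are made up for illustration.

```python
import math

def encode_angle(theta):
    # option 2 from the question: represent the angle by its sine and cosine
    return (math.sin(theta), math.cos(theta))

def decode_angle(s, c):
    # atan2 recovers an angle in (-pi, pi] and is scale-invariant, so the
    # model's output need not satisfy s^2 + c^2 = 1 exactly
    return math.atan2(s, c)

# the encoding is continuous across the -pi/pi wrap-around, unlike raw radians
a = encode_angle(math.pi - 0.01)
b = encode_angle(-math.pi + 0.01)
wrap_gap = math.dist(a, b)                            # small in (sin, cos) space
raw_gap = abs((math.pi - 0.01) - (-math.pi + 0.01))   # large in raw radians
```

The wrap-around comparison illustrates why a regression loss on the (sin, cos) pair behaves better near $\pm\pi$ than a loss on the raw angle.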
2019/07/31
695
3,047
<issue_start>username_0: I have a decent background in Mathematics and Computer Science. I started learning AI from Andrew Ng's course a month ago. I understand the logic and intuition behind everything taught, but if someone asks me to write or derive mathematical formulas related to back-propagation, I will fail to do so. I need to complete an object recognition project within 4 months. Am I on the right path?<issue_comment>username_1: I think the key part of your question is *"as a beginner"*. For all intents and purposes, you can create a state-of-the-art (SoTA) model in various fields with no knowledge of the mathematics whatsoever. This means you do not need to understand back-propagation, gradient descent, or even mathematically how each layer works. Instead, you could just know there exists an optimizer and that different layers generally do different things (convolutions are good at picking up local dependencies; fully connected layers are good at picking up connections among your neurons in an expensive manner when you hold no previous assumptions), etc. Follow some common intuitions and architectures built upon in the field and your ability to model will follow (thanks to the amazing work on open-source ML frameworks -- looking at you, Google and Facebook)! But this is only a stop-gap. A Newton quote that I'm about to butcher: "If I have seen further it's because I'm standing on the shoulders of giants". In other words, he saw *further* because he didn't just use what people before him did; he utilized it to expand even further. So **yes**, I think you can finish your object detection project on time with little understanding of the math (look at the [Google object detection API](https://github.com/tensorflow/models/tree/master/research/object_detection): it does wonders, and you don't even need to know anything about ML to use it, you just need to have data).
But, and this is a big but, if you ever want to extend into a realm that isn't particularly touched upon or push the envelope in any meaningful way, you will probably have to learn the math, learn the basics, learn the foundations. Upvotes: 4 [selected_answer]<issue_comment>username_2: * Not only is it 100% OK, it's the process. You may be surprised to know that even mathematicians struggle with mathematics, both the proofs they are working on and the proofs of their colleagues. Some thinkers are so far ahead of the curve that very few understand what they're stating until generations later. The main thing is to keep with it. Upvotes: 2 <issue_comment>username_3: If you want to be an engineer who works with models as black boxes, it could be OK. If you want to be a researcher, whether as a job position or for a better understanding of the subject, it's not OK. Backpropagation is just basic multivariate calculus. If you are struggling with it, things like Hessians, regularizers, stochastic processes, etc., will cause even more problems. If you want to go down the research track, it could be a good idea to take some math courses and prioritize them. Upvotes: 1
2019/08/02
3,953
14,407
<issue_start>username_0: I came across a comment recently "reads like sentences strung together with no logic." But is this even possible? Sentences can be strung together randomly if the selection process is random. (Random sentences in a random sequence.) Stochasticity does not seem logical—it's a probability distribution, not based on sequence or causality. *but* That stochastic process is part of an algorithm, which is a set of instructions that must be valid for the program to compute. So which is it? * **Is randomness anti-logical?** --- **Some definitions of computational logic:** The arrangement of circuit elements (as in a computer) needed for computation also: the circuits themselves [Merriam Websters](https://www.merriam-webster.com/dictionary/logic) A system or set of principles underlying the arrangements of elements in a computer or electronic device so as to perform a specified task. Logical operations collectively. [Google Dictionary](https://www.google.com/search?q=logic%20definition&oq=logic%20definition&aqs=chrome..69i57j0l5.3583j1j7&sourceid=chrome&ie=UTF-8) The system or principles underlying the representation of logical operations. Logical operations collectively, as performed by electronic or other devices. [Oxford English Dictionary](https://www.oed.com/view/Entry/109788?rskey=w2r2Z9&result=1&isAdvanced=false#eid) **Some definitions of randomness** Being or relating to a set or to an element of a set each of whose elements has equal probability of occurrence. Lacking a definite plan, purpose, or pattern. [Merriam Websters](https://www.merriam-webster.com/dictionary/random). Made, done, happening, or chosen without method or conscious decision. 
[Google Dictionary](https://www.google.com/search?safe=off&ei=HYRDXd72ErCJggfWtQQ&q=definition%20of%20random&oq=definition%20of%20random&gs_l=psy-ab.3..0i70i249j0l9.3955.3955..4570...0.0..0.57.57.1......0....1..gws-wiz.......0i71.sIz5wR18Zj8&ved=0ahUKEwie-aC19-LjAhWwhOAKHdYaAQAQ4dUDCAo&uact=5) Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method. Seeming to be without purpose or direct relationship to a stimulus. [Oxford English Dictionary](https://www.oed.com/view/Entry/157984#eid27053631)<issue_comment>username_1: I think the answer here lies in the fact that the dictionary definition of randomness you have is not the one used in statistics, ML, or mathematics. We define randomness to mean there exists a distribution with generally greater than 0 uncertainty. Depending on who you talk to, we live in a random universe (the way we define quantum mechanics depends on a wave function, which is essentially a probability distribution). So why is a sequence illogical just because it is drawn from a distribution? First, even as humans, we can make a strong argument that what we say is *random*. I mean, we speak to convey some form of message or context, but there exist multiple ways to deliver this, and we choose a single one. Our brains inherently model $p(\vec w|c)$, where $\vec w$ is the sequence and $c$ is our context or the message we want to convey.
I would define a logical algorithm as one that given the information at hand acts in a sensible manner towards achieving some goal, and so if something purely random does that, I don't see the problem. Upvotes: 2 <issue_comment>username_2: I might misunderstand your question, but there seem to be different levels of logic at play here. 1. Computing logic, whereby any computational process is based on processor logic. In this case, any computing is involving logic, as boolean logic drives any processing. 2. Linguistic logic, where there is a logic in the sequencing of sentences within a text. A random collection of sentences is not a text, as there need to be certain principles behind the structure to make it a narrative. While you can easily generate a sequence of random sentences, they will not mean anything; there won't be any logic behind selecting a particular sentence to follow on from another one. So this is *linguistic logic* rather than *processing logic*. Note that where the linguistic logic is makes it a bit vague: I can read a randomly selected sequence of sentences and ascribe meaning to it by building a mental model that treats it as a logically constructed text. This principle is what made [ELIZA](https://en.wikipedia.org/wiki/ELIZA) so successful: even though the program's answers were based on simple pattern matching rules with no understanding, many users assumed there was logic/meaning behind it and interpreted it as such, papering over the cracks in the conversation. In summary: there is logic involved in random sentence combining, but it is the low-level computing logic, not the higher-level linguistic interpretative logic, which is generally absent from randomly generated data. Upvotes: 2 <issue_comment>username_3: In certain games, random selection is the optimal strategy. See: [Matching Pennies](https://en.wikipedia.org/wiki/Matching_pennies) Strategy is essentially a plan of action utilized to achieve a goal. 
* If random choice can be a strategy, it seems that it must be a form of logic, even if the nature of the stochastic process is counter to all forms of formal logic. This seems paradoxical, in that the random strategy is to have no strategy (random choices.) Upvotes: 1 <issue_comment>username_4: Previous answers are very well written. I just wanted to supplement the thread by giving a simple example. The example shows how a logical function can be computed without errors using noisy components. Taken verbatim from [Neural Networks by <NAME>](http://www.inf.fu-berlin.de/inst/ag-ki/rojas_home/documents/1996/NeuralNetworks/neuron.pdf). An excellent book: [![enter image description here](https://i.stack.imgur.com/lPXkG.png)](https://i.stack.imgur.com/lPXkG.png) > > an example of a network built using four > units. Assume that the first three units connected directly to the three bits of > input $x\_1, x\_2, x\_3$ all fire with probability $1$ when the total excitation is greater > than or equal to the threshold $\theta$ but also with probability $p$ when it is $\theta − 1$. > The duplicated connections add redundancy to the transmitted bit, but in > such a way that all three units fire with probability one when the three bits > are $1$. Each unit also fires with probability $p$ if two out of three inputs are $1$. > However each unit reacts to a different combination. The last unit, finally, is > also noisy and fires any time the three units in the first level fire and also with > probability $p$ when two of them fire. Since, in the first level, at most one unit > fires when just two inputs are set to $1$, the third unit will only fire when all > three inputs are $1$. This makes the logical circuit, the AND function of three > inputs, built out of unreliable components error-proof. > > > Upvotes: 1 <issue_comment>username_5: **Central Premises**:-- This, computability of randomness in conjunction to logic, is unfortunately/fortunately a very technical topic. > > " ... 
That stochastic process is part of an algorithm, which is a set of instructions that must be valid for the program to compute. > > > ... Is randomness anti-logical?" ~ username_3 (Stack Exchange user, opening poster) > > > This answer is about: randomness and chaos; and how they relate to logic and computability. > > "What is randomness and where does it come from? > This is one scary place to venture in. We take for granted the randomness in our reality. We compensate for that randomness with probability theory. However, is randomness even real or is it just a figment of our lack of intelligence? That is, does what we describe as randomness just a substitute for our uncertainty about reality? Is randomness just a manifestation of something else?" > — "Medium." Medium, < [medium.com/intuitionmachine/there-is-no-randomness-only-chaos-and-complexity-c92f6](http://medium.com/intuitionmachine/there-is-no-randomness-only-chaos-and-complexity-c92f6) >. > > > - > > "Many natural intensional properties in artificial and natural languages are hard to compute. We show that randomized algorithms are often necessary to have good estimators of natural properties and to verify some specific relations. We concentrate on the reliability of queries to show the advantage of randomized algorithms in uncertain cognitive worlds." — <NAME>. "Logic, randomness and cognition." Logic, Thought and Action. Springer, Dordrecht, 2005. 497-506. > > > --- **Layperson's Explanations**:-- Unfortunately, quantum chaos in relation to randomness, is a profoundly technical topic. I have managed to track down sources that relatively aren't overly technical. 
As a starting point, this Wikipedia article is worth reading:-- (<https://simple.wikipedia.org/wiki/Chaos_theory>) You can continue and read this particular Medium post:-- (<https://medium.com/intuitionmachine/there-is-no-randomness-only-chaos-and-complexity-c92f6dccd7ab>) For profoundly technical topics, I recommend this book series, as they are written by experts in basic technical terms, for the laypersons wanting to study technical topics:-- (<https://en.wikipedia.org/wiki/Very_Short_Introductions>) [**I recommend reading**](https://global.oup.com/academic/search?q=randomness%20a%20very%20short%20introduction&cc=us&lang=en):-- * *Chaos: A Very Short Introduction* * *Probability: A Very Short Introduction* * *Fractals: A Very Short Introduction* --- **Other References for the Layperson**:-- * (<https://www.random.org/randomness/>) * (<https://plato.stanford.edu/entries/computability/>) * (<https://plato.stanford.edu/entries/chance-randomness/>) * (<https://plato.stanford.edu/entries/qt-quantlog/>) [Some broader implications of chaos](https://plato.stanford.edu/entries/chaos/#SomBroImpCha) [Link to Stanford Encyclopedia of Philosophy]. [When I think of randomness, I'm inclined to think in cosmological terms. Is randomness is a structural property of the universe?](https://www.newscientist.com/article/mg22530120-500-chance-is-anything-in-the-universe-truly-random/) Is anything in the universe truly random? --- **Technical Explanations**:-- > > "In mathematical logic, independence is the unprovability of a sentence from other sentences." Wikipedia contributors. — "Independence (mathematical logic)." *Wikipedia, The Free Encyclopedia*. Wikipedia, The Free Encyclopedia, 3 Feb. 2019. Web. 29 Aug. 2019. > > > - > > "We propose a link between logical independence and quantum physics. 
We demonstrate that quantum systems in the eigenstates of Pauli group operators are capable of encoding mathematical axioms and show that Pauli group quantum measurements are capable of revealing whether or not a given proposition is logically dependent on the axiomatic system. Whenever a mathematical proposition is logically independent of the axioms encoded in the measured state, the measurement associated with the proposition gives random outcomes. This allows for an experimental test of logical independence. Conversely, it also allows for an explanation of the probabilities of random outcomes observed in Pauli group measurements from logical independence without invoking quantum theory. The axiomatic systems we study can be completed and are therefore not subject to Gödel's incompleteness theorem." <NAME>, et al. "Logical independence and quantum randomness." — *New Journal of Physics* 12.1 (2010): 013019. > > > - **Other Technical Explanations, Sources, References, and Further Reading:--** * (<https://iopscience.iop.org/article/10.1088/1367-2630/12/1/013019/meta>) * (<https://en.wikipedia.org/wiki/Quantum_chaos>) * (<https://arxiv.org/pdf/0708.1362.pdf>) * (<https://arxiv.org/pdf/1701.01107.pdf>) * (<http://csc.ucdavis.edu/~cmg/papers/idep.pdf>) * (<https://www.datasciencecentral.com/profiles/blogs/logistic-map-chaos-randomness-and-quantum-algorithms>) * (<https://www.deepdyve.com/lp/wiley/non-linear-dynamics-complexity-and-randomness-algorithmic-foundations-9vxIJOTT6u>) * (<https://en.wikipedia.org/wiki/Independence_(mathematical_logic)>) --- **Notes**:-- * I usually do not use Medium or Quora as a source, with some exceptions. I have chosen to do so here. * I've decided to place Stanford Encyclopedia of Philosophy sources in the layperson's section. Upvotes: 0 <issue_comment>username_4: Let me add an example from machine learning that shows that resorting to randomness is the optimal way, sometimes. 
When working on the whole data is not tractable (computation cost, data does not fit in memory), working on random samples can be an optimal way to train a machine learning algorithm. One of the most used optimization techniques in those cases is [Stochastic Gradient Descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent). It is an iterative procedure that computes [estimates](https://en.wikipedia.org/wiki/Estimation) of the true [gradient](https://en.wikipedia.org/wiki/Gradient) of the [loss function](https://en.wikipedia.org/wiki/Loss_function) that needs to be minimized, over a *randomly* selected data point from the whole data. After getting the gradient estimate, the weights are updated, all of this done by the well-known [back-propagation](https://en.wikipedia.org/wiki/Backpropagation) algorithm. This procedure is repeated many times until a [stopping criterion](https://en.wikipedia.org/wiki/Early_stopping) is met. The rule is: $\theta\_{k+1} \leftarrow \theta\_{k} - \eta\_k \nabla f\_{i(k)}(x\_k)$, where $\theta$ are the weights of the network, $f$ is the loss function whose gradient is computed w.r.t. the weights of the network, $x\_k$ is the randomly chosen sample to compute gradients for, and $\eta\_k$ is the step size to multiply the negative of the gradient with. Upvotes: 1
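The update rule above can be illustrated with a toy per-sample gradient descent fitting noiseless data from $y = 2x + 1$; the learning rate and step count are arbitrary choices for this sketch.

```python
import random

# noiseless data from y = 2x + 1
data = [(x, 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0, 4.0]]

w, b = 0.0, 0.0          # parameters (theta)
eta = 0.02               # step size (eta_k, held constant here)
rng = random.Random(0)

for _ in range(5000):
    x, y = rng.choice(data)          # randomly selected sample i(k)
    err = (w * x + b) - y            # prediction error on that one sample
    w -= eta * 2 * err * x           # theta <- theta - eta * grad (squared loss)
    b -= eta * 2 * err
```

Because each update uses the gradient of a single sample's squared loss, it is only a noisy estimate of the full gradient, yet the parameters still converge to the minimizer shared by all samples.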
2019/08/02
3,492
13,205
<issue_start>username_0: I'm reading the book Pattern Recognition and Machine Learning by Bishop, specifically the intro, where he covers the polynomial regression model. In short, let's say we generate $10$ data points using the function $\sin(2\pi x)$ and add some Gaussian random noise to each observation. Now we pretend not to know the generating function and try to fit a polynomial model to these points. As we increase the degree of the polynomial, it goes from underfitting ($d=1,2$) to overfitting ($d=10$). One thing the author notes is that the higher the degree of the polynomial, the higher the values of the coefficients (parameters). This is my first doubt: **why does the size of the coefficients increase with the polynomial degree? And why is the size of the parameters related to overfitting?** Secondly, he states that even for degree $10$, if we get sufficiently many data points (say $100$), then the high degree polynomial will no longer overfit the data and should have comparatively better generalization performance. Second doubt: **Why is this so?**
**Takeaway:** Generating a sequence in an ergodic or uniform manner would be illogical, but that is not what is being modeled or done in practice. Normally its drawn from some complex distribution. **Sidenote:** My above claim could make it seem that being uniformly random implicates something illogical, and I want to emphasize that is not the case. It is domain to domain, sometimes that is the **most** logical solution, just in the case of sentence generation it normally isnt. I would define a logical algorithm as one that given the information at hand acts in a sensible manner towards achieving some goal, and so if something purely random does that, I don't see the problem. Upvotes: 2 <issue_comment>username_2: I might misunderstand your question, but there seem to be different levels of logic at play here. 1. Computing logic, whereby any computational process is based on processor logic. In this case, any computing is involving logic, as boolean logic drives any processing. 2. Linguistic logic, where there is a logic in the sequencing of sentences within a text. A random collection of sentences is not a text, as there need to be certain principles behind the structure to make it a narrative. While you can easily generate a sequence of random sentences, they will not mean anything; there won't be any logic behind selecting a particular sentence to follow on from another one. So this is *linguistic logic* rather than *processing logic*. Note that where the linguistic logic is makes it a bit vague: I can read a randomly selected sequence of sentences and ascribe meaning to it by building a mental model that treats it as a logically constructed text. This principle is what made [ELIZA](https://en.wikipedia.org/wiki/ELIZA) so successful: even though the program's answers were based on simple pattern matching rules with no understanding, many users assumed there was logic/meaning behind it and interpreted it as such, papering over the cracks in the conversation. 
In summary: there is logic involved in random sentence combining, but it is the low-level computing logic, not the higher-level linguistic interpretative logic, which is generally absent from randomly generated data. Upvotes: 2 <issue_comment>username_3: In certain games, random selection is the optimal strategy. See: [Matching Pennies](https://en.wikipedia.org/wiki/Matching_pennies) Strategy is essentially a plan of action utilized to achieve a goal. * If random choice can be a strategy, it seems that it must be a form of logic, even if the nature of the stochastic process is counter to all forms of formal logic. This seems paradoxical, in that the random strategy is to have no strategy (random choices.) Upvotes: 1 <issue_comment>username_4: Previous answers are very well written. I just wanted to supplement the thread by giving a simple example. The example shows how a logical function can be computed without errors using noisy components. Taken verbatim from [Neural Networks by <NAME>](http://www.inf.fu-berlin.de/inst/ag-ki/rojas_home/documents/1996/NeuralNetworks/neuron.pdf). An excellent book: [![enter image description here](https://i.stack.imgur.com/lPXkG.png)](https://i.stack.imgur.com/lPXkG.png) > > an example of a network built using four > units. Assume that the first three units connected directly to the three bits of > input $x\_1, x\_2, x\_3$ all fire with probability $1$ when the total excitation is greater > than or equal to the threshold $\theta$ but also with probability $p$ when it is $\theta − 1$. > The duplicated connections add redundancy to the transmitted bit, but in > such a way that all three units fire with probability one when the three bits > are $1$. Each unit also fires with probability $p$ if two out of three inputs are $1$. > However each unit reacts to a different combination. The last unit, finally, is > also noisy and fires any time the three units in the first level fire and also with > probability $p$ when two of them fire. 
Since, in the first level, at most one unit > fires when just two inputs are set to $1$, the third unit will only fire when all > three inputs are $1$. This makes the logical circuit, the AND function of three > inputs, built out of unreliable components error-proof. > > > Upvotes: 1 <issue_comment>username_5: **Central Premises**:-- This, computability of randomness in conjunction to logic, is unfortunately/fortunately a very technical topic. > > " ... That stochastic process is part of an algorithm, which is a set of instructions that must be valid for the program to compute. > > > ... Is randomness anti-logical?" ~ username_3 (Stack Exchange user, opening poster) > > > This answer is about: randomness and chaos; and how they relate to logic and computability. > > "What is randomness and where does it come from? > This is one scary place to venture in. We take for granted the randomness in our reality. We compensate for that randomness with probability theory. However, is randomness even real or is it just a figment of our lack of intelligence? That is, does what we describe as randomness just a substitute for our uncertainty about reality? Is randomness just a manifestation of something else?" > — "Medium." Medium, < [medium.com/intuitionmachine/there-is-no-randomness-only-chaos-and-complexity-c92f6](http://medium.com/intuitionmachine/there-is-no-randomness-only-chaos-and-complexity-c92f6) >. > > > - > > "Many natural intensional properties in artificial and natural languages are hard to compute. We show that randomized algorithms are often necessary to have good estimators of natural properties and to verify some specific relations. We concentrate on the reliability of queries to show the advantage of randomized algorithms in uncertain cognitive worlds." — de Rougemont, Michel. "Logic, randomness and cognition." Logic, Thought and Action. Springer, Dordrecht, 2005. 497-506. 
> > > --- **Layperson's Explanations**:-- Unfortunately, quantum chaos in relation to randomness is a profoundly technical topic. I have managed to track down sources that are relatively accessible. As a starting point, this Wikipedia article is worth reading:-- (<https://simple.wikipedia.org/wiki/Chaos_theory>) You can continue and read this particular Medium post:-- (<https://medium.com/intuitionmachine/there-is-no-randomness-only-chaos-and-complexity-c92f6dccd7ab>) For profoundly technical topics, I recommend this book series, as they are written by experts in basic technical terms, for laypersons wanting to study technical topics:-- (<https://en.wikipedia.org/wiki/Very_Short_Introductions>) [**I recommend reading**](https://global.oup.com/academic/search?q=randomness%20a%20very%20short%20introduction&cc=us&lang=en):-- * *Chaos: A Very Short Introduction* * *Probability: A Very Short Introduction* * *Fractals: A Very Short Introduction* --- **Other References for the Layperson**:-- * (<https://www.random.org/randomness/>) * (<https://plato.stanford.edu/entries/computability/>) * (<https://plato.stanford.edu/entries/chance-randomness/>) * (<https://plato.stanford.edu/entries/qt-quantlog/>) [Some broader implications of chaos](https://plato.stanford.edu/entries/chaos/#SomBroImpCha) [Link to Stanford Encyclopedia of Philosophy]. [When I think of randomness, I'm inclined to think in cosmological terms. Is randomness a structural property of the universe?](https://www.newscientist.com/article/mg22530120-500-chance-is-anything-in-the-universe-truly-random/) Is anything in the universe truly random? --- **Technical Explanations**:-- > > "In mathematical logic, independence is the unprovability of a sentence from other sentences." Wikipedia contributors. — "Independence (mathematical logic)." *Wikipedia, The Free Encyclopedia*. Wikipedia, The Free Encyclopedia, 3 Feb. 2019. Web. 29 Aug. 2019. 
> > > - > > "We propose a link between logical independence and quantum physics. We demonstrate that quantum systems in the eigenstates of Pauli group operators are capable of encoding mathematical axioms and show that Pauli group quantum measurements are capable of revealing whether or not a given proposition is logically dependent on the axiomatic system. Whenever a mathematical proposition is logically independent of the axioms encoded in the measured state, the measurement associated with the proposition gives random outcomes. This allows for an experimental test of logical independence. Conversely, it also allows for an explanation of the probabilities of random outcomes observed in Pauli group measurements from logical independence without invoking quantum theory. The axiomatic systems we study can be completed and are therefore not subject to Gödel's incompleteness theorem." <NAME>, et al. "Logical independence and quantum randomness." — *New Journal of Physics* 12.1 (2010): 013019. > > > - **Other Technical Explanations, Sources, References, and Further Reading:--** * (<https://iopscience.iop.org/article/10.1088/1367-2630/12/1/013019/meta>) * (<https://en.wikipedia.org/wiki/Quantum_chaos>) * (<https://arxiv.org/pdf/0708.1362.pdf>) * (<https://arxiv.org/pdf/1701.01107.pdf>) * (<http://csc.ucdavis.edu/~cmg/papers/idep.pdf>) * (<https://www.datasciencecentral.com/profiles/blogs/logistic-map-chaos-randomness-and-quantum-algorithms>) * (<https://www.deepdyve.com/lp/wiley/non-linear-dynamics-complexity-and-randomness-algorithmic-foundations-9vxIJOTT6u>) * (<https://en.wikipedia.org/wiki/Independence_(mathematical_logic)>) --- **Notes**:-- * I usually do not use Medium or Quora as a source, with some exceptions. I have chosen to do so here. * I've decided to place Stanford Encyclopedia of Philosophy sources in the layperson's section. 
Upvotes: 0 <issue_comment>username_4: Let me add an example from machine learning that shows that, sometimes, resorting to randomness is the optimal way. When working on the whole dataset is not tractable (computational cost, data that does not fit in memory), working on random samples can be an optimal way to train a machine learning algorithm. One of the most used optimization techniques in those cases is [Stochastic Gradient Descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent). It is an iterative procedure that computes [estimates](https://en.wikipedia.org/wiki/Estimation) of the true [gradient](https://en.wikipedia.org/wiki/Gradient) of the [loss function](https://en.wikipedia.org/wiki/Loss_function) that needs to be minimized, over a *randomly* selected data point from the whole data. After getting the gradient estimate, the weights are updated, all of this done by the well-known [back-propagation](https://en.wikipedia.org/wiki/Backpropagation) algorithm. This procedure is repeated many times until a [stopping criterion](https://en.wikipedia.org/wiki/Early_stopping) is met. The rule is: $\theta\_{k+1} \leftarrow \theta\_{k} - \eta\_k \nabla f\_{i(k)}(x\_k)$ where $\theta$ are the weights of the network, $f$ is the loss function whose gradient is computed w.r.t. the weights of the network, $x\_k$ is the randomly chosen sample to compute gradients for, and $\eta\_k$ is the step size to multiply the negative of the gradient with. Upvotes: 1
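To make the update rule above concrete, here is a minimal pure-Python sketch of SGD fitting a single weight to noiseless data generated from $y = 2x$. The dataset, learning rate, and iteration count are arbitrary choices for this example:

```python
import random

random.seed(0)

# toy dataset generated from y = 2 * x
data = [(float(x), 2.0 * x) for x in range(1, 11)]

theta = 0.0   # single model weight, initialised at zero
eta = 0.001   # step size (learning rate)

for k in range(2000):
    x, y = random.choice(data)          # draw one sample at random
    grad = 2.0 * (theta * x - y) * x    # gradient of (theta*x - y)**2 w.r.t. theta
    theta -= eta * grad                 # the SGD update rule

# theta converges towards the true weight 2.0
```

Each step uses the gradient of the loss on a single randomly chosen sample rather than the full dataset, which is exactly the trade-off the answer describes.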
2019/08/04
2,429
10,308
<issue_start>username_0: I just finished a 1-year Data Science master's program where we were taught R. I found that Python is more popular and has a larger community in AI. What are the advantages that Python may have over R in terms of features applicable to the field of Data Science and AI (other than popularity and larger community)? What positions in Data Science and AI would be more Python-heavy than R-heavy (especially comparing industry, academic, and government job positions)? In short, is Python worthwhile in all job situations or can I get by with only R in some positions?<issue_comment>username_1: Of course, this type of question will also lead to primarily opinion-based answers. Nonetheless, it is possible to enumerate the strengths and weaknesses of each language, with respect to machine learning, statistics, and data analysis tasks, which I will try to list below. R - ### Strengths * R was designed and developed for statisticians and data analysts, so it provides, [out-of-the-box](https://en.wikipedia.org/wiki/Out_of_the_box_(feature)) (that is, they are part of the language itself), features and facilities for statisticians, which are not available in Python, unless you install a related package. For example, the data frame, which Python does not provide, unless you install Python's famous [`pandas`](https://pandas.pydata.org/) package. There are other examples like matrices, vectors, etc. In Python, there are also similar data structures, but they are more general, so not specifically targeted for statisticians. * There are a lot of statistical libraries. ### Weakness * Given its purpose, R is mainly used to solve statistical or data analysis problems. However, it can also be used outside of this domain. See, for example, this Quora question: [Is R used outside of statistics and data analysis?](https://qr.ae/TWvefg). Python ------ ### Strengths * A lot of people and companies, including Google and Facebook, invest a lot in Python. 
For example, the main programming language of TensorFlow and PyTorch (two widely used machine learning frameworks) is Python. So, it is very unlikely that Python won't continue to be widely used in machine learning for at least 5-10 more years. * The Python community is likely a lot bigger than the R community. In fact, for example, if you look at [Tiobe's index](https://www.tiobe.com/tiobe-index/), Python is placed 3rd, while R is placed 20th. * Python is also widely used outside of the statistics or machine learning communities. For example, it is used for web development (see e.g. the Python frameworks Django or Flask). * There are a lot of machine learning libraries (e.g. TensorFlow and PyTorch). ### Weakness * It does not provide, out-of-the-box, the statistical and data analysis functionalities that R provides, unless you install an appropriate package. This might be a weakness or a strength, depending on your philosophical point of view. There are other possible advantages and disadvantages of these languages. For example, both languages are dynamic. However, this feature can both be an advantage and a disadvantage (and it is not strictly related to machine learning or statistics), so I did not list it above. I avoided mentioning opinionated language features, such as code readability and learning curve, for obvious reasons (e.g. not all people have the same programming experience). Conclusion ---------- Python is definitely worth learning if you are studying machine learning or statistics. However, it does not mean that you will not use R anymore. R might still be handier for certain tasks. Upvotes: 5 <issue_comment>username_2: I want to reframe your question. Don't think about switching, think about adding. In data science you'll be able to go very far with either python or r but you'll go farthest with both. Python and r integrate very well, thanks to the `reticulate` package. 
I often tidy data in r because it is easier for me, train a model in python to benefit from superior speed, and visualize the outcomes in r in beautiful `ggplot`, all in one notebook! If you already know r there is no sense in abandoning it; use it where it is sensible and easy for you. But it is 100% a good idea to add python for many uses. Once you feel comfortable in both you'll have a workflow that fits you best, dominated by your favorite language. Upvotes: 7 [selected_answer]<issue_comment>username_3: I didn't have this choice because I was forced to move from R to Python: It depends on your **environment**: When you are embedded in an **engineering** department, a technical working group, or something similar, then Python is more feasible. When you are surrounded by **scientists** and especially **statisticians**, stay with R. PS: R offers keras and tensorflow as well, though they are implemented under the hood in Python. Only very advanced stuff will make you need Python. Though I'm getting more and more used to Python, the syntax in **R is easier**. And though each package has its own, it is somewhat consistent, while Python is not. And ggplot is so strong. Python has a clone (plotnine), but it lacks several (important) features. In principle you can do nearly as much as in R, but visualization and data wrangling in particular are much easier in R. Thus, the most famous Python library, pandas, is a **clone** of R. PPS: **Advanced** statistics definitely points at R. Python offers a lot of everyday tools and methods for a data scientist, but it will never reach those >**13,000** packages R provides. For example, I had to do an inverse regression and Python doesn't offer this. In R you can choose between several confidence tests and whether it is linear or nonlinear. The same goes for mixed models: they are implemented in Python, but so basic there that I can't see how this could be sufficient for anyone. Upvotes: 3 <issue_comment>username_4: As others have said, it's not a "switch". 
But is it worth adding Python to your arsenal? I would say certainly. In data science, Python is popular and becoming ever more popular, while R is receding somewhat. And in the fields of machine learning and neural networks, I'd say that Python is the main language now -- I don't think R really comes close here in terms of usage. The reason for all of this is generality. Python is intended as a general programming language, and allows you to easily script all kinds of tasks. If you're staying strictly within a neatly structured statistical world, R is great, but with AI you often end up having to do novel, miscellaneous things, and I don't think R can beat Python at that. And because of this, I think Python and its packages will be receiving more support and development when it comes to the more cutting-edge tech. Upvotes: 1 <issue_comment>username_5: It sounds like you have invested 1 year in data science with R, and are embedded in the R environment, but want to explore Python for data science. First learn the basics of Python, like how lists and tuples work and how classes and objects work. Then get your hands dirty with some libraries like numpy, matplotlib, and pandas. Learn TensorFlow or Keras and then go for data science. Upvotes: 0 <issue_comment>username_6: This is totally my personal opinion. I read in my office (at a construction site) that "There is a right tool for every task." I expect to face a variety of tasks, as a programmer. I want as many tools as possible that I can "buy or invest in". One day one tool will help me solve it, some other day some other tool. R (for statistics) and Python (for general purposes) are two tools I definitely want with me, and I think they are worth the investment for me. As far as switching is concerned, I will use the most efficient tool I know (where efficiency is measured by the client's requirements, time and cost investment, and ease of coding). The more tools I know, the merrier! Of course there is a practical limit to it. 
All this is my personal opinion and not necessarily correct. Upvotes: 0 <issue_comment>username_7: I would say yes. Python is better than R for most tasks, but R has its niche and you would still want to use it in many circumstances. Additionally, learning a second language will improve your programming skills. My own perspective on the strengths of R vs Python is that I would prefer R for a small, single-purpose program involving tables or charts, or exploratory work in the same vein. I would prefer Python for everything else. * R is **really** good for table mashing. If most of what a particular program is going to do is smoosh some tables into different shapes, then R is the thing to pick. Python has tools for this, but R is designed for it and does it better. * It's worth switching to R whenever you need to make a chart, because *ggplot2* is a masterpiece of API usability and *matplotlib* is a crawling horror. * Python is well designed for general purpose programming. It has a very well designed set of standard data structures, standard libraries, and control flow statements. * R is poorly suited for general purpose programming. It doesn't handle tree-structured or graph-structured data well. It has some rules (like being able to look into and modify your parent scope) which are immediately convenient, but which, when used, lead to programs that are hard to grow, modify, or compose. * R also has some straightforwardly bad things in it. These are mostly just historical leftovers, like the three different object systems. To elaborate more on the last point: computer programming done well is lego where you make your own bricks (functions and modules). Programs are usually modified and repurposed past their original design. As you build them it is useful to think about which parts might be reused, and to build those parts in a general way that will let them plug in to the other bricks. R encourages you to melt all the bricks together. 
Upvotes: 2 <issue_comment>username_8: > > A person who chases two rabbits catches neither > > > And yes, Python is more popular. I work in both, but, business-wise, it's easier to find a job in Python than in R. So, you could: * Pick Python because it is more popular. However, you must start from scratch. Or * Stay with R; after all, you have one year's worth of training with R. But it is not as popular. Upvotes: 0
2019/08/05
294
1,264
<issue_start>username_0: Is there any way to make [deepfake](https://en.wikipedia.org/wiki/Deepfake) videos without a fancy computer? For example, run [DeepFaceLab](https://github.com/iperov/DeepFaceLab) on a website so your own computer won't get involved?<issue_comment>username_1: Yes. There are services that provide a free environment to run Jupyter notebooks for research purposes (with a GPU included, which is crucial for neural networks), such as Google Colaboratory and Kaggle Kernels. They do limit how long your computation may run (12 and 6 hours, respectively), which adds some difficulty to the process, although I think it is possible to work around these restrictions. Upvotes: 2 <issue_comment>username_2: Take a look at using [AWS](https://aws.amazon.com/machine-learning/amis/). You can spin up an instance with as much processing power as you need and, by using their pre-built images, it will already be preconfigured with a lot of the packages etc. you might need for any kind of ML. I see you wanted to use DeepFaceLab, which I guess requires some kind of GUI, so I'm unsure if this is suitable for your requirements, but check it out; it seems to be the best way to perform high-processing machine learning without the *fancy computer*.
2019/08/06
433
1,601
<issue_start>username_0: I've heard somewhere that, due to their nature of capturing spatial relations, even untrained CNNs can be used as feature extractors? Is this true? Does anyone have any sources regarding this I can look at?<issue_comment>username_1: I'm not sure it's possible. An untrained CNN means it has random kernel values. Let's say you have a kernel of size 3x3 like below:

```
0 0 0
0 0 0
0 0 1
```

I don't think it is possible for that kernel to provide good information about the image. On the contrary, the kernel eliminates a lot of information. We cannot rely on random values for feature extraction. But if you use a CNN with an "assigned" kernel, then you don't need to train the convolutional layer. For example, you can start a CNN with a kernel that is designed to extract vertical lines:

```
-1 2 -1
-1 2 -1
-1 2 -1
```

Upvotes: -1 <issue_comment>username_2: Yes, it has been demonstrated that the main factor for CNNs to work is their architecture, which exploits locality during feature extraction. A CNN with random weights will do a random partition of the feature space, but still with that spatial prior that works so well, so those random features are OK for classification (and sometimes even better than trained ones, as they don't introduce additional bias). You can read more in these papers: * [<NAME> et al. Deep Image Prior](https://arxiv.org/abs/1711.10925) * [<NAME> and <NAME>. Intriguing Properties of Randomly Weighted Networks: Generalizing While Learning Next to Nothing](https://arxiv.org/abs/1802.00844). Upvotes: 4 [selected_answer]
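As a small illustration of username_1's hand-designed kernel (the 5x5 test image below is made up for this example), here is a plain-Python 'valid' convolution showing that the vertical-line kernel responds strongly exactly where the line sits:

```python
def conv2d_valid(img, kernel):
    """Plain 2-D 'valid' convolution (cross-correlation) on lists of lists."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# the hand-designed vertical-line detector from the answer above
kernel = [[-1, 2, -1],
          [-1, 2, -1],
          [-1, 2, -1]]

# 5x5 image with a bright vertical line in column 2
img = [[1 if j == 2 else 0 for j in range(5)] for i in range(5)]

fmap = conv2d_valid(img, kernel)
# the 3x3 feature map peaks where the kernel's centre column sits on the line
```

The centre column of the output holds the value 6 (line detected) and its neighbours hold -3, which is the "fixed kernel as feature extractor" idea in miniature.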
2019/08/06
535
2,130
<issue_start>username_0: As per subject title, are there ways to try Deep Learning without downloading and installing anything? I'm just trying to get a feel for how this works; I don't really want to go through the download and install steps if possible.<issue_comment>username_1: You can definitely get a good handle on the theory of various concepts in ML (i.e. the agent-environment loop and Markov Decision Processes), but true understanding (for the vast majority of people) will only come through application of the aforementioned theory. I would suggest something like [this](https://www.youtube.com/watch?v=TjZBTDzGeGg&t=4s) course to get your feet wet in ML. Upvotes: 0 <issue_comment>username_2: As I understand it, you wish to directly try out some deep learning, and things like downloading libraries and tools and managing all of these really stop you from even starting to try out deep learning experiments. If this is what you asked for: 1. Google Colab I think this is the best place for you. Anyone with a Google Drive account can sign up for Colab by heading to the Colab site and following the listed instructions. Since you mentioned that you just wanted to try out and practice stuff, this would be ideal for you. * All major Python libraries like TensorFlow, Scikit-learn, and Matplotlib, among many others, are pre-installed and ready to be imported. * Built on top of Jupyter Notebook. Please have a quick look at: <https://medium.com/lean-in-women-in-tech-india/google-colab-the-beginners-guide-5ad3b417dfa> 2. Microsoft Azure The Azure free account is available to all new customers of Azure. If you have never tried or paid for Azure before, you're eligible. Try out a student account: <https://azure.microsoft.com/en-in/free/students/> Hope this helps; go ahead with practicing deep learning. If not, please feel free to raise questions; I'm always ready to help. --- Upvotes: 3 [selected_answer]<issue_comment>username_3: Have you tried the TensorFlow playground? 
It allows you to turn all of the knobs and see the effect without having to code anything: <https://playground.tensorflow.org> Upvotes: 0
2019/08/06
528
2,119
<issue_start>username_0: I have implemented a CNN for image classification. I have not used fully connected layers, but only a softmax. Still, I am getting results. Must I use fully-connected layers in a CNN?<issue_comment>username_1: In theory, you do not need fully-connected (FC) layers. FC layers introduce more connectivity possibilities for updating weights during back-propagation, as every neuron of an FC layer is connected to every neuron of the following layer. Upvotes: 1 <issue_comment>username_2: The reason people use an FC layer after the convolutional layers is that a CNN preserves spatial information. You said you use softmax, so you are probably doing some classification task. If you don't use an FC layer, then you probably evaluate the first class by the first position of the first kernel, not by the whole image with all kernels, the second class by the second position of the kernel, and so on. The dense layer combines the info from all the kernels in all positions. That said, you can technically convert an FC layer to a convolutional one, as [described here](http://cs231n.github.io/convolutional-networks/#convert), so then you can say you "skipped" the FC layer. Upvotes: 1 <issue_comment>username_3: > > Are fully connected layers necessary in a CNN? > > > No. In fact, you can simulate a fully connected layer with convolutions. A convolutional neural network (CNN) that does not have fully connected layers is called a **fully convolutional network (FCN)**. See [this answer](https://ai.stackexchange.com/a/21824/2444) for more info. An example of an FCN is the [u-net](https://arxiv.org/abs/1505.04597), which does *not* use any fully connected layers, but only convolution, downsampling (i.e. pooling), upsampling (deconvolution), and copy and crop operations. Nevertheless, u-net is used to classify pixels (more precisely, semantic segmentation). 
Moreover, you can use CNNs only for the purpose of feature extraction, and then feed these extracted features in another classifier (e.g. an SVM). In fact, transfer learning is based on the idea that CNNs extract *reusable* features. Upvotes: 2
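To see the "simulate a fully connected layer with convolutions" point numerically (all weights and inputs below are arbitrary example values): a dense unit over a flattened 2x2 input computes exactly the same number as a single 2x2 kernel applied with 'valid' padding, since its one output position covers the whole input:

```python
# 2x2 input feature map
x = [[1.0, 2.0],
     [3.0, 4.0]]

# weights of one fully-connected output unit over the flattened input
w = [0.5, -1.0, 0.25, 2.0]

# dense computation: flatten the input, then take a dot product
flat = [v for row in x for v in row]
dense_out = sum(wi * vi for wi, vi in zip(w, flat))

# equivalent convolution: a single 2x2 kernel, 'valid' padding -> one output value
kernel = [[0.5, -1.0],
          [0.25, 2.0]]
conv_out = sum(kernel[i][j] * x[i][j] for i in range(2) for j in range(2))

# dense_out == conv_out: the FC unit and the full-input kernel are the same map
```

This is the same reshaping trick the cs231n link above describes, reduced to a single output unit.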
2019/08/06
572
2,351
<issue_start>username_0: **In a neural network, by how much does the number of neurons typically vary from layer to layer?** Note that I am **NOT** asking how to find the optimal number of neurons per layer. As a hardware design engineer with no practical experience programming neural networks, I would like to glean for example 1. By how much does the number of neurons in hidden layers typically vary from that of the input layer? 2. What is the maximum deviation in the number of hidden layer neurons to the number of input layer neurons? 3. How commonly do you see a large spike in the number of neurons? It likely depends on the application so I would like to hear from as many people as possible. Please tell me about your experience.<issue_comment>username_1: 1. Input layers will always have the dimensionality of your input data(for every model I can think of). 2. See above, the deviation between hidden layers can be significant. For example, 128 in the first hidden and 64 in the rest(or vice versa). 3. This question in particular will always be problem dependent. It is decided via architecture search or intuition/experience combined with some exploratory search. Upvotes: 1 <issue_comment>username_2: There is no right answer to this question. But, I would like to point you to [an answer on CV](https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw) that addresses the *mean* of your problem. Two points from the accepted answer that I want to draw your attention to are: > > a) There are some empirically-derived rules-of-thumb, of these, the most commonly relied on is *'the optimal size of the hidden layer is usually between the size of the input and size of the output layers'*. 
> > b) In sum, for most problems, one could probably get decent performance (even without a second optimization step) by setting the hidden layer configuration using just two rules: (i) number of hidden layers equals one; and (ii) the number of neurons in that layer is the mean of the neurons in the input and output layers. > > > Other answers in the thread are also very insightful. I recommend you go through the answers, figure out the standard deviation, and just assume things are normal; you would then have your distribution. Upvotes: 0
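The heuristic quoted in point (b) is simple enough to write down directly (a trivial sketch; the layer sizes below are just examples):

```python
def hidden_size_rule_of_thumb(n_in, n_out):
    # heuristic (ii): hidden neurons = mean of input and output layer sizes
    return (n_in + n_out) // 2

# e.g. an MNIST-style classifier with 784 inputs and 10 outputs
suggested = hidden_size_rule_of_thumb(784, 10)
```

As the quoted answer stresses, this is only a starting point, not an optimum.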
2019/08/06
563
2,140
<issue_start>username_0: I recently came across this [article](https://www.analyticsindiamag.com/iisc-bangalores-ai-team-comes-up-with-a-solution-to-an-old-nlp-problem/) which cites a [paper](https://malllabiisc.github.io/publications/papers/EWISE_ACL19.pdf), which apparently won the outstanding paper award in ACL 2019. The theme is that it solved a longstanding problem called **Word Sense Disambiguation**. What is *Word Sense Disambiguation*? How does it affect NLP? (Moreover, how does the proposed method solve this problem?)
2019/08/06
756
3,329
<issue_start>username_0: As everyone experienced in deep learning might know, in an image classification problem we normally add borders to images and then resize them to the input size of a CNN. The reason for doing this is to keep the aspect ratio of the original image and retain its information. I have seen people fill the border with black (0 pixel value for each channel), gray (127 pixel value for each channel), or random values generated from a Gaussian distribution. My question is: is there any proof of which of these is correct?<issue_comment>username_1: I've more often seen image resizing than padding, to be honest, and I tend to resize the images. Maybe it's because the datasets I've used have images with nearly equal aspect ratios. One major exception was when I worked with MR images. These were orthogonal and it would be wrong to mess up the aspect ratio. However, in this domain images have black borders everywhere, so zero-padding was easy to apply. The most common use of padding I've seen is for data augmentations (to fill values gone due to translations, rotations, shifting, etc.). In this regard, I've used many types of padding (constant value, random value, 'same' padding, mirrored padding, etc.). The best I've found to empirically work is zero-padding, but I don't think that you'll ever find a *proof* for this. I like to think of it as a *hyperparameter*; different padding strategies may work better for different tasks. Though I think that zero-padding is the safest (there is a small chance of messing things up). Upvotes: 0 <issue_comment>username_2: If the computational components of the forward feed through the network have no curvature, which is normally the case in a sum of products, then it can be proven that any constant pixel value is equivalent in terms of effect on convergence results. We wouldn't expect a proof for that, since it would be too trivial to spend time writing up for publication. 
In general, functioning vision systems have feed forward computational components with curvature, so the padding is likely significant. Even the convolutional layers may have activation functions or something even more complex going forward, as noted in [Gauge Equivariant Convolutional Networks and the Icosahedral CNN](http://openaccess.thecvf.com/content_iccv_2017/html/Zoumpourlis_Non-Linear_Convolution_Filters_ICCV_2017_paper.html) (<NAME>, <NAME>, <NAME>, <NAME>, 2019). If purely stochastic values with value distributions like that of the un-padded coordinates are used, it may be possible to prove that some gain is made, but none appeared in a few academic article searches just made. Not surprisingly, there are many proofs regarding the properties of various message padding strategies for cryptography. Short of the inclusion of thermal or quantum noise acquisition devices in VLSI circuitry and exposure of those devices in software, purely stochastic values cannot be generated. This leaves the risk of a learning approach expected to extract features from frames learning features of the pseudo-random noise generator used to pad. The answer is that none are universally correct and there appears to be much work to do in proving advantages between different techniques in as many cases as such advantages can be proven. Upvotes: 2
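For reference, the pad-then-resize preprocessing the question describes can be sketched in plain Python as follows (the 1x4 "image" is made up for the example; the `fill` value, whether 0, 127, or a random draw, is the hyperparameter under discussion):

```python
def pad_to_square(img, fill=0):
    """Pad a 2-D image (list of rows) with a constant value so it becomes
    square, keeping the original content centred and its aspect ratio intact."""
    h, w = len(img), len(img[0])
    side = max(h, w)
    top = (side - h) // 2
    left = (side - w) // 2
    out = [[fill] * side for _ in range(side)]
    for i in range(h):
        for j in range(w):
            out[top + i][left + j] = img[i][j]
    return out

img = [[1, 2, 3, 4]]                # a 1x4 "image"
square = pad_to_square(img, fill=0)
# result: 4x4 grid with the original row centred and the fill value elsewhere
```

After this step the square image would be resized to the network's input size; only the choice of `fill` differs between the strategies compared in the question.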
2019/08/07
1,674
6,046
<issue_start>username_0: As the question suggests, I'm trying to see if I can solve [OpenAI's hardcore version of their gym's bipedal walker](https://gym.openai.com/envs/BipedalWalkerHardcore-v2/) using OpenAI's [DDPG algorithm](https://spinningup.openai.com/en/latest/algorithms/ddpg.html). Below is a performance graph from my latest attempt, including the hyper parameters, along with some other attempts I've made. I realise it has been solved using other custom implementations (also utilising only dense layers in Tensorflow, not convolution), but I don't seem to understand why it seems so difficult to solve using OpenAI's implementation of DDPG? Can anyone please point out where I might be going wrong? Thank you so much for any help! Latest attempt's performance: [![DDPG performance - Bipedal Walker Hardcore v2](https://i.stack.imgur.com/CD2KP.png)](https://i.stack.imgur.com/CD2KP.png) * Average score: **about -75 to -80** * Env interacts: **about 8.4mil (around 2600 epochs)** * Batch size: **64** * Replay memory: **1000000** * Network: **512, 256 (relu activation on inputs, tanh on outputs)** * All other inputs left to default Similar experiments yielded similar scores (or less), and included: * Network sizes of **(400,300)**, **(256,128)**, and **(128,128,128)** * Number of epochs ranging from **500** all the way to **100000** * Replay memory sizes all the way up to **5000000** * Batch sizes of **32**, **64**, **128**, and **256** * All of the above, with both [DDPG](https://spinningup.openai.com/en/latest/algorithms/ddpg.html) as well as [TD3](https://spinningup.openai.com/en/latest/algorithms/td3.html) Thank you so much for any help! 
It would be greatly appreciated!<issue_comment>username_1: You may be very interested to know that there was a bug in the v2 Lidar tracing, making the agent think there were phantom objects, and sometimes intersecting with its own legs: <https://github.com/openai/gym/pull/1789> > > Finding this bug makes me even more impressed anyone has solved BipedalWalkerHardcore-v2 - it seems the observations from lidar have been inconsistent and incorrect, returning the furthest hit result instead of closest. > > > ... > > Before fix - lidar traces through ground, and hits the side of a pit, > giving the agent the impression of a "phantom canyon" in front of the > pit, that only appears as it approaches the pit: > <https://i.stack.imgur.com/scwBD.png> > > > ... > > After fix - lidar is stopped by terrain, even when another object is behind it: <https://i.stack.imgur.com/Fzg0Z.png> > > > ... > > After triple checking the docs - I've submitted a minor tweak (returning -1 instead of 1 for an object that should be ignored) - it now seems legs are correctly ignored, and the traces are accurate in all situations! > > > ... > > It seems to me that solutions to BipedalWalkerHardcore-v2 have not just learned to deal with the complex environment - but advanced a step ahead, and are able to deal with the complex environment and sensory hallucinations causing them to jump at the slightest hint of a cube, and keep running even when it looks like the ground is not visible below their feet, relying more on the touch sensor than the lidar, or perhaps recognising the difference in "shape" between a real pit and a "fake pit" (A real pit has a floor) > > > BipedalWalkerHardcore-v2 has been bumped to BipedalWalkerHardcore-v3 with these fixes as of Jan 31, 2020. You might want to try retraining your agent now! 
(although it is still a difficult environment to solve) To expand on why DDPG doesn't solve it, even though BipedalWalkerHardcore-v2, although buggy, *is* solvable: The solution landscape to this problem is as full of pits as the environment itself. To learn to leap over a pit in the environment, for example, the agent must perform a complex sequence of actions that is difficult to discover by random chance. Each time it fails, it learns that being close to a pit is highly likely to result in a large penalty, and in an effort to maximise its rewards, a naive method like DDPG will often remain stationary, as the rewards for doing that are higher than trying and falling into the pit once more. In short, vanilla DDPG lacks enough exploration power to find the complex series of actions required before it converges on not going near the pit. Not to mention all of the other things it needs to learn to be successful. Of the very few published examples that have solved it, one used Evolution Strategies - a gradient-free method essentially trying millions of policies and evaluating them - and another used a custom A3C method that was tailored to solve this particular environment. Both had high computational and sample requirements. I speak from experience, as I have personally solved this environment with an RL exploration algorithm that generalises to other environments, solves it in 4 hours on a single CPU, and can be used with any off-policy RL algorithm, but unfortunately I'm unable to publish it because of IP with the company I work for. TLDR; The chances of vanilla DDPG solving it are infeasibly small. Upvotes: 4 [selected_answer]<issue_comment>username_2: I've been working with a TD3 implementation for bipedal hardcore. It solved the easy version (v2 and v3) in about 300 epochs (<https://github.com/QasimWani/policy-value-methods>). I've been training it for hardcore and even after about 1200 episodes, it's nowhere close to convergence.
Did you end up solving it, and if so, what algorithm did you end up going with? Cheers, Q. Upvotes: 0 <issue_comment>username_3: Our recent work solves this problem by using the idea of a forward-looking actor. We use a neural network to forecast the next state given the current state and current action, then plug it into the actor's training so that the value of future states is taken into account. We apply our idea to TD3 to make a new algorithm, TD3-FORK, which solves this problem in as little as four hours. <https://github.com/honghaow/FORK/tree/master/BipedalWalkerHardcore> Upvotes: 0
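As a concrete illustration of the exploration issue raised in the accepted answer: vanilla DDPG typically relies on additive action noise such as an Ornstein-Uhlenbeck process, which is temporally correlated but still undirected, so it rarely discovers long, precise action sequences like leaping a pit. A minimal sketch of that noise process (the parameter values are the common defaults from the DDPG literature, and the 4-dimensional action space matches BipedalWalker):

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: mean-reverting, temporally
    correlated noise, commonly added to DDPG actions for exploration."""
    def __init__(self, dim, mu=0.0, theta=0.15, sigma=0.2, seed=0):
        self.mu, self.theta, self.sigma = mu, theta, sigma
        self.state = np.full(dim, mu, dtype=float)
        self.rng = np.random.default_rng(seed)

    def sample(self):
        # Pull the state back toward mu, then add Gaussian jitter.
        dx = self.theta * (self.mu - self.state) \
             + self.sigma * self.rng.normal(size=self.state.shape)
        self.state = self.state + dx
        return self.state

noise = OUNoise(dim=4)   # BipedalWalker has a 4-dimensional action space
samples = np.array([noise.sample() for _ in range(1000)])
print(samples.mean(), samples.std())   # hovers around mu=0
```

However smooth this noise is, it remains blind to the reward structure, which is consistent with the accepted answer's point that vanilla DDPG's exploration is too weak for this environment.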
2019/08/08
4,176
17,218
<issue_start>username_0: As far as I understand - I know very little about the topic - the core of AI boils down to designing algorithms that provide a TRUE/FALSE answer to a given statement. Nevertheless, I am aware of the limitations imposed by Gödel's incompleteness theorems, but I am also aware that there have been long debates, such as the Lucas and Penrose arguments with all the consequent objections, during the past 60 years. The conclusion is, in my understanding, that to create AI systems we must accept incompleteness or inconsistency. Does that mean that intelligent systems (including artificial ones), like humans, may end up in some undecidable situation that leads them to take a wrong decision? While this may be acceptable in some applications (for example, if every once in a while a spam email ends up in the inbox folder - or vice versa - despite an AI-based anti-spam filter), in other applications it may not be. I am referring to real-time critical applications, where a "wrong" action from a machine may harm people. Does that mean that AI will never be employed for real-time critical applications? Would it in that case be safer to use deterministic methods that do not leave room for any kind of undecidability?<issue_comment>username_1: Your initial statement on the core of AI is rather limited. In general, AI is concerned with modeling human behaviour either by imitation (*soft AI*) or by replicating the way human cognition works (*hard AI*). So far there have been some successes with soft AI, as computers can perform tasks that require some "intelligence", though the degree of this intelligence is questionable. This is partly due to the fact that even we as humans don't really have a clear idea of what it means for a computer to "understand" something. But your conclusion is correct: if we build an AI system with human characteristics, then it will make mistakes, just as humans make mistakes. And any system designed by humans (or machines!) will make mistakes.
However, not being able to deal with an imperfect world is not really relevant to AI alone: even systems that do not use AI methods will have to face that, and whether a system is suitable for real-time critical applications has got nothing to do with whether it is based on AI or not. UPDATE: There seem to be two distinct issues at play here: decidability and real-time processing. 1. Real-time computing (RTC): This is not really related to AI. Even ordinary programmes written in Java are not really safe for RTC, as they could start a garbage collection cycle at any time which pauses execution of the program. Just imagine a reactor core starts overheating just as your controller runs out of memory and garbage collection kicks in, halting the program for a few minutes. If you implement AI methods in RTC-safe systems, that should not be an issue. 2. Decidability: Your reasoning is that AI systems attempt to mirror human cognition, thus incorporating the ability to make mistakes. This is a more philosophical issue — if a human can control a system, then an AI system with the same capabilities should be able to do it too. This assumes that we are able to replicate human behaviour (which we are not). There are AI methods which are deterministic, so would come to the same conclusions given identical environments. So I would say that they would not perform worse than non-AI methods. It partly depends what you want to call AI; the distinction between traditional AI and statistical methods keeps getting blurred at present. To conclude: No, AI methods should be suitable, as they can also be deterministic. It depends on the actual application and method if they are. And, of course, on what you count as AI. Upvotes: 2 <issue_comment>username_2: Your question is mostly philosophical, not technical or scientific. So I am giving opinions and references here. 
> > the core of AI boils down to design algorithms > > > I notice that you do not even try to define [AI](https://en.wikipedia.org/wiki/Artificial_intelligence) (whose definition has changed since the previous century). You could look at the table of contents of the [*Artificial Intelligence* journal](https://www.journals.elsevier.com/artificial-intelligence) and notice how the topics covered there changed drastically in a few decades (even experimental approaches have declined). You might be interested in reading more about [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence) and in following the few conferences about it. Beware, there are a lot of oversimplified approaches, and even a lot of bullshit (e.g. on [this](https://agi.topicbox.com/groups/agi) AGI mailing list, though some messages there are gems). I assume you accept the [Church-Turing](https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis) philosophical thesis: every intelligent cognition (either natural, i.e. biological, or artificial) is some symbolic computation. In particular, the work of a mathematician can be abstracted as a Turing machine (that was the major insight of Turing, and of the [halting problem](https://en.wikipedia.org/wiki/Halting_problem)). Be also aware of the related [Curry-Howard](https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence) correspondence and [Rice's theorem](https://en.wikipedia.org/wiki/Rice%27s_theorem). Read [*Gödel, Escher, Bach*](https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach)! We don't yet know how to make AGI. You could read Bostrom's [*Superintelligence*](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies) book about potential dangers. You could also read J.Pitrat's book [*Artificial Beings*](https://dl.acm.org/citation.cfm?id=1717130) (which gives much more positive and constructive insights about eventually making some AGI) and [blog](http://bootstrappingartificialintelligence.fr/WordPress3/).
My personal belief (just an opinion) is that AGI could perhaps be achieved (in many dozens of years), and should definitely get much more funding -and more time- as a research topic (e.g. as much as the [ITER](https://en.wikipedia.org/wiki/ITER) reactor; see also [softwareheritage.org](https://www.softwareheritage.org/) and the motivations there), but it won't be achieved by any single technique; rather, by a clever combination of many AI techniques (both [symbolic AI](https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence) -e.g. for [planning](https://en.wikipedia.org/wiki/Planning)- and [machine learning](https://en.wikipedia.org/wiki/Machine_learning) or [connectionist](https://en.wikipedia.org/wiki/Connectionism) approaches, with inspiration from cognitive psychology). > > The conclusion is, in my understanding, that to create AI systems we must accept incompleteness or inconsistency. > > > We, members of the [Homo Sapiens Sapiens](https://en.wikipedia.org/wiki/Human_taxonomy#Subspecies) species (in Latin, the humans who know that they know, so capable of [metaknowledge](https://en.wikipedia.org/wiki/Metaknowledge)), claim to be intelligent. But all of us have a globally incomplete and inconsistent behavior, because each of us has contradictions (e.g. in our personal lives or [ethical](https://en.wikipedia.org/wiki/Ethical_dilemma) beliefs). So, logically speaking, incompleteness or inconsistency is not opposed to intelligence. Read also more about [situated AI](https://en.wikipedia.org/wiki/Artificial_intelligence,_situated_approach) and [machine ethics](https://en.wikipedia.org/wiki/Machine_ethics). BTW, I believe (since educated by J.Pitrat about this) that explicit and [declarative](https://en.wikipedia.org/wiki/Descriptive_knowledge) [meta](https://www.coursehero.com/file/ptcioic/CS-484-Artificial-Intelligence-8-Meta-Rules-The-rules-that-determine-the/)knowledge is required in any AGI system.
> > Does that mean that AI will never be employed for real-time critical applications? > > > Notice that [autonomous killing robots](https://en.wikipedia.org/wiki/Lethal_autonomous_weapon) are already a controversial research topic today. [Autonomous robots](https://en.wikipedia.org/wiki/Autonomous_robot) already exist (e.g. [Mars rovers](https://en.wikipedia.org/wiki/Mars_rover) cannot be teleoperated -for every elementary movement- from Earth, because any radio signal takes minutes to reach Mars). And autonomous vehicles (à la [Google car](https://en.wikipedia.org/wiki/Waymo)) claim today to use AI techniques and are [real-time](https://en.wikipedia.org/wiki/Real-time_computing) [safety-critical systems](https://en.wikipedia.org/wiki/Safety-critical_system). Today's Airbus or Boeing (cf [DO-178C](https://en.wikipedia.org/wiki/DO-178C)) are flying automatically most of the time. [Cruise missiles](https://en.wikipedia.org/wiki/Cruise_missile) and [ICBM](https://en.wikipedia.org/wiki/Intercontinental_ballistic_missile)s are [fire-and-forget](https://en.wikipedia.org/wiki/Fire-and-forget) devices. Many [high-frequency trading](https://en.wikipedia.org/wiki/High-frequency_trading) systems claim to use AI techniques and are real-time. PS. Notice that what was called AI in the previous century is today called AGI. My PhD in AI was defended in 1990 (and was about explicit metaknowledge for metaprogramming goals, see e.g. [this](https://inis.iaea.org/collection/NCLCollectionStore/_Public/19/059/19059867.pdf?r=1&r=1) old 1987 paper) Upvotes: 2 [selected_answer]<issue_comment>username_3: Artificial intelligence cannot be boiled down to designing algorithms, binary or otherwise, simply because the exhibition of intelligence in biological systems predated the invention of algorithmic computing. From this, we can further draw the conclusion that algorithms are not a necessary component of systems that exhibit behavior we deem intelligent. 
A decision was made, per the recommendation of <NAME>, to increase reliability of computing machinery by delegating to a single binary central processing unit all computation. This choice and the prior work upon which it was based (Shannon, Church, and Turing) led to the preeminence of algorithm specification in computer languages. The foundation of expressing functional design in algorithmic terms was laid and the software industry was born. Since that time, there has existed a parallel trend in research back toward the biological inspiration of computing machinery and, more specifically, parallel processing. We see this at several levels. * Movement of floating point arithmetic, video rendering, and machine learning bottlenecks to dedicated VLSI hardware acceleration * Multiple core VLSI processors * Computing clusters and processing frameworks, containers, and environments that expose interfaces through which compiler and kernel programmers can control parallel machinery explicitly or implicitly * Multiple thread and processes delegated to multiple cores, agents, or hosts in computing clusters * Sophisticated VLSI level caching to maximize the efficiency of parallel operations * Language and compiler features to support the trends toward deployment to multiprocessing environments, such as declarative languages for Big Data platforms (ECL for example) * Development of AI chip designs that completely or partially shift the computing paradigm to prior to the emergence of the CPU in some ways, returning to considerable parallelism and departing from centralized processing (yet capitalizing on lessons learned in computer vision, cognitive science, reverse engineering of brain genetics, mental signal tracing, the use of gradient descent with back propagation, reinforcement designs, and applied robotics) — This is likely a major research direction for the 2020s. 
Some believe that an implication of Gödel's two incompleteness theorems is that the human mind does not meet the criteria of a computing machine as Turing defined one, but these are largely tangential issues. It is true that Hava Siegelmann worked out a proof that RNNs of sufficient resolution, depth, and width can be trained to be equivalent to any Turing machine. It is true that her work is considered support for <NAME>'s bold assertion that the human brain is a meat machine. However, the work on determinism by <NAME> and <NAME>'s *The Emperor's New Mind* are not refutations of either of Gödel's theorems. They are refutations of what some thought were consequences of Gödel's theorems and some of the implications of Minsky's declaration. Gödel clearly explains his intentions in the early portion of the paper presenting the theorems, and they had nothing to do with computing. He intended to and succeeded in proving that theorems within a concrete mathematical system cannot always be proven even if they are true. Gödel's work placed unwanted doubt on the initiative to prove all remaining unproven mathematical theorems. Mathematicians naturally tended to think of mathematics as the perfect human endeavor, and a legitimate proof of incongruity between what is true and what is provable seemed an imperfect irritation. Perhaps the most profound response to Gödel's incompleteness theorems came from <NAME>, who likely deliberately placed the word Completeness in the name of his theorem. But this was not a refutation either. He worked around incompleteness by defining a class of mathematical operations and finite data structures upon which they can operate that he could prove to be complete. Upon doing so, he put into place an important portion of the basis for algorithm development.
Nonetheless, it is probably wise for present-day AI researchers to accept both incompleteness and inconsistency and realize that intelligence, artificial or not, is likely fallible after any finite degree of learning. This is likely because one cannot provide an infinite range of problem types to a learning system in a finite amount of time. There may always be at least one problem that the current state of learning cannot address. The practical colloquialism for this condition of partial knowledge is, "We don't know what we don't know." Furthermore, a clear implication of the work of Gödel is that no proof may be found for some things that are true, ever, by any type of intelligence. Similarly, we cannot be sure that even the most intelligent search for a counterexample to dispute a false assertion will ever end in finding one. The PAC Learning framework addresses categories of problems that are solvable or not from a mathematical perspective and is worthy of study. Lastly, but perhaps most profoundly, it is not clear that a type of intelligence exists that can learn anything, as opposed to being programmed to accomplish anything. Said another way, general intelligence may be an ideal conception never achieved but possibly approached. What may seem like super intelligence in one environment and during one specific time period may be entirely ineffective or even counter-intelligent and problematic in another environment or during a different time period. This cannot be stressed too much, with so many statements about AI being made in the guise of science that have no origin in scientific rigor. Nonetheless, even with these likely limitations on both AI and human intelligence, one cannot conclude that AI will be ineffective in real-time critical applications. One cannot conclude that AI will be less effective than human intelligence in any particular domain either.
It is actually difficult to conclude anything about intelligence at all, without defining it formally and reaching a consensus on that definition, which continues to escape us. We can see that the absence of this formality has not stopped the mail industry from continuing to sort mail automatically. The automotive industry continues to pursue the invention of better artificial drivers than the average human driver. The game industry implements artificial opponents that have to deliberately make mistakes to let people win in an otherwise fair, real-time game. Clearly AI is evolving faster than the DNA components that affect the human brain. People are less startled today than they would have been ten years ago by the proposition that, some time during this century, driving a car will be illegal in some jurisdictions, when the human and property loss statistics prove automated drivers to be substantially safer than nearly all manual ones. The bar for driving safety set by humans is not very high, with daydreaming, texting, occasional tiredness or inebriation slowing an already insufficient reaction time for many street events. If the driving computing agent panics because it determines that the trajectories of a dog, a child, and an elderly person intersect with the car's trajectory, it may resolve the panic and plot a safe course in a millisecond (perhaps avoiding all three or perhaps sacrificing the dog to save the two people), whereas the human may resolve the panic only after hitting someone. In summary, it is not infallibility that determines the proper balance or volume of AI deployment but a comparison of the distribution of human performance with the distribution found with the machine replacements under similar conditions.
2019/08/08
544
2,544
<issue_start>username_0: I am wondering if I can use neural networks to find feature importances in a similar manner as can be done for random forests or decision trees and, if so, how to do it? I would like to use it on tabular time series data (not images). The reason why I want to find importances with neural networks rather than decision trees is that NNs are more complicated algorithms, so using NNs might reveal correlations that are not seen by simpler algorithms, and I need to know which features are found to be more useful given those complicated correlations. I am not sure if I made it clear enough; please let me know if I have to explain something more.<issue_comment>username_1: This should be possible. Considering the universal approximation theorem, you should be able to build an ANN that approximates a function giving the most likely best feature set for a different net to train on. I would use an RNN with a softmax output layer that ranks features by performance. You can find a good explanation of softmax here: <https://developers.google.com/machine-learning/crash-course/multi-class-neural-networks/softmax> Basically, it will assign a probability value to each output node, with all of these values adding up to 1.0. Upvotes: 2 <issue_comment>username_2: There are multiple standard ways of doing feature selection, for example ranking features by information gain, that you could use, and then you can train the neural network on just those features. However, let's assume you have trained a neural network on all of the features and now want to estimate their importance. One approach you could take is to perform a sensitivity analysis on the inputs: add random noise in a controlled fashion to different features and see what effect it has. If the training dataset has been centered (so each feature has zero mean), then you could set the inputs to zero (the "average" training example) and then perturb each feature in turn to see what the effect is.
You could also fix a feature permanently to zero and then run your validation data through the network and see how accuracy changes. There should be no major effect for insignificant features, but important features being zeroed should lead to a decrease in accuracy. You can also do something like this when predicting specific examples: perturb the example's features and see how much the prediction changes. [LIME](https://github.com/marcotcr/lime) does something like this to explain why a black box like a neural network makes the predictions it does. Upvotes: 0
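A minimal sketch of the zeroing/perturbation approach described above. The `predict` function here is a hypothetical stand-in for your trained network (built so that feature 0 matters most and feature 2 not at all), and the importance score is simply the mean squared change in the output when a feature is zeroed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained model: feature 0 dominates, feature 2 is irrelevant.
def predict(X):
    return np.tanh(3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.0 * X[:, 2])

X_val = rng.normal(size=(500, 3))   # centered validation data
baseline = predict(X_val)

importance = []
for j in range(X_val.shape[1]):
    X_zeroed = X_val.copy()
    X_zeroed[:, j] = 0.0            # "remove" feature j
    importance.append(float(np.mean((predict(X_zeroed) - baseline) ** 2)))

print(importance)   # feature 0 should score highest, feature 2 zero
```

For a real network you would replace `predict` with the model's prediction function and, as the answer notes, could equally measure the drop in validation accuracy instead of the raw output change.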
2019/08/09
2,233
9,019
<issue_start>username_0: This seems like such a simple idea, but I've never heard anyone address it, and a quick Google revealed nothing, so here goes. The way I learned about machine learning is that it recognizes patterns in data, and not necessarily ones that exist -- which can lead to bias. One such example is hiring AIs: If an AI is trained to hire employees based on previous examples, it might recreate previous, human, biases towards, let's say, women. *Why can't we just feed the training data without data that we would consider discriminatory or irrelevant, for example, without fields for gender, race, etc., can AI still draw those prejudiced connections? If so, how? If not, why has this not been considered before?* Again, this seems like such an easy topic, so I apologize if I'm just being ignorant. But I have learned a bit about AI and machine learning specifically for some time now, and I'm just surprised this hasn't ever been mentioned, not even as a "here's-what-won't-work" example.<issue_comment>username_1: > > Why can't we just feed the training data without data that we would consider discriminatory or irrelevant, for example, without fields for gender, race, etc., can AI still draw those prejudiced connections? If so, how? If not, why has this not been considered before? > > > Yes, the AI/model can still learn those prejudiced connections. Consider that you have a third variable which is a [confounding variable](https://en.wikipedia.org/wiki/Confounding) or has a [spurious relationship](https://en.wikipedia.org/wiki/Spurious_relationship) that is correlated with the bias variable (BV) and the dependent variable (DV). Suppose the analyst removed the BV but failed to remove the third variable from the data that is fed to the model. Then the model will learn the relationships the analyst didn't want it to learn.
But, at the same time, the removal of the variables could lead to [omitted variable bias](https://en.wikipedia.org/wiki/Omitted-variable_bias), which occurs when a *relevant* variable is left out. Ex: Suppose that the goal is prediction of the salary ($S$) of an individual and the independent variables are the age ($A$) and experience ($E$) of the individual. The analyst wants to remove the bias that could come in because of age. So, she removes age from one of the models and comes up with two competing linear models: $S = \beta\_0 + \beta\_1E + \varepsilon$ $S = \beta\_0 + \beta\_1^\*E + \beta\_2A + \varepsilon$ Since experience is highly correlated with age, in the presence of age in the model, it is very likely that $\beta\_1^\* < \beta\_1$. $\beta\_1$ will be a bogus estimate of the effect of a person's experience on salary, as the first model suffers from the omitted variable bias. At the same time, the predictions from the first model would be reasonably good, although the second model is very likely to beat the first model. So, if the analyst wants to remove any 'bias' that might come in because of age, i.e. $A$, she must also remove $E$ from the model. Upvotes: 2 <issue_comment>username_2: Sometimes, the reason that this isn't an option is that you don't have that much control over what data is provided. Suppose, for example, you want a fancy AI that reads a Résumé and filters on suitability for a job. There isn't a particularly rigid formula about what people put in their Résumé, which makes it difficult to exclude things you'd rather not consider. Where you do have more control over exactly what information you consider, it can still be thwarted by correlations. Think, for a moment, how this pans out with a human decision maker. You want to ensure that <NAME> gives women a fair chance at being hired, so you make sure that there isn't a gender field in the application form.
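The omitted-variable effect in the salary example from the first answer can be checked numerically. This is a sketch with made-up coefficients (experience is constructed to be correlated with age), fitting both competing models by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
age = rng.normal(40, 10, n)
exp_ = 0.8 * age - 20 + rng.normal(0, 2, n)      # experience correlated with age
salary = 30 + 1.0 * exp_ + 0.5 * age + rng.normal(0, 1, n)

# Full model: salary ~ experience + age
X_full = np.column_stack([np.ones(n), exp_, age])
b_full = np.linalg.lstsq(X_full, salary, rcond=None)[0]

# Omitting age: its effect gets absorbed into the experience coefficient
X_omit = np.column_stack([np.ones(n), exp_])
b_omit = np.linalg.lstsq(X_omit, salary, rcond=None)[0]

print(b_full[1], b_omit[1])   # ~1.0 vs. an inflated estimate
```

The experience coefficient in the reduced model is inflated by the age effect it absorbs, which is exactly the bogus-estimate problem described above: dropping the sensitive variable does not drop its influence.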
You also blind out the applicant's name, since there is no good reason that a name should determine suitability for a role, and including it would reveal a lot of genders. But you don't block out the hobbies, clubs and societies entry, because it's thought to say something positive about an applicant if they were the captain of their college sports team. Joe Sexist, however, considers it a positive if an applicant captained a male dominated team such as American football, but considers negative being captain of a female dominated team! Some might say that wouldn't quite be bias against women; it's bias against players of sports that Joe Sexist considers effeminate. But really a skunk by any other name would stink as bad. The same sort of thing can happen with AI. Now to be clear, the AI is not sexist. It is a blank sheet with no preconceptions until it gets fed data. But when it gets fed data, it will find patterns in the same way. The dataset it gets given is years of hiring decisions by Joe Sexist. As suggested, there is no entry for gender, but there are fields for all the things that might be considered slightly relevent. For example, we include whether they have a clean driving license. The AI notices that there is a positive correlation between the number of road traffic offences an applicant has and Joe's likelihood of hiring them (Because, of course, there happens to be a gender correlation between dangerous driving and gender). Again, the AI has no preconceptions. It doesn't know that traffic offences are dangerous and should be weighted against. As far as its dataset suggests, they're points! With this sort of information in a dataset, the AI can exhibit all the same sorts of biases as Joe Sexist, even though it doesn't know what a "woman" is! --- To expand this with specific numbers, suppose that your dataset has 1000 male and 1000 female applicants for a total of 1000 places. Of those, 400 of the men and 100 of the women have a tarnished traffic record. 
Joe Sexist was not in favour of reckless drivers: in fact a clean traffic record guaranteed you would beat an equivalent candidate with a tarnished record. But he was very in favour of men: being male made you 9 times more likely to get hired than being female. So he gives places to 900 of the men: all 600 of the clean drivers and 300 dirty drivers. He gives places to 100 of the women: all to clean drivers. Now, you take away any mention of gender in the dataset. There are 2000 people, 500 drive badly, 1500 drive well. Of these, 300 bad drivers get jobs, and 700 good drivers get jobs. Therefore the 25% of the population who drive badly get 30% of the jobs, which means (as far as an AI that just looks blindly at the numbers is concerned) that driving badly suggests you should get the job. That's a problem. Further, suppose you have a new batch of 2000 applicants with the same ratios and it's the AI's turn to decide. Now often AIs actually make this even worse by exaggerating the significance of subtle indicators, but let's suppose that this one does everything in strict proportionality. The AI has learned that 60% (300 / 500) of the bad drivers should get the job. It doesn't know about gender, so it at least allocates the bad driver bonus "fairly": 240 male and 60 female bad drivers get jobs. Then 280 male and 420 female good drivers get jobs. This comes to 520 male and 480 female applicants getting in. Even though the original applicant pool was balanced and if anything women were better (at least at driving) the original sexism in the training dataset still gives some advantage to the men. (as well as giving an advantage to bad drivers) --- Now, don't let me completely dissuade you. In the human case, it is a known fact that blinding out some information does indeed give more balanced hiring decisions. And even in my toy example, while it doesn't get to fairness, it has massively reduced the scale of the sexism.
So yes, it probably would make the AI somewhat less sexist if the most blatant indicators aren't provided in the dataset. But perhaps this gives some intuition about why it's not a complete solution to the problem. There is some sexism that leaks through, and it also causes the system to make very weird associations with other bits of the dataset. Upvotes: 1 <issue_comment>username_3: There is a wider social issue to consider here also. When we build machines, we evaluate what they do and decide if the action that they undertake is to our benefit or not. All societies do this, although you are probably more aware of obvious examples such as the Amish than you are of your own society. When people complain about biased decision making by AI systems, they are not just evaluating if the result is accurate, but also if that decision supports the values that they wish to see instantiated in society. You can make a human take cultural factors into account when making a decision, but not an AI that is completely unaware of them. People describe this as complaining about 'bias', but that is not always completely accurate. They are really complaining that the use of AI systems fail to take into account wider social issues that they consider to be important. Upvotes: 0
2019/08/09
<issue_start>username_0: I'm trying to use a Monte Carlo Tree Search for a non-deterministic game. Apparently, one of the standard approaches is to model non-determinism using *chance nodes*. The problem for this game is that it has a very high min-entropy for the random events (imagine the shuffle of a deck of cards), and consequently a very large branching factor ($\approx 2^{32}$) if I were to model this as a chance node. Despite this issue, there are a few things that likely make the search more tractable: 1. Chance nodes only occur a few times per game, not after every move. 2. The chance events do not depend on player actions. 3. Even if two random outcomes are distinct, they might be "similar to each other", and that would lead to game outcomes that are also similar. So far all approaches that I've found to MCTS for non-deterministic games use UCT-like policies (e.g. chapter 4 of [A Monte-Carlo AIXI Approximation](https://arxiv.org/pdf/0909.0801.pdf)) to select chance nodes, which weight unexplored nodes maximally. In my case, I think this will lead to fully random playouts since any chance node won't ever be repeated in the selection phase. What is the best way to approach this problem? Has research been done on this? Naively, I was thinking of a policy that favors repeating chance nodes more over always exploring new ones.<issue_comment>username_1: If you have that sort of a priori knowledge about your environment it, as you said, will simplify the problem substantially. From what I gather you have done a good amount of background research and simply want to apply UCT MCTS (or similar) to the environment. You mention that "chance nodes won't ever be repeated in the selection phase". If I understand what you are asking correctly, you can simply use what you know to alter the way you search nodes of the tree.
That is, you can essentially act greedily with respect to the initial chance nodes and then slowly decay that search strategy as the training progresses (to avoid convergence to local maxima). I encourage you to dig a bit deeper into the methods around exploration vs. exploitation as there may be an elegant solution to this problem in particular. Upvotes: 0 <issue_comment>username_2: You can try using an "Open-Loop" MCTS approach, instead of the standard "closed-loop" one, and eliminate chance nodes altogether. See, for example, [Open Loop Search for General Video Game Playing](http://www.diego-perez.net/papers/OpenLoopGVG.pdf). In a "standard" (closed-loop) implementation, you would store a game state in every normal (non-chance) node. Whenever there is a chance event, you would stochastically traverse to one of its children, and then have a normal node with a "deterministic" game state again. In an open-loop approach, you do not store game states in any node (except possibly the root nodes), because nodes no longer deterministically correspond to specific game states. **Every node in an open-loop MCTS approach only corresponds to the sequence of actions that leads to it from the root node**. This completely eliminates the need for chance nodes, and results in a significantly smaller tree because you only need a single path in your tree for every possible unique sequence of actions. A single sequence of actions may, depending on stochastic events, lead to a distribution over possible game states. In every separate MCTS iteration, you would re-generate game states again by applying moves "along the edges" as you traverse through the tree. You also "roll the dice" again for any stochastic events. If your MCTS iteration traverses a certain path of the tree often enough, it will still be able to observe all the possible stochastic events through sampling.
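To make the open-loop idea concrete, here is a minimal sketch on a toy two-decision stochastic game. All names here are illustrative, not from a library: statistics are keyed by the action-sequence prefix alone, and the chance event is re-rolled on every simulation rather than stored in a chance node.

```python
import math
import random

# Toy stochastic game: two decisions, each in {0, 1}. The "chance event"
# (a coin flip) is re-rolled on every simulation and never stored anywhere.
def play(actions, rng):
    coin = rng.random() < 0.5
    a1, a2 = actions
    base = 0.8 if a1 == 1 else 0.4          # action 1 is better on average
    bonus = 0.2 if (a2 == 1) == coin else 0.0
    return base + bonus

# Open-loop "tree": statistics keyed only by the action-sequence prefix.
stats = {}  # prefix tuple -> (visits, total_return)

def ucb_child(prefix, n_parent, c=1.4):
    best, best_val = None, -float("inf")
    for a in (0, 1):
        child = prefix + (a,)
        n, w = stats.get(child, (0, 0.0))
        val = float("inf") if n == 0 else w / n + c * math.sqrt(math.log(n_parent) / n)
        if val > best_val:
            best, best_val = child, val
    return best

rng = random.Random(0)
for _ in range(2000):
    path = ()
    for _depth in range(2):                  # select down to a full action sequence
        n_parent = max(stats.get(path, (1, 0.0))[0], 1)
        path = ucb_child(path, n_parent)
    r = play(path, rng)                      # chance event re-rolled here
    for i in range(len(path) + 1):           # back up along the prefix chain
        n, w = stats.get(path[:i], (0, 0.0))
        stats[path[:i]] = (n + 1, w + r)

best_first = max((0, 1), key=lambda a: stats[(a,)][0])
print(best_first)  # the better first action dominates the root visits
```

There are only 7 nodes here no matter how the coin lands, because the tree indexes action sequences, not states.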
Note that, **given an infinite amount of time, the closed-loop approach with explicit chance nodes will likely perform much better**. But when you have a small amount of time (as is the case in the real-time video game setting considered in the paper I linked above), an open-loop approach without explicit chance nodes may perform better.

---

Alternatively, if you prefer the closed-loop approach with explicit chance nodes, you could try some mix of:

* Allowing MCTS to prioritise promising parts of the search tree over parts that have not been visited at all (i.e. do not automatically prioritise nodes with $0$ visits). For example, instead of giving unvisited nodes a value estimate of $\infty$ (this is how you could interpret the automatic selection of them), you could give them a value estimate equal to the value estimate of the parent node, and just apply the UCB1 equation directly.
* Use AMAF value estimates / RAVE / [GRAVE](https://www.lamsade.dauphine.fr/~cazenave/papers/grave.pdf) in your selection phase. This allows you to very quickly learn some crude value estimates for moves that you have never selected in the Selection phase yet, by generalising from observations of playing them in the Play-out phase. I have noticed that the "standard" implementation of RAVE / GRAVE, without an explicit UCB-like exploration term, does not mix well with my previous suggestion of using a non-infinite value estimate for unvisited children. It may be good to consider a UCB-like variant with an explicit exploration term instead.

Upvotes: 3 <issue_comment>username_3: I would have a look at progressive widening, which was created to address the problem of applying MCTS to continuous stochastic environments. That is, where the probability of encountering the same state twice in simulating a chance node is $0$. The original paper for DPW (Double Progressive Widening) can be found here <https://hal.science/hal-00542673v2/file/c0mcts.pdf>.
DPW addresses the case where both action and state spaces are continuous, but you can simplify it if you only have one or the other. The main idea is that you expand the chance node a few times and then revisit those expansions. When you've saturated existing outcomes according to some hyperparameter, you then randomly sample the chance node. It has a few limitations, for example the rate at which you sample new nodes is fixed and has to be determined experimentally. In some environments, the optimal rate is dependent on the state itself, which leads to more complex techniques. For your problem though, it sounds like it may be a good fit. Upvotes: 0
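The widening criterion itself is simple. A sketch of the usual rule (the `k` and `alpha` values are illustrative hyperparameters, not values from the paper):

```python
import random

def should_widen(num_children, num_visits, k=1.0, alpha=0.5):
    # Progressive widening: allow a new child (a fresh sample of the chance
    # node's outcome) only while the child count lags behind k * N^alpha.
    return num_children < k * num_visits ** alpha

# Simulated visits to a single chance node: new outcomes are sampled at a
# decreasing rate, so existing outcomes get revisited and better evaluated.
children = []
rng = random.Random(0)
for visit in range(1, 101):
    if should_widen(len(children), visit):
        children.append(rng.random())  # stand-in for sampling a new outcome
    # else: descend into an existing child instead (e.g. chosen by UCB1)
print(len(children))  # grows like sqrt(visits): 10 children after 100 visits
```

With `alpha = 0.5` the number of stored outcomes grows with the square root of the visit count, so even a chance node with $2^{32}$ possible outcomes only accumulates as many children as its visit budget justifies.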
2019/08/11
<issue_start>username_0: Every week I will get a lot of videos from a game that I play — an outdoor game where you throw wooden skittle bats at skittles — and then I will cut the videos, so that, at the end, there [is video only about throws](https://www.youtube.com/watch?v=ITCglVMyKLs). The job is simple and systematic. I have a lot of videos, so I was wondering: * Is it possible to teach AI to cut videos from the right place? I was thinking to ask for help or guidance where to start to solve this problem. I can't use only sound, because sometimes you can hear skittles hit from outside of the video. I also can't just use movement activity, because sometimes there are people moving around the field. Videos are always filmed from a fixed stand, so it should make it easier. So, is it possible and where to start? Here is another example: <https://www.youtube.com/watch?v=sHu6yMBV3xU><issue_comment>username_1: I'm thinking, you can input the video you are trying to edit and make it output the timestamps to cut. You will probably have to manually type in the timestamps that you would cut for that video, or maybe use a keylogging program of some sort. This works in theory, kinda, but I'm not sure how exactly though. The inconsistent input length is easy enough to deal with, but I'm pretty sure you can't have inconsistent output lengths (otherwise I would be using it long ago). Maybe just make like 100 nodes and make it output (timestamp, confirm) where if confirm is > 0.5 it is an actual cut it wants to make. But inputting whole videos is never a good idea. It takes way too long and takes like 30 gigs of RAM (I had to use virtual RAM which is ridiculously slow) for like a 5 min video and crashes all the time. Any suggestions though? Upvotes: 1 <issue_comment>username_2: I believe the answer is yes, but the video editing is mostly programmatic. The AI part comes in detecting the right spots to cut.
* You want to detect the right portion when someone picks up a wooden skittle
* You want to detect when the skittles stop moving on the ground (to detect this, you also need to know when they start moving)

These will give you timestamps on the video. The next step is to add some padding and run the frame-dropping operations on the video, which can all be done with a simple script. You might find this video very similar in concept, and maybe even as a code source: <https://youtu.be/DQ8orIurGxw> For the recognition of the objects in the video frames, there are lots of options. You might look into computer vision or object motion detection. Upvotes: 2
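A crude version of the motion-based timestamp detection can be sketched with plain NumPy (real footage would come from something like OpenCV's `VideoCapture`; synthetic frames stand in here, and the threshold and padding values are illustrative):

```python
import numpy as np

def active_segments(frames, threshold=5.0, pad=2):
    # Mean absolute difference between consecutive frames as an activity score.
    diffs = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
    active = diffs > threshold
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                        # motion begins
        elif not a and start is not None:    # motion ends: emit padded segment
            segments.append((max(0, start - pad), min(len(active), i + pad)))
            start = None
    if start is not None:
        segments.append((max(0, start - pad), len(active)))
    return segments

# Synthetic clip: 30 static frames, 10 frames of motion, 30 static frames.
rng = np.random.default_rng(0)
frames = np.zeros((70, 8, 8))
frames[30:40] = rng.uniform(0, 255, size=(10, 8, 8))
print(active_segments(frames))  # one padded segment around the motion
```

Frame indices in the returned segments map to cut timestamps by dividing by the frame rate. A real pipeline would still need a second signal (e.g. a trained detector for the thrower) to reject motion from bystanders, as the question notes.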
2019/08/12
<issue_start>username_0: A while back I posted on the Reverse Engineering site about an audio DSP system whose designer had passed away and whose manufacturer no longer had source code (but [the question was deleted](https://reverseengineering.stackexchange.com/questions/17238/decode-filter-coefficients-for-audio-dsp)). Basically, the audio filter settings are passed from a Windows program to the DSP device presumably as coefficients and then generic descriptions of those filters (boost/cut, frequency and bandwidth) are passed back from the box to the software - but only if it somehow recognizes the filter setting. I want to be able to generate the filter settings separately from the manufacturer software, so I need to know how they are calculated. I've not been able to deduce how this is structured from observing the USB communication that I've gathered. So, I wonder if AI could do this. How would I go about creating an AI to send commands to the box (I know how to communicate with the box and have a framework for how these types of commands are phrased) and then look at the responses to either further decode the system and/or create an algorithm for creating filters? The communication with the DSP mixer box is basically via "Serial" commands and although it uses a USB port, there is a significant bottleneck inside the command control system in the mixer box. Any attempts to reverse engineer may encounter problems based on the sheer amount of time that it would take to compile enough data. Or not.<issue_comment>username_1: Yes, this is entirely possible. As was previously mentioned, complex connectionist systems are often thought of as black boxes (despite us being able to "look in" the box given enough computation and analysis) because of the difficulty in understanding learning and the network's ultimate decision making.
Here, we can model the problem as such: given an input of filter settings (and presumably some information about the audio), predict the target descriptors as an output. All you really need to do is generate a dataset from the program and then train it in a multi-label classification context to predict the output descriptors. Upvotes: 1 <issue_comment>username_2: You definitely could, technically, use AI (advanced informatics) techniques like in the [BinSec](https://binsec.github.io/) binary analyzer (static analyzer of binary code). You might be forbidden legally to do so. Check with your lawyer. Contact me by email for more information (on the technical level). Upvotes: 0
2019/08/12
<issue_start>username_0: I came across 'Amber' (<https://ambervideo.co/>) where they are claiming that they have trained their AI to find patterns emerging from artificially created videos which are invisible to the naked eye. I am wondering whether the people who are creating deepfakes can likewise train their AIs to remove these imperfections, so the problem reduces to a 'cat-and-mouse' game where having more resources (to train their AI) is more crucial. I do not work in AI and vision and so I may be missing some trivial points in the area. I would really appreciate a detailed explanation or relevant resources. Edit: Most of the people who manipulate the news media or create fake news can afford more resources than an average citizen. So, is the future really going to be dark, where only a strong few have even more control over society than today? I mean, even though there are fake photos created with Photoshop, most of the good photoshopped photos do take a long time to make. But if AIs can be trained to do that, then it is more about having large resources. Are there related works which give hope of knowing real from fakes? P.S.: I realize that after the edit, the question also went tangential to the topic-tags here. Please let me know if there are relevant tags.<issue_comment>username_1: As mshlis begins to touch on, yes we can. However, it will be an unending war. There are quite a few reasons for this. For one, the problem itself is not simple. There are many different 'versions' of the deepfakes framework out in the wild at this point; any algorithm you create to try and spot them would have to work for all of the different iterations. Another reason is that the systems that would be used to combat it can be quite easily fooled ([see](https://arxiv.org/abs/1412.1897)). However, the most glaring, and unending, problem comes from the architecture itself. Let us say we create a perfect algorithm that is foolproof and extremely accurate.
Even then, all one would have to do is use that algorithm as the discriminator during training of your deepfakes model, and bing-bang-boom, your deepfake detection model is busted. Upvotes: 1 <issue_comment>username_2: I think this game will go pretty crazy, because, at some point, the generator AI will be able to generate absolutely perfect images. Actually, no, just perfect enough that no AI can be sure whether they are real or fake. So, I think the AI war will go beyond the image itself: the detector AI will probably evolve to analyze whether a video is logically plausible, for example, by tracking the celebrity's position to prove that it is impossible that he/she was, for example, let's put it this way, being disloyal to his/her partner. I mean, currently, AI can tell whether an image is fake or not better than humans because it has seen about a million times more samples than us, but if we know who the person in the image is and we are as stalky as the AI I just described, we can probably work out that this image is implausible. Of course, there will be countermeasures to that. But, at that point, we might as well just let the AI rule the world, given that it will have become this smart (lol). But, seriously, if it's smart enough to think this far ahead in this 'real world' problem, then strong AI is nigh. Upvotes: 2 <issue_comment>username_3: Sound and image manipulation necessarily creates artifacts. Around the edges of superimposition in layers there are such. Face replacement and other more surface or object centered operations create a different class of artifacts. A sufficiently well constructed LSTM or GRU network and a data set of manipulated frame sequences and the user (mouse and keyboard) events that manipulated them can be used to produce good guesses of the event set from new images. Adding unmanipulated images to the data set can allow for the no-event case. That would be the supervised way to do it.
There are unsupervised approaches that would require considerably less training resources, which is likely the case with this San Francisco solutions provider. In either case, the question of escalation is a good one. One can also create a device, building from the current state of machine learning, that hides manipulations from existing detection software. If they are forward thinking, the same provider may have already developed it. > > Can we combat against deepfakes? ... > I am wondering whether the people who are creating deepfakes can likewise train their AIs to remove these imperfections ... > > > Yes and yes. In war, the combatants learn the methods of the opposing combatants and adapt. A detection mechanism for opposing strategy changes is also theoretically possible, which is one of the reasons that military research facilities spend so much on higher forms of AI. The edit to the question is not entirely tangential either. If we propose, which some people have, that a virtual reality may damage human culture or individual psyches, the average citizen is likely to be considered collateral damage on the field of combat by companies seeking a good financial return from their AI development. Of course, we could say the same thing about the use of diminished fifths in music. Two notes that are six half steps apart produce a dissonant frequency ratio of $1:\sqrt{2}$. The diminished fifth was considered subliminally satanic in Europe centuries ago and prohibited in music compositions by law. The glass harmonica was alleged to have driven listeners insane. Anthropologically, it is possible that a mark of our species is to manipulate appearance. To hunt fakes in frames and audio is likely a fruitless hunting ground, with or without the escalation. The current hunting ground of import is the research into what genetic elements led to human abilities to imagine, design, and fabricate.
After that is known, we may have a better window into whether the cat-and-mouse games we play have any sustainable value for our species going forward. Those who love competition believe that it strengthens, which is possible. It is also possible that the games are solely an artifact of a painful path to our emergence as the dominant mammalian species and no longer of any particular use. "Do to others as you would want them to do to you," has the ring of truth we can't ignore either. If we look through this wider lens, we can see that our entertainment choices tend toward what could (in the absence of bias) qualify as deepfakes. There are entire cities fueled by the money made by the entertainment industry producing excellence in sound and image capture, synthesis, and manipulation. The story lines are not necessarily representing deep truths. This is more overt. On the more covert side, some pass fakes off as reality as a move in their own game to achieve some objective, but this is not the exception in our culture. The fields of public relations and marketing are based on the creation and preservation of business value. Some elements of government, education, and community are based on the creation of economy-preserving beliefs. The intention may be to benefit others or beat them and gain personal wealth. Some of us seek authenticity and would like the fake-finders to win the combat, but it appears they may be on the losing side. Does this question and this answer pertain to this Stack Exchange community? Absolutely. This community's description in the drop down of SE communities reads, "For people interested in conceptual questions about life and challenges in a world where 'cognitive' functions can be mimicked in purely digital environment." Whether AI ultimately weighs in on the side of playing people or informing them certainly pertains to this published view of this community's purpose. Upvotes: 0
2019/08/12
<issue_start>username_0: Assume I have a list of sentences, which is just a list of strings. I need a way of comparing some input string against those sentences to find the most similar. Can [ELMO embeddings](https://allennlp.org/elmo) be used to train a model that can give you the $n$ most similar sentences to an input string? For reference, gensim provides a [doc2vec](https://radimrehurek.com/gensim/models/doc2vec.html) model that can be trained on a list of strings, then you can use the trained model to infer a vector from some input string. That inferred vector can then be used to find the $n$ most similar vectors. Could something similar be done, but using ELMO embedding instead? Any guidance would be greatly appreciated.<issue_comment>username_1: I'm assuming you are trying to train a network that compares 2 sentences and tells you how similar they are. To do that you will need the dataset (the list of sentences) and a corresponding list of 'correct answers' (the exact similarity of the sentences, which I'm assuming you don't have?). Why do you need to compare them using a neural network though? For Python, difflib's SequenceMatcher would be my suggestion, but I'm sure there are many other libraries out there :) Upvotes: -1 <issue_comment>username_2: I ended up finding [this article](https://ai.intelligentonlinetools.com/ml/document-similarity-in-machine-learning-text-analysis-with-elmo/) which does what I'm looking for.
Below is the portion of code I adapted for my needs:

```
from sklearn.metrics.pairwise import cosine_similarity
import tensorflow_hub as hub
import tensorflow as tf  # note: hub.Module / tf.Session require TensorFlow 1.x

elmo = hub.Module("https://tfhub.dev/google/elmo/2", trainable=True)

def elmo_vectors(x):
    embeddings = elmo(x, signature="default", as_dict=True)["elmo"]
    with tf.device('/device:GPU:0'):
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            sess.run(tf.tables_initializer())
            # return average of ELMo features
            return sess.run(tf.reduce_mean(embeddings, 1))

corpus = ["I'd like an apple juice",
          "An apple a day keeps the doctor away",
          "Eat apple every day",
          "We buy apples every week",
          "We use machine learning for text classification",
          "Text classification is subfield of machine learning"]

elmo_embeddings = []
print(len(corpus))
for i in range(len(corpus)):
    print(corpus[i])
    elmo_embeddings.append(elmo_vectors([corpus[i]])[0])

print(elmo_embeddings, len(elmo_embeddings))
print(elmo_embeddings[0].shape)

sims = cosine_similarity(elmo_embeddings, elmo_embeddings)
print(sims)
print(sims.shape)
```
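To answer the "$n$ most similar" part directly: once every sentence has an embedding, ranking against a new query is just a cosine-similarity sort. A sketch with plain NumPy (toy 3-d vectors stand in for the 1024-d ELMo vectors produced above; `most_similar` is an illustrative helper, not a library function):

```python
import numpy as np

def most_similar(query_vec, corpus_vecs, corpus, n=2):
    # Cosine similarity of the query against every corpus embedding.
    q = query_vec / np.linalg.norm(query_vec)
    m = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    sims = m @ q
    top = np.argsort(sims)[::-1][:n]        # indices of the n highest scores
    return [(corpus[i], float(sims[i])) for i in top]

# Toy 3-d "embeddings" stand in for real ELMo vectors.
corpus = ["apple juice", "apple a day", "machine learning"]
vecs = np.array([[1.0, 0.1, 0.0],
                 [0.9, 0.2, 0.1],
                 [0.0, 0.1, 1.0]])
print(most_similar(np.array([1.0, 0.0, 0.0]), vecs, corpus))
```

In the real pipeline, `query_vec` would be `elmo_vectors([input_string])[0]` and `corpus_vecs` the stacked `elmo_embeddings`.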
2019/08/13
<issue_start>username_0: For an experiment that I'm working on, I want to train a deep network in a special way. I want to initialize and train a small network first, then, in a specific way, I want to increase network depth, leading to a bigger network which is subsequently to be trained. This process will be repeated until one reaches the desired depth. It would be great if anybody has heard of anything similar and could point me to some related work. I think in some paper I read something about a related technique where people used something similar, but I can't find it anymore.<issue_comment>username_1: I haven't read any relevant paper about this, but I have seen some implementations based on what you are describing, arbitrarily called **DGNN (Dynamic Growing Neural Network)**. Hope this term can help your search. Upvotes: 2 <issue_comment>username_2: **Neuroevolution Through Augmenting Topologies** or **NEAT** may be what you are referring to. The original paper by <NAME> is [here](http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf). NEAT combines a neural network and a genetic algorithm. Instead of using back propagation or gradient descent to "train" your network, NEAT creates a population of very simple neural networks (no connections) and evolves them with **fitness evaluation, crossover,** and **mutation.** The genome syntax: every connection gene has a few settings: in node, out node, weight of connection, enabled/disabled, and innovation number. **In, Out,** and **Weight** values are the same as in regular neural networks. **Enabled** and **Disabled** genes are, well, enabled and disabled.
The innovation value is possibly the most defining feature of NEAT, since it allows for crossover of different topologies and historical tracking of each connection. [![enter image description here](https://i.stack.imgur.com/J9zMq.png)](https://i.stack.imgur.com/J9zMq.png) NEAT can mutate or change both its weights and connections, so for example, `Parent1` and `Parent2` have 5 of the same connections, represented by innovation / ID numbers 1 through 5. Since they have the same connection nodes, the genetic algorithm will randomly pick either `Parent1 weight` or `Parent2 weight`. The excess and disjoint genes are inherited from the more fit parent. [![NEAT crossover](https://i.stack.imgur.com/l0je3.png)](https://i.stack.imgur.com/l0je3.png) NEAT will then mutate each genome, as shown in the image below. [![enter image description here](https://i.stack.imgur.com/DbJ29.png)](https://i.stack.imgur.com/DbJ29.png) Upvotes: 3 <issue_comment>username_3: At [DeepChess](https://arxiv.org/abs/1711.09667) they trained stacked autoencoders: > > **Training Pos2Vec**: We first trained a deep belief network (DBN), which would later > serve as the initial weights for supervised training. The DBN is based > on stacked autoencoders which are trained using layer-wise > unsupervised training. The network consists of five fully connected > layers of sizes: 773–600–400–200–100. We initially trained the first > layer (i.e., a 3-layer (773–600–773) autoencoder), before fixing its > weights and training the weights of a new (600–400–600) autoencoder, > and so on. > > > I suppose this matches your criteria, although the network's architecture isn't very flexible. They don't introduce skip-connections etc. Upvotes: 0
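The grow-then-train idea can also be sketched directly in plain NumPy: train a small ReLU network, then splice in a new hidden layer initialised to the identity (which leaves the function unchanged, because the incoming activations are already non-negative) and keep training. This is an illustrative sketch, not code from either paper above.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 16  # hidden width; kept fixed so an identity layer can be spliced in

def relu(x):
    return np.maximum(x, 0.0)

def forward(ws, bs, x):
    hs = [x]
    for i, (w, b) in enumerate(zip(ws, bs)):
        z = hs[-1] @ w + b
        hs.append(relu(z) if i < len(ws) - 1 else z)  # linear output layer
    return hs

def train(ws, bs, x, y, steps=300, lr=0.05):
    for _ in range(steps):
        hs = forward(ws, bs, x)
        grad = 2.0 * (hs[-1] - y) / len(x)            # d(MSE)/d(output)
        for i in reversed(range(len(ws))):
            gw, gb = hs[i].T @ grad, grad.sum(axis=0)
            grad = grad @ ws[i].T
            if i > 0:
                grad = grad * (hs[i] > 0)             # ReLU mask
            ws[i] -= lr * gw
            bs[i] -= lr * gb

def deepen(ws, bs):
    # Insert an identity-initialised hidden layer just before the output.
    ws.insert(len(ws) - 1, np.eye(H))
    bs.insert(len(bs) - 1, np.zeros(H))

x = rng.uniform(-1, 1, size=(128, 2))
y = x[:, :1] * x[:, 1:] + 0.5                         # toy regression target

ws = [rng.normal(0, 0.5, (2, H)), rng.normal(0, 0.5, (H, 1))]
bs = [np.zeros(H), np.zeros(1)]
train(ws, bs, x, y)                                   # train the shallow net

before = forward(ws, bs, x)[-1]
deepen(ws, bs)
after = forward(ws, bs, x)[-1]
assert np.allclose(before, after)                     # deepening preserved the function
train(ws, bs, x, y)                                   # continue training, now deeper
```

The identity trick works here because the new layer's input is post-ReLU and hence non-negative, so `relu(I @ h) == h`; with other activations a function-preserving initialisation needs more care.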
2019/08/13
<issue_start>username_0: Consider an MLP that outputs an integer 'rating' of 0 to 4. Would it be correct to say this could be modeled in either of the following ways: 1. map each rating in the dataset to a 'normalized set' between 0 and 1 (i.e. 0, 0.25, 0.5, 0.75, 1), have a single neuron with sigmoid activation at output provide a single decimal value and then take as the rating whatever is closest to that value in the 'normalized set' 2. have 5 output neurons with a softmax activation function output 5 values, each representing a probability of one of the 5 ratings as the outcome, and then take as the rating whichever neuron gives the highest probability? If this is indeed the case, how does one typically decide 'which way to go'? Approach 1 certainly appears to yield a simpler model. What are the considerations, pros/cons of each approach? Perhaps a couple of concrete examples to illustrate?<issue_comment>username_1: This depends on whether the output is a continuous or discrete variable. If the output variable is discrete (there are a finite number of possibilities that it can be), as in a classification task (such as this one, where you are trying to place the input into one of 5 categories), you want to use one output neuron for each class. If the variable is continuous, however, you should only use one output neuron. This is because of how the training process works. When training, your network successively makes adjustments to try and reduce the errors. These adjustments are made in the direction of the error – so if the network predicts a value which is too high then the network's weights are adjusted to make the output value lower. On the other hand, if the network predicts a value which is too low the network's weights are adjusted to make the output bigger. If you have output neurons labeled 0 to 4 and a training sample with some input value and a target prediction of 2, then the neural network will make its prediction.
Once the prediction has been made, each neuron is adjusted individually – in this case neuron 2 will be adjusted in the direction of the correct probability and all the other neurons will be adjusted in the direction of the incorrect probability. In this way you have one prediction for each class. Backpropagation is about error attribution, and using multiple neurons allows the error of the neural network to be better attributed, as the neural network can adjust each neuron individually, and thus adjust the required probabilities for each class. Using a single neuron with a sigmoid activation function would be less good, as the sigmoid function saturates values close to 0 and 1, so there would be an unnatural bias towards category 0 and category 4 over the other categories. The neural network could learn to overcome this, but it would take more time. Upvotes: 1 <issue_comment>username_2: In the case of one output neuron, you don't have to use sigmoid. As username_1 Aldridge suggested, it would cause a tendency to output 0 or 1. What I normally do is that I set the layer before the output layer to sigmoid or tanh so it won't output ridiculously off numbers, and set the output layer to linear. There would be cases where it outputs something like 1.5, but over time that disappears. Hope it helps :) Upvotes: 0 <issue_comment>username_3: A somewhat large set of designs and set-ups can be made to learn a rating function for a given set of labeled examples. If the objectives are simplicity and effectiveness (accuracy, reliability, and speed), then a third option should be considered. The requirement in the question includes, "Outputs an integer rating 0 [through] 4 [inclusive]." For such a discrete result, the number of required output bits $b$ (where $s$ is the number of possible states and $I$ is the set of integers) is given as follows. $$\min\_b \, (b \in I \; \land \; b \ge \log\_2 s)$$ In this case, we require three bits of output.
$$s = 5 \quad \implies \quad b = 3$$ Note that with a similar configuration, ratings of 0 through 7 would also require only three bits of output. Either way, the output layer would likely be simplest and most efficient if its activation function were the binary step function. This removes the need for rounding after it is applied. The output layer would then provide a binary value indicating the rating. The goal of learning would be to reduce the error between the feed-forward output and the binary value of the associated label for each example. Previous layer(s) could be sigmoid or a more contemporary and less problematic continuous activation function like ISRLU. Since the engineer can select the error function used by the learning framework to accept any input range and distribution, normalizing the labels for supervised learning is primarily employed to remove redundancy from time and resource consuming operations required to compute error. With ratings as the labels, unless the distribution of ratings is skewed and the data set is such that learning time is excessive, normalization may not be necessary. If it is, it would likely be because improving the label distribution in advance (requiring floating point input to the error function and removing skew) would reduce learning time. The other two approaches introduce unnecessary complexities mentioned in context above. A consequence of removing complexities without adding impediments to convergence is more efficiency during learning and during execution after learning. Upvotes: 0
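As a concrete illustration of this third option, the five ratings can be encoded as 3-bit training targets, with a binary step applied to the network's three raw outputs at inference time (a sketch; the network itself is elided and the `encode`/`decode` helpers are illustrative names):

```python
import numpy as np

BITS = 3  # ceil(log2(5)) bits cover ratings 0..4

def encode(rating):
    # Rating -> 3-bit target vector for supervised learning (LSB first).
    return np.array([(rating >> i) & 1 for i in range(BITS)], dtype=float)

def decode(outputs, threshold=0.0):
    # Binary step on the three raw outputs, then reassemble the integer.
    bits = (np.asarray(outputs) > threshold).astype(int)
    return int(sum(b << i for i, b in enumerate(bits)))

for r in range(5):
    raw = encode(r) * 2.0 - 1.0  # pretend pre-step network outputs: +1 or -1
    assert decode(raw) == r      # round-trips every rating
print([encode(r).tolist() for r in range(5)])
```

One caveat worth noting: with this encoding, an error in a single output bit can flip the rating by up to 4 (e.g. bit 2), whereas with five softmax outputs a single misranked logit shifts the prediction by at most one class choice.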
2019/08/14
<issue_start>username_0: I'm a beginner of RL and currently trying to make a DQN agent that can act optimally in a simple situation. In the situation, the agent should decide at what rate to charge or discharge the electrical battery, which is equivalent to buying the electrical energy or selling it, for making money by means of arbitrage. So the action space is for example [-6, -4, -2, 0, 2, 4, 6] kW. The negative numbers mean discharging, and the positive numbers mean charging. In the case that the battery is empty, discharging actions (-6, -4, -2) should be forbidden. Otherwise, in the case that the battery is fully charged, charging actions (2, 4, 6) should be forbidden. To deal with this issue, I tried two approaches: * In every step, renewing the action space, which means masking the forbidden actions. * Giving extreme penalties for selecting forbidden actions (in my case the penalty was -9999) But none of them worked. For the first approach, the training curve (the cumulative rewards) didn't converge. For the second approach, the training curve converged, but the charging/discharging results are not reasonable (almost random results). I think in the second approach, a lot of forbidden actions are selected randomly by the epsilon-greedy policy, and these samples are stored in the experience memory, which negatively affects the result. For example: the state is defined as [p\_t, e\_t] where p\_t is the market price for selling (discharging) the battery, and e\_t is the amount of energy left in the battery. When state = [p\_t, e\_t = 0], and discharging action (-6), which is a forbidden action in this state, is selected, the next state is [p\_t, e\_t = -6]. And then the next action (2) is selected, then the next state is [p\_t, e\_t = -4] and so on. In this case the < s, a, r, s' > samples are: < [p\_t, 0], -6, -9999, [p\_t+1, -6] > < [p\_t, -6], 2, -9999, [p\_t+1, -4] > ...
These are not expected to be stored in the experience memory because they are not desired samples (e\_t should be more than zero). I think this is why the desired results didn't come out. So what should I do? Please help.<issue_comment>username_1: This depends on whether the output is a continuous or discrete variable. If the output variable is discrete (there are a finite number of possibilities that it can be), as in a classification task (such as this one, where you are trying to place the input into one of 5 categories), you want to use one output neuron for each class. If the variable is continuous, however, you should only use one output neuron. This is because of how the training process works. When training, your network successively makes adjustments to try and reduce the errors. These adjustments are made in the direction of the error – so if the network predicts a value which is too high then the network's weights are adjusted to make the output value lower. On the other hand, if the network predicts a value which is too low, the network's weights are adjusted to make the output bigger. If you have output neurons labeled 0 to 4 and a training sample with some input value and a target prediction of 2, then the neural network will make its prediction. Once the prediction has been made, each neuron is adjusted individually – in this case neuron 2 will be adjusted in the direction of the correct probability and all the other neurons will be adjusted in the direction of the incorrect probability. In this way you have one prediction for each class. Backpropagation is about error attribution, and using multiple neurons allows the error of the neural network to be better attributed, as the neural network can adjust each neuron individually, and thus adjust the required probabilities for each class.
Using a single neuron with a sigmoid activation function would be less good, as the sigmoid function saturates values close to 0 and 1, so there would be an unnatural bias towards category 0 and category 4 over the other categories. The neural network could learn to overcome this, but it would take more time. Upvotes: 1 <issue_comment>username_2: In the case of one output neuron, you don't have to use sigmoid. As username_1 suggested, it would cause a tendency to output 0 or 1. What I normally do is set the layer before the output layer to sigmoid or tanh so it won't output ridiculously off numbers, and set the output layer to linear. There would be cases where it outputs something like 1.5, but over time that disappears. Hope it helps :) Upvotes: 0 <issue_comment>username_3: A somewhat large set of designs and set-ups can be made to learn a rating function for a given set of labeled examples. If the objectives are simplicity and effectiveness (accuracy, reliability, and speed), then a third option should be considered. The requirement in the question includes, "Outputs an integer rating 0 [through] 4 [inclusive]." For such a discrete result, the number of required output bits $b$ (where $s$ is the number of possible states and $I$ is the set of integers) is given as follows. $$\min\_b \, (b \in I \; \land \; b \ge \log\_2 s)$$ In this case, we require three bits of output. $$s = 5 \quad \implies \quad b = 3$$ Note that, with a similar configuration, ratings of 0 through 7 would also require only three bits of output. Either way, the output layer would likely be simplest and most efficient if its activation function were a binary step function. This removes the need for rounding after it is applied. The output layer would then provide a binary value indicating the rating. The goal of learning would be to reduce the error between the feed-forward output and the associated binary value of the label for each example.
Previous layer(s) could be sigmoid or a more contemporary and less problematic continuous activation function like ISRLU. Since the engineer can select the error function used by the learning framework to accept any input range and distribution, normalizing the labels for supervised learning is primarily employed to remove redundancy from time- and resource-consuming operations required to compute error. With ratings as the labels, unless the distribution of ratings is skewed and the data set is such that learning time is excessive, normalization may not be necessary. If it is, it would likely be because improving the label distribution in advance (requiring floating-point input to the error function and removing skew) would reduce learning time. The other two approaches introduce unnecessary complexities mentioned in context above. A consequence of removing complexities without adding impediments to convergence is more efficiency during learning and during execution after learning. Upvotes: 0
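The action-masking approach mentioned in the original question (renewing the action space at every step) can be sketched as follows. This is an illustrative NumPy sketch, not the asker's code; the battery capacity and the Q-values are assumed values:

```python
import numpy as np

ACTIONS = np.array([-6, -4, -2, 0, 2, 4, 6])  # kW; negative = discharge
CAPACITY = 10  # kWh, an assumed battery capacity

def valid_action_mask(energy):
    # An action is valid if it keeps the stored energy within [0, CAPACITY].
    next_energy = energy + ACTIONS
    return (next_energy >= 0) & (next_energy <= CAPACITY)

def masked_greedy_action(q_values, energy):
    # Pick the argmax Q-value among valid actions only;
    # forbidden actions are set to -inf so they can never be selected.
    q = np.where(valid_action_mask(energy), q_values, -np.inf)
    return int(np.argmax(q))

q = np.array([5.0, 1.0, 0.0, 0.0, 2.0, 0.5, 0.1])
print(ACTIONS[masked_greedy_action(q, energy=0)])   # 2: discharging is masked out when empty
print(ACTIONS[masked_greedy_action(q, energy=10)])  # -6: charging is masked out when full
```

The same mask would also be applied during epsilon-greedy exploration, so that forbidden transitions never reach the replay buffer.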
2019/08/14
2,083
7,621
<issue_start>username_0: I'm new to NN. I am trying to understand some of its foundations. One question that I have is: *why the **derivative** of an activation function is important (not the function itself), and why it's the derivative which is tied to how the network performs learning*? For instance, when we say a constant derivative isn't good for learning, what is the intuition behind that? Is the activation function somehow like a **hash function** that needs to well differentiate small variance in inputs?<issue_comment>username_1: If what you are asking is what is the *intuition* for using the [derivative](https://en.wikipedia.org/wiki/Derivative) in backpropagation learning, instead of an in-depth mathematical explanation: Recall that the derivative tells you a function's sensitivity to change with respect to a change in its input. A high (absolute) value for the derivative at a certain point means that the function is very *steep*, and a small change in input may result in a drastic change in its output; conversely, a low absolute value means little change, so not *steep* at all, with the extreme case that the function is constant when the derivative is zero. Training a neural network essentially amounts to an optimization problem where one wants to minimize a certain value, in this case the error produced by the network on the given training examples. Backpropagation learning can be viewed as a case of [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent) (the inverse of hill climbing). If for a moment we assume that your input is only 2-dimensional (just for illustration, the mathematics of course also work for higher dimensions), you could imagine the error function as a landscape with hills, mountains, valleys, ridges etc. You are standing at a high point and want to get down as far as possible. 
Gradient descent means that, in discrete steps, you always walk down in the direction that has the *steepest* slope downwards from where you are currently standing, until you eventually reach a (local) minimum. In order to determine where that steepest slope is, you need the derivative of the activation function. Basically, you want to sort out how much each unit in your network contributes to an error, and adjust in the direction that contributes the most. **Edit**: Regarding constant values for a derivative, in the landscape metaphor it would mean that the gradient is the same no matter where you are, so you'll always go in the same direction and never reach an optimum. However, multi-layer networks with linear activation functions are kind of beside the point anyhow when you consider that each cell computes a linear combination of its inputs, which then is again a linear function, so the output of the last layer will ultimately be a linear function of the inputs at the first layer. That is to say, anything you can do with a multi-layer net with linear activation functions, you could also achieve with just a single layer. Upvotes: 3 <issue_comment>username_2: The basic (and usual) algorithm used to update the weights of the artificial neural network (ANN) is an iterative, [numerical](https://en.wikipedia.org/wiki/Numerical_analysis) [optimization algorithm](https://en.wikipedia.org/wiki/Mathematical_optimization), called [*gradient descent*](https://en.wikipedia.org/wiki/Gradient_descent), which is based on and requires the computation of the derivative of the function you want to find the minimum of.
If the function you want to find the minimum of is [multivariable](https://en.wikipedia.org/wiki/Multivariable_calculus), then, rather than the derivative, gradient descent requires the [gradient](https://en.wikipedia.org/wiki/Gradient), which is a vector where the $i$th element contains the *partial* derivative of the function with respect to the $i$th variable. Hence the name *gradient descent*, where the derivative of a function of one variable can be considered the gradient of the function. In the case of ANNs, we usually have a loss function that we want to minimize: for example, the mean squared error (MSE). Therefore, in order to apply gradient descent to find the minimum of the MSE, we need to find the derivative or, more precisely, the gradient of the MSE. To do this, back-propagation (an algorithm based on the [chain rule](https://en.wikipedia.org/wiki/Chain_rule)) is often used, given that the MSE is a function of the ANN, which is a composite function of multiple non-linear functions, the activation functions, whose main purpose is thus to introduce non-linearity, or, in other words, to make the ANN powerful. Given that the MSE is a function of the parameters of the ANN, we need to find the partial derivative of the MSE with respect to all parameters of the ANN. In this process, we will also need to find the derivatives of the activation functions that each neuron applies to its weighted combination of inputs: *to fully see this, you will need to learn the details of back-propagation!* Hence the importance of the derivatives of the activation functions. A constant derivative would always give the same *learning signal*, independently of the error, but this is not desirable.
*To fully understand all these statements, I recommend you learn about back-propagation and gradient descent in detail, which requires a little bit of effort!* Upvotes: 2 <issue_comment>username_3: Consider a dataset $\mathcal{D}=\{x^{(i)},y^{(i)}:i=1,2,\ldots,N\}$ where $x^{(i)}\in\mathbb{R}^3$ and $y^{(i)}\in\mathbb{R}$ $\forall i$. The goal is to fit a function that best explains our dataset. We can fit a simple function, as we do in **linear regression**. But that's what's different about neural networks: we fit a complex function, say: $\begin{align}h(x) & = h(x\_1,x\_2,x\_3)\\ & =\sigma(w\_{46}\times\sigma(w\_{14}x\_1+w\_{24}x\_2+w\_{34}x\_3+b\_4)+w\_{56}\times\sigma(w\_{15}x\_1+w\_{25}x\_2+w\_{35}x\_3+b\_5)+b\_6)\end{align}$ where $\theta = \{w\_{14},w\_{24},w\_{34},b\_4,w\_{15},w\_{25},w\_{35},b\_5,w\_{46},w\_{56},b\_6\}$ is the set of the respective coefficients we have to determine such that we minimize: $$J(\theta) = \frac{1}{2}\sum\_{i=1}^N (y^{(i)}-h(x^{(i)}))^2$$ The above optimization problem can be easily solved with ***gradient descent***. Just initialize $\theta$ with random values and, with a proper learning rate $\eta$, update as follows till convergence: $$\theta:=\theta-\eta\frac{\partial J}{\partial \theta}$$ In order to get the gradients, we express the above function as a neural network as follows: [![enter image description here](https://i.stack.imgur.com/rf7db.png)](https://i.stack.imgur.com/rf7db.png) Let's calculate the gradient, say w.r.t. $w\_{14}$.
$$\frac{\partial J}{\partial w\_{14}} = \sum\_{i=1}^N \Big[\big(h(x^{(i)})-y^{(i)}\big)\frac{\partial h(x^{(i)})}{\partial w\_{14}}\Big]$$ Let $p(x) = w\_{14}x\_1+w\_{24}x\_2+w\_{34}x\_3+b\_4$, and let $q(x) = w\_{46}\times\sigma(p(x))+w\_{56}\times\sigma(w\_{15}x\_1+w\_{25}x\_2+w\_{35}x\_3+b\_5)+b\_6$ $\therefore \frac{\partial h(x)}{\partial w\_{14}} = \frac{\partial h(x)}{\partial q(x)}\times\frac{\partial q(x)}{\partial p(x)}\times\frac{\partial p(x)}{\partial w\_{14}} = \frac{\partial\sigma(q(x))}{\partial q(x)}\times\frac{\partial\sigma(p(x))}{\partial p(x)}\times\frac{\partial p(x)}{\partial w\_{14}}$ We see that the derivative of the activation function is important for getting the gradients and so for the learning of the neural network. A constant derivative will not help in the gradient descent and we won't be able to learn the optimal parameters. Upvotes: 3
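The activation derivative that appears in the chain rule above can be made concrete for the sigmoid, whose analytic derivative is $\sigma'(x) = \sigma(x)(1-\sigma(x))$. A minimal sketch verifying it against a finite-difference approximation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(x):
    # Analytic derivative used during back-propagation.
    s = sigmoid(x)
    return s * (1.0 - s)

# Central-difference check: sigma'(x) should match (f(x+h) - f(x-h)) / 2h.
x, h = 0.3, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
print(abs(sigmoid_prime(x) - numeric) < 1e-8)  # True
```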
2019/08/15
2,078
7,717
<issue_start>username_0: I read these comments from Judea Pearl saying we don't have causality, physical equations are symmetric, etc. But the conditional probability is clearly not symmetric and captures directed relationships. How would Pearl respond to someone saying that conditional probability already captures all we need to show causal relationships?<issue_comment>username_1: > > But the conditional probability is clearly not symmetric and captures directed relationships. > > > One needs to consider the kinds of directed relationships that are captured by conditional probability. It surely does capture *some* kind of association or dependence which could be directed. At the same time, it is not right to say that it *surely* captures the causal relationships. Let: Sun rises = $A$, Rooster crows = $B$. Then $P(A \mid B)$ is bound to be very high, but it does not mean that the rooster's crowing causes the sunrise. > > How would Pearl respond to someone saying that conditional probability already captures all we need to show causal relationships? > > > He will ask him to go back to school. Upvotes: 2 <issue_comment>username_2: *Why isn't conditional probability sufficient to describe causality?* Suppose that, when the barometric pressure, in a certain region, drops below a certain level, two things happen: 1. the height of the column of mercury in your barometer drops below a certain level 2. a storm occurs We may be tempted to model these relationships with the following graphical model, where each directed edge represents a causal relationship, so, for example, the drop in barometric pressure causes the storm.
[![enter image description here](https://i.stack.imgur.com/dmz3j.png)](https://i.stack.imgur.com/dmz3j.png) However, this graphical (and causal) model is likely wrong (and unintuitive), given that the drop in barometric pressure is likely [only correlated with the storm, so it is not the cause of the storm](https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation). *How can we see whether the drop in barometric pressure is or is not the cause of the storm?* We can compare the probabilities $P(A \mid \text{do}(B))$ and $P(A \mid B)$, where $A$ is the event "a storm occurs" and $B$ is the event "drop in barometric pressure". *What does $\text{do}(B)$ mean?* It means that we force the event $B$ to occur, that is, we **force** the drop in barometric pressure to occur. *Intuitively, what, then, is the difference between $P(A \mid \text{do}(B))$ and $P(A \mid B)$?* In the case of $P(A \mid \text{do}(B))$, we force the event $B$ to always occur. In the case of $P(A \mid B)$, we only and passively look at the cases of event $A$ when event $B$ occurs (without thus forcing $B$ to occur). We now know the difference between $P(A \mid \text{do}(B))$ and $P(A \mid B)$. *However, how does this help us to understand that $B$ is not the cause of $A$?* If $B$ were a cause of $A$, then, if we forced $B$ always to occur, the probability of $A$ should also change accordingly. However, imagine that we are (magically) able to drop the barometric pressure: if the probability of $A$ does not change accordingly (in this case, if it does not increase), then the storm is not an effect of the drop in barometric pressure. To conclude, Judea Pearl would say that **do**-operators (or *interventions*) are required to analyze causal relationships. The article [Probabilistic Causation](https://plato.stanford.edu/entries/causation-probabilistic/#Inte) by the Stanford Encyclopedia of Philosophy gives a good overview of the (probabilistic) causation (or causality) field.
In particular, have a look at section 3, which describes causal modeling (according to Pearl). Causal modeling and inference actually involve several nontrivial concepts that require some time to get familiar with, such as interventions (or do-operations), several basic causal relationships (such as [forks, chains and colliders](https://en.wikipedia.org/wiki/Causal_model)), d-separation or Bayesian networks. Upvotes: 0 <issue_comment>username_3: Perhaps the shortest answer to this question is that Bayes' Theorem itself allows us to easily change the direction of a conditional probability: $$ P(A|B) = \frac{P(B|A)P(A)}{P(B)} $$ So if you have $P(B|A)$, $P(A)$, and $P(B)$, you can determine $P(A|B)$, and similarly you can determine $P(B|A)$ from $P(A|B)$, $P(B)$ and $P(A)$. Just by looking at $P(B|A)$ and $P(A|B)$, it is therefore impossible to tell what the causal direction is (if any). In fact, probabilistic inference usually works the other way round: When there is a *known* causal relation, say from *diseases* $A$ to *symptoms* $B$, we usually have $P(B|A)$, and are interested in the diagnostic reasoning task of determining $P(A|B)$ from that. (The only other thing we need for that is the prior probability $P(A)$ since $P(B)$ is just a normalization factor.) Upvotes: 4 [selected_answer]<issue_comment>username_4: About Symmetry -------------- What Pearl means by *being not symmetric* is: $A=B$ and $B=A$ are exactly identical and lead to the same result in a non-causal scientific framework. For example, consider this very simple set of equations: $$ \begin{align} Z = & \epsilon\_z \\ X = & Z+\epsilon\_X \\ Y = & 2X+Z + \epsilon\_y \end{align} $$ From an algebraic point of view, this is a set of 3 equations and 3 unknowns (consider the error terms to be known). You can shuffle the equations, change the LHS and RHS of the equations, add or subtract them. In fact these actions are actually appreciated for solving a linear equation system, right?!
So the system above is identical to this: $$ \begin{align} \epsilon\_z = & Z \\ X = & Y - 2Z - \epsilon\_Y - \epsilon\_X \\ Y = & 2X+Z + \epsilon\_y \\ \end{align} $$ But I brought you this example because the first one is the set of equations for a very fundamental causal structure called the *confounding* or *common cause* structure, where $X$ is the exposure, or treatment, $Y$ is the outcome, and $Z$ is the parent of both $X$ and $Y$. The order of the equations and the RHS/LHS variables in these **structural equation models** actually means something. First you calculate $Z$, then go for $X$, then calculate $Y$. In this case, the second system is pointing to a completely different causal structure (or, to be honest, it does not even look like a legitimate structural equation model). About sufficiency of Probability Theory for Causal Inference ------------------------------------------------------------ First I would like to say this would be a very ironic question to ask Pearl, as he also mentioned in one of his interviews, because he has had a significant contribution to the realm of probability theory with the *Bayesian network* and Bayesian inference frameworks! It is as if he is arguing against himself. But he is, for good reason. *Why probability is not enough* can be and has been answered with formal proofs, equations and explanations. But there are a lot of examples that will draw your attention to this truth. I recommend you to read about *Simpson's Paradox*. It is a great example of how probability theory is incapable of causal inference. The fact that probability is not enough only reflects the idea that *correlation is not necessarily causation*, and that is true. Again, read about *spurious correlations* and you will get it. Just think of this funny example: During the year, crime rate and ice cream sales are highly correlated, THUS we must ban ice cream sales to control the crime rate.
And the problem with this dumb inference is that we have not taken into account the common cause, *heat* or *summer*, which accounts for the perceived correlation. Upvotes: 0
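The barometer example from the earlier answer can be simulated to see the gap between conditioning and intervening. This is an illustrative Monte Carlo sketch; the probabilities are made up purely for the illustration:

```python
import random

random.seed(0)

def sample(do_barometer_low=None):
    # A pressure drop causes both a low barometer reading and a storm.
    pressure_low = random.random() < 0.3
    if do_barometer_low is None:
        barometer_low = pressure_low        # passively observed reading
    else:
        barometer_low = do_barometer_low    # intervention: force the reading
    storm = random.random() < (0.8 if pressure_low else 0.1)
    return pressure_low, barometer_low, storm

N = 100_000
obs = [sample() for _ in range(N)]
p_storm_given_low = (sum(s for _, b, s in obs if b) /
                     max(1, sum(1 for _, b, _ in obs if b)))
do = [sample(do_barometer_low=True) for _ in range(N)]
p_storm_do_low = sum(s for _, _, s in do) / N

print(round(p_storm_given_low, 2))  # ~0.8: observing a low reading predicts a storm
print(round(p_storm_do_low, 2))     # ~0.31: forcing the reading does not cause storms
```

The observational probability is high because the reading is a proxy for the pressure, while the interventional probability stays at the marginal storm rate, which is how the do-operator exposes a non-causal edge.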
2019/08/16
2,100
8,087
<issue_start>username_0: I am a newbie in the fantastic AI world; I have started learning recently. After a while, my understanding is, we need to feed in a tremendous amount of data to train one or many models. Once the training is complete, we could take out the trained models and "plug them in" to any other programming language to detect things. So my questions are: **1. What are the trained models? are they algorithms or a collection of parameters in a file?** **2. What do they look like? e.g. file extensions** **3. Especially, I want to find the trained models for detecting birds (the bird types do not matter). Are there any platforms for open-source/free online trained AI models??** Thank you!<issue_comment>username_1: This answer applies to the Machine Learning (ML) part of AI, as that seems to be what you are asking about. Please bear in mind that AI is still a broad church, including many other techniques than ML. ML, including neural networks for deep learning, and Reinforcement Learning (RL) is only a subset of AI - some AI techniques are more focused on the algorithm than parameters. > > 1. What are the trained models? are they algorithms or a collection of parameters in a file? > > > In ML, the usual process is to feed data into a parametric function (e.g. a neural network) and alter the parameters of it to "fit" the data. The main output of this is a collection of parameters and hyperparameters that describe the parametric function. So 90% of the time, when discussing the "trained model", it means the same thing as the collection of parameters. However, those parameters are of limited use without a library that can re-create the function from them. Parameters will be saved from a specific library and can be loaded back into that library easily. It is also possible for libraries to read or convert from models saved from other libraries, much like how different spreadsheet programs can read each others' files. > > 2. What do they look like? e.g.
file extensions > > > This varies a lot, depending on which library was used. It is not possible to make a general statement. For example, [Tensorflow can save variables to a "Checkpoint" file](https://www.tensorflow.org/guide/saved_model) with a `.ckpt` extension, but it can get more sophisticated depending on how much of the model you want to export, and full models with whole structure will contain more than just the variables and have the `.pb` extension. > > 3. Especially, I want to find the trained models for detecting birds (the bird types do not matter). Are there any platforms for open-source/free online trained AI models?? > > > There are a few places where you can find selections of pre-trained models. One such place is [Tensorflow's Model Zoo](https://github.com/tensorflow/models) and you might be interested in [Tensorflow detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md). Other frameworks may also provide example code. For instance [Caffe also has a "Model Zoo"](https://caffe.berkeleyvision.org/model_zoo.html) (searching Model Zoo is a good starting strategy). If you are working at the level of collecting model parameters and want to run these models yourself, you will need to learn a bit about each library, what language is used to work with it, and maybe follow some tutorial about how to use it. A few models will be packaged up with working scripts to use from the command line, but many are not and may take some time and effort to get working. When you have a specific detection target, you may be disappointed to find models that don't quite match what you want. For image classifying, it is common if you have a specialist need to take a pre-existing general model that has been trained on a large dataset for weeks, and then "fine tune" it with your own image dataset for your purpose. Most NN libraries will have tutorials and examples of this fine-tuning process.
Upvotes: 4 [selected_answer]<issue_comment>username_2: > > 1. What are the trained models? are they algorithms or a collection of parameters in a file? > > > "Model" could refer to the algorithm with or without a set of trained parameters. If you specify "trained model", the focus is on the parameters, but the algorithm is implicitly part of that, since without the algorithm, the parameters are just an arbitrary set of numbers. > > 2. What do they look like? e.g. file extensions > > > That very much depends on both the algorithm you're using *and* the specific implementation. A few simple examples might help clarify matters. Let's suppose that the problem we're trying to learn is the exclusive or (XOR) function:

```
a | b | a XOR b
--+---+---------
0 | 0 |    0
0 | 1 |    1
1 | 0 |    1
1 | 1 |    0
```

--- First, let's use a 2-layer neural net to learn it. We'll define our activation function to be a simple step function: $ f(x) = \begin{cases} 1 & \text{if } x > 0.5 \\ 0 & \text{if } x \le 0.5 \end{cases} $ (This is actually a terrible activation function for real neural nets since it's non-differentiable, but it makes the example clearer.) Our model is: $h\_0 = f(1\cdot a+1\cdot b + 0)\\ h\_1 = f(0.5\cdot a + 0.5\cdot b + 0)\\ \,\;y = f(1\cdot h\_0 - 1\cdot h\_1 + 0)$ Each step of this essentially draws a hyperplane and evaluates to 1 if the input is on one side of the hyperplane and 0 otherwise. In this particular case, h\_0 tells us if either a or b is true. h\_1 tells us if they're both true, and y tells us if exactly one of them is true, which is the exact definition of the XOR function. Our parameters are the coefficients and biases (the offset added at the end of each expression): $ \begin{bmatrix} 1 & 1 & 0 \\ 0.5 & 0.5 & 0 \\ 1 & -1 & 0 \\ \end{bmatrix}$ They can be stored in a file in any way we want; all that matters is that the code that stores them and the code that reads them back agree on the format. --- Now let's solve the same problem using a decision tree.
For these, we traverse a tree, and at every node, ask a question about the input to decide which child to visit next. Ideally, each question will divide the space of possibilities exactly in half. Once we reach a leaf node, we know our answer. In this diagram, we visit the right child iff the expression is true.

```
     a+b=2
     /   \
 a+b=0    0
  /  \
 1    0
```

In this case, the model and parameters are harder to separate. The only part of the model that isn't learned is "It's a tree". The expressions in each interior node, the structure of the tree, and the value of the leaf nodes are all learned parameters. As with the weights from the neural network, we can store these in any format we want to. --- Both methods are learning the same problem, and actually find basically the same solution: a XOR b = (a OR b) AND NOT (a AND b). But the nature of the mathematical model we use depends on the method we choose, the parameters depend on what we train it on, the file format depends on the code we use to do it, and the line between model and parameter is fairly arbitrary; the math works out the same regardless of how we split it up. We could even write a program that tries different methods, and outputs a *program* that classifies inputs using the method that performed best. In this case, the model and parameters aren't separate at all. > > 3. Especially, I want to find the trained models for detecting birds (the bird types do not matter). Are there any platforms for open-source/free online trained AI models?? > > > I don't know of any pretrained models that specifically recognize birds, but I'm not in image-recognition, so that doesn't mean much. If you're not averse to training your own model (using existing code), I believe the [ImageNet](http://image-net.org/index) dataset includes birds. AlexNet and LeNet would probably be good starting points for the model.
Most, if not all, of the state-of-the-art image recognition models are based on convolutional networks, so you'll need a decent GPU to run them. Upvotes: 2
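The two-layer XOR network described in the answer above can be written out and checked directly (a minimal sketch following the given equations):

```python
def f(x):
    # Step activation from the answer: 1 if x > 0.5, else 0.
    return 1 if x > 0.5 else 0

def xor_net(a, b):
    h0 = f(1.0 * a + 1.0 * b + 0.0)    # fires when a OR b
    h1 = f(0.5 * a + 0.5 * b + 0.0)    # fires when a AND b
    return f(1.0 * h0 - 1.0 * h1 + 0.0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))
# 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```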
2019/08/16
517
2,294
<issue_start>username_0: I've been reading different papers regarding graph convolution and it seems that they come in two flavors: spatial and spectral. From what I can see the main difference between the two approaches is that for spatial you're directly multiplying the adjacency matrix with the signal whereas for the spectral version you're using the Laplacian matrix. Am I missing something, or are there any other differences that I am not aware of?<issue_comment>username_1: After I read multiple explanations from different sources I think I found the main difference between the two methods. Implementation-wise the only difference is the matrix that you're multiplying the signal with (Laplacian/adjacency matrix). But by using the Laplacian, you're encoding the graph structure (in-out degree of each node) which dictates how a signal should "diffuse" in the network. Upvotes: 2 <issue_comment>username_2: **Spectral Convolution** In a spectral graph convolution, we perform an eigendecomposition of the Laplacian matrix of the graph. This eigendecomposition helps us in understanding the underlying structure of the graph with which we can identify clusters/sub-groups of this graph. This is done in the Fourier space. An analogy is PCA, where we understand the spread of the data by performing an eigendecomposition of the feature matrix. The only difference between these two methods is with respect to the eigenvalues. Smaller eigenvalues explain the structure of the data better in spectral convolution, whereas it's the opposite in PCA. ChebNet and GCN are some commonly used deep learning architectures that use spectral convolution. **Spatial Convolution** Spatial convolution works on the local neighbourhood of nodes and understands the properties of a node based on its k local neighbours. Unlike spectral convolution, which takes a lot of time to compute, spatial convolutions are simple and have produced state-of-the-art results on graph classification tasks.
GraphSAGE is a good example of spatial convolution. Additional references: <https://towardsdatascience.com/tutorial-on-graph-neural-networks-for-computer-vision-and-beyond-part-2-be6d71d70f49> <https://towardsdatascience.com/graph-convolutional-networks-for-geometric-deep-learning-1faf17dee008> Upvotes: 3
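The distinction drawn in the question, multiplying the signal by the adjacency matrix versus by the Laplacian, can be seen on a toy graph (an illustrative NumPy sketch; the graph is made up):

```python
import numpy as np

# Toy undirected graph on 4 nodes: a path 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))          # degree matrix
L = D - A                           # (unnormalized) graph Laplacian

x = np.array([1.0, 0.0, 0.0, 0.0])  # a signal on the nodes

print(A @ x)  # spatial view: each node sums its neighbours' values
print(L @ x)  # Laplacian view: degree-weighted value minus neighbour sum

# The Laplacian's eigendecomposition defines the graph Fourier basis
# that spectral methods (ChebNet, GCN) build on.
eigvals = np.linalg.eigvalsh(L)
print(np.round(eigvals, 3))
```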
2019/08/16
598
2,760
<issue_start>username_0: I am currently building a chatbot. What I have done so far is: collected possible questions/training data/files and created a model out of them using Apache OpenNLP; the model is able to predict all the questions that are in the training data but fails to predict for new questions. Instead of doing all the above, I can write a program that matches the question/words against training data and predicts the answer — what is the advantage of using Machine Learning algorithms? I have searched extensively about this and all I got was: in Machine Learning there is no need to change the algorithm, and the only change would be in the training data. But that is the case with programming too: the change will be in the training data.<issue_comment>username_1: In my view ML does not work very well for conversational AI systems. It is generally alright for intent recognition, so getting what the user wants if they ask a question ("I want to book a flight?", "What is the weather in London?"), but anything after that quickly becomes difficult to handle, especially multi-step conversations that go beyond simple question/answer pairs. My suggestion would be to plan possible dialogues out as flow charts (more like trees/graphs, as there can be multiple branches at any point), and then write a program that interprets the graph based on user input and gives appropriate replies. You will also want to have some conversational memory to keep track of any information the user has mentioned. That is also tricky to do in a ML system. For a very simple framework to start off with, have a look at [ELIZA](https://en.wikipedia.org/wiki/ELIZA). It's half a century old, but you can still use it as a starting point.
(Disclaimer: I work for a company that makes conversational AI systems) Upvotes: 2 <issue_comment>username_2: <NAME> in one of his online interviews once suggested that the conversational solution is a combination of massive machine learning and rules-based programming. The problem in the case of chat is that our expectations are very high, which dooms early solutions to failure. For the ML side we require large amounts of data, and while large amounts may be available, they are highly biased and unbalanced; they mostly focus on one area (a specialized context) and re-use the same sentence formulas over and over again, so the learning finds a comfortable corner-case solution and refuses to learn anything else. People are so predictable that they are an unhelpful source of raw data. One approach might be to use carefully constructed rules to generate the data that ML can learn from: rules that can guarantee broad contextual applicability and variation in sentence construction. Upvotes: 0
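The flow-chart approach username_1 describes can be sketched as a small graph interpreter plus a dict used as conversational memory; every node name, slot, and reply below is invented purely for illustration.

```python
# Hypothetical dialogue graph: each node has a reply, an optional memory slot
# the user's utterance fills, and a pointer to the next node (None = done).
GRAPH = {
    "start":       {"reply": "Where would you like to fly?", "slot": None,   "next": "destination"},
    "destination": {"reply": "On which date?",               "slot": "city", "next": "date"},
    "date":        {"reply": "Booking a flight to {city}!",  "slot": "date", "next": None},
}

def step(state, memory, user_text):
    node = GRAPH[state]
    if node["slot"]:                       # remember what the user just said
        memory[node["slot"]] = user_text
    return node["next"], node["reply"].format(**memory)

# One full (hypothetical) conversation:
memory, state = {}, "start"
state, reply = step(state, memory, "")         # bot asks for a destination
state, reply = step(state, memory, "London")   # bot asks for a date
state, reply = step(state, memory, "Friday")   # bot confirms the booking
```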
2019/08/18
565
2,431
<issue_start>username_0: I have developed face recognition algorithms by using pre-built libraries in Python and open CV. However, suppose if I want to make my own neural network algorithm for face recognition, what are the steps that I need to follow? I have just seen [<NAME>'s course videos](https://www.youtube.com/watch?v=PPLop4L2eGk&list=PLLssT5z_DsK-h9vYZkQkYNWcItqhlRJLN) (specifically, I watched 70 videos).
2019/08/19
1,260
4,843
<issue_start>username_0: There seems to be a lot of literature and research on the problems of stochastic gradient descent and catastrophic forgetting, but I can't find much on solutions to perform continual learning with neural network architectures. By continual learning, I mean improving a model (while using it) with a stream of data coming in (maybe after a partial initial training with ordinary batches and epochs). A lot of real-world distributions are likely to gradually change with time, so I believe that we should be able to train NNs in an online fashion. Do you know which are the state-of-the-art approaches on this topic, and could you point me to some literature on them?<issue_comment>username_1: What I understand from your question is that you are trying to avoid catastrophic forgetting while applying online learning. This problem should be addressed by implementing methods that reduce catastrophic forgetting across different tasks. At first glance it might seem that they don't apply, because it is the data that changes and not a particular task, but changing data results in a change of the task. Say your goal is to classify different breeds of dogs, and your online data-set morphs into excluding "Great Danes". After enough epochs, your neural network would forget about "Great Danes". The task still serves its purpose of classifying different breeds, but the task has still changed: it changed from recognizing "Great Danes" as a dog breed to not recognizing "Great Danes" as a dog breed. The weights changed to exclude them, but the methods I linked try to keep weights intact, even though they were not intended for the purpose of online learning. Just set the hyperparameters controlling these techniques to low values, as I believe your data won't change instantly but gradually over time, and you should be fine. The most obvious technique is storing information as you train.
This is called [pseudo-rehearsal](https://www.semanticscholar.org/paper/A-Strategy-for-an-Uncompromising-Incremental-Venkatesan-Venkateswara/ac7ea1a3050fe81a565b22b4bafdd5baa465be07). With this, at least you would be able to use stochastic gradient descent, but you need memory and resources as the data set grows. Then there was an attempt to reduce the impact of weight changes on old tasks, to keep some relevance to them: [Structural Regularization](https://www.pnas.org/content/pnas/114/13/3521.full.pdf). Later, the same community implemented [HAT](https://arxiv.org/pdf/1801.01423.pdf), which seems to keep some weights static while others adapt to new tasks. Upvotes: 3 [selected_answer]<issue_comment>username_2: > > Do you know which are the state-of-the-art approaches on this topic, and could you point me to some literature on them? > > > [This answer](https://ai.stackexchange.com/a/14306/2444) already mentions some of the approaches. More concretely, currently, the most common approaches to continual learning (i.e. learning with progressively more data while attempting to address the [*catastrophic forgetting* problem](https://ai.stackexchange.com/a/13293/2444)) are * **dynamic/changing topologies** approaches * **regularization** approaches * [**rehearsal**](https://ai.stackexchange.com/a/23832/2444) (or **pseudo-rehearsal**) approaches * **ensemble** approaches * **hybrid** approaches You can also take a look at [this answer](https://ai.stackexchange.com/a/24529/2444). If you are interested in an exhaustive overview of the *state of the art* (at least until 2019), you should read the paper [Continual lifelong learning with neural networks: A review](https://www.sciencedirect.com/science/article/pii/S0893608019300231) (2019, by Parisi et al.). Upvotes: 2 <issue_comment>username_3: There are lots of different approaches that try to avoid catastrophic forgetting in neural networks. It is impossible to summarize all contributions here.
However, in addition to the already mentioned techniques, there are **sparsity** approaches that try to disentangle internal representations of the network on different tasks or learning steps. Sparsity usually helps, but the network has to learn to use it, imposing a structural sparsity by construction is not enough. Also, you can leverage **bayesian** approaches, through which you can associate a confidence measure to each of your weights and use this measure to mitigate forgetting. Also, **meta-learning** can be employed to meta-learn a model which is robust to forgetting on different sequences of tasks. What I can suggest you in addition, is to take a look at [ContinualAI wiki](https://wiki.continualai.org/research.html), which maintains a list of updated publications classified by the type of Continual Learning strategy and tagged with additional information. (**Disclaimer**: I am a member of [ContinualAI association](https://www.continualai.org/)). Upvotes: 2
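The rehearsal/pseudo-rehearsal strategy mentioned in the answers above can be sketched as a fixed-size reservoir of past examples that gets mixed into each incoming batch; the buffer capacity and the sampling scheme here are illustrative choices, not prescribed by any of the linked papers.

```python
# Sketch of a rehearsal buffer: keep a uniform sample of the data stream
# (reservoir sampling) and replay a few stored examples with every new batch.
import random

class RehearsalBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        # Reservoir sampling: each stream element ends up stored with equal probability.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def replay_batch(self, new_batch):
        # Mix old examples into the batch to counteract forgetting.
        k = min(len(self.data), len(new_batch))
        return new_batch + random.sample(self.data, k)

buf = RehearsalBuffer(capacity=5)
for example in range(100):        # a toy stream of 100 examples
    buf.add(example)
mixed = buf.replay_batch(["new_1", "new_2"])
```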
2019/08/21
2,898
11,204
<issue_start>username_0: [![enter image description here](https://i.stack.imgur.com/7FnY6.png)](https://i.stack.imgur.com/7FnY6.png) I tried to create a simple model that receives an $80 \times 130$ pixel image. I only had 35 images and 10 test images. I trained this model for a binary classification task. The architecture of the model is described below. ``` conv2d_1 (Conv2D) (None, 80, 130, 64) 640 _________________________________________________________________ conv2d_2 (Conv2D) (None, 78, 128, 64) 36928 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 39, 64, 64) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 39, 64, 64) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 39, 64, 128) 73856 _________________________________________________________________ conv2d_4 (Conv2D) (None, 37, 62, 128) 147584 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 18, 31, 128) 0 _________________________________________________________________ dropout_2 (Dropout) (None, 18, 31, 128) 0 _________________________________________________________________ flatten_1 (Flatten) (None, 71424) 0 _________________________________________________________________ dense_1 (Dense) (None, 512) 36569600 _________________________________________________________________ dropout_3 (Dropout) (None, 512) 0 _________________________________________________________________ dense_2 (Dense) (None, 1) 513 ``` What could the oscillating training loss curve represent above? Why is the validation loss constant?<issue_comment>username_1: Try lowering the learning rate. Such a loss curve can be indicative of a high learning rate. Due to a high learning rate the algorithm can take large steps in the direction of the gradient and miss the local minima. 
Then it will try to come back to the minima in the next step and overshoot it again. You may also try switching to a momentum-based GD algorithm. Such a training loss curve can be indicative of a loss contour like in [this example](http://ruder.io/optimizing-gradient-descent/index.html#momentum), for which momentum-based GD methods are helpful. I noticed that you have a very small training set. You may have better luck with larger training data (~1000 examples) or using a pre-trained Conv network as a starting point. Upvotes: 3 <issue_comment>username_2: Overview ======== As has already been observed, your main problem, besides the training-related issues like fixing the learning rate, is that you have basically **no chance to learn such a big model with such a small dataset ... from scratch** So, focusing on the real problem, here are some techniques you could use * dataset augmentation * transfer learning + from a pretrained model + from the encoder stage of an autoencoder (last resort option before getting into more advanced topics) Dataset Augmentation ==================== Add to your dataset the transformations you want your classifier to learn to be invariant to Let's assume that * $I$ is an input image * $l$ its associated label * $f(I;\theta) \rightarrow I$ is a parametric transformation that affects appearance but not semantics, for example a rotation by an angle $\theta$ then you can augment your dataset by generating $\{I\_{\theta}, l\}$, a set of transformed (e.g.
rotated) images associated with the same $l$ label Transfer Learning ================= The fundamental idea of transfer learning is to re-use a NN which has been trained to solve one task in order to solve other tasks, retraining only a selected subset of the weights It means using a pre-trained convolutive backend, the part of the model with Conv2D and Pooling, and training the dense layers with dropout only (but you should still probably think about reducing the dimensionality there) More formally, think about representing your CNN classifier as follows * $f\_{C}(I; \theta\_{C})$ : Convolutive Processing on Input Image + it is the part of the CNN composed of `Conv2D` and `MaxPooling2D` layers + $\theta\_{C}$ is the convolutive learnable weights set * $b = f\_{C}(I; \theta\_{C})$ : Bottleneck Feature Representation + it is the result of the `Flatten` layer * $f\_{D}(b; \theta\_{D})$ : Dense Processing + it is the part of the model composed of `Dense` layers + $\theta\_{D}$ is the dense learnable weights set The idea is to pick $\theta\_{C}$ from a training performed on another dataset, bigger than your current one, and keep it fixed while training on your task This means reducing the number of parameters to be trained; however, beware that the dense layers account for most of the weights, as you can also see from your model summary, which means you should also focus on reducing that number, for example by reducing the bottleneck feature tensor size Transfer Learning from Pre-Trained Model ---------------------------------------- For example, if your actual goal was to perform binary classification on some kind of MNIST-like data then you could use a convolutive backend from a CNN which has been pre-trained on the MNIST 0..9 classification task, or you can train it yourself; what is important is that the $\theta\_{C}$ weights will be learned from a MNIST dataset, which is much bigger than yours, even if the task is (slightly) different.
Furthermore, in case of MNIST-like data, please consider if you really need your full `80 x 130` resolution: your input tensor, considering I can deduce from your model summary that it is grayscale (no color), needs to be $(80,130,1)$, or you could rescale to the `28 x 28` MNIST resolution so you work with a smaller $(28,28,1)$ tensor My suggestion is to start from an architecture like this [MNIST Keras Model](https://gist.github.com/mjbhobe/cf895231aab8fa45a2a23da1351d185d#file-mnist_keras_model-py) as * it has a bottleneck representation of 64 which could be enough for your task and * also suggesting to remove the first dense layer so as to significantly reduce $\theta\_{D}$, the number of learnable parameters hence going for something like
```
model = Sequential()
# add Convolutional layers
model.add(Conv2D(filters=32, kernel_size=(3,3), activation='relu',
                 padding='same', input_shape=(28, 28, 1)))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
# output layer
model.add(Dense(1, activation='sigmoid'))
```
then compile the model with `binary_crossentropy` loss and maybe start by giving a try to the `adam` optimizer Transfer Learning from Autoencoder ---------------------------------- If your data is so special you can't find any big enough and similar enough dataset to use this strategy, and you do not come up with any transformation you could use to perform dataset augmentation, then without getting into advanced things you could try to play one last card: use an Autoencoder to learn a compressed representation aimed at reconstructing the original image, and perform transfer learning with the encoder only For example, again under the assumption of working with a $(28,28,1)$ tensor, you could start with an architecture like
the following one
```
def build_ae(input_img):
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)  # (28,28,16)
    encoded = MaxPooling2D((8, 8), padding='same')(x)                     # (4,4,16)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)     # (4,4,8)
    x = UpSampling2D((4, 4))(x)                                           # (16,16,8)
    # Note: convolving without padding='same' in order to get a w-2 and h-2
    # dimensionality reduction, so that the following upsampling can lead to
    # the desired 28x28 spatial resolution
    x = Conv2D(16, (3, 3), activation='relu')(x)                          # (14,14,16)
    x = UpSampling2D((2, 2))(x)                                           # (28,28,16)
    decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)  # (28,28,1)
    autoencoder = Model(input_img, decoded)
    return autoencoder
```
In this case, the full model has 2633 weights, but the encoding stage consists only of Conv2D+Relu+MaxPooling, which means in total `3x3x1x16` kernel weights plus `16` biases for the convolutive step, for a total of 160 weights only, and the latent representation is a $(4,4,16)$ tensor, which means a 256-dimensional flattened tensor; hence, assuming as before to perform the binary classification with a dense sigmoid layer, it would mean 256+1 weights to learn in the actual binary classification task Of course it is possible to go for an even more compressed latent representation, both in the spatial domain and the channel domain, with a consequently reduced flattened vector dimensionality and ultimately even fewer weights to learn If you share more details about your problem and your dataset, we could try to help more Upvotes: 3 <issue_comment>username_3: username_2's answer is quite comprehensive. Here are my insights. **First of all, think whether you really need neural networks to solve your problem. Think whether traditional computer vision operations like edge detection/region-based methods help you to solve your problem (OpenCV can help you here). Think about your data again. In case you decide to use neural networks, some things to try out:** 1.
Your data set size is too small. Try to recall that we are learning to approximate functions (universal approximation theorem). Less data + more parameters means a high chance of overfitting. Use transfer learning. (Try resizing the image and performing a random resized square crop to use as input to your neural network. This may or may not work, since I don't know what exactly you are doing.) Also try data augmentations that make sense (e.g., vertical flips of traffic sign images don't work) 2. Try reducing the learning rate/using different learning rates for different parts of your network if you decide to use transfer learning. 3. Check whether your train and test dataset distributions are the same, i.e. don't train with 95% of label 0 and test with a set that has 95% label 1. I do not know whether your dataset is highly class imbalanced or whether you are doing some kind of anomaly detection. 4. Think about optimizers. Try Adam if you haven't. If you need more help, try sharing the data and we can try to help you. Upvotes: 2 <issue_comment>username_4: From my experience, this oscillation comes from: * Too high a learning rate: weights change too quickly. * Too few neurons in layers: not enough to fit. + When there are not enough neurons, the network can't learn at all, and the oscillation is due to failure to fit the global optimum; it's correct on some cases and wrong on others Upvotes: 1
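The overshooting described in the first answer can be reproduced on a toy one-dimensional problem; this is only an illustration of the effect, not the asker's model, and the learning rates are chosen purely to make the contrast visible.

```python
# Gradient descent on f(x) = x^2 (gradient 2x): a small learning rate decays
# smoothly, while a too-large one overshoots the minimum and oscillates.
def gd(lr, steps=20, x0=1.0):
    x, history = x0, [x0]
    for _ in range(steps):
        x -= lr * 2 * x        # one gradient-descent step
        history.append(x)
    return history

smooth = gd(lr=0.1)   # each step multiplies x by 0.8: steady decay toward 0
shaky = gd(lr=1.1)    # each step multiplies x by -1.2: sign flips and divergence
```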
2019/08/21
2,572
9,871
<issue_start>username_0: Is it possible to specify what the asymptotic behaviour of a Neural Networks (NN) model should be? I am thinking of a NN which tries to learn a mapping $\vec y=f(\vec x)$ with $\vec x$ a vector of features of dimension $d$ and $\vec y$ a vector of outputs of dimension $p$. Is it possible to specify that, for instance, the NN should have a fixed value when $x\_1$ goes to infinite? I mean: $$ \lim\_{x\_1\to \infty} f(\vec x) = \vec c $$ If it is not possible with NN, do you know other machine learning models (for instance Gaussian Process Regression or Support Vector Regression) which have a known asymptotic behaviour?
2019/08/23
582
1,664
<issue_start>username_0: Suppose we have a labeled data set with columns $A$, $B$, and $C$ and a binary outcome variable $X$. Suppose we have rows as follows:
```
row  A  B  C  X
1    1  2  3  1
2    4  2  3  0
3    6  5  1  1
4    1  2  3  0
```
Should we throw away either row 1 or row 4 because they have different values of the outcome variable X? Or keep both of them?<issue_comment>username_1: The problem you are portraying looks like a modified [XOR problem](https://medium.com/@jayeshbahire/the-xor-problem-in-neural-networks-50006411840b). You can't throw away the lines with a label of 1, because the model won't be able to learn this class. Upvotes: 2 <issue_comment>username_2: This is perfectly acceptable in a stochastic environment. Generally your loss is to minimize $-\log p(Y|X)$, or equivalently $-\sum\_i \log p(y\_i|x\_i)$. This optimization is equivalent to minimizing $-\mathbb{E}\,\log p(y\_i|x\_i)$. In other words, in this case you are minimizing: $$ \begin{align\*} L &= -\log p(1|x\_0) - \log p(0|x\_0) \\ &= -\log [p(1|x\_0) \cdot p(0|x\_0)] \\ &= -\log [p(1|x\_0) \cdot (1 - p(1|x\_0))] \end{align\*} $$ or, since $\log$ is monotonically increasing, equivalently minimizing $$ \hat L = -p(1|x\_0) \cdot (1 - p(1|x\_0)) $$ After some basic calculus, we see the optimal result we want the system to learn is $$ p(1|x\_0) = 0.5$$ Note that if you had more evidence, the result would just be that you want it to learn that the label is $1$ with probability $\mathbb{E}\_i\,[y\_i \mid x]$ Upvotes: 1 <issue_comment>username_3: I might consider 2 models (throw away row 1 and throw away row 4), and one more that keeps both, and see which generalises better on the test set. Upvotes: 0
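username_2's conclusion is easy to check numerically: on a grid of candidate probabilities, the loss $-\log p - \log(1-p)$ for the two conflicting rows is indeed minimized at $p = 0.5$.

```python
# Numeric check that L(p) = -log p - log(1 - p) attains its minimum at p = 0.5.
import math

ps = [i / 1000 for i in range(1, 1000)]               # candidate probabilities
losses = [-math.log(p) - math.log(1 - p) for p in ps]
best_p = ps[losses.index(min(losses))]
```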
2019/08/24
624
2,555
<issue_start>username_0: Currently, I am working on a few projects that use feedforward neural networks for regression and classification of simple tabular data. I have noticed that training a neural network using TensorFlow-GPU is often slower than training the same network using TensorFlow-CPU. Could something be wrong with my setup/code, or is it possible that sometimes the GPU is slower than the CPU?<issue_comment>username_1: I advise you to always use GPU over CPU for training your models. This is driven by the usage of deep learning methods on images and texts, where the data is very rich. You must have a GPU suited for training (e.g. NVIDIA 1080, NVIDIA Titan or higher versions); I wouldn't be surprised to find that your CPU is faster if you don't have a powerful GPU. Upvotes: 2 <issue_comment>username_2: This changes according to your data and the complexity of your models. See the following article by [Microsoft](https://azure.microsoft.com/en-us/blog/gpus-vs-cpus-for-deployment-of-deep-learning-models/). Their conclusion is > > The results suggest that the throughput from GPU clusters is always > better than CPU throughput for all models and frameworks proving that > GPU is the economical choice for inference of deep learning models. > ... > > > It is important to note that, for standard machine learning models > where number of parameters are not as high as deep learning models, > CPUs should still be considered as more effective and cost efficient. > > > Since you are training an MLP, it cannot be thought of as a standard machine learning model. See my [preprint, The impact of using large training data set KDD99 on classification accuracy](https://peerj.com/preprints/2838/), where I compare different machine learning algorithms using Weka. [![enter image description here](https://i.stack.imgur.com/H9Ogn.png)](https://i.stack.imgur.com/H9Ogn.png) As you can see from the above image, MLP takes 722 minutes to train while Naive Bayes takes ~2 minutes.
If your data is small and your model does not have many parameters, you will see better performance on the CPU. Upvotes: 4 [selected_answer]<issue_comment>username_3: It depends. If you have to solve a "simple" problem that does not require a CNN or stacked models (no multidimensional data, few multiplications, no very large numbers), then choosing a CNN / stacked architecture AND a GPU is like using a hammer to push in a needle. Not only will it waste energy, but the extra work involved (such as zero padding in memory) means you will observe a degradation in speed. Upvotes: 1
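The trade-off described in the accepted answer can be made concrete with a toy cost model (my own illustration; all numbers below are invented, not benchmarks of any real hardware): the GPU's fixed per-batch cost (kernel launches, host-to-device transfers) is amortized only when each batch contains enough arithmetic.

```python
# Toy cost model: time_per_batch = fixed_overhead + flops / throughput
# (illustrative made-up numbers, not measurements)
CPU = {"overhead_s": 0.0,    "flops_per_s": 5e10}   # no launch/transfer cost
GPU = {"overhead_s": 200e-6, "flops_per_s": 5e12}   # launch + PCIe transfer

def batch_time(device, flops):
    return device["overhead_s"] + flops / device["flops_per_s"]

small_mlp = 1e6   # flops per batch for a tiny tabular MLP
big_cnn   = 1e11  # flops per batch for a large image CNN

print(batch_time(CPU, small_mlp) < batch_time(GPU, small_mlp))  # CPU wins: True
print(batch_time(CPU, big_cnn)   > batch_time(GPU, big_cnn))    # GPU wins: True
```

Under this model, the tiny MLP never generates enough work per batch to pay for the GPU's fixed overhead, which matches the behavior the question describes.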
2019/08/25
575
2,406
<issue_start>username_0: I started working on the application of deep learning in medical imaging recently. While dealing with MRI images in the BraTS dataset, I observe that the first and last few frames are always completely empty (black). I want to ask those who are already working in the field: is there a way to remove them in a procedural manner before training and add them back correctly after the training as a postprocessing step (to comply with the ground truth segmentations' shape)? Has anyone tried that? I could not find any results on Google, so I am asking here. Edit: I think I did not make my point clear enough. I meant to say that the first and last few frames of *each* MRI scan are empty. How to deal with those is what I intended to ask.<issue_comment>username_1: As an expert in image analysis, I don't think this would be a problem. I have never worked with MRI images from the particular dataset you described, but I found that the format of the file containing the images is NIfTI. NIfTI files can be imported in MATLAB (the niftiread function), ImageJ and Python (NiBabel-Nipy). Thus you should be able to write a script to import the images from the file, select which images you want to keep, and save the output in the same format as the input (NIfTI). Upvotes: 0 <issue_comment>username_2: I've worked on the BRATS dataset and I can verify that this is a pretty much standard process. Besides throwing away the totally blank images, I also throw away the images at the beginning and end of the sequence that show the tip of the skull and the base of the neck. Generally, when dealing with MRIs, I do this with a script (think of it as a preprocessing step) that I run on each image and that counts the number of pixels that have a positive intensity (I actually add a small value to this to account for noise). Let's say, for images with values 0-255, I count the number of pixels with an intensity of over 10-15; let's call this $x$.
After that I set a threshold (empirically), let's call it $t$, and discard images with $x < t$. Specifically for BRATS, because you have the labels, you can see which of these images contain the desired classes and discard most of the rest. If you try to train a network on the dataset as is, you face an **enormous** imbalance ratio. I've had trouble training networks due to this, and the most success I got was when I threw away most of the irrelevant images. Upvotes: 2 [selected_answer]
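The counting-and-thresholding procedure described above can be sketched in NumPy (a minimal sketch of mine; the intensity threshold and the slice threshold $t$ are the empirical knobs the answer mentions, and the values below are placeholders):

```python
import numpy as np

def keep_informative_slices(volume, intensity_thresh=10, min_pixels=50):
    """volume: (n_slices, H, W) array. Keep slices whose count of
    'bright enough' pixels (the x in the answer) reaches the threshold t."""
    counts = (volume > intensity_thresh).sum(axis=(1, 2))  # x for each slice
    return np.where(counts >= min_pixels)[0]

# Tiny synthetic volume: the first and last slices are empty (all black).
rng = np.random.default_rng(0)
volume = np.zeros((5, 32, 32))
volume[1:4] = rng.integers(0, 256, size=(3, 32, 32))

print(keep_informative_slices(volume))  # keeps only the middle slices: [1 2 3]
```

The kept indices can be stored so the discarded empty slices are re-inserted (as all-background predictions) at postprocessing time, which answers the "add them back correctly" part of the question.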
2019/08/27
2,518
8,696
<issue_start>username_0: I came across some papers that use $\mathbb E$ in equations, in particular, this paper: <https://arxiv.org/pdf/1511.06581.pdf>. Here are some equations from the paper that use it: $Q^\pi \left(s,a \right) = \mathbb E \left[R\_t|s\_t = s, a\_t = a, \pi \right]$ , $V^\pi \left(s \right) = \mathbb E\_{a\sim\pi\left(s \right)} \left[Q^\pi \left(s, a\right) \right]$ , $Q^\pi \left(s, a \right) = \mathbb E\_{s'} \left[r + \gamma\mathbb E\_{a'\sim\pi \left(s' \right)} \left[Q^\pi \left(s', a' \right) \right] | s,a,\pi \right]$ $\nabla\_{\theta\_i}L\_i\left(\theta\_i \right) = \mathbb E\_{s, a, r, s'} \left[\left(y\_i^{DQN} - Q \left(s, a; \theta\_i \right) \right) \nabla\_{\theta\_i} Q\left(s, a; \theta\_i \right) \right]$ Could someone explain to me what the purpose of $\mathbb E$ is?<issue_comment>username_1: That's the [Expected Value operator](https://en.wikipedia.org/wiki/Expected_value). Intuitively, it gives you the value that you would "expect" ("on average") the expression after it (often in square or other brackets) to have. Typically that expression involves some random variables, which means that there may be a wide range of different values the expression may take in any concrete, single event. Taking the expectation basically means that you "average" over all the values the expression could potentially take, appropriately weighted by the probabilities with which certain events occur. You'll often find the assumptions under which the expectation is taken after a vertical line ($\mid$) inside the brackets, and/or in the subscript to the right of the $\mathbb{E}$ symbol. Sometimes, some assumptions may also be left implicit.
--- For example: $$\mathbb{E} \left[ R\_t \mid s\_t = s, a\_t = a, \pi \right]$$ may be read in english as "the expected value of the discounted returns from time $t$ onwards ($R\_t$), given that the state at time $t$ is $s$ ($s\_t = s$), given that our action at time $t$ is $a$ ($a\_t = a$), given that we continue behaving according to policy $\pi$ after time $t$". I would say that, in this case, the expected value also relies on the transition dynamics of the Markov decision process (i.e. the probabilities with which we transition between states, given our actions). This is left implicit. --- Second example: $$V^{\pi}(s) = \mathbb{E}\_{a \sim \pi(s)} \left[ Q^{\pi}(s, a) \right]$$ may be read as "$V^{\pi}(s)$ is equal to the expected value of $Q^{\pi}(s, a)$, under the assumption that $a$ is sampled according to $\pi(s)$". In theory, something like this could be computed by enumerating over all possible $a$, computing $Q^{\pi}(s, a)$ for every such $a$, and multiplying it by the probability that $\pi(s)$ assigns to $a$. In practice, you could also approximate it by running a large number of experiments in which you sample $a$ from $\pi(s)$, and then evaluate a single concrete case of $Q^{\pi}(s, a)$, and average over all the evaluations. Upvotes: 5 [selected_answer]<issue_comment>username_2: $\mathbb E$ is the symbol for the [expectation (or expected value)](https://en.wikipedia.org/wiki/Expected_value). To fully understand the concept of *expected value*, you need to understand the concept of [*random variable*](https://en.wikipedia.org/wiki/Random_variable). An example should help you understand the idea behind the concept of a random variable. Suppose you [toss a coin](https://en.wikipedia.org/wiki/Coin_flipping). The outcome of this (random) experiment can either be *heads* or *tails*. 
Formally, the [*sample space*](https://en.wikipedia.org/wiki/Sample_space), $\Omega = \{\text{heads}, \text{tails}\}$, is the set that contains the possible outcomes of a random experiment. The outcome (e.g. *heads*) is the result of a [random process](https://en.wikipedia.org/wiki/Stochastic_process). A random variable is a function that we can associate with a random process so that we can more formally describe the random process. In this case, we can associate a random variable, $T$, with this random process of tossing a coin. $$ T(\omega) = \begin{cases} 1, & \text{if } \omega = \text{heads}, \\[6pt] 0, & \text{if } \omega = \text{tails}, \end{cases} $$ where $\omega \in \Omega$. In other words, if the outcome of the random process is *heads*, then the output of the associated random variable $T$ is $1$, else it is $0$. We can also associate with each random process (and thus with the corresponding random variable) a probability distribution, which, intuitively, describes the probability of occurrence of each possible outcome of the random process. In the case of the coin-flipping random variable (or process), assuming that the coin is "fair", then the following function describes the probability of each outcome of the coin $$ f\_T(t) = \begin{cases} \tfrac 12,& \text{if }t=1,\\[6pt] \tfrac 12,& \text{if }t=0, \end{cases} $$ In other words, there is $\tfrac 12$ probability that the outcome of the random process is $1$ (heads) and $\tfrac 12$ probability that it is $0$ (tails). If you throw a coin $n$ times in the air, how many times will it land heads and tails? Of course, it will depend on the experiment. In the first experiment, you might get $\frac{3n}{4}$ heads and $\frac{n}{4}$ tails. In the second experiment, you might get $\frac{n}{2}$ heads and $\frac{n}{2}$ tails, and so on. 
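This run-to-run variability is easy to simulate (my own minimal sketch, using only Python's standard library):

```python
import random

random.seed(0)

def count_heads(n):
    # one experiment: toss a fair coin n times and count the heads (T = 1)
    return sum(random.randint(0, 1) for _ in range(n))

# A few repetitions of the 100-toss experiment: the head count varies...
heads = [count_heads(100) for _ in range(5)]
print(heads)

# ...but over very many tosses the fraction of heads settles near 1/2.
frac = count_heads(100_000) / 100_000
print(frac)
```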
If you repeat this experiment an infinite number of times (of course, we can't do that, but imagine if we could), how many times do you **expect** (on average) to get heads and tails? The expected value is the answer to this question. In the case of the coin-tossing experiment, the outcomes are discrete (heads or tails); consequently, $T$ is a *discrete random variable*. In the case of a discrete random variable, the expected value **is defined as** follows $$\mathbb E[T] = \sum\_{t \in T} p(t) t$$ where $t$ is an outcome of the random variable $T$ and $p(t)$ is the probability of that outcome. In other words, the expected value of a random variable $T$ is defined as a weighted sum of the values it can take, where the weights are the corresponding probabilities of occurrence. So, in the case of the coin-tossing experiment, the expected value is \begin{align} \mathbb E[T] &= \sum\_{t \in T} p(t) t\\ &= \frac{1}{2}1 + \frac{1}{2} 0\\ &=\frac{1}{2} \end{align} What does $\mathbb E[T] = \frac{1}{2}$ mean? Intuitively, it means that half of the time the random process produces heads and half of the time it produces tails, assuming it is governed by the probability distribution $f\_T(t)$. Note that, if the probability distribution $f\_T(t)$ had been defined differently, then the expected value would also have been different, given that the expected value is defined as a function of the probability of occurrence of each outcome of the random process. In your specific examples, $\mathbb E$ is still the symbol for the expected value. For example, in the case of $Q^\pi \left(s,a \right) = \mathbb E \left[R\_t|s\_t = s, a\_t = a, \pi \right]$, $Q^\pi \left(s,a \right)$ is thus defined as the expected value of the random variable $R\_t$, given that $s\_t = s$, $a\_t = a$ and the policy is $\pi$ (so this is actually a [conditional expectation](https://en.wikipedia.org/wiki/Conditional_expectation)).
In this specific case, $R\_t$ represents the *return* at time step $t$, which, in reinforcement learning, is defined as $$ R\_t = \sum\_{k=0}^\infty \gamma^k r\_{t+k+1} $$ where $r\_{t+k+1} \in \mathbb{R}$ is the reward at time step $t+k+1$. $R\_t$ is a random variable because it is assumed that the underlying environment is a random process. It is not always easy to intuitively understand the expected value of a random variable. For example, in the case of a coin-flipping random process, the expected value $\frac{1}{2}$ should be intuitive (given that it is the average of $1$ and $0$), but, in the case of $Q^\pi \left(s,a \right)$, at first glance, it is not clear what the expected value should be (hence the need for algorithms such as Q-learning), given that it depends on the rewards, which depend on the dynamics of the environment. However, the intuition behind the concept of the expected value and the calculation (provided the associated random variable is discrete) does not change. When there is more than one random variable involved in the calculation of the expected value, we also need to specify the random variable the expected value is being calculated with respect to, hence the subscripts of the expected value in your examples. See, for example, [Subscript notation in expectations](https://stats.stackexchange.com/q/72613/82135) for more info. Upvotes: 3
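The two definitions above, the weighted-sum expectation and the discounted return, can be computed directly. A minimal sketch (the reward sequence and the discount factor below are made up for illustration):

```python
# Expected value of a discrete random variable: E[T] = sum over t of p(t) * t
def expectation(dist):
    return sum(p * t for t, p in dist.items())

coin = {1: 0.5, 0: 0.5}   # fair coin: heads -> 1, tails -> 0
print(expectation(coin))  # -> 0.5

# Discounted return: R_t = sum over k of gamma**k * r_{t+k+1}
def discounted_return(rewards, gamma):
    return sum(gamma ** k * r for k, r in enumerate(rewards))

# Hypothetical reward sequence r_{t+1}, r_{t+2}, r_{t+3} with gamma = 0.9:
print(discounted_return([1.0, 0.0, 2.0], gamma=0.9))  # 1 + 0 + 0.81 * 2 = 2.62
```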
2019/08/28
487
1,943
<issue_start>username_0: K-means tries to find centroids and then forms clusters around those centroids. But what if we want to cluster based on the complement? For example, suppose we have a group of animals and we want to cluster Dogs, Cats, (Not Dogs and Not Cats). The 3rd category will not arise from mean clustering.<issue_comment>username_1: Note: K-means does not assume an interpretation/label of the clusterings - in fact it is an [unsupervised](https://en.wikipedia.org/wiki/Unsupervised_learning) algorithm. The interpretations are a result of **human analysis** *after* running K-means. For example, in the case of cats and dogs one would most definitely choose k = 2 - which provides an easy interpretation. However, what would it mean if we set k = 1000? We no longer have a "clean" interpretation of the centroids. Note how I keep saying "interpretation." The algorithm simply assigns a data point to a cluster and calls it a day. Humans then look at the results and try to **understand** them with an **interpretation**. Continuing with the example where k = 2, one could easily interpret "is cat" as "not dog" and "is dog" as "not cat." The idea here is that the data is **unlabeled** beforehand and humans try to fathom the results retrospectively by assigning the resulting clusters an understandable label. I hope this clarifies the issue. Upvotes: 1 <issue_comment>username_2: The purpose of clustering is to look for the **same characteristics** in the data and group the data into clusters. The number of clusters we take is based on how our **clustering algorithm is evaluated**. For example, we can use the **Elbow Method** for evaluation. We take as the optimal number of clusters the point at which the distortion starts decreasing in a linear fashion. In the picture below, the **optimal number of clusters** for the data is 3. [![enter image description here](https://i.stack.imgur.com/0MEmU.png)](https://i.stack.imgur.com/0MEmU.png) Upvotes: 0
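Since neither answer shows the mechanics, here is a minimal NumPy sketch of mine of k-means (Lloyd's algorithm) together with the distortion values used by the Elbow Method; the toy data, restart count and iteration budget are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def kmeans(X, k, iters=50, n_init=20):
    """Lloyd's algorithm with random restarts; returns (labels, distortion)."""
    best_labels, best_dist = None, np.inf
    for _ in range(n_init):
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            # assign every point to its nearest centroid
            d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # move each centroid to the mean of its points (keep it if empty)
            centroids = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                                  else centroids[j] for j in range(k)])
        dist = ((X - centroids[labels]) ** 2).sum()
        if dist < best_dist:
            best_labels, best_dist = labels, dist
    return best_labels, best_dist

# Three well-separated 2-D blobs: distortion should drop sharply up to k = 3.
X = np.concatenate([rng.normal(c, 0.3, size=(30, 2)) for c in (0.0, 5.0, 10.0)])

for k in (1, 2, 3, 4):
    _, dist = kmeans(X, k)
    print(k, round(float(dist), 1))
```

Plotting these distortions against k reproduces the elbow shape from the picture: a steep drop until k = 3, then a near-linear decrease.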
2019/08/30
680
2,855
<issue_start>username_0: How do I program a neural network such that, when an image is inputted, the output is a numerical value that is not the probability of the image being a certain class? In other words, a CNN that doesn't classify. For example, when an image of a chair is given as input, the model should not give the chance that the image is a chair but rather give the predicted age of the chair, the predicted price of the chair, etc. I'm currently not sure how to program a neural net like this.<issue_comment>username_1: This can be thought of as a **loss function design problem**. If you optimize your network weights for something like multi-class classification, then expect your network to learn weights for this task (you will use the cross-entropy loss for this task). If you optimize your network to output a single value at the last layer and treat it as a regression problem for age prediction, then your network can learn weights for this particular task (you may use something like the Mean Squared Error loss here). Let me give you a rough guideline on how to do this. Suppose your input is images of chairs and you want to predict their age using a pretrained ResNet; this is how you may do it in PyTorch.

**Definition**

X: input pictures

Y: list of ground truth values, which is the age here, i.e.: [1.4, 2.5, 2.2, ...]

**Modify your neural network to give one output at the last layer**

```
model = torchvision.models.resnet18(pretrained=True)
num_ftrs = model.fc.in_features
model.fc = torch.nn.Linear(num_ftrs, 1)
```

**Design your loss function appropriately. You can treat age prediction as a standard regression problem. Of course, there are better loss designs here.**

```
criterion = torch.nn.MSELoss()
```

**So use this loss during training**

```
loss = criterion(output, target)
```

where output is your neural network's prediction and target holds your ground truth values. This is how you can modify an existing architecture for your task. Hope it helps.
Upvotes: 2 <issue_comment>username_2: You generally use a SoftMax layer as the output layer of a neural network that is used as a classifier. Now, if you want your neural network to predict the age or the price of a chair, as in linear regression (the output is continuous), you have to remove the SoftMax layer and add one or more layers such that the final layer gives only one value at the output (which is the prediction for your age or price). And, instead of the logistic loss, you can use MSE for backpropagation. So, as username_1 mentioned in another answer, it's all about the final layer used and the loss function used. Based on what you use, your weights get trained. This is what is done in **transfer learning** also: based on the task you want to achieve, you change the last layers and retrain your network. Upvotes: 1
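Both answers come down to swapping the output layer and the loss function. As a framework-free sketch of the regression side (my own illustration in plain NumPy, with made-up data), here is a single linear output unit, with no softmax, trained with the MSE gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "features" (think: activations feeding the final layer) and a
# continuous target such as an age or a price.
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 3.0

# One linear output unit trained by gradient descent on the MSE.
w, b, lr = np.zeros(3), 0.0, 0.1
for _ in range(500):
    pred = X @ w + b
    grad = pred - y                     # gradient of 0.5 * MSE w.r.t. pred
    w -= lr * (X.T @ grad) / len(X)
    b -= lr * grad.mean()

print(np.round(w, 2), round(float(b), 2))  # recovers roughly [2, -1, 0.5] and 3.0
```

In a real network the features X would themselves be learned, but the final layer and loss behave exactly like this.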
2019/08/30
2,011
6,983
<issue_start>username_0: I would like to take in some input values for $n$ variables, say $R$, $B$, and $G$. Let $Y$ denote the response variable of these $n$ inputs (in this example, we have $3$ inputs). Other than these, I would like to use a reference/target value to compare the results. Now, suppose the relation between the inputs ($R$, $B$ and $G$) and the output $Y$ is (let's say): $$Y = R + B + G$$ But the system/machine has no knowledge of this relation. It can only read its inputs, $R$, $B$ and $G$, and the output, $Y$. Also, the system is provided with the reference value, say, $\text{REF} = 30$. The aim of the machine is to find this relation between its inputs and output(s). For this, I have come across some quite useful material online, like [this forum query](https://ai.stackexchange.com/q/7434/2444) and [Approximation by Superpositions of a Sigmoidal Function](https://pdfs.semanticscholar.org/05ce/b32839c26c8d2cb38d5529cf7720a68c3fab.pdf) by **<NAME>**, and felt that it was possible. Also, I suspect that **Polynomial Regression** may be helpful, as suggested [here](https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/1467-9868.00192). One vague approach that comes to my mind is to use a truth-table-like approach to somehow deduce the effect of the inputs on the output and hence get a function for it. But neither am I sure how to proceed with it, nor do I trust its credibility. Is there any alternative/already existing method to accomplish this?<issue_comment>username_1: There are always a large number of possible functions that can produce a given set of input-output values. The challenge is to find a *simplest* function (according to whatever criteria you choose) that produces those values.
One approach is to write a general function of the input variables, comprising terms of all orders in R, G, and B, with a coefficient for each term, then search for values of the coefficients that A) reproduce the known input-output values accurately and B) leave the largest number of coefficients equal to zero. Several different algorithms can be used to do the search efficiently. My choice would be a genetic algorithm that seeks the minimum of the RMS difference between the produced and known I-O values, summed with the product of a gradually increasing parameter and the number of nonzero coefficients. Upvotes: 1 <issue_comment>username_2: As I see your explanation, what you're looking for can be restated as

```
Y = aX + b
```

where Y is the output vector, X the input vector, and a and b are the coefficients you want to find.

**Why so? And what is happening?**

First things first: I recommend watching video [4] about how matrices and vectors work together and, after multiplication, form the very familiar equation:

```
Y = a1 * x1 + a2 * x2 + a3 * x3
```

Now you see that you do not only get R + G + B but also a constant coefficient for each of the variables. About polynomial solutions, I found [2] and [3], but reading through [3] you'll soon notice it is about a completely different approach:

```
Y = a1 * x + a2 * x^2 + a3 * x^3
```

which you don't want. So, to solve it you can use something called linear regression, for example with a so-called Deep Neural Network [1]. I would summarize the source [1] in these steps:

* You have to find some training data, that is, examples of correct Y values for known X values.
* Then you build, e.g. in a Python notebook, some code that has: a neural network with hidden layer(s), an activation function, back-propagation and an objective function.
* Then you run many, many iterations over the data; this is called training.

There are plenty of samples and courses online covering these in detail, but with the correct tools and tutorials only a few dozen lines of code are needed.
The process ends with a validation phase using some more known data items and results. It can tell you how well the estimated model works.

---

**Final Notes:** *As you may observe, the solution involves quite a few unfamiliar terms you have to learn before mastering the task. Udemy, for example, has great online courses on this topic, and free tutorials are also available on other sites. Your plans sound quite ambitious compared to the knowledge you have so far, so I really do recommend you learn a little more to be able to fine-tune the examples already available online. For example, tutorial [5] includes one; the code is quite complicated at first, and you'd need quite a lot of practice to master it line by line.*

---

**In short:** Find your favorite tutorial, study neural networks a little (the basics) and pick a code sample to start experimenting with. It is a long way, but it is worth it.

---

**Source:**

[1] <https://lightsapplications.wordpress.com/linear-regression-and-deep-learning/>

[2] <https://www.ritchieng.com/machine-learning-polynomial-regression/>

[3] <https://arachnoid.com/polysolve/>

[4] <https://www.youtube.com/watch?v=F2lJ7oSwcyY>

[5] <https://missinglink.ai/guides/neural-network-concepts/backpropagation-neural-networks-process-examples-code-minus-math/>

Upvotes: 0 <issue_comment>username_3: I found a solution to this some time back. I have been studying function approximation (within linear regression) for some time. Here's how I did it: [Neural Networks have been proved to be universal function approximators](http://neuralnetworksanddeeplearning.com/chap4.html).
So, even a single hidden layer would be sufficient to approximate a function as simple as **addition**. (Even somewhat complex functions like the **[Sine](https://www.codeproject.com/Articles/1237026/Simple-MLP-Backpropagation-Artificial-Neural-Netwo)** and **any random CONTINUOUS [wiggly function](https://colab.research.google.com/drive/1K9agqd9Pn8llY1uwkFD68YqFgPeSTBCQ)** have been approximated.) First, I used a high-level API like TensorFlow with Keras and implemented it [here](https://colab.research.google.com/drive/1s8VYf5tYGj83SxOtJQbiKs7AJd_eNQX5). The model was trained on the data (input-output pairs)

```
import numpy as np

R = np.array([-4, -10, -2, 8, 5, 22, 3], dtype=float)
B = np.array([4, -10, 0, 0, 15, 5, 1], dtype=float)
G = np.array([0, 10, 5, 8, 1, 2, 38], dtype=float)
Y = np.array([0, -10, 3, 16, 21, 29, 42], dtype=float)

#Stack the inputs into a single (7, 3) feature matrix
RBG = np.column_stack([R, B, G])
```

And trained as follows:

```
import tensorflow as tf

#Create a hidden layer with 2 neurons
hidden = tf.keras.layers.Dense(units=2, input_shape=[3])

#Create the output (final) layer, which represents the value of Y
output = tf.keras.layers.Dense(units=1)

#Combine layers to form the neural network and compile it
model = tf.keras.Sequential([hidden, output])
model.compile(loss='mean_squared_error', optimizer=tf.keras.optimizers.Adam(0.1))
history = model.fit(RBG, Y, epochs=500, verbose=False)
```

The model converges in about 50 epochs [![enter image description here](https://i.stack.imgur.com/97HZy.png)](https://i.stack.imgur.com/97HZy.png) Also, I have implemented the same using only C/C++ and used gnuplot to visualize the results. Upvotes: 1 [selected_answer]
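As a complement, username_1's earlier suggestion of fitting a general set of basis terms and keeping only the non-negligible coefficients can be sketched with ordinary least squares instead of a genetic search (my own sketch, reusing the question's data; only first-order terms are included here):

```python
import numpy as np

R = np.array([-4, -10, -2, 8, 5, 22, 3], dtype=float)
B = np.array([4, -10, 0, 0, 15, 5, 1], dtype=float)
G = np.array([0, 10, 5, 8, 1, 2, 38], dtype=float)
Y = np.array([0, -10, 3, 16, 21, 29, 42], dtype=float)

# Basis: a bias column plus the first-order terms. Higher-order terms
# (R*B, R**2, ...) could be appended as extra columns in the same way.
A = np.column_stack([np.ones_like(R), R, B, G])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)

coef[np.abs(coef) < 1e-8] = 0.0   # zero-out negligible coefficients
print(coef)                        # recovers [0, 1, 1, 1], i.e. Y = R + B + G
```

For this data the exact relation lies in the span of the basis, so the fit is exact; with noisy data, the zeroing threshold plays the sparsity role that username_1's increasing penalty parameter plays in the genetic-algorithm version.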
2019/08/31
3,809
14,341
<issue_start>username_0: If the original purpose for developing AI was to help humans in some tasks and that purpose still holds, why should we care about its explainability? For example, in deep learning, as long as the intelligence helps us to the best of its abilities and carefully arrives at its decisions, why would we need to know how its intelligence works?<issue_comment>username_1: Explainable AI is often desirable because 1. AI (in particular, artificial neural networks) can [catastrophically fail](https://en.wikipedia.org/wiki/Catastrophic_failure) to do its intended job. More specifically, it can be hacked or attacked with [adversarial examples](https://arxiv.org/pdf/1312.6199.pdf) or it can take unexpected wrong decisions whose consequences are catastrophic (for example, it can lead to the death of people). For instance, imagine that an AI is responsible for determining the dosage of a drug that needs to be given to a patient, based on the conditions of the patient. What if the AI makes a wrong prediction and this leads to the death of the patient? Who will be responsible for such an action? In order to accept the dosage prediction of the AI, the doctors need to *trust* the AI, but trust only comes with understanding, which requires an explanation. So, to avoid such possible failures, it is fundamental to understand the inner workings of the AI, so that it does not make those wrong decisions again. 2. AI often needs to interact with humans, who are sentient beings (we have feelings) and who often need an explanation or reassurance (regarding some topic or event). 3. In general, humans are often looking for an explanation and understanding of their surroundings and the world. By nature, we are curious and exploratory beings. Why does an apple fall?
Upvotes: 3 <issue_comment>username_2: Another reason: in the future, AI might be used for tasks that human beings cannot understand on their own; by understanding how a given AI algorithm works on such a problem, we might come to understand the nature of the underlying phenomenon. Upvotes: 2 <issue_comment>username_3: IMHO, the most important need for explainable AI is to prevent us from becoming intellectually lazy. If we stop *trying* to understand how answers are found, we have conceded the game to our machines. Upvotes: -1 <issue_comment>username_4: If you're a bank, hospital or any other entity that uses predictive analytics to make a decision about actions that have a huge impact on people's lives, you would not make important decisions just because Gradient Boosted trees told you to do so. Firstly, because it's risky and the underlying model might be wrong, and, secondly, because in some cases it is illegal - see [Right to explanation](https://en.wikipedia.org/wiki/Right_to_explanation). Upvotes: 3 <issue_comment>username_5: As argued by [Selvaraju et al.](https://arxiv.org/pdf/1610.02391.pdf), there are three stages of AI evolution in which interpretability is helpful. 1. In the early stages of AI development, when AI is weaker than human performance, transparency can help us **build better models**. It can give a better understanding of how a model works and helps us answer several key questions. For example, *why* a model works in some cases and doesn't in others, *why* some examples confuse the model more than others, *why* these types of models work and the others don't, etc. 2. When AI is on par with human performance and ML models are starting to be deployed in several industries, it can help build **trust** for these models. I'll elaborate a bit on this later, because I think that it is the most important reason. 3. When AI significantly outperforms humans (e.g. AI playing chess or Go), it can help with **machine teaching** (i.e.
learning from the machine on how to improve human performance on that specific task).

### Why is trust so important?

First, let me give you a couple of examples of industries where *trust* is paramount:

* In healthcare, imagine a Deep Neural Net performing diagnosis for a specific disease. A classic *black box* NN would just output a binary "yes" or "no". Even if it could outperform humans in sheer predictive ability, it would be utterly useless in practice. If the doctor disagreed with the model's assessment, shouldn't he know *why* the model made that prediction? Maybe it saw something the doctor missed. Furthermore, if it made a misdiagnosis (e.g. a sick person was classified as healthy and didn't get the proper treatment), who would take responsibility: the model's user? the hospital? the company that designed the model? The legal framework surrounding this is a bit blurry.
* Another example is self-driving cars. The same questions arise: if a car crashes, whose fault is it: the driver's? the car manufacturer's? the company that designed the AI? Legal accountability is key for the development of this industry.

In fact, according to many, this lack of trust has hindered the **adoption** of AI in many fields (sources: [[1]](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.8362&rep=rep1&type=pdf), [[2]](https://www.researchgate.net/publication/3454453_Autonomy_and_Common_Ground_in_Human-Robot_Interaction_A_Field_Study), [[3]](https://arxiv.org/abs/1706.07269)). There is, however, a running hypothesis that with more transparent, interpretable or explainable systems users will be **better equipped to understand and therefore trust** the intelligent agents (sources: [[4]](https://www.arl.army.mil/arlreports/2014/ARL-TR-6905.pdf), [[5]](https://journals.sagepub.com/doi/10.1177/0018720815621206), [[6]](http://www.bradhayes.info/papers/hri17.pdf)). In several real-world applications, you can't just say "it works 94% of the time".
You might also need to provide a justification...

### Government regulations

Several governments are slowly proceeding to regulate AI, and transparency seems to be at the center of all of this. The first to move in this direction is the EU, which has set several guidelines stating that AI should be transparent (sources: [[7]](https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai), [[8]](https://ec.europa.eu/commission/news/artificial-intelligence-2019-apr-08_en), [[9]](http://www.europarl.europa.eu/news/en/press-room/20190214IPR26425/online-platforms-required-by-law-to-be-more-transparent-with-eu-businesses)). For instance, the GDPR states that if a person's data has been subject to "automated decision-making" or "profiling" systems, then he has a right to access

> "meaningful information about the logic involved"

([Article 15, EU GDPR](http://www.privacy-regulation.eu/en/article-15-right-of-access-by-the-data-subject-GDPR.htm))

Now, this is a bit blurry, but there is clearly the intent of requiring some form of explainability from these systems. The general idea the EU is trying to convey is that "if you have an automated decision-making system affecting people's lives, then they have a right to know *why* a certain decision has been made." For example, if a bank has an AI accepting and declining loan applications, then the applicants have a right to know *why* their applications were rejected.

### To sum up...

Explainable AIs are necessary because:

* They give us a better understanding, which helps us improve them.
* In some cases, we can learn from AI how to make better decisions in some tasks.
* They help users *trust* AI, which leads to a wider adoption of AI.
* Deployed AIs in the (not too distant) future might be required to be more "transparent".

Upvotes: 7 [selected_answer]<issue_comment>username_6: The answer to this is incredibly simple.
If you are a bank executive, one day you may need to stand up in court and explain why your AI denied mortgages to all these people... who just happen to share some protected characteristic under anti-discrimination legislation. The judge will not be happy if you handwave the question away mumbling something about algorithms. Or worse, why did this car/plane crash and how will you prevent it next time? This is the major blocker to more widespread adoption of AI in many industries. Upvotes: 2 <issue_comment>username_7: > > Why do we need explainable AI? ... why we need to know "how does its intelligence work?" > > > Because anyone with access to the equipment, enough skill, and enough time, can force the system to make a decision that is unexpected. The owner of the equipment, or 3rd parties, relying on the decision without an explanation as to why it is correct would be at a disadvantage. Examples - Someone ***might*** discover: * People who are named <NAME> and request heart surgery on Tuesday mornings, Wednesday afternoons, or Fridays on odd days and months have a 90% chance of moving to the front of the line. * Couples where the male's last name begins with an odd letter in the first half of the alphabet, and who apply for a loan with a spouse whose first name begins with a letter from the beginning of the alphabet, are 40% more likely to receive the loan if they have fewer than 5 bad entries in their credit history. * etc. Notice that the above examples *ought not* to be determining factors in regards to the question being asked, yet it's possible for an adversary (with their own equipment, or knowledge of the algorithm) to exploit them. **Source papers**: * "[AdvHat: Real-world adversarial attack on ArcFace Face ID system](https://arxiv.org/abs/1908.08705)" (Aug 23 2019) by <NAME> and <NAME> + Creating a sticker and placing it on your hat fools the facial recognition system.
* "[Defending against Adversarial Attacks through Resilient Feature Regeneration](https://arxiv.org/abs/1906.03444)" (Jun 8 2019), by <NAME>, <NAME>, and <NAME> + > > "Deep neural network (DNN) predictions have been shown to be vulnerable to carefully crafted adversarial perturbations. Specifically, so-called universal adversarial perturbations are image-agnostic perturbations that can be added to any image and can fool a target network into making erroneous predictions. Departing from existing adversarial defense strategies, which work in the image domain, we present a novel defense which operates in the DNN feature domain and effectively defends against such universal adversarial attacks. Our approach identifies pre-trained convolutional features that are most vulnerable to adversarial noise and deploys defender units which transform (regenerate) these DNN filter activations into noise-resilient features, guarding against unseen adversarial perturbations.". > > > * "[One pixel attack for fooling deep neural networks](https://arxiv.org/abs/1710.08864)" (May 3 2019), by <NAME>, <NAME>, and <NAME> + Altering **one pixel** can cause these errors: > > [![Figure 1](https://i.stack.imgur.com/X9yf8.jpg)](https://i.stack.imgur.com/X9yf8.jpg) > > Fig. 1. One-pixel attacks created with the proposed algorithm that successfully fooled three types of DNNs trained on CIFAR-10 dataset: The All convolutional network (AllConv), Network in network (NiN) and VGG. The original class labels are in black color while the target class labels and the corresponding confidence are given below. > > >   > > > [![Figure 2](https://i.stack.imgur.com/6RdaP.jpg)](https://i.stack.imgur.com/6RdaP.jpg) > > Fig. 2. One-pixel attacks on ImageNet dataset where the modified pixels are highlighted with red circles. The original class labels are in black color while the target class labels and their corresponding confidence are given below. 
> > > Without an explanation as to how and why a decision is arrived at, the decision can't be absolutely relied upon. Upvotes: 4 <issue_comment>username_8: In addition to all these answers mentioning the more practical reasons why we'd want explainable AIs, I'd like to add a more philosophical one. Understanding how things around us work has been one of the main driving forces of science since antiquity. If you don't have an understanding of how things work, you can't evolve beyond that point. Just because "gravity works" hasn't stopped us trying to understand how it works. In turn, a better understanding of it led to several key discoveries, which have helped us advance our technology. Likewise, if we stop at "it works", we will stop improving it. --- Edit: AI hasn't been just about making "machines think", but also about understanding, through them, how the human brain works. AI and neuroscience go [hand in hand](https://deepmind.com/blog/article/ai-and-neuroscience-virtuous-circle). This all wouldn't be possible without being able to explain AI. Upvotes: 2 <issue_comment>username_1: I have already given [an answer](https://ai.stackexchange.com/a/14226/2444) and there are [other good answers](https://ai.stackexchange.com/a/14247/2444), but I would like to give another answer by quoting an excerpt from an old paper by [<NAME>](https://en.wikipedia.org/wiki/Norbert_Wiener), i.e. [Some Moral and Technical Consequences of Automation](https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/moral.pdf) ([1960, Science](https://science.sciencemag.org/content/131/3410/1355)) > > As is now generally admitted, over a limited range of operation, machines act far more rapidly than human beings and are far more precise in performing the details of their operations. This being the case, even when machines do not in any way transcend man's intelligence, they very well may, and often do, transcend man in the performance of tasks. 
An intelligent understanding of their mode of performance may be delayed until long after the task which they have been set has been completed. > > > > > This means that though machines are theoretically subject to human criticism, such criticism may be ineffective until long after it is relevant. To be effective in warding off disastrous consequences, our understanding of our man-made machines should in general develop [*pari passu*](https://en.wikipedia.org/wiki/Pari_passu) with the performance of the machine. By the very slowness of our human actions, our effective control of our machines may be nullified. By the time we are able to react to information conveyed by our senses and stop the car we are driving, it may already have run head on into a wall. > > > Upvotes: 0
2019/09/01
1,474
5,154
<issue_start>username_0: After an AI goes through the process described in [How would an AI learn language?](https://ai.stackexchange.com/questions/1970/how-would-an-ai-learn-language), an AI knows the grammar of a language through the process of grammar induction. They can speak the language, but they have learned *formal grammar*. But most conversations today, even formal ones, use idiomatic phrases. Would it be possible for an AI to be given a set of idioms? For example, > > Immer mit der Ruhe > > > which, in German, means 'take it easy', but an AI of grammar induction, if told to translate 'take it easy' to German, would not think of this. And if asked to translate this, it would output > > Always with the quiet > > > So, is it possible to teach an AI to use idiomatic phrases to keep up with the culture of humans?<issue_comment>username_1: Do you have access to parallel corpora in the source and target languages that translate idioms correctly? Neural machine translation (NMT) should handle this. NMT uses deep learning to match sequences/pairs of words in one language to another and is now the state-of-the-art method for translation AI. I don't think an AI knows the grammar of a language. A translating AI knows patterns, but not necessarily grammar in the sense that we learn in school as children. Here's a potential approach that should work given large enough corpora with examples of idioms - [github.com/facebookresearch/MUSE](https://github.com/facebookresearch/MUSE) Upvotes: 2 <issue_comment>username_2: **Short answer**: Yes. **TL;DR** In the presence of good datasets this can be accomplished with a [pipeline](https://www.aclweb.org/anthology/E17-3027). **Long Answer** In reality an idiom is a series of words which is supposed to have a semantic meaning that is not denoted by the literal reading ([source](https://pdfs.semanticscholar.org/6a4e/8ebd0979ae38d693528fe4da30f67db82768.pdf)). 
This means that any system that is used must be capable of considering multiple words at a time. Additionally, some idioms are **context dependent**. Example: * The fisherman broke the ice with his tool. Are we to believe that this is a very suave fisherman? > > So, is it possible to teach an AI to use idiomatic phrases to **keep up with the culture of humans**? > > > Observe that humans do not come linguistically "pre-loaded" with idioms. So we can safely assume that idiom usage is a learning task and that the only way for them to keep up is for them to keep learning. So, if we solve the idiom learning task, we just need to keep our agent [online](https://en.wikipedia.org/wiki/Online_machine_learning) or periodically retrain it on nascent corpora. One difficulty is that, in the absence of a label, a metaphor could easily be mistaken for an idiom and vice versa. So [semantic outlier](https://link.springer.com/chapter/10.1007/978-3-642-37247-6_35) (sorry, it's not free) approaches may suffer from precision issues. Example: * She's a thorny wildflower (metaphor - could easily be an idiom) * She's a diamond in the rough (idiom - could easily be a metaphor) Though, idioms will most likely be repeated if a dataset is large, whereas a "custom metaphor" is less likely to repeat. Additionally, some idioms (e.g. bite the bullet or break a leg) do not have readily available "interpretable semantics" that allow us to extract their intended meaning. For example, if one did not know the idiom "cut me some slack", one could think: "Slack implies loosening or to make less tight/taut. I was being very uptight. They probably want me to loosen up and not be so critical." Of course, the human understanding of it might happen in a flash and not follow such a delineated path. 
The idea is that some [NLP pipeline](https://www.quora.com/What-is-natural-language-processing-pipeline) might be constructible that satisfactorily handles idioms in some specific use cases (example of a [pipeline](https://www.aclweb.org/anthology/E17-3027)). For example, one module might attempt to process outliers like "diamond in the rough" which have said interpretable semantics. Though, something like "bite the bullet" may have to be labelled with the correct semantics. I've only scratched the surface of this. Natural language understanding is already a hard problem - and idioms are thus a tough task in a tough task. I hope that this motivates the reading of some more thorough articles. I have gathered some articles that can be used as a springboard into the literature. Here's a [source](https://nlp.stanford.edu/~muzny/docs/mz-emnlp2013.pdf) that uses a dictionary type approach to train the model to recognize idioms. Excerpt: > > For identification, we assume data of the form ${(⟨p\_i,d\_i⟩,y\_i) : i = 1...n}$ where $p\_i$ is the phrase associated with definition $d\_i$ and $y\_i ∈ \{literal, idiomatic\}$. > > > This [source](https://www.aclweb.org/anthology/R15-1087) provides pseudo-code for idiom extraction. This [source](https://pdfs.semanticscholar.org/6a4e/8ebd0979ae38d693528fe4da30f67db82768.pdf) describes a dataset to help solve the idiom difficulties. Upvotes: 4 [selected_answer]
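As a rough illustration of the dictionary-type identification setup described above (where phrases carry a `literal`/`idiomatic` label), here is a minimal sketch; the phrase table and its entries are invented purely for this example, not taken from the cited papers:

```python
# Tiny hand-labeled "dictionary" of idioms mapped to their stored meanings.
# A real system would learn this table from labeled (phrase, definition) data.
IDIOM_TABLE = {
    "take it easy": "relax; don't worry",
    "break the ice": "ease initial social tension",
    "bite the bullet": "endure something unpleasant",
    "diamond in the rough": "someone with hidden potential",
}

def classify(phrase: str) -> str:
    """Label a phrase 'idiomatic' if it appears in the lookup table, else 'literal'."""
    return "idiomatic" if phrase.lower().strip() in IDIOM_TABLE else "literal"

def normalize_idiom(phrase: str) -> str:
    """Replace a known idiom with its stored meaning, so a downstream
    literal translator never sees the idiomatic surface form."""
    return IDIOM_TABLE.get(phrase.lower().strip(), phrase)

print(classify("take it easy"))         # → idiomatic
print(classify("take the bus"))         # → literal
print(normalize_idiom("take it easy"))  # → relax; don't worry
```

In a pipeline, such a lookup (or a learned classifier replacing it) would run before literal translation, which is one way to avoid outputs like "Always with the quiet".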
2019/09/02
552
2,056
<issue_start>username_0: I have a simple text classifier, with the following structure:
```python
from tensorflow import keras

# train_x and train_y are the training inputs/labels prepared earlier
inputs = keras.layers.Input(shape=(len(train_x[0]),))
x = keras.layers.Dense(500, activation='relu')(inputs)
x = keras.layers.Dropout(0.5)(x)
x = keras.layers.Dense(250, activation='relu')(x)
x = keras.layers.Dropout(0.5)(x)
preds = keras.layers.Dense(len(train_y[0]), activation="sigmoid")(x)
model = keras.Model(inputs, preds)
```
When training it with 300,000 samples, with a batch size of 500, I get an accuracy value of .95 and loss of .22 in the first iteration, and the subsequent iterations are .96 and .11. Why does the accuracy grow so quickly, and then just stop growing?<issue_comment>username_1: As you have trained your model with a batch size of 500, the weights have been updated once per batch, i.e., 600 times (300,000/500) by the end of one epoch. So your model may well have generalized already. Check the predictions; if they look good, your model is ready. Upvotes: 2 <issue_comment>username_2: It actually depends on a couple of things here - 1. How many output classes do you have? If you have only 2 or 3 classes, it is a very easy task for the classifier that you have built. So, it is highly possible that convergence has occurred. 2. As @Djib2011 mentioned already, if your input training set is not balanced and is heavier with one of the output classes (95%), then this accuracy you see makes sense, but note that your model won't do well in production. 3. Do not try to evaluate your model on the basis of your training accuracy. Test it on data your model has never seen before and then evaluate the classification accuracy, making sure that your training/testing data is not heavy with one of the classes. Upvotes: 3 [selected_answer]<issue_comment>username_3: It can be normal, and there might be nothing wrong with your model. If there is a very strong and clear correlation in your data (good separability), then a network can achieve very high accuracy very fast. 
After reaching some value, learning gets harder. Upvotes: 1
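Following the advice in the accepted answer, a quick sanity check for whether a high training accuracy is just an artifact of class imbalance is to compare it against the majority-class baseline; the toy label array below is made up for illustration:

```python
from collections import Counter

def majority_baseline(labels):
    """Accuracy achieved by a trivial model that always predicts
    the most frequent class in the data."""
    counts = Counter(labels)
    return max(counts.values()) / len(labels)

# Toy labels: 95% of samples belong to class 0.
train_labels = [0] * 95 + [1] * 5
baseline = majority_baseline(train_labels)
print(f"majority-class baseline: {baseline:.2f}")  # prints 0.95
```

If your model's accuracy sits close to this baseline, it may simply be predicting the majority class; a held-out test set and per-class metrics will tell you more than training accuracy alone.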
2019/09/04
834
3,399
<issue_start>username_0: When watching the machine learning course on Coursera by <NAME>, in the logistic regression week, the cost function was a bit more complex than the one for linear regression, but definitely not that hard. But it got me thinking, why not use the same cost function for logistic regression? So, the cost function would be $\frac{1}{2m} \sum\_{i=1}^m|h(x\_i) - y\_i|^2$, where $h(x\_i)$ is our hypothesis function ($\text{sigmoid}(X\theta)$), $m$ is the number of training examples, and $x\_i$ and $y\_i$ are our $i$th training example?<issue_comment>username_1: The mean squared error (MSE), $J(\theta) = \frac{1}{2m}\sum\_{i=1}^m(h\_\theta(x\_i)-y\_i)^2$, is not as appropriate as a cost function for classification, given that the MSE makes assumptions about the data that are not appropriate for classification. Though, as an optimization objective, it is still possible to attempt to minimize MSE even in a classification problem, and thus still learn parameters $\theta$. The new cost function has better convergence characteristics, as it is more in line with the objective. See [link](https://rohanvarma.me/Loss-Functions/) for the precise mathematical formulation that explains these loss functions from a probabilistic perspective. Note that the absolute value is redundant because $\forall x:x^2\geq0$. I hope this clarifies the matter. Upvotes: 3 [selected_answer]<issue_comment>username_2: I mean, you technically could (it's not going to break or anything); however, cross-entropy is much better suited for classification, as it penalizes misclassification errors: have a look at the function: when you are wrong, the loss goes to infinity: [![enter image description here](https://i.stack.imgur.com/nHVdT.png)](https://i.stack.imgur.com/nHVdT.png) a sample is either from one class or another. MSE is designed for regression, where you have nuance: getting close to the target is sometimes good enough. 
You should try both, and you will see the performance will be much better for cross-entropy. Upvotes: 1 <issue_comment>username_3: Minimising MSE in a classification setting is perfectly reasonable, as it is also known as the [Brier Score](https://en.wikipedia.org/wiki/Brier_score) and is a [proper scoring rule](https://en.wikipedia.org/wiki/Scoring_rule#StrictlyProperScoringRules), which means that it is minimised if the network outputs the conditional probability of class membership. This is not unduly surprising, as minimising MSE leads to a model that outputs an estimate of the conditional mean of the target distribution, which for a 1-of-c coding is the conditional probability of class membership. You can even use MSE for training networks with logistic or softmax activation functions in the output layer so that they obey the usual constraints of being in the interval $[0,1]$ and summing to one. However, the MSE penalises very confident misclassifications much less harshly than the cross-entropy metric does. Whether this is a good or bad thing depends on the needs of the application. If you are mostly interested in the p=0.5 decision boundary, then you probably don't want model resources spent dealing with highly confident misclassifications, which are a long way from the decision boundary and have little effect on it. This is a large part of the justification for purely discriminative methods like the SVM. Upvotes: 0
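The point made in the answers above, that cross-entropy penalises confident misclassifications far more harshly than MSE, can be checked numerically with a quick sketch (my own example, not from the answers):

```python
import math

def mse(y, p):
    """Squared error for a single binary example with label y and prediction p."""
    return (p - y) ** 2

def cross_entropy(y, p):
    """Binary cross-entropy for a single example."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# True label is 1, but the model is increasingly (and wrongly) confident it is 0.
for p in (0.5, 0.1, 0.01, 0.001):
    print(f"p={p}: MSE={mse(1, p):.3f}, cross-entropy={cross_entropy(1, p):.3f}")

# MSE is bounded by 1 for p in [0, 1], while cross-entropy grows without
# bound (-log p -> infinity as p -> 0): the harsher penalty described above.
```

This is exactly why the choice matters near very confident errors, while both losses behave similarly near the decision boundary.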
2019/09/11
926
3,180
<issue_start>username_0: I've recently encountered different articles that recommend using the KL divergence instead of the MSE/RMSE (as the loss function) when trying to learn a probability distribution, but none of the articles give a clear reasoning for why the first is better than the others. Could anyone give me a strong argument why the KL divergence is suitable for this?<issue_comment>username_1: KL-divergence is a measure on probability distributions. It essentially captures the information loss between the ground-truth distribution and the predicted one. The L2-norm/MSE/RMSE doesn't do well with probabilities, because of the power operations involved in the calculation of the loss. Probabilities, being fractions under 1, are significantly affected by any power operations (square or root). Since we are summing the squares of differences of probabilities, the summed values are abnormally small, so the model barely learns anything: the random initialization itself starts with an abnormally small loss, which then stays almost constant. The L1 norm, on the other hand, does not involve any power operations, making it relatively acceptable. Loss functions such as the Kullback-Leibler divergence or the Jensen-Shannon divergence are preferred for probability distributions because of the statistical meaning they hold. KL-divergence, as mentioned before, is a statistical measure of the information loss between distributions; in other words, assuming $Q$ is the ground-truth distribution, the KL-divergence is a measure of how much $P$ deviates from $Q$. Also, for probability distributions, convergence is much stronger under measures of information loss such as the KL-divergence. More clarity on the motivation behind Kullback-Leibler can be read [here](https://math.stackexchange.com/q/90537). 
Upvotes: 2 <issue_comment>username_2: In the context of Variational Inference (VI): the KL allows you to move from the unknown posterior $p(z \mid x)$ to the known joint $p(z,x)=p(x|z)p(z)$ and optimize only the ELBO. You cannot do this with L2. $p(z|x)$ is the desired posterior, of which you cannot calculate the evidence (i.e., using Bayes' formula we can set $p(z|x) = \frac{p(x|z)p(z)}{\int\_z p(x|z)p(z)dz}$, and you can't calculate the integral in the denominator [also denoted by $p(x)$] due to its intractability). Now suppose $q$ is a variational distribution (e.g. a family of Gaussians which you can control); VI tries to approximate $p(z|x)$ by $q$ by minimizing their KL divergence. $$KL(q(z)||p(z|x)) = \int\_z q(z) \log \frac{q(z)}{p(z|x)}dz = \mathbb E\_q[\log q(z)]-\mathbb E\_q[\log \frac{p(x|z)p(z)}{p(x)}] =$$ $$ -\mathbb E\_q[\log p(x|z)p(z)] + \mathbb E\_q[\log q(z)] + \mathbb E\_q[\log p(x)] = -ELBO(q) + \log p(x) $$ Since you're only optimizing $q$ (it's the only thing you can control), you can discard the unknown and difficult-to-compute normalizing constant $p(x)$. If you were to use the (squared) L2 norm, you would get: $$\int\_z [q(z)-p(z|x)]^2dz = \int\_z [q^2(z)-2q(z)p(z|x)+p^2(z|x)]dz $$ While the 3rd term doesn't depend on $q$, the 2nd term does, and it also requires $p(x)$ to compute. Upvotes: 2 [selected_answer]
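To make the KL-versus-L2 contrast above concrete for simple discrete distributions, here is a small sketch (my own toy example, not part of the answers): the squared L2 distance stays bounded, while the KL divergence blows up when the second distribution puts almost no mass where the first has plenty:

```python
import math

def kl_divergence(q, p):
    """KL(q || p) for discrete distributions given as lists of probabilities."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def squared_l2(q, p):
    """Squared L2 distance between two discrete distributions."""
    return sum((qi - pi) ** 2 for qi, pi in zip(q, p))

q  = [0.5, 0.5]
p1 = [0.6, 0.4]      # close to q
p2 = [0.999, 0.001]  # puts almost no mass on the second outcome

print(kl_divergence(q, p1), squared_l2(q, p1))  # both small
print(kl_divergence(q, p2), squared_l2(q, p2))  # KL large, L2 still < 1
```

KL(q || p2) is large because q assigns probability 0.5 to an outcome that p2 considers nearly impossible, which matches the "information loss" reading of KL in the first answer.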