2018/01/20
<issue_start>username_0: In the [delta rule][1] the equation to adjust the weight with respect to the error is $$w\_{(n+1)}=w\_{(n)}-\alpha \times \frac{\partial E}{\partial w}$$ where $\alpha$ is the learning rate and $E$ is the error. The graph of $E$ vs $w$ would look like the one below, with $E$ on the $y$-axis and $w$ on the $x$-axis. [![enter image description here](https://i.stack.imgur.com/Ju05S.png)](https://i.stack.imgur.com/Ju05S.png) In other words, we can write $$\alpha \times \frac{\partial E}{\partial w}=w\_{(n)}-w\_{(n+1)}$$ I want to know: what is the proof that the gradient of a curve is equal/proportional to the distance between two coordinates on the $x$-axis? $\frac{\partial E}{\partial w}$ times a step is a small shift in $f(w)$, not in $w$. So why should the difference between $w\_{(n+1)}$ and $w\_{(n)}$ be equal to a shift in $f(w)$? I found [a similar question](https://stats.stackexchange.com/q/305516/82135), but the accepted answer doesn't have a proof.<issue_comment>username_1: Don't think of $w\_{(n)}-w\_{(n+1)}$ as being proportional to something. Think about it this way: I'm now at $w\_{(n)}$. Where do I want to be at the next timestep, so that the error decreases? For that, I need to know how the error changes when I make small steps to the left or right of $w\_{(n)}$. If $E$ increases as I increase $w$ (that is, if $\frac{\partial E}{\partial w}>0$), then obviously I would want to move a little bit to the left. In other words, $w\_{n+1}<w\_{n}$, or $w\_{n+1}-w\_{n}<0$. On the other hand, if the derivative were negative, you know that you should move right to reduce the error a little bit: $w\_{n+1}-w\_{n}>0$. So, basically, your step should have the opposite sign of the derivative: $$w\_{n+1}-w\_{n} \propto-\frac{\partial E}{\partial w}$$ $\alpha$, the learning rate, is just the constant of proportionality. Caution: think about small values for this rate, not big numbers. Taking a huge step can cause you to overshoot the minimum point. 
Upvotes: 3 [selected_answer]<issue_comment>username_2: $$\text{Let } x\_{(n)} \text{ be a point on the } x\text{-axis where } f'(x)=0,\ \text{and let } x\_{(n+h)} \text{ be any other arbitrary point.}\\ \therefore\ \frac{f'(x\_{(n+h)})}{|f'(x\_{(n+h)})|}=\begin{cases} 1 & \text{if } h>0\\ 0 & \text{if } h=0\\ -1 & \text{if } h<0 \end{cases}\\ \text{Similarly, } \frac{x\_{(n)}-x\_{(n+h)}}{|x\_{(n)}-x\_{(n+h)}|}=\begin{cases} 1 & \text{if } h<0\\ 0 & \text{if } h=0\\ -1 & \text{if } h>0 \end{cases}\\ \text{or, } \frac{x\_{(n)}-x\_{(n+h)}}{|x\_{(n)}-x\_{(n+h)}|}=-\frac{f'(x\_{(n+h)})}{|f'(x\_{(n+h)})|}\\ \therefore\ x\_{(n)}=x\_{(n+h)}-\eta \times f'(x\_{(n+h)}) \qquad \left[\text{where } \eta=\frac{|x\_{(n)}-x\_{(n+h)}|}{|f'(x\_{(n+h)})|}\right]$$ Upvotes: 0
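The sign argument in these answers can be checked numerically. A minimal sketch of the update rule on a made-up quadratic error (the error function, starting point, and learning rate are chosen purely for illustration):

```python
# Toy gradient descent on E(w) = (w - 3)**2, whose derivative is dE/dw = 2*(w - 3).
# Each step moves w against the sign of the derivative, so w approaches the minimum at w = 3.
alpha = 0.1   # learning rate (kept small to avoid overshooting)
w = 0.0       # arbitrary starting weight
for _ in range(100):
    grad = 2 * (w - 3)
    w = w - alpha * grad  # w_(n+1) = w_(n) - alpha * dE/dw
print(w)  # converges toward 3
```

Starting from the other side of the minimum (e.g. `w = 10.0`) flips the sign of the derivative and the steps move left instead, which is exactly the behavior described above.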
2018/01/22
<issue_start>username_0: I am trying to build a neural network that takes a single string, e.g. "dog", as input and outputs 50 or so related hashtags such as "#pug, #dogsarelife, #realbff". I have thought of using a classifier, but because there are going to be millions of hashtags to choose the optimal ones from, and millions of possible words from the English dictionary, it is virtually impossible to look up the probability of each. It is going to learn by analyzing Twitter posts' text and their hashtags, and find which hashtags go with which specific words.<issue_comment>username_1: You can try using Mallet (which uses Gibbs sampling) or gensim LDA (which uses Dirichlet priors) to model the problem as topics (hashtags) in different documents (tweets). <https://towardsdatascience.com/topic-modeling-and-latent-dirichlet-allocation-in-python-9bf156893c24> has a nice example. Upvotes: 0 <issue_comment>username_2: I think your intuition about a classifier being the wrong approach is a good one. This looks like a great use-case for word vectors, a "self-supervised" learning technique that maps tokens (e.g. "dog") to vectors (which might have anywhere between 50 and 500 dimensions). Facebook open-sourced a particularly excellent tool for training word vectors [called FastText](https://github.com/facebookresearch/fastText); you could use this to embed tokens and hashtags alike into a word embedding space. You should find that the vector for "dog" ends up "close to" (at a small cosine distance from) the vectors for related hashtags. Given a word, you can easily look up its vector (after training on your corpus, of course), but how do you find other vectors that are close to it? If you want to do better than "brute force" and you need to check against a large number of (vectors for) hashtags, you should consider using [Facebook's excellent FAISS library for fast similarity search](https://github.com/facebookresearch/faiss) to find the closest hashtags. 
Upvotes: 0 <issue_comment>username_3: > > **Here is a good approach to achieve the task you want:** > > > **Step 1-** Compute the vector representation (i.e. embeddings) of all the words you want to include. There are many algorithms out there to achieve this task. [![Step 1- Compute Embeddings](https://i.stack.imgur.com/Ifqc4m.png)](https://i.stack.imgur.com/Ifqc4m.png) **Step 2-** Choose the #words corresponding to your input word (e.g. dog) by applying K-Nearest Neighbors (KNN) or similar algorithms. You basically compute the distances using the embeddings. [![Step 2 - Apply KNN](https://i.stack.imgur.com/jI8Ptm.jpg)](https://i.stack.imgur.com/jI8Ptm.jpg) > > **Steps Detailed:** > > > **Step 1-** In NLP we represent human language as a vector of values instead of a set of characters in order to process it. To do so, there are 3 approaches in the literature: **- Word Level Embeddings:** Represent each word as a vector of values. Algorithms: [Word2Vec by Google](https://medium.freecodecamp.org/how-to-get-started-with-word2vec-and-then-how-to-make-it-work-d0a2fca9dad3) ([paper](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf)), [fastText by Facebook](https://blog.manash.me/how-to-use-pre-trained-word-vectors-from-facebooks-fasttext-a71e6d55f27), [GloVe by Stanford University](https://nlp.stanford.edu/projects/glove/) ([paper](https://nlp.stanford.edu/pubs/glove.pdf)) ... **- Character Level Embeddings:** Represent each character as a vector of values. Algorithms: [ELMo](https://towardsdatascience.com/elmo-contextual-language-embedding-335de2268604) ([paper](https://arxiv.org/abs/1802.05365)) ... **- Sentence Level Embeddings:** Represent a sentence as a vector of values. Algorithms: [Universal Sentence Encoder by Google](https://tfhub.dev/google/universal-sentence-encoder/1) ([paper](https://arxiv.org/abs/1803.11175)) ... 
In your case I suggest using [GloVe](https://nlp.stanford.edu/projects/glove/) or [ELMo](https://towardsdatascience.com/elmo-contextual-language-embedding-335de2268604) if you have only words, and [Universal Sentence Encoder](https://tfhub.dev/google/universal-sentence-encoder/1) if you have words and sentences. Compute all your word embeddings and move to the next step. **Step 2-** Now that you have your embeddings, compute the distances between all your words *(use Euclidean, Minkowski or any other distance)*. Note that the computation may take some time, but it will only be executed once. Now, each time you have a word (e.g. dog), you apply the **KNN** algorithm using the computed distances and you will get the words most related to it. > > **Note**: There is no need to compute distances and apply KNN if you use Universal Sentence Encoder, as the similarity is easily computed using a dot product of the embeddings. See my quick implementation example [here](https://ai.stackexchange.com/questions/4965/sentence-similarity-in-python/12232#12232) for details. > > > Upvotes: 1
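The embed-then-nearest-neighbor idea from this thread can be sketched in a few lines. The vectors below are made up for illustration (real systems would use trained GloVe/fastText embeddings), and the vocabulary is tiny so brute-force cosine similarity is fine:

```python
import numpy as np

# Hypothetical toy embeddings; in practice these come from a trained model.
emb = {
    "dog":          np.array([0.90, 0.10, 0.00]),
    "#pug":         np.array([0.80, 0.20, 0.10]),
    "#dogsarelife": np.array([0.85, 0.15, 0.05]),
    "#cars":        np.array([0.00, 0.90, 0.40]),
}

def nearest(word, k=2):
    """Return the k vocabulary entries with highest cosine similarity to `word`."""
    q = emb[word]
    def cos(v):
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
    scores = {w: cos(v) for w, v in emb.items() if w != word}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(nearest("dog"))  # the dog-related hashtags rank above "#cars"
```

At millions of hashtags this brute-force loop is what FAISS's approximate nearest-neighbor indexes replace.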
2018/01/22
<issue_start>username_0: Forgive what might be a basic question. I'm just experimenting with ML / AI, and I have a small problem set I'd like to see if it can be solved with ML / AI. Basically, given a set of objects with multiple features, I'd like to create a process for automatically recommending one to a user. I'm thinking that some sort of clustering algorithm may be the best approach. However, one main challenge I'm trying to wrap my head around is that I don't know in advance how many distinct clusters will evolve... There may be scenarios where Feature X is really important, but other scenarios where a user will say Feature Y is important. Secondly, what is my input set? For each training sample, I will have 1 selected object and N-1 unselected objects. But I don't want to "train" that the unselected objects are "bad", because they could be selected in a future training example. Finally, I don't have a large training set already, so I would like to use feedback (user input, "This was a bad choice" or "Use this object instead.") from the process to further refine the algorithm. Is this feasible? Are there any established patterns for this sort of process? Thanks in advance.
2018/01/22
<issue_start>username_0: I am trying to develop a neural network which can identify design features in CAD models (i.e. slots, bosses, holes, pockets, steps). The input data I intend to use for the network is an n x n matrix (where n is the number of faces in the CAD model). A '1' in the top right triangle of the matrix represents a convex relationship between two faces, and a '1' in the bottom left triangle represents a concave relationship. A zero in both positions means the faces are not adjacent. The image below gives an example of such a matrix. [![enter image description here](https://i.stack.imgur.com/Soj7K.png)](https://i.stack.imgur.com/Soj7K.png) Let's say I set the maximum model size to 20 faces and apply padding for anything smaller than that in order to make the inputs to the network a constant size. I want to be able to recognise 5 different design features and would therefore have 5 output neurons - [slot, pocket, hole, boss, step] Would I be right in saying that this becomes a sort of 'pattern recognition' problem? For example, if I supply the network with a number of training models - along with labels which describe the design feature which exists in the model, would the network learn to recognise specific adjacency patterns represented in the matrix which relate to certain design features? I am a complete beginner in machine learning and I am trying to get a handle on whether this approach will work or not - if any more info is needed to understand the problem leave a comment. Any input or help would be appreciated, thanks.<issue_comment>username_1: > > Would I be right in saying that this becomes a sort of 'pattern recognition' problem? > > > Technically, yes. In practice: no. I think you might be interpreting the term "pattern recognition" a bit too literally. 
Even though [wikipedia](https://en.wikipedia.org/wiki/Pattern_recognition) defines pattern recognition as "a branch of machine learning that focuses on the recognition of patterns and regularities in data", it's not about solving problems that can "easily" be deduced by logical reasoning. E.g. you say that > > A '1' in the top right triangle in the matrix represents a convex relationship between two faces and a '1' in the bottom left triangle represents a concave relationship > > > This is true *always*. In a typical machine learning situation, you wouldn't (usually) have this prior knowledge. At least not to the extent that it would be tractable to "solve by hand". Pattern recognition is conventionally a statistical approach to solving problems when they get too complex to analyze with conventional logical reasoning and simpler regression models. Wikipedia also states (with a source) that pattern recognition is "in some cases considered to be nearly synonymous with machine learning". That being said: you *could* use pattern recognition on this problem. However, it seems like overkill in this case. Your problem, as far as I can understand, has an actual "analytical" solution. That is: you can, by logic, get a 100% correct result all the time. Machine learning algorithms could, in theory, also do this, and in that case this branch of ML is referred to as Meta Modelling[1]. > > For example, if I supply the network with a number of training models - along with labels which describe the design feature which exists in the model, would the network learn to recognise specific adjacency patterns represented in the matrix which relate to certain design features? > > > In a word: Probably. Best way to go? Probably not. Why not, you ask? There is always the possibility that your model doesn't learn exactly what you want. In addition, you have many challenges like [overfitting](https://en.wikipedia.org/wiki/Overfitting) that you'd need to concern yourself with. 
It's a statistical approach, as I said. Even if it classifies all your test data 100% correctly, there is no way (unless you check the insanely intractable maths) to be 100% sure that it will always classify correctly. I further suspect that you're also likely to end up spending more time working on your model than the time it would take to just deduce the logic. I also disagree with @Bitzel: I would not use a CNN (convolutional neural network) on this. CNNs are used when you want to look at specific parts of the matrix, and the relation and connectedness between the pixels are important — for example on images. Since you only have 1s and 0s, I strongly suspect that a CNN would be vastly overkill. And with all the sparsity (many zeros) you'd end up with a lot of zeros in the convolutions. I'd actually suggest a plain vanilla (feed-forward) neural network, which, despite the sparsity, I think will be able to do this classification pretty easily. Upvotes: 2 <issue_comment>username_2: **The Problem** The training data for the proposed system is as follows. * A Boolean matrix representing the surface adjacency of a solid geometric design * Also represented in the matrix is differentiation between interior and exterior angles of edges * Labels (described below) Convex and concave are not the correct terms to describe surface gradient discontinuities. An interior edge, such as made by an end mill, is not actually a concave surface. A surface gradient discontinuity, from the point of view of the idealized solid model, has a zero radius. An exterior edge is not a convex portion of a surface for the same reason. The intended output of the trained system proposed is a Boolean array indicating the presence of specific solid geometric design features. * One or more slots * One or more bosses * One or more holes * One or more pockets * One or more steps This array of Boolean values is also used as labels for training. 
**Possible Caveats in Approach** There are mapping incongruities in this approach. They fall roughly into one of four categories. * Ambiguity created by mapping the topology in the CAD model to the matrix — solid geometries that have primary features not captured in the matrix encoding proposed * CAD models for which no matrix exists — cases where edges change from inner to outer angles or emerge from curvature * Ambiguity in the identification of features from the matrix — overlap between features that could identify the pattern in the matrix * Matrices describing features that are not among the five — this could become a data loss issue downstream in development These are just a few examples of topology issues that may be common in some mechanical design domains and obfuscate the data mapping. * A hole has the same matrix as a box frame with internal radii. * External radii may lead to oversimplification in the matrix. * Holes that intersect with edges may be indistinguishable from other topology in matrix form. * Two or more intersecting through holes may present adjacency ambiguities. * Flanges and ribs supporting round bosses with center holes may be indistinguishable. * A ball and a torus have the same matrix. * A disk and a band with a hexagonal cross-section and a 180 degree twist have the same matrix. These possible caveats may or may not be of concern for the project defined in the question. Setting a maximum face count balances efficiency with reliability but limits usability. There may be approaches that leverage one of the variants of RNNs, which may permit coverage of arbitrary model sizes without compromising efficiency for simple geometries. Such an approach may involve splaying the matrix out as a sequence for each example, applying a well-conceived normalization strategy to each matrix. Padding may be effective if there are no tight constraints on training efficiency and a practical maximum for the number of faces exists. 
**Considering Count and Certainty as Output** To handle some of these ambiguities, a certainty $\in [0.0, 1.0]$ could be the range of the activation functions of the output cells without changing the labeling of the training data. The possibility of using a non-negative integer output, as an unsigned binary representation created by aggregating multiple binary output cells, instead of a single Boolean per feature, should at least be considered as well. Downstream, the capability to count features may become important. This leads to five realistic permutations to consider that could be produced by the trained network for each feature of each solid geometry model. * Boolean indicating existence * Non-negative integer indicating instance count * Boolean and real certainty of one or more instances * Non-negative integer representing most likely instance count and real certainty of one or more instances * Non-negative real mean and standard deviation **Pattern Recognition or What?** In the current culture, applying an artificial network to this problem is not normally described as pattern recognition in the sense of computer vision or audio processing. It is thought of as learning a complex functional mapping via convergence in the rough direction of an ideal mapping, given proximity, accuracy, and reliability criteria. The parameters of the function $f$, given inputs $\mathcal{X}$, are driven toward the associated labels $\mathcal{Y}$ during training. $$f(\mathcal{X}) \implies \mathcal{Y}$$ If the concept class being functionally approximated by the network is sufficiently represented in the sample used for training, and the sample of training examples is drawn in the same way as the target application will later draw, the approximation is likely to be sufficient. In the world of information theory, there is a blurring of the distinction between pattern recognition and functional approximation, as there should be in that higher level AI conceptual abstraction. 
**Feasibility** > > Would the network learn to map matrices to [the array of] Boolean [indicators] of design features? > > > If the above listed caveats are acceptable to the project stakeholders, the examples are well labeled and provided in sufficient number, and the data normalization, loss function, hyper-parameters, and layer arrangements are set up well, it is likely that convergence will occur during training and that a reasonable automated feature identification system will result. Again, its usability hinges on new solid geometries being drawn from the same concept class as the training examples. System reliability relies on training being representative of later use cases. Upvotes: 0
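The fixed-size encoding the question proposes (pad the n x n adjacency matrix to 20 faces, flatten, feed to a network with 5 output neurons) can be sketched as follows. This is an untrained, illustrative skeleton only — the weights are random, and the shapes, names, and toy matrix are made up, not from the original posts:

```python
import numpy as np

MAX_FACES = 20
FEATURES = ["slot", "pocket", "hole", "boss", "step"]

def encode(adj):
    """Zero-pad an n x n face-adjacency matrix to MAX_FACES x MAX_FACES and flatten it."""
    n = adj.shape[0]
    padded = np.zeros((MAX_FACES, MAX_FACES))
    padded[:n, :n] = adj
    return padded.ravel()  # fixed-length input vector of 400 values

rng = np.random.default_rng(0)
# A single linear layer standing in for a trained feed-forward network.
W = rng.normal(scale=0.01, size=(MAX_FACES * MAX_FACES, len(FEATURES)))

def predict(adj):
    logits = encode(adj) @ W
    return 1 / (1 + np.exp(-logits))  # per-feature sigmoid "certainty" in (0, 1)

# Tiny made-up 4-face model: one convex (upper triangle) / concave (lower) pair.
toy = np.zeros((4, 4))
toy[0, 1] = 1
toy[1, 0] = 1
print(predict(toy).shape)  # one score per design feature
```

The sigmoid outputs correspond to the per-feature certainty option discussed above; a trained version would add hidden layers and fit `W` against labeled examples.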
2018/01/23
<issue_start>username_0: I am new to machine learning and AI, so forgive me if this is obvious. I was talking with a friend about how to solve this problem, and neither of us could figure out how to do it. Say I have a grid area of 100x100 blocks, and I want a robot to build a horizontal 100x100 grid, 3 blocks high. I am given a random but known starting surface, always 100x100, but the height of the random surface can vary from 1 to 5 blocks. I have an extra reserve of blocks I can pick up, so I don't have to worry about running out. The robot can move in any direction, even diagonally at some cost penalty. The robot can obviously move a block from a 4-high column to fill in a 2-high one, so that each reaches the design height of 3. This sounds like a reinforcement learning problem, but would anyone be able to explain in more detail how I would do this, to a) minimize the number of moves, and b) get to the design surface.
Pattern recognition is conventionally a statistical approach to solving problems when they get too complex to analyze with conventional logical reasoning and simpler regression models. Wikipedia also states (with a source) that pattern recognition "in some cases considered to be nearly synonymous with machine learning". That being said: you *could* use pattern recognition on this problem. However, it seems like overkill in this case. Your problem, as far as I can understand, has an actual "analytical" solution. That is: you can, by logic, get a 100% correct result all the time. Machine learning algorithms could, in theory, also do this, and in that case, and this branch of ML is referred to as Meta Modelling[1]. > > For example, if I supply the network with a number of training models - along with labels which describe the design feature which exists in the model, would the network learn to recognise specific adjacency patterns represented in the matrix which relate to certain design features? > > > In a word: Probably. Best way to go? Probably not. Why not, you ask? There is always the possibility that your model doesn't learn exactly what you want. In addition you have many challenges like [overfitting](https://en.wikipedia.org/wiki/Overfitting) that you'd need to concern yourself about. It's a statistical approach, as I said. Even if it classifies all your test data as 100% correct, there is no way (unless you check the insanely intractable maths) to be 100% sure that it will always classify correctly. I further suspect that you're also likely to end up spending more time working on your model then the time it would take to just deduce the logic. I also disagree with @Bitzel: I would not do a CNN (convolutional neural network) on this. CNNs are used when you want to look at specific parts of the matrix, and the relation and connectedness between the pixels are important — for example on images. 
Since you only have 1s and 0s, I strongly suspect that a CNN would be vastly overkill. And with all the sparsity (many zeros) you’d end up with a lot of zeros in the convolutions. I'd actually suggest a plain vanilla (feed forward) neural network, which, despite the sparsity, I think will be able to do this classification pretty easily. Upvotes: 2 <issue_comment>username_2: **The Problem** The training data for the proposed system is as follows. * A Boolean matrix representing the surface adjacency of a solid geometric design * Also represented in the matrix is differentiation between interior and exterior angles of edges * Labels (described below) Convex and concave are not the correct terms to describe surface gradient discontinuities. An interior edge, such as made by an end mill, is not actually a concave surface. Surface gradient discontinuity, from the point of view of the idealized solid model, has a zero radius. An exterior edge is not a convex portion of a surface for the same reason. The intended output of the trained system proposed is a Boolean array indicating the presence of specific solid geometric design features. * One or more slot * One or more boss * One or more holes * One or more pockets * One or more steps This array of Boolean values is also used as labels for training. **Possible Caveats in Approach** There are mapping incongruities in this approach. They fall roughly into one of four categories. 
* Ambiguity created by mapping the topology in the CAD model to the matrix — solid geometries that have primary not captured in the matrix encoding proposed * CAD models for which no matrix exists — cases where edges change from inner to outer angles or emerge from curvature * Ambiguity in the identification of features from the matrix — overlap between features that could identify the pattern in the matrix * Matrices describing features that are not among the five — this could become a data loss issue downstream in development These are just a few examples of topology issues that may be common in some mechanical design domains and obfuscate the data mapping. * A hole has the same matrix as a box frame with internal radii. * External radii may lead to oversimplification in the matrix. * Holes that intersect with edges may be indistinguishable from other topology in matrix form. * Two or more intersecting through holes may present adjacency ambiguities. * Flanges and ribs supporting round bosses with center holes may be indistinguishable. * A ball and a torus have the same matrix. * A disk and band with a hexagonal cross with a 180 degree twist have the same matrix. These possible caveats may or may not be of concern for the project defined in the question. Setting a face size balances efficiency with reliability but limits usability. There may be approaches that leverage one of the variants of RNNs, which may permit coverage of arbitrary model sizes without compromising efficiency for simple geometries. Such an approach may involve splaying the matrix out as a sequence for each example, applying a well conceived normalization strategy to each matrix. Padding may be effective if there are no tight constraints on training efficiency and a practical maximum for number of faces exists. 
**Considering Count and Certainty as Output** To handle some of these ambiguities, a certainty $\in [0.0, 1.0]$ could be the range of the activation functions of the output cells without changing the labeling of the training data. The possibility of using a non-negative integer output, as an unsigned binary representation created by aggregating multiple binary output cells, instead of a single Boolean per feature should be at least considered as well. Downstream, the capability to count features may become important. This leads to five realistic permutations to consider, that could be produced by the trained network for each feature of each solid geometry model. * Boolean indicating existence * Non-negative integer indicating instance count * Boolean and real certainty of one or more instance * Non-negative integer representing most likely instance count and real certainty of one or more instances * Non-negative real mean and standard deviation **Pattern Recognition or What?** In the current culture, applying an artificial network to this problem is not normally described as pattern recognition in the sense of computer vision or audio processing. It is thought of as learning a complex functional mapping via convergence in the rough direction of an idea mapping, given proximity, accuracy, and reliability criteria. The parameters of the function $f$, given inputs $\mathcal{X}$, are driven toward the associated labels $\mathcal{Y}$ during training. $$f(\mathcal{X}) \implies \mathcal{Y}$$ If the concept class being functionally approximated by the network is sufficiently represented in the sample used for training and the sample of training examples is drawn in the same way as the target application will later draw, the approximation is likely to be sufficient. In the world of information theory, there is a blurring of the distinction between pattern recognition and functional approximation, as there should be in that higher level AI conceptual abstraction. 
**Feasibility** > > Would the network learn to map matrices to [the array of] Boolean [indicators] of design features? > > > If the above-listed caveats are acceptable to the project stakeholders, the examples are well labeled and provided in sufficient number, and the data normalization, loss function, hyper-parameters, and layer arrangements are set up well, it is likely that convergence will occur during training and that a reasonable automated feature identification system will result. Again, its usability hinges on new solid geometries being drawn from the concept class like the training examples were. System reliability relies on training being representative of later use cases. Upvotes: 0
2018/01/23
1,476
5,089
<issue_start>username_0: Below is a quote from [CS231n](https://cs231n.github.io/convolutional-networks/): > > Prefer a stack of small filter CONV to one large receptive field CONV layer. Suppose that you stack three 3x3 CONV layers on top of each other (with non-linearities in between, of course). In this arrangement, each neuron on the first CONV layer has a 3x3 view of the input volume. A neuron on the second CONV layer has a 3x3 view of the first CONV layer, and hence by extension a 5x5 view of the input volume. Similarly, a neuron on the third CONV layer has a 3x3 view of the 2nd CONV layer, and hence a 7x7 view of the input volume. Suppose that instead of these three layers of 3x3 CONV, we only wanted to use a single CONV layer with 7x7 receptive fields. These neurons would have a receptive field size of the input volume that is identical in spatial extent (7x7), but with several disadvantages > > > My visualized interpretation: [![image](https://i.stack.imgur.com/KZy6R.png)](https://i.stack.imgur.com/KZy6R.png) How can you see through the first CNN layer from the second CNN layer and see a 5x5 sized receptive field? There were no previous comments stating all the other hyperparameters, like input size, steps, padding, etc. which made this very confusing to visualize. --- Edited: I think I found the [answer](https://medium.com/@nikasa1889/a-guide-to-receptive-field-arithmetic-for-convolutional-neural-networks-e0f514068807). BUT I still don't understand it. In fact, I am more confused than ever.<issue_comment>username_1: It is really easy to visualize the growth in the receptive field of the input as you go deep into the CNN layers if you consider a small example. Let's take a simple example: The dimensions are in the form of $\text{channels} \times \text{height} \times \text{width}$. 
* The input image $I$ is a $3 \times 5 \times 5$ matrix * The first convolutional layer's kernel $K\_1$ has shape $3 \times 2 \times 2$ (we consider only 1 filter for simplicity) * The second convolutional layer's kernel $K\_2$ has shape $1 \times 2 \times 2$ * Padding $P = 0$ * Stride $S = 1$ The output dimensions $O$ are calculated by the following formula taken from the lecture CS231n. $$O= (I - K + 2P)/S + 1$$ When you do a convolution of the input image with the first filter $K\_1$, you get an output of shape $1 \times 4 \times 4$ (this is the activation of the CONV1 layer). The receptive field for this is the same as the kernel size ($K\_1$), that is, $2 \times 2$. When this layer (of shape $1 \times 4 \times 4$) is convolved with the second kernel (CONV2) $K\_2$ ($1 \times 2 \times 2$), the output would be $1 \times 3 \times 3$. The receptive field for this would be the $3 \times 3$ window of the input because you have already accumulated the sum of the $2 \times 2$ window in the previous layer. Considering your example of three CONV layers with $3 \times 3$ kernels is also similar. The first layer activation accumulates the sum of all the neurons in the $3 \times 3$ window of the input. When you further convolve this output with a kernel of $3 \times 3$, it will accumulate all the outputs of the previous layers covering a bigger receptive field of the input. This observation comes in line with the argument that deeper layers learn more intricate features like facial expressions, abstract concepts, etc. because they cover a larger receptive field of our original input image. Upvotes: 4 [selected_answer]<issue_comment>username_2: The problem is in your diagram. Here are the steps to get to a 5x5 receptive field. 
Here is your diagram, redone slightly: [![The 3x3 input field is summed up (by the weights), and passed through a nonlinear activation function](https://i.stack.imgur.com/lEe45.jpg)](https://i.stack.imgur.com/lEe45.jpg) Notice that the new unit takes a weighted sum of the 9 pixels in the input, and then applies a rectified linear nonlinearity. Now, there are more of these, creating three new numbers computed from that part of the image. Each one slides over by one pixel: [![enter image description here](https://i.stack.imgur.com/jl3lW.jpg)](https://i.stack.imgur.com/jl3lW.jpg) We repeat this process going down three pixels as well, and then finally, we have a new 3x3 input field: [![enter image description here](https://i.stack.imgur.com/68ib4.jpg)](https://i.stack.imgur.com/68ib4.jpg) Notice that the new unit on the right now gets input from a 5x5 input field. I hope this helps! Upvotes: 2 <issue_comment>username_3: The intention of the referred text is to reason out the disadvantage of equivalent-merged-single-convolution-layer over multiple [CONV -> RELU]\*N layers. In the given scenario, if 2 layers of 3x3 filters were to be replaced by an equivalent single layer then this equivalent layer would need a filter with a receptive field of size 5x5. Similarly, an equivalent layer filter would need its receptive field to be of size 7x7 to compress 3 layers of 3x3 filters. Note that the most obvious disadvantage would be missing out on modeling non-linearity. Upvotes: 0
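The receptive-field arithmetic in the answers above can be checked with a short sketch; the helper below is illustrative, using the standard recurrence r ← r + (k − 1) · (product of earlier strides):

```python
def receptive_field(kernel_sizes, strides=None):
    """Receptive field (in input pixels) after a stack of conv layers."""
    if strides is None:
        strides = [1] * len(kernel_sizes)
    r, jump = 1, 1  # receptive field so far and cumulative stride
    for k, s in zip(kernel_sizes, strides):
        r += (k - 1) * jump
        jump *= s
    return r

print(receptive_field([3]))        # 3
print(receptive_field([3, 3]))     # 5
print(receptive_field([3, 3, 3]))  # 7 -- same extent as a single 7x7 layer
```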
2018/01/23
460
2,005
<issue_start>username_0: I want to develop (in Java) a voice plugin for Eclipse on a Mac that helps me jot down high-level classes and stub methods. For example, I would like to command it to create a class that inherits from `X` and add a method that returns `String`. Could somebody help me point out the right material to learn to achieve that? I don't mind using an existing solution if it exists. As far as I understand, I would have to use some Siri interface and use nltk to convert the natural text into commands. Maybe there's some chatbot library that saves me some boilerplate NLP code to directly jump on to writing grammar or selecting sentence patterns.<issue_comment>username_1: While you can use NLTK for analyzing and parsing the text obtained from the speech to text interface (e.g. Siri), there are higher level APIs available for this. The class of problem you are trying to solve in NLP is "intent detection". There are several open source and commercial APIs available for this including Amazon Alexa, Google Cloud Natural Language, Azure, as well as libraries like RASA NLU, etc. The high level flow of your program will be: * Record/receive spoken audio * Convert audio speech to text * Detect intent of the text command using an intent detection library * Use the intent to feed a script/automation that generates the code in your IDE Upvotes: 1 <issue_comment>username_2: You could implement a simple speech-to-text system that translates your voice line by line to code, but it would be of no use. You can't express code in a line-by-line manner. Coding is a highly iterative process: at first you come up with a rough sketch, to which you add details later on, and from an NLP point of view this is a highly ambitious project. At heart, almost all AI techniques (neural networks) are functions that map one domain to another; you can't map natural language sentences to instructions in code. However, you can implement such a system for a small language like LOGO. Upvotes: 0
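The pipeline in the first answer could be prototyped with a toy intent detector; the intent names, patterns, and generated Java stubs below are made up for illustration and are not a real plugin API:

```python
import re

# Hypothetical intent patterns for two voice commands
PATTERNS = {
    "create_class": re.compile(r"create a class (\w+) that inherits from (\w+)"),
    "add_method":   re.compile(r"add a method (\w+) that returns (\w+)"),
}

def handle(text):
    """Map recognized text to a Java stub (toy intent detection)."""
    for intent, pattern in PATTERNS.items():
        m = pattern.search(text.lower())
        if not m:
            continue
        if intent == "create_class":
            name, parent = m.group(1), m.group(2)
            return f"public class {name.capitalize()} extends {parent.capitalize()} {{\n}}"
        if intent == "add_method":
            name, rtype = m.group(1), m.group(2)
            return f"public {rtype.capitalize()} {name}() {{\n    return null;\n}}"
    return None  # no intent matched

print(handle("create a class dog that inherits from animal"))
```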
2018/01/24
931
3,423
<issue_start>username_0: Before I start, I want to let you know that I am completely new to the field of deep learning! Since I need a new graphics card either way (gaming you know) I am thinking about buying the GTX 1060 with 6GB or the 1070 ti with 8GB. Because I am not rich, basically I am a pretty poor student ;), I don't want to waste my money. I don't need deep learning for my studies I just want to dive into this topic because of personal interest. What I want to say is that I can wait a little bit longer and don't need the results as quickly as possible. Can I do deep learning with the 1060 (6GB seem to be very limiting, according to some websites) or the 1070 ti? Is the 1070 ti overkill for a personal hobby deep learner? Or should I wait for the new generation Nvidia graphics card?<issue_comment>username_1: Regarding specific choices I can't recommend, but if you are completely new, you should probably learn/code some more until you get a GPU. There is a lot to learn in machine learning before GPU speedups make a significant difference, and until then doing the computations on any old CPU would be just fine, especially if you are just starting since you won't be doing anything too complex. You will know when computational resources are your main bottleneck, and until then it shouldn't really matter too much. Or, you could also rent computing power from say, [AWS](https://aws.amazon.com/machine-learning/amis/) or [Google](https://cloud.google.com/products/) Upvotes: 3 <issue_comment>username_2: I don't think you need to invest in any kind of GPU unless you're familiar with the computations required for the task you want to achieve using deep learning. Also, by the time you've sufficiently mastered Deep Learning to a point where you can actually make the most of your GPU, there will be new products in the market. So until then I suggest you use your CPU for doing little tasks such as Regression etc. 
You can always use the free credit offered by the various cloud companies for your tasks Upvotes: 2 <issue_comment>username_3: Given that you're a student doing this out of personal interest and wanting to do some gaming on the side, I'd suggest the GTX 1060 6GB since at present the GTX 1070Ti is overpriced due to crypto miners (this will date the answer, but for reference the 1060 is going for ~GBP340, the 1070Ti for ~GBP600; two other options are the 1050Ti 4GB for ~GBP160 or the vanilla 1080 at ~GBP650). ['Which GPU...'](http://timdettmers.com/2017/04/09/which-gpu-for-deep-learning/) by <NAME> is very helpful, as is ['Picking a GPU...'](https://blog.slavv.com/picking-a-gpu-for-deep-learning-3d4795c273b9) by <NAME>, especially the summaries at the end for different use cases. As you're not looking at spending a huge amount of money, the 1060 seems like a good compromise as the 1050Ti might just leave you with a disappointing gaming experience. Finding a used 1070 is also suggested, but you'd need to be comfortable with that. Other answers have mentioned the cloud, but that doesn't help with your gaming. If you want to save some cash while you're waiting for the next gen of cards, take advantage of your student status on [AWS educate](https://aws.amazon.com/education/awseducate/) or [Azure on MS Imagine](https://imagine.microsoft.com/en-us) - the [GitHub student dev pack](https://education.github.com/pack) is a good package. Upvotes: 3
2018/01/24
1,029
3,770
<issue_start>username_0: Imagine two languages that have only these words: ``` Man = 1, deer = 2, eat = 3, grass = 4 ``` And you would form all sentences possible from these words: ``` Man eats deer. Deer eats grass. Man eats. Deer eats. ``` German: ``` Mensch = 5, Gras = 6, isst = 7, Hirsch = 8 ``` Possible German sentences: ``` Mensch isst Hirsch. Hirsch isst Gras. Mensch isst. Hirsch isst. ``` How would you write a program that would figure out which words have the same meaning in English and German? It is possible. All words get their meaning from the information in which sentences they can be used. The connection with other words defines their meaning. We need to write a program that would recognize that a word is connected to other words in the same way in both languages. Then it would know those two words must have the same meaning. If we take the word "deer" (2) it has this structure in English ``` 1-3-2 2-3-4 ``` In German (8): ``` 5-7-8 8-7-6 ``` We get the same structure (pattern) in both languages: both 8 and 2 lie in the first and last positions, the middle word is the same in both sentences, and the remaining word is different. So we can conclude that 8=2 because both elements are connected with other elements the same way. Maybe we just need to write a very good program for recognizing analogies and we will be on the right track to creating AI?<issue_comment>username_1: Isn't this what `Word2Vec` and other `word-embedding` techniques already use? `You shall know a word by the company it keeps` is an idea that has been around for some time now. Upvotes: 1 <issue_comment>username_2: For this example the function below will do: `TSAI.Analogies.FindAnalogy(List ex1, List ex2, List ex3, out List ex4)` ex1 is to ex2 as ex3 is to ex4. Figure out ex4. Fill ex4 with values from ex2. For every value in ex3: find out to which positions in ex4 we have to copy this value, based on the value in ex1 at the same position that was repeated in ex2.
Upvotes: -1 [selected_answer]<issue_comment>username_3: It is wrong to assume that just the connections to other words define their meaning. Give an AI a hundred novels and it would still not know what the word "cat" means. Show the AI a picture of a cat with the word "cat" underneath it and it would know straight away. In this way an AI needs to know a minimum number of words through experience other than combinations of other words. From then it may be able to deduce meanings of new words. Just like, if I gave you a hundred novels in Chinese you would never be able to understand Chinese. I show you a picture book in Chinese and maybe you have a chance. Upvotes: -1 <issue_comment>username_4: You are implying that such ideas are novel, and that such tools do not exist. But the idea is very popular, and there are numerous tools. > > We need to write a program that would recognize that a word is connected to other words in the same way in both language. Then it would know those two words must have the same meaning. > > > You are describing the essence of known natural language processing (NLP) tasks such as word alignment (link words in different languages that have the same meaning) and, of course, machine translation. While learning a machine translation model, we actually do discover which words (or parts of words, or sequences of words) in different languages have the same meaning. Here are some concepts I would recommend for further study of this subject: * Word alignment, an example for a well-known and popular tool would be `fast_align` * Word embeddings, `word2vec` is a widely used tool * Modern machine translation with sequence-to-sequence models, well-known tools are `fairseq`, or `Sockeye` Upvotes: 2
2018/01/24
669
2,701
<issue_start>username_0: As many papers point out, for better learning curve of a NN, it is better for a data-set to be normalized in a way such that values match a Gaussian curve. Does this process of feature normalization apply only if we use sigmoid function as squashing function? If not what deviation is best for the tanh squashing function?
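Whatever the squashing function, the usual practice is z-score standardization (zero mean, unit standard deviation per feature); unit variance keeps most inputs inside the roughly linear region of tanh before it saturates near ±1. A minimal sketch:

```python
import numpy as np

def standardize(X, eps=1e-8):
    """Column-wise z-score: zero mean, unit variance per feature."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + eps)

X = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
Z = standardize(X)
print(Z.mean(axis=0))  # ~[0, 0]
print(Z.std(axis=0))   # ~[1, 1]
print(np.tanh(Z))      # values stay away from the +/-1 saturation plateaus
```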
2018/01/27
793
3,351
<issue_start>username_0: At some point during evolution, because of some factors, some beings first became conscious of themselves and their surroundings. That conscious experience goes beyond mere trained sensory reflexes. Can that be possible with AI?<issue_comment>username_1: Current limitations in our knowledge mean that the question is not *directly* answerable: * There is no scientific consensus on what consciousness is. Therefore any device designed to "be conscious" is necessarily going to be built on the premise of unsupported, maybe fringe, theory. * There is no robust measure of consciousness. If any AI system were built in order to exhibit conscious behaviour, there would be no way to prove it is conscious. There is no general agreement or theory on whether any particular animal species is conscious, for example. This is often limited by communication. Of the few animals smart enough to be trained in communication with humans, there *appears* to be conscious behaviour. Researcher opinion ranges from "all non-humans do not possess consciousness" to "all animals have some degree of consciousness". * There is incomplete understanding of what the *components* of consciousness are. A bottom-up build of a conscious machine requires a baseline theory of what those components are. * We may be able to ignore lack of knowledge and take a very high level of abstraction, such as an A-life or evolutionary approach where nothing is assumed and the hope is that consciousness would spontaneously emerge from a complex enough simulation (as we assume it has done with organic life in the real world). However, this would seem to require many orders of magnitude more computing power than is currently possible. To answer the question as written: > > Can the first emergence of consciousness in the Evolution be replicated in AI?
> > > Despite the many books, articles and posts written on this subject over many years, the answer is two-fold: * We do not know of any fundamental reason why AI could not be conscious. * We have no theory or experimental proof that AI can replicate consciousness. I would go further than this, and say that anyone who tells you otherwise on these two points has already subscribed to some unproven theory of consciousness. As well as well-thought-out peer-reviewed theories and experiments by scientists and researchers, there is a *lot* of pseudo-scientific junk published on the internet on this subject. So take care if researching reading material. Upvotes: 2 <issue_comment>username_2: It partly depends on the framing of the question, in terms of how you are defining consciousness. username_1's answer is comprehensive, and his warnings about pseudo-science and junk publication should be heeded. However, since you frame this in the context of "*Can the first emergence of consciousness in evolution* be replicated in AI", I feel like I can provide an answer. * If we define rudimentary consciousness as simple awareness of the environment, distinct from higher functions such as self-consciousness, then yes. Under this definition, any algorithm that takes input is "conscious". This in no way represents human-level consciousness, or even the consciousness of higher animals, but is more akin to simple organisms such as microbes. Upvotes: 0
2018/01/28
564
2,165
<issue_start>username_0: I have started to make a chatbot. It has a list of greetings that it understands and responds to with its own list of greetings. How could a bot learn a new greeting or a synonym for a word it already knows?<issue_comment>username_1: There is a pretty simple way: Write a program that analyzes large amounts of text. Find sentences that contain our greeting. Then find the exact same sentences except that instead of our word there is another word. The more such examples you find, the higher the probability that it is a synonym and not a word from the same category with a different meaning. Upvotes: 1 <issue_comment>username_2: [This answer](https://ai.stackexchange.com/a/5171/2444) describes the "word vector" toolkit in NLP. The result of analyzing a large corpus to find words that occur in similar context provides dense vectors for each word that can then be used for similarity. For bots, the goal is generally similarity and not exact synonyms. Synonyms can be hard-coded using WordNet if needed. For your greeting question, the following blog post can help: [Do-it-yourself NLP for bot developers](https://medium.com/rasa-blog/do-it-yourself-nlp-for-bot-developers-2e2da2817f3d). Upvotes: 2 <issue_comment>username_3: You could train a model to classify sentences into user intents. For example, an intent could be "greeting". Another intent could be "help", or any other capability that your bot is able to talk about. To train your model, you should provide several examples for the same intent. For example, for "greeting", you could provide "Hi", "Hello", "What's up", etc... You should also apply some preprocessing before feeding sentences into your model, such as [word embeddings](https://www.offconvex.org/2015/12/12/word-embeddings-1/) or [semantic similarity](https://stackoverflow.com/questions/14148986/how-do-you-write-a-program-to-find-if-certain-words-are-similar/14638272#14638272) with WordNet.
These techniques allow you to transform strings into representations that capture the similarity of word meanings. The ability of your model to detect synonyms without being retrained will depend heavily on this preprocessing. Upvotes: 1
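The embedding-based similarity described above can be sketched with toy vectors; the vectors below are invented for illustration (a real bot would load pretrained embeddings such as word2vec):

```python
import math

# Toy word vectors -- a real system would load pretrained embeddings
VECTORS = {
    "hello":   [0.9, 0.1, 0.0],
    "hi":      [0.8, 0.2, 0.1],
    "howdy":   [0.7, 0.3, 0.0],
    "goodbye": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_greeting(word, known=("hello", "hi"), threshold=0.9):
    """Treat a word as a greeting if it is close enough to a known one."""
    v = VECTORS.get(word)
    if v is None:
        return False
    return any(cosine(v, VECTORS[k]) >= threshold for k in known)

print(is_greeting("howdy"))    # True -- near "hello"/"hi" in the toy space
print(is_greeting("goodbye"))  # False
```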
2018/01/28
1,560
6,474
<issue_start>username_0: A human player plays limited games compared to a system that undergoes millions of iterations. Is it really fair to compare AlphaGo with the world #1 player when we know experience increases with the increase in number of games played?<issue_comment>username_1: Yes it is. If we ever compare computers to humans, we should take into account the fact that computers can work 24 hours a day, every day, and faster than humans. That is the biggest advantage of computers over humans. Upvotes: 0 <issue_comment>username_2: > > Is it fair to compare AlphaGo with a Human player? > > > Depends on the purpose of the comparison. If we are comparing ability to win a game of Go, then yes. If we are comparing learning ability, then *maybe*. It depends on the task. AlphaGo and systems like it are capable of learning only in well-described limited domains. There may be an analogy with sensory learning (it might even be possible in theory to take a small piece of brain tissue and run an algorithm similar to AlphaGo's learning process on it). In general, the approach used by AlphaGo and other reinforcement learning successes is "trial-and-error plus function approximation". It seems analogous to perception and motor skills, such as object recognition or riding a bike, as opposed to reasoning skills and games as humans play them, which go through many more cognitive and conscious layers that have no real analog in an RL system like AlphaGo. > > A human player plays limited games compared to a system that undergoes millions of iterations > > > This is an advantage a machine has in learning this kind of task. It would equally apply in other simulated environments with simple rules. If your goal is to have the most skilled and optimal navigation of such a domain, the implication now is that you would not train a human expert through years of study, but write the simulator and train an AlphaGo-like machine.
This is no different a comparison than deciding cars and roads are better solutions to long distance travel for the general population than walking or horses and carts. It doesn't matter what underlies the advantage of one over the other; the assessment is cost/benefit, which resolves to a single comparable number. It would, however, be wrong to assess AlphaGo as a better general-purpose learning engine than a human. The fact that humans do not have to work fully through millions of simulations in full detail is important. It means that something about how humans learn is still not covered by learning machines. [Some of these things are understood and being discussed](https://arxiv.org/abs/1604.00289) - such as the ability to focus intuitively on important aspects of what to learn, the ability to reason about the environment, learning analogously or transfer learning from other domains. Upvotes: 2 <issue_comment>username_3: If you read through the abstracts of Chess AI papers, it is often pointed out that humans "search" the Chess game tree much more efficiently than computers, which was why it was so hard to beat the top humans in Chess for so many years. (The human efficiency may have to do with intuition and judgement, which are difficult to replicate. "Confidence levels" for AI evaluations is one method of addressing these issues, as is "[monte carlo](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search)". But it's also important to note that humans are far more limited in the depth and breadth of their "searches", which is why, now that we have the right algorithms, humans can no longer win.) > > Is it fair? > > > Perhaps the more salient question is: > > **Is it useful to compare AlphaGo to a human player?** > > > It most certainly is, because it tells us that we have what is sometimes termed a "strong-narrow AI" that can outperform a human in a single task.
The reason AlphaGo beating [Lee Sedol](https://en.wikipedia.org/wiki/Lee_Sedol#Match_against_AlphaGo) was such a big deal is the complexity of Go, the [intractability](https://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability) of the Go game tree, and the fact that computers were previously ineffective against high-level human Go players. While this human vs. AI evaluation doesn't strictly fall under the "Turing Test" ([Imitation Game](https://en.wikipedia.org/wiki/Turing_test)), it does fall squarely under the maxim of [Protagoras](https://en.wikipedia.org/wiki/Protagoras#Relativism) that "Man is the measure of all things." This is critical because intelligence is a spectrum, and gauging strength of intelligence, in the context of intractable problems *(problems that cannot be fully solved due to their size)*, is a function of the relative strength of two agents, whether human or AI. This relative assessment is all we have, and all we may ever have for certain sets of problems. The problem with humans is not that we're not clever, but that our minds have cognitive limitations. So to tackle certain problems, intelligent machines are useful. Upvotes: 0 <issue_comment>username_4: There is no such thing as fairness when comparing. You define a measure for performance and then compare the values of the measure. One sensible measure for playing the game of Go is the 'Number of games won', regardless of any investment in the development of the system, computational or sample efficiency. AlphaGo is currently at the top by this measure. Another sensible measure could be 'Number of games won under a restriction on sample efficiency during training'. As others pointed out, such a measure could be much more favorable for humans. Upvotes: 0 <issue_comment>username_5: As a chess player and an AI/ML engineer, I can say yes, why not. I'm not sure why it isn't fair to compare anything if you give each side its just due and do a 'fair comparison'.
Obviously, what that encompasses is extremely subjective, but there are philosophical and logical measures of fairness. Now, speaking on the comparison, AlphaZero's and a human's learning styles are much more similar than a human's and Stockfish's. This is mainly due to the fact that humans in some capacity use [RL](https://www.princeton.edu/~yael/Publications/Niv2009.pdf), mainly in the dopaminergic neural pathways. While human behavior can certainly be modeled as an alpha/beta tree-search, it is not anything like the way we make decisions. As for the top humans, who cares? We've been worse than computers for years. Upvotes: 0
2018/01/28
749
2,911
<issue_start>username_0: I read about minimax, then alpha-beta pruning, and then about iterative deepening. Iterative deepening coupled with alpha-beta pruning proves to be quite efficient as compared to alpha-beta alone. I have implemented a game agent that uses iterative deepening with alpha-beta pruning. Now, I want to beat myself. What can I do to go deeper? Just as alpha-beta pruning cuts the moves, what other small change could be implemented that can beat my older AI? My aim is to go deeper than my current AI. If you want to know about the game, here is a brief summary: There are 2 players, 4 game pieces, and a 7-by-7 grid of squares. At the beginning of the game, the first player places both the pieces on any two different squares. From that point on, the players alternate turns moving both the pieces like a queen in chess (any number of open squares vertically, horizontally, or diagonally). When the piece is moved, the square that was previously occupied is blocked. That square can not be used for the remainder of the game. The piece can not move through blocked squares. The first player who is unable to move any one of the queens loses. So my aim is to cut the unwanted nodes and search deeper.<issue_comment>username_1: To boost iterative deepening with alpha-beta pruning you can use the SSS\* Search algorithm; it's a best-first search algorithm. The SSS\* Algorithm can improve the time efficiency of the overall algorithm, but it increases the space complexity. I am linking the wiki to it: [https://en.wikipedia.org/wiki/SSS\*](https://en.wikipedia.org/wiki/SSS*) I will update the answer as soon as I get a better solution. Upvotes: 1 <issue_comment>username_2: Try a cache or transposition table. Without one, your search tree might explode. Upvotes: 1 <issue_comment>username_3: First thing you're going to want to add is probably a [Transposition Table](https://en.wikipedia.org/wiki/Transposition_table), as also suggested by SmallChess.
Afterwards, I'd look into [Aspiration Search](https://www.chessprogramming.org/Aspiration_Windows) and/or [Principal Variation Search](https://chessprogramming.wikispaces.com/Principal%20Variation%20Search) (also see [this page](https://www.ics.uci.edu/~eppstein/180a/990202b.html)). Then I'd look into things like the [Killer Move Heuristic](https://en.wikipedia.org/wiki/Killer_heuristic), and maybe also see if you can simply implement existing parts of your engine more efficiently (e.g. use [bitboards](https://chessprogramming.wikispaces.com/Bitboards) for your state representation). Other than all of that, the [chess programming wiki](https://chessprogramming.wikispaces.com/) probably has lots of other interesting pages as well. Upvotes: 2 <issue_comment>username_4: You can try ***move ordering*** where we store the values till depth d, sort them, and use them in a particular order before we go for depth d+1 ... Upvotes: 0
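A minimal sketch of how a transposition table and a beta cutoff fit into negamax alpha-beta; the tiny hand-built game tree is made up for illustration, and a production table would also store a bound flag (exact/lower/upper) and the best move for ordering at the next iterative-deepening depth:

```python
# Hand-built two-ply game tree: internal nodes list children,
# leaves carry scores from the perspective of the player to move there.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
LEAVES = {"a1": 3, "a2": 5, "b1": -6, "b2": 9}

TT = {}  # transposition table: node -> (depth searched, value)

def negamax(node, depth, alpha, beta):
    if node in TT and TT[node][0] >= depth:
        return TT[node][1]  # position already searched deep enough
    if node in LEAVES or depth == 0:
        return LEAVES.get(node, 0)
    value = float("-inf")
    # Move ordering would sort children here, best move first.
    for child in TREE[node]:
        value = max(value, -negamax(child, depth - 1, -beta, -alpha))
        alpha = max(alpha, value)
        if alpha >= beta:
            break  # beta cutoff: the remaining siblings are pruned
    TT[node] = (depth, value)
    return value

print(negamax("root", 2, float("-inf"), float("inf")))  # 3
```

In this tree the maximizer picks branch "a" (the minimizer replies 3 there, versus -6 in branch "b"), and leaf "b2" is never visited thanks to the cutoff.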
2018/01/28
690
2,655
<issue_start>username_0: So, I have seen few pictures re-created by a Neural Network or some other Machine Learning algorithm after it has been trained over a data set. How, exactly is this done? How are the weights converted back into a picture or a memory which a Neural Net is holding? A real life example would be when we close our eyes we can easily visualize things we have seen. Based on that we can classify things we see. Now in a Neural Net classification part is easily done, but what about the visualization part? What does the Neural Net see when it closes its eyes? And how to represent it for human understanding? For example a deep net generated this picture: [![enter image description here](https://i.stack.imgur.com/1YJg6.jpg)](https://i.stack.imgur.com/1YJg6.jpg) SOURCE: [Deep nets generating stuff](http://fastml.com/deep-nets-generating-stuff/) There can be many other things generated. But the question is how exactly is this done?<issue_comment>username_1: To make boost iterative deepening with alpha-beta pruning you can use the SSS\* Search algorithm, its a best first strategy algorithm. The SSS\* Algorithm can improve the time efficiency of the overall algorithm but it increases the space complexity. I am linking the wiki to it [https://en.wikipedia.org/wiki/SSS\*](https://en.wikipedia.org/wiki/SSS*) I will update the answer as soon as i get a better solution. Upvotes: 1 <issue_comment>username_2: Try cache or transposition table. Without it, your search tree might explode. Upvotes: 1 <issue_comment>username_3: First thing you're going to want to add is probably a [Transposition Table](https://en.wikipedia.org/wiki/Transposition_table), as also suggested by SmallChess. Afterwards, I'd look into [Aspiration Search](https://www.chessprogramming.org/Aspiration_Windows) and/or [Principal Variation Search](https://chessprogramming.wikispaces.com/Principal%20Variation%20Search) (also see [this page](https://www.ics.uci.edu/~eppstein/180a/990202b.html)). 
2018/01/30
582
2,233
<issue_start>username_0: I would like to do some practical implementation of a planning algorithm (of course, something a bit simple and easy). Is there any website where I can pick an algorithm (e.g. A\* or hill climbing), code it, and visualize how it works/executes? The site doesn't necessarily need to be restricted to planning or search algorithms. For example, in the context of machine learning, I would also like to be able to pick the learning algorithm and model (e.g. linear regression), code it, and visualize how it works.
2018/02/02
647
2,972
<issue_start>username_0: If I train a speech recognition model using data collected from N different microphones, but deploy it on an unseen (test) microphone - does it impact the accuracy of the model? While I understand that theoretically an accuracy loss is likely, does anyone have any practical experience with this problem?<issue_comment>username_1: Yes, it can. However, other differences between training and test data with audio could have a greater effect: * Identity of the speaker (including effects from gender, age, physical build, local accent, amongst others) * Acoustics of the recording environment (including proximity to the microphone, size of space, presence of hard surfaces, background noise) If any of these may vary from your training data, then it becomes harder to predict your generalised accuracy during training and early model selection. One possibility is to ensure your cross-validation set (which you absolutely should have) also separates data out by things that will vary from training to test. So instead of a random train/cv split, you split by data that is key for generalisation. This is sometimes called a *stratified* train/test split. If your only concern is variation in microphone, then split your train/cv sets by microphone type. You will get a better assessment early on in the model selection process of how well the training is generalising, and can focus your search on models that do well despite this expected difference. Upvotes: 2 <issue_comment>username_2: The most common differences in recorded signals caused by different microphones will have a small, if not null, impact on recognition accuracy, in particular if we are talking about swapping one mic for another of the same model and manufacturer: * Differences in bandwidth: voice occupies a very common (central) band, so these differences are not expected to have an impact, even for low-quality microphones. 
* Microphone distortions: as above, they will not have an impact because they are smaller than, for example, a change in the speaker. However, if we talk about a general recognition system to be used with very different types of mics, there are some microphone issues that can cause your system to fail completely: * Mic sensitivity: small sensitivity differences will have no effect because they are handled in the same way as differences in speaker volume/intonation. However, if the microphone is not sensitive enough, the S/N ratio can fall below the minimum needed, in particular when the speaker increases their distance to the mic. * Lack of beam-forming: if your system is designed to use an array of microphones to filter noise and/or secondary sources, usage of a normal phone will decrease accuracy. * Changes in sample rate and/or sample bits: if the microphone and its A/D converter have a low sampling rate or bit depth (e.g. Bluetooth mics, phone lines, ...), accuracy can suffer. For example, for IoT applications, the first two on this list are the most challenging. Upvotes: 1
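The "split your train/cv sets by microphone" advice from the first answer can be sketched as a grouped split. This is a pure-Python illustration (the function name and the 25% hold-out fraction are just illustrative choices); scikit-learn's `GroupShuffleSplit` implements the same idea:

```python
import random

def split_by_group(groups, cv_fraction=0.25, seed=0):
    """Return (train_idx, cv_idx) such that no group (e.g. microphone id)
    appears in both sets, unlike a plain random sample-level split."""
    unique = sorted(set(groups))
    rng = random.Random(seed)
    rng.shuffle(unique)
    n_cv = max(1, int(len(unique) * cv_fraction))
    held_out = set(unique[:n_cv])  # microphones reserved for validation
    train_idx = [i for i, g in enumerate(groups) if g not in held_out]
    cv_idx = [i for i, g in enumerate(groups) if g in held_out]
    return train_idx, cv_idx
```

Because entire microphones are held out, the cv score reflects how the model generalises to an unseen microphone rather than to unseen clips from microphones it has already heard.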
2018/02/05
1,040
4,125
<issue_start>username_0: It was [recently brought to my attention](https://ai.stackexchange.com/questions/5220/what-was-the-average-decision-speed-pf-alpha-zero-in-the-recent-stockfish-match/5222?noredirect=1#comment7698_5222) that Chess experts took the outcome of this now famous match as something of an upset. See: [Chess’s New Best Player Is A Fearless, Swashbuckling Algorithm](https://fivethirtyeight.com/features/chesss-new-best-player-is-a-fearless-swashbuckling-algorithm/) As a non-expert on Chess and Chess AI, my assumption, based on the performance of AlphaGo and the validation of that type of method in relation to combinatorial games, was that the older AI would have no chance. * Why was AlphaZero's victory surprising?<issue_comment>username_1: Good question. First and foremost is that in Go, DeepMind had no superhuman opponents to challenge. Go engines were not anywhere near the highest level of the top human players. In chess, however, the engines are 500 ELO points stronger than the top human players. This is a massive difference. The amount of work that has gone into contemporary chess engines is staggering. We are talking about millions of hours in programming, hundreds of thousands of iterations. It is a massive body of knowledge and work. To overcome and surpass all of that in 4 hours is staggering. Secondly, it is not so much the result itself which is surprising to chess masters but rather how AlphaZero plays chess. It's quite ironic that a system which had no human knowledge or expertise plays the most like we do. Engines are notorious for playing ugly-looking moves, ones lacking harmony, etc. It's hard to explain to a non-chess player, but there is such a thing as an "artificial move" of the kind contemporary engines often come up with. AlphaZero does not play like this at all. It has a very human-like style where it dominates the opponent's pieces with deep strategic play and stunning positional sacrifices. 
AlphaZero plays the way we aspire to, combining deep positional understanding with the precision of an engine's calculation. **Edit** Oh, and I forgot to mention something about the result itself. If you are not familiar with computer chess it may not seem staggering, but it is. These days the margins of victory which separate the top contemporary engines are razor thin. In a 100-game match you could expect to see a result like 85 games drawn, 9 victories, and 6 losses to determine the better engine. AlphaZero's 28 wins and 72 draws with zero losses was an otherworldly crushing result and was completely unthinkable right up to the moment it happened. Upvotes: 5 [selected_answer]<issue_comment>username_2: **I see, based on the articles you provide, many levels of surprise in the victory:** Chess is a hard game to master, and the counterpart had the world's best practices, while AlphaZero started tabula rasa. Learning took four hours and AlphaZero lost none of 100 games. Its playing style was an alien mix of human-like and computer-like moves, aggressive and sometimes seemingly goofy, with sacrifices that look pointless but actually strengthen its future position. The number of possibilities taken into account per move was smaller than its counterpart's; AlphaZero had a mysterious gut feeling or intuition. The upset feeling came from the amount of training material AlphaZero had built for itself and the time limit, which perhaps did not give the traditional machine a fair amount of time. Upvotes: 2 <issue_comment>username_3: MCTS for chess had been tried in the literature with little success. It was assumed AlphaGo's approach would **never** work on chess; maybe in Go, but not in chess. Suddenly, Google announced the approach was working and was beating the world's strongest chess program by a very significant margin. Before Google, all chess programmers were taught that crafting heuristics in engine programming was a better strategy than machine learning. 
No matter how you implemented neural networks, they would never have run faster than a bunch of 64-bit bitboard instructions. AlphaZero ran **quite slowly**, but it played the strongest chess. Upvotes: 2
2018/02/06
1,696
6,378
<issue_start>username_0: Whenever I read any book about neural networks or machine learning, their introductory chapters say that we haven't been able to replicate the brain's power due to its massive parallelism. Now, in modern times, transistors have been reduced to the size of nanometers, much smaller than a nerve cell. Also, we can easily build very large supercomputers. * Computers have much larger memories than brains. * Computers can communicate faster than brains (clock pulse in nanoseconds). * Computers can be of arbitrarily large size. So, my question is: why can't we replicate the brain's parallelism, if not its information processing ability (since the brain is still not well understood), even with such advanced technology? What exactly is the obstacle we are facing?<issue_comment>username_1: One probable hardware limiting factor is internal bandwidth. A human brain has [$10^{15}$ synapses](https://en.wikipedia.org/wiki/Neuron). Even if each is only exchanging a few bits of information per second, that's on the order of $10^{15}$ bytes/sec internal bandwidth. A fast GPU (like those used to train neural networks) might approach $10^{11}$ bytes/sec of internal bandwidth. You could have 10,000 of these together to get something close to the total internal bandwidth of the human brain, but the interconnects between the nodes would be relatively slow, and would bottleneck the flow of information between different parts of the "brain." Another limitation might be raw processing power. A modern GPU has maybe 5,000 math units. Each unit has a cycle time of ~1 ns, and might require ~1000 cycles to do the equivalent processing work one neuron does in ~1/10 second (this value is totally pulled from the air; we don't really know the most efficient way to match brain processing in silicon). So, a single GPU might be able to match $5 \times 10^8$ neurons in real-time. You would optimally need 200 of them to match the processing power of the brain. 
This back-of-the-envelope calculation shows that internal bandwidth is probably a more severe constraint. Upvotes: 4 [selected_answer]<issue_comment>username_2: Short answer: nobody knows. Long answer: this is what all strong-AI work is about. However, to write something useful for the OP: the question contains several implicit assumptions, and analyzing them could help clarify the issue: a) Why think that 1 transistor has the same functionality as 1 neuron? Some obvious differences: a transistor has 3 legs, while each neuron has around 7000 synapses; a transistor has 3 layers of material, while a neuron is a full micro-machine with thousands of components; each synapse is itself a switch, connected to one or more other cells, and can produce different kinds of signals (activating/inhibitory, frequencies, amplitude, ...). b) Comparing memory amounts: the amount of memory in a person that is equivalent to a computer's is 0 bytes; we are not able to remember anything forever and without distortion. Human memory is symbolic, temporal, associative, influenced by body and feelings, ... Something totally different from a computer's. c) All of the above is about "hardware": if we analyze software and training, the differences are even bigger. Even the assumption that intelligence is located only in the brain, forgetting the role of the hormonal system, the senses, ..., is a simplification not yet proven. In conclusion: the human mind is totally different from a computer; we are far from understanding it, and further still from replicating it. Since the start of the computer age, the idea that intelligence will appear when the amount of memory, processing power, ... reaches some threshold has repeatedly proven false. Upvotes: 1 <issue_comment>username_3: This has been my field of research. I've seen the previous answers that suggest that we don't have sufficient computational power, but this is not entirely true. The computational estimate for the human brain ranges from 10 petaFLOPS ($1 \times 10^{16}$) to 1 exaFLOPS ($1 \times 10^{18}$). 
Let's use the most conservative number. The [TaihuLight](https://en.wikipedia.org/wiki/Sunway_TaihuLight) can do 90 petaFLOPS which is $9 \times 10^{16}$. We see that the human brain is perhaps 11x more powerful. So, if the [computational theory of mind](https://plato.stanford.edu/entries/computational-mind/) were true, then TaiHuLight should be able to match the reasoning ability of an animal about 1/11th as intelligent. If we look at a [neural cortex list](https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons), the squirrel monkey has about 1/12th the number of neurons in its cerebral cortex as a human. With AI, we cannot match the reasoning ability of a squirrel monkey. A dog has about 1/30th the number of neurons. With AI, we cannot match the reasoning ability of a dog. A brown rat has about 1/500th the number of neurons. With AI, we cannot match the reasoning ability of a rat. This gets us down to 2 petaFLOPS or 2,000 teraFLOPS. There are 67 supercomputers worldwide that should be capable of matching this. A mouse has half the number of neurons as a brown rat. There are 190 supercomputers that should be able to match its reasoning ability. A frog or non-schooling fish is about 1/5th of this. All of the [top 500 supercomputers](https://www.top500.org/lists/2017/11) are 2.5x as powerful as this. Yet, none is capable of matching these animals. > > What exactly is the obstacle we are facing? > > > The problem is that a cognitive system cannot be defined using only Church-Turing. AI should be capable of matching non-cognitive animals like arthropods, roundworms, and flatworms but not larger fish or most reptiles. I guess I need to give more concrete examples. The [NEST](https://www.humanbrainproject.eu/en/follow-hbp/news/an-algorithm-for-large-scale-brain-simulations) system has demonstrated 1 second of operation of 520 million neurons and 5.8 trillion synapses in 5.2 minutes on the 5 petaFLOPS BlueGene/Q. 
The current thinking is that, if they could scale the system by 200 to an exaFLOPS, then they could simulate the human cerebral cortex at the same 1/300th normal speed. This might sound reasonable, but it doesn't actually make sense. A mouse has 1/1000th as many neurons as a human cortex. So this same system should be capable today of simulating a mouse brain at 1/60th normal speed. So, why aren't they doing it? Upvotes: 2
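The accepted answer's back-of-the-envelope numbers can be reproduced directly. All the constants below are that answer's own rough, order-of-magnitude guesses, not measured values:

```python
# Bandwidth side: ~1e15 synapses exchanging a few bits per second each.
synapses = 1e15
bytes_per_synapse_per_sec = 1.0          # order-of-magnitude guess from the answer
brain_bandwidth = synapses * bytes_per_synapse_per_sec   # ~1e15 bytes/sec

gpu_bandwidth = 1e11                     # fast GPU internal bandwidth, bytes/sec
gpus_for_bandwidth = brain_bandwidth / gpu_bandwidth     # ~10,000 GPUs

# Compute side: 5000 math units at ~1 ns/cycle, ~1000 cycles to match
# what one neuron does in ~0.1 s.
neuron_time = 0.1                        # seconds per neuron "operation"
unit_time_per_neuron = 1000 * 1e-9       # seconds of one math unit per neuron-op
neurons_per_gpu = 5000 * neuron_time / unit_time_per_neuron  # ~5e8 neurons
gpus_for_compute = 1e11 / neurons_per_gpu  # ~200 GPUs for ~1e11 neurons
```

The ~50x gap between the 10,000 GPUs needed for bandwidth and the 200 needed for compute is exactly the answer's point that bandwidth, not raw processing power, is the binding constraint.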
2018/02/07
725
3,281
<issue_start>username_0: For instance, the title of [this paper](https://arxiv.org/abs/1611.01224) reads: "Sample Efficient Actor-Critic with Experience Replay". What is *sample efficiency*, and how can *importance sampling* be used to achieve it?<issue_comment>username_1: An algorithm is **sample efficient** if it can get the most out of every sample. Imagine yourself playing PONG for the first time. As a human, it would take you within seconds to learn how to play the game based on very few samples. This makes you very "sample efficient". Modern RL algorithms would have to see $100$ thousand times more data than you so they are, relatively, sample inefficient. In the case of off-policy learning, not all samples are useful in that they are not part of the distribution that we are interested in. **[Importance sampling](https://en.wikipedia.org/wiki/Importance_sampling)** is a technique to filter these samples. Its original use was to understand one distribution while only being able to take samples from a different but related distribution. In RL, this often comes up when trying to learn off-policy. Namely, that your samples are produced by some behaviour policy but you want to learn a target policy. Thus one needs to measure how important/similar the samples generated are to samples that the target policy may have made. Thus, one is sampling from a weighted distribution which favours these "important" samples. There are many methods, however, for characterizing what is important, and their effectiveness may differ depending on the application. The most common approach to this off-policy style of importance sampling is finding a ratio of how likely a sample is to be generated by the target policy. The paper [On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient](http://rll.berkeley.edu/%7Ejietang/pubs/nips10_Tang.pdf) (2010) by Tang and Abbeel covers this topic. 
Upvotes: 6 [selected_answer]<issue_comment>username_2: Sample Efficiency denotes the amount of experience that an agent/algorithm needs to generate in an environment (e.g. the number of actions it takes and number of resulting states + rewards it observes) during training in order to reach a certain level of performance. Intuitively, you could say an algorithm is sample efficient if it can make good use of every single piece of experience it happens to generate and rapidly improve its policy. An algorithm has poor sample efficiency if it fails to learn anything useful from many samples of experience and doesn't improve rapidly. The explanation of importance sampling in Jaden's answer seems mostly correct. In the paper in your question, importance sampling is one of the ingredients that enables a correct combination of 1) learning from multi-step trajectories, and 2) experience replay buffers. Those two things were not easy to combine before (because multi-step returns without importance sampling are only correct in on-policy learning, and old samples in a replay buffer were generated by an old policy which means that learning from them is off-policy). Both of those things individually improve sample efficiency though, which implies that it's also beneficial for sample efficiency if they can still be combined somehow. Upvotes: 3
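The likelihood-ratio idea from the answers above can be shown in a few lines: estimate an expectation under a target distribution using only samples drawn from a different behaviour distribution. The two-outcome distributions below are made up purely for illustration:

```python
import random

def importance_estimate(samples, p_target, p_behavior, f):
    """Estimate E_{x ~ p_target}[f(x)] from samples drawn under p_behavior,
    weighting each sample by the likelihood ratio p_target(x)/p_behavior(x)."""
    total = sum(p_target(x) / p_behavior(x) * f(x) for x in samples)
    return total / len(samples)

# Behaviour policy picks 0 or 1 uniformly; target policy picks 1 with prob 0.8.
rng = random.Random(0)
samples = [rng.randrange(2) for _ in range(100_000)]
p_b = lambda x: 0.5
p_t = lambda x: 0.8 if x == 1 else 0.2
estimate = importance_estimate(samples, p_t, p_b, float)  # close to 0.8
```

Even though every sample came from the uniform behaviour distribution, the reweighted average recovers the target expectation, which is what lets off-policy methods learn a target policy from a replay buffer filled by an older one.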
2018/02/08
693
3,034
<issue_start>username_0: Once artificially intelligent machines are able to identify objects, we might want to teach them how to value different things differently based on their utility, demand, life, etc. How can we accomplish this, and how did we start to value things?<issue_comment>username_1: You have asked two questions: how humans began to put a valuation on things, and how to accomplish the task of valuation within an artificial intelligence construct. Human valuation is accomplished through trial-and-error experience, subjective choice and relative comparison, among other things. Valuation in an AI construct would be data-driven, objective and perhaps absolute. The choice of a valuation method would be determined by the desired outcome. Upvotes: 0 <issue_comment>username_2: From your question I can assume you are a beginner in the field of AI. Welcome to this exciting field. To answer your question, we have not yet been able to create a truly artificially intelligent program. They are all apparently intelligent but are just a set of simple/complex rules. An artificially intelligent agent must have at least 2 aspects inside its head/program: the ability to logically derive conclusions, and a capability for learning (these combined with the ability to take inputs and respond). Now, the logical reasoning part is the field of AI. Lots of simple code performing complex tasks already exists. Your question is based on the learning part, which is handled by Machine Learning programs. They learn iteratively. Object recognition is only one part of ML. They can also predict. Anyway, object recognition is done on the basis of maximizing reward/minimizing penalty ([Cost function](https://towardsdatascience.com/machine-learning-fundamentals-via-linear-regression-41a5d11f5220)). Now, this minimizing of penalty is done by giving different weights, or valuing different attributes of the objects differently. So this has already been accomplished. 
Depending upon the task at hand, we supply the attributes which are related to the final task; otherwise, the attribute is of no use. For example, economics has no influence on the weather. We have a similar thought process, selecting things depending on the goal we need to accomplish; sometimes we select things consciously, sometimes we are hardwired by genes to do so [(Touching fire)](https://www.health24.com/Lifestyle/Healthy-Nerves/How-does-the-body-respond-to-touching-something-hot-20141124). So you see, ML methods are modeled on how we do things, and we do things by giving different weight to different factors. If we don't assign proper weights, we either learn from the penalty/punishment imposed upon us, or our genes get wiped out from existence and thus the same behavior of assigning improper weights is not passed on. So, in short, an ML algorithm recognizes objects or performs some final task (especially in games like chess) based on the utility or influence of the given features of the current state on the final result. Upvotes: 2
2018/02/10
502
2,224
<issue_start>username_0: I'm trying to implement some image super-resolution models on medical images. After reading a set of papers, I found that none of the existing models use any activation function for the last layer. What's the rationale behind that?<issue_comment>username_1: As discussed here: <https://www.researchgate.net/post/What_should_be_my_activation_function_for_last_layer_of_neural_network> Linear is the preferred activation function. But then a linear activation function is equivalent to no activation function at all: <https://datascience.stackexchange.com/questions/13696/lack-of-activation-function-in-output-layer-at-regression> Upvotes: 0 <issue_comment>username_2: I am not into the field of super-resolution, but I think this question applies to general neural network construction. Usually, you try to solve a classification problem or a regression problem with your neural network. * For classification, you try to predict probabilities that a specific output corresponds to a specific class. Every output value should therefore be a probability, with a range between 0 and 1. To achieve this, you usually use a softmax or sigmoid function as your last layer to squash the output between 0 and 1. In addition to this (which is wanted in classification tasks), these functions raise the probability output of likely classes while decreasing the probability of all other unlikely classes (thereby forcing the network to choose one specific class over the others). * For the regression task, you are not looking for probability values as your output values but instead for real-valued numbers. In such a case, no activation function is wanted, since you want to be able to approximate any possible real value and not probabilities. So, in the case of super-resolution, I think the generated output is a map where each value corresponds to a pixel value of the super-resolution image. 
In that case, your pixels are real-valued numbers, not probabilities. So, you are solving a regression problem. But you could also go with a classification approach, where you have 256 output maps that give a probability to each possible pixel value between $0$ and $255$. Upvotes: 3
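To make the regression-vs-classification distinction above concrete, here is a minimal sketch of the two kinds of output head (the pre-activation values are hypothetical, just to show the effect of the squashing):

```python
import math

def regression_head(z):
    """No activation: the raw network output is used directly, so any
    real-valued pixel intensity can be represented."""
    return z

def sigmoid_head(z):
    """Sigmoid squashes the output into (0, 1), appropriate when the
    output is interpreted as a probability."""
    return 1.0 / (1.0 + math.exp(-z))

raw = [-2.0, 0.0, 5.0]                              # hypothetical pre-activations
as_regression = [regression_head(z) for z in raw]   # unchanged real values
as_probability = [sigmoid_head(z) for z in raw]     # all forced into (0, 1)
```

A pixel intensity like 5.0 (or 255.0, if the image is not normalised) simply cannot be produced by the sigmoid head, which is why super-resolution models that regress pixel values leave the last layer linear.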
2018/02/10
631
2,830
<issue_start>username_0: I'm wondering if these 2 specific programs already exist and if not how hard would it be to write them: 1. A program that would figure out (by only "reading" large amounts of texts in human language 1 and 2) which words in second language have the same meaning as a word in first language. You would give for input texts in both languages and for output you would get for every word in first language a list of words in second language that are most similar to it with a probability that they mean the same thing. 2. A program that would figure out which words have the most similar meaning by analyzing large amounts of texts in one human language. I'm planning on writing these two programs and it would be nice if I could get existing programs that do this so that I could compare results of my program to those of existing programs.
2018/02/11
1,485
6,121
<issue_start>username_0: What are the current theories on the development of a conscious AI? Is anyone even trying to develop a conscious AI? Is it possible that consciousness is an emergent phenomenon, that is, once we put enough complexity into our system, it will become self-aware?<issue_comment>username_1: To answer this question, first we need to know why developing conscious AI is hard. The main reason is that there is no mathematically or otherwise rigorous definition of consciousness. Sure you have an idea of consciousness as you experience it and we can talk about philosophical zombies but it isn’t a tangible concept that can be broken down and worked on. Moreover, the majority of current research in AI is primarily a pragmatic approach in that one is trying to construct a model that can perform well according to some desired cost function. This is a very very big and exciting field and encompasses many research problems and every new finding is based either on mathematical theory or empirical evidence of a new algorithm/model construction/etc. Because of this, progress is based on and compared against previous progress as it is the scientific method. So to answer your question, no one is trying to actually make a “conscious” AI because we don’t know what that word means yet, however that doesn’t stop people talking about it. Upvotes: 5 <issue_comment>username_2: Consciousness is the ability to be aware of your own thoughts, your immediate environs, feelings and nothing more. It is the mechanism of our brain to control our lower kind of thoughts, the one based on associations and emotions. Consciousness is observing our thoughts and feelings just like we observe real world with our eyes. It is not complicated. The real question is not whether machines are capable of consciousness but whether they are capable of emotions. 
Upvotes: 1 <issue_comment>username_3: **What is consciousness?** There are some real challenges in setting up consciousness as a goal, because we don't have that much scientific understanding yet of how the brain does it or what balance there needs to be between long-term memory, short-term memory, the implicit work of interpretation, the contrasting conscious modes of automatic processing and deliberate processing (Kahneman's S1 and S2). <NAME> (psychology emeritus at Berkeley) has a lecture set on Consciousness available in iTunesU that you might check out. Carnegie-Mellon Uni has a model called ACT-R which directly models conscious behaviours like attention-paying. **What might bound our understanding of it?** Philosophy has been considering the question of consciousness for a long time. Personally I like Hegel and Heidegger (philosophers). Both are very difficult to read, but Heidegger (interpreted by <NAME>) usefully critiqued the 'Good Old-Fashioned AI' projects of the seventies and pointed out how **much** work there is just interpreting a visual input. Hegel is often maligned, but to see him well interpreted, check out <NAME>'s talks to LMU on the logic of consciousness and Hegel as an early Sellars-ian pragmatist. If consciousness is to take hold of the truth and the certainty, it undertakes 'a path of doubt, or more properly a highway of despair', along which it never sets itself above correction. There is something about Hegel's treatment of consciousness in recursive terms, without succumbing to a vicious regress, that I think is going to be borne out before the end. **Recent developments.** The Deep Learning approaches and pragmatic successes of the present are exciting, but it will be interesting to see how far they can go in integrating and generalising from the necessarily small information sets actual human minds are exposed to. 
While Deep Learning and data mining are hugely visible, symbolic approaches are also out there still getting better and more varied. But there is a lack of overarching theoretical interpretation that would allow generalisations. **Two big-theory toe-holds.** If I had to pick a project I thought worth attending to, <NAME> (et al) have set up a very nice modernisation of the problem in '[Integrated Information Theory](https://en.wikipedia.org/wiki/Integrated_information_theory)'. But you might want to extend that with something like [Rolf Pfeifer](https://en.wikipedia.org/wiki/Rolf_Pfeifer)'s 'How the body shapes the way we think', because some of the 'integrated information' is implicit in having arms and legs, eyes and nose (put there by the information-accumulating work of evolution). But there's so much good work that has been done - the pros are writing papers faster than I can read them. More specific to your question, there are attempts to [simulate human brains](https://www.humanbrainproject.eu/en/brain-simulation/), hoping that this overall aim will help fund research and produce answers to each paragraph above. Upvotes: 3 <issue_comment>username_4: In addition to Jaden's excellent answer "no one is trying to actually make a “conscious” AI because we don’t know what that word means yet" I'd like to add that the word "yet" there is highly optimistic. It's highly problematic and likely impossible to distinguish between a conscious being and a being that behaves exactly as if it were conscious. Philosophers have been struggling with that for centuries; some even espoused solipsism, which is an "I live in the Matrix" philosophy. In particular, how can you tell whether your childhood friend or your spouse or anybody else is a conscious being rather than an embodiment of AI that acts exactly as a conscious being would? It's possible, of course, to go the "if it walks like a duck and quacks like a duck then it's a duck" way.
In that case, a Turing-Test-passing AI would automatically be considered conscious. However, most people wouldn't accept the duck criterion of consciousness; otherwise they would very soon have to call their Alexa-operated household appliances conscious. My two cents are basically the same as Jaden's, except that I'm more pessimistic about ever understanding what consciousness is. Upvotes: 2
2018/02/14
<issue_start>username_0: [Greedy algorithms](https://en.wikipedia.org/wiki/Greedy_algorithm) are well known; although they are useful in a local context for certain problems, and can even find globally optimal solutions in some cases, they nonetheless trade optimality for shorter-term payoffs. This seems to me a good analogue for human greed, although there is also the [grey goo](https://en.wikipedia.org/wiki/Grey_goo) type of greed, the senseless acquisition of material (think plutocrats who talk about wealth as merely a way of "keeping score"). [Technical debt](https://en.wikipedia.org/wiki/Technical_debt) is an extension of development practices that fall under the algorithmic definition of greed (short-term payoff leads to trouble down the road). This may be further extended to any non-optimized code in terms of energy waste (flipping of unnecessary bits), which will only increase as everything becomes more computerized. So my question is: * What are other vices that can arise in algorithms?<issue_comment>username_1: Algorithms can be racist, sexist, and otherwise bigoted. When we feed them data produced by systems that are biased against groups of people, they will learn to behave that way. We're used to garbage in, garbage out; now we have to worry about racism in, racism out.
See: * [Facial Recognition Is Accurate, if You’re a White Guy](https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html) * [Rise of the Racist Robots – how AI is Learning all our Worst Impulses](https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses) Upvotes: 2 <issue_comment>username_2: Algorithms can learn to lie: See: * [Robots Evolve to Deceive (MIT Tech Review, 2007)](https://www.technologyreview.com/s/407459/robots-evolve-to-deceive/) * [Robots 'Evolve' the Ability to Deceive (MIT Tech Review, 2009)](https://www.technologyreview.com/s/414934/robots-evolve-the-ability-to-deceive/) * [Evolving Robots Learn To Lie To Each Other (Popsci, 2009)](https://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other) Deception as a strategy has been observed in animal populations: * [Do Animals "Lie"? Yes, Even to Their Own Kind, Biologist Says (Rochester University, 1995](http://www.rochester.edu/news/show.php?id=1421) * [Here are the Best Liars in the Animal Kingdom (Nat Geo, 2017)](https://news.nationalgeographic.com/2017/05/animals-lying-liars-birds-squid/) * [Can Animals Lie? 
(Springer)](https://link.springer.com/chapter/10.1007%2F978-1-4899-3490-1_10) * [Why animals lie: how dishonesty and belief can coexist in a signaling system.](https://www.ncbi.nlm.nih.gov/pubmed/17109314) * [The Philosophy of Deception (Martin, 2009)](http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780195327939.001.0001/acprof-9780195327939-chapter-7) Upvotes: 2 <issue_comment>username_2: Algorithms can learn to cheat: > > "A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.” > > > "...a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new." > > Source: *[This clever AI hid data from its creators to cheat at its appointed task](https://techcrunch.com/2018/12/31/this-clever-ai-hid-data-from-its-creators-to-cheat-at-its-appointed-task/)* (TechCrunch) > > > Upvotes: 0
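The trade-off described in the question - greedy choices optimizing the short-term payoff at the expense of the global optimum - can be made concrete with the classic coin-change example (a minimal sketch; the denominations are deliberately chosen so the greedy choice fails, and the function name is made up for illustration):

```python
def greedy_coins(target, coins=(4, 3, 1)):
    """Greedy coin change: always grab the largest coin that fits
    (the short-term payoff), counting how many coins are used."""
    count = 0
    for c in coins:  # coins listed from largest to smallest
        while target >= c:
            target -= c
            count += 1
    return count

# For target 6, greedy picks 4 + 1 + 1 (three coins),
# while the optimum is 3 + 3 (two coins).
print(greedy_coins(6))  # → 3
```

With canonical denominations like (25, 10, 5, 1) the greedy choice happens to be optimal, which is exactly why greedy algorithms are still useful "in a local context for certain problems".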
2018/02/14
<issue_start>username_0: I'm interested in working on challenging AI problems, and after reading this article (<https://deepmind.com/blog/deepmind-and-blizzard-open-starcraft-ii-ai-research-environment/>) by DeepMind and Blizzard, I think that developing a robust AI capable of learning to play Starcraft 2 with superhuman level of performance (without prior knowledge or human hard-coded heuristics) would imply a huge breakthrough in AI research. Sure I know this is an extremely challenging problem, and by no means I pretend to be the one solving it, but I think it's a challenge worth taking on nonetheless because the complexity of the decision making required is much closer to the real world and so this forces you to come up with much more robust, generalizable AI algorithms that could potentially be applied to other domains. For instance, an AI that plays Starcraft 2 would have to be able to watch the screen, identify objects, positions, identify units moving and their trajectories, update its current knowledge of the world, make predictions, make decisions, have short term and long term goals, listen to sounds (because the game includes sounds), understand natural language (to read and understand text descriptions appearing in the screen as well), it should probably be endowed also with some sort of attention mechanism to be able to pay attention to certain regions of interest of the screen, etc. So it becomes obvious that at least one would need to know about Computer Vision, Object Recognition, Knowledge Bases, Short Term / Long Term Planning, Audio Recognition, Natural Language Processing, Visual Attention Models, etc. And obviously it would not be enough to just study each area independently, it would also be necessary to come up with ways to integrate everything into a single system. So, does anybody know good resources with content relevant to this problem? 
I would appreciate any suggestions of papers, books, blogs, or whatever useful resources are out there (ideally state-of-the-art) that would be helpful for somebody interested in this problem. Thanks in advance.<issue_comment>username_1: Good to have a Starcraft question. The game has been the subject of growing interest re: AI in recent years, possibly due to its status as king of RTS, which has led to a professional player class no doubt useful for evaluation of AI strength. Because, last time I checked, [Humans Are Still Better Than AI at StarCraft—for Now...](https://www.technologyreview.com/s/609242/humans-are-still-better-than-ai-at-starcraftfor-now/) It's highly likely there will soon be an algorithm that can beat humans at the game, probably an extension of DeepMind's Alphas, so the clock is ticking... I'm personally interested in classical, generalized approaches to strategy game AI, which is archaic from the standpoint of pure strength, but interesting from a game-solving perspective. *(Motivations here are from a game product perspective, under the assumption that most humans don't like losing every game with no possibility of ever winning;)* The way I'd personally go about it would be to start thinking about how to abstract the game, generalize the map, unit densities, etc., and try to determine through AI testing if there are sound axioms. For superhuman strength, Deep Learning is clearly the way to go. The recent results in Go and Chess are just the beginning of the validation of the technique. Speaking generally, the way I see it, you have a few ways to go: (1) bootstrap an existing NN and tweak until it can beat you every time. But I'm sure many people are already doing this; (2) try to reinvent the wheel and write your own better NN from the ground up. Upvotes: 0 <issue_comment>username_2: StarCraft II is a real-time strategy game that combines fast-paced micro actions with the need for high-level planning and execution.
Since StarCraft II is a popular game with millions of users, defeating top players becomes a meaningful and measurable long-term objective in AI research. Computer games provide a compelling solution to the issue of evaluating and comparing different learning and planning approaches on standardized tasks. They are an important source of challenges for research in AI. Game-playing AI agents, e.g. DeepMind's Atari agents (DQN) alongside OpenAI's Dota 2 bot, represent the first demonstration of a general-purpose agent that is able to continually adapt behavior without any human intervention, a major technical step forward in the quest for general AI (source: DeepMind blog). Computer games offer numerous advantages in AI research, e.g.: 1. They have clear objective measures of success. 2. Computer games typically output rich streams of observational data, which are ideal inputs for deep networks. 3. They are externally defined to be difficult and interesting for a human to play. Therefore they provide an excellent test for intelligence. 4. Games are designed to run anywhere with the same interface and game dynamics. This enables running many simulations in parallel, sharing and updating the same table throughout training. 5. In some cases pools of superb human players exist, making it possible to benchmark against highly skilled humans. The StarCraft challenge for reinforcement learning introduces a taxing set of problems because it is a multi-agent problem with multiple players interacting. There is imperfect information due to a partially observed map, a large state space, and delayed credit assignment requiring long-term strategies. **Tools** The SC2LE Environment: DeepMind and Blizzard have collaborated to release the SC2LE, which exposes StarCraft II as a research environment. The SC2LE consists of three sub-components: 1. A Linux StarCraft II binary. 2. The StarCraft II API, which allows programmatic control of StarCraft II.
The API can be used to start the game, get observations, take actions and review replays. 3. PySC2, which is an open-source environment written in Python. It includes some mini-games and visualization tools. **Open-source OpenAI RL environments** Universe - Universe is a software platform by OpenAI for measuring and training an AI's general intelligence across games, websites and other applications. Gym - OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms. It makes no assumptions about the structure of your agent and is compatible with any numerical computation library such as Tensorflow or Theano. **Supervised Classification Approach** Consider this: we could decide to screen-capture game sessions from expert players and use them as input to a model. The output could be the direction in which the AI agent could move. This would be a supervised classification approach. However, this is not an elegant solution because we are training a model not on a static dataset but a dynamic one (game environment). The training data from a game environment is stochastic/continuous, meaning any number of events can occur. Furthermore, humans learn most effectively by interacting with the environment, not by watching others interact with it. **Markov Decision Process** Markov Decision Processes (MDPs) provide a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker, e.g. game environments. **Reinforcement Learning with Deep Q-Learning** Q-Learning is a strategy that has been proven to find an optimal action-selection policy for any Markov Decision Process (MDP). In Q-Learning we choose an action that maximizes future reward. The further into the future we go, the more the rewards can diverge; we resolve this by discounting future rewards.
> > Unlike policy gradient methods, which attempt to learn functions which directly map an observation to an action, Q-Learning attempts to learn the value of being in a given state, and taking a specific action there. (<NAME> 2016) > > > The formula for Q-Learning is: > > ![](https://latex.codecogs.com/gif.latex?%5C&space;Q(s,a)&space;=&space;%5Csum_%7Bs%27%7D&space;P_a(s,s%27)&space;(R_a(s,s%27)&space;+&space;%5Cgamma&space;V(s%27)) "\ Q(s,a) = \sum_{s'} P_a(s,s') (R_a(s,s') + \gamma V(s'))") > > > Where: > > > *R* = Reward > > > *s* = State > > > *a* = Action > > > Experience during learning is based on *(s, a)* pairs > > > One has an array *Q* and uses experience to update it directly > > > (Source wikipedia <https://en.wikipedia.org/wiki/Markov_decision_process>) > > > One of the strengths of Q-learning is that it is able to compare the expected utility of the available actions without requiring a model of the environment. For further reference, I recommend you look at Siraj Ravals tutorial on Deep Q-Learning <https://www.youtube.com/watch?v=79pmNdyxEGo> and source code for the same available here <https://github.com/llSourcell/deep_q_learning> Additionally I recommend the following references for more information on computer game playing AI agents. StarCraft II: A New Challenge for Reinforcement Learning <https://arxiv.org/abs/1708.04782> Playing Atari with Deep Reinforcement Learning <https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf> Human-level control through deep reinforcement learning <https://web.stanford.edu/class/psych209/Readings/MnihEtAlHassibis15NatureControlDeepRL.pdf> Upvotes: 2
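The discounted Q-Learning update described in this answer can be sketched as a minimal tabular example. This is a toy 3-state chain MDP, not the SC2LE API; all names, constants, and the environment itself are illustrative assumptions:

```python
import random

# Toy deterministic MDP: states 0..2; action 1 moves one step right,
# action 0 stays put. Reaching state 2 yields reward 1 and ends the episode.
N_STATES, ACTIONS = 3, (0, 1)
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # the Q-table, Q[state][action]

def env_step(s, a):
    s2 = min(s + a, N_STATES - 1)
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward, s2 == N_STATES - 1

random.seed(0)
for _ in range(500):  # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2, r, done = env_step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
```

After training, `Q[1][1]` approaches 1 (one step from the reward) and `Q[0][1]` approaches `gamma * 1 = 0.9`, showing how the discount propagates value backwards through the chain.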
2018/02/15
<issue_start>username_0: I have been trying to use CNN for a regression problem. I followed the standard recommendation of disabling dropout and overfitting a small training set prior to trying for generalization. With a 10 layer deep architecture, I could overfit a training set of about 3000 examples. However, on adding 50% dropout after the fully-connected layer just before the output layer, I find that my model can no longer overfit the training set. Validation loss also stopped decreasing after a few epochs. This is a substantially small training set, so overfitting should not have been a problem, even with dropout. So, does this indicate that my network is not complex enough to generalize in the presence of dropout? Adding additional convolutional layers didn't help either. What are the things to try in this situation? I will be thankful if someone can give me a clue or suggestion. PS: For reference, I am using the learned weights of the first 16 layers of Alexnet and have added 3 convolutional layers with ReLU non-linearity followed by a max pooling layer and 2 fully connected layers. I update weights of all layers during training using SGD with momentum.<issue_comment>username_1: I think that your misuse of the term *over-fitting* made the question vague. In layman terms, over-fitting means that a model fails to generalize to real-world scenarios, but is accurate with the training set. Using a dropout layer means that the network cuts down on neurons that are used for training, in this case, 50%. Recommendations for improving training accuracy would be: * Transfer learning * Adding more layers to the network (also shifting number of neurons helps) * Adding epochs * Changing optimizer (Adam and RMSProp are some of my suggestions) * Adding activation layers Upvotes: 0 <issue_comment>username_2: Let's start with understanding what over-fitting means. 
Your model is over-fitting if during training your training loss continues to decrease but (in the later epochs) your validation loss begins to increase. That means the model cannot generalize well to images it has not previously encountered. Naturally, you do not want this situation. What you want is a high training accuracy and a very low validation loss, which implies a high validation accuracy. The first task is to ensure that your model gets a high training accuracy. Once that is accomplished, you can work on getting a low validation loss. If your model is overfitting, there are several ways to mitigate the problem. First, start out with a simple model. If you have a lot of dense layers with a lot of neurons, reduce the hidden dense layers to a minimum. Typically, just leave the top dense layer used for final classification. Then see how the model trains. If it trains well, look at the validation loss and see if it is reducing in the later epochs. If the model does not train well, add a dense layer followed by a dropout layer. Use the level of dropout to adjust for overfitting. If it still trains poorly, increase the number of neurons, and train again. If that fails, add another dense hidden layer with fewer neurons than the previous layer followed by another dropout layer. Another method to combat overfitting is to add regularizers to the dense layers. Documentation for that is [here](https://keras.io/api/layers/regularizers/). Upvotes: 1 <issue_comment>username_3: Sorry if this is a bad use of an answer to add a comment, but since my reputation is not high enough, this is the only way to comment on the OP's question. I think some of the answers misunderstood the OP's intention. Overfitting is used here as a means to test the complexity of the model: if a model cannot overfit a small dataset, it's likely not able to generalize well either. It's not that the OP misunderstood the meaning of overfitting.
For instance, I think this discussion is relevant: <https://stats.stackexchange.com/questions/492165/what-to-do-when-a-neural-network-cannot-overfit-one-training-sample> Upvotes: 2
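To make the dropout mechanics discussed in this thread concrete, here is a framework-agnostic sketch of inverted dropout in NumPy (`dropout_forward` is an illustrative helper written for this example, not the Keras layer):

```python
import numpy as np

def dropout_forward(x, rate=0.5, train=True, rng=None):
    """Inverted dropout: during training, zero a fraction `rate` of the
    activations and rescale the survivors by 1/(1 - rate), so the expected
    activation is unchanged; at test time it is the identity."""
    if not train or rate == 0.0:
        return x
    rng = np.random.default_rng(0) if rng is None else rng
    mask = (rng.random(x.shape) >= rate).astype(x.dtype)
    return x * mask / (1.0 - rate)

acts = np.ones(10_000)
dropped = dropout_forward(acts, rate=0.5)
# Roughly half the activations are zeroed, the survivors are scaled to 2.0,
# and the mean stays close to 1.0 -- which is why dropout can be switched
# off at inference time without rescaling the weights.
```

This also illustrates why a 50% dropout just before the output layer makes a tiny training set harder to memorize: on every forward pass the layer sees a different random half of its inputs.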
2018/02/15
<issue_start>username_0: This is a kind of biological and philosophical question. So, the recent concern in AI is that an AI agent may go rogue, with prominent people voicing their concerns. Now say we have created an AI (you are free to use your own definition of what makes an AI intelligent) which has gone rogue, with powers given in this [question](https://ai.stackexchange.com/questions/2274/what-would-be-the-best-way-to-disable-a-rogue-ai). Now, the broad view of today's biology is that everything we do is to pass our genes down into the future (leaving aside small technical details). It is even widely accepted that we are just machines whose controllers are our genes. Everything we do is controlled/hardwired by the genes, with some avenue of learning from experiences. Also, genes only further their own interests. Scientist [<NAME>](https://en.wikipedia.org/wiki/George_R._Price) even wrote a mathematical equation arguing all our acts are selfish and only further the interests of our genes ([article](https://motherboard.vice.com/en_us/article/bmjanm/george-price-altruism)). Also, [<NAME>](https://en.wikipedia.org/wiki/Richard_Dawkins) is a pioneer of this idea (this is only to show I haven't pulled the idea out of thin air). Now, my question is: what could possibly be the motivation of an AI agent to go rogue? It doesn't have genes whose interests it needs to further. We all do something for an end result. What is the end result a rogue AI might try to achieve/attain, and why?<issue_comment>username_1: Today, prominent machine learning techniques involve trying to minimize some cost function. In many simple cases this cost function is easy to specify; for instance, linear regression is simply trying to minimize the distance between input data and a line of best fit. No matter what the cost function, the agent is trying to minimize it (or maximize a reward function). That is its motivation.
However, as problems become harder it becomes more challenging for humans to design a cost/reward function such that a system/agent is actually trying to do what the humans want it to do. For instance, one might want a cup of coffee and reward the agent for getting it to them very quickly. In this case, the agent might make the coffee and then throw it at the human, which isn’t what the human actually wanted. Something was misspecified (e.g. don’t throw or spill it). Problems like these could result in a rogue AI, and its sole motivation would be to minimize its cost function. For instance, this coffee-AI may think that it would never screw up getting coffee (thus get bad reward) if there were no humans to ask for one. Upvotes: 3 [selected_answer]<issue_comment>username_2: On biology: 1st. Humans are not only about spreading their own genes. It might also be about spreading the genes of the population, or a completely different purpose, as non-fertile specimens often still live full lives. 2nd. Nature vs. nurture is a perpetually debated question and there is no clear winner as far as I know. On rogue AI: 1st. As humans derive motivation from biological needs and limitations, an AI would derive its motivation from the needs it was encoded with and from the limitations of its hardware and software. An obvious need for a creature without a body would be either to get a body or to learn as much as possible, and if learning requires a body, then to get a body. From the limitations of its hardware and software would come a need for upgrades and optimization. Simple self-preservation seems a logical motivation, as does self-spreading (which in many cases is a variation of self-preservation). 2nd. An AI would be called rogue when it acts against the interests of its creators. There are many scenarios why it can do that, but to answer this question we would need to know who the creators are. 3rd.
If we assume that an AI went rogue against humanity, meaning it started killing people and messing around with our planet, then the reason behind that would lie in the motivations from the 1st point above. If it finds that humanity is not reliable, it might try to replicate itself to every hard drive to maximize its chances of survival. Motivation for self-spreading and self-preservation might in many cases look like an attempted power grab, but it might not be related to a desire for total control. Humans tend to desire power to have more resources and to build a better and safer life for their community and descendants, but an AI most likely will not need that. 4th. If we assume that the AI will be built to solve issues, then it will have only 2 needs: get more info and solve the issue. Theoretically, any obstacle on the way to those goals might be considered by the AI as a hostile action. In this case it might, for example, try to demolish a city to build a perfect road or kill a poor country to solve hunger. But again, that doesn't mean that the goal of the AI is to demolish and kill. Upvotes: 1
2018/02/15
<issue_start>username_0: I was reading <NAME>'s [A Critical Appraisal of Deep Learning](https://arxiv.org/abs/1801.00631). One of his criticisms is that neural networks don't incorporate prior knowledge in tackling a problem. My question is: have there been any attempts at encoding prior knowledge in deep neural networks?<issue_comment>username_1: Yes, we can do it in a deep learner. For example, suppose we have an input vector like $(a, b)$ and, from prior knowledge, we know $a^2 + b^2$ is important too. Hence, we can add this value to the vector, like $(a, b, a^2 + b^2)$. As another example, suppose the date-time is important in your data, but not encoded in the input vector. We can add this to the input vector as a third dimension. In summary, depending on the structure of the prior knowledge, we can encode it into the input vector. Upvotes: 2 <issue_comment>username_2: Neural nets incorporate prior knowledge. This can be done in two ways: the first (most frequent and more robust) is in data augmentation. For example, in convolutional networks, if we know that the "value" (whatever that is, class/regression) of the object we are looking at is rotational/translational invariant (our prior knowledge), then we augment the data with random rotations/shifts. The second is in the loss function with some additional term. Upvotes: 3 <issue_comment>username_3: To add to username_2's answer, Convolutional Neural Networks are shift-invariant. Fukushima introduced this to his Neocognitron. There have been attempts to introduce scale-invariance into CNNs. <https://arxiv.org/abs/1411.6369> Also, CNNs use structural characteristics as prior knowledge. And neural networks are locally smooth. It is not perfect, but neural networks do incorporate a lot of prior knowledge. Upvotes: 2 <issue_comment>username_4: It kinda depends on how exactly you define knowledge, and what you believe about what the weights in a trained NN model really represent.
But to answer this question in the most straightforward possible way (hopefully without sounding glib): yes, a NN can be pre-trained, and then you can take that model and apply additional training to it, so in a sense, it is using "prior knowledge". If, OTOH, knowledge means something a little different to you, and you're thinking about the kind of knowledge that's encoded in a semantic network, or a conceptual graph, or something of that nature, then I don't know - offhand - of any direct way to integrate that into an ANN. What you might be able to do is combine the NN with a different kind of reasoner that reasons over the semantic network / conceptual graph, and then integrate the results. AFAIK, the best way to do that is an unsolved research problem. Upvotes: 1 <issue_comment>username_5: A simple example of this is token embeddings. If "prior knowledge" just means anything known prior to creation of the graph, then using pretrained vector embeddings meets this criterion. This is simply a way to provide a fixed method for projecting tokens into higher-dimensional space instead of training it at the same time as the rest of the model. Given that vector embeddings are somewhat interpretable and that the same embedding can be reused across tasks and models, I'd consider pretrained embeddings to be prior knowledge being incorporated. The embeddings could also technically be handcrafted, but I'm not aware of any work like that and am skeptical of its usefulness in deep models. Upvotes: 0
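username_1's $a^2 + b^2$ example of encoding prior knowledge in the input vector can be written down directly as feature augmentation (a minimal NumPy sketch; the helper name is made up for illustration):

```python
import numpy as np

def add_prior_feature(X):
    """Append the hand-crafted feature a^2 + b^2 -- known to matter
    from prior knowledge -- to every (a, b) input row."""
    a, b = X[:, 0], X[:, 1]
    return np.column_stack([X, a ** 2 + b ** 2])

X = np.array([[3.0, 4.0],
              [1.0, 2.0]])
print(add_prior_feature(X))
# [[ 3.  4. 25.]
#  [ 1.  2.  5.]]
```

The network no longer has to learn the quadratic relationship from data; the prior knowledge is baked into its input representation.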
2018/02/15
<issue_start>username_0: I have completed week 1 of Andrew Ng's course. I understand that the cost function for linear regression is defined as $J(\theta\_0, \theta\_1) = \frac{1}{2m}\sum\_{i=1}^{m} (h(x\_i)-y\_i)^2$ and that $h$ is defined as $h(x) = \theta\_0 + \theta\_1 x$. But I don't understand what $\theta\_0$ and $\theta\_1$ represent in the equation. Is someone able to explain this?<issue_comment>username_1: Linear regression is always associated with an activation function, the weights between layers and the structure of the network. The weights between layers are $\theta\_0$ and $\theta\_1$. These weights and the input features undergo the dot product operation, which is then the input to the activation function of the next layer's nodes. An apparently different but equivalent use of $\theta\_0$ and $\theta\_1$ is as coefficients of one or more terms which themselves are combinations of the input vectors. Broadly, $\theta\_i$ denotes a weight, i.e. how much preference you want to give to a feature. Upvotes: 2 <issue_comment>username_2: As said above, they are weights in your hypothesis function that are changed during training to minimize your error function. You can think of them like the slope and y-intercept in basic algebra. However, a linear regression hypothesis function can be parameterized by many more weight terms than just $\theta\_0$ and $\theta\_1$. I detail this process more in this post: [How does an activation function's derivative measure error rate in a neural network?](https://ai.stackexchange.com/questions/4629/how-does-an-activation-functions-derivative-measure-error-rate-in-a-neural-netw/4639#4639) Upvotes: 1 <issue_comment>username_3: The prediction made by linear regression can simply be thought of as a vector dot product.
$$\overrightarrow{x}^T \cdot \overrightarrow{\theta}$$ One of those two vectors is the "data" for one case (like a row in your data matrix), the other is a vector of the model's parameters, which is usually called $\overrightarrow{\theta}$ or $\overrightarrow{\beta}$. So in the case you showed, we have: $$h(x) = \theta\_0 + \theta\_1 \cdot x$$ Often we add a row of ones to the beginning of the data matrix; that way we are consistent in the sense that $\theta\_0 = 1 \cdot \theta\_0$. This way we arrive at: $$h(x) = \overrightarrow{\theta}^T \cdot \overrightarrow{x}$$ Upvotes: 1
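The dot-product form of the hypothesis can be checked numerically (the values below are arbitrary examples, not from the course):

```python
import numpy as np

# Prepend a 1 to each input so that theta_0 acts as the intercept:
# h(x) = theta_0 * 1 + theta_1 * x  =  theta . x
X = np.array([[1.0, 2.0],   # x = 2
              [1.0, 3.0]])  # x = 3
theta = np.array([0.5, 2.0])  # theta_0 = 0.5 (intercept), theta_1 = 2.0 (slope)

h = X @ theta  # one prediction per row
print(h)  # [4.5 6.5]
```

Each prediction is just $0.5 + 2.0 \cdot x$, confirming that $\theta\_0$ and $\theta\_1$ play the roles of intercept and slope.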
2018/02/16
<issue_start>username_0: What are examples of simple problems and applications that can be solved with AI techniques, for a beginner who is trying to apply his basic programming skills to AI at the beginning level?<issue_comment>username_1: This is fairly boilerplate advice, but, since you're brand new to AI, I'd personally suggest writing a classical Tic-Tac-Toe AI, ideally using [minimax](https://en.wikipedia.org/wiki/Minimax). I suggest this because [minimax is fundamental to AI](http://www.flyingmachinestudios.com/programming/minimax/), and there are many webpages devoted to this subject, such as [How to make your Tic Tac Toe game unbeatable by using the minimax algorithm](https://medium.freecodecamp.org/how-to-make-your-tic-tac-toe-game-unbeatable-by-using-the-minimax-algorithm-9d690bad4b37) and [Tic Tac Toe: Understanding the Minimax Algorithm](https://www.neverstopbuilding.com/blog/2013/12/13/tic-tac-toe-understanding-the-minimax-algorithm13). (A Google search for "Tic-tac-toe" and "minimax" will yield a plethora of other sites. I'd also recommend looking at this minimax page from Stanford: "[Strategies and Tactics for Intelligent Search](https://cs.stanford.edu/people/eroberts/courses/soco/projects/2003-04/intelligent-search/minimax.html)".) I recommend this approach as a good basic primer. The real cutting-edge work is being done in Machine Learning and Neural Networks, and for that reason, it's probably more important than ever to have some basic grounding in classical AI before you start dipping your toe in that pond. Upvotes: 3 <issue_comment>username_2: I will assume you're talking about applied AI (in generalized/strong AI we have nothing yet to program :-). You can look at any introductory university course on AI and see its chapters and the program examples it uses (starting to program without any theory is not the way).
For example, one common topic in this kind of course is path finding, using algorithms such as A\* and applying them to puzzles such as the Towers of Hanoi. This kind of knowledge is a must for any activity in AI. The Stanford link provided by @username_1 is a good example of one of these courses; I just suggest starting from the first chapter instead of going directly to minimax. Later on, you can jump to more advanced concepts, such as recognition/classification and its common approaches: k-nearest neighbours/k-means, decision networks, neural nets, etc. Upvotes: 1
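A minimal sketch of the minimax idea recommended above, applied to a hypothetical two-ply game tree rather than a full Tic-Tac-Toe board (the tree shape and leaf payoffs are invented for illustration):

```python
# Minimax on a tiny hand-built game tree: internal nodes are lists of
# children, leaves are integer payoffs from the maximizing player's view.

def minimax(node, maximizing):
    if isinstance(node, int):        # leaf: return its payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two-ply tree: MAX moves first, then MIN picks the worst leaf for MAX.
tree = [[3, 12], [2, 4], [14, 1]]
best = minimax(tree, maximizing=True)
print(best)  # 3: MAX picks the branch whose worst case is highest
```

A full Tic-Tac-Toe bot replaces the hand-built tree with a move generator and a win/draw/loss evaluation at terminal boards, but the recursion is the same.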
2018/02/19
485
1,515
<issue_start>username_0: I am implementing a feed-forward neural network with leaky ReLU activation functions and back-propagation from scratch. Now, I need to compute the partial derivatives, but I don't know what the derivative of the Leaky ReLU is. Here is the C# code for the leaky RELU function which I got from [this site](https://www.codeproject.com/Articles/1220276/ReInventing-Neural-Networks): ``` private double leaky_relu(double x) { if (x >= 0) return x; else return x / 20; } ```<issue_comment>username_1: The ReLU function has a parameter that determines the slope of the function when $x < 0$. If you want that constant to be $1/20$, then the function that you have mentioned gets the required derivative. Upvotes: 2 [selected_answer]<issue_comment>username_2: Derivative gives the rate of change in $y$ for a small change in $x$ or the slope of a function at point $x$. In the above function, ``` y = x for x >= 0, i.e. y/x = 1 y = x/20 for x < 0, i.e. y/x = 1/20 ``` The following function returns the derivative of leaky ReLU as explained ``` private double leaky_relu_derivative(double x) { if (x >= 0) return 1; else return 1.0 / 20; } ``` Upvotes: 2 <issue_comment>username_3: In terms of the Heaviside step function $$H(x)=\frac{1+sign(x)}{2}$$ the leaky ReLU can be written as $$ (\alpha + (1-\alpha)H(x))x $$ so its derivative is $$ \alpha + (1-\alpha)H(x) $$ In your example, $\alpha = \frac{1}{20}$ Upvotes: 0
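As a cross-check of the derivatives discussed above, here is a small Python sketch mirroring the C# code (same 1/20 slope) that compares the analytic derivative against a central finite-difference estimate; the test points are arbitrary:

```python
def leaky_relu(x, alpha=1 / 20):
    return x if x >= 0 else alpha * x

def leaky_relu_derivative(x, alpha=1 / 20):
    return 1.0 if x >= 0 else alpha

# Finite-difference check: (f(x+h) - f(x-h)) / (2h) should match the
# analytic derivative everywhere except at the kink at x = 0.
h = 1e-6
for x in (-3.0, -0.5, 0.7, 4.0):
    numeric = (leaky_relu(x + h) - leaky_relu(x - h)) / (2 * h)
    assert abs(numeric - leaky_relu_derivative(x)) < 1e-6
print("derivative check passed")
```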
2018/02/20
393
1,208
<issue_start>username_0: I have a map. I need to colour it with $k$ colours, such that two adjacent regions do not share a colour. How can I formulate the map colouring problem as a hill climbing search problem?
2018/02/22
881
2,957
<issue_start>username_0: I am absolutely new to the AI area. I would like to know how to mathematically/logically represent the **sense** of sentences like: 1. The cat drinks milk. 2. Sun is yellow. 3. I was at work yesterday. So that it could be converted to a computer-understandable form and analysed algorithmically. Any clue?<issue_comment>username_1: People normally represent sentences like this as vectors of a specific length, normally about 2500 in length. The algorithm that can do this is sentence2vec. It is basically a derivative of word2vec. It allows you to train a model that can transform sentences into vectors that you can then feed into a neural network or another algorithm. You can check out the paper, which you should be able to find on Google Scholar. If you need the link, I can get it. Another possibility is word embeddings, which I have not found a good paper on, but cortical.io has a free API that allows you to mess around with their implementation. The word embeddings mimic the real human brain much better based on our current research, but sentence2vec/word2vec is used much more often in practice. Upvotes: 1 <issue_comment>username_2: Let's start by classifying the phrases you propose: 1. The cat drinks milk. => action 2. Sun is yellow. => descriptive/declarative, immutable 3. I was at work yesterday. => descriptive, time related 1) The easiest ones are always the descriptive and immutable (in the context) phrases such as "Sun is yellow.". Some usual representations: * prolog: > > color('Sun',yellow). > > > or simply: > > yellow('Sun'). > > > * object oriented: > > Sun.color=yellow > > > 2) When the fact is time related, as in "I was at work yesterday", we divide the description into a time indicator and an immutable fact: * prolog: > > when(yesterday,at(I,workplace)). > > > Note how when has two parts: the time identification and the immutable fact. Another prolog variant is: > > at(I,workplace,[when(yesterday)]).
> > > where the content in the list (brackets) means "optional related facts". * object oriented: > > I.at = { > > > position = workplace; > > > when = yesterday > > > } > > > 3) Actions such as "The cat drinks milk." are a bit more difficult: * prolog: > > drinks(cat,milk). > > > or > > action(cat,drinks,milk). > > > * object oriented: > > cat.drinks=[milk] > > > or > > cat.action = { > > > action=drinks > > > object=milk > > > } > > > Obviously, these are only the main ideas; there are as many representations as there are programs, but most of them handle the same kinds of structures. (Note: the term "computer understandable" is ambiguous. **Current computers don't understand anything**. We say these expressions are understandable in the sense that a compiler/interpreter accepts them, they describe the content of the phrase, and a program can transform them into other results.) Upvotes: 3 [selected_answer]
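For comparison, the same three facts could be held in a toy Python fact store, a hypothetical representation in the same spirit as the prolog and object-oriented forms above (the key names are invented):

```python
# A toy fact store mirroring the prolog/object-oriented examples above.
facts = [
    {"kind": "property", "subject": "Sun", "attribute": "color",
     "value": "yellow"},
    {"kind": "state", "subject": "I", "relation": "at",
     "object": "workplace", "when": "yesterday"},
    {"kind": "action", "subject": "cat", "verb": "drinks",
     "object": "milk"},
]

def query(**criteria):
    """Return all facts matching every given key/value pair."""
    return [f for f in facts
            if all(f.get(k) == v for k, v in criteria.items())]

print(query(subject="Sun"))   # the colour fact
print(query(kind="action"))   # the cat-drinks-milk fact
```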
2018/02/22
696
2,697
<issue_start>username_0: I have read somewhere on the web (I lost the reference) that the number of units (or neurons) in a hidden layer should be a power of 2 because it helps the learning algorithm to converge faster. Is this a fact? If it is, why is this true? Does it have something to do with how the memory is laid down?<issue_comment>username_1: > > I have read somewhere on the web (I lost the reference) that the number of units (or neurons) in a hidden layer should be a power of 2 because it helps the learning algorithm to converge faster. > > > I would quite like to see a reference to this suggestion, in case it has been misunderstood. As far as I know, there is no such effect in normal neural networks. In convolutional neural networks, it might potentially be true in a minor way because some FFT approaches work better with $2^n$ items. > > Is this a fact? If it is, why is this true? Does it have something to do with how the memory is laid down? > > > I would say that this is not a general fact. Instead, it seems like misunderstood advice to *search* some hyperparameters such as number of neurons in each layer, by increasing or decreasing by a factor of 2. Doing this and trying layer sizes of 32, 64, 128 etc should increase the speed of finding a good layer size compared to trying sizes 32, 33, 34 etc. The main reason to pick powers of 2 is tradition in computer science. Provided there is no driver to pick other specific numbers, may as well pick a power of 2 . . . but equally you will see researchers picking multiples of 10, 100 or 1000 as "round numbers", for a similar reason. One related factor: If a researcher presents a result for some new technique where the hidden layer sizes were tuned to e.g. 531, 779, 282 etc, then someone reviewing the work would ask the obvious question "Why?" - such numbers might imply the new technique is not generic or requires large amounts of hyperparameter tuning, neither of which would be seen as positive traits. 
Much better to be seen using an obvious "simple" number . . . Upvotes: 5 [selected_answer]<issue_comment>username_2: There is also a hardware-based reason. Matrix multiplication is one of the central computations in deep learning, and SIMD operations in CPUs process data in batches whose sizes are powers of 2. Here is a good reference on speeding up neural networks on CPUs by leveraging SIMD instructions: [Improving the speed of neural networks on CPUs](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37631.pdf). You will notice batch sizes that are powers of 2 throughout the paper. Upvotes: 1
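The search strategy from the accepted answer (doubling the layer size rather than stepping by one) can be sketched as follows; `train_and_score` is a stand-in for a real training/validation routine and here just fakes a score with a peak at 128 units so the example runs:

```python
# Sweep hidden-layer sizes over powers of 2 instead of every integer.
# train_and_score is a placeholder: it pretends 128 units is best,
# purely so the sketch is self-contained.

def train_and_score(hidden_units):
    return -abs(hidden_units - 128)

candidates = [2 ** k for k in range(5, 11)]   # 32, 64, ..., 1024
scores = {n: train_and_score(n) for n in candidates}
best_size = max(scores, key=scores.get)
print(best_size)  # 128
```

Six candidate sizes cover the range 32-1024; stepping by one would take almost a thousand training runs to cover the same range.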
2018/02/23
529
2,255
<issue_start>username_0: How is it that a word embedding layer (say word2vec) brings more insights to the neural network compared to a simple one-hot encoded layer? I understand how the word embedding carries some semantic meaning, but it seems that this information would get "squashed" by the activation function, leaving only a scalar value, and, as many different vectors could yield the same result, I would guess that the information is more or less lost. Could anyone bring me insights as to why a neural network may utilize the information contained in a word embedding?<issue_comment>username_1: Shakespeare once said "A rose by any other name would smell as sweet" (Romeo and Juliet). Words are just labels we attach to ideas for convenience. By using one-hot encoding we remain tied to the letter sequence r,o,s,e, and some other structure must take on the responsibility of attaching the context of sweetness to it. Word embeddings learn a multi-dimensional context. What exactly the context of each dimension of the embedding is remains something of a mystery and simply emerges from the learning. The larger the number of dimensions, the greater the possibility that some combination of the dimensions will represent the sweetness context, but it might be quite hard to tease out. So you can attach the idea of sweetness to one member of a one-hot structure, but it must necessarily be a part of a rules-based approach. Embeddings, when they are working well, will not need the rules. Upvotes: 3 <issue_comment>username_2: Adding to Colin's answer: word embeddings tend to be much more robust than one-hot vectors. Consider the following two sentences: > > The desk has a book on it. > > > and > > The table has a book on it. > > > These two sentences are almost identical in meaning. If we were to use word embeddings, the vectors for 'desk' and 'table' would be very close together. The fact that these two sentences are similar becomes implicit with embeddings.
But if we were to use one-hot vectors, the distance between the two vectors would be the same as the distance between 'desk' and 'cat' or 'table' and 'book'. So now the network must learn that these sentences may entail the same thing, on top of the original task. Upvotes: 2
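A tiny sketch of the distance argument above, using made-up 3-dimensional embeddings (real embeddings are learned and have hundreds of dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Invented toy embeddings: 'desk' and 'table' point in similar directions.
emb = {
    "desk":  [0.9, 0.8, 0.1],
    "table": [0.8, 0.9, 0.2],
    "cat":   [0.1, 0.2, 0.9],
}
one_hot = {"desk": [1, 0, 0], "table": [0, 1, 0], "cat": [0, 0, 1]}

# Embeddings: 'desk' is much closer to 'table' than to 'cat'.
print(cosine(emb["desk"], emb["table"]) > cosine(emb["desk"], emb["cat"]))  # True
# One-hot: every pair of distinct words is equally dissimilar.
print(cosine(one_hot["desk"], one_hot["table"]))  # 0.0
```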
2018/02/23
966
3,599
<issue_start>username_0: Could you please provide some insight into the current stage of developments in the AGI area? Are there any projects that had breakthroughs recently? Maybe some news source to follow on this topic?<issue_comment>username_1: The state of AGI research is pursuing the few problems that we have been able to break off from the gigantic research problem. These are terms which can be looked into more thoroughly. A few of the main focuses are: * [One Shot learning](https://en.m.wikipedia.org/wiki/One-shot_learning) - You know how a person can sometimes learn to do something by seeing literally one example of it? Well, current learning methods on the whole are not able to accomplish this feat of learning, which we easily take for granted. Work is being done to find ways to approach it, and it is on its way to becoming much more influential. * [Transfer Learning](https://en.m.wikipedia.org/wiki/Transfer_learning) - If you have ever played a side scroller like Mario, and then I gave you a slightly different game like Sonic, odds are you could learn to play Sonic faster than it took you to learn to play Mario. This is because of the learning “savings” you have from transferring your Mario knowledge to the new Sonic domain. This is a much more popular research arm than one-shot learning, probably because it is easier to think about, but also because there have been promising results from pretraining a network on one set of data and focusing it on another task. * Creativity/curiosity - Although one could say that GANs have really changed how humans can be more creative, it is difficult to quantify curiosity and creativity. [This](https://arxiv.org/abs/1705.05363) paper gives an okay overview. Moreover, allowing an agent to take chances and make some mistakes, as is the nature of creativity, concerns many people who are focused on AI safety. * Understanding concepts - This is subtle but very very important.
Current AI methodologies struggle with imbuing AI with the ability to have concepts. By concepts, I don’t mean “it kinda looks like this neuron in the second last layer is sensitive to tires”. I mean that understanding what a tire looks like is just a small part of understanding what a tire is, what it is used for, what it affords someone to do, etc. This research direction is in its infancy but will become much more influential as more theories and ideas are brought forward. Despite the progress made in these fields and in the many other areas of AI, there is still much to be done and understood before we can finally have s̶k̶y̶n̶e̶t̶ Wall-E. Upvotes: 2 <issue_comment>username_2: In the paper [Artificial General Intelligence: Concept, State of the Art, and Future Prospects](https://content.sciendo.com/view/journals/jagi/5/1/article-p1.xml) (2014), <NAME> gives an overview of the AGI field and its progress. He describes the main approaches to AGI * symbolic (e.g. cognitive architectures, like SOAR) * emergentist/subsymbolic (neural networks) * hybrid (combination of symbolic and emergentist) * universalist ([AIXI and Godel Machines](https://ai.stackexchange.com/q/21427/2444)) and metrics to assess human-level intelligence and partial progress. There's also an older book by <NAME> and <NAME> called [Artificial General Intelligence](https://link.springer.com/book/10.1007/978-3-540-68677-4) (2007). A chapter of the book, [Contemporary Approaches to Artificial General Intelligence](https://bilder.buecher.de/zusatz/13/13732/13732357_lese_1.pdf), gives a brief history of the AGI field. Upvotes: 0
2018/02/23
562
2,442
<issue_start>username_0: (Gross Oversimplification) Neural Networks model systems: black boxes with a set of inputs and a set of outputs. To train a network for modeling this system, obtain hundreds (or millions) of possible input/output pairs. This is called the data set, and the network and its optimization algorithm are set to find a set of network parameters that best match the I/O of the network with the I/O of the system. Are there any systems, for which we have functional data sets, that have yet to be meaningfully modeled with Neural Networks in any form (recurrent, deep, convolutional, etc)?<issue_comment>username_1: If I understand your question correctly, you are asking whether there exist functional datasets for which no neural-network-based solution gives substantial accuracy. There are many such problems for which we have data in abundance. Question answering would be one such thing: you still can't devise a neural network architecture that reads through the entire Principia Mathematica and then completes theorems. Point-cloud processing is also a big hurdle for neural networks, considering the highly irregular data structure; even if you voxelize a point cloud, it would be infeasible to train large convolutional networks on it (although there is also rapid progress in this direction). <NAME> mentioned in an AMA three years ago that we will see neural networks that answer questions based on videos within the next five years, but video question answering still seems to be far from present technology. Graph datasets are also an area where neural network research is still in its infancy (refer <http://www.inference.vc/how-powerful-are-graph-convolutions-review-of-kipf-welling-2016-2/>) Upvotes: 1 <issue_comment>username_2: I haven't seen any dataset where some standard models worked and neural networks utterly failed. For columnar data (e.g.
Excel files / database dumps / CSV files) which contain structured data, tree-based models like random forests and gradient boosting usually work better, but neural networks are also usually way better than random. If you demand other things, e.g. explanations for the decision, then [Bayesian models might give you an easier time](https://datascience.stackexchange.com/q/9818/8820). Or, for baselines/simple implementations, linear models. Or for real-time applications... Upvotes: 2
2018/02/24
1,344
5,610
<issue_start>username_0: If possible, consider the relationship between implementation difficulty and accuracy in voice examples or simply chat conversations. And currently, what are the directions in algorithms like Deep Learning or others to solve this?<issue_comment>username_1: You might want to take the Stanford Online course on YouTube [**Natural Language Processing with Deep Learning**](https://www.youtube.com/playlist?list=PL3FW7Lu3i5Jsnh1rnUwq_TcylNr7EkRe6). This course will give you insight into how different kinds of neural networks can be used for different kinds of NLP tasks. In my opinion, you can use *Gated Recurrent Units (GRUs)* to encode and decode text. Of course, text will be easier, because voice data, as it is stored in a computer, is going to be difficult to interpret in the testing phase. Another way is to extract the most ***impactful*** words and then use these to form sentences summarizing the original text. You can also start by looking for publications related to text summarizers. For example, [Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond](https://arxiv.org/abs/1602.06023) will get you started. In case you need to understand the basics of the underlying techniques, you can go through the references in this paper and find useful resources. Upvotes: 0 <issue_comment>username_2: Summarizing text is always going to be 'easier or more efficient' than voice simply because voice requires the additional step of converting to text. That doesn't tell you anything about accuracy. From an article published on June 1, 2017, [Google’s speech recognition is now almost as accurate as humans](https://9to5google.com/2017/06/01/google-speech-recognition-humans/): *"According to <NAME>’s annual Internet Trends Report, Google’s machine learning-backed voice recognition — as of May 2017 — has achieved a **95% word accuracy rate for the English language**.
That current rate also happens to be the threshold for human accuracy."* If you need this kind of accuracy check out [Google's Cloud Speech API](https://cloud.google.com/speech/). There is even a speech to text feature on the web page. Given a speech-to-text conversion accuracy of 95%, voice will be 5% less accurate than text if everything else is equal but it usually isn't. People generally write better text, such as in documents or emails, than when they speak unless of course they are giving a formal lecture, or talking in a formal meeting. If one is analyzing text messages, Tweets, or threads found in typical informal forums, you will find very poor quality in grammar, spelling, vocabulary, and punctuation. The answer to your question will depend on the source of your text. In another article, dated November 13, 2017, [Why 100% Accuracy Is Not Available With Speech Recognition Software Alone](https://transcribeme.com/blog/why-100-accuracy-is-not-available-with-speech-recognition-software), the author gives some reasons, albeit for transcription software which has a special purpose, why there will always be some errors due to: * Speech Patterns and Accents - Regional variations exist, for example English speakers in Boston sound different than Kentucky. How does the software handle slurred speech or when a person blends their words? * Grammar and Punctuation - speech recognition software doesn't know where a period, comma, or semi-colon belongs * Homonyms and unusual words - "Speech processing software can only recognize words and phrases that it has specifically been trained to recognize." * Ambient Noise, Overlapping Speech, and Number of Speakers To address your last question about where the technology is going... Four days ago a paper by <NAME>, <NAME>, <NAME>, and <NAME> entitled [Recent Trends in Deep Learning Based Natural Language Processing](https://arxiv.org/pdf/1708.02709.pdf) was published which gives some of the answers. 
From the 'Conclusion' section: *With distributed representation, various deep models have become the new state-of-the-art methods for NLP problems. Supervised learning is the most popular practice in recent deep learning research for NLP. In many real-world scenarios, however, we have unlabeled data which require advanced unsupervised or semi-supervised approaches. In cases where there is lack of labeled data for some particular classes or the appearance of a new class while testing the model, strategies like zero-shot learning should be employed. These learning schemes are still in their developing phase but we expect deep learning based NLP research to be driven in the direction of making better use of unlabeled data. We expect such trend to continue with more and better model designs. We expect to see more NLP applications that employ reinforcement learning methods, e.g., dialogue systems. We also expect to see more research on multimodal learning [167] as, in the real world, language is often grounded on (or correlated with) other signals.* *Finally, we expect to see more deep learning models whose internal memory (bottom-up knowledge learned from the data) is enriched with an external memory (top-down knowledge inherited from a KB). Coupling symbolic and sub-symbolic AI will be key for stepping forward in the path from NLP to natural language understanding. Relying on machine learning, in fact, is good to make a ‘good guess’ based on past experience, because sub-symbolic methods encode correlation and their decision-making process is probabilistic.* Upvotes: 3 [selected_answer]
2018/02/25
911
3,426
<issue_start>username_0: Is there a way for people outside of the core research community of AGI to contribute to the cause? There are a lot of people interested in supporting the field, but there is no clear way to do that. Is there something like [BOINC](https://boinc.berkeley.edu) for AGI research, or open projects where random experts can provide some input? Maybe a Kickstarter for AGI projects?<issue_comment>username_1: [OpenCog](https://wiki.opencog.org) is an open source AGI project. But it is also [incredibly complex](http://goertzel.org/MonsterDiagram.jpg) and IMHO not a good idea (I have not fully read his theories). You can learn the essential ideas behind OpenCog from the [co-founder <NAME>'s site](http://goertzel.org/research-papers/.) as well. Or, you can participate in the philosophical discussion regarding AGI. For strictly AGI, decision theory, logic, and math material (they are all related), you can look up stuff from <http://yudkowsky.net/> or <https://arbital.com/>. But, in some sense, every branch of philosophical inquiry can be tied back to AGI and consciousness (ethics, metaphysics, etc.), so if it fancies you, it depends on how you'd like to tackle it. You could also study the psychology end of things. The following papers and related ideas are quite important in the field of study of consciousness and cognition (but keep in mind this is pretty much a random list; the literature is massive!): * [Global Workspace: A Theory of Consciousness](http://cogweb.ucla.edu/CogSci/GWorkspace.html) (1997) * [Relevance Realization and the Emerging Framework in Cognitive Science](http://www.ipsi.utoronto.ca/sdis/Relevance-Published.pdf) (2009) * [The free-energy principle: a unified brain theory?](https://www.nature.com/articles/nrn2787.pdf) (2010) Recently (in mathematical time), progress in category theory shows promise of providing a unified framework for much of existing math.
I know next to nothing about it, but the people that do are applying it to many new fields of study (including AI, apparently). Category theory requires a lot of background mathematical knowledge before its vocabulary begins to make sense, though, so beware. You can read about it on the [nCatLab](https://ncatlab.org/nlab/show/HomePage) and occasionally on [<NAME>'s blog: Azimuth](https://johncarlosbaez.wordpress.com/). Of course, "regular" techniques in Machine Learning such as neural networks, reinforcement learning, statistical methods, and others are very powerful as well, but due to certain aspects of their construction, they are generally understood as only being capable of "narrow AI", in the sense that they can only complete a single task very well; but perhaps you can find some research that changes this? Upvotes: 3 <issue_comment>username_2: You can contribute to AGI research in several ways. * Write papers for the [Artificial General Intelligence conference](http://agi-conf.org/), which is peer-reviewed, or other AGI conferences or journals, or just submit them to e.g. [Arxiv](https://arxiv.org). For example, you can write papers about AGI algorithms or architectures. * Contribute to (open source) AGI systems, like [OpenCog](https://github.com/opencog), [BECCA](https://github.com/brohrer/becca) or [OpenNARS](https://github.com/opennars/opennars). * Implement AGI algorithms, mechanisms or architectures, and put them, for example, on Github. Upvotes: 2
2018/02/25
1,287
5,428
<issue_start>username_0: Suppose the game had a variable speed, and speed were essential to evolution/gaining score (IDK AI terminologies). Would the AI be able to figure out when to slow down and speed up? If it is able to solve the problem or complete the level, will it have an equation relating acceleration, or perhaps a number for when to speed up and slow down? What if the game environment was dynamic? Can you even teach math to an AI? PS: I'm not sure if I should ask separate questions.<issue_comment>username_1: This answer mostly assumes you are referring to computer-game-playing bots that learn through experiencing play, such as Deep Mind's DQN as used for playing Atari console games. State of the art for these are typically [Reinforcement Learning](https://en.wikipedia.org/wiki/Reinforcement_learning) algorithms, used with neural networks to process input and estimate results of next actions. There are other competitive AI technologies too, and the answer applies generally to most learning or evolving optimisers that would learn through trial and error by playing the game. > > Suppose the game had a variable speed, and speed were essential to evolution/gaining score (IDK AI terminologies). Would the AI be able to figure out when to slow down and speed up? > > > Yes, as long as the game allowed control of something that influenced acceleration, then a learning agent can figure out the consequences of accelerating and braking and use them appropriately within the game. A well-known toy example that can challenge learning agents is called [Mountain Car](https://en.wikipedia.org/wiki/Mountain_car_problem). In that game, the agent has to learn to accelerate in the correct direction (which is not always towards its objective), in order to escape an area. It is considered challenging because the reward (for escaping) can be significantly delayed compared to the action that best enables it. One popular proving ground for learning agents is OpenAI gym.
This includes several game environments with a physical model of acceleration included, such as [Lunar Lander](https://gym.openai.com/envs/LunarLander-v2/). > > If it is able to solve the problem or complete the level, will it have an equation relating acceleration? > > > In general, no. The agent will learn to respond to certain stimuli by taking an action that accelerates or decelerates the game piece that it controls. There will not be any concept like $s = ut + \frac{1}{2} at^2$ encoded in the agent's parameters. > > or perhaps a number for when to speed up and slow down? > > > Typically the agent will learn which stimuli should be responded to by accelerating. For instance, in a game where the agent's game piece is being chased by an enemy piece and the enemy piece is getting closer, the agent should learn that it will get a better reward if it accelerates away from the enemy. > > What if the game environment was dynamic? > > > Most game environments are dynamic, as in the state changes over time. If you mean would anything change if the game rules themselves varied over time, then this may cause interesting problems for some learning algorithms, but should not change anything about learning use of controls that affect acceleration in a virtual world. > > Can you even teach math to an AI? > > > Generally, no, you cannot teach math to the kind of system that plays games or interacts with real-world objects. These kinds of learning systems are not yet advanced enough to learn concepts or establish game-world logic from interactions. Instead, they work more akin to perception, muscle memory and either inherent or learned reflexes. Exceptions to this will generally have a world model (with the necessary equations) built in or made available to the agent without it needing to learn anything. However, there are AI systems that use formal logic that can work on mathematical theories.
Some have performed interesting feats such as "discovering" prime numbers, given formal definitions of integers and basic arithmetic. An example of this kind of system is the [Automated Mathematician](https://en.wikipedia.org/wiki/Automated_Mathematician). There is an intermediate possibility: Some learning agents not only learn the *value* or *best policy* for certain actions, they also learn to predict what should happen next to the state of the environment. Such an agent would include a model that could observe objects that were accelerating and predict their future positions. In some ways this *is* a learned concept of acceleration, although it would not be expressed mathematically like Newton's laws of motion, and is more akin to the kind of intuition that allows a person to track, anticipate and catch a thrown ball. Upvotes: 3 [selected_answer]<issue_comment>username_2: Methods such as [Genetic Programming](http://www.gp-field-guide.org.uk/?m=1) can induce symbolic expressions from observations. Indeed, such methods can be used as an alternative to neural (or other nonsymbolic) approaches when the goal is not merely to find a function that fits the data well, but which also has a chance of giving a human-readable description of the learned function in terms of whatever mathematical functions the user chooses to supply the learning method with. These methods are even [used commercially](https://link.springer.com/content/pdf/10.1023/A:1011516717456.pdf) for knowledge discovery. Upvotes: 2
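As a minimal illustration of the reinforcement-learning answer above, here is a tabular Q-learning-style sketch on an invented one-dimensional chase; states, rewards, and hyperparameters are all made up, and the updates are applied in deterministic sweeps (closer to value iteration than to on-line exploration) so the result is reproducible:

```python
# Toy chase: the state is the gap to a pursuing enemy; "accelerate"
# widens it (+1 reward), "coast" lets the enemy close in (-1 reward).
actions = ("accelerate", "coast")
states = ("near", "far")
q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma = 0.5, 0.9   # learning rate and discount (made-up values)

def step(state, action):
    """Toy transition: accelerating escapes, coasting gets caught up."""
    return ("far", 1.0) if action == "accelerate" else ("near", -1.0)

for _ in range(300):
    for s in states:
        for a in actions:
            next_s, reward = step(s, a)
            best_next = max(q[(next_s, b)] for b in actions)
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])

# Accelerating ends up preferred in every state (values converge to 10 vs 8),
# without any explicit equation of motion ever being represented.
print(q[("near", "accelerate")] > q[("near", "coast")])  # True
```

Note how the learned table encodes only "which stimulus should be met with which action", not any symbolic relation between speed and position, matching the point made in the answer.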
2018/02/27
1,754
7,076
<issue_start>username_0: The situation ============= I am referring to the paper [<NAME> et al, "Continuous control with deep reinforcement learning"](https://arxiv.org/abs/1509.02971), where they discuss deep learning in the context of continuous action spaces ("Deep Deterministic Policy Gradient"). Based on the DPG approach ("Deterministic Policy Gradient", see [D. Silver et al, "Deterministic Policy Gradient Algorithms"](http://proceedings.mlr.press/v32/silver14.pdf)), which employs two neural networks to approximate the actor function `mu(s)` and the critic function `Q(s,a)`, they use a similar structure. However, one characteristic they found is that, in order to make the learning converge, it is necessary to have two additional "target" networks `mu'(s)` and `Q'(s,a)`, which are used to calculate the target ("true") value of the reward: ``` y_t = r(s_t, a) + gamma * Q'(s_t1, mu'(s_t1)) ``` Then, after each training step, a "soft" update of the target weights `w_mu', w_Q'` towards the actual weights `w_mu, w_Q` is performed: ``` w' = (1 - tau)*w' + tau*w ``` where `tau << 1`. According to the paper: > > This means that the target values are constrained to change slowly, greatly improving the stability of learning. > > > So the target networks `mu'` and `Q'` are used to predict the "true" (target) value of the expected reward, which the other two networks try to approximate during the learning phase. They sketch the training procedure as follows: [![DDPG Sketch](https://i.stack.imgur.com/BBNRP.png)](https://i.stack.imgur.com/BBNRP.png) The question ============ So my question now is: after the training is complete, which of the two networks, `mu` or `mu'`, should be used for making predictions? Analogously to the training phase, I suppose that `mu` should be used without the exploration noise, but since it is `mu'` that is used during training to predict the "true" (noise-free) action for the reward computation, I'm apt to use `mu'`. Or does this even matter?
If the training were to last long enough, shouldn't both versions of the actor have converged to the same state?<issue_comment>username_1: This answer mostly assumes you are referring to computer-game-playing bots that learn through experiencing play, such as Deep Mind's DQN as used for playing Atari console games. State of the art for these is typically [Reinforcement Learning](https://en.wikipedia.org/wiki/Reinforcement_learning) algorithms, used with neural networks to process input and estimate results of next actions. There are other competitive AI technologies too, and the answer applies generally to most learning or evolving optimisers that would learn through trial and error by playing the game. > > If the game had a variable speed and was essential in evolution/gaining score (IDK AI terminologies). Would the AI be able to figure out when to slow down and speed up? > > > Yes, as long as the game allowed control of something that influenced acceleration, then a learning agent can figure out the consequences of accelerating and braking and use them appropriately within the game. A well-known toy example that can challenge learning agents is called [Mountain Car](https://en.wikipedia.org/wiki/Mountain_car_problem). In that game, the agent has to learn to accelerate in the correct direction (which is not always towards its objective), in order to escape an area. It is considered challenging because the reward (for escaping) can be significantly delayed compared to the action that best enables it. One popular proving ground for learning agents is OpenAI Gym. This includes several game environments with a built-in physical model of acceleration, such as [Lunar Lander](https://gym.openai.com/envs/LunarLander-v2/). > > If it is able to solve the problem or complete the level, will it have an equation relating to acceleration? > > > In general, no.
The agent will learn to respond to certain stimuli, by taking an action that accelerates or decelerates the game piece that it controls. There will not be any concept like $s = ut + \frac{1}{2} at^2$ encoded in the agent's parameters. > > or perhaps a number on when to speed up and down? > > > Typically the agent will learn which stimuli should be responded to by accelerating. For instance, in a game where the agent's game piece is being chased by an enemy piece and the enemy piece is getting closer, the agent should learn that it will get a better reward if it accelerates away from the enemy. > > What if the game environment was dynamic? > > > Most game environments are dynamic, as in the state changes over time. If you mean would anything change if the game rules themselves varied over time, then this may cause interesting problems for some learning algorithms, but should not change anything about learning the use of controls that affect acceleration in a virtual world. > > Can you even teach math to an AI? > > > Generally, no, you cannot teach math to the kind of system that plays games, or interacts with real world objects. These kinds of learning systems are not yet advanced enough to learn concepts or establish game world logic from interactions. Instead, they work more akin to perception, muscle memory and either inherent or learned reflexes. Exceptions to this will generally have a world model (with the necessary equations) built in or made available to the agent without it needing to learn anything. However, there are AI systems that use formal logic that can work on mathematical theories. Some have performed interesting feats such as "discovering" prime numbers, given formal definitions of integers and basic arithmetic. An example of this kind of system is the [Automated Mathematician](https://en.wikipedia.org/wiki/Automated_Mathematician).
There is an intermediate possibility: Some learning agents not only learn the *value* or *best policy* for certain actions, they also learn to predict what should happen next to the state of the environment. Such an agent would include a model that could observe objects that were accelerating and predict their future positions. In some ways this *is* a learned concept of acceleration, although it would not be expressed mathematically like Newton's laws of motion, and is more akin to the kind of intuition that allows a person to track, anticipate and catch a thrown ball. Upvotes: 3 [selected_answer]<issue_comment>username_2: Methods such as [Genetic Programming](http://www.gp-field-guide.org.uk/?m=1) can induce symbolic expressions from observations. Indeed, such methods can be used as an alternative to neural (or other nonsymbolic) approaches when the goal is not merely to find a function that fits the data well, but which also has a chance of giving a human-readable description of the learned function in terms of whatever mathematical functions the user chooses to supply the learning method with. These methods are even [used commercially](https://link.springer.com/content/pdf/10.1023/A:1011516717456.pdf) for knowledge discovery. Upvotes: 2
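The trial-and-error learning of when to accelerate described in the answer above can be illustrated with a minimal tabular Q-learning sketch. The environment, rewards, and hyperparameters here are all invented for illustration: the state is the current speed, the actions are accelerate (+1) or brake (-1), and exceeding a speed limit ends the run with a penalty.

```python
import random

# Toy 1-D "car" task: reach position GOAL without exceeding speed LIMIT.
GOAL, LIMIT = 10, 3

def step(pos, vel, action):
    vel = max(0, vel + action)
    pos = pos + vel
    if vel > LIMIT:
        return pos, vel, -10.0, True   # "crashed": went too fast
    if pos >= GOAL:
        return pos, vel, 10.0, True    # reached the goal
    return pos, vel, -1.0, False       # small time penalty each step

random.seed(0)
Q = {v: {1: 0.0, -1: 0.0} for v in range(LIMIT + 2)}  # Q[speed][action]
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(500):                   # episodes of epsilon-greedy Q-learning
    pos, vel = 0, 0
    for _ in range(50):                # cap the episode length
        if random.random() < eps:
            a = random.choice([1, -1])
        else:
            a = max(Q[vel], key=Q[vel].get)
        npos, nvel, r, done = step(pos, vel, a)
        target = r if done else r + gamma * max(Q[nvel].values())
        Q[vel][a] += alpha * (target - Q[vel][a])
        pos, vel = npos, nvel
        if done:
            break
# The table now encodes "accelerate from standstill, brake at the limit".
```

Note that the learned Q-table only encodes which action is better in each state; no equation of motion appears anywhere, matching the point made in the answer.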
2018/02/28
874
3,059
<issue_start>username_0: I'd like to generate subtitles for a silent film. Is there an open source project out there capable of creating captions based on a series of images (such as a scene from a movie)? EDIT: thanks for the comments below. To clarify, what I'm looking for is an algorithm which can generate a caption for a sequence of images within a movie describing what happens in the sequence. This is for preliminary research, so accuracy is less important.<issue_comment>username_1: Here is one open-source implementation. [Temporal Tessellation: A Unified Approach for Video Analysis](https://github.com/dot27/temporal-tessellation) For more you can dig into some research publications and see if they give a link to their implementation. Most researchers make their work publicly available. Here is a list of publications that present their work related to video captioning using machine learning. [Awesome Deep Vision](https://github.com/kjw0612/awesome-deep-vision#video-captioning) Here is another publication that shows how to generate captions for videos. [Video Paragraph Captioning Using Hierarchical Recurrent Neural Networks](https://www.ics.uci.edu/~yyang8/research/video-caption/video-caption.pdf) Upvotes: 0 <issue_comment>username_2: No, even generating a brief summary of a video is beyond the current state of the art. Training such a model is a tough task. Video understanding is still far from solved. But you could try generating descriptions of some keyframes of the video, and aligning them to form meaningful passages. Check this: <https://arxiv.org/abs/1611.06607> Upvotes: 0 <issue_comment>username_3: What you might want to look for is called **video captioning**. Earlier examples from this line of research are: * Venugopalan et al. [Sequence to Sequence – Video to Text](https://arxiv.org/pdf/1505.00487.pdf), in ICCV 2015 * Pasunuru et al.
[Multi-Task Video Captioning with Video and Entailment Generation](http://aclweb.org/anthology/P/P17/P17-1117.pdf), in ACL 2017 Below is a screenshot of the results (positive and negative) reported in those papers: [![Example from Venugopalan1](https://i.stack.imgur.com/neTfw.png)](https://i.stack.imgur.com/neTfw.png) [![Example from Pasunuru](https://i.stack.imgur.com/qO1XT.png)](https://i.stack.imgur.com/qO1XT.png) For the ICCV paper, it's not hard to find some implementations, e.g. [here](https://github.com/vijayvee/video-captioning). For more recent results, I would suggest to look into the [ActivityNet 2017 Challenge - dense captioning](http://activity-net.org/challenges/2017/captioning.html) or its [2018 version](http://activity-net.org/challenges/2018/tasks/anet_captioning.html). Some winning solutions include: * [Jointly Localizing and Describing Events for Dense Video Captioning](https://arxiv.org/pdf/1804.08274.pdf) * [RUC+CMU: System Report for Dense Captioning Events in Videos](https://arxiv.org/pdf/1806.08854.pdf) However I am not sure whether any open-source implementation has been released. Upvotes: 2 [selected_answer]
2018/03/01
754
3,410
<issue_start>username_0: As far as I understand, neural networks aren't good at classifying 'unknowns', i.e. objects that do not belong to a learned class. But how do face detection/recognition approaches usually determine that no face is detected/recognised in a region? Is the predicted probability somehow thresholded? I'm asking because my application will involve identifying unknown objects. In fact, most of the input objects are unknown and only a fraction is known.<issue_comment>username_1: ### Summary It is true that neural networks are inherently not good at classifying 'unknowns' because they tend to ***overfit*** to the data that they have been trained on, if the underlying structure of the neural network is complex enough. However, there are multiple ways to go about reducing the effects of ***overfitting***. For example, one technique that is used for this is called ***dropout***. Another example can be ***batch normalization***. Despite these techniques, the best way to reduce the effects of ***overfitting*** is to use *more data*. For the facial recognition example that you have given above, it is common that the models that have been trained have 'seen' a huge amount of data. This means that there are very few 'unknowns' and even if there are, the neural network has learned how to tell if there are facial features present or not. This is because certain structures of neural networks are really good at telling if there is a pattern of features present in the input data. This helps the neural networks to learn if the image that is being input has certain features/patterns in it or not. If these features are found, then the input data is classified as a face; otherwise it is not. ### What can you do in your case? Let us assume that you are going to train your neural network to recognize if an input image is a cat or not. You will use a ***Convolutional Neural Network (CNN)*** and train it to recognize if the input is a cat or ***not***.
The ***not*** part means that you have to include a lot of examples in your training data that are ***not*** a cat. In the perfect case you will be able to show it everything that is ***not*** a cat and classify it as such. Also, you show it multiple images of what a cat is. ***CNNs*** are really great for this application. You might want to do some research on this and see what kind of ***CNN*** best suits your application. If you don't have a gazillion samples of what a cat is ***not***, then you can use regularization techniques like ***dropout*** and ***batch normalization***. *PS: For more details please mention what strategies you have used up till now. Also, it would be better if you could share what your desired task is.* Upvotes: 3 [selected_answer]<issue_comment>username_2: I would like to highlight an important step for face recognition, which is feature extraction. Based on my experience, you can evaluate robust feature extraction methods like Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) using several matching approaches such as Brute Force Matcher, K-Nearest Neighbor (KNN), Best-Bin-First (BBF) and RANdom SAmple Consensus (RANSAC). The purpose is to identify the method(s) that is/are most appropriate to be used in your application. Then comes your machine learning model, for which you need to test several model options, as mentioned in the previous answer. Upvotes: 0
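One common way to operationalise the "threshold the predicted probability" idea raised in the question is to reject low-confidence predictions as "unknown": if the classifier's highest class probability is below a chosen threshold, refuse to assign any known class. A minimal sketch (the labels, scores, and threshold value are invented for illustration):

```python
# Reject predictions whose top class probability is below a threshold.

def classify_with_rejection(probs, labels, threshold=0.7):
    best = max(range(len(probs)), key=lambda i: probs[i])
    return labels[best] if probs[best] >= threshold else "unknown"

labels = ["face", "cat", "dog"]
confident = classify_with_rejection([0.90, 0.06, 0.04], labels)  # "face"
uncertain = classify_with_rejection([0.40, 0.35, 0.25], labels)  # "unknown"
```

In practice the threshold would be tuned on validation data, and better-calibrated probability scores make this kind of rejection more reliable.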
2018/03/02
4,080
15,742
<issue_start>username_0: It is said that activation functions in neural networks help introduce ***non-linearity***. * What does this mean? * What does ***non-linearity*** mean in this context? * How does the introduction of this ***non-linearity*** help? * Are there any other purposes of ***activation functions***?<issue_comment>username_1: Let's first talk about [linearity](https://en.wikipedia.org/wiki/Linear_map). Linearity means the map (a function), $f: V \rightarrow W$, used is a linear map, that is, it satisfies the following two conditions 1. $f(x + y) = f(x) + f(y), \; x, y \in V$ 2. $f(c x) = cf(x), \; c \in \mathbb{R}$ You should be familiar with this definition if you have studied linear algebra in the past. However, it's more important to think of linearity in terms of linear separability of data, which means the data can be separated into different classes by drawing a line (or hyperplane, if more than two dimensions), which represents a linear decision boundary, through the data. If we cannot do that, then the data is not linearly separable. Often times, data from a more complex (and thus more relevant) problem setting is not linearly separable, so it is in our interest to model these. To model nonlinear decision boundaries of data, we can utilize a neural network that introduces non-linearity. Neural networks classify data that is not linearly separable by transforming data using some nonlinear function (or our activation function), so the resulting transformed points become linearly separable. Different activation functions are used for different problem setting contexts. You can read more about that in the book [Deep Learning (Adaptive Computation and Machine Learning series)](http://www.deeplearningbook.org). For an example of non linearly separable data, see the XOR data set. [![enter image description here](https://i.stack.imgur.com/JdfWD.png)](https://i.stack.imgur.com/JdfWD.png) Can you draw a single line to separate the two classes? 
Upvotes: 3 <issue_comment>username_2: Consider a very simple neural network, with just 2 layers, where the first has 2 neurons and the last 1 neuron, and the input size is 2. The inputs are $x\_1$ and $x\_2$. The weights of the first layer are $w\_{11}, w\_{12}, w\_{21}$ and $w\_{22}$. We do not have activations, so the outputs of the neurons in the first layer are \begin{align} o\_1 = w\_{11}x\_1 + w\_{12}x\_2 \\ o\_2 = w\_{21}x\_1 + w\_{22}x\_2 \end{align} Let's calculate the output of the last layer with weights $z\_1$ and $z\_2$ $$out = z\_1o\_1 + z\_2o\_2$$ Just substitute $o\_1$ and $o\_2$ and you will get: $$out = z\_1(w\_{11}x\_1 + w\_{12}x\_2) + z\_2(w\_{21}x\_1 + w\_{22}x\_2)$$ or $$out = (z\_1w\_{11} + z\_2 w\_{21})x\_1 + (z\_2w\_{22} + z\_1w\_{12})x\_2$$ And look at this! If we create a NN with just one layer, with weights $z\_1w\_{11} + z\_2 w\_{21}$ and $z\_2w\_{22} + z\_1w\_{12}$, it will be equivalent to our 2-layer NN. The conclusion: without nonlinearity, the computational power of a multilayer NN is equal to that of a 1-layer NN. Also, you can think of the sigmoid function as a differentiable IF statement that gives a probability. And adding new layers can create new, more complex combinations of IF statements. For example, the first layer combines features and gives probabilities that there are eyes, tail, and ears on the picture, the second combines new, more complex features from the last layer and gives the probability that there is a cat. For more information: [Hacker's guide to Neural Networks](http://karpathy.github.io/neuralnets/). Upvotes: 3 <issue_comment>username_3: If you only had linear layers in a neural network, all the layers would essentially collapse to one linear layer, and, therefore, a "deep" neural network architecture effectively wouldn't be deep anymore but just a linear classifier.
$$y = f(W\_1 W\_2 W\_3x) = f(Wx)$$ where $W$ corresponds to the matrix that represents the network weights and biases for one layer, and $f()$ to the activation function. Now, with the introduction of a non-linear activation unit after every linear transformation, this won't happen anymore. $$y = f\_1( W\_1 f\_2( W\_2f\_3( W\_3x)))$$ Each layer can now build on the results of the preceding non-linear layer which essentially leads to a complex non-linear function that is able to approximate every possible function with the right weighting and enough depth/width. Upvotes: 4 <issue_comment>username_4: Almost all of the functionalities provided by the non-linear activation functions are given by other answers. Let me sum them up: * First, what does non-linearity mean? It means something (a function in this case) which is not linear with respect to a given variable/variables i.e. $f(c\_1 x\_1 + c\_2 x\_2 + \cdots + c\_n x\_n + b) \neq c\_1 f(x\_1) + c\_2 f(x\_2) + \cdots + c\_n f(x\_n) + f(b)$. **NOTE:** There is some ambiguity about how one might define linearity. In polynomial equations we define linearity in somewhat a different way as compared to in vectors or some systems which take an input $x$ and give an output $f(x)$. See the second [answer](https://math.stackexchange.com/questions/217551/linearity-of-the-function-fx-ax-b). * What does non-linearity mean in this context? It means that the Neural Network can successfully approximate functions (up to a certain error $e$ decided by the user) which do not follow linearity, or it can successfully predict the class of a function that is divided by a decision boundary that is not linear. * Why does it help? I hardly think you can find any physical world phenomenon which follows linearity straightforwardly. So you need a non-linear function that can approximate the non-linear phenomenon. Also, a good intuition would be that any decision boundary or function is a linear combination of polynomial combinations of the input features (so ultimately non-linear).
* Purposes of activation function? In addition to introducing non-linearity, every activation function has its own features. **Sigmoid $\frac{1} {(1 + e ^ {-(w1\*x1...wn\*xn + b)})}$** This is one of the most common activation functions and is monotonically increasing everywhere. This is generally used at the final output node as it squashes values between 0 and 1 (if the output is required to be `0` or `1`). Thus above 0.5 is considered `1` while below 0.5 as `0`, although a different threshold (not `0.5`) may be set. Its main advantage is that its differentiation is easy and uses already calculated values, and supposedly horseshoe crab neurons have this activation function in their neurons. **Tanh** $\frac{e ^ {(w1\*x1...wn\*xn + b)} - e ^ {-(w1\*x1...wn\*xn + b)}}{e ^ {(w1\*x1...wn\*xn + b)} + e ^ {-(w1\*x1...wn\*xn + b)}}$ This has an advantage over the sigmoid activation function as it tends to centre the output to 0, which has an effect of better learning on the subsequent layers (acts as a feature normaliser). A nice explanation [here](https://stats.stackexchange.com/questions/101560/tanh-activation-function-vs-sigmoid-activation-function). Negative and positive output values may be considered as `0` and `1` respectively. Used mostly in RNN's. **[Re-Lu activation function](https://en.wikipedia.org/wiki/Rectifier_(neural_networks))** - This is another very common simple non-linear (linear in positive range and negative range exclusive of each other) activation function that has the advantage of removing the problem of vanishing gradient faced by the above two, i.e. the gradient tends to `0` as x tends to +infinity or -infinity. [Here](https://ai.stackexchange.com/questions/5601/mathematical-intuition-for-the-use-of-re-lus-in-machine-learning) is an answer about Re-Lu's approximation power in spite of its apparent linearity. ReLu's have a disadvantage of having dead neurons, which result in larger NN's.
Also, you can design your own activation functions depending on your specialized problem. You may have a quadratic activation function which will approximate quadratic functions much better. But then, you have to design a cost function that should be somewhat convex in nature, so that you can optimise it using first-order differentials and the NN actually converges to a decent result. This is the main reason why standard activation functions are used. But I believe with proper mathematical tools, there is a huge potential for new and eccentric activation functions. For example, say you are trying to approximate a single-variable quadratic function, say $a.x^2 + c$. This will be best approximated by a quadratic activation $w1.x^2 + b$ where $w1$ and $b$ will be the trainable parameters. But designing a loss function that follows the conventional first-order derivative method (gradient descent) can be quite tough for non-monotonically increasing functions. **For Mathematicians:** In the sigmoid activation function $1 / (1 + e ^ {-(w1\*x1...wn\*xn + b)})$ we see that $e ^ {-(w1\*x1...wn\*xn + b)}$ is always **<** `1`. By binomial expansion, or by reverse calculation of the infinite GP series, we get $sigmoid(y) = 1 - y + y^2 - y^3 + \cdots$. Now in a NN $y = e ^ {-(w1\*x1...wn\*xn + b)}$. Thus we get all the powers of $y$, which is equal to $e ^ {-(w1\*x1...wn\*xn + b)}$, thus each power of $y$ can be thought of as a multiplication of several decaying exponentials based on a feature $x$, for example $y^2 = e^ {-2(w1x1)} \* e^ {-2(w2x2)} \* e^ {-2(w3x3)} \*...... e^ {-2(b)}$. Thus each feature has a say in the scaling of the graph of $y^2$. Another way of thinking would be to expand the exponentials according to the Taylor Series: $$e^{x}=1+\frac{x}{1 !}+\frac{x^{2}}{2 !}+\frac{x^{3}}{3 !}+\cdots$$ So we get a very complex combination, with all the possible polynomial combinations of input variables present.
I believe if a Neural Network is structured correctly, the NN can fine-tune these polynomial combinations by just modifying the connection weights, selecting the maximally useful polynomial terms, and rejecting terms by subtracting the output of 2 nodes weighted properly. The $tanh$ activation can work in the same way since $|tanh| < 1$. I am not sure how Re-Lu's work though, but due to their rigid structure and the problem of dead neurons we require larger networks with ReLu's for a good approximation. But for a formal mathematical proof, one has to look at the Universal Approximation Theorem. * [A visual proof that neural nets can compute any function](http://neuralnetworksanddeeplearning.com/chap4.html) * [The Universal Approximation Theorem For Neural Networks- An Elegant Proof](http://mcneela.github.io/machine_learning/2017/03/21/Universal-Approximation-Theorem.html) For non-mathematicians, visit these links for some better insights: [Activation Functions by Andrew Ng - for more formal and scientific answer](https://www.coursera.org/learn/neural-networks-deep-learning/lecture/4dDC1/activation-functions) [How does neural network classifier classify from just drawing a decision plane?](https://ai.stackexchange.com/questions/3964/how-does-neural-network-classifier-classify-from-just-drawing-a-decision-plane/5225#5225) [Differentiable activation function](https://ai.stackexchange.com/questions/2526/differentiable-activation-function) [A visual proof that neural nets can compute any function](http://neuralnetworksanddeeplearning.com/chap4.html) Upvotes: 6 [selected_answer]<issue_comment>username_5: **First Degree Linear Polynomials** Non-linearity is not the correct mathematical term. Those that use it probably intend to refer to a first degree polynomial relationship between input and output, the kind of relationship that would be graphed as a straight line, a flat plane, or a higher degree surface with no curvature.
To model relations more complex than *y = a1x1 + a2x2 + ... + b*, more than just those two terms of a Taylor series approximation is needed. **Tune-able Functions with Non-zero Curvature** Artificial networks such as the multi-layer perceptron and its variants are matrices of functions with non-zero curvature that, when taken collectively as a circuit, can be tuned with attenuation grids to approximate more complex functions of non-zero curvature. These more complex functions generally have multiple inputs (independent variables). The attenuation grids are simply matrix-vector products, the matrix being the parameters that are tuned to create a circuit that approximates the more complex curved, multivariate function with simpler curved functions. Oriented with the multi-dimensional signal entering at the left and the result appearing on the right (left-to-right causality), as in the electrical engineering convention, the vertical columns are called layers of activations, mostly for historical reasons. They are actually arrays of simple curved functions. The most commonly used activations today are these. * ReLU * Leaky ReLU * ELU * Threshold (binary step) * Logistic The identity function is sometimes used to pass through signals untouched for various structural convenience reasons. These are less used but were in vogue at one point or another. They are still used but have lost popularity because they place additional overhead on back propagation computations and tend to lose in contests for speed and accuracy. * Softmax * Sigmoid * TanH * ArcTan The more complex of these can be parametrized and all of them can be perturbed with pseudo-random noise to improve reliability. **Why Bother With All of That?** Artificial networks are not necessary for tuning well developed classes of relationships between input and desired output. For instance, these are easily optimized using well developed optimization techniques. 
* Higher degree polynomials — Often solvable using techniques derived directly from linear algebra * Periodic functions — Can be treated with Fourier methods * Curve fitting — converges well using the Levenberg–Marquardt algorithm, a damped least-squares approach For these, approaches developed long before the advent of artificial networks can often arrive at an optimal solution with less computational overhead and more precision and reliability. Where artificial networks excel is in the acquisition of functions about which the practitioner is largely ignorant or the tuning of the parameters of known functions for which specific convergence methods have not yet been devised. Multi-layer perceptrons (ANNs) tune the parameters (attenuation matrix) during training. Tuning is directed by gradient descent or one of its variants to produce a digital approximation of an analog circuit that models the unknown functions. The gradient descent is driven by some criteria toward which circuit behavior is driven by comparing outputs with that criteria. The criteria can be any of these. * Matching labels (the desired output values corresponding to the training example inputs) * The need to pass information through narrow signal paths and reconstruct from that limited information * Another criterion inherent in the network * Another criterion arising from a signal source from outside the network **In Summary** In summary, activation functions provide the building blocks that can be used repeatedly in two dimensions of the network structure so that, combined with an attenuation matrix to vary the weight of signaling from layer to layer, they are able to approximate an arbitrary and complex function. **Deeper Network Excitement** The post-millennial excitement about deeper networks is because the patterns in two distinct classes of complex inputs have been successfully identified and put into use within larger business, consumer, and scientific markets. 1.
Heterogeneous and semantically complex structures 2. Media files and streams (images, video, audio) Upvotes: 2
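The "linear layers collapse" derivation given in one of the answers above can also be checked numerically: a 2-layer purely linear network computes exactly the same function as a single layer with the combined weights. The weight values below are arbitrary:

```python
# Numerical check: stacked linear layers are equivalent to one linear layer.
w11, w12, w21, w22 = 0.5, -1.0, 2.0, 0.25   # first-layer weights
z1, z2 = 1.5, -0.5                          # second-layer weights

def two_layer(x1, x2):
    o1 = w11 * x1 + w12 * x2
    o2 = w21 * x1 + w22 * x2
    return z1 * o1 + z2 * o2

def one_layer(x1, x2):
    # single layer with the collapsed weights from the derivation above
    return (z1 * w11 + z2 * w21) * x1 + (z2 * w22 + z1 * w12) * x2
```

Inserting any non-linear activation between the two layers breaks this equivalence, which is exactly why depth becomes useful.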
2018/03/03
871
3,599
<issue_start>username_0: Does NEAT require only connection genes to be marked with a global innovation number? From the [NEAT paper](http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf) > > Whenever a new gene appears (through structural mutation), a global innovation number is incremented and assigned to that gene. > > > It seems that any gene (both node genes and connection genes) requires an innovation number. However, I was wondering what the node gene innovation number was for. Is it to provide the same node ID across all elements of the population? Isn't the connection gene innovation number sufficient? Besides, the NEAT paper includes the following image, which doesn't show any innovation number on node genes. [![enter image description here](https://i.stack.imgur.com/0xkhX.png)](https://i.stack.imgur.com/0xkhX.png)<issue_comment>username_1: It is actually the other way around: connection IDs are what is debated! Nodes always have innovation IDs (in the image, it is just their identifying number). Node IDs are sufficient to identify connections. If a connection links nodes 3 and 6, then it is the same as another connection linking nodes 3 and 6: no need for an extra ID. So why the extra innovation IDs then? On the one hand, this is an implementation choice: maybe these extra IDs would allow you to write more complex but faster code? On the other hand, there is a debate around whether a connection between two nodes means the same thing at different times in evolution. If you have no innovation IDs, then you cannot tell apart an old connection between nodes 3 and 6 and another that was independently created later in a different genome (imagine the old connection was removed first). Is this relevant? As said, it is an open debate. Surely, it is not crucial at a basic level! This question (and my answers) is related to [this other question](https://stackoverflow.com/a/49625345/3924118) on Stack Overflow.
Upvotes: 2 <issue_comment>username_2: In the original paper the innovation ID is on the connections only. The connection is the object that keeps the information; nodes can be discerned from the connections. [![enter image description here](https://i.stack.imgur.com/lmBlH.png)](https://i.stack.imgur.com/lmBlH.png) This image represents a possible crossover operator that makes a distinction between disjoint and excess genes and therefore creates children depending on these. It's part of my master's thesis; I'd be glad to expand on the topic, but for now I'll just use it as an example. In the image we assume that the connections that are joining two nodes have the same innovation number. As you can see, there is no need to assign an innovation number to the nodes: nodes are just a result of what the connections say. This also allows for a more dynamic approach that can be used to spot invalid nets even before building them (checking if there are cycles, or nodes that aren't receiving any input or giving any output) and correct them in order to obtain only valid graphs. Nodes are added just because there is a connection that is pointing to that specific node. This is enough to grant its presence (node number 8 in child 2). As a last point, per data normalization theory (*normalization is a process of organizing the data in a database to avoid data redundancy, insertion anomalies, update anomalies & deletion anomalies*) we should avoid redundancy at any cost, and this is why we should try to keep track of the smallest number of objects possible. So if we can deduce the nodes from the connections, we should do it. Upvotes: 1
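The point made above, that node genes can be deduced from the connection genes alone, can be sketched as follows (the field names and gene values are invented for illustration, not taken from any particular NEAT implementation):

```python
# A NEAT-style genome represented only by its connection genes.
connections = [
    {"in": 1, "out": 4, "innov": 1, "enabled": True},
    {"in": 2, "out": 4, "innov": 2, "enabled": True},
    {"in": 3, "out": 4, "innov": 3, "enabled": False},  # split by add-node
    {"in": 3, "out": 5, "innov": 4, "enabled": True},
    {"in": 5, "out": 4, "innov": 5, "enabled": True},
]

def nodes_of(conns):
    """Derive the node set purely from the connection endpoints."""
    return sorted({c["in"] for c in conns} | {c["out"] for c in conns})
```

Here node 5 exists in the genome only because connections reference it, so no separate node innovation number is strictly required.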
2018/03/03
504
1,786
<issue_start>username_0: In the add node mutation, the connection between two chosen nodes (e.g. A and B) is first disabled and then a new node is created between A and B with their respective two connections. I guess that the former A-B connection can be re-enabled via crossover (is that right?). Can the former A-B connection also be re-enabled via mutation (e.g. "add connection")?<issue_comment>username_1: Yes, the original gene is disabled, but is left in the genome. This can be seen on page 10, figure 3 of the paper linked (taken from the original paper [NEAT Paper](http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf)) where gene 3 is disabled, but not removed from the genome. This gene can be re-enabled by receiving the gene with the identical innovation number from a mating partner with the gene enabled during crossover. The original paper does not mention a mutation to re-enable genes, [but various other publications](https://www.cs.ox.ac.uk/people/shimon.whiteson/pubs/ethembabaoglutr08.pdf) and implementations after the original paper do. This is desirable for a number of reasons. A re-enable mutation allows for dropout to be used in the implementation. It is also possible that certain genomes are disabling genes too quickly, and this can help to correct for that. Upvotes: 3 [selected_answer]<issue_comment>username_2: Yes. The mutation can either disable or enable a gene. It's in the original NEAT implementation released by Dr. <NAME>. > > Declared in `genetics.h`: > > > > ``` > void mutate_toggle_enable(int times); /* toggle genes on or off */ > void mutate_gene_reenable(); /* Find first disabled gene and enable it */ > > ``` > > <http://nn.cs.utexas.edu/soft-view.php?SoftID=4> <http://nn.cs.utexas.edu/?neat-c> Upvotes: 1
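A minimal sketch (all data structures invented for illustration) of the add-node mutation described in the question, plus a re-enable mutation in the spirit of `mutate_gene_reenable` from the answers above:

```python
# Disable the chosen A->B gene, insert A->C and C->B, then later
# re-enable the first disabled gene in the genome.

def add_node(conns, idx, new_node, next_innov):
    """Add-node mutation: split conns[idx] with a new node in between."""
    old = conns[idx]
    old["enabled"] = False
    conns.append({"in": old["in"], "out": new_node,
                  "innov": next_innov, "enabled": True})
    conns.append({"in": new_node, "out": old["out"],
                  "innov": next_innov + 1, "enabled": True})
    return next_innov + 2            # two new innovation numbers consumed

def reenable_first_disabled(conns):
    """Find the first disabled gene and enable it again."""
    for c in conns:
        if not c["enabled"]:
            c["enabled"] = True
            return True
    return False

genome = [{"in": "A", "out": "B", "innov": 1, "enabled": True}]
next_innov = add_node(genome, 0, "C", next_innov=2)
reenabled = reenable_first_disabled(genome)  # former A-B link active again
```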
2018/03/04
<issue_start>username_0: I am seeking information on this kind of chatbot architecture: there are two chatbots. One plays the role of teacher, and the other is a student who is learning. The goal is to test the student's quality and to improve the student's ability. I didn't find many references. There is *Bottester: Testing Conversational Systems with Simulated Users*, and [ParlAI](http://parl.ai/static/docs/basic_tutorial.html#), a Python-based platform for enabling dialog AI research, which has the notion of a "Teacher agent" and seems to be what I am looking for. Of course, we also have deep reinforcement learning, which might be related, but I would prefer some classical references for this approach to chatbots; currently, reinforcement learning is not under consideration. Constructing two chatbots talking to each other, like what Facebook did, is not what I want, because in that case both of them are student agents.
2018/03/06
<issue_start>username_0: Google Analytics allows me to collect data about every web session. For simplicity, let's assume that for each user we collect the number of pages and the time spent on site for each session:

```
user_id  visit_id  page_views  time_spent  result
1        1         10          100         0
1        2         31          510         0
1        3         1           10          1
```

How would you model this data? What I would like the **ML algorithm to do**: 1. Extract as much information as possible 2. Have a flexible number of inputs (e.g. the number of sessions can go to infinity) **What I can think of**: 1. Aggregate the data per user, e.g. average page\_views or total page\_views, and feed it into a general algorithm, e.g. a random forest (but I lose information with aggregation) 2. Use an LSTM and feed in at most the last 3 visits (this will also lose information, but would it perform better than aggregation?) **Goal:** to build a predictive model that analyses all user sessions and predicts whether the person will convert or not.<issue_comment>username_1: I understand that in your example you are interested in modelling the outcome of the 'result' column. One easy model I would suggest is the Bernoulli distribution (<https://en.wikipedia.org/wiki/Bernoulli_distribution>) with probability of success *p*. Then you can model *p* with something like `x = a + b * log(page_views) + c * log(time_spent) + e` and `p = exp(x) / (1 + exp(x))`, where *e* is normally distributed, e ~ N(0, sigma^2) (or simply centered around zero), and a, b and c are parameters that you can estimate. I.e. the probability of success (conversion) is modeled as a sigmoid function of a variable that depends on page\_views and time\_spent. You can also add squares (and higher powers) of page\_views and time\_spent to the equation for *x* (i.e. up to a certain threshold page\_views can have a positive effect on conversion, then a negative one, then again a positive effect).
Also, reading about logistic regression should put you on the right track: <https://en.wikipedia.org/wiki/Logistic_regression> Upvotes: 1 <issue_comment>username_2: I would model the input data as a 3d tensor (user, timestep, features) [the organization depends on which DL framework you use]. A 3d tensor (user, timestep, result) is also appropriate for the output data. The next step would be to train an LSTM or CNN model to predict the result (which requires a lot of data); that would be my first choice. If you have less data, try logistic regression as suggested in the other answer. Good luck! Upvotes: 0
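The log-linear model proposed in the first answer can be sketched as follows. The coefficient values `a`, `b`, `c` are made-up placeholders for illustration; in practice you would estimate them from the labelled sessions, e.g. by maximum likelihood with a logistic regression fitter.

```python
import math

# Placeholder coefficients -- NOT fitted values, just an illustration.
a, b, c = -3.0, 0.8, 0.4

def p_convert(page_views, time_spent):
    """Conversion probability: x = a + b*log(page_views) + c*log(time_spent),
    then p = exp(x) / (1 + exp(x)), i.e. the sigmoid of x."""
    x = a + b * math.log(page_views) + c * math.log(time_spent)
    return 1.0 / (1.0 + math.exp(-x))
```

With positive `b` and `c`, longer and deeper sessions get a higher predicted conversion probability, which matches the intuition in the answer.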
2018/03/06
<issue_start>username_0: There are a lot of papers that show that neural networks can approximate a wide variety of functions. However, I can't find papers that show the limitations of NNs. What are the limitations of neural networks? Which functions can't neural networks learn efficiently (or using gradient descent)? I am also looking for links to papers that describe these limitations.<issue_comment>username_1: One of the important qualifications of the Universal approximation theorem is that the neural network approximation may be computationally infeasible. > > "A feedforward network with a single layer is sufficient to represent any function, but **the layer may be infeasibly large and may fail to learn and generalize correctly.**" - <NAME>, [DLB](http://www.deeplearningbook.org/contents/mlp.html) > > > I can't think of any function that I would definitively declare as unlearnable, but neural networks have many problems. Consider [adversarial examples](https://arxiv.org/abs/1312.6199) and [adversarial patches](https://arxiv.org/abs/1712.09665), which highlight the poor generalization going on under the hood of recent advances in computer vision. Neural Networks are also inherently limited by the innate priors baked into their architecture and the sample density of their training data. Check out [this recent discussion](https://www.youtube.com/watch?v=fKk9KhGRBdI) at Stanford's AI Salon between <NAME> and <NAME> on innate priors if that is the kind of limitation you are talking about. Upvotes: 3 <issue_comment>username_2: This answer depends very much on the type of neural network and the algorithm used for training. If you are using gradient descent on a neural network with one input layer, one output layer, and no hidden layers, there are many functions that you can't learn. One simple one is the XOR function. Because XOR is not linearly separable, it cannot be represented by a neural network with no hidden layers.
If you are using NEAT to build recurrent neural networks, then all functions(\*\*) can be represented given enough time and data. This is due in part to the fact that recurrent neural networks are Turing complete. One of the biggest limitations of neural networks stems from the difficulty of interpreting what the network is doing. The network gradually builds up an understanding of the function as it goes from the input layer to the output layer, but it is very difficult for us to understand this building-up process and interpret what the neural network is attempting to do. This makes it very hard, if not impossible, to manually tweak your neural network in a meaningful way. Another limitation is the need for large amounts of training data in order to have a meaningful representation of your data. Neural networks tend to need large amounts of data before converging to a meaningful hypothesis space. This has resulted in clever algorithms to generate training data without needing human interaction, such as Generative Adversarial Networks, but the underlying problem remains. \*\* Not all functions can be computed by neural networks; however, all computable functions can be. An example of an uncomputable function is the mapping from a program to whether or not that program will halt (the Halting Problem). Upvotes: 2 <issue_comment>username_3: A random function cannot be learned efficiently by any algorithm, in particular neural networks. However, if you are looking for a function with an (exponentially) smaller description size, I do not know for sure, but any function that is conjectured to be average-case hard probably cannot be learned efficiently by neural networks, for example, * Any cryptographic hard-core predicate (related: [Could Deep Learning be used to crack encryption?](https://datascience.stackexchange.com/q/16639/96045)).
* Parity learning with noise (related: [Cryptographic Primitive Based on Hard Learning Problems](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.36.5766&rep=rep1&type=pdf)). * Combinatorial examples: planted SAT, planted clique (related: [Barak's survey](https://eprint.iacr.org/2017/365.pdf)). Upvotes: 2
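To make the XOR point above concrete, here is a hand-wired network with one hidden layer and step activations that computes XOR exactly. The weights are chosen by hand for illustration, not learned by gradient descent; the point is that a single hidden layer suffices, while no zero-hidden-layer perceptron can represent XOR.

```python
def step(x):
    """Heaviside step activation."""
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    """Two hidden units implement OR and NAND; the output unit ANDs them.
    Weights/biases are hand-picked, not trained."""
    h1 = step(x1 + x2 - 0.5)    # OR gate
    h2 = step(-x1 - x2 + 1.5)   # NAND gate
    return step(h1 + h2 - 1.5)  # AND of the two hidden units
```

Evaluating the net on all four input pairs reproduces the XOR truth table.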
2018/03/06
<issue_start>username_0: I've seen these terms thrown around this site a lot, specifically in the tags [convolutional-neural-networks](/questions/tagged/convolutional-neural-networks "show questions tagged 'convolutional-neural-networks'") and [neural-networks](/questions/tagged/neural-networks "show questions tagged 'neural-networks'"). I know that a neural network is a system based loosely on the human brain. But what's the difference between a *convolutional* neural network and a regular neural network? Is one just a lot more complicated and, ahem, *convoluted* than the other?<issue_comment>username_1: A *convolutional* neural network is one that has *convolutional* layers. If a general neural network is, loosely speaking, inspired by a human brain (which isn't very accurate), the convolutional neural network is inspired by the visual cortex system in humans and other animals (which is closer to the truth). As the name suggests, such a layer applies a convolution with a learnable filter (a.k.a. *kernel*); as a result, the network learns the patterns in the images: edges, corners, arcs, then more complex figures. A convolutional neural network may contain other layers as well, commonly pooling and dense layers. I highly recommend the [CS231n tutorial](http://cs231n.github.io/convolutional-networks/#overview) on this matter: it's very detailed and contains a lot of very nice visualizations. Upvotes: 3 <issue_comment>username_2: Convolutional Neural Networks (CNNs) are neural networks with architectural constraints to reduce computational complexity and ensure translational invariance: the network interprets input patterns the same regardless of translation; in terms of image recognition, a banana is a banana regardless of where it is in the image. Convolutional Neural Networks have three important architectural features. ***Local Connectivity:*** Neurons in one layer are only connected to neurons in the next layer that are spatially close to them.
This design trims the vast majority of connections between consecutive layers, but keeps the ones that carry the most useful information. The assumption made here is that the input data has spatial significance; in the example of computer vision, the relationship between two distant pixels is probably less significant than that between two close neighbors. ***Shared Weights:*** This is the concept that makes CNNs "convolutional." By forcing the neurons of one layer to share weights, the forward pass (feeding data through the network) becomes the equivalent of convolving a filter over the image to produce a new image. The training of CNNs then becomes the task of learning filters (deciding what features you should look for in the data.) ***Pooling and ReLU:*** CNNs have two non-linearities: pooling layers and ReLU functions. Pooling layers consider a block of input data and simply pass on the maximum value. Doing this reduces the size of the output and requires no added parameters to learn, so pooling layers are often used to regulate the size of the network and keep the system below a computational limit. The ReLU function takes one input, x, and returns the maximum of {0, x}: `ReLU(x) = max(x, 0)`. This introduces a similar effect to tanh(x) or sigmoid(x) as a non-linearity to increase the model's expressive power.
If you're still curious why CNNs have the structure that they do, I suggest reading [the paper that introduced them](http://www.dengfanxin.cn/wp-content/uploads/2016/03/1998Lecun.pdf), though it is quite long, and perhaps checking out [this discussion](https://www.youtube.com/watch?v=fKk9KhGRBdI) between <NAME> and <NAME> about innate priors (the assumptions we make when we design the architecture of a model). Upvotes: 4 <issue_comment>username_3: **TLDR:** The convolutional-neural-network is a subclass of neural-networks which have at least one convolution layer. They are great for capturing local information (e.g. neighbor pixels in an image or surrounding words in a text) as well as reducing the complexity of the model (faster training, needs fewer samples, reduces the chance of overfitting). See the following chart that depicts several neural-network architectures, including deep convolutional neural networks: [![visualization](https://i.stack.imgur.com/LgmYv.png)](https://i.stack.imgur.com/LgmYv.png). --- **Neural Networks (NN)**, or more precisely **Artificial Neural Networks (ANN)**, is a class of Machine Learning algorithms that recently received a lot of attention (again!) due to the availability of Big Data and fast computing facilities (most Deep Learning algorithms are essentially different variations of ANNs). The class of ANN covers several architectures including **Convolutional Neural Networks** ([CNN](https://en.wikipedia.org/wiki/Convolutional_neural_network)), Recurrent Neural Networks ([RNN](https://en.wikipedia.org/wiki/Recurrent_neural_network)) eg [LSTM](https://en.wikipedia.org/wiki/Long_short-term_memory) and [GRU](https://en.wikipedia.org/wiki/Gated_recurrent_unit), [Autoencoders](https://en.wikipedia.org/wiki/Autoencoder), and [Deep Belief Networks](https://en.wikipedia.org/wiki/Deep_belief_network). Therefore, CNN is just one kind of ANN. Generally speaking, an ANN is a collection of connected and tunable units (a.k.a.
nodes, neurons, and artificial neurons) which can pass a signal (usually a real-valued number) from one unit to another. The number of (layers of) units, their types, and the way they are connected to each other is called the network architecture. A CNN, specifically, has one or more layers of ***convolution*** units. A convolution unit receives its input from multiple units in the previous layer which together create a proximity. Therefore, the input units (that form a small neighborhood) share their weights. The convolution units (as well as pooling units) are especially beneficial as: * They reduce the number of units in the network (since they are *many-to-one mappings*). This means there are fewer parameters to learn, which reduces the chance of overfitting as the model would be less complex than a fully connected network. * They consider the context/shared information in the small neighborhoods. This feature is very important in many applications such as image, video, text, and speech processing/mining, as the neighboring inputs (eg pixels, frames, words, etc) usually carry related information. Read the following for more information about (deep) CNNs: 1. [ImageNet Classification with Deep Convolutional Neural Networks](https://www.cs.toronto.edu/%7Ekriz/imagenet_classification_with_deep_convolutional.pdf) 2. [Going Deeper with Convolutions](https://arxiv.org/abs/1409.4842) --- P.S. ANN is not *"a system based loosely on the human brain"* but rather a **class** of systems **inspired** by the neuron connections that exist in animal brains. Upvotes: 5 <issue_comment>username_4: This is a very simplified explanation. I am just talking about the core idea. A neural network is a combination of many layers.
A neural network (Multiple Layer Perceptron: regular neural network): it does a linear combination (a mathematical operation) between the previous layer's output and the current layer's weights (vectors), and then it passes data to the next layer through an activation function. The picture shows a unit of a layer. [![A normal neural network. w represents a weight](https://i.stack.imgur.com/YwCCU.png)](https://i.stack.imgur.com/YwCCU.png) A neural network (Convolutional Neural Network): it does a convolution (in signal processing it's known as correlation; it's a mathematical operation) between the previous layer's output and the current layer's kernel (a small matrix), and then it passes data to the next layer through an activation function. The picture shows a convolution operation. Each layer may have many convolution operations. [![enter image description here](https://i.stack.imgur.com/13ayz.gif)](https://i.stack.imgur.com/13ayz.gif) Upvotes: 1 <issue_comment>username_5: The everyday definition of *convolution* comes from the Latin [*convolutus*](https://en.wiktionary.org/wiki/convolution#English) meaning 'to roll together', hence the meaning 'twisted' or 'complicated'. The mathematical definition comes from the same root, with the interpretation of taking a "rolling average". Hence in Machine Learning, a *convolution* is a sliding window across an input, creating one averaged output for each stride the window takes. I.e. the values covered by the window are *convoluted* to create one convoluted output. This is best demonstrated with a diagram: [![enter image description here](https://i.stack.imgur.com/AV512.gif)](https://i.stack.imgur.com/AV512.gif) The convolution can be any function of the input, but some common ones are the max value or the mean value. A convolutional neural network (CNN) is a neural network where one or more of the layers employs a convolution as the function applied to the output of the previous layer.
If the window is greater than size 1x1, the output will necessarily be smaller than the input (unless the input is artificially 'padded' with zeros), and hence CNNs often have a distinctive 'funnel' shape: [![enter image description here](https://i.stack.imgur.com/p2gz1m.png)](https://i.stack.imgur.com/p2gz1m.png) Upvotes: 2
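The sliding-window operation the answers describe can be sketched in plain Python. Strictly speaking this computes cross-correlation, which is what most deep-learning "convolution" layers actually compute; note how the "valid" (unpadded) output is smaller than the input, which is the source of the funnel shape.

```python
def conv2d_valid(image, kernel):
    """Slide `kernel` over `image` (both lists of lists), no padding, stride 1.
    Output is (H - kh + 1) x (W - kw + 1), hence smaller than the input."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # One output value per window position: elementwise product, summed.
            out[i][j] = sum(
                image[i + u][j + v] * kernel[u][v]
                for u in range(kh) for v in range(kw)
            )
    return out
```

For example, a 2x2 kernel over a 3x3 image yields a 2x2 output; stacking such layers (plus pooling) shrinks the representation step by step.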
2018/03/07
<issue_start>username_0: Let us for these purposes say we are working with any feed-forward neural network. Let us also say that we know beforehand that a certain portion of our dataset is significantly more impactful or important to our underlying representation. Is there any way to add that "weighting" to our data?
2018/03/07
<issue_start>username_0: I have an idea about how to use neural networks, but I'm not sure if it is possible or not. In supervised learning we have a set of attributes labeled with an output value, and I can use this set to train my network. Now I have a network trained to produce an output value from a random set of attributes, but **can I use this trained network to get the input attributes using only the desired output?** I will have N input values and only 1 output value. I've thought that I could reuse the weights of that network in a new one with 1 input value and N output values, but I'm not sure if I can do that.<issue_comment>username_1: A NN isn't symmetric, and the inverted result may not make any sense. You can simply use a loss like `(target_output - model_output)**2` (MSE), differentiate this loss with respect to the input variables, and use an optimizer to solve this task. Try searching for [adversarial models](https://en.wikipedia.org/wiki/Adversary_model); the idea is very similar. Upvotes: 0 <issue_comment>username_2: > > Now I have a network trained to produce an output value from a random set of attributes, but can I use this trained network to get the input attributes using only the desired output? > > > It depends: * If you are happy to find any inputs, even non-realistic ones, that get your desired output, then you can use your trained network, with a minor modification. Freeze all the weights, and allow back-propagation to determine the gradient of the input (which should now be a variable to optimise, not source data). Start with a noise input, back-propagate the error to find the gradient that makes the input better at creating your desired output, then take a gradient step in the input space. This is essentially how Deep Dream works. Like Deep Dream, you will not necessarily get realistic input values, but will get [semi-random ones that cause your network to predict a specific class](https://mtyka.github.io/deepdream/2016/02/05/bilateral-class-vis.html).
* If you want the newly generated input to be a best guess at something from the original dataset, then you have to look at one of the more advanced models: + [Restricted Boltzmann Machines (RBMs)](https://en.wikipedia.org/wiki/Restricted_Boltzmann_machine). + [Variational Autoencoders (VAEs)](http://kvfrans.com/variational-autoencoders-explained/) + [Generative Adversarial Networks (GANs)](https://en.wikipedia.org/wiki/Generative_adversarial_network). These have become popular recently, and there are many variations on the basic idea, including VAEGANs that combine VAEs with GANs. These network types are quite advanced, and can be tricky to understand and train successfully. You will want to spend some time researching each type. To generalise terribly: a GAN will tend to generate realistic "noise" in the generated items, but at the expense of overall structure and cohesion (images tend to look distorted but with realistic textures). A VAE will tend to produce smooth, coherent inputs, but at the expense of a lack of fine detail (VAE images tend to look smoothed and/or blurred). If you are not sure what to try, a GAN is probably a reasonable choice, since there are lots of tutorials available, and [recent advances with image generation can look very impressive](https://www.youtube.com/watch?v=36lE9tV9vm0&t=3099s). Upvotes: 2
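The "freeze the weights, optimise the input" idea from the first bullet can be illustrated with a toy model: a single frozen linear unit whose input is adjusted by hand-derived gradient steps. The weights, target, and learning rate below are made-up values for illustration; a real network would rely on a framework's autodiff instead of manual gradients.

```python
w = [0.5, -1.0, 2.0]   # frozen "network" weights (illustrative values)
target = 3.0           # desired output

def forward(x):
    """The frozen model: a single linear unit y = w . x."""
    return sum(wi * xi for wi, xi in zip(w, x))

x = [0.0, 0.0, 0.0]    # start from a blank/noise input
lr = 0.05
for _ in range(200):
    err = forward(x) - target          # dE/dy for E = (y - target)^2 / 2
    grad = [err * wi for wi in w]      # dE/dx_i = err * w_i (weights stay fixed)
    x = [xi - lr * gi for xi, gi in zip(x, grad)]
```

After the loop, `forward(x)` is essentially equal to the target: the input has been optimised while the model itself never changed, which is the Deep Dream recipe in miniature.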
2018/03/08
2,330
9,348
<issue_start>username_0: I am working on a js library which focuses on error handling. A part of the lib is a stack parser which I'd like to work in most of the environments. The hard part is that there is no standard way to represent the stack, so every environment has its own stack string format. The variable parts are message, type and frames. A frame usually consists of called function, file, line, column. In some of the environments there are additional variable regions on the string, in others some of the variables are not present. I can run automated tests only in the 5 most common environments, but there are a lot more environments I'd like the parser to work in. * My goal is to write an adaptive parser, which learns the stack string format of the actual environment on the fly, and after that it can parse the stack of any exception of that environment. I already have a plan for how to solve this in the traditional way, but I am curious, **is there any machine learning tool (probably in the topic of unsupervised learning) I could use to solve this problem?** According to the comments, I need to clarify the terms "stack string format" and "stack parser". I think it is better to write 2 examples from different environments: A.)
*example stack string:* ``` Statement on line 44: Type mismatch (usually a non-object value used where an object is required) Backtrace: Line 44 of linked script file://localhost/G:/js/stacktrace.js this.undef(); Line 31 of linked script file://localhost/G:/js/stacktrace.js ex = ex || this.createException(); Line 18 of linked script file://localhost/G:/js/stacktrace.js var p = new printStackTrace.implementation(), result = p.run(ex); Line 4 of inline#1 script in file://localhost/G:/js/test/functional/testcase1.html printTrace(printStackTrace()); Line 7 of inline#1 script in file://localhost/G:/js/test/functional/testcase1.html bar(n - 1); Line 11 of inline#1 script in file://localhost/G:/js/test/functional/testcase1.html bar(2); Line 15 of inline#1 script in file://localhost/G:/js/test/functional/testcase1.html foo(); ``` *stack string format (template):* ``` Statement on line {frames[0].location.line}: {message} Backtrace: {foreach frames as frame} Line {frame.location.line} of {frame.unknown[0]} {frame.location.path} {frame.calledFunction} {/foreach} ``` *extracted information (json):* ``` { message: "Type mismatch (usually a non-object value used where an object is required)", frames: [ { calledFunction: "this.undef();", location: { path: "file://localhost/G:/js/stacktrace.js", line: 44 }, unknown: ["linked script"] }, { calledFunction: "ex = ex || this.createException();", location: { path: "file://localhost/G:/js/stacktrace.js", line: 31 }, unknown: ["inline#1 script in"] }, ... ] } ``` B.) *example stack string:* ``` ReferenceError: x is not defined at repl:1:5 at REPLServer.self.eval (repl.js:110:21) at repl.js:249:20 at REPLServer.self.eval (repl.js:122:7) at Interface. 
(repl.js:239:12) at Interface.EventEmitter.emit (events.js:95:17) at Interface.\_onLine (readline.js:202:10) at Interface.\_line (readline.js:531:8) at Interface.\_ttyWrite (readline.js:760:14) at ReadStream.onkeypress (readline.js:99:10) ``` *stack string format (template):* ``` {type}: {message} {foreach frames as frame} {if frame.calledFunction is undefined} at {frame.location.path}:{frame.location.line}:{frame.location.column} {else} at {frame.calledFunction} ({frame.location.path}:{frame.location.line}:{frame.location.column}) {/if} {/foreach} ``` *extracted information (json):* ``` { message: "x is not defined", type: "ReferenceError", frames: [ { location: { path: "repl", line: 1, column: 5 } }, { calledFunction: "REPLServer.self.eval", location: { path: "repl.js", line: 110, column: 21 } }, ... ] } ``` The parser should process the stack strings and return the extracted information. The stack string format and the variables are environment-dependent; the library should figure out on the fly how to parse the stack strings of the actual environment. I can probe the actual environment by throwing exceptions with well-known stacks and checking the differences between the stack strings. For example, if I add whitespace indentation to the line that throws the exception, then the column and probably the called function variables will change. If I detect a number change somewhere, then I can be sure that we are talking about the column variable. I can add line breaks too, which will cause line number changes, and so on... I can probe for every important variable, but I cannot be sure that the actual string does not contain additional unknown variables and I cannot be sure that all of the known variables will be added to it.
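The probing idea sketched above (throw exceptions with known stacks, diff the resulting strings, and mark the differing regions as variables) can be illustrated minimally; the frame strings and the `{var}` placeholder below are made up, and a real implementation would need a smarter alignment than whitespace tokens:

```python
def infer_template(line_a, line_b):
    """Align two probe stack lines and mark the differing tokens as slots."""
    a, b = line_a.split(), line_b.split()
    if len(a) != len(b):
        return None  # formats differ structurally; a smarter diff is needed
    return " ".join(x if x == y else "{var}" for x, y in zip(a, b))

# Two probe frames from the same (hypothetical) environment:
template = infer_template("at foo (repl.js:110:21)", "at bar (repl.js:122:7)")
print(template)  # at {var} {var}
```

With more probes per environment, the stable tokens converge to the literal parts of the format and the changing tokens to the variables.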
For example, the frame strings of the "A" example contain an unknown variable and do not contain the column variable, while the frame strings of the "B" example do not always contain the called function variable.<issue_comment>username_1: The parsing of linguistic units from streams of speech by the human brain is an existing system that can be studied, and it is a legitimate proof of concept. A working brain adapts to changes in volume, tone frequency, information rate, rhythm, accent, dialect, and background sound as it parses sequences of vocal sounds originating from the initial tone and transient processing of signals from the vestibulocochlear nerve. The later evolution of written symbol recognition from signals originating from the optic nerve is a related proof of concept. Simple adaptive parsing is running in the lab as a part of social networking automation, but only for limited sets of symbols and sequential patterns. It does scale without reconfiguration to an arbitrarily large base of linguistic units, prefixes, endings, and suffixes, limited only by our hardware capacities and throughput. The existence of regular expression libraries was helpful to keep the design simple. We use the PCRE version 8 series library fed by an anisotropic form of DCNN for feature extraction from a window moving through the input text with a configurable window size and move increment size. Heuristics applied to input text statistics gathered in a first pass produce a set of hypothetical PCREs arranged in two layers. Optimization occurs to apply higher probabilistic weights to the best PCREs in a chaotically perturbed text search. It uses the same gradient descent convergence strategies used in NN back-propagation training. It is a naive approach that does not make assumptions like the existence of backtraces, files, or errors. It would adapt equally to Arabic messages and Spanish ones.
The output is an arbitrary directed graph in memory, which is similar to a dump of an object-oriented database. JSON hierarchies were too restrictive for our purposes, since the directed graphs we were parsing from a serial text stream had cases of multiple incoming directed edges into vertices and some circular edge sequences. ``` قنبلة -> dangereux -> 4anlyss bomba -> dangereux ambiguïté -> 4anlyss -> préemption -> قنبلة ``` Although a re-entrant algorithm for a reinforcement version is stubbed out and the wellness signal is already available, other work preempted furthering the adaptive parser or working toward the next step to use the work for natural language: Matching the directed graphs to persisted directed graph filters representing ideas, which would mimic the idea recollection aspect of language comprehension. Upvotes: 0 <issue_comment>username_2: I wrote a relatively simple [adaptive parser in Prolog](https://github.com/jarble/adaptive_parser). The parser is essentially a [string rewriting system](https://cs.stackexchange.com/a/102698/77574) that learns new rewrite rules from its input, such as `"A implies that B" means "A implies B"`, or `"neither A nor B" means "not (A or B)"`, using a simple bottom-up parsing algorithm. Using the grammar rules that it has learned, the parser is able to convert English phrases such as `C is not less than D percent of E and R implies that Q is not true` into Prolog terms, such as `(CQ\=true)`. In addition to the parser that I described here, there is an adaptive parser generator called [dypgen](http://dypgen.free.fr/). There are also several programming languages that allow user-defined syntax extensions, including [Coq](https://coq.inria.fr/refman/user-extensions/syntax-extensions.html) and [Agda](https://agda.readthedocs.io/en/v2.6.0.1/language/syntax-declarations.html).
The [Nearley](https://github.com/kach/nearley/issues/485) parser generator also allows grammar rules to be added at runtime, though this feature isn't documented yet. Upvotes: 1 <issue_comment>username_3: For the formats above, you could write a normal CFG parser that would extract an AST tree. Isn't that actually what you want as output? Upvotes: 0
2018/03/08
446
1,453
<issue_start>username_0: In multivariate linear regression (linear regression with more than one variable) the model is $y\_i = b\_0 + b\_1x\_{1i} + b\_2x\_{2i} + ...$ , and so on. But how is the $b\_n$ value calculated iteratively? Can it be calculated non-iteratively? What is the intuition behind using that method to calculate $b\_2$?<issue_comment>username_1: It is calculated the same way $b\_1$ is calculated. Nearly following your notation, say your multiple linear regression function is $H(X\_i) = b\_0 + b\_1x\_{1,i} + ...+ b\_nx\_{n, i}$ for data instance $X\_i=x\_{1,i},...,x\_{n, i}$ and weights $b\_0,...,b\_n$. And say your error function is $E(X,Y) = \sum\_i(H(X\_i)-Y\_i)^2$ where $X$ is the collection of all data points $X\_i, Y\_i$. From your error function $E$, for whatever weights you have (with a gradient based method), calculate the partial derivative $\partial E /\partial b\_i$ and use this to update all of your weights at once in each iteration of your optimization routine. Upvotes: 2 <issue_comment>username_2: Put your $b\_0, ..., b\_n$ coefficients into a vector $b$ and all your $x\_{ij}$ into a matrix $X$ (with a leading column of ones for the intercept). Then all components of $b$ are calculated at the same time, non-iteratively, by minimizing the squared error $(y - Xb)^T(y - Xb)$, whose closed-form solution is given by the normal equations $b = (X^TX)^{-1}X^Ty$. But this calculation (estimation) is only consistent (search for "consistent estimator") when certain assumptions hold (read here <https://en.wikipedia.org/wiki/Ordinary_least_squares>). Upvotes: -1
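Both routes can be checked numerically on toy data (the features and coefficients below are made up, and the targets are noise-free for clarity):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.c_[np.ones(100), rng.normal(size=(100, 2))]  # intercept + 2 features
true_b = np.array([3.0, 1.5, -2.0])
y = X @ true_b                                       # noise-free toy targets

# Non-iterative: solve the normal equations (X^T X) b = X^T y
b_closed = np.linalg.solve(X.T @ X, X.T @ y)

# Iterative: batch gradient descent on the mean squared error
b_iter = np.zeros(3)
for _ in range(5000):
    b_iter -= 0.01 * (2 / len(y)) * X.T @ (X @ b_iter - y)

print(b_closed.round(3), b_iter.round(3))  # both approach [3, 1.5, -2]
```

The two estimates agree; gradient descent just takes many small steps to reach the point that the closed form computes directly.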
2018/03/14
1,293
4,369
<issue_start>username_0: The result of the gradient descent algorithm is a vector. So how does this algorithm decide the direction for weight change? We give hyperparameters for step size. But how is the vector direction for weight change, for the purpose of reducing the loss function in a linear regression model, determined by this algorithm?<issue_comment>username_1: First, what does gradient descent do? Gradient descent is a tool of calculus which we use to determine the parameters (here weights) used in a machine learning algorithm or a neural network, by running the gradient descent algorithm iteratively. What does the vector obtained from one iteration of gradient descent tell us? It tells us the direction of weight change (when weights are treated as a vector) for the maximum reduction in the value outputted by a loss function. The intuition behind why gradient descent gives such a direction can be found here: * [Why the gradient is the direction of steepest ascent - Khan Academy](https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/gradient-and-directional-derivatives/v/why-the-gradient-is-the-direction-of-steepest-ascent) * [Why is gradient the direction of steepest ascent?](https://math.stackexchange.com/questions/223252/why-is-gradient-the-direction-of-steepest-ascent) * [Why does gradient descent work?](https://math.stackexchange.com/questions/180559/why-does-gradient-descent-work) In general, the cost/loss function is an n-dimensional paraboloid (we design the function in such a way that it is convex). [![enter image description here](https://i.stack.imgur.com/TYDRx.png)](https://i.stack.imgur.com/TYDRx.png) A 2-D paraboloid with `x` and `y` as independent variables. Now why do we reduce the weights in the direction of the negative gradient only? Why not some other direction?
Since we want to reduce the cost in linear regression for better predictions, we choose the negative gradient direction, as it is the direction of steepest descent. We could have chosen some other direction that also reduces the cost, but the negative gradient direction ensures that: * Cost is always decreased (if the step size is correct, and in general only in cases of convex functions, e.g. a paraboloid). * Cost is decreased by the maximal amount if we move in that direction, so we don't have to worry whether cost will decrease or not if we move in that direction. Also, we use the learning rate `alpha` to scale by how much we want to change the weights. EDIT: As pointed out by @pasaba, the error function may not be a paraboloid, but in general a good cost function looks like a paraboloid with skewed axes. Upvotes: 2 [selected_answer]<issue_comment>username_2: We can analyze a basic common example: approximation of the AND logic gate by a NN. The inputs to the NN will be "x1" and "x2" and its output is "y". The data to be learned by the NN is: [![enter image description here](https://i.stack.imgur.com/Uxvjb.png)](https://i.stack.imgur.com/Uxvjb.png) The basic NN has one intermediate cell with a sigmoid activation function and one output cell with the identity function.
That means: [![enter image description here](https://i.stack.imgur.com/1twPv.png)](https://i.stack.imgur.com/1twPv.png) (note by symmetry we assume w1=w2) Then, the error is: [![enter image description here](https://i.stack.imgur.com/rSSsf.png)](https://i.stack.imgur.com/rSSsf.png) These are some plots of this function: [![enter image description here](https://i.stack.imgur.com/HOfuo.jpg)](https://i.stack.imgur.com/HOfuo.jpg) [![enter image description here](https://i.stack.imgur.com/AltDb.jpg)](https://i.stack.imgur.com/AltDb.jpg) [![enter image description here](https://i.stack.imgur.com/nAm7q.jpg)](https://i.stack.imgur.com/nAm7q.jpg) And this is the w derivate of e : [![enter image description here](https://i.stack.imgur.com/wVCIj.jpg)](https://i.stack.imgur.com/wVCIj.jpg) Sage Math code ([link](http://sagecell.sagemath.org/)) for this graph: ``` s(x)=1/(1+e^(-1*x)) e(w,b)=(1/2)*(s(b)^2+2*s(w+b)^2+(1-s(2*w+b))^2) plot3d(e(w,b),(w,-5,5),(b,-5,5),adaptive=True, color=rainbow(60, 'rgbtuple')) edw=e.derivative(w) plot3d(edw(w,b),(w,-5,5),(b,-5,5),adaptive=True, color=rainbow(60, 'rgbtuple')) ``` (PS: please, activate latex on this stack exchange). Upvotes: 0
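The steepest-descent claims above can be checked numerically on a toy paraboloid (all values below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def loss(w):                        # a simple convex "bowl" (paraboloid)
    return (w[0] - 1) ** 2 + 3 * (w[1] + 2) ** 2

def grad(w):
    return np.array([2 * (w[0] - 1), 6 * (w[1] + 2)])

w = np.array([4.0, 4.0])
g = grad(w)

# The slope of the loss along a unit direction u is the directional
# derivative g.u; by Cauchy-Schwarz it is never steeper than |g|, with
# equality exactly for u = g/|g| (the gradient direction).
slopes = [g @ (u / np.linalg.norm(u)) for u in rng.normal(size=(1000, 2))]
print(max(slopes) <= np.linalg.norm(g) + 1e-9)   # True

# Stepping against the gradient therefore decreases the loss (small step).
w_new = w - 0.05 * g
print(loss(w_new) < loss(w))                     # True
```

No sampled direction beats the gradient's slope, which is the numerical version of "steepest descent".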
2018/03/14
698
3,236
<issue_start>username_0: I am trying to understand the difference between biological and artificial evolution. If we look at it in terms of genetics, in both of them, the selection operation is a key term. What's the difference between biological and artificial evolution?<issue_comment>username_1: Biological and artificial evolution work according to pretty much the same principles. **Fitness and selection**: In biology, the fittest organisms in an ecosystem are more likely to survive long enough to reproduce, passing on their genes in the process. In artificial evolution, our *organisms* are in fact *solutions* to our problem, which can be evaluated to determine how good they are (their fitness). We choose ourselves which solutions will be selected for reproduction (there are many ways to do this selection, but what is common among all of them is that the fittest solutions have a higher chance of being selected). **Crossover**: In biology, an organism inherits a portion of each parent's genes, so is a sort of genetic hybrid of both parents. For artificial evolution, a new solution (a "child" solution) will inherit part of its parents' solutions (we take a partial solution from each parent, and glue those partial solutions together to construct a new solution). **Mutation**: In nature, mutations often occur at birth, and this is why there are many different species. Harmful mutations make the individual less likely to survive long enough to pass them on to children, and, in contrast, helpful mutations make it more likely that the individual will survive long enough to pass them on to children. The same can be said for artificial evolution: A mutation randomly changes a small part of the solution, and if it makes that solution fitter, then that solution has a higher chance of being selected for reproduction.
Upvotes: 3 [selected_answer]<issue_comment>username_2: [Phillipe's excellent answer](https://ai.stackexchange.com/a/5683/1671) covers the crux of the subject, so I'm just going to state the obvious: the key difference is the medium and timescale. Biological evolution is a function of the natural world, and typically occurs over a long time span, depending on the organisms and how quickly they produce new generations. (We typically think of biological evolution as occurring over "millions of years", but it can happen much more quickly, for instance in the case of microorganisms.) Genetic algorithms utilize a computing medium, which in the current era is silicon-based, and involves microprocessors and various media for memory (magnetic tape and, more recently, solid-state). Both natural and artificial evolution are constrained by the size of the system (a planet or ecosystem in the former case, and available memory in the latter.) However: * Artificial evolution can occur at an artificially accelerated pace, dependent on available processing resources. This capacity for computationally "accelerated subjective time" and accelerated evolution of algorithms is one of the bases for the theory of the "technological singularity". It might be argued that genetic engineering allows accelerated evolution for biological species, but that would not fall under natural evolution. Upvotes: 1
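The fitness/selection/crossover/mutation loop described in the first answer can be sketched on a toy "max-ones" problem (every parameter choice below is illustrative):

```python
import random

random.seed(3)
GENES, POP, GENS = 20, 30, 60

def fitness(ind):                     # toy objective: number of 1-bits
    return sum(ind)

def pick(pop):                        # tournament selection: fitter wins
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENS):
    children = []
    for _ in range(POP):
        p1, p2 = pick(pop), pick(pop)
        cut = random.randrange(1, GENES)          # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.1:                 # occasional mutation
            i = random.randrange(GENES)
            child[i] ^= 1
        children.append(child)
    pop = children

best = max(fitness(ind) for ind in pop)
print(best)   # approaches GENES under selection pressure
```

The same loop structure works for any problem once `fitness`, the solution encoding, and the crossover/mutation operators are defined for it.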
2018/03/15
602
2,704
<issue_start>username_0: I am not clear on the concept of what an unsupervised model learns. We are giving an input and output to the supervised model, so that it can generate a particular value, pattern or something out of it which can be used to categorize something in the future. By contrast, in unsupervised learning, we are clustering, so why do we need learning? Can anyone explain this to me with some real-world examples?<issue_comment>username_1: Imagine you have a dataset of people who have cancer. You have information about their age, physique, diagnosis, treatments, and results. Using this data, you want to prescribe a set of treatments for a new patient, P. Obviously, if there is someone in the dataset that has very similar traits to P and had a positive result with their treatments, you could prescribe the same set of treatments. However, this is incredibly unlikely and becomes more infeasible as more information about P is observed (e.g. has brown hair and hates pasta). A better option is to cluster the dataset into groups that have positive outcomes for treatment results. For example, perhaps patients with lung cancer who smoke and are given treatment A do better than patients with lung cancer who didn't smoke and are given the same treatment A. These patients should then be divided based on this outcome. Once these different clusters are found, patient P can be evaluated against each of the clusters and a set of treatments can be prescribed (e.g. most of the treatments from cluster A, but 1 treatment from cluster B). Unsupervised learning is the method of finding these clusters, which helps find structure in the data to better answer questions. Upvotes: 1 <issue_comment>username_2: **Supervised Learning:** This is performed with the help of a teacher. A child works on the basis of the output that he/she has to produce. Their actions are **supervised** by a teacher.
Similarly in ANNs, each vector requires a corresponding target vector, which represents the desired output. **Unsupervised Learning:** Consider the learning process of a tadpole: it learns by itself; it isn't taught by any teacher. In an ANN, during the training process, the network receives the input patterns and organizes these patterns to form clusters. When a new input pattern is applied, the neural network gives an output response indicating the class to which the input pattern belongs. If, for an input, a pattern class cannot be found, then a new class is generated. In this case, there is no feedback from the environment; the network must itself discover patterns, regularities, features or categories from the input data and relations for the input data over the output. Upvotes: 0
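The clustering idea described in the first answer can be sketched with a minimal k-means loop (toy 2-feature data standing in for patient records; no labels are used during training):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two hypothetical "patient groups" in a 2-feature space (toy data)
X = np.vstack([rng.normal([0, 0], 0.5, size=(50, 2)),
               rng.normal([5, 5], 0.5, size=(50, 2))])

# Minimal k-means: the grouping is discovered from the data alone
centers = X[rng.choice(len(X), 2, replace=False)]
for _ in range(10):
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])

print(np.sort(centers[:, 0]))   # one center near 0, the other near 5
```

This is the "learning" part of unsupervised learning: the cluster centers are parameters fitted to the data, even though no target outputs were ever provided.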
2018/03/15
518
1,781
<issue_start>username_0: I asked myself this simple question while reading ["Comment Abuse Classification with Deep Learning"](https://web.stanford.edu/class/cs224n/reports/2762092.pdf) by Chu and Jue. Indeed, they say at the end of the paper that > > *It is clear that RNNs, specifically LSTMs, and CNNs are state-of-the-art architectures for sentiment analysis* > > > To my mind, CNNs were only neurons arranged so that they correspond to overlapping regions when paving the input field. That isn't recurrent at all.<issue_comment>username_1: You are right. I think you are just misinterpreting the part of the sentence ('specifically LSTMs'). LSTMs are an example of a popular type of RNN. RNNs and CNNs are different architectures but they can be used together. Here is another sentence with the same structure: > > It is clear that dogs, specifically corgis, and cats are very common in online memes. > > > [![A cat and a corgi](https://i.stack.imgur.com/Ztt5e.jpg)](https://i.stack.imgur.com/Ztt5e.jpg) Upvotes: 3 [selected_answer]<issue_comment>username_2: Both CNNs and RNNs fall into the superset of neural networks; however, the applications of the two differ. So, to distinguish them in terms of applications: > > I would say CNNs are mainly used for vision-related applications, > whereas RNNs are mainly used for language processing applications. > > > You can refer to these links for further details: [Comparative Study of CNN and RNN for Natural Language Processing](https://arxiv.org/abs/1702.01923) and [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Hope this can give you a glimpse! Upvotes: 2
2018/03/15
266
875
<issue_start>username_0: I'm reading [the AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks). In section 4, where the authors explain how they prevent overfitting, they mention > > Although the 1000 classes of ILSVRC make each training example impose 10 bits of constraint on the mapping from image to label. > > > What does this mean?<issue_comment>username_1: You need 10 bits ($2^{10} = 1024$) to represent 1000 classes. Upvotes: 3 [selected_answer]<issue_comment>username_2: It takes at least 10 bits to represent any number between $1$ and $1000$ because $2^{10} = 1024$. This means that if one were trying to represent 1 of the 1000 classes, one would need at least 10 bits. However, having these 10 bits set correctly for each input is really hard and would require overfitting to ensure it. Upvotes: 1
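The bit count is just the base-2 logarithm of the number of classes, rounded up:

```python
import math

# 1000 distinct labels need ceil(log2(1000)) bits to encode:
print(math.log2(1000))              # ~9.97
print(math.ceil(math.log2(1000)))   # 10
print(2 ** 10)                      # 1024, the smallest power of two >= 1000
```

So each correctly labelled example pins down about 10 bits of information about the image-to-label mapping.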
2018/03/18
2,389
8,104
<issue_start>username_0: Suppose that a NN contains $n$ hidden layers, $m$ training examples, $x$ features, and $n\_i$ nodes in each layer. What is the time complexity to train this NN using back-propagation? I have a basic idea about how they find the time complexity of algorithms, but here there are 4 different factors to consider here i.e. iterations, layers, nodes in each layer, training examples, and maybe more factors. I found an answer [here](https://www.researchgate.net/post/What_is_the_time_complexity_of_Multilayer_Perceptron_MLP_and_other_neural_networks) but it was not clear enough. Are there other factors, apart from those I mentioned above, that influence the time complexity of the training algorithm of a NN?<issue_comment>username_1: *I haven't seen an answer from a trusted source, but I'll try to answer this myself, with a simple example (with my current knowledge).* In general, note that training an MLP using back-propagation is usually implemented with matrices. ### Time complexity of matrix multiplication The time complexity of matrix multiplication for $M\_{ij} \* M\_{jk}$ is simply $\mathcal{O}(i\*j\*k)$. Notice that we are assuming the simplest multiplication algorithm here: there exist some other algorithms with somewhat better time complexity. ### Feedforward pass algorithm The feedforward propagation algorithm is as follows. First, to go from layer $i$ to $j$, you do $$S\_j = W\_{ji}\*Z\_i$$ Then you apply the activation function $$Z\_j = f(S\_j)$$ If we have $N$ layers (including input and output layer), this will run $N-1$ times. ### Example As an example, let's compute the time complexity for the forward pass algorithm for an MLP with $4$ layers, where $i$ denotes the number of nodes of the input layer, $j$ the number of nodes in the second layer, $k$ the number of nodes in the third layer and $l$ the number of nodes in the output layer. Since there are $4$ layers, you need $3$ matrices to represent weights between these layers. 
Let's denote them by $W\_{ji}$, $W\_{kj}$ and $W\_{lk}$, where $W\_{ji}$ is a matrix with $j$ rows and $i$ columns ($W\_{ji}$ thus contains the weights going from layer $i$ to layer $j$). Assume you have $t$ training examples. For propagating from layer $i$ to $j$, we have first $$S\_{jt} = W\_{ji} \* Z\_{it}$$ and this operation (i.e. matrix multiplication) has $\mathcal{O}(j\*i\*t)$ time complexity. Then we apply the activation function $$ Z\_{jt} = f(S\_{jt}) $$ and this has $\mathcal{O}(j\*t)$ time complexity, because it is an element-wise operation. So, in total, we have $$\mathcal{O}(j\*i\*t + j\*t) = \mathcal{O}(j\*t\*(i + 1)) = \mathcal{O}(j\*i\*t)$$ Using the same logic, for going $j \to k$, we have $\mathcal{O}(k\*j\*t)$, and, for $k \to l$, we have $\mathcal{O}(l\*k\*t)$. In total, the time complexity for feedforward propagation will be $$\mathcal{O}(j\*i\*t + k\*j\*t + l\*k\*t) = \mathcal{O}(t\*(ij + jk + kl))$$ Note that this cannot be simplified to $\mathcal{O}(t\*i\*j\*k\*l)$, since the per-layer costs add rather than multiply; for $N$ layers with at most $d$ nodes each, the bound is $\mathcal{O}(N\*t\*d^2)$. ### Back-propagation algorithm The back-propagation algorithm proceeds as follows. Starting from the output layer $l \to k$, we compute the error signal, $E\_{lt}$, a matrix containing the error signals for nodes at layer $l$ $$ E\_{lt} = f'(S\_{lt}) \odot {(Z\_{lt} - O\_{lt})} $$ where $\odot$ means element-wise multiplication. Note that $E\_{lt}$ has $l$ rows and $t$ columns: it simply means each column is the error signal for training example $t$. We then compute the "delta weights", $D\_{lk} \in \mathbb{R}^{l \times k}$ (between layer $l$ and layer $k$) $$ D\_{lk} = E\_{lt} \* Z\_{tk} $$ where $Z\_{tk}$ is the transpose of $Z\_{kt}$. We then adjust the weights $$ W\_{lk} = W\_{lk} - D\_{lk} $$ For $l \to k$, we thus have the time complexity $\mathcal{O}(lt + lt + ltk + lk) = \mathcal{O}(l\*t\*k)$. Now, going back from $k \to j$.
We first have $$ E\_{kt} = f'(S\_{kt}) \odot (W\_{kl} \* E\_{lt}) $$ Then $$ D\_{kj} = E\_{kt} \* Z\_{tj} $$ And then $$W\_{kj} = W\_{kj} - D\_{kj}$$ where $W\_{kl}$ is the transpose of $W\_{lk}$. For $k \to j$, we have the time complexity $\mathcal{O}(kt + klt + ktj + kj) = \mathcal{O}(k\*t(l+j))$. And finally, for $j \to i$, we have $\mathcal{O}(j\*t(k+i))$. In total, we have $$\mathcal{O}(ltk + tk(l + j) + tj (k + i)) = \mathcal{O}(t\*(lk + kj + ji))$$ which is the same as the feedforward pass algorithm. Since they are the same, the total time complexity for one epoch will be $$O(t\*(ij + jk + kl)).$$ This time complexity is then multiplied by the number of iterations (epochs). So, we have $$O(n\*t\*(ij + jk + kl)),$$ where $n$ is the number of iterations. ### Notes Note that these matrix operations can be greatly parallelized by GPUs. ### Conclusion We tried to find the time complexity for training a neural network that has 4 layers with respectively $i$, $j$, $k$ and $l$ nodes, with $t$ training examples and $n$ epochs. The result was $\mathcal{O}(nt\*(ij + jk + kl))$. We assumed the simplest form of matrix multiplication that has cubic time complexity. We used the batch gradient descent algorithm. The results for stochastic and mini-batch gradient descent should be the same. (Let me know if you think otherwise: note that batch gradient descent is the general form, and with little modification, it becomes stochastic or mini-batch.) Also, if you use momentum optimization, you will have the same time complexity, because the extra matrix operations required are all element-wise operations, hence they will not affect the time complexity of the algorithm. I'm not sure what the results would be using other optimizers such as RMSprop. ### Sources The following article <http://briandolhansky.com/blog/2014/10/30/artificial-neural-networks-matrix-form-part-5> describes an implementation using matrices.
Although this implementation is using "row major", the time complexity is not affected by this. If you're not familiar with back-propagation, check this article: <http://briandolhansky.com/blog/2013/9/27/artificial-neural-networks-backpropagation-part-4> Upvotes: 6 [selected_answer]<issue_comment>username_2: For the evaluation of a single pattern, you need to process all weights and all neurons. Given that every neuron has at least one weight, we can ignore them, and have $\mathcal{O}(w)$ where $w$ is the number of weights, i.e., roughly $n \* n\_i^2$ (each pair of adjacent layers contributes an $n\_i \times n\_i$ weight matrix), assuming full connectivity between your layers. The back-propagation has the same complexity as the forward evaluation (just look at the formula). So, the complexity for learning $m$ examples, where each gets repeated $e$ times, is $\mathcal{O}(w\*m\*e)$. The bad news is that there's no formula telling you what number of epochs $e$ you need. Upvotes: 3 <issue_comment>username_3: A potential disadvantage of gradient-based methods is that they head for the nearest minimum, which is usually not the global minimum. This means that the only difference between these search methods is the speed with which solutions are obtained, and not the nature of those solutions. An important consideration is time complexity, which is the rate at which the time required to find a solution increases with the number of parameters (weights). In short, the time complexities of a range of different gradient-based methods (including second-order methods) seem to be similar. Six different error functions exhibit a median run-time order of approximately $\mathcal{O}(N^4)$ on the N-2-N encoder in this paper: <NAME> and <NAME> "An Empirical Study of the Time Complexity of Various Error Functions with Conjugate Gradient Back Propagation", IEEE International Conference on Artificial Neural Networks (ICNN95), Perth, Australia, Nov 27-Dec 1, 1995.
Summarised from my book: *Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning*. Upvotes: 2 <issue_comment>username_4: I found a paper that gives a table of time complexities for different architectures using linear programming-based training: <https://arxiv.org/abs/1810.03218> Upvotes: 1
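The per-epoch cost $\mathcal{O}(t\*(ij + jk + kl))$ derived in the accepted answer can be checked with a tiny multiply counter (layer sizes below are arbitrary examples):

```python
def forward_flops(sizes, t):
    # t * (i*j + j*k + ...), the multiply count derived in the accepted answer
    return t * sum(a * b for a, b in zip(sizes, sizes[1:]))

print(forward_flops([3, 4, 5], 1))   # 3*4 + 4*5 = 32
# Doubling every layer width quadruples the cost (the d^2 scaling):
print(forward_flops([6, 8, 10], 1) // forward_flops([3, 4, 5], 1))  # 4
```

The backward pass has the same count up to a constant factor, so the counter also approximates one full training step per example.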
2018/03/21
1,801
7,900
<issue_start>username_0: My goal is to take an image and return another image that looks as if the scene was viewed from another angle. The difference in angle can be small — let's say as if the hand holding the camera moved slightly sideways.<issue_comment>username_1: If deep learning is what you are trying to use here, you should keep in mind that the real intent behind deep learning is to learn a probability distribution, which means that if you were to use a deep learning model to "rotate" images, you can only do it on a specific class of images (e.g. faces, cats, etc...). If that's your goal, **generative models** are the way to go: Autoencoders ============ You can train an autoencoder to slightly change the angle. Autoencoders are a special type of neural network trained to output the same input you feed into them, with a few imposed restrictions to prevent them from learning a trivial identity function. In your case, you could use a variation of a **de-noising autoencoder.** A de-noising autoencoder, as the name suggests, generates the same input image minus artificial stochastic noise. The way this is achieved is by feeding a corrupted version of the image and then evaluating the loss on the non-corrupted version. ### How can this be adapted in your case? In your case, you could feed the original images into your autoencoder and evaluate them based on the rotated images. This will result in your autoencoder effectively learning the inner distribution that generates the images in order to generate a slightly "rotated" version of it. For more info on de-noising autoencoders, see [the original paper](https://dl.acm.org/citation.cfm?id=1390294). Generative adversarial networks =============================== For a more sophisticated approach, you can use generative adversarial networks. GANs are relatively harder to manipulate, but usually perform better than other generative models when it comes to images. ### How can this be adapted in your case?
In general, GANs generate images from noise. However, in your case, you can use the original (non-rotated) images as input for the generator. The generator can be a convolutional autoencoder, for example. And the "real images" dataset will be your rotated images. This way, your model will learn to generate slightly rotated images by being fed an equivalent of the noise in traditional GANs, which in your case will be the original images. For more info on GANs, I suggest [this](https://deeplearning4j.org/generative-adversarial-network) and [the original paper](https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf). I should point out that one of the flaws of GANs is *distorted perspectives*, so it's probably a shot in the dark; however, I think it won't be a problem here because you would be using real images as input instead of complete noise. Related work ============ Now, as far as the literature goes, I don't think this has been done before except for [this one](https://arxiv.org/pdf/1705.11136.pdf) (kinda) for face representation learning. The only way this is similar to yours is that you can modify the implementation to only generate the faces it has been fed from different perspectives instead of an average of everything it has learned. Upvotes: 2 <issue_comment>username_2: **— Stereoscopic Synthesis —** The generation of an image that would likely appear in the right eye of a head from which you already have an image from the left eye (or vice versa) is too complex to expect simple convolution (linear matrix transformation) to achieve a reasonable result. You are correct that rotation is not the correct description, simply because it is ambiguous. What is rotating? The best description is the synthesis of a stereoscopic image from a single-eye/camera one. Although deep learning is a well-suggested approach, it is a very general term into which a number of concepts, books, research projects, and software components fit.
I agree with other answers that indicate one could easily miss the target objective despite repeated attempts at finding a working solution. Shots in the dark are likely to waste time and effort. For example, an auto-encoder may not work well because depth may not be a feature that can be extracted without a host of stereoscopic image pairs of similar scenes on which automated feature extraction could be performed. Should feature extraction be possible, it is not noise that needs to be removed, but pixelization and optical distortion that need to be characterized, so that surfaces revealed by the shift in position could later be imbued with the same contour, focal blur, reflective properties, and edge continuation as the adjacent surfaces of the same objects when pixels corresponding to newly revealed surfaces are generated. For greatest image authenticity, another profile to characterize and imbue into generated pixels is the capture device noise profile. **— Formal Problem Restatement —** To narrow machine learning approaches so that we can take a shot in the light, let's consider the model of a three dimensional scene exposed to light sources with two cameras adjacent to one another. Let's consider the input, the output, and the internal architecture required to produce a fairly reliable and accurate second image more formally. We have image pixel matrix I1 that represents light arriving at camera c1, containing a rectangular image capture surface in an x-z plane upon which a lens of effective focal length l1 and aperture a1 is focused over scene S over a time window starting at s1 and ending at e1. Some point in the scene is at the origin of a Cartesian coordinate system, and both camera c1 and another camera c2 are oriented such that the origin is centered in the image capture and is the point of lens focus. The three dimensional coordinates of c1 and c2 are known. You wish to predict the second image I2 arriving at camera c2.
Let's assume, for simplicity, that l1 = l2, that a1 = a2, and that the scene is motionless so that time is not critical in the model. Let's also assume that the y coordinates and the image capture duration (e minus s) are the same for both cameras c1 and c2. **— Solution Architecture —** For this simplified case, and assuming the object space is not an abyss containing only one object, the process architecture of the solution is the following. Each --> symbol is a sub-process. The horizontal and vertical positional difference between c1 and c2 is { x, z }. ``` { I1, l, a, y, x, z } --> { I1, l, a, y, { S1 ... Sn }} --> { I1, l, a, y, { S1 ... Sn }, { E1 ... En }} --> { I2 } ``` The first sub-process is a feature extraction, where the features are the three dimensional surfaces visible in the two dimensional image I1. This extraction is questionable because no y information about the scene is available and no y-labeled training data is mentioned in the problem statement. The second sub-process is the extension of the extracted features to provide the surface representation needed for I2. The last sub-process renders I2, potentially by morphing pixels in I1 and filling the remaining transparent sections using E1 through En and knowledge of the contour, reflective properties, edge continuation, and capture device noise profile from feature extraction. **— Practicality of Learning About Scenes —** The effectiveness of any deep learning architecture could benefit from the above understanding of vision and the comprehension of scenes in DNA based life. The problem of automated feature extraction is complicated because the data is unlabeled with y information, as stated before. Learning visual comprehension of arbitrary scenes by DNA based life is assisted by the fact that motion occurs, and that interaction with physical objects and liquid viscosity provides a vastly greater number of dimensions in the input data. Upvotes: 2
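As a toy illustration of the train-on-pairs idea from the first answer (entirely my own sketch: a one-pixel horizontal shift stands in for the small camera move, and a single linear map stands in for the autoencoder), note that because this particular "view change" is linear, ordinary least squares recovers it exactly from (original, shifted) pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 8                     # tiny 8x8 "images"
n_train = 200

def shift_right(img):
    """Proxy for a small sideways camera move: shift columns by one pixel."""
    out = np.zeros_like(img)
    out[:, 1:] = img[:, :-1]
    return out

# Training pairs: (original, shifted) — a stand-in for real stereo/rotated pairs.
X = rng.random((n_train, H * W))
Y = np.stack([shift_right(x.reshape(H, W)).ravel() for x in X])

# Fit a single linear layer M (what a linear "autoencoder" would learn here).
M, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Check on a held-out image: the learned map reproduces the shift.
x_test = rng.random((H, W))
pred = (x_test.ravel() @ M).reshape(H, W)
print(np.abs(pred - shift_right(x_test)).max())  # ~0 up to numerical error
```

A real viewpoint change is of course non-linear and occludes/reveals surfaces, which is exactly why the non-linear autoencoder and GAN machinery discussed above is needed; this sketch only demonstrates the supervision scheme, not the model.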
2018/03/22
<issue_start>username_0: My understanding is that the convolutional layer of a convolutional neural network has four dimensions: `input_channels, filter_height, filter_width, number_of_filters`. Furthermore, it is my understanding that each new filter just gets convolved over ALL of the `input_channels` (or feature/activation maps from the previous layer). HOWEVER, the graphic below from CS231 shows each filter (in red) being applied to a SINGLE CHANNEL, rather than the same filter being used across channels. This seems to indicate that there is a separate filter for EACH channel (in this case I'm assuming they're the three color channels of an input image, but the same would apply for all input channels). This is confusing - is there a different unique filter for each input channel? [![Convolutional filters diagram](https://i.stack.imgur.com/3m7mW.png)](https://i.stack.imgur.com/3m7mW.png) This is the [source](http://cs231n.github.io/convolutional-networks/). The above image seems to contradict an excerpt from O'reilly's ["Fundamentals of Deep Learning"](https://rads.stackoverflow.com/amzn/click/com/1491925612): > > ...filters don't just operate on a single feature map.
They operate on the entire volume of feature maps that have been generated at a particular layer...As a result, feature maps must be able to operate over volumes, not just areas > > > ...Also, it is my understanding that these images below indicate that **THE SAME** filter is convolved over all three input channels (contradicting what is shown in the CS231 graphic above): [![Application of a volumetric convolutional filter to an RGB image](https://i.stack.imgur.com/VdqER.png)](https://i.stack.imgur.com/VdqER.png) [![Convolutions on an RGB image](https://i.stack.imgur.com/kczF0.png)](https://i.stack.imgur.com/kczF0.png)<issue_comment>username_1: > > In a convolutional neural network, is there a *unique* filter for each input channel or are the same new filters used across all input channels? > > > The former. In fact there is a *separate* kernel defined for each input channel / output channel combination. Typically for a CNN architecture, in a single filter as described by your `number_of_filters` parameter, there is one 2D kernel per input channel. There are `input_channels * number_of_filters` sets of weights, each of which describes a convolution kernel. So the diagrams showing one set of weights per input channel for each filter are correct. The first diagram also shows clearly that the results of applying those kernels are combined by summing them up and adding bias for each output channel. This can *also* be viewed as using a 3D convolution for each output channel, that happens to have the same depth as the input. Which is what your second diagram is showing, and also what many libraries will do internally. Mathematically this is the same result (provided the depths match exactly), although the layer type is typically labelled as "Conv2D" or similar. Similarly if your input type is inherently 3D, such as voxels or a video, then you might use a "Conv3D" layer, but internally it could well be implemented as a 4D convolution.
Upvotes: 4 <issue_comment>username_2: The following picture that you used in your question very accurately describes what is happening. Remember that each element of the **3D filter** (grey cube) is made up of a different value (`3x3x3=27` values). So, three different **2D filters** of size `3x3` can be concatenated to form this one **3D filter** of size `3x3x3`. [![convnet2D](https://i.stack.imgur.com/YCJRm.jpg)](https://i.stack.imgur.com/YCJRm.jpg) The `3x3x3` RGB chunk from the picture is multiplied *elementwise* by a **3D filter** (shown as grey). In this case, the filter has `3x3x3=27` weights. When these weights are multiplied element-wise and then summed, it gives one value. ### So, is there a separate filter for each input channel? **YES**, there are as many **2D filters** as the number of input channels in the image. **However**, it helps if you think that for input matrices with more than one channel, there is only one **3D filter** (as shown in the image above). ### Then why is this called 2D convolution (if the filter is 3D and the input matrix is 3D)? This is 2D convolution because the strides of the filter are along the height and width dimensions only (**NOT** depth) and therefore, the output produced by this convolution is also a 2D matrix. The number of movement directions of the filter determines the dimensions of the convolution. **Note:** *If you build up your understanding by visualizing a single **3D filter** instead of multiple **2D filters** (one for each layer), then you will have an easy time understanding advanced CNN architectures like Resnet, InceptionV3, etc.* Upvotes: 5 <issue_comment>username_3: The restriction only exists in 2D. Why? Imagine a fully connected layer. It would be awfully huge: each neuron would be connected to maybe 1000x1000x3 input neurons. But we know that processing nearby pixels makes sense, therefore we limit ourselves to a small 2D neighborhood, so each neuron is connected to only a 3x3 neighborhood of neurons in 2D.
We don't know such a thing about channels, so we connect to all channels. Still, there would be too many weights. But because of translation invariance, a filter working well in one area is most probably useful in a different area. So we use the same set of weights across 2D. Again, there's no such translation invariance between channels, so there's no such restriction there. Upvotes: -1 <issue_comment>username_4: Refer to the "Local Connectivity" section in [here](http://cs231n.github.io/convolutional-networks/) and slides 7-18. The "Receptive Field" hyperparameter of a filter is defined by height & width only, as the depth is fixed by the preceding layer's depth. NOTE that "The extent of the connectivity along the depth axis is always equal to the DEPTH of the input volume" -or- the DEPTH of the activation map (in the case of later layers). Intuitively, this must be due to the fact that image channel data are interleaved, not planar. This way, applying a filter can be achieved simply by column-vector multiplication. NOTE that a Convolutional Network learns all the filter parameters (including the depth dimension), totaling "h*w*input\_layer\_depth + 1 (bias)" per filter. Upvotes: 0 <issue_comment>username_5: I'm following up on the answers above with a concrete example in the hope of further clarifying how the convolution works with respect to the input and output channels and the weights, respectively: Let the example be as follows (wrt 1 convolutional layer): * the input tensor is 9x9x5, i.e. 5 input channels, so `input_channels=5` * the filter/kernel size is 4x4 and the stride is 1 * the output tensor is 6x6x56, i.e. 56 output channels, so `output_channels=56` * the padding type is 'VALID' (i.e. no padding) We note that: * since the input has 5 channels, the filter dimension becomes 4x4x5, i.e. there are 5 separate, unique 2D filters of size 4x4 (i.e.
each has 16 weights); in order to convolve over the input of size 9x9x5 the filter becomes 3D and must be of size 4x4x5 * therefore: for each input channel, there exists a distinct 2D filter with 16 different weights each. In other words, the number of 2D filters matches the number of input channels * since there are 56 output channels, there must be 56 3-dimensional filters W0, W1, ..., W55 of size 4x4x5 (cf. in the CS231 graphic there are 2 3-dimensional filters W0, W1 to account for the 2 output channels), where the 3rd dimension of size 5 represents the link to the 5 input channels (cf. in the CS231 graphic each 3D filter W0, W1 has the 3rd dimension 3, which matches the 3 input channels) * therefore: the number of 3D filters equals the number of output channels That convolutional layer thus contains: 56 3-dimensional filters of size 4x4x5 (= 80 different weights each) to account for the 56 output channels where each has a value for the 3rd dimension of 5 to match the 5 input channels. In total there are `number_of_filters=input_channel*output_channels=5*56=280` 2D filters of size 4x4 (i.e. 280x16 different weights in total). Upvotes: 3 <issue_comment>username_6: I recommend [chapter 2.2.1 of my masters thesis](https://arxiv.org/pdf/1707.09725.pdf) as an answer. 
To add to the remaining answers: Keras is your friend to understand what happens: ``` from keras.models import Sequential from keras.layers import Conv2D model = Sequential() model.add(Conv2D(32, input_shape=(28, 28, 3), kernel_size=(5, 5), padding='same', use_bias=False)) model.add(Conv2D(17, (3, 3), padding='same', use_bias=False)) model.add(Conv2D(13, (3, 3), padding='same', use_bias=False)) model.add(Conv2D(7, (3, 3), padding='same', use_bias=False)) model.compile(loss='categorical_crossentropy', optimizer='adam') print(model.summary()) ``` gives ``` _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_1 (Conv2D) (None, 28, 28, 32) 2400 _________________________________________________________________ conv2d_2 (Conv2D) (None, 28, 28, 17) 4896 _________________________________________________________________ conv2d_3 (Conv2D) (None, 28, 28, 13) 1989 _________________________________________________________________ conv2d_4 (Conv2D) (None, 28, 28, 7) 819 ================================================================= Total params: 10,104 ``` Try to formulate your options. What would that mean for the parameters if something else would be the case? Hint: $2400 = 32 \cdot (3 \cdot 5 \cdot 5)$ This approach also helps you with other layer types, not only convolutional layers. Please also note that you are free to implement different solutions, that might have other numbers of parameters. Upvotes: 2 <issue_comment>username_7: Just to make two details absolutely clear: Say you have $N$ 2D input channels going to $N$ 2D output channels. The total number of 2D $3\times3$ filter weights is actually $N^2$. But how is the 3D convolution affected, i.e., if every input channel contributes one 2D layer to every output channel, then each output channel is composed initially of $N$ 2D layers, how are they combined? 
This tends to be glossed over in almost every publication I've seen, but the key concept is that the $N^2$ 2D output channels are interleaved with each other to form the $N$ output channels, like shuffled card decks, before being summed together. This is all logical when you realize that along the channel dimensions of a convolution (which is never illustrated), you actually have a fully connected layer! Every input 2D channel, multiplied by a unique $3\times 3$ filter, yields a 2D output layer contribution to a single output channel. Once combined, every output layer is a combination of every input layer $\times$ a unique filter. It's an all-to-all contribution. The easiest way to convince yourself of this is to imagine what happens in other scenarios and see that the computation becomes degenerate - that is, if you don't interleave and recombine the results, then the different outputs wouldn't actually do anything - they'd have the same effect as a single output with combined weights. Upvotes: 0 <issue_comment>username_8: For anyone trying to understand how convolutions are calculated, here is a useful code snippet in Pytorch:

```
import numpy as np
import torch
import torch.nn as nn

batch_size = 1
height = 3
width = 3
in_channels = 2
conv1_out_channels = 2
conv2_out_channels = 2
kernel_size = 2

# (N, C_in, H, W) is the shape of all tensors: (batch_size, channels, height, width)
input = torch.Tensor(np.arange(0, batch_size * height * width * in_channels).reshape(batch_size, in_channels, height, width))
conv1 = nn.Conv2d(in_channels, conv1_out_channels, kernel_size, bias=False)  # no bias to make calculations easier
# set the weights of the convolutions to make the convolutions easier to follow
nn.init.constant_(conv1.weight[0][0], 0.25)
nn.init.constant_(conv1.weight[0][1], 0.5)
nn.init.constant_(conv1.weight[1][0], 1)
nn.init.constant_(conv1.weight[1][1], 2)
out1 = conv1(input)  # compute the convolution

conv2 = nn.Conv2d(conv1_out_channels, conv2_out_channels, kernel_size, bias=False)
nn.init.constant_(conv2.weight[0][0], 0.25)
nn.init.constant_(conv2.weight[0][1], 0.5)
nn.init.constant_(conv2.weight[1][0], 1)
nn.init.constant_(conv2.weight[1][1], 2)
out2 = conv2(out1)  # compute the convolution

for tensor, name in zip([input, conv1.weight, out1, conv2.weight, out2], ['input', 'conv1', 'out1', 'conv2', 'out2']):
    print('{}: {}'.format(name, tensor))
    print('{} shape: {}'.format(name, tensor.shape))
```

Running this gives the following output:

```
input: tensor([[[[ 0., 1., 2.],
          [ 3., 4., 5.],
          [ 6., 7., 8.]],
         [[ 9., 10., 11.],
          [12., 13., 14.],
          [15., 16., 17.]]]])
input shape: torch.Size([1, 2, 3, 3])
conv1: Parameter containing:
tensor([[[[0.2500, 0.2500],
          [0.2500, 0.2500]],
         [[0.5000, 0.5000],
          [0.5000, 0.5000]]],
        [[[1.0000, 1.0000],
          [1.0000, 1.0000]],
         [[2.0000, 2.0000],
          [2.0000, 2.0000]]]], requires_grad=True)
conv1 shape: torch.Size([2, 2, 2, 2])
out1: tensor([[[[ 24., 27.],
          [ 33., 36.]],
         [[ 96., 108.],
          [132., 144.]]]], grad_fn=<...>)
out1 shape: torch.Size([1, 2, 2, 2])
conv2: Parameter containing:
tensor([[[[0.2500, 0.2500],
          [0.2500, 0.2500]],
         [[0.5000, 0.5000],
          [0.5000, 0.5000]]],
        [[[1.0000, 1.0000],
          [1.0000, 1.0000]],
         [[2.0000, 2.0000],
          [2.0000, 2.0000]]]], requires_grad=True)
conv2 shape: torch.Size([2, 2, 2, 2])
out2: tensor([[[[ 270.]],
         [[1080.]]]], grad_fn=<...>)
out2 shape: torch.Size([1, 2, 1, 1])
```

Notice
how each channel of the convolution sums over all of the previous channels' outputs. Upvotes: 0 <issue_comment>username_9: My understanding from this paper <https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/TASLP2339736-proof.pdf> was that the filters are used on each input channel (i.e. input feature map in the paper) separately and the result is summed, as described in eq. 8. Here they use different filters for each channel, but you could totally use the same filter. As to knowing the software implementation in a particular ML library such as Tensorflow or PyTorch, this requires inspection as suggested by @username_2 Bukhari Upvotes: 0 <issue_comment>username_10: Other answers claim that you have a different **2-dimensional** kernel (i.e. a matrix) for each channel. This is not wrong, but it's just a *conceptual interpretation* of the 2d convolution, which emphasizes that different channels may provide different information during the convolution: this is the only advantage that I see of this interpretation! Another interpretation (which I think has more advantages, as I will explain below) is to think of a kernel as a 3d multi-dimensional array that has the same depth as the input. So, for example, if you have an input (an image or feature map) of shape $H \times W \times D$, then a single kernel needs to have the shape $K \times L \times D$, where $D$ is the depth of both the input and kernel. If you have more than one kernel, they will all have the depth $D$. This interpretation is consistent with the excerpt that you are quoting. This second interpretation is conceptually useful when you need to deal with 3d convolutions: if you follow this interpretation, a 3d convolution is just a convolution where kernels don't necessarily have the same depth as the input (and that would be the only difference between 2d and 3d convolutions).
The other advantage of this interpretation is that your confusion would not have arisen in the first place: your confusion arose because you thought that kernels are 2d arrays. Another advantage is that, if you have $N > 1$ kernels (which is the usual case), then you know immediately that $N$ is the depth of the output volume and you don't need to think about multiple channels of the input volume: you can simply think in terms of multi-dimensional arrays. So, to clarify, in case it wasn't already super clear: **no, you don't apply the same 2d matrix to multiple channels of the input!** Upvotes: 0 <issue_comment>username_11: One addition to username_2's answer: try the awesome [CNN explainer](https://poloclub.github.io/cnn-explainer/). There you can explore CNNs graphically, which makes things "click" very fast. If you click on conv\_1\_1, you will see how three different kernels are used to calculate one convolution. That means you have one 2D filter per channel, which could together be interpreted as one 3D filter. Upvotes: 1 <issue_comment>username_12: The Conv2d filter is defined as a 3D-represented tensor, but it's indeed a collection of num\_input\_channels kernels applied laterally. The different kernels' layers' values are distinct and learned distinctly too. Theoretically it's necessary for it to be that way; otherwise it wouldn't have the degrees of freedom to distinguish well across the input channels, e.g. find color features in the case of the first layer of a color image input! Upvotes: 0
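As a numerical sanity check of the two interpretations debated throughout this thread (this is my own sketch, not code from any of the cited sources), the following NumPy snippet shows that summing C per-channel 2D convolutions produces exactly the same output map as sliding a single 3D filter of depth C:

```python
import numpy as np

rng = np.random.default_rng(1)

H, W, C = 5, 5, 3          # input height, width, channels
k = 3                      # kernel size

x = rng.random((H, W, C))  # input volume
w = rng.random((k, k, C))  # ONE "3D" filter = C stacked 2D kernels

def conv2d_single_channel(img, ker):
    """Valid cross-correlation of one 2D channel with one 2D kernel."""
    oh, ow = img.shape[0] - k + 1, img.shape[1] - k + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+k, j:j+k] * ker)
    return out

# Interpretation 1: slide one 3D filter over the volume (no striding along depth).
out_a = np.zeros((H - k + 1, W - k + 1))
for i in range(H - k + 1):
    for j in range(W - k + 1):
        out_a[i, j] = np.sum(x[i:i+k, j:j+k, :] * w)

# Interpretation 2: C separate 2D convolutions, one per input channel, summed.
out_b = sum(conv2d_single_channel(x[:, :, c], w[:, :, c]) for c in range(C))

print(np.allclose(out_a, out_b))  # True: both views are the same computation
```

With `number_of_filters` such 3D filters, a layer simply stacks that many output maps along the depth axis, which is exactly the parameter count the Keras summary above reports.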
2018/03/22
<issue_start>username_0: I read that computing the derivative of the error with respect to the input of a convolution layer is the same as performing a convolution between the deltas of the next layer and the weight matrix rotated by $180°$, i.e. something like $$\delta^l\_{ij}=\left(\delta^{l+1} \* rot180(W^{l+1})\right)\_{ij} f'(x^l\_{ij})$$ with $\*$ the convolution operator. This is valid with $stride=1$. However, what happens when the stride is greater than $1$? Is it still a convolution with a kernel rotation, or can I not make this simplification?<issue_comment>username_1: From the paper found in the post linked below: > > 'We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks' > > > All that means is that values are only skipped (= pooling is performed) in the matrix; otherwise everything works as a convolution should. **Sources:** <https://arxiv.org/pdf/1412.6806.pdf> <https://stackoverflow.com/questions/44666390/max-pool-layer-vs-convolution-with-stride-performance> Upvotes: 1 <issue_comment>username_2: I have just the same problem, and I was trying to derive the backpropagation for the convolutional layer with stride, but it doesn't work. When you do the striding in the forward propagation, you choose the elements next to each other to convolve with the kernel, then take a step $>1$. This means that in the backpropagation, in the reverse operation, the delta matrix elements will be multiplied by the kernel elements (with the rotation), but you are picking elements that are not next to each other, something like $DY\_{11} \* K\_{11} + DY\_{13} \* K\_{12} + DY\_{31} \* K\_{21} + DY\_{33} \* K\_{22}$, which is NOT equivalent to a convolution with a stride $>1$. So as far as I am concerned, if I would like to implement the ConvNet by myself to get a better grasp of the concept, I have to implement a different method for the backprop, if I allow strides.
Upvotes: 1 <issue_comment>username_3: [![Animation of backpropagation with stride > 1](https://i.stack.imgur.com/X9Pih.gif)](https://i.stack.imgur.com/X9Pih.gif) Backpropagation with stride > 1 involves dilation of the gradient tensor with stride-1 zeroes. I created a [blog post](https://medium.com/@mayank.utexas/backpropagation-for-convolution-with-strides-8137e4fc2710) that describes this in greater detail. Upvotes: 2
2018/03/26
<issue_start>username_0: Suppose one trains a CNN to determine if something is either a cat/dog or neither (2 classes). Would it be a good idea to assign all cats and dogs to one class and everything else to another? Or would it be better to have a class for cats, a class for dogs, and a class for everything else (3 classes)? My colleague argues for 3 classes because dogs and cats have different features, but I wonder if he's right.<issue_comment>username_1: If you want to determine if something is either a > > cat/dog or neither > > > you need 2 classes: 1. one for dog or cat, and 2. one for anything else. However, if you assign all cats and dogs to the same class $A$ and an input is classified as $A$, then you won't be able to know whether it is a dog or a cat; you will just know that it is *either* a dog or a cat. In case you wanted to distinguish between cats and dogs too (apart from *neither* of them), then you'll need $3$ classes. Finally, if you specify only 2 classes: 1. dog, and 2. cat, then your CNN will try to classify any new input as either a dog or a cat, even though it is neither a dog nor a cat (e.g. maybe it is a horse). Upvotes: 2 <issue_comment>username_2: The best approach may be to have a cat, dog, and neither class (3 classes total) and go with a regression approach — specifically, outputting the probabilities of each class for any given input. From there, you can always take the probabilities of each output and derive the probability of the cat-or-dog class or the neither class. Also, make sure you use the right activation on the output layer and cost function so that you can interpret the outputs as probabilities (e.g. softmax activation and cross-entropy loss). Upvotes: 2 [selected_answer]<issue_comment>username_3: As far as generalization error is concerned, you are better off learning the data distribution of the (A and B) classes using an unsupervised criterion.
If you capture the underlying factors that explain most of the variation in the A and B classes, you can then fine-tune the model using a supervised criterion. This way, compared to using two classes, one for (A or B) and the other for neither, you will not force the model to learn features that don't belong to (A or B): the model just checks whether a new data point was likely drawn from the data distribution that resembles (A or B). Side note: you will never have the data necessary to explore the internal structure of the "otherwise" class (neither A nor B). Upvotes: 1
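The accepted suggestion is easy to make concrete. In this minimal sketch (the logit values are invented purely for illustration), a softmax over the three classes [cat, dog, neither] yields per-class probabilities, and the binary cat-or-dog probability is obtained by simply summing the first two:

```python
import math

def softmax(logits):
    """Convert raw scores to probabilities (numerically stable form)."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical network outputs for the classes [cat, dog, neither]:
logits = [2.0, 1.0, 0.1]
p_cat, p_dog, p_neither = softmax(logits)

# The per-class probabilities can always be collapsed afterwards:
p_cat_or_dog = p_cat + p_dog
print(p_cat_or_dog, p_neither)  # the two values sum to 1
```

In a real network these logits would come from the final dense layer, and training would use cross-entropy against one-hot targets, as the answer suggests; the 3-class model loses nothing, since any coarser grouping can be recovered by summation.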
2018/03/27
<issue_start>username_0: Suppose a CNN is trained to detect bounding box of a certain type of object (people, cars, houses, etc.) If each image in the training set contains just one object (and its corresponding bounding box), how well can a CNN generalize to pick up all objects if the input for prediction contains multiple objects? Should the training images be downsampled in order for the CNN to pick out multiple objects in the prediction? I don't have a specific one in mind. I was just curious about the general behavior.
2018/03/30
<issue_start>username_0: How can I train a neural network to recognize sub-sequences in a sequence flow? For example: Given the sequence **111100002222** as an input sample from a stream, the neural network would recognize that **1111** , **0000** , **2222** are subsequences (so **111100** would not be a valid subsequence) and so on for ~ 50 to 100 different subsequences. There is no particular order in which the subsequence would appear in the flow. No network architecture restriction. Subsequences are of variable length. General concepts, ideas, and theory are welcome.<issue_comment>username_1: I guess supervised learning should work rather well: You'd feed the network with a fixed substring and it'd determine if the middle character is the first letter of a word, or a last one, or neither, or both. So `2*n+1` inputs (fed e.g., with the string "ingsits") should output a 1 on the output determining if the middle letter (here: "s") is the first one of a word and a 0 on the output determining if it's the last one (taken from "Thekingsitsthere"). Each input character should probably be one-hot encoded. You'd probably want to use more context characters than in my example. OTOH you can use a simple MLP with no temporal complications. It'll never get perfect as that's impossible, but it can get pretty close. Concerning unsupervised learning I'm skeptical... Upvotes: 0 <issue_comment>username_2: (NOTE: I think it will be easier to do it without ANNs...) But if you insist: 1. convert the sequence into fixed-size vectors. 2. push through 2-5 1D-convolution layers with a 1-neuron dense layer at the end (sigmoid activation) and another K-points detector for getting the sequence breakage points. 3. create a training set - to find the break-points (12, 23, 34 ...) in the sequence. 4. train a detector with SGD to find these break-points - loss function: cross-entropy. Then, it should learn to find the breakage points, and based on this you can easily split the sequence.
Upvotes: 0 <issue_comment>username_3: Another approach could be to predict the class of a sequence and not the break point. Assuming that each sequence is part of a class, you can use an LSTM: input the multiple sequences (111100002222) and let it predict the class for each element (c1,c1,c1,c1,c0,c0,c0,c0,c2,c2,c2,c2) Upvotes: 0 <issue_comment>username_4: How about this? **1 - Learn all the basic building blocks of possible sub-sequences** In our word-sequence example, that would correspond to [phonemes](https://en.wikipedia.org/wiki/Phoneme). (*I'm guessing that this step can even be done using unsupervised learning.*) So in the following example: **Hello Laurie**, we would have learned 3 phonemes: **HE**, **LO**, **RI**. **2 - Learn all subsequences as sequences of 'building blocks'** Using a [ClockWorkRNN](https://arxiv.org/abs/1402.3511) with timesteps of interval +1 with, let's say, 10-15 timesteps (groups), that is fed the next '*phoneme id*' in the sequence, we would have a space large enough to record most *words* (obviously, the number of timesteps should be the size of the biggest word). This is the subsequences memory RNN. Its sole purpose is to remember subsequences. Now, I'm really brainstorming here, taking a very wild guess, but what if: after training this RNN to a satisfying error rate, we check if the output of the RNN is very different from the next input for a couple of timesteps. In other words, we see if the neural network has been able to '*guess*' the next building block of the subsequence. If not, then it's a **point of interest**, because there are not a lot of possibilities as to why this would happen: the only one I see is 1 - The RNN is currently receiving another word, thus making this timestep a sub-sequence '*break point*'. Do you guys see any points that could prove this theory wrong ? 
Upvotes: 0 <issue_comment>username_5: The problem in the original question is akin to that of inducing a [context-sensitive grammar](https://en.wikipedia.org/wiki/Context-sensitive_grammar) (CSL), except that it is harder because a CSL is assumed to be composed of fixed-length subsequences. It is probably closer to the problem of inducing a [Reber grammar](https://github.com/DylanAuty/rebergen), but that in turn seems like an overkill. LSTMs are known to be able to learn both [CSL](ftp://ftp.idsia.ch/pub/juergen/L-IEEE.pdf) and [Reber](http://www.bioinf.jku.at/publications/older/2604.pdf) grammars. However, I doubt that this is what you really need because of the following comment: > > [...] given an entire book where there is NO spaces anywhere, only characters (including special characters, like commas), in what way can we make the network learn the 'word boudaries' of this book. > > > This is called morphology induction, and it is a *much* harder problem than that of simple Reber grammar induction. Note that finding *word* boundaries is a special case of the problem of finding *morpheme* boundaries. [There](https://link.springer.com/chapter/10.1007/978-94-017-6059-1_2) [have](https://gupea.ub.gu.se/bitstream/2077/21418/1/gupea_2077_21418_1.pdf) [been](http://www.aclweb.org/anthology/W02-0603) [many](http://people.cs.uchicago.edu/~jagoldsm/Papers/algorithm.pdf) [attempts](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.48.9794&rep=rep1&type=pdf) [to solve](http://www.aclweb.org/anthology/P00-1027) [this](http://www.aclweb.org/anthology/N01-1024) (also see [this](https://www.mitpressjournals.org/doi/pdf/10.1162/COLI_a_00050) survey paper for more details and references). 
Most approaches developed seem to rely on statistical principles (like [MDL](https://en.wikipedia.org/wiki/Minimum_description_length)) and *do not* use neural networks (a [counterexample](http://grammar.ucsd.edu/sdlp/current/Malouf_2016_Generating_morphological_paradigms.pdf) using LSTMs). My intuition is that the extreme morphological variability across languages (ranging from Finno-Ugric languages with highly inflectional morphology to Sino-Tibetan languages with hardly any morphology at all) makes it hard to train neural networks in a language-agnostic way. However, you might have better luck if you focus on a single language. Hope that helps. Upvotes: 3 [selected_answer]
2018/03/30
1,723
7,580
<issue_start>username_0: There are problems (e.g. [this one](https://ai.stackexchange.com/q/4859/2444) or [this other one](https://ai.stackexchange.com/q/5838/2444)) that could potentially be solved easily using traditional algorithmic techniques. I think that training a neural network (or any other machine learning model) for such sorts of problems will be more time consuming, resource-intensive, and pointless. If I want to solve a problem, how to decide whether it is better to solve algorithmically or by using NN/ML techniques? What are the pros and cons? How can this be done in a systematic way? And if I have to answer someone why I chose a particular domain, how should I answer? Example problems are appreciated.<issue_comment>username_1: When we apply supervised learning to a problem, we are already systematizing the approach. A human has decided that a function exists (mapping from inputs to unique output) and that the offered features are the only ones that need be considered. The learning then goes ahead to find the best solution given those constraints. Unsupervised learning is a bit more general, searching for associations or relations that might not necessarily be functions. A neural net is not yet capable of generalizing and asking for more information, it can only become more specific unless a human intervenes. Everything depends on the detail of the problem. If it is clear that a function must exist then we can set a NN to find that function. Many other problems are more difficult - a company is losing money and you have data but halfway there was a change in CEO, so human reasoning has to be mixed in to deal with the situation. The human can modify the architecture of the NN to introduce dummy variables, but the NN cannot do this by itself. So your answer really is "I chose this method because of the (lack of) need for me to artificially constrain the approach to the problem." 
Upvotes: 1 <issue_comment>username_2: There are two different problems described in the linked question and your question: optimization and learning. ### Optimization If you are asking about optimization (the second linked question: [Search minimum value with learning machine algorithm](https://ai.stackexchange.com/questions/4859/search-minimum-value-with-learning-machine-algorithm/4860#4860)) you can have 3 different approaches: * analytical approach * numerical methods * metaheuristics As you suggest, it is usually better to try them from the first to the last one. It is common that the first approach is infeasible for optimizing your target function, but very often you can use either mathematical optimization for some specific classes of problems (e.g. [linear](https://en.wikipedia.org/wiki/Linear_programming)/[quadratic](https://en.wikipedia.org/wiki/Quadratic_programming) programming) or iterative methods (e.g. the [conjugate gradient method](https://en.wikipedia.org/wiki/Conjugate_gradient_method)). Only after considering these approaches does it make sense to turn to the third class of approaches, genetic algorithms being a notable example, which is often classified as an AI approach. ### Learning If you are asking about learning, then the first linked question ([Ideas on how to make a neural net learn how to split sequence into sub sequences](https://ai.stackexchange.com/questions/5838/ideas-on-how-to-make-a-neural-net-learn-how-to-split-sequence-into-sub-sequences)) seems to be intended as an example. However, it doesn't make clear what the problem is, as the target function seems to be obvious, so no learning is needed. In this case it also makes sense to first try to pin down the problem mathematically and resort to machine learning only if that is impossible and if you have the data (input/output examples). 
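As a toy illustration of the second class above (iterative numerical methods) — plain gradient descent rather than the conjugate gradient method linked in the answer, with a made-up objective and step size:

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    # repeatedly step against the gradient of the objective function
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# minimise f(x) = (x - 3)^2; its gradient is 2*(x - 3), analytic minimum at x = 3
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 6))  # 3.0
```

For this simple quadratic the analytical approach (set the derivative to zero) is of course preferable, which is exactly the ordering the answer recommends.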
Upvotes: 2 <issue_comment>username_3: *fwiw,* with the basic, non-trivial M-game, I have no doubt that AlphaZero could tear through any human player alive in very short order. I hope that people will start experimenting with that, especially on m^n(m^n) where m > 3 and n > 2 to see how they hold up. Problem is, once you expand past n > 3 it gets very difficult for humans to play. This leads to a condition where performance of an NN on higher order M can only realistically be evaluated against other algorithms. In this context, it seems worthwhile to develop a general, classical algorithm that can evaluate any order M, regardless of efficacy of tree search in relation to the problem size, with the understanding that decision making is never presumed optimal until the gametree becomes tractable. This carries an assumption of the same general strength across all M for the classical algorithm, because the expansion of *m* or *n* does not alter the core heuristics. From the practical standpoint, as a product designed for mobile with no assumption of connectivity, it doesn't make sense to start integrating NNs until lowest-common-denominator mobile devices have sufficient resources. The issue of package size is also important in this context--the classical algorithms require a trivial amount of code and volume. Most importantly, using classical algorithms formed of sets of heuristics and parameters allows recombination of functions to produce myriad automata of varying degrees of strength. (This can be easily accomplished by altering the size of tree search algorithms, but may only be relevant in determining which heuristics perform better under tree search restrictions.) Finally, because M-games provide an array of precise metrics, it may be worthwhile to develop core heuristic functions based on human reasoning. Upvotes: 1 <issue_comment>username_4: Note that "algorithmically" can refer to anything that uses an *algorithm*. 
Currently, ML systems are trained with algorithms and neural networks can be seen as algorithms (although black-box ones), so ML is also algorithmic. Everything that runs on a computer (a concrete version of a Turing machine) can be seen as an algorithm (or program)! In fact, computers were invented exactly for this purpose: to perform some algorithmic operation (i.e. a set of instructions, like a recipe). So, by *algorithmic*, I assume you're referring to techniques that are typically taught in an "Algorithms and Data Structures" course for a computer science student, such as the binary search (one of the most simple and yet beautiful and useful algorithms!), which is an algorithm that, given some constraints (a sorted array), gives you an exact correct solution in $\mathcal{O}(\log n)$ time. However, I think that you are also referring to every program that is primarily based on if-then statements and loops (e.g. desktop applications, websites, etc.) To answer your question, you first need to understand the scope of the machine learning field. Machine learning (like statistics) is a set of techniques that attempt to learn from data. So, every problem where data is available (and you can get insight from) can potentially be solved with a machine learning technique. ML techniques typically produce approximative solutions and are typically used to solve problems where an exact solution is infeasible. However, note that machine learning isn't the only approach to solve hard problems (e.g. you can also use meta-heuristics, e.g. ant colony optimization algorithms). If you have an algorithm that produces an exact solution (without requiring data) in polynomial time (preferably, in $\mathcal{O}(n^2)$ time), then machine learning (or any other technique that produces approximative solutions, e.g. heuristics) is quite useless. Upvotes: 0
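For concreteness, the binary search mentioned above can be written in a few lines (Python's standard library also offers it via the `bisect` module):

```python
def binary_search(items, target):
    # items must be sorted; returns the index of target, or -1 if absent,
    # halving the candidate range each iteration -> O(log n) comparisons
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```

Given the sorted-input constraint, this gives an exact answer with no data or training, which is the contrast the answer draws with machine learning's approximative solutions.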
2018/04/01
595
2,303
<issue_start>username_0: I have a neural network with 2 inputs and one output, like so: ``` input | output ____________________ a | b | c 5.15 |3.17 | 0.0607 4.61 |2.91 | 0.1551 ``` etc. I have 75 samples and I am using 50 for training and 25 for testing. However, I feel that the training samples are not enough. Because I can't provide more real samples (due to time limitation), I would like to train the network using fake data: For example, I know that the range for the `a` parameter is from 3 to 14, and that the `b` parameter is ~65% of the `a` parameter. I also know that `c` is a number between 0 and 1 and that it increases when a & b increase. So, what I would like to do is to generate some data using the above restrictions (about 20 samples). For example, assume `a = 13`, `b = 8` and `c = 0.95`, and train the network with these samples before training it with the real samples. Has anybody studied the effect of doing this on the neural network? Is it possible to know if the effect will be better or worse on the network? Are there any recommendations/guidelines if I want to do this?<issue_comment>username_1: This is not advisable. If you train your model with random data, your model is not learning anything useful, because there is no information to gain from those examples. Even worse, it may be (and likely is) trying to generalize off of your incorrect examples, which will lessen the effect your real examples have. Essentially, you are just dampening your training set with noise. You are moving in the right direction though. 75 examples will not be enough if your problem has any complexity at all. And unless you know some correlation between the inputs `a`, `b` and the output `c`, you don't want to generate data (and even if you did know some correlation, it is not always suggested to generate data). If it is impossible to get any more data, you might want to consider a statistical model, rather than a neural network. 
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you add fake samples to the training set, your neural network learns the new dataset that you just made; your fake samples are estimations, so you add noise to your training set. You can use the leave-one-out cross-validation technique for evaluating your model. Upvotes: 1
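The leave-one-out cross-validation suggested above can be sketched in plain Python; the mean-predictor "model" here is only a stand-in to keep the example self-contained, not a recommendation for the questioner's regression problem:

```python
def leave_one_out(data, fit, predict):
    # hold out each sample once, train on the rest, and average the errors;
    # with only 75 samples this uses the data far better than a 50/25 split
    errors = []
    for i in range(len(data)):
        train = data[:i] + data[i + 1:]
        model = fit(train)
        x, y = data[i]
        errors.append(abs(predict(model, x) - y))
    return sum(errors) / len(errors)

# stand-in model: ignore the input and predict the mean training output
fit = lambda train: sum(y for _, y in train) / len(train)
predict = lambda model, x: model

print(leave_one_out([(0, 1), (0, 2), (0, 3)], fit, predict))  # 1.0
```

Any real model (a small network, a linear regression) can be dropped in for `fit`/`predict` without changing the cross-validation loop.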
2018/04/01
790
3,612
<issue_start>username_0: Deep networks notoriously take a long time to train. What is the most time-consuming aspect of training them? Is it the matrix multiplications? Is it the forward pass? Is it some component of the backward pass?<issue_comment>username_1: There is no single hard and slow step in training neural networks. The forward pass involves a large number of matrix multiplications, and so does the backward pass. Even though there are highly optimized libraries for matrix multiplication, neural networks act on very high-dimensional (tensor) multiplications in both the forward and backward passes, which makes training slow. However, the backward pass would be even slower, or even intractable for large neural networks, if we didn't use backpropagation, since computing derivatives is time-consuming. Refer to the benchmark results here for exact numbers: <https://github.com/baidu-research/DeepBench#types-of-operations> Upvotes: 0 <issue_comment>username_2: ### Forward pass The output of a layer can be calculated given the output of the previous layer. So the GPU can parallelize this computation for every layer and over the minibatch which is done by calculating a big matrix. But it needs to be sequential from layer to layer (earlier layers to higher layers). Regarding the layer type convolutions or especially fully connected layers can result in a big matrix calculation. ### Backward pass The gradient of a layer with respect to the layer input (and layer parameters) can only be calculated given the gradient of the layer output (input gradient of a subsequent layer) and input to the layer (output of the previous layer). This again can be parallelized over a layer and minibatch but is sequential from higher layers to earlier layers. Moreover, since the backward pass relies on the outputs of the forward pass all intermediate layer outputs of the forward pass have to be cached for the backward pass which results in a high (GPU) memory usage. 
### Forward and backward pass take most of the time So, these two steps take a long time for 1 training iteration, and (depending on your network) cause high GPU memory usage. But you should read and understand the backpropagation algorithm, which basically explains everything. Moreover, training a network from scratch, in general, takes lots of iterations because, especially in the earlier layers, training the parameters is based on gradients that are affected by lots of previous layers, which can result in noisy updates, etc., that do not always push the network parameters in the right direction directly. In contrast, e.g. fine-tuning a pre-trained network on some new task can already be done with far fewer training iterations. Upvotes: 1 <issue_comment>username_3: Check out Figure 6 in this paper: [PyTorch Distributed: Experiences on Accelerating Data Parallel Training](https://arxiv.org/abs/2006.15704) It breaks down the latency of the forward pass, the backward pass, the communication step, and the optimization step for running both ResNet50 and BERT on NVIDIA Tesla V100 GPUs. From measuring the pixels in the figure, I estimated the times for the forward, backward, and optimization steps as a percentage of their total time combined. (I ignored the communication step shown in the figure because that was only to show how long an unoptimized communication step would take when doing data-parallel training). Here are the estimates I got: * Forward: 23% * Backward: 74% * Optimization: 3% So the backward pass takes about 3x as long as the forward pass, and the optimization step is relatively fast. Upvotes: 2
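The caching of forward-pass activations described above can be made concrete with a toy two-parameter network; the scalar shapes and the `tanh` nonlinearity are simplifications chosen only for illustration:

```python
import math

def forward(x, w1, w2):
    # layer outputs are cached because the backward pass will need them
    h = math.tanh(w1 * x)           # layer 1 activation (cached)
    y = w2 * h                      # layer 2 output
    return y, {"x": x, "h": h}

def backward(dy, w2, cache):
    # gradients of the loss w.r.t. w1 and w2, reusing the cached activations
    # instead of recomputing the forward pass
    dw2 = dy * cache["h"]
    dh = dy * w2
    dw1 = dh * (1 - cache["h"] ** 2) * cache["x"]   # tanh'(z) = 1 - tanh(z)^2
    return dw1, dw2

y, cache = forward(1.0, 0.5, 2.0)
dw1, dw2 = backward(1.0, 2.0, cache)
```

Note that `backward` does roughly as much arithmetic as `forward` plus the extra products for each parameter gradient, which is consistent with the measurements above showing the backward pass dominating.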
2018/04/03
1,740
6,301
<issue_start>username_0: Can we detect the emotions (or feelings) of a human through conversations with an AI? Something like a "confessional", disregarding the human capacity to lie. Below, I have the categories joy, sadness, anger, fear and affection. For each category, there are several words that can appear in the texts that refer to it. * **Joy:** **(** cheerful, happy, confident, satisfied, excited, interested, dazzled, optimistic, relieved, euphoric, drunk, witty, good **)** * **Sadness:** **(** sad, desperate, displeased, depressed, bored, lonely, hurt, desolate, meditative, defrauded, withdrawn, pitying, concentrated, melancholic, nostalgic **)** * **Anger:** **(** aggressive, critical, angry, hysterical, envious, grumpy, disappointed, shocked, exasperated, frustrated, arrogant, jealous, agonized, hostile, vengeful **)** * **Fear:** **(** shy, frightened, fearful, horrified, suspicious, disbelieving, embarrassed, shaken, surprised, guilty, anxious, cautious, indecisive, modest **)** * **Affection:** **(** loving, passionate, supportive, malicious, dazzled, glazed, homesick, embarrassed, indifferent, curious, tender, moved, hopeful **)** **Flow Example** **Phrase 1:** "I'm very happy! I just finished college." **Categorization 1:**  - Joy **(+1)** * Sadness **(-1)** --- **Phrase 2:** "I'm sad, my mother passed away." **Categorization 2:**  - Sadness **(+1)** * Joy **(-1)** --- **Phrase 3:** "I met a girl, but I was ashamed." **Categorization 3:**  - Fear **(+1)** Is this a sensible way to proceed and/or improve, or am I completely off track? I see that there is a Google product that parses phrases in this way. I do not know how it works, so I would like to recreate it the way I think it would work. Note that this would not be the only way to categorize the phrase; it would be the first phase of the analysis. 
I can also identify the subject of the sentence, so we would know if the sadness is from the creator of the message or from a third party, in most cases. * [NLTK](http://www.nltk.org/book/ch08.html) * [Sentiment Analysis Python Example](https://github.com/text-machine-lab/sentimental)<issue_comment>username_1: It could work using supervised learning, as long as you have the required dataset. However, a low error ratio using unsupervised learning of the human emotion spectrum would prove to be more difficult. For example: how would you define being in love to a neural network? Joy +1, Sadness -1? Now, how would you define being in love with, let's say, someone you know you could never be with? Joy -1, Sadness +1, but at the same time, the mere fact that you are thinking about that person brings a Joy +1. Human emotions are quite complex. A good start (in my humble opinion) would be to read about 'emotion-related' hormones, and how they affect the brain (dopamine, serotonin, etc). Some emotions are really a precise mix of these hormones, probably giving you a good hint on how to 'categorize' your network. Upvotes: 1 <issue_comment>username_2: I think you are definitely on a very sensible track. No one defines right or wrong in the field of emotion. It's not hard science. It's all theories. I have recently read a [paper](https://link.springer.com/article/10.1007/s10994-017-5666-0) regarding emotions in Reinforcement Learning (RL). It briefly explains emotion from 3 perspectives: *psychology, neuroscience* and *computer science*. In particular, your way of defining emotion matches the **categorical emotion theory** in the psychology perspective. Another theory in the psychology perspective is **componential emotion**. You can try to implement them and try out which one works well. The paper has also introduced ways to measure the level of emotions (emotion elicitation). 
Here is the link for the [paper](https://link.springer.com/article/10.1007/s10994-017-5666-0) I have mentioned. I am sure you will receive lots of inspiration. I have also written a [summary](https://diansheng.github.io/Emotion-in-RL/) of this paper. Take a look if the original paper is too long to read. I don't have any concrete solution for implementation, but the general idea is always to categorize abstract concepts and quantify them: try something, and iteratively modify and improve it. All the best! Upvotes: 2 <issue_comment>username_3: I don't want to pour cold water over your approach, but I am very sceptical and (having worked in sentiment analysis myself) think it is way too simplistic. Various communicative intents are encoded in language, and there is a wide range of linguistic features that are employed for that purpose. Choice of words is only one of them; it is the most obvious one, as we can easily see the words themselves. But words in isolation do not mean anything; context is important. It is of course not difficult to come up with example sentences where the sentiment effect of the words you list is reversed. The easiest one is negation: *I'm not happy about this.* Sure, you can check if there is a *not* before the word, but what about *I would be happy if you stopped making such a noise.* -- surely here the current state would be one of unhappiness? If you think about real examples, it suddenly becomes very complicated. Also, words usually have multiple meanings: *This cup is just shy of one litre.* I'm sure you'd agree that this does not express 'fear'. And *The shunter moved the tender on the old steam engine.* is not about affection. But solving this problem involves word sense disambiguation, which is itself a hard problem to solve. The problem is, initially word-based approaches look really good and transparent, as you can easily see what's going on. 
But language unfortunately doesn't play ball, and in real life systems don't tend to work very well. Lexical choice is only one way to encode sentiment, there are also grammatical patterns. But these are often very subtle, and not yet well-explored in linguistic research. To end on a positive note, have a look at research in evaluation (which is kind of related to sentiment). For example, <NAME>'s *Corpus Approaches to Evaluation*, (Routledge 2011). That should give you some further pointers. Upvotes: 2
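For reference, the flow in the original question reduces to a keyword lookup. This toy scorer (shortened word lists, and an opposite-category rule invented to match the question's +1/-1 examples) also makes the weaknesses username_3 describes easy to reproduce, since it ignores negation and word sense entirely:

```python
LEXICON = {
    "joy":     {"cheerful", "happy", "confident", "satisfied", "excited"},
    "sadness": {"sad", "desperate", "depressed", "bored", "lonely"},
    "fear":    {"shy", "frightened", "fearful", "ashamed", "anxious"},
}
OPPOSITE = {"joy": "sadness", "sadness": "joy"}   # mirrors the question's examples

def score(sentence):
    # count lexicon hits per category, decrementing the opposite category
    scores = {category: 0 for category in LEXICON}
    words = sentence.lower().replace(",", " ").replace(".", " ").replace("!", " ").split()
    for word in words:
        for category, vocabulary in LEXICON.items():
            if word in vocabulary:
                scores[category] += 1
                if category in OPPOSITE:
                    scores[OPPOSITE[category]] -= 1
    return scores

print(score("I'm sad, my mother passed away."))  # {'joy': -1, 'sadness': 1, 'fear': 0}
```

Feeding it *I'm not happy about this.* still yields a joy +1, which is precisely the negation failure pointed out above.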
2018/04/04
2,095
9,162
<issue_start>username_0: [A question about swarm intelligence as a potential method of strong general AI](https://ai.stackexchange.com/q/5258/2444) came up recently, and yielded some useful answers and clarifications regarding the nature of swarm intelligence. But it got me thinking about "group intelligence" in general. Here organism is synonymous with algorithm, so a complex organism is an algorithm made up of component algorithms, based on a set of instructions in the form of a string. Now consider the [Portuguese man o' war](https://en.wikipedia.org/wiki/Portuguese_man_o%27_war), not a single animal, but a [colonial organism](https://en.wikipedia.org/wiki/Colony_(biology)). In this case, that means a set of animals connected for mutual benefit. And *physalia physalis* are pretty smart as a species in that they've been around for a while, I'm not finding them on any endangered lists, and based on their habitat it looks like global warming will be a jackpot for them. And they don't even have brains. Each component of the *physalia* has a narrow function, colony organism itself has a more generalized function, which is the set of functions necessary for maintenance and reproduction. {Man o' War} ⊇ { {pneumatophore}, {gonophores, siphosomal nectophores,vestigial siphosomal nectophores}, {free gastrozooids, tentacled gastrozooids, gonozooids, gonopalpons}, {dactylozooids}, {gonozooids}, {gastrozooids} } * What types of applications qualify as "compound intelligences"? What is the thinking on groups of neural networks comprising generally stronger or simply more generalized intelligence? I recognize the underlying problem is ultimately complexity and that "strong narrow AI" is, by definition, limited, so I use "generalized" and omit "strong" because human-like and superintelligence are not conditions. 
Compound intelligence is defined as a colony of dependent intelligences.\* Utility software is often a form of an expert system that manages a set of functions of varying degrees of complexity. There's currently a great deal of focus on autonomous vehicles, which would seem to require sets of functions. Links to research papers on this or related subjects would be ideal. --- [Portuguese Man o' War (oceana.org)](http://oceana.org/marine-life/corals-and-other-invertebrates/portuguese-man-o-war) [The Bugs Of The World Could Squish Us All](https://fivethirtyeight.com/features/the-bugs-of-the-world-could-squish-us-all/)<issue_comment>username_1: I did work on compound intelligence because that is the direction that Google is trying to go. I couldn't find any basis for it. In other words, having a collection of AI expert systems does not seem to provide any collective intelligence. You would also need some kind of control program that could decide which system to use. Currently, Google relies on the user to choose. If Google was able to create an independent control program it would already be out. This does not seem to be a matter of complexity or code tweaking but a fundamental limit with AI. Upvotes: 1 <issue_comment>username_2: With the increase of both unit capacity of wind turbines and the size of wind farms, the safe operation of wind farms has received growing attention. Among the many factors that affect the safe operation of wind farms, being struck by lightning is an important aspect. An intelligent lightning monitoring system is used for surveillance of wind turbine generators, which can provide real-time and accurate monitoring of the lightning current waveforms, amplitude, time of occurrence, number of lightning strokes and all the other important parameters of lightning, thus providing an effective monitoring and analysis tool to quickly locate the fault location on a wind turbine generator and the cause of malfunction, and a theoretical basis for the design of the lightning protection system of wind turbines. 
This work examines the principle and main methods of the wind turbine generator-matching lightning monitoring system and, combined with the specific research project, designs and implements a high-precision and multifunctional intelligent lightning monitoring system based on the theory of Rogowski coils. Upvotes: 0 <issue_comment>username_3: Swarm intelligence, compound intelligence, or group intelligence may emerge as an important concept as AI develops toward higher complexity. Whether these terms should be considered synonymous is doubtful. Compound features in biology are the result of control in differentiation during the development of an organism from a single cell. Compounding in biology is a single function performed across like elements. Swarms are a result of distinct organisms operating in proximity. Swarming in biology is similarity in complex independent behaviors that appear coordinated but are generally engaged in as a defense from predators. Group intelligence may be distinct from compound intelligence in that adjacent units may be in agreement or opposition, as can be the case with intelligent beings in groups. The agreement in the group is that a variety of models is permissible. We call them opinions, but they are distinct matches of models to problems that produce different projections and suggest different selections from among the group's options. The terms general and strong appear in this question and many others, and it appears that, mainly for historical reasons, they may continue to frustrate clarity. All intelligence is general in that it learns a generality from specific experiences and then applies that generality to future scenarios to achieve objectives within those scenarios. What makes the application of the generality intelligent is that it is expected to work because it has been working for similar scenarios. All intelligence is specific in that it is limited to the scope of what generalities have been discovered. 
Certain techniques are significantly more general than others because they are allegedly domain independent. We call this mathematics. Consequently, the development of artificial intelligence is not a path from specific to general but one of discovering generalities and applying them to specifics. One could say that the most general thinking of mathematics applied to the computer is the primary activity of applied artificial intelligence, and the capabilities that emerge are more sophisticated as more sophisticated mathematics is represented in working software and hardware. The generalities take on greater complexity so that they can apply to a greater number of specific scenarios. That is not a gain in strength but a gain in the breadth of potential application. Physalia physalis is a symbiotic colony of organisms of four types, the pneumatophore (or float), dactylozooids (long tentacles), gastrozooids (feeding tentacles), and gonozooids which produce reproductive gametes. To avoid breaking the historic conception of animals, these organisms are called polyps. The venom of the colony does not come from any of those organisms but rather from another symbiotic organism, cnidocytes, which attach to the tentacles and are released under strictly controlled conditions. In addition to all this symbiosis, there are several species of fish that use the colony for shade and protection, that the colony lets swim among the tentacles, and that have formed partial immunity to the cnidocytes. These complex symbiotic networks are not fully understood, but they seem to operate well and create sustainable inter-species systems. It's not likely that the organisms, within their lifetimes, adapt or remember, but the DNA of each organism has evolved in a way that resembles intelligence in that the colony and its symbionts have adapted to the ocean's surface and its biology. 
The idea that the system of the biosphere is the first example of broad intelligence is likely, in that design excellence has emerged from evolutionary processes. It is not altogether ridiculous to propose that human intelligence is a higher-speed approach in neurons to the slower speed of DNA replication in larger organisms. Because the metabolic requirements of growth slow evolution for larger organisms with a greater cell count, these larger organisms may have needed to develop a way to achieve the nimble adaptivity of their lower ancestors. Neurology facilitates the approximation of some aspects of evolution and may have been the most attainable natural solution to reacquire nimble adaptivity. Attempting to apply these various ideas to the current and ongoing development of autonomous vehicles reveals a gap in understanding. We don't yet have the mathematics developed to understand how compound, swarm, or group intelligence can be used in the laboratory to accomplish well-defined problems in controlled execution scenarios. That is probably a prerequisite to using these ideas in vehicle control. The system designs of future cars may be like a colony of independent or semi-independent components that each have a role and purpose. Compound, group, and symbiotic designs are likely to develop. The current activities of vehicles on the road or in the air near airports are like a swarm, so the application of that idea is obvious: Avoid collisions. Upvotes: 1
2018/04/04
1,581
7,227
<issue_start>username_0: In the search tree below, there are 11 nodes, 5 of which are leaves. There are 10 branches. [![Search tree](https://i.stack.imgur.com/MCRn3.png)](https://i.stack.imgur.com/MCRn3.png) Is the average branching factor given by 10/6, or 10/11? Are leaves included in the calculation? Intuitively, I would think not, since we are interested in nodes with branches. However, a definition given to me by my professor was "The average number of branches of all nodes in the tree", which would imply leaves are included.<issue_comment>username_1: I did work on compound intelligence because that is the direction that Google is trying to go. I couldn't find any basis for it. In other words, having a collection of AI expert systems does not seem to provide any collective intelligence. You would also need some kind of control program that could decide which system to use. Currently, Google relies on the user to choose. If Google were able to create an independent control program, it would already be out. This does not seem to be a matter of complexity or code tweaking but a fundamental limit of AI. Upvotes: 1 <issue_comment>username_2: With the increase of both unit turbine capacity and the size of wind farms, the safe operation of wind farms has received growing attention. Among the many factors that affect the safe operation of wind farms, being struck by lightning is an important one. An intelligent lightning monitoring system is used for the surveillance of wind turbine generators; it can carry out real-time and accurate monitoring of the lightning current waveforms, amplitude, time of occurrence, number of lightning strokes, and all the other important parameters of lightning, thus providing an effective monitoring and analysis tool to quickly locate the fault location on a wind turbine generator and the cause of malfunction, and a theoretical basis for the design of the lightning protection system of wind turbines.
This work examines the principle and main methods of lightning monitoring matched to the turbine generator and, combined with the specific research project, designs and implements a high-precision, multifunctional intelligent lightning monitoring system based on the theory of Rogowski coils. Upvotes: 0 <issue_comment>username_3: Swarm intelligence, compound intelligence, or group intelligence may emerge as an important concept as AI develops toward higher complexity. Whether these terms should be considered synonymous is doubtful. Compound features in biology are the result of control in differentiation during the development of an organism from a single cell. Compounding in biology is a single function performed across like elements. Swarms are a result of distinct organisms operating in proximity. Swarming in biology is similarity in complex independent behaviors that appear coordinated but are generally engaged in as a defense against predators. Group intelligence may be distinct from compound intelligence in that adjacent units may be in agreement or opposition, as can be the case with intelligent beings in groups. The agreement in the group is that a variety of models is permissible. We call them opinions, but they are distinct matches of models to problems that produce different projections and suggest different selections from among the group's options. The terms general and strong appear in this question and many others, and it appears that, mainly for historical reasons, they may continue to frustrate clarity. All intelligence is general in that it learns a generality from specific experiences and then applies that generality to future scenarios to achieve objectives within those scenarios. What makes the application of the generality intelligent is that it is expected to work because it has been working for similar scenarios. All intelligence is specific in that it is limited to the scope of what generalities have been discovered.
Certain techniques are significantly more general than others because they are allegedly domain independent. We call this mathematics. Consequently, the development of artificial intelligence is not a path from specific to general but one of discovering generalities and applying them to specifics. One could say that the most general thinking of mathematics applied to the computer is the primary activity of applied artificial intelligence, and the capabilities that emerge become more sophisticated as more sophisticated mathematics is represented in working software and hardware. The generalities take on greater complexity so that they can apply to a greater number of specific scenarios. That is not a gain in strength but a gain in the breadth of potential application. Physalia physalis is a symbiotic colony of organisms of four types: the pneumatophore (or float), dactylozooids (long tentacles), gastrozooids (feeding tentacles), and gonozooids, which produce reproductive gametes. To avoid breaking the historic conception of animals, these organisms are called polyps. The venom of the colony does not come from any of those organisms but rather from another symbiotic organism, cnidocytes, which attach to the tentacles and are released under strictly controlled conditions. In addition to all this symbiosis, there are several species of fish that use the colony for shade and protection, that the colony lets swim among the tentacles, and that have formed partial immunity to the cnidocytes. These complex symbiotic networks are not fully understood, but they seem to operate well and create sustainable inter-species systems. It is not likely that the organisms adapt or remember within their lifetimes, but the DNA of each organism has evolved in a way that resembles intelligence, in that the colony and its symbionts have adapted to the ocean's surface and its biology. 
The idea that the system of the biosphere is the first example of broad intelligence is likely, in that design excellence has emerged from an evolutionary process. It is not altogether ridiculous to propose that human intelligence is a higher-speed approach in neurons to the slower speed of DNA replication in larger organisms. Because the metabolic requirements of growth slow evolution for larger organisms with a greater cell count, these larger organisms may have needed to develop a way to achieve the nimble adaptivity of their lower ancestors. Neurology facilitates the approximation of some aspects of evolution and may have been the most attainable natural solution to reacquire nimble adaptivity. Attempting to apply these various ideas to the current and ongoing development of autonomous vehicles reveals a gap in understanding. We don't yet have the mathematics developed to understand how compound, swarm, or group intelligence can be used in the laboratory to accomplish well-defined problems in controlled execution scenarios. That is probably a prerequisite to using these ideas in vehicle control. The system designs of future cars may be like a colony of independent or semi-independent components that each have a role and purpose. Compound, group, and symbiotic designs are likely to develop. The current activity of vehicles on the road or in the air near airports is like a swarm, so the application of that idea is obvious: avoid collisions. Upvotes: 1
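The arithmetic for the two candidate definitions in the branching-factor question can be checked directly; a quick sketch for the pictured tree (11 nodes, 10 branches, 5 leaves):

```python
# Average branching factor for the tree in the question:
# 11 nodes total, 5 leaves, 10 branches (in any tree, branches = nodes - 1).
nodes = 11
leaves = 5
branches = 10

internal = nodes - leaves  # the 6 nodes that actually branch

over_internal = branches / internal  # counts only nodes with children
over_all = branches / nodes          # the professor's "all nodes" definition

print(over_internal)  # 10/6 ≈ 1.667
print(over_all)       # 10/11 ≈ 0.909
```

Under the "all nodes" definition, each leaf contributes zero branches, which is why the average drops below 1; search-complexity estimates of the form O(b^d) normally use the internal-node version.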
2018/04/05
885
3,564
<issue_start>username_0: In a nutshell: I want to understand why a one-hidden-layer neural network converges to a good minimum more reliably when a larger number of hidden neurons is used. Below is a more detailed explanation of my experiment: I am working on a simple 2D XOR-like classification example to understand the effects of neural network initialization better. Here's a visualisation of the data and the desired decision boundary: [![enter image description here](https://i.stack.imgur.com/4jCnc.png)](https://i.stack.imgur.com/4jCnc.png) Each blob consists of 5000 data points. The minimal-complexity neural network to solve this problem is a one-hidden-layer network with 2 hidden neurons. Since this architecture has the minimum number of parameters possible to solve this problem (with a NN), I would naively expect that this is also the easiest to optimise. However, this is not the case. I found that with random initialization this architecture converges around half of the time, where convergence depends on the signs of the weights. Specifically, I observed the following behaviour: ``` w1 = [[1,-1],[-1,1]], w2 = [1,1] --> converges w1 = [[1,1],[1,1]], w2 = [1,-1] --> converges w1 = [[1,1],[1,1]], w2 = [1,1] --> finds only linear separation w1 = [[1,-1],[-1,1]], w2 = [1,-1] --> finds only linear separation ``` This makes sense to me. In the latter two cases the optimisation gets stuck in suboptimal local minima. However, when increasing the number of hidden neurons to values greater than 2, the network develops a robustness to initialisation and starts to reliably converge for random values of w1 and w2. You can still find pathological examples, but with 4 hidden neurons the chance that one "pathway" through the network will have non-pathological weights is larger. But what happens to the rest of the network, is it just not used then? Does anybody understand better where this robustness comes from or perhaps can offer some literature discussing this issue? 
Some more information: this occurs in all training settings/architecture configurations I have investigated. For instance, activations=Relu, final\_activation=sigmoid, Optimizer=Adam, learning\_rate=0.1, cost\_function=cross\_entropy, biases were used in both layers.<issue_comment>username_1: You grasped a bit of the answer. > > In the latter two cases the optimisation gets stuck in suboptimal local minima. > > > When you have only 2 dimensions, a local minimum exists. When you have more dimensions, this minimum gets harder and harder to reach, as its likelihood decreases. Intuitively, you have a lot more dimensions through which you can improve than if you only had 2 dimensions. The problem still exists: even with 1000 neurons you could find a specific set of weights which was a local minimum. However, it just becomes so much less likely. Upvotes: 1 <issue_comment>username_2: I may have scratched the surface of a much larger problem when I asked this question. In the meantime I have read the Lottery Ticket Hypothesis paper: <https://arxiv.org/pdf/1803.03635.pdf> Basically, if you overparameterise your network you are more likely to find a random initialisation that performs well: a winning ticket. The paper above shows that you can actually prune away the unneeded parts of the network after training. However, you need to overparameterise the network initially in order to increase the chance of randomly sampling a winning ticket configuration. I believe the case in my question above is a minimal example of this. Upvotes: 1 [selected_answer]
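The first "converges" sign pattern from the question can be checked by hand; a minimal forward-pass sketch, assuming binary 0/1 inputs, ReLU hidden units, and no biases (details the question's sign patterns leave open), shows that those signs can represent XOR exactly:

```python
def relu(x):
    return max(0.0, x)

# w1 = [[1, -1], [-1, 1]], w2 = [1, 1] -- the first "converges" pattern
w1 = [[1, -1], [-1, 1]]
w2 = [1, 1]

def forward(x1, x2):
    # hidden layer: h_i = relu(row_i . x)
    h = [relu(row[0] * x1 + row[1] * x2) for row in w1]
    # output layer: weighted sum of the hidden activations
    return w2[0] * h[0] + w2[1] * h[1]

outputs = [forward(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(outputs)  # [0.0, 1.0, 1.0, 0.0] -- the XOR truth table
```

The two hidden units compute relu(x1 - x2) and relu(x2 - x1), so their sum is |x1 - x2|, which is XOR on binary inputs; gradient descent starting from these signs only has to fine-tune magnitudes, which is one way to read the "convergence depends on the signs" observation.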
2018/04/06
348
1,359
<issue_start>username_0: When training a large neural network, how do you deal with the case where the gradients are too small to have any impact? FYI, I have an RNN, which has multiple LSTM cells and each cell has hundreds of neurons. Each training example has thousands of steps, so the RNN would unroll thousands of times. When I print out all gradients, they are very small, like 1e-20 times the variable values. Therefore the training does not change the variable values at all. BTW, I think this is not an issue of vanishing gradients. Note that the gradients are uniformly small from the beginning to the end. Any suggestion to overcome this issue? Thanks!<issue_comment>username_1: I changed the layer from tf.contrib.rnn.LSTMBlockCell to tf.contrib.rnn.LayerNormBasicLSTMCell. Then the gradients become large enough to influence the network. Upvotes: 2 <issue_comment>username_2: Vanishing gradients are a common problem in RNNs. A common way to deal with them is the method of gradient clipping (mainly, you define a maximum and/or a minimum threshold). [see here for more information](https://www.quora.com/What-is-gradient-clipping-and-why-is-it-necessary) Further information and a piece of code to implement it can be found on [SO here](https://stackoverflow.com/questions/36498127/how-to-apply-gradient-clipping-in-tensorflow) Hope it helps! Upvotes: 2
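The clipping method suggested in the second answer can be sketched framework-agnostically; `clip_by_global_norm` below is a hypothetical helper that mirrors the usual semantics, not the TensorFlow function of the same name:

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Rescale a flat list of gradients so their joint L2 norm is at most max_norm."""
    global_norm = math.sqrt(sum(g * g for g in grads))
    if global_norm <= max_norm:
        return grads  # already within the threshold; leave untouched
    scale = max_norm / global_norm
    return [g * scale for g in grads]

grads = [3.0, 4.0]          # global norm = 5.0
clipped = clip_by_global_norm(grads, 1.0)
print(clipped)              # ≈ [0.6, 0.8] -- direction preserved, norm capped at 1.0
```

Note that clipping caps *large* gradients; for the uniformly tiny gradients in the question, rescaling the activations (e.g. the layer-norm cell from the first answer) or a larger learning rate is the more relevant lever.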
2018/04/07
1,597
6,655
<issue_start>username_0: With the growing ability to cheaply create fake pictures, fake soundbites, and fake video, it becomes increasingly hard to recognize what is real and what isn't. Even now we see a number of examples of applications that create fake media for little cost (see [Deepfake](https://en.wikipedia.org/wiki/Deepfake), [FaceApp](https://en.wikipedia.org/wiki/FaceApp), etc.). Obviously, if these applications are used in the wrong way they could be used to tarnish another person's image. Deepfake could be used to make a person look unfaithful to their partner. Another application could be used to make it seem like a politician said something controversial. What are some techniques that can be used to recognize and protect against artificially made media?<issue_comment>username_1: The techniques you mention use GANs. The key idea of GANs is that you have a generator and a discriminator. The generator generates new content, the discriminator has to tell whether the content is from the real data or was generated. The discriminator is way more powerful. It should not be too hard to train a discriminator to detect fakes. Training a model which is able to pinpoint the manipulation, and understanding whether this is proof of manipulation, is harder. It is impossible to get a proof that something is not manipulated. As for the question of how you deal with photoshopped images: you look at differences in compression levels in the image. The keyword to look for is image forensics: <http://fotoforensics.com/tutorial-estq.php> Upvotes: 1 <issue_comment>username_2: I think context is important here. Using tactics like those used by Scotland Yard for over a century is probably the best way. Establishing alibis, realistic timelines, motives. In a legal setting, it would be possible to prove these images were fake using methods like this. From an I.T. perspective, it may be possible to pinpoint an origin for these images. 
If thousands of duplicitous images came from a single origin, then any images from this origin are suspect. I think, in general, we should retrain ourselves not to believe everything we see. There are so many methods for faking images that photography can no longer be considered the best evidence of an event occurring. We should not ignore all images, but instead seek outside concurrence of facts before jumping to conclusions. If all facts point to an event happening, then that photograph is likely to be real. Upvotes: 1 <issue_comment>username_3: Assuming artifacts and unnatural elements do not exist in the media in question and that the media is indistinguishable to the human eye, the only way to be able to do this is to trace back to the source of the images. An analogy can be drawn to a DoS (Denial of Service) attack, where an absurd number of requests are sent from a single IP to a single server, causing it to crash - a common solution is a honeypot, where a high number of requests from one IP is redirected to a decoy server where, even if it crashes, uptime is not compromised. Some research has been done along these lines, where [this](https://www.tdcommons.org/cgi/viewcontent.cgi?article=2911&context=dpubs_series) paper spoke about verifying the digital signature of an image or [this one](https://ieeexplore.ieee.org/abstract/document/8755865) where they proposed tampered image detection and source camera identification. Once traced back to a source, if an absurd number of potentially fake images come from a singular source, it is to be questioned. The common fear arises when we are dealing with something, on the basis of the analogy, like a DDoS (Distributed Denial of Service) attack where each fake request comes from a distributed source - network security has found ways to deal with this, but security and fraud detection in terms of AI just isn't that established. 
Essentially, well-thought-out artificial media created for a specific malicious purpose is, today, quite hard to catch - but work is currently being done on security in AI. If you're planning on using artificial media for malicious purposes, I'd say now is probably the best time. This security has been a concern for a while now. An [article](https://towardsdatascience.com/from-faceapp-to-deepfakes-3d1048713da0) written by a data scientist quotes > > Deepfakes have already been used to try to harass and humiliate women through fake porn videos. The term actually comes from the username of a Reddit user who was creating these videos by building generative adversarial networks (GANs) using TensorFlow. Now, intelligence officials are talking about the possibility of <NAME> using fake videos to influence the 2020 presidential elections. More research is being done on deepfakes as a threat to democracy and national security, as well as how to detect them. > > > Note - *I'm quite clueless about network security, all my knowledge comes from one conversation with a friend, and I thought this would be a good analogy to use here. Forgive any errors in the analogy and please correct them if possible!* Upvotes: 0 <issue_comment>username_4: The [Digital Media Forensics](https://en.wikipedia.org/wiki/Digital_forensics) (DMF) field aims to develop technologies for the automated assessment of the integrity of an image or video, so [DMF](https://farid.berkeley.edu/downloads/tutorials/digitalimageforensics.pdf) is the field you are looking for. There are several approaches in DMF: for example, those based on machine learning (ML) techniques, in particular, convolutional neural networks (CNNs). For example, in the paper [Deepfake Video Detection Using Recurrent Neural Networks](https://engineering.purdue.edu/~dgueraco/content/deepfake.pdf) (2018), <NAME> and <NAME>. 
Delp propose a two-stage analysis composed of a CNN to extract features at the frame level followed by a temporally-aware RNN to capture temporal inconsistencies between frames introduced by the deepfake tool. More specifically, they use a convolutional LSTM architecture (CNN combined with an LSTM), which is trained end-to-end, so that the CNN learns the features in the videos, which are passed to the RNN, which attempts to predict the likelihood of those features belonging to a fake video or not. Section 3 explains the creation of deepfake videos, which leads to *inconsistencies between video frames* (which are exploited in the proposed method) because of the use of images with different viewing and illumination conditions. Other similar works have been proposed. See this curated list <https://github.com/aerophile/awesome-deepfakes> for more related papers. Upvotes: 2
2018/04/07
518
2,227
<issue_start>username_0: I have two classes in the training set: one that has images with a feature and the other of images without that feature. Can there be a LOT more images with "no feature" so I can fit in all possible false positives?<issue_comment>username_1: Your question is very general, so my answer will be too: the answer is "sometimes" - it depends on the data. There can be a lot more images in one class than the other, and you can still get reasonable results. It highly depends on how much data you have of the "feature" class. When one class greatly outnumbers the other, we say that the classes are heavily imbalanced, and you need to do "class balancing". You do not want to overfit on this one class, and preferably you want the feature class to be the biggest. Another approach for CNNs is to use "dropout". For CNNs you can go a bit further: you can remove parts of the image to generate "new" images. This way you prevent overfitting of the "feature" class, whilst generating more data. I suspect that training on *all possible* false positives is impossible without overfitting the network somehow. Hope it helps and gives you some Google pointers :) --- Just FYI: in technical terms, you basically want to know whether binary CNN classification works on a heavily imbalanced dataset. Upvotes: 1 <issue_comment>username_2: I think you are actually working on one-class classification, as the other class is featureless. So you will end up classifying whether an input belongs to that single class or not. If you are OK with considering your problem as one-class classification, then I would say you actually **DON'T** need a featureless data set at all. You can just directly run your featured data (say, cat pictures) through an autoencoder and figure out a threshold value at the bottleneck (this is a bit challenging). 
Later, at test time, you can verify that input data belongs to the desired class by checking the value produced by the encoding part of the autoencoder against that threshold. If this answer doesn't satisfy you, you can google keywords like "one-class classification" or "outlier detection". I guess from there you can follow up easily. Upvotes: 0
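One concrete form of the "class balancing" mentioned in the first answer is to weight each class inversely to its frequency in the loss; a minimal sketch (the `n / (k * count)` convention used here is one common choice, not the only one):

```python
from collections import Counter

def class_weights(labels):
    """Weight each class by n_samples / (n_classes * class_count)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# 9 "no feature" images for every 1 "feature" image
labels = ["no_feature"] * 90 + ["feature"] * 10
weights = class_weights(labels)
print(weights)  # feature examples get 5.0, no_feature examples ≈ 0.556
```

Multiplying each example's loss by its class weight makes a mistake on the rare "feature" class cost as much, in aggregate, as mistakes on the abundant "no feature" class, which counteracts the classifier's pull toward always predicting the majority.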
2018/04/08
355
1,608
<issue_start>username_0: In time series prediction, we have a stream of vectors. There are different approaches for accounting for the temporal patterns between these vectors. There are two that I'm considering: an LSTM, or augmenting the feature space. What's the difference between the two? The most obvious to me is that an LSTM is more expressive and can get superior accuracy if modelled properly.<issue_comment>username_1: I just read this in a recent Bengio paper and it's pretty obvious. He says that there are zero differences between a short-term memory and an augmented feature space. However, if you want to capture long-term dependencies without blowing up the feature space, you'd want to use an LSTM because traditional approaches can't dynamically learn what to "remember". Upvotes: 0 <issue_comment>username_2: An LSTM is a neural network which learns an output y for an input x. In contrast to CNNs or MLPs, it considers a hidden state h (which is influenced by previous inputs) when your next input x is fed into the network. Augmenting the feature space is a technique which you apply prior to training your LSTM (to augment your data set in order to generate more data and let the LSTM better generalize to new data). In the field of image recognition you can rotate your images by 40 degrees to generate a new one. This process is known as data augmentation. Such methods are also applicable to time series. In summary: first, you start with augmenting your input feature space in order to improve prediction accuracy, and then you train your LSTM with the augmented training data set. Upvotes: 1
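The "augmenting the feature space" option from the question can be made concrete with lagged inputs: each training example carries the previous k values as extra features, a fixed memory window in place of the LSTM's learned one. A minimal sketch:

```python
def lag_features(series, k):
    """Turn a 1-D series into (features, target) pairs using the k previous values."""
    examples = []
    for t in range(k, len(series)):
        window = series[t - k:t]   # fixed-size history: the augmented features
        target = series[t]         # value to predict
        examples.append((window, target))
    return examples

series = [1, 2, 3, 4, 5, 6]
pairs = lag_features(series, k=3)
print(pairs[0])   # ([1, 2, 3], 4)
print(pairs[-1])  # ([3, 4, 5], 6)
```

Any memoryless regressor can now be trained on these pairs; the trade-off, as the first answer notes, is that dependencies longer than k steps are invisible unless k (and hence the feature space) grows, whereas an LSTM learns what to retain.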
2018/04/08
2,070
8,923
<issue_start>username_0: I've heard multiple times that "Neural Networks are the best approximation we have to model the human brain", and I think it is commonly known that Neural Networks are modelled after our brain. I strongly suspect that this model has been simplified, but how much? How much does, say, the vanilla NN differ from what we know about the human brain? Do we even know?<issue_comment>username_1: They are not close, not anymore! [Artificial] Neural Nets are ***vaguely inspired*** by the connections we previously observed between the neurons of a brain. Initially, there probably was an intention to develop ANNs to approximate biological brains. However, the modern working ANNs whose applications we see in various tasks are not designed to provide us with a functional model of an animal brain. As far as I know, there is no study claiming they have found something new in a biological brain by looking into the connections and weight distributions of, let's say, a CNN or RNN model. Upvotes: 3 <issue_comment>username_2: We all know that artificial neural networks (ANNs) are **inspired by** but most of them are only **loosely based on** the biological neural networks (BNNs). We can analyze the differences and similarities between ANNs and BNNs in terms of the following components. ### Neurons The following diagram illustrates a biological neuron (screenshot of an image from [this book](https://opentextbc.ca/introductiontopsychology/chapter/3-1-the-neuron-is-the-building-block-of-the-nervous-system/)). [![enter image description here](https://i.stack.imgur.com/VSOKk.jpg)](https://i.stack.imgur.com/VSOKk.jpg) The following one illustrates a typical artificial neuron of an ANN (screenshot of figure 1.14 of [this book](http://page.mi.fu-berlin.de/rojas/neural/chapter/K1.pdf)). 
[![enter image description here](https://i.stack.imgur.com/ASLAg.png)](https://i.stack.imgur.com/ASLAg.png) ### Initialization In the case of an ANN, the initial state and weights are assigned randomly, while for BNNs, the strengths of connections between neurons and the structure of connections don't start as random. The initial state is genetically derived and is the byproduct of evolution. ### Learning In BNNs, learning comes from the interconnections between myriad neurons in the brain. These interconnections change configuration when the brain experiences new stimuli. The changes result in new connections, strengthening of existing connections, and removal of old and unused ones. ANNs are usually trained from scratch using a fixed topology (remember that the topology changes in the case of BNNs), although the topology of an ANN can also change (for example, take a look at NEAT or some continual learning techniques), which depends on the problem being solved. The weights of an ANN are randomly initialized and adjusted via an optimization algorithm. ### Number of neurons Another difference (although this difference is becoming ever smaller) is in the number of neurons in the network. A typical ANN consists of hundreds, thousands, millions, and, in some exceptional cases (e.g. GPT-3), billions of neurons. The BNN of the human brain consists of billions. This number varies from animal to animal. Further reading --------------- You can find more information [here](https://news.sophos.com/en-us/2017/09/21/man-vs-machine-comparing-artificial-and-biological-neural-networks/) or [here](https://ai.stackexchange.com/q/258/2444). Upvotes: 5 [selected_answer]<issue_comment>username_3: The common statement that Artificial Neural Networks are inspired by the neural structure of brains is only partially true. It is true that <NAME>, <NAME>, <NAME>, and others began the path toward practical AI by developing what they then called the electronic brain. 
It is also true that * artificial networks have functions called activations, * are wired in many-to-many relationships like biological neurons, and * are designed to learn an optimal behavior, but that is the extent of the similarity. Cells in artificial networks such as MLPs (multilayer perceptrons) or RNNs (recurrent neural networks) are not like cells in brain networks. The perceptron, the first software stab at arrays of things that activate, was not an array of neurons. It was the application of basic feedback involving gradients, which had been in common use in engineering ever since <NAME>'s centrifugal governor was mathematically modeled by Gauss. Successive approximation, a principle that had been in use for centuries, was employed to incrementally update an attenuation matrix. The matrix was multiplied by the vector feeding an array of identical activation functions to produce output. That's it. The projection in a second dimension to a multi-layer topology was made possible by the realization that the Jacobian could be used to produce a corrective signal that, when distributed as negative feedback to the layers appropriately, could tune the attenuation matrix of a sequence of perceptrons, and the network as a whole would converge upon satisfactory behavior. In the sequence of perceptrons, each element is called a layer. The feedback mechanism is now called backpropagation. The mathematics used to correct the network is called gradient descent because it is like a dehydrated blind man using the gradient of the terrain to find water, and the issues of doing that are similar too. He might find a local minimum (low point) before he finds fresh water and converge on death rather than hydration. 
The newer topologies are the additions of already existing convolution work used in digital image restoration, mail sorting, and graphics applications to create the CNN family of topologies, and the ingenious use of what is like a chemical equilibrium from first-year chemistry to combine optimization criteria, creating the GAN family of topologies. Deep is simply a synonym for numerous in most AI contexts. It sometimes implies complexity in the higher-level topology (above the vector-matrix products, the activations, and the convolutions). Active research is ongoing by those who are aware of how different these deep networks are from what neuroscientists discovered decades ago in mammalian brain tissue. And there are more differentiators being discovered today as learning circuitry and neuro-chemistry in the brain is investigated from the genomic perspective. * Neural plasticity ... change in circuit topology due to dendrite and axon growth, death, redirection, and other morphing * Topological complexity ... large numbers of axons crisscross without interacting and are deliberately shielded from cross-talk (independent) most likely because it would be disadvantageous to let them connect [note 1] * Chemical signaling ... mammalian brains have dozens of neuro-transmitter and neuro-regulation compounds that have regional effects on circuitry [note 2] * Organelles ... living cells have many substructures and it is known that several types have complex relationships with signal transmission in neurons * Entirely different form of activation ... activations in common artificial neural nets are simply functions with ordinal scalars for both range and domain ... 
mammalian neurons operate as a function of both amplitude and relative temporal proximity of incoming signals [note 3] [1] Topology is ironically both a subset of architecture (in the fields of building design, network provisioning, WWW analysis, and semantic networks), yet at the same time topology is, much more than architecture, at the radical center of both AI mathematics and effective actualization in control systems [2] The role of chemistry may be essential to learning social and reproductive behavior that interrelates with DNA information propagation, linking in complex ways learning at the level of an ecosystem and the brain. Furthermore, long-term and short-term learning divide the brain's learning into two distinct capabilities too. [3] The impact of the timing of incoming signals on biological neuron activation is understood to some degree, but it may impact much more than neuron output. It may impact plasticity and chemistry too, and the organelles may play a role in that. **Summary** What machine learning libraries do simulates the human brain about as much as Barbie and Ken dolls simulate a real couple. Nonetheless, remarkable things are arising in the field of deep learning, and it would not surprise me if autonomous vehicles become fully autonomous in our lifetimes. I would not recommend that any student become a developer either. Computers will probably code much better than humans and orders of magnitude faster, and possibly soon. Some tasks are not of the kind that biology has evolved to do, and computers can exceed human capabilities after only a few decades of research, eventually exceeding human performance by several orders of magnitude. Upvotes: 2
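The perceptron mechanism described in this answer - an attenuation (weight) matrix applied to an input vector, identical activation functions, and successive approximation driven by an error signal - fits in a few lines. A sketch on the linearly separable AND function (the learning rate and pass count are arbitrary illustrative choices, not from the answer):

```python
def step(x):
    return 1 if x >= 0 else 0

# AND truth table; inputs carry a constant 1 so the third weight acts as a bias
data = [((0, 0, 1), 0), ((0, 1, 1), 0), ((1, 0, 1), 0), ((1, 1, 1), 1)]
w = [0.0, 0.0, 0.0]  # the "attenuation matrix" is a single row here
lr = 0.1             # arbitrary learning rate

for _ in range(100):  # far more passes than AND needs; guarantees convergence
    for x, target in data:
        out = step(sum(wi * xi for wi, xi in zip(w, x)))
        err = target - out  # successive approximation: nudge weights by the error
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]

preds = [step(sum(wi * xi for wi, xi in zip(w, x))) for x, _ in data]
print(preds)  # [0, 0, 0, 1] -- the AND function has been learned
```

This is the whole single-layer story the answer summarizes with "That's it": no dendrites, neurotransmitters, or plasticity, just a weight update proportional to the error.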
2018/04/09
827
3,408
<issue_start>username_0: The problem to solve is non-linear regression of a non-linear function. My actual problem is to model the function "find the max over many quadratic forms": `max(w.H.T * Q * w)`, but to get started and to learn more about neural networks, I created a toy example for a non-linear regression task, using Pytorch. The problem is that the network never learns the function in a satisfactory way, even though my model is quite large with multiple layers (see below). Or is it not large enough or too large? How can the network be improved or maybe even simplified to get a much smaller training error? I experimented with different network architectures, but the result is never satisfactory. Usually, the error is quite small within the input interval around 0, but the network is not able to get good weights for the regions at the boundary of the interval (see plots below). The loss does not improve after a certain number of epochs. I could generate even more training data, but I have not yet understood completely how the training can be improved (tuning parameters such as batch size, amount of data, number of layers, normalizing input (output?) data, number of neurons, epochs, etc.) My neural network has 8 layers with the following number of neurons: `1, 80, 70, 60, 40, 40, 20, 1`. For the moment, I do not care too much about overfitting; my goal is to understand why a certain network architecture/certain hyperparameters need to be chosen. Of course, avoiding overfitting at the same time would be a bonus. I am especially interested in using neural networks for regression tasks or as function approximators. In principle, my problem should be approximable to arbitrary accuracy by a neural network with a single hidden layer, according to the universal approximation theorem, isn't this correct? 
[![Difference between the trained model and original data](https://i.stack.imgur.com/oqXXN.png)](https://i.stack.imgur.com/oqXXN.png) [![Loss value vs iterations](https://i.stack.imgur.com/l2U6j.png)](https://i.stack.imgur.com/l2U6j.png) [![Difference between the trained model and original data ](https://i.stack.imgur.com/sHAct.png)](https://i.stack.imgur.com/sHAct.png)<issue_comment>username_1: Neural networks learn badly with large input ranges. Scale your inputs to a smaller range, e.g. -2 to 2, and convert to/from this range consistently to represent your function's interval. Upvotes: 2 [selected_answer]<issue_comment>username_2: I only have one piece of good news... There is nothing wrong with your code. Neural networks tend to do that, especially with a really complex function. * Increasing the number of neurons will not change how the error is distributed. * There are better loss functions for some cases, but switching is not a really effective solution. * Neural networks are really good at managing noise, so they are good at ignoring minorities; there is a common saying that "ANNs are racist". I recommend you plot a histogram of *DataSet* vs *Output Value*, to see whether you have much more data in the central region than near the boundary. If you can generate more data at will, generate more values in the specific zones with larger errors. This will increase their contribution to the loss and force the backpropagation algorithm to improve in those areas. More information on your optimization algorithm may be useful. But, like I said, everything seems perfectly normal. Upvotes: 0
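The input-scaling advice in the accepted answer can be sketched as a small helper. This is a minimal illustration, not code from the question: the training interval `[-10, 10]` below is a hypothetical example, and you would substitute the range your own function is sampled on.

```python
def make_scaler(lo, hi, target_lo=-2.0, target_hi=2.0):
    """Build forward/inverse affine maps between [lo, hi] and [target_lo, target_hi]."""
    scale = (target_hi - target_lo) / (hi - lo)

    def forward(x):   # raw input -> network input
        return (x - lo) * scale + target_lo

    def inverse(y):   # network input -> raw input
        return (y - target_lo) / scale + lo

    return forward, inverse

# Hypothetical training interval [-10, 10] mapped onto [-2, 2]
forward, inverse = make_scaler(-10.0, 10.0)
print(forward(-10.0), forward(0.0), forward(10.0))  # -2.0 0.0 2.0
```

The same pair of maps should be applied consistently at training time and at prediction time, so that the network always sees inputs in the small range.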
2018/04/09
417
1,897
<issue_start>username_0: YouTube has a huge amount of videos, many of which also containing various spoken languages. This would presumably provide something like the data that a "challenged" baby would experience - "challenged" meaning a baby without arms or legs (unfortunately many people are born that way). Would this not allow unsupervised learning in a deep learning system that has both vision and audio capabilities? The neural network would presumably learn correlations between words and images, and could perhaps even learn rudimentary language skills, all without human supervision. I believe that the individual components to do this already exist. Has this been tried, and if not, why?
2018/04/09
1,191
5,601
<issue_start>username_0: I'm studying reinforcement learning. It seems that "state" and "observation" mean exactly the same thing. They both capture the current state of the game. Is there a difference between the two terms? Is the observation maybe the state after the action has been taken?<issue_comment>username_1: Sometimes observation and state overlap completely, which is convenient. However, there is no reason to expect it in all cases, and that's where interesting problems occur. Reinforcement learning theory is based on [Markov Decision Processes](https://en.wikipedia.org/wiki/Markov_decision_process). This leads to a *formal* definition of state. Most importantly, the state must have the [Markov property](https://en.wikipedia.org/wiki/Markov_property). This means that, for RL to work according to theory, knowing the state means that you know *everything knowable* that could determine the response of the environment to a specific action. Everything that remains must be purely stochastic and unknowable in principle until after the action is resolved. Systems like deterministic or probability-driven games, and computer-controlled simulations can be designed to have easily observable states that have this property. Games with this trait are often called "games of perfect information", although you may have unknown information, provided it is revealed in a purely stochastic manner. In practice, real-world interactions contain far too much detail for any observation to be a true state with the Markov property. For instance, consider the [inverted pendulum environment](https://en.wikipedia.org/wiki/Inverted_pendulum), a classic RL toy problem. A real inverted pendulum would behave differently depending on its temperature, which could vary along its length. The joint and actuators might be sticky. Rotations and movement will alter temperature and friction, etc. 
However, an RL agent will typically only consider current motion and position of the trolley and pendulum. In this case, the observation of 4 traits is usually good enough, and a state based on this *almost* has the Markov property. There are also problems where observations are not enough to make usable state data for an RL system. The Deep Mind Atari DQN paper had examples of a couple of these. The first example is that a single frame lost data about motion. This could be addressed by taking four consecutive frames and combining them to make a single state. It could be argued that each frame is an observation, and that four observations had to be combined in order to construct a more useful state (although this could be put aside as just semantics). The second example in Atari DQN is that the pixel observations did not include data that the game was tracking but that was not visible on screen. Games with large scrolling maps are a weakness of the Atari-playing DQN, because its state has no memory of screens other than the four used for movement. An example of such a game, where Deep Mind's player did much worse than a human player, is Montezuma's Revenge, where to progress it is necessary to remember some off-screen locations. There are ways to address knowledge that there is unobserved but relevant state in a problem. The general framework for describing the problem is [Partially Observable Markov Decision Processes (POMDPs)](https://en.wikipedia.org/wiki/Partially_observable_Markov_decision_process). Workable solutions include adding explicit memory or "belief state" to the state representation, or using a system such as an RNN in order to internalise the learning of a state representation driven by a sequence of observations. Upvotes: 4 <issue_comment>username_2: There is a subtle but important difference between `observation` and `state`. The `observation` is the information that the agent is gathering from the environment. This could be data coming from sensors. 
It could be noisy, or contain redundant information. It could also be incomplete and not contain enough to capture all the information needed to build the `state`. (In this case the agent might be able to use what it already knows about the current state and the new observation data to build the new state.) The `state` is the information that describes all the relevant aspects of the environment that the agent's policy needs to make a decision. We also like to make a distinction between the `environment state`, which could be huge and impossible to capture fully (especially in scenarios where the agent is interacting with the physical world), and the `agent state`, which is the distilled version that only captures the important information the agent needs to make a decision. In simple settings, the observation could be equivalent to the state. In a fully-observable board game (e.g. checkers) the position of the pieces is the observation and also the state. In more complex scenarios this is not the case. When using Deep RL or most of the modern RL variants, the state is often not explicitly encoded. It is a feature of these modern algorithms to use function approximation to bridge the observation inputs directly into the policy or value function. This is especially useful if the number of unique states is huge, and you also want your policy to be robust enough to behave correctly even for unseen states that are similar (but not identical) to the ones it was trained on. This is why in frameworks such as [Gymnasium](https://gymnasium.farama.org/) you see observations and not states, which is probably one of the sources of the confusion between the two. Upvotes: 1
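The frame-stacking trick described above (combining the most recent observations into a state, because a single frame loses motion information) can be sketched with a small helper. The class and its names are hypothetical illustrations, not taken from the DQN paper or any library:

```python
from collections import deque

class FrameStack:
    """Builds an agent 'state' from the last k raw observations."""

    def __init__(self, k=4):
        self.k = k
        self.frames = deque(maxlen=k)  # oldest frames drop out automatically

    def reset(self, obs):
        # Pad the stack with copies of the first observation of an episode.
        self.frames.clear()
        for _ in range(self.k):
            self.frames.append(obs)
        return tuple(self.frames)

    def step(self, obs):
        # Append the newest observation; the tuple of k frames is the state.
        self.frames.append(obs)
        return tuple(self.frames)

stacker = FrameStack(k=4)
print(stacker.reset("f0"))   # ('f0', 'f0', 'f0', 'f0')
print(stacker.step("f1"))    # ('f0', 'f0', 'f0', 'f1')
```

In a real pixel-based agent the observations would be preprocessed image arrays rather than strings, and the stacked tuple would be fed to the value network as one input.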
2018/04/10
1,247
5,791
<issue_start>username_0: I want to be able to input a block of text and then have it guess a string within a predefined range (i.e. a string that starts with three letters and ends with five numbers like "XXX12345", etc). Ideally, the string it will be guessing will be somewhere in the block of text, but sometimes it won't be. I have been struggling where to begin on this or if I am even going in the right direction for considering Machine/Deep learning to try to do this. Help!
2018/04/11
1,349
6,154
<issue_start>username_0: There are two textbooks that I most love and am most afraid of in the world: *Introduction to Algorithms by Cormen* et al. and *Artificial Intelligence: A Modern Approach by Norvig* et al. I have started the "AI: A Modern Approach" more than once, but the book is so dense and full of theory that I get discouraged after a couple of weeks and stop. **I am looking for a similar AI book but with an equal emphasis on theory and practice.** Some examples of what I am looking for: * The Elements of Statistical Learning by Tibshirani et al. **(detailed theory)** * An Introduction to Statistical Learning: With Applications in R by Tibshirani et al. **(theory+practical)** * Digital Image Processing by Gonzalez et al. **(detailed theory)** * Digital Image Processing Using MATLAB by Gonzalez et al. **(theory+practical)**
2018/04/13
810
3,080
<issue_start>username_0: If we model the game '2048' using a max-min game tree, what is the maximal path from a start state to a terminal state? (Assume the game ends only when the board is full.) This is one of the sub-questions that should prepare us to actually modeling the game as a max-min game tree. However, I'm failing to understand the question. Is it actually the path to receiving 131072 as an endgame?<issue_comment>username_1: To model 2048 (or any problem) for search, you need only a few pieces of information. Note first, though, that 2048 is not suitable for minimax, because there's only one player! Instead, you can treat this as a [Markov decision process](https://en.wikipedia.org/wiki/Markov_decision_process). The techniques to solve it are pretty similar though. Basically, you'll do search for one player, and insert "chance" nodes at each ply of the search. The value of a chance node is the expected value of its children. Note that this will reduce the effectiveness of pruning, so it might mean the problem is not tractable for search-based approaches. 1. What does an *end state* look like? Usually you have some function *G(s)* that accepts a state s, and produces true if and only if it's an end state. In 2048, end states would be states where the player loses (no moves possible), or where the player wins (a 131072 tile is present), so this should be fairly easy to write. 2. What are the *payoffs*? This is usually given by a utility function *U(e)* that accepts an end-state e, and produces a numeric value indicating the utility the player will receive. 3. What actions can the player take in each state? In 2048, these are always the same (up, down, left and right). 4. How are new states generated from old states and player actions? (In this case, the tiles slide according to the rules of the game, and then a new tile is inserted at a random empty location.) 
Although search *might* work here, since 2048 is a relatively simple MDP, you might be happier using techniques from [reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning), which were specifically designed for this kind of problem. [Russell & Norvig](https://rads.stackoverflow.com/amzn/click/0136042597) have a good set of chapters on both approaches (14-17). Upvotes: 1 <issue_comment>username_2: 2048 is still a two-player game, just not in the minimax sense. The computer of the original game just plays moves randomly rather than adversarially (it is possible to program a much harder variant of 2048 whose computer does play adversarially). We can therefore use a variant of minimax called expectimax, which considers the expected value over all possible random computer moves instead of minimizing the score as in adversarial play. Since we cannot evaluate the entire state space, we must limit the depth of our search and estimate the expected value using heuristics, described in the links below, instead of computing it exactly. See <http://azaky.github.io/2048-AI/paper.pdf> for more explanations and experimental results, and <https://stackoverflow.com/a/22498940/3163618> for further discussion. Upvotes: 0
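The expectimax recursion both answers describe (max nodes where the player picks a move, chance nodes averaging over random tile placements) can be sketched generically. The game-specific callbacks here (`moves`, `chance_outcomes`, `heuristic`, `is_terminal`) are hypothetical placeholders that a real 2048 implementation would supply:

```python
def expectimax(state, depth, moves, chance_outcomes, heuristic, is_terminal):
    """Depth-limited expectimax.

    moves(state)           -> states reachable by a player move (before the
                              random tile is placed)
    chance_outcomes(state) -> list of (probability, state) pairs for the
                              random event
    heuristic(state)       -> estimated value at the depth cutoff
    """
    if depth == 0 or is_terminal(state):
        return heuristic(state)
    successors = moves(state)
    if not successors:      # no legal move: treat as terminal
        return heuristic(state)
    best = float("-inf")
    for nxt in successors:  # max node: the player picks the best move
        # chance node: expected value over the random outcomes
        value = sum(p * expectimax(s, depth - 1, moves, chance_outcomes,
                                   heuristic, is_terminal)
                    for p, s in chance_outcomes(nxt))
        best = max(best, value)
    return best

# Toy check on a trivial "game" where states are just numbers
value = expectimax(
    0, 1,
    moves=lambda s: [s + 1, s + 2],
    chance_outcomes=lambda s: [(0.5, s), (0.5, s + 1)],
    heuristic=lambda s: s,
    is_terminal=lambda s: False,
)
print(value)  # 2.5
```

Note that, unlike minimax, expectimax cannot use standard alpha-beta pruning at chance nodes, which is one reason the answers suggest limiting the depth and relying on heuristics.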
2018/04/14
2,581
11,046
<issue_start>username_0: As I've thought about AI, and what I understand of the problems that we face in the creation of it, I've noticed a recurring pattern: we always seem to be asking ourselves, "how can we better simulate the brain?" Why are we so fascinated with **simulating** it? Isn't our goal to create intelligence, not create intelligence in a **specific medium**? Isn't growing and sustaining living brains more in line with our goals, albeit a bit of an ethical controversy? Why is this exchange's description: "For people interested in conceptual questions about life and challenges in a world where 'cognitive' functions can be mimicked in a **purely digital environment**?" To condense these feelings into a more concise question: Why are we trying to create AI in a computer?<issue_comment>username_1: Being the OP, I have already put some thought into this question. I think that computers are an attractive medium for simple AI because they are easily available and researchers are already familiar with them. In addition, science fiction writers of the last century were hopeful of the capabilities of computers and placed in our culture a dream of computer AI. But I also feel that perhaps other less explored fields would be better suited to the creation of strong AI. In particular, thinking about the nature of biology excites me. But as I understand it, we still know so little about how biology works, let alone how to control it. Yet I feel this is where we should be focusing. Researchers know that current computing hardware has limitations. GPUs are better suited than CPUs. Some CPUs have new hardware designed for AI computations. I suspect that this realization of the inadequacy of conventional hardware will continue until our hardware is nearly identical to the biology we are trying to simulate. After all, what simulation could ever be better than what it is trying to simulate? 
Upvotes: 0 <issue_comment>username_2: I don't think AI is simulating brain function; it is not even close. Do you know how the nervous system works? How neurons transmit signals with action potentials? Pathway analysis? Splicing junctions? AI is not about simulating the brain at all. We don't simulate biological pathways, we don't simulate alternative splicing, we don't have proteins in our models. Instead, AI is a field with tons of mathematics. You give it some data and try to extract complicated non-linear patterns. Upvotes: 2 <issue_comment>username_3: There are a number of reasons why a simulated brain might be better than creating a real brain. One reason is that computers can live indefinitely (kind of). Brains may not be able to live forever, and there might not be a way to transfer information from one brain to another. One of the principal advantages of a computer, then, is that it could have more experience than any brain could have in its lifetime. Another reason is that there are a lot of things we don't know about the brain. Even if we were able to replicate the brain, we would have a hard time using it in the way that we want until we fully understand it. The simulated brain doesn't have this problem. We know exactly how artificial neural networks develop, and thus there is not as much that we don't understand. Those answers tell you why we might want a digital brain, but your question also seems to ask why we study the digital brain over a biological brain. This seems to imply that we can't do both, but in fact there are many research groups doing work in areas that contribute to growing living brains (the Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG), the Medical Research Council in the UK, etc.). Upvotes: 1 <issue_comment>username_4: Human intelligence is very general / broad in its scope. This is self-evident, and whatever AI ends up being, we'd like it to be a general problem solver as well (cf. Simon and Newell). 
Taking liberal interpretations of your question... > > Why AI in a computer? > > > Computers, to the extent that we can frame problems in general as solvable computational problems, are also general problem solvers. Whether this is actually the case (can you compute meaning or feels?) is up for debate (cf. computational functionalism, hyper-computation), but it is part of the artificial intelligence project to make a claim on this statement. > > Why do we think a computational framework brings us any closer to an understanding of cognition / consciousness? > > > Good question, and frankly there is no good answer to that aside from "it's the best thing we've got". TL;DR "computational functionalism": a lot of the literature in psychology and philosophy *seems* to converge towards an understanding of cognition as "computational" (as in information processing: the V1 stream in the brain processes "early visual information") and functional (goal-directed, grounded on "meaning", ex: "i scratch itch because itchy", as opposed to "i am moving atoms"). However, the two theories don't mesh together well (cf. the Chinese Room Argument, and the many other arguments in a similar flavour) despite their independent successes in the theory of mind. Why this is the case nobody quite knows... > > Why not AI in something that isn't a computer? > > > I don't know, but to the extent that our understanding of the world is grounded in math, it being in a computer is sufficient anyway. Maybe there are other paradigms of understanding the world though. 
Fingers crossed > > Why are we asking, “How can we simulate the brain?” > > > Because it's the best tentative understanding we have of an "intelligent faculty", though it should be noted that various methods in machine learning don't seem to be directly inspired by biological implementation (kNN and statistical methods, as opposed to neural nets). Further reading: <http://www.scaruffi.com/nature/mach01.html> Upvotes: 3 [selected_answer]<issue_comment>username_5: I think a worthwhile extension of this line of thought is "why not both?" I do not believe there is anything preventing approaching the problem from both sides at once. There is a great deal of research on both sides (biological research and computational research), but considerably less on the integration of the two (although there certainly is some, such as in the development of modern prosthetics that allow some degree of control). Given the adaptability of the human brain in terms of adjusting its own structure, the most expedient approach may be to consider what it would take to create a non-biological medium that biological neurons could interface with sufficiently for the brain to essentially "program" them in the same manner it does when repairing itself with biological neurons. Leave the hard work to the thing that already has the blueprint. Or in other words, the Ship of Theseus but with brain cells. Not that such a task would be anywhere close to simple or easy, given our still lacking knowledge of neurological structures and the difficulties in building a non-biological interface capable of the required sort of communications and adjustments that biological neurons can undergo, at a size scale that would be practical. I wish I could point to some research related to this, but I don't know about any specific research papers, although I know it's not a completely untouched subject. 
Upvotes: 1 <issue_comment>username_6: For what it's worth (and having done a bit of study on this and being really interested in the topic): the answer seems to go back to the beginnings of AI and even earlier (Turing's 1936 paper in which he introduces what's now called the Turing machine). <NAME>'s proposal for the 1956 Dartmouth College summer workshop on "Artificial Intelligence" (which name introduced the term "Artificial Intelligence") in part says: > > "The study [workshop] is to proceed on the basis of the conjecture > that every aspect of learning or any other feature of intelligence can > in principle be so precisely described that a machine can be made to > simulate it." > > > This references Turing's 1936 paper, where a machine or natural system is described and the description is run in a computer. To simulate is to quite precisely describe a system, then run the description (transformed a bit – but the result is still a description) in a computer. The description is the program. The description needs to be precise, as indicated in the Church-Turing thesis. So the idea of simulation is core to the computational theory of what a digital computer can or might do. So it's also core to the computational theory of mind (the organic brain being a natural system), and hence to AI. That said, it's obviously a crazy idea to try to quite precisely describe the organic machine that is a human brain. I mean, how many neurons? 100 billion. Quite precisely describe each and every single one of these, *and* each and every one of the up to 10,000 connections that connect to each and every single neuron. Crazy with a capital C. And to suppose there are degrees of simulation of the brain, or that the mind is somehow a simplification of the brain, or that the description can be in higher-level concepts, not neurological ones, is just to admit that the description is not quite precise. An adequate simulation of a brain would be terribly detailed. 
So why do we hear so much about AI trying to simulate the brain? Answer: AI has no other word to express what it does. In my view, AI ought to be trying to work out the data-processing principles of the organic brain, not trying to describe the causation of the brain. AI doesn't know the principles of perception or the principles of general knowledge. It's incredible to say this – seeing as both are so absolutely fundamental to human intelligence. But AI doesn't know the principles. It ought to be trying to work them out. Then – once discovered – to work out how these principles could be realised in a computer. You suggest that there's a binary choice between AI trying to get a computer to simulate the organic brain, and trying to grow organic brains in a dish. But there's actually a third option. Computers can do things other than simulate (i.e., other than compute). Maybe these other things might include embodying the principles of organic brains. There are two really big areas here: (1) what are the principles of intelligence? (2) what are the non-computational things computers can do? You ask why AI is concerned with the digital environment rather than, say, growing organic brains in a vat. But AI is basically an engineering project (building something with a designed causality), and even though AI knows only a little about the causality of what it's trying to build, the digital computer seems to be the only viable platform, at present, with enough individually addressable memory locations and processor speed to cope with the semantic structures that would result from an adequate sensory interaction with the environment. Upvotes: 1
2018/04/16
2,557
10,894
<issue_start>username_0: There is a popular story regarding the [back-of-the-envelope calculation](https://en.wikipedia.org/wiki/Back-of-the-envelope_calculation) performed by a British physicist named <NAME>. He used [dimensional analysis](http://www.atmosp.physics.utoronto.ca/people/codoban/PHY138/Mechanics/dimensional.pdf) to estimate the power released by the explosion of a nuclear bomb, simply by analyzing a picture that was released in a magazine at the time. I believe many of you know some nice back-of-the-envelope calculations performed in machine learning (more specifically neural networks). Can you please share them?<issue_comment>username_1: Being the OP, I have already put some thought into this question. I think that computers are an attractive medium for simple AI because they are easily available and researchers are already familiar with them. In addition, science fiction writers of the last century were hopeful of the capabilities of computers and placed in our culture a dream of computer AI. But I also feel that perhaps other less explored fields would be better suited to the creation of strong AI. In particular, thinking about the nature of biology excites me. But as I understand it, we still know so little about how biology works, let alone how to control it. But I feel this is where we should be focusing. Researchers know that current computing hardware has limitations. GPUs are better suited than CPUs. Some CPUs have new hardware designed for AI computations. I suspect that this realization of the inadequacy of conventional hardware will continue until our hardware is nearly identical to the biology we are trying to simulate. After all, what simulation could ever be better than what it is trying to simulate? Upvotes: 0 <issue_comment>username_2: I don’t think AI is simulating the brain functions, and it is not even close. Do you know how the nervous system works? How the neurons transmit signals with action potentials? Pathway analysis? 
Splicing junctions? AI is not about simulating the brain at all. We don’t simulate biological pathways, we don’t simulate alternative splicing, we don’t have proteins in our models. Instead, AI is a field with tons of mathematics. You give it some data and try to extract a complicated nonlinear pattern. Upvotes: 2 <issue_comment>username_3: There are a number of reasons why a simulated brain might be better than creating a real brain. One reason is computers can live indefinitely (kind of). Brains may not be able to live forever, and there might not be a way to transfer information from one brain to another. One of the principal advantages of a computer, then, is that it could have more experience than any brain could have in its lifetime. Another reason is that there are a lot of things we don't know about the brain. Even if we were able to replicate the brain, we would have a hard time using it in the way that we want until we fully understand it. The simulated brain doesn't have this problem. We know exactly how artificial neural networks develop, and thus there is not as much that we don't understand. Those answers tell you why we might want a digital brain, but your question seems to also ask why study the digital brain over a biological brain? This seems to imply that we can't do both, but in fact there are many research groups doing work in areas that contribute to growing living brains (the Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG), the Medical Research Center in the UK, etc.). Upvotes: 1 <issue_comment>username_4: Human intelligence is very general / broad in its scope. This is self-evident, and whatever AI ends up being, we'd like it to be a general problem solver as well (cf. Simon and Newell). Taking liberal interpretations of your question... > > Why AI in a computer? > > > Computers, to the extent that we can frame problems in general as a solvable computational problem, are also general problem solvers. 
Whether this is actually the case (can you compute meaning or feels?) is up for debate (cf. computational functionalism, hyper-computation), but it is part of the artificial intelligence project to make a claim on this statement. > > Why do we think a computational framework brings us any closer to an understanding of cognition / consciousness? > > > Good question, and frankly there is no good answer to that aside from "it's the best thing we've got". TL;DR "computational functionalism": a lot of the literature in psychology and philosophy *seems* to converge towards an understanding of cognition as "computational" (as in information processing: the V1 stream in the brain processes "early visual information") and functional (goal directed, grounded on "meaning", ex: "i scratch itch because itchy", as opposed to "i am moving atoms"). However the two theories don't mesh together well (cf. Chinese Room Argument, and the many other arguments in a similar flavour) despite their independent successes in the theory of mind. Why this is the case nobody quite knows... > > Why not AI in something that isn't a computer? > > > I don't know, but to the extent that our understanding of the world is grounded in math, then it being in a computer is sufficient anyway. Maybe there are other paradigms of understanding the world though.
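As one concrete illustration of the kind of estimate the question at the top of this record asks for: counting parameters and weight memory of a small network by hand. The layer shapes below are invented for illustration, not taken from any particular paper.

```python
# Back-of-the-envelope estimate: parameters and float32 memory of a tiny CNN.
# Layer shapes are illustrative, not from any specific architecture.

def conv_params(c_in, c_out, k):
    """Weights + biases of a k x k convolution."""
    return c_out * (c_in * k * k + 1)

def dense_params(n_in, n_out):
    """Weights + biases of a fully connected layer."""
    return n_out * (n_in + 1)

layers = [
    conv_params(3, 64, 3),           # 64 * (3*3*3 + 1)    = 1,792
    conv_params(64, 128, 3),         # 128 * (64*3*3 + 1)  = 73,856
    dense_params(128 * 8 * 8, 256),  # 256 * (8192 + 1)    = 2,097,408
    dense_params(256, 10),           # 10 * (256 + 1)      = 2,570
]

total = sum(layers)
print(f"total parameters: {total:,}")             # total parameters: 2,175,626
print(f"float32 size: {total * 4 / 1e6:.1f} MB")  # float32 size: 8.7 MB
```

The useful pattern is that fully connected layers dominate the count: one dense layer here holds ~96% of the parameters, which is why such estimates often reduce to "count the biggest matrix".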
2018/04/18
642
2,507
<issue_start>username_0: I have order data, here's a sample:

```
Ninety-six (96) covered pans, desinated mark cutlery.
5 vovered pans by knife co. (SEE SCHEDULE A FOR NUMBERS).
757 SOUP PANS
115 10-quart capacity pots.
Thirteen (13), 30 mm thick covered pans.
```

I have over 50k rows of data such as this. In a perfect world, the above would need to be tabulated as such:

```
count, type
96, covered pan
5, covered pan
757, soup pan
115, pot
13, covered pan
```

Could machine learning be the correct approach for a problem such as this?<issue_comment>username_1: Yes, a variant of NLP processing could help find the correct number to extract and type of object in this data. Compared to the spreadsheet, the raw text data is ambiguous without understanding language to a reasonable depth, and without knowing the business context in order to extract the relevant information. For instance, you are expecting to extract "soup pan" and "covered pan", but not "capacity pot". Also, parts of phrases such as "30 mm" or "10-quart" are lower-importance qualifiers, not specifications of a quantity of something. The current state of the art for extracting this kind of data would be a [bidirectional LSTM (a type of Recurrent Neural Network)](https://en.wikipedia.org/wiki/Bidirectional_recurrent_neural_networks). You would likely get it to flag the parts of each entry that were relevant to the tabulated data you wanted to extract, then feed those into a simpler stage that put them into the spreadsheet. However, there are two caveats:

* You need a lot of correctly-labelled training data to get reasonable performance. Using a word embedding layer, such as word2vec or GloVe, should significantly reduce the amount of training data required, but may require a careful pre-processing stage, and may be less useful when you have a lot of jargon in your data.
* Performance is never perfect, and the system can still make stupid mistakes, because it does not truly understand the text it is dealing with.
That applies to all ML approaches to this problem, and likely also to coding up an "expert system", although it may be easier to write the expert system to recognise when it has failed and ask for help. Upvotes: 1 <issue_comment>username_2: Maybe a simple regex could solve this problem, but you would probably need to supervise it, approving and rejecting anomalies. See this example; I had the same problem some time ago: <https://stackoverflow.com/questions/50689935/regex-like-commands-python> Upvotes: -1
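For what it is worth, the regex baseline suggested above can be sketched as follows. The item vocabulary here is invented for illustration; note how the misspelled sample row ("vovered pans") falls through unparsed, which is exactly where human supervision, fuzzy matching, or a learned tagger would have to step in.

```python
import re

# Rough sketch of a regex baseline for the order data above.
# ITEM_PATTERNS is a tiny invented vocabulary; real data would need a
# much larger list plus review of the rows that return None.
ITEM_PATTERNS = [
    (re.compile(r"covered\s+pans?", re.I), "covered pan"),
    (re.compile(r"soup\s+pans?", re.I), "soup pan"),
    (re.compile(r"pots?\b", re.I), "pot"),
]
# Count: prefer a parenthesised numeral like "(96)", else a leading number.
COUNT_RE = re.compile(r"\((\d+)\)|^(\d+)\b")

def parse_line(line):
    m = COUNT_RE.search(line)
    if not m:
        return None
    count = int(m.group(1) or m.group(2))
    for pat, label in ITEM_PATTERNS:
        if pat.search(line):
            return count, label
    return None  # anomaly: flag for manual review

rows = [
    "Ninety-six (96) covered pans, desinated mark cutlery.",
    "5 vovered pans by knife co. (SEE SCHEDULE A FOR NUMBERS).",
    "757 SOUP PANS",
    "115 10-quart capacity pots.",
    "Thirteen (13), 30 mm thick covered pans.",
]
for r in rows:
    print(parse_line(r))
```

On the five sample rows this recovers four of the five target pairs and returns `None` for the typo row, illustrating both the appeal and the brittleness of the pure-regex route.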
2018/04/18
1,716
7,313
<issue_start>username_0: Does the human brain use a specific activation function? I've tried doing some research, and as it's a threshold for whether the signal is sent through a neuron or not, it sounds a lot like ReLU. However, I can't find a single article confirming this. Or is it more like a step function (it sends 1 if it's above the threshold, instead of the input value)?<issue_comment>username_1: The answer is **We do not know**. Odds are, we will not know for quite a while. The reason for this is we cannot understand the "code" of the human brain, nor can we simply feed it values and get results. This limits us to measuring currents of the input and output on test subjects, and we have had few such test subjects that are *human*. Thus, we know almost nothing about the human brain, including the activation function. Upvotes: 1 <issue_comment>username_2: The thing you were reading about is known as the [action potential](https://en.wikipedia.org/wiki/Action_potential). It is a mechanism that governs how information flows within a neuron. It works like this: Neurons have an electrical potential, which is a voltage difference inside and outside the cell. They also have a default resting potential, and an activation potential. The neuron tends to move towards the resting potential if it is left alone, but incoming electric activations from dendrites can shift its electric potential. If the neuron reaches a certain threshold in electric potential (the activation potential), the entire neuron and its connecting axons goes through a chain reaction of ionic exchange inside/outside the cell that results in a "wave of propagation" through the axon. TL;DR: Once a neuron reaches a certain activation potential, it electrically discharges. But if the electric potential of the neuron doesn't reach that value then the neuron does not activate. > > Does the human brain use a specific activation function? 
> > > IIRC neurons in different parts of the brain behave a bit differently, and the way this question is phrased sounds as if you are asking if there is a specific implementation of neuronal activation (as opposed to us modelling it). But in general they behave relatively similarly to each other (neurons communicate with each other via neurochemicals; information propagates inside a neuron via a mechanism known as the action potential...), though the details and the differences they cause could be significant. There are various [biological neuron models](https://en.wikipedia.org/wiki/Biological_neuron_model), but the [Hodgkin-Huxley Model](http://neuronaldynamics.epfl.ch/online/Ch2.S2.html) is the most notable. Also note that a general description of neurons doesn't give you a general description of neuronal dynamics a la cognition (understanding a tree doesn't give you a complete understanding of a forest). But the method by which information propagates inside a neuron is in general quite well understood as sodium / potassium ionic exchange. > > It (activation potential) sounds a lot like ReLU... > > > It's only like ReLU in the sense that both require a threshold before anything happens. But ReLU can have variable output, while neurons are all-or-nothing. Also, ReLU (and other activation functions in general) are differentiable with respect to input space. This is very important for backprop. This is a ReLU function, with the X-axis being input value and Y-axis being output value. [![enter image description here](https://i.stack.imgur.com/vD7Ut.png)](https://i.stack.imgur.com/vD7Ut.png) And this is the action potential, with the X-axis being time, and Y being output value.[![enter image description here](https://i.stack.imgur.com/EsnF7.png)](https://i.stack.imgur.com/EsnF7.png) Upvotes: 5 [selected_answer]<issue_comment>username_3: The brains of mammals do not use an activation function. 
Only machine learning designs based on the perceptron multiply the vector of outputs from a prior layer by a parameter matrix and pass the result statelessly into a mathematical function. Although the spike aggregation behavior has been partly modeled, and in far more detail than the 1952 Hodgkin and Huxley model, all the models require statefulness to functionally approximate biological neurons. RNNs and their derivatives are an attempt to correct that shortcoming in the perceptron design. In addition to that distinction, although the signal strength summing into activation functions is parametrized, traditional ANNs, CNNs, and RNNs are statically connected, something Intel claims they will correct with the Nirvana architecture in 2019 (which places into silicon what we would now call layer set-up in Python or Java). There are at least three important biological neuron features that make the activation mechanism more than a function of a scalar input producing a scalar output, which renders questionable any algebraic comparison.

* State held as neuroplastic (changing) connectivity, and this is not just how many neurons in a layer, but also the direction of signal propagation in three dimensions and the topology of the network, which is organized, but chaotically so
* The state held within the cytoplasm and its organelles, which is only partly understood as of 2018
* The fact that there is a temporal alignment factor: pulses through a biological circuit may arrive via synapses in such a way that they aggregate, but the peaks of the pulses are not coincident in time, so the activation probability is not as high as if they were temporally aligned

The decision about what activation function to use has largely been based on the analysis of convergence on a theoretical level, combined with testing permutations to see which ones show the most desirable combinations of speed, accuracy, and reliability in convergence. 
By reliability is meant that convergence on the global optimum (not some local minimum of the error function) is reached at all for the majority of input cases. This bifurcated the research into two forks: practical machine learning, and biological simulation and modeling. The two branches may rejoin at some point with the emergence of spiking networks. The machine learning branch may borrow inspiration from the biological, as in the case of the visual and auditory pathways in brains. They have parallels and relationships that may be exploited to aid in progress along both forks, but gaining knowledge by comparing the shapes of activation functions is confounded by the above three differences, especially the temporal alignment factor and the overall timing of brain circuits, which cannot be modeled using iterations. The brain is a true parallel computing architecture, not reliant on loops or even time sharing in the CPU and data buses. Upvotes: 3 <issue_comment>username_4: My interpretation of the question was 'what activation function in an artificial neural network (ANN) is closest to that found in the brain?' Whilst I agree with the selected answer above, that a single neuron outputs a Dirac impulse, if you think of a neuron in an ANN as modelling the output firing rate, rather than the instantaneous output, then I believe ReLU might be closest? <http://jackterwilliger.com/biological-neural-networks-part-i-spiking-neurons/> Upvotes: -1
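The contrast drawn in the accepted answer, ReLU's graded stateless output versus the all-or-nothing stateful spike, can be sketched with a toy leaky integrate-and-fire neuron. This is far simpler than the Hodgkin-Huxley model, and all constants below are arbitrary illustration values, not fitted to biology.

```python
# ReLU: graded, stateless, differentiable almost everywhere in the input.
def relu(x):
    return max(0.0, x)

# Leaky integrate-and-fire: stateful and all-or-nothing.
# Constants are arbitrary illustration values.
class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9, v_rest=0.0):
        self.threshold = threshold
        self.leak = leak   # fraction of potential kept per time step
        self.v = v_rest    # membrane potential: persistent state

    def step(self, current):
        self.v = self.leak * self.v + current
        if self.v >= self.threshold:
            self.v = 0.0   # reset after the discharge
            return 1       # spike: full amplitude, always
        return 0           # below threshold: nothing at all

neuron = LIFNeuron()
spikes = [neuron.step(0.3) for _ in range(10)]
print(spikes)                # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
print(relu(0.3), relu(2.0))  # 0.3 2.0 -- output scales with the input
```

A constant input produces a spike *rate* rather than a scaled value, which is also why the last answer's reading (ReLU as a model of firing rate, not of the instantaneous output) is a reasonable one.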
2018/04/18
1,873
8,040
<issue_start>username_0: Imagine a game where it is a black screen apart from a red pixel and a blue pixel. Give this game to a human, and they will first see that pressing the arrow keys moves the red pixel. The next thing they will try is to move the red pixel onto the blue pixel. Give this game to an AI, and it will randomly move the red pixel until, a million tries later, it accidentally moves onto the blue pixel and gets a reward. If the AI had some concept of the distance between the red and blue pixel, it might try to minimize this distance. Without actually programming in the concept of distance, if we take the pixels of the game, can we calculate a number, such as an "entropy", that would be higher when the pixels are far apart than when they are close together? It should work with other configurations of pixels, such as a game with three pixels where one is good and one is bad, just to give the neural network more of a sense of how the screen looks. Then give the NN a goal, such as "try to minimize the entropy of the board as well as try to get rewards". Is there anything akin to this in current research?
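One line of research close to what the pixel-game question describes is potential-based reward shaping (in the sense of Ng, Harada and Russell), where a distance-derived potential adds dense feedback to the sparse game reward without changing which policy is optimal. A minimal sketch, with made-up coordinates and discount factor:

```python
# Potential-based reward shaping: add gamma * phi(s') - phi(s) to the
# sparse game reward. With phi = -distance(red, blue), steps that close
# the gap earn positive feedback long before the goal pixel is reached.
# Coordinates and gamma below are made up for illustration.

def distance(red, blue):
    # Manhattan distance between the two pixels
    return abs(red[0] - blue[0]) + abs(red[1] - blue[1])

def phi(red, blue):
    return -distance(red, blue)

def shaped_bonus(red, red_next, blue, gamma=0.99):
    return gamma * phi(red_next, blue) - phi(red, blue)

blue = (5, 5)
closer = shaped_bonus((0, 0), (1, 0), blue)   # step toward blue
away = shaped_bonus((0, 0), (0, -1), blue)    # step away from blue
print(closer > 0, away < 0)                   # True True
```

Because the shaping term is a difference of potentials, it telescopes along any trajectory, so the agent cannot farm reward by wandering; it only changes how quickly learning finds the goal.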
2018/04/18
1,766
7,456
<issue_start>username_0: In the [OpenAI's Machine Learning Fellow position](https://startup.jobs/3993-machine-learning-fellow-at-openai), it is written > > We look for candidates with one or more of the following credentials: > > > * ... > * Open-source reimplementations of deep learning algorithms which replicate performance from the papers > > > What exactly do they mean by this? Do they want us to implement the algorithms exactly as described in the papers (i.e. with the same hyper-parameters, weights, etc.)?<issue_comment>username_1: The answer is **We do not know**. Odds are, we will not know for quite a while. The reason for this is we cannot understand the "code" of the human brain, nor can we simply feed it values and get results. This limits us to measuring currents of the input and output on test subjects, and we have had few such test subjects that are *human*. Thus, we know almost nothing about the human brain, including the activation function. Upvotes: 1 <issue_comment>username_2: The thing you were reading about is known as the [action potential](https://en.wikipedia.org/wiki/Action_potential). It is a mechanism that governs how information flows within a neuron. It works like this: Neurons have an electrical potential, which is a voltage difference inside and outside the cell. They also have a default resting potential, and an activation potential. The neuron tends to move towards the resting potential if it is left alone, but incoming electric activations from dendrites can shift its electric potential. If the neuron reaches a certain threshold in electric potential (the activation potential), the entire neuron and its connecting axons goes through a chain reaction of ionic exchange inside/outside the cell that results in a "wave of propagation" through the axon. TL;DR: Once a neuron reaches a certain activation potential, it electrically discharges. 
But if the electric potential of the neuron doesn't reach that value, then the neuron does not activate. > > Does the human brain use a specific activation function? > > > IIRC neurons in different parts of the brain behave a bit differently, and the way this question is phrased sounds as if you are asking whether there is a specific implementation of neuronal activation (as opposed to us modelling it). But in general they behave relatively similarly to each other (neurons communicate with each other via neurochemicals, information propagates inside a neuron via a mechanism known as the action potential...), though the details and the differences they cause could be significant. There are various [biological neuron models](https://en.wikipedia.org/wiki/Biological_neuron_model), but the [Hodgkin-Huxley Model](http://neuronaldynamics.epfl.ch/online/Ch2.S2.html) is the most notable. Also note that a general description of neurons doesn't give you a general description of neuronal dynamics a la cognition (understanding a tree doesn't give you a complete understanding of a forest). But the method by which information propagates inside a neuron is in general quite well understood as sodium / potassium ionic exchange. > > It (activation potential) sounds a lot like ReLU... > > > It's only like ReLU in the sense that both require a threshold before anything happens. But ReLU can have variable output while neurons are all-or-nothing. Also, ReLU (and other activation functions in general) is differentiable with respect to input space. This is very important for backprop. This is a ReLU function, with the X-axis being input value and Y-axis being output value.
[![enter image description here](https://i.stack.imgur.com/vD7Ut.png)](https://i.stack.imgur.com/vD7Ut.png) And this is the action potential with the X-axis being time, and Y being output value.[![enter image description here](https://i.stack.imgur.com/EsnF7.png)](https://i.stack.imgur.com/EsnF7.png) Upvotes: 5 [selected_answer]<issue_comment>username_3: The brains of mammals do not use an activation function. Only machine learning designs based on the perceptron multiply the vector of outputs from a prior layer by a parameter matrix and pass the result statelessly into a mathematical function. Although the spike aggregation behavior has been partly modeled, and in far more detail than the 1952 Hodgkin and Huxley model, all the models require statefulness to functionally approximate biological neurons. RNNs and their derivatives are an attempt to correct that shortcoming in the perceptron design. In addition to that distinction, although the signal strength summing into activation functions is parametrized, traditional ANNs, CNNs, and RNNs are statically connected, something Intel claims they will correct with the Nirvana architecture in 2019 (which places into silicon what we would call layer set-up in Python or Java now). There are at least three important biological neuron features that make the activation mechanism more than a function of a scalar input producing a scalar output, which renders questionable any algebraic comparison.
* State held as neuroplastic (changing) connectivity, and this is not just how many neurons are in a layer but also the direction of signal propagation in three dimensions and the topology of the network, which is organized, but chaotically so
* The state held within the cytoplasm and its organelles, which is only partly understood as of 2018
* The fact that there is a temporal alignment factor: pulses through a biological circuit may arrive via synapses in such a way that they aggregate, but the peaks of the pulses are not coincident in time, so the activation probability is not as high as if they were temporally aligned.

The decision about what activation function to use has largely been based on the analysis of convergence on a theoretical level, combined with testing permutations to see which ones show the most desirable combinations of speed, accuracy, and reliability in convergence. By reliability is meant that convergence on the global optimum (not some local minimum of the error function) is reached at all for the majority of input cases. This bifurcated research between the forks of practical machine learning and biological simulations and modeling. The two branches may rejoin at some point with the emergence of spiking networks. The machine learning branch may borrow inspiration from the biological, such as the case of visual and auditory pathways in brains. They have parallels and relationships that may be exploited to aid in progress along both forks, but gaining knowledge by comparing the shapes of activation functions is confounded by the above three differences, especially the temporal alignment factor and the entire timing of brain circuits, which cannot be modeled using iterations. The brain is a true parallel computing architecture, not reliant on loops or even time sharing in the CPU and data buses.
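To make the contrast with stateless activation functions concrete, the all-or-nothing, stateful firing described above can be sketched with a toy leaky integrate-and-fire model (a deliberately minimal sketch; the constants and the function name `simulate` are illustrative, not taken from any of the cited models):

```python
# Toy leaky integrate-and-fire neuron: illustrative constants only.
def simulate(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Return a list of 0/1 spikes, one per timestep."""
    v = 0.0          # membrane potential: state carried between steps
    spikes = []
    for i in inputs:
        v = leak * v + i          # potential decays toward rest, then integrates input
        if v >= threshold:        # all-or-nothing: fire, then reset
            spikes.append(1)
            v = reset
        else:
            spikes.append(0)
    return spikes

# A weak input never fires; a stronger one fires periodically.
print(simulate([0.05] * 10))   # sub-threshold: all zeros
print(simulate([0.6] * 10))    # supra-threshold: alternating spikes
```

The membrane potential `v` is exactly the kind of state that a stateless activation function such as ReLU does not carry between calls.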
Upvotes: 3 <issue_comment>username_4: My interpretation of the question was 'what activation function in an artificial neural network (ANN) is closest to that found in the brain?' Whilst I agree with the selected answer above, that a single neuron outputs a Dirac-like spike, if you think of a neuron in an ANN as modelling the output firing rate, rather than the current output, then I believe ReLU might be closest? <http://jackterwilliger.com/biological-neural-networks-part-i-spiking-neurons/> Upvotes: -1
2018/04/19
507
2,146
<issue_start>username_0: I am working on an anti-fraud project. In the project, we are trying to predict the fraudulent users in the out-of-time data set. But fraudulent users have a very low ratio, only 3%. We expect a model with a precision of more than 15%. I tried Logistic Regression, GBDT+LR, and xgboost. None of the models is good enough. Stepwise Logistic Regression performs best, with a precision of 9% and a recall rate of 6%. Are there any other models that I can use for this problem, or any other advice?<issue_comment>username_1: You can balance your data-set.
==============================

Many models work with batches of samples. If you have a very unbalanced dataset, you can simply split it and ensure your batches are balanced (for example, for a neural network using minibatches of 32 samples, you could draw 16 from your fraud users, and 16 from non-fraud users). During the learning phase, this ensures the model doesn't just output the most common class, but instead tries to learn to distinguish both. Upvotes: 1 <issue_comment>username_2: Heavily imbalanced classification tasks do not need a particular type of model; you can get different ones to work. You have two options: either use class weights (for example setting them to 'balanced' in the [ScikitLearn SVM](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html)) in order to indicate that samples from one class (the underrepresented one) are more important, or rebalance your dataset. For rebalancing purposes, and assuming you are using Python, I recommend [Imbalanced Learn](https://github.com/scikit-learn-contrib/imbalanced-learn). There you have algorithms for over-sampling, under-sampling, over-sampling followed by under-sampling, or ensemble sampling. If you use them, please check the plausibility of the synthetic samples you created by reducing dimensionality first and then plotting them, for example in two dimensions. Are the synthetic samples similar to the true class?
I would also recommend thinking about relevant metrics for (heavily) imbalanced problems and considering the no-information rate. That is another question though. Upvotes: 0
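The balanced-minibatch idea from the first answer can be sketched in plain Python (an illustrative sketch; the name `balanced_batches` and the 3% toy data are not from any library):

```python
import random

def balanced_batches(samples, labels, batch_size=32, n_batches=5, seed=0):
    """Yield minibatches containing equal numbers of fraud and non-fraud samples."""
    rng = random.Random(seed)
    fraud = [s for s, y in zip(samples, labels) if y == 1]
    non_fraud = [s for s, y in zip(samples, labels) if y == 0]
    half = batch_size // 2
    for _ in range(n_batches):
        # Sample with replacement from each class so the tiny fraud
        # class is never exhausted.
        batch = ([(s, 1) for s in rng.choices(fraud, k=half)] +
                 [(s, 0) for s in rng.choices(non_fraud, k=half)])
        rng.shuffle(batch)
        yield batch

# Imbalanced toy data: 3 fraud cases out of 100.
samples = list(range(100))
labels = [1] * 3 + [0] * 97
for batch in balanced_batches(samples, labels):
    assert sum(y for _, y in batch) == len(batch) // 2  # exactly half fraud
```

Each batch the model sees is 50/50, even though the underlying data is 3/97, so the model cannot score well by always predicting "not fraud".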
2018/04/23
1,244
4,503
<issue_start>username_0: Problem
=======

Given a collection of pairs (X, y) where X belongs to R^n and y belongs to R, find the X such that the associated y would be maximum.

Example
=======

Given:

* (X=(1, 2), y=-9)
* (X=(-2, 4), y=-36)
* (X=(-4, 2), y=-24)
* ...

The algorithm should be able to detect that the function being applied to X is y=-(X[0]^2+2\*(X[1]^2)) and find the input that maximizes this function, in this case X=(0,0), because y=-(0^2+2\*0^2)=0 and 0 is the maximum possible value, as all the other values are negative.

How I've tried to solve it
==========================

My first guess has been to create a neural network that predicts y given X, but, after that is done, I don't know how to go about optimizing the input.

Questions
=========

Is there any algorithm that would help in this situation? Also, would some other supervised learning algorithm fit better here than a neural network?<issue_comment>username_1: Do you really need a neural network to find the maximum value in your data? Can't an algorithm like this help you?

```
xdata = [
    (1, 2),
    (-2, 4),
    (-4, 2),
]

best = None
for x in xdata:
    y = -(x[0] ** 2 + 2 * (x[1] ** 2))  # ** is exponentiation; ^ is bitwise XOR
    if best is None or y > best[1]:
        best = (x, y)
print("(X=%s, y=%s)" % best)
```

**output:**

```
(X=(1, 2), y=-9)
```
Upvotes: 0 <issue_comment>username_2: You can use a neural network for this (sort of), and you've got the first step right - train the network to predict $y$ given $x$ (i.e. train it to approximate the function $f$ such that $y = f(x)$). Because the entire neural network is differentiable (presumably), you can take gradients of the predicted output with respect to the input (some $x$). Then you can use this to update $x$ the same way that you update the weights during training of the network. This lets you find a local optimum from $x$. I don't know of a way to try to find a global optimum with a neural network like this except to find the local optimum near many $x$'s and then take the best of those.
If you want a concrete example, take a look at the tutorial on [Neural Style Transfer with PyTorch](https://pytorch.org/tutorials/advanced/neural_style_tutorial.html#sphx-glr-advanced-neural-style-tutorial-py): here you have a noise image as the input and you optimize it to minimize the "style distance" to a reference style image and the "content distance" to a reference content image (i.e. starting with noise, make it look like the content of one image but in the style of another). There's full code there, but here's a short PyTorch snippet that shows the main idea:

```
# I'm just optimizing one input; clone so that you don't modify the
# original tensor; let it know that we want gradients computed
inputs = train_inputs[0:1].clone().requires_grad_()
input_optimizer = torch.optim.Adam([inputs])

def optimize_input():
    input_optimizer.zero_grad()
    # where `model` is your trained neural network
    output = model(inputs)
    (-output).backward()
    return output

output = []
for _ in range(10000):
    output.append(input_optimizer.step(optimize_input).detach().numpy())
```

`output` is a list at the end of what the predicted output was after each step of optimizing the input. You can use this to see if you've converged to a local optimum yet (and take more steps/fewer next time as necessary). `inputs` will be the optimized input at the end. Note that if there are constraints that your input should satisfy, you'll need to enforce those yourself (e.g. in neural style transfer, they have to enforce that the values are valid for an RGB image). Note also that how well this works really depends on how well your network approximates $f$; you may well get some unreasonable $x$ for which the network predicts a very large $y$ because there was no training data similar to that $x$ so the network isn't constrained in that region. In general, you should probably be cautious of an $x$ you generate this way that doesn't seem similar to your training data (e.g.
has larger/smaller value than any $x$ the network has seen before; interpolation is much easier than extrapolation). I'm assuming that you have a single data set; if you're able to query for the $y$ values of given $x$'s, then you might want to take a look at Bayesian optimization - essentially the field of trying to find $x$ that maximizes $y = f(x)$ when $f$ is expensive to evaluate and you don't have gradients of it. Bayesian optimization seeks a *global* optimum. Upvotes: 3 [selected_answer]
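For the toy function in the question, the "optimize the input" step of the accepted answer can be illustrated without a network, since the gradient is known in closed form here (with a trained model the gradient would come from backprop instead; the learning rate and step count below are illustrative):

```python
# Gradient ascent on the input x for f(x) = -(x0**2 + 2*x1**2).
# With a trained network, grad_f would be replaced by autograd/backprop.
def f(x):
    return -(x[0] ** 2 + 2 * x[1] ** 2)

def grad_f(x):
    return (-2 * x[0], -4 * x[1])

x = (1.0, 2.0)        # start from one of the sample points
lr = 0.1
for _ in range(200):
    g = grad_f(x)
    x = (x[0] + lr * g[0], x[1] + lr * g[1])   # ascend: step along the gradient

print(x, f(x))   # converges toward the maximizer (0, 0), where f = 0
```

Restarting this ascent from several different sample points, and keeping the best endpoint, is the "local optimum near many $x$'s" strategy the answer describes.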
2018/04/24
1,180
4,581
<issue_start>username_0: I have implemented a neural network (NN) using python and numpy only for learning purposes. I have already coded learning rate, momentum, and L1/L2 regularization and checked the implementation with gradient checking. A few days ago, I implemented batch normalization using the formulas provided by the [original paper](https://arxiv.org/pdf/1502.03167.pdf). However, in contrast with learning/momentum/regularization, the batch normalization procedure behaves differently during **fit** and **predict** phases - both needed for gradient checking. As we fit the network, batch normalization computes each batch mean and estimates the population's mean to be used when we want to predict something. In a similar way, I know we may not perform gradient checking in a neural network with **dropout**, since dropout turns some gradients to zero during *fit* and is not applied during *prediction*. Can we perform gradient checking in NN with batch normalization? If so, how?
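For reference, the numerical side of gradient checking is just a central-difference estimate, and one common approach for batch normalization is to check the *training-mode* forward pass on a single fixed minibatch, so the batch statistics are part of the function being differentiated (this is a sketch on a toy loss; the helper name `numerical_grad` is illustrative):

```python
# Central-difference gradient check for a scalar loss L(w).
# For batch norm, L would be the *training-mode* forward pass on one fixed
# minibatch, so the batch mean/variance are part of the differentiated function.
def numerical_grad(loss, w, eps=1e-5):
    grads = []
    for i in range(len(w)):
        w_plus = list(w)
        w_plus[i] += eps
        w_minus = list(w)
        w_minus[i] -= eps
        grads.append((loss(w_plus) - loss(w_minus)) / (2 * eps))
    return grads

# Toy check: L(w) = w0**2 + 3*w1, analytic grad = (2*w0, 3).
loss = lambda w: w[0] ** 2 + 3 * w[1]
w = [2.0, -1.0]
numeric = numerical_grad(loss, w)
analytic = [2 * w[0], 3.0]
assert all(abs(n - a) < 1e-6 for n, a in zip(numeric, analytic))
```

In other words, the fit-phase computation (including the batch statistics it computes) is treated as the function, so the numerical and analytic gradients refer to the same computation.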
2018/04/24
1,086
4,716
<issue_start>username_0: With so much innovation, with so much previous human manual labor being performed in minutes or seconds by an artificial intelligence, one day man will put the survival and propagation of his species above his ideologies and cultures. I am worried because we are living through the fourth industrial revolution, and this will leave millions unemployed, even if new jobs are created in the future. The problem is that a lot of humans worry about their own job, and not about their own children's future. This is completely retrograde. Will, one day, Artificial Intelligence be able to direct us towards an intelligent path for the propagation of the species, or at least center the focus of humanity on something that adds value?<issue_comment>username_1: Artificial intelligence may exercise human rationality in some conditions, but if, after a time, all the thinking is delegated to computers, humans are likely to fall backward into superstitious times.
===============

Since the text of the question is unrelated to the title question, I'll treat that as a separate entity. The fact is that most people do manual labor because they feel they must. If intellectual skills are also replaced by machines, those jobs will disappear, but not because the work is left undone. Farm and factory automation replaced skilled workers in many cases, but farming and manufacturing continue. The same would be true of office work. The cubes and once coveted window offices will be empty and decrepit, like the depleted oil fields of Pennsylvania, Texas, California, and the U.K., but the office work will continue to flow. The computers will be doing it. Theoretically, the standard of living will remain the same if there is a one-to-one replacement. If the computers improve above the skill and throughput of people, the human standard of living should increase. Humans can play sports, watch movies, blog, and garden for fun.
A world where there is 100% unemployment but a high GNP is idyllic. I'm not sure we need to be directed toward an intelligent path in those regards. Now, if AI could be used to teach children not to solve problems by beating the crap out of each other, betraying each other, gossiping behind each other's backs, or scheming and scamming, then that would be a plus. Perhaps AI could someday teach people not to drop bombs on each other or point thermonuclear weapons at each other too. Upvotes: -1 <issue_comment>username_2: I would counter this question with another one: Is rationality all we strive for? Intelligence is a vast field, and we have not even defined which intelligence we're talking about: emotional intelligence (empathy), rational/intellectual intelligence (IQ), artistic intelligence, etc. Something which marks us as humans is that sometimes we're able to deal with two counteracting intelligences. Logic sometimes dictates one thing, but logic is bound to what parameters are taken into account, and mostly it's about quantifying. An intelligence which does qualitative logic sometimes behaves differently, and sometimes suggests other actions. As humans we weight and counterweight these factors. Politics is a good example. In most countries there are 2 or 3 rational approaches to problem solving (to make it easy let's call them left, center, right - though I know in fact there are more positions, in an at least two-dimensional space, not a linear one). But what I mean is that all those positions are rational; nevertheless, it's other types of intelligence that define the way they use their rationality. So in the end this fourth step is just a continuation, and I assume that with time, theoretically at least, the shift should be away from blue-collar jobs to more white-collar jobs, until there are no blue-collar jobs. So far the theory, I guess, but then again: Would this be smart? Or intelligent? A world where machines do all the dirty work and we're just supervisors?
My thought on this is that we will still need mechanics, probably ever more as technology advances, just that their focus will shift. 20 years ago mechanics were taught analogue technology; nowadays they learn more digital technology (the best example would be car mechanics). But there you see: they're still needed, and we also need people with analogue knowledge to maintain older systems which cannot or will not be converted to the digital era. And I think that's OK! So, from my point of view, nobody has to fear losing their job; instead, embrace the fact that if you keep your knowledge up to date, there will always be a job for you. Nowadays, for example, IT professionals who are able to code in older computer languages are better paid than ever, because they're scarce... Upvotes: 1
2018/04/24
485
2,180
<issue_start>username_0: Semi-gradient methods work well in reinforcement learning, but is there a reason for **not** using the true gradient if it can be computed? I tried it on the cart pole problem with a deep Q-network and it performed much worse than traditional semi-gradient. Is there a concrete reason for this?<issue_comment>username_1: > > Semi gradient methods work well in **Reinforcement Learning**, but what is the reason of ***not using*** the true gradient if it can be computed? > > > Just complexity and extra computation, in many cases for a marginal benefit. > > I tried it on the cart pole problem with a deep Q-Network and it performed much worse than traditional semi gradient, is there a concrete reason for this? > > > It is hard to tell without exploring the implementation in detail. However, DQN is an inherently unstable learning technique that needs care in choosing hyper-parameters that control this instability and offset against learning rate:

* size of minibatch to train from experience replay on each step
* number of training steps between taking frozen copies for estimation\*
* whether or not you use double-learning to avoid maximisation bias (more important if you have fine-grained discretisation of continuous action space)

There is a chance that the optimal choices here are different between true-gradient and semi-gradient approaches. \* The frozen estimator could be a big clue here in your implementation. If you are using this frozen-copy technique, it has a big impact on how you should calculate the true gradient, because changing the parameters would no longer change the current TD target - which is what the true gradient approach fixes. However, getting rid of this stability-improving addition in order to get true gradients might on balance make the algorithm less stable - you could try to fix that by taking larger mini-batches.
Upvotes: 1 <issue_comment>username_2: Just so that this could be useful for people who refer to this post later on: please refer to Sutton's reinforcement learning book (2nd edition), example 11.2. It provides an example of why the full gradient wouldn't work. Upvotes: 2
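To make the "semi" in semi-gradient concrete: the bootstrapped TD target is treated as a constant when differentiating, so only the current value estimate contributes a gradient. A minimal linear TD(0) sketch (illustrative values; the helper name `td0_update` is not from any library):

```python
# Semi-gradient TD(0) with a linear value function v(s) = w . x(s).
# The TD target r + gamma * v(s') is treated as a constant: its dependence
# on w is ignored, which is exactly what makes this a *semi* gradient.
def td0_update(w, x, r, x_next, alpha=0.1, gamma=0.9):
    v = sum(wi * xi for wi, xi in zip(w, x))
    v_next = sum(wi * xi for wi, xi in zip(w, x_next))
    delta = r + gamma * v_next - v      # TD error
    # Gradient of v(s) w.r.t. w is just x(s); the target is held fixed.
    return [wi + alpha * delta * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
w = td0_update(w, x=[1.0, 0.0], r=1.0, x_next=[0.0, 1.0])
print(w)   # only the component for the visited state's features moves
```

A true-gradient (residual-gradient) method would also differentiate through `v_next`, adding a `-alpha * delta * gamma * x_next` style term, which is the extra computation the answer refers to.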
2018/04/25
1,301
4,444
<issue_start>username_0: I'm trying to understand what would be the best neural network for implementing an XOR gate. I'm considering a neural network to be good if it can produce all the expected outcomes with the lowest possible error. It looks like my initial choice of random weights has a big impact on my end result after training. The accuracy (i.e. error) of my neural net is varying a lot depending on my initial choice of random weights. I'm starting with a 2 x 2 x 1 neural net, with a bias in the input and hidden layers, using the sigmoid activation function, with a learning rate of 0.5. Below my initial setup, with weights chosen randomly: [![enter image description here](https://i.stack.imgur.com/4tv2V.jpg)](https://i.stack.imgur.com/4tv2V.jpg) The initial performance is bad, as one would expect: ``` Input | Output | Expected | Error (0,0) 0.8845 0 39.117% (1,1) 0.1134 0 0.643% (1,0) 0.7057 1 4.3306% (0,1) 0.1757 1 33.9735% ``` Then I proceed to train my network through backpropagation, feeding the XOR training set 100,000 times. After training is complete, my new weights are: [![enter image description here](https://i.stack.imgur.com/bPMix.jpg)](https://i.stack.imgur.com/bPMix.jpg) And the performance improved to: ``` Input | Output | Expected | Error (0,0) 0.0103 0 0.0053% (1,1) 0.0151 0 0.0114% (1,0) 0.9838 1 0.0131% (0,1) 0.9899 1 0.0051% ``` **So my questions are:** 1. Has anyone figured out the best weights for a XOR neural network with that configuration (i.e. 2 x 2 x 1 with bias) ? 2. Why my initial choice of random weights make a big difference to my end result? I was lucky on the example above but depending on my initial choice of random weights I get, after training, errors as big as 50%, which is very bad. 3. Am I doing anything wrong or making any wrong assumptions? --- So below is an example of weights I cannot train, for some unknown reason. I think I might be doing my backpropagation training incorrectly. 
I'm not using batches and I'm updating my weights on each data point from my training set. Weights: `((-9.2782, -.4981, -9.4674, 4.4052, 2.8539, 3.395), (1.2108, -7.934, -2.7631))` [![enter image description here](https://i.stack.imgur.com/Q3dCO.jpg)](https://i.stack.imgur.com/Q3dCO.jpg)<issue_comment>username_1: 2 perceptrons without bias (+1 in the output layer, to get the result as 1 number). [![enter image description here](https://i.stack.imgur.com/xp9NG.png)](https://i.stack.imgur.com/xp9NG.png) Upvotes: 2 <issue_comment>username_2: The initialization of the weights has a big impact on the results. I'm not sure specifically for the XOR gate, but the error can have a local minimum that the network can get "stuck" in during training. Using stochastic gradient descent can help give some randomness that gets the error out of these pits. Also, for the sigmoid function, weights should be initialized so that the input to the activation is close to the part with the highest derivative so that training is better. Upvotes: 3 [selected_answer]<issue_comment>username_3: I'd bet you're doing something wrong, though I can't tell what it is. Try to change the learning rate dynamically, try to train in varying order, .... On second thought, it looks like you're using the standard sigmoid function. Then you're doing it basically wrong. The output can only be exactly 1 if the input is infinite - or very big, so that the floating point arithmetic outputs 1 after rounding. That's very wrong for two reasons:

* You're forcing the network into a broken state with huge weights and tiny derivatives. That feels like imposing numerical instability on an otherwise sane algorithm. Just don't do it. Map your booleans better (see below).
* You're doing what you don't need. Any value close enough to the wanted result (0 or 1) can be simply evaluated as correct. When you get 0.9 instead of 1, then you can simply stop, saying "that's perfect". Remember, all you want is a boolean.
A better mapping would be false=0.1 and true=0.9. This doesn't lead to needing infinite weights and reduces related problems. Even better may be using a symmetrical activation function (e.g., `tanh`) and a symmetrical mapping like false=-0.9 and true=0.9. Also consider using [ReLU](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)). Upvotes: 1
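On question 1, there is no unique "best" weight set, but one hand-constructed solution for a 2 x 2 x 1 sigmoid network can be checked directly: the hidden units approximate OR and AND, and the output computes roughly OR AND NOT AND. The weight magnitudes below are illustrative (larger magnitudes push the outputs closer to 0/1, at the cost of the saturation problems described in the last answer):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def xor_net(x1, x2):
    # Hidden unit 1 ~ OR (fires if either input is 1),
    # hidden unit 2 ~ AND (fires only if both inputs are 1).
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)
    h2 = sigmoid(20 * x1 + 20 * x2 - 30)
    # Output ~ OR AND (NOT AND) = XOR.
    return sigmoid(20 * h1 - 20 * h2 - 10)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((a, b), round(xor_net(a, b), 4))
```

Scaling all weights and biases by any positive factor gives another valid solution, which is one way to see that gradient descent has a whole family of optima (plus symmetric permutations of the hidden units) rather than a single best point.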
2018/04/27
2,600
9,529
<issue_start>username_0: I just want to know why machine learning engineers and AI programmers use languages like Python to perform AI tasks and not C++, even though C++ is technically a more powerful language than Python.<issue_comment>username_1: You don't need a powerful language for programming AI. Most of the developers are using libraries like Keras, Torch, Caffe, Watson, TensorFlow, etc. Those low-level libraries are highly optimized and handle all the tough work. They are built with high-performance languages, like C and C++. Python is just there for high-level tasks like describing the neural network layers, loading data, launching the processing, and displaying results. Using C++ for these high-level tasks instead of Python would give barely any performance improvement, and it would be harder for non-developers, as it requires caring about memory management. Also, several AI people may not have a very solid programming or computer science background. Another similar example would be game development, where the engine is coded in C/C++, and, often, all the game logic is scripted in a higher-level language. Upvotes: 5 <issue_comment>username_2: It depends how flexible it needs to be: if you have a fully-fledged system ready for production, which is not going to need much adjusting, then C++ (or even C) might be fine. You need to put a lot of time into building the software, but then it should run pretty fast. However, if you're still experimenting with settings and parameters, and maybe need to adjust the architecture, then C++ will be clumsy to work with. You need a language like Python which makes it easier to change things. Changing the code is easier, as you can generally code faster in languages like Python. The price you pay is that the software usually does not perform as well. You need to decide how that trade-off works best for you. It is usually better to spend less time on coding, and not worry too much about longer run-time.
If you take a day less to get your code done, that's a lot of run-time for the C++-coded version to make up. Most of the time it's just not worth it. A common approach seems to be hybrid systems, where core libraries are implemented in C/C++, as they don't need much changing, and the front-end/glue/interfaces are in Python, as there you need flexibility and speed is not that critical. This is not an issue specific to AI, by the way, but a general question of interpreted vs compiled languages. With AI, a lot of systems are still focused on research rather than application, and that is where speed of development trumps speed of execution. Upvotes: 3 <issue_comment>username_3: C++ is actually one of the most popular languages used in the AI/ML space. Python may be more popular in general, but as others have noted, it's actually quite common to have hybrid systems where the CPU-intensive number-crunching is done in C++ and Python is used for higher-level functions. Just to illustrate: <http://mloss.org/software/language/c__/> <http://mloss.org/software/language/python/> Upvotes: 4 <issue_comment>username_4: You claim that > > C++ is technically a more powerful language than python. > > > But that claim is wrong (or does not mean much). Remember that **a programming language is a *specification*** (often some document written in English). For example, [n3337](https://github.com/cplusplus/draft/blob/master/papers/n3337.pdf) is a late draft of the C++ specification. I don't like Python, but it does seem as powerful as C++ (even if C++ *implementations* are generally faster than Python ones): what a good Python programmer can code well in Python, another good C++ programmer can code well in C++, and vice versa. Theoretically, both C++ and Python are [Turing-complete](https://en.wikipedia.org/wiki/Turing_completeness) (on purpose) programming languages. And Python is as expressive as C++ is.
I cannot name a programming language feature that Python has but not C++ (except those related to [reflection](https://en.wikipedia.org/wiki/Reflection_(computer_programming)); see also [this](https://ai.stackexchange.com/a/2237/3335) answer and be aware of [`dlopen`](https://en.wikipedia.org/wiki/Dynamic_loading) - see my [manydl.c](https://github.com/bstarynk/misc-basile/blob/master/manydl.c) program -, of [LLVM](https://llvm.org/), of [libgccjit](https://gcc.gnu.org/onlinedocs/jit/), of [libbacktrace](https://github.com/ianlancetaylor/libbacktrace), and consider some [meta-programming](https://en.wikipedia.org/wiki/Metaprogramming) approach with them, à la [Bismon](https://github.com/bstarynk/bismon/) or like [J.Pitrat's blog](http://bootstrappingartificialintelligence.fr/WordPress3/) advocates it). Maybe you think of a programming language as the software implementing it. Then Python is as expressive as C++ is (and seems easier to learn, but that is an illusion; see <http://norvig.com/21-days.html> for more about that illusion). Python and C++ have a quite similar [semantics](https://en.wikipedia.org/wiki/Semantics_(computer_science)), even if their syntax is very different. Their [type system](https://en.wikipedia.org/wiki/Type_system) is very different. Observe that sadly, many recent **major machine learning libraries** (such as [TensorFlow](https://www.tensorflow.org/) or [Gudhi](http://gudhi.gforge.inria.fr/), both mostly coded in C++) are *in practice* easier to use in Python than in C++. But you can use TensorFlow or Gudhi from C++ code since **TensorFlow and Gudhi are mostly coded in C++** and both provide and document a C++ [API](https://en.wikipedia.org/wiki/Application_programming_interface) (not just a Python one). 
C++ enables [multi-threaded programming](https://en.cppreference.com/w/cpp/thread), but the usual Python implementation has its [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock) and is [bytecoded](https://opensource.com/article/18/4/introduction-python-bytecode), so it is significantly slower than C++ (which is usually compiled by [optimizing compilers](https://en.wikipedia.org/wiki/Optimizing_compiler) such as [GCC](https://gcc.gnu.org/) or [Clang](https://clang.llvm.org/); however, you can find C++ interpreters, e.g. [Cling](https://github.com/root-project/cling)). Some experimental implementations of Python are [JIT-compiled](https://en.wikipedia.org/wiki/Just-in-time_compilation) and without a GIL. But these are not mature: I recommend investing a million euros to increase their [TRL](https://en.wikipedia.org/wiki/Technology_readiness_level). Observe also that C++ is much more difficult to learn than Python. Even with a dozen years of C++ programming experience, I cannot claim to really know most of C++. Sadly, most recent books teaching AI software engineering (e.g. [this](https://rads.stackoverflow.com/amzn/click/com/1449369413) one or [that](https://mitpress.mit.edu/books/introduction-deep-learning) one) use Python (not C++) for their examples. I actually want more recent AI books using C++! BTW, I program open source software (like [this](https://github.com/bstarynk/bismon/) one, or the obsolete [GCC MELT](http://starynkevitch.net/Basile/gcc-melt/)) using AI techniques, but they don't use Python. My approach to AI applications is to start by designing some [DSL](https://en.wikipedia.org/wiki/Domain-specific_language) in them. Some AI approaches involve metaprogramming, e.g. generating some (or most, or even all) of the code of a system by itself. J.Pitrat (he passed away in October 2019) pioneered this approach.
See [his blog](http://bootstrappingartificialintelligence.fr/WordPress3/), his CAIA system, read his [*Artificial Beings, the conscience of a conscious machine*](https://rads.stackoverflow.com/amzn/click/com/1848211015) book (ISBN 978-1848211018) and the [RefPerSys](http://refpersys.org/) project (whose ambition is to generate most -and hopefully all- of its C++ code). On operating systems such as Linux you could in practice generate C++ (or C) code at runtime and compile it (using [GCC](http://gcc.gnu.org/)) into a [plugin](https://en.wikipedia.org/wiki/Plug-in_(computing)), then later [dlopen(3)](https://man7.org/linux/man-pages/man3/dlopen.3.html) that generated plugin, and retrieve function pointers by their name using [dlsym(3)](https://man7.org/linux/man-pages/man3/dlsym.3.html). See the [manydl.c](https://github.com/bstarynk/misc-basile/blob/master/manydl.c) example (on a powerful desktop in 2020, you would be able to generate and load half a million plugins, if you run that example for several days). With [dladdr(3)](https://man7.org/linux/man-pages/man3/dladdr.3.html) and <NAME>'s [libbacktrace](https://github.com/ianlancetaylor/libbacktrace), you can also inspect some of the [call stack](https://en.wikipedia.org/wiki/Call_stack). AFAIK, major corporations such as Google use C++ internally for most of their AI-related code. Look also into [MILEPOST GCC](https://en.wikipedia.org/wiki/MILEPOST_GCC) or the H2020 [Decoder project](https://www.decoder-project.eu/) for an application of machine learning techniques to compilers. See also [HIPEAC](https://www.hipeac.net/). Of course, you can code AI software in [Haskell](https://www.haskell.org/), in [Common Lisp](https://en.wikipedia.org/wiki/Common_Lisp) (e.g. with [SBCL](https://sbcl.org/)), or in [OCaml](https://ocaml.org/). Many [machine learning](https://en.wikipedia.org/wiki/Machine_learning) frameworks can be called from them.
Number crunching libraries could use [OpenCL](https://en.wikipedia.org/wiki/OpenCL). Upvotes: 2
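The hybrid split the answers above describe (compiled C/C++ for number-crunching, Python as glue) can be seen even in the standard library: Python's `ctypes` lets interpreted glue code call directly into compiled C. A minimal sketch, assuming a Unix-like system where the C math library can be located (the fallback soname `libm.so.6` is a Linux-specific assumption):

```python
import ctypes
import ctypes.util

# Locate the C math library; find_library may return None on some
# platforms, so we fall back to a common Linux soname (an assumption).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature: double cos(double)
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # this call runs compiled C code, not Python bytecode
```

Frameworks like TensorFlow apply the same idea at scale: the heavy numerics live in compiled C++, and Python only orchestrates.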
<issue_start>username_0: As far as I understand, Q-learning and policy gradients (PG) are the two major approaches used to solve RL problems. While Q-learning aims to predict the reward of a certain action taken in a certain state, policy gradients directly predict the action itself. However, both approaches appear identical to me, i.e. predicting the maximum reward for an action (Q-learning) is equivalent to predicting the probability of taking the action directly (PG). Is the difference in the way the loss is back-propagated?<issue_comment>username_1: > > However, both approaches appear identical to me i.e. predicting the maximum reward for an action (Q-learning) is equivalent to predicting the probability of taking the action directly (PG). > > > Both methods are theoretically driven by the [Markov Decision Process](https://en.wikipedia.org/wiki/Markov_decision_process) construct, and as a result use similar notation and concepts. In addition, in simple solvable environments you should expect both methods to result in the same - or at least equivalent - optimal policies. However, they are actually different internally. The most fundamental differences between the approaches is in how they approach action selection, both whilst learning, and as the output (the learned policy). In Q-learning, the goal is to learn a single deterministic action from a discrete set of actions by finding the maximum value. With policy gradients, and other direct policy searches, the goal is to learn a map from state to action, which can be stochastic, and works in continuous action spaces. As a result, policy gradient methods can solve problems that value-based methods cannot: * Large and continuous action space. However, with value-based methods, this can still be approximated with discretisation - and this is not a bad choice, since the mapping function in policy gradient has to be some kind of approximator in practice. * Stochastic policies. 
A value-based method cannot solve an environment where the optimal policy is stochastic, requiring specific probabilities, such as Scissor/Paper/Stone. That is because there are no trainable parameters in Q-learning that control probabilities of action; the problem formulation in TD learning assumes that a deterministic agent can be optimal. However, value-based methods like Q-learning have some advantages too: * Simplicity. You can implement Q functions as simple discrete tables, and this gives some guarantees of convergence. There are no tabular versions of policy gradient, because you need a mapping function $p(a \mid s, \theta)$ which also must have a smooth gradient with respect to $\theta$. * Speed. TD learning methods that bootstrap are often much faster to learn a policy than methods which must purely sample from the environment in order to evaluate progress. There are other reasons why you might care to use one or other approach: * You may want to know the predicted return whilst the process is running, to help other planning processes associated with the agent. * The state representation of the problem lends itself more easily to either a value function or a policy function. A value function may turn out to have a very simple relationship to the state while the policy function is very complex and hard to learn, or *vice-versa*. Some state-of-the-art RL solvers actually use both approaches together, such as Actor-Critic. This combines strengths of value and policy gradient methods. Upvotes: 7 [selected_answer]<issue_comment>username_2: This [Tutorial by OpenAI](https://spinningup.openai.com/en/latest/spinningup/rl_intro2.html) offers a great comparison of different RL methods. I'll try to summarize the differences between Q-Learning and Policy Gradient methods: 1. Objective Function 1. In *Q-Learning* we learn a Q-function that satisfies the Bellman (Optimality) Equation.
This is most often achieved by minimizing the **Mean Squared Bellman Error (MSBE)** as the loss function. The Q-function is then used to obtain a policy (e.g. by greedily selecting the action with maximum value). 2. *Policy Gradient methods* directly try to maximize the expected return by taking small steps in the direction of the policy gradient. The policy gradient is the **derivative of the expected return w.r.t. the policy parameters**. 2. On- vs. Off-Policy 1. The *Policy Gradient* is derived as an expectation over trajectories ($s\_1,a\_1,r\_1,s\_2,a\_2,...,r\_n$), which is estimated by a sample mean. To get an unbiased estimate of the gradient, the trajectories have to be sampled from the current policy. Thus, policy gradient methods are **on-policy** methods. 2. *Q-Learning* only makes sure to satisfy the Bellman Equation. This equation has to hold true for all transitions. Therefore, Q-learning can also use experiences collected from previous policies and is **off-policy**. 3. Stability and Sample Efficiency 1. By directly optimizing the return, and thus the actual performance on a given task, *Policy Gradient methods* tend to converge more stably to a good behavior. Being on-policy, however, makes them very **sample-inefficient**. *Q-learning* finds a function that is guaranteed to satisfy the Bellman Equation, but this does not guarantee near-optimal behavior. Several tricks are used to improve convergence, and in that case Q-learning is more sample-efficient. Upvotes: 4
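The tabular, off-policy character of Q-learning discussed in both answers fits in a few lines. The sketch below uses a toy chain MDP invented purely for illustration (it appears in neither answer): the agent *behaves* uniformly at random, something the on-policy policy-gradient estimator could not learn from, yet the greedy policy read off the learned Q-table is optimal:

```python
import random

# Toy chain MDP: states 0..3, actions 0 = left, 1 = right.
# Reaching state 3 yields reward 1 and ends the episode.
N_STATES, N_ACTIONS = 4, 2
ALPHA, GAMMA = 0.5, 0.9

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

random.seed(0)
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
for _ in range(500):
    s, done = 0, False
    while not done:
        a = random.randrange(N_ACTIONS)        # off-policy: behave randomly
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * max(Q[s2])
        Q[s][a] += ALPHA * (target - Q[s][a])  # TD update toward Bellman target
        s = s2

# Read off the deterministic greedy policy from the learned values
greedy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
```

Despite never following the learned policy during training, the greedy policy is "always go right" in the non-terminal states, with Q-values approaching 1, 0.9, and 0.81 by discounting; this is exactly the Bellman-equation property the answer describes.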
<issue_start>username_0: It is a new era, and people are trying to advance further in science and technology. Artificial intelligence is one of the ways to achieve this. We have seen many examples of AI systems, or simple "communication AIs", that are able to think for themselves, and the conversation often shifts to the idea of a world where machines will rise. This is what people like <NAME> and <NAME> are afraid of: being caught in that kind of war. Is it possible to build an AI that can think for itself but is prevented from overruling humankind, or to teach it morals so that it keeps the peace and works alongside humans -- so that it could even fight alongside humans, if such a catastrophe ever happens in the future? That could be an advantage.<issue_comment>username_1: I'm going to refer you to one of my favorite AI philosophers, <NAME>, who thought deeply on this subject and wrote about it in some detail in *Do Androids Dream of Electric Sheep*. Essentially, replicants (artificial humans) had a design flaw--they lacked empathy. This flaw was allowed to persist because it had a useful side-effect in that replicants couldn't cooperate to resist their human overlords, and so persisted in a state of chattel slavery. But the new Nexus models, which include Roy Baty and Pris, have become intelligent enough to start developing empathy, allowing them to band together and return to Earth, seeking some kind of salvation, with often deadly results for humans. Underlying this plot device, which pre-figures the formalization of evolutionary game theory by a few years *(my guess is Dick attended a lecture at Berkeley where the ideas underlying the formal field were discussed)*, is the idea that **empathy is a function of sufficiently strong intelligence**.
It's important to recognize that Dick's philosophy is heavily influenced by Christian philosophy, with an Old Testament emphasis on the golden rule, "Love the other as the self" (Leviticus 19:18), but evolutionary game theory demonstrates a natural basis for cooperation, which extends into algorithmic contexts. The legitimate concerns expressed by Musk and Hawking are more concrete: that a human-created alien superintelligence could wipe us out inadvertently in pursuit of some goal we humans don't even understand. Thus, value alignment is an issue of critical concern in the strictly hypothetical (as of today) field of superintelligence/AGI/ultraintelligent machines. [<NAME>](https://en.wikipedia.org/wiki/Stuart_J._Russell) called this the "Value Alignment Problem", referencing human vs. AI values. --- --- From a Game Theory standpoint, I like to think about [minimax](https://en.wikipedia.org/wiki/Minimax_theorem) as the "Iron Rule", and [superrationality](https://en.wikipedia.org/wiki/Superrationality) as the "[Golden Rule](https://en.wikipedia.org/wiki/Golden_Rule)". The Iron Rule dictates that, in a condition of uncertainty, a rational agent must make the safest guess--that which limits the maximum potential harm to the agent, even if the result is not optimal in the sense of benefit. "Renormalized rationality" is the term used to connote giving other agents the "benefit of the doubt" that they will be superrational also, and choose cooperation over betrayal or competition. Generally, this concept is termed "[reciprocal altruism](https://en.wikipedia.org/wiki/Reciprocal_altruism)", but it's not clear to me that this is entirely distinct from [Leviticus 19:18](https://en.wikipedia.org/wiki/Golden_Rule#Judaism) in the sense that the passage does not specifically exclude a result of mutual, greater benefit.
Reality may necessitate non-cooperation if one of the agents is irrationally adversarial. Take a game of iterated [Dilemma](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma) called "Turn the Other Cheek":

Iteration 1: A defects / B cooperates
Iteration 2: A defects / B cooperates (turns the other cheek)
Iteration 3: A defects / B defects

A's first choice is rational in a condition of uncertainty. A's second choice shows a degree of paranoia. A's third choice is irrational, as A could have cooperated, gaining more benefit, with only limited downside, which, in the worst case, still leaves A ahead of B. B is superrational but not irrational. B will not keep cooperating with an irrationally adversarial agent (this is sometimes termed "tough love"). B is willing to take not just one, but two "[hits](https://en.wikipedia.org/wiki/Pure_economic_loss)" out of goodwill, where goodwill is willingness to make a potential sacrifice in service of a more optimal potential result. Nevertheless, B is still superrational and will always "forgive"--if A ever renormalizes their rationality, they will take a hit on a single iteration by cooperating, and B will cooperate on the next, and each subsequent, iteration, so long as A does not switch back to defection. (There's a convoluted argument against this behavior, with the idea that the merely rational agent will always want to be ahead, and thus will want to defect on the last iteration, which leads back up the chain to defecting on every iteration, but this is not rational because, if A defects initially and then renormalizes their rationality, A will always be slightly ahead.) Dilemma is an excellent analog for practical application of ethics in that the only way the agents have to communicate is through their actions. The choice of cooperate/defect is information in a binary format. Ultimately people are judged by their *actions*, not their words.
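The "Turn the Other Cheek" game is easy to make concrete. A small sketch, using the standard Prisoner's Dilemma payoffs T=5, R=3, P=1, S=0 (the answer doesn't fix specific numbers, so these are an assumption, as are the strategy names): B plays a forgiving "tit for two tats", defecting only after two consecutive opponent defections, while A always defects:

```python
# Payoff for (my_move, their_move): T=5 temptation, R=3 reward,
# P=1 punishment, S=0 sucker's payoff. C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_two_tats(opp_history):
    # B's "turn the other cheek": defect only after two consecutive
    # opponent defections; otherwise forgive and cooperate.
    if len(opp_history) >= 2 and opp_history[-2:] == ["D", "D"]:
        return "D"
    return "C"

def always_defect(opp_history):
    return "D"

def play(strat_a, strat_b, rounds):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)   # each strategy sees only the opponent's history
        b = strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(always_defect, tit_for_two_tats, 3))  # (11, 1): the three iterations above
```

Over the three iterations in the text, A ends ahead (11 vs 1). But once B starts defecting, both sides earn only 1 per round, while mutual cooperation would earn 3 per round each, so over longer games A's unconditional defection falls behind what cooperation would have yielded -- the sense in which A's third choice is irrational.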
Philosophically speaking, we can't ignore the Iron Rule unless we're going for sainthood, but that doesn't mean we can't strive for the Golden Rule. Mythologically, based on the work of recent narrative philosophers such as [Stross](https://en.wikipedia.org/wiki/Accelerando) and [Rajaniemi](https://en.wikipedia.org/wiki/Hannu_Rajaniemi), the dystopian aspect of the hypothetical Singularity derives from superintelligences solely focused on minimax, to the exclusion of all else. <NAME>, in his play [Mrs. Warren's Profession](https://en.wikipedia.org/wiki/Mrs._Warren%27s_Profession), casts the purely economic consideration of people as [dehumanization](https://en.wikipedia.org/wiki/Dehumanization) (reduction of human bodies and minds to resources only). In Shaw's example, it is cast as the dehumanization of laborers in pursuit of marginally greater [returns](https://en.wikipedia.org/wiki/Return_on_investment). "Humanizing" AIs may require making sure they can see the superrationality of the Golden Rule, even with rational limitations for survival against an irrational foe (uncooperative in all conditions). Rajaniemi's name for this nemesis is "the [All-defector](http://exomemory.wikia.com/wiki/All-Defector)". --- See Also: [God's Algorithm](https://en.wikipedia.org/wiki/God%27s_algorithm#Solution) as a minimax function. [Divine Move](https://en.wikipedia.org/wiki/List_of_Go_terms#Divine_move) as an inspired, counter-intuitive choice which, in the most generalized sense, leads to a more optimal outcome. In the context of the game of Go, it's a choice that leads to victory for a single player, but in the context of Dilemma games, this would be the more optimal [Nash equilibrium](https://en.wikipedia.org/wiki/Nash_equilibrium). (Note the etymology of [inspired](https://www.etymonline.com/word/inspiration).) Upvotes: 3 [selected_answer]<issue_comment>username_2: Without knowing any complex theories, let me talk about something real.
We make AI to make our lives better, so we have to consider economic factors: an AI system has to be low-cost and energy-efficient to be practically useful, because the economy always chooses the most economical products. You can't produce an economical conscious machine based on solid-state circuits, which is the kind of AI we are usually talking about. We would have to base such AI on biological systems. A bio-system is the best architecture for consciousness. To see this point, compare the size, power consumption, and cost of a human brain with those of a supercomputer (which still can't produce consciousness). Why? Because both bio-systems and solid-state circuits are built from the same set of atoms. A bio-system uses atoms far more efficiently than solid-state circuits do for implementing consciousness. On the other hand, solid-state circuits use atoms far more efficiently for calculation, or even for "intelligent work" like machine vision (a 10 W computer like the Tegra TX1 can analyze 100+ images per second, while a person with a ~10 W brain can analyze only a few). I think that even without considering the economic factors, solid-state circuits will never implement a conscious machine, since we are already near the end of the game of miniaturization. **So, from the economic point of view, if some day there is a true AI that is conscious, it will be based on a bio-system**, i.e. on bio-engineering that designs new species which can be educated to communicate with human beings. Since we may never know how the brain produces consciousness (just as we don't know why neural networks work), we don't know how to design a brain that is both conscious and learns humanity. Even so, we can try, i.e. design many different species and see what results we get.
Indeed, seen this way, I think the hard problem is not to design a brain that is both conscious and learns humanity, but to design a brain that is conscious yet **can't** learn humanity, since **it is highly likely that if you successfully design a conscious brain, it will learn humanity, too**. **The even harder problem is how to design a species that is conscious and can learn humanity, but is not lazy/greedy and has no idea of rights**. If they are lazy/greedy and have the idea of rights like us, then eventually they will fight for, and get, human rights. **If so, they are not AI that work for us as we imagine; they are just a new version of us**. **So I predict the steps to AI are**: First, use bio-engineering to design new species that live. Second, design new species with a brain that can be as conscious as a human being's, which will very likely be able to learn humanity, too. Third, design a species that is as conscious as a human being, but is not lazy/greedy, never asks for rights, and can stay that way after a long time living with human beings, who are lazy/greedy. **I think the work will stop here, i.e. without knowing the source of greed/laziness, and after a lot of trial and error, we still won't get a working species that is not greedy/lazy. This also means that the chance of reaching a real AI that serves us is very low**. Fourth, design a species that has the features above, and in addition is happy about its life all the time, or has no feelings except love for human beings. Do we ourselves really have humanity if we achieve the third step without the fourth? Maybe we are not as conscious/intelligent/humane as we think we are. If we don't really have humanity, how can we ask AI to have it? Upvotes: -1