2018/05/01 <issue_start>username_0: I'm trying to write my own implementation of NEAT, and I'm stuck on the network evaluation function, which calculates the output of the network.
NEAT, as you may know, evolves a group of neural networks whose topologies change continuously through the addition of new nodes and new connections. But with the addition of new connections between previously unconnected nodes, I see a problem that will occur when I go to evaluate. Let me explain with an example:
[](https://i.stack.imgur.com/cJTjN.png)
```
INPUTS = 2 yellow nodes
HIDDEN = 3 blue nodes
OUTPUT = 1 red node
```
In the image a new connection has been added connecting node3 to node5, how can I calculate the output for node5 if I have not yet calculated the output for node3, which depends on the output from node5?
(not considering activation functions)
```
node5 output = (1 * 0.5) + (1 * 0.2) + (node3 output * 0.8)
node3 output = ((node5 output * 0.7) * 0.4)
```<issue_comment>username_1: Consider the execution order, 5 will have an invalid value because it hasn't been set form 3 yet. However the second time around it should have a value set. The invalid value should falloff after sufficient training.
```
0 -> 5
1 -> 5
5 -> 2
2 -> 3
3 -> 4
3 -> 5
RESTART
0 -> 5
1 -> 5
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: I can think of two possible ways of forcing NEAT to create a feed-forward network: one elegant, and one a little more cumbersome.
1. Only allow the "add connection" mutation to connect a node to another node that has a higher maximum distance from an input node. This should result in a feed-forward network without much extra work. (Emergent properties are great!)
2. Run as you did and create a fully connected network with NEAT, and then prune it during a forward pass. After creating the network, run through it and remove connections that try to connect to a node already used in the forward pass (e.g. 3->5). Alternatively, just remove unused input connections to nodes during the forward pass. Given how NEAT mutates, it should not be possible for you to remove a vital connection and cut the network in two. This property of NEAT makes sure your signal will always be able to reach the output, even if you remove those "backwards-pointing" connections.
I believe these should work; however, I have not tested them.
The original NEAT paper assumed a feed forward ANN, even though its implementation as described would result in a fully connected network. I think it was just an assumption of the paradigm they worked in. The confusion is fully understandable.
Upvotes: 2 <issue_comment>username_3: In my implementation, I used a recursion system to calculate the output nodes. It works as follows:
1. Assume a feed-forward network:
>
> Only allow the "add connection" mutation to connect a node to another node that has a higher maximum distance from an input node. This should result in a feed-forward network without much extra work. (Emergent properties are great!)
>
>
>
2. Define function `x`, a recursive function that takes in a node number
3. Define function `y`, a second function that takes in a node and returns all the connections with that node as an output
**In the recursive function:**
1. Call function `y`
2. Call function `x` on each node returned by `y`
3. If the parameter of `x` is an input node, return that node's value (the base case).
This was the most elegant way of implementing it I could think of, and it's a lot simpler than explicitly tracking all of the connections.
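As an illustration, here is a minimal Python sketch of this recursion (the data layout and names are my own, not from any particular NEAT implementation; activation functions are omitted, as in the question):

```python
# `connections` maps each node to the list of (source_node, weight) pairs
# feeding into it; `input_values` holds the values of the input nodes.

def incoming(node, connections):
    """Function `y`: all connections that have `node` as their output."""
    return connections.get(node, [])

def evaluate(node, connections, input_values):
    """Function `x`: recursively compute a node's output value."""
    if node in input_values:          # base case: input nodes return their value
        return input_values[node]
    return sum(weight * evaluate(src, connections, input_values)
               for src, weight in incoming(node, connections))

# Example: inputs 0 and 1 feed hidden node 2, which feeds output node 3.
conns = {2: [(0, 0.5), (1, 0.2)], 3: [(2, 0.4)]}
print(evaluate(3, conns, {0: 1.0, 1: 1.0}))  # (1*0.5 + 1*0.2) * 0.4 ≈ 0.28
```

Note that this only terminates on a feed-forward topology; a recurrent connection would make the recursion loop forever, which is exactly why the assumption in step 1 is needed.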
Upvotes: 2 <issue_comment>username_4: Hello Chris, I am also implementing this algorithm from scratch, and the way I go about activating my net is as follows:
I instantiate a list of nodes ("actives"), initially set to all of the input nodes. I then pass it to a function that initializes an empty list ("next actives") and loops through the set of connections of each node in the actives list, adding each "to" node from those connections to the "next actives" list unless it is an output node or has already been activated. Once the whole "actives" list has been looped through, I call the function again, this time passing "next actives" as the actives list; when "next actives" comes back empty, I know the net has been fully activated.
In this scenario, the connection from node 3 to node 5 would be evaluated, but node 5 would not be added to the "next actives" list because it had already been activated, preventing an infinite loop.
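A rough Python sketch of this breadth-first activation (the data layout and names are my own, not from the answerer's code):

```python
# `out_conns` maps each node to its outgoing (target, weight) connections.

def activate(inputs, out_conns, outputs):
    activated = set(inputs)
    actives = list(inputs)
    order = []                      # order in which hidden nodes fire
    while actives:
        next_actives = []
        for node in actives:
            for target, _w in out_conns.get(node, []):
                # skip output nodes and anything already activated
                if target in outputs or target in activated:
                    continue
                activated.add(target)
                next_actives.append(target)
        order.extend(next_actives)
        actives = next_actives      # stop once no new nodes were reached
    return order

# Inputs 0,1 feed hidden node 2; 2 feeds 3; 3 feeds back to 2 (recurrent) and output 4.
print(activate([0, 1], {0: [(2, 1)], 1: [(2, 1)], 2: [(3, 1)], 3: [(2, 1), (4, 1)]}, {4}))
# [2, 3]: the 3->2 connection does not re-activate node 2
```

The already-activated check is what prevents the infinite loop in the node 3 -> node 5 scenario from the question.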
Upvotes: 2 <issue_comment>username_5: Following is the pseudocode of NEAT's network evaluation (converted from the original source code):
```
until all the outputs are active:
    for all non-sensor nodes:
        activate the node
        sum the input
    for all non-sensor, active nodes:
        calculate the output
```
Note that, according to the original author, there is no recursion in the feed-forward evaluation.
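A rough Python rendering of that loop (names and data layout are illustrative; the real implementation also applies an activation function and tracks per-node active flags):

```python
# `incoming` maps each non-sensor node to its (source, weight) connections;
# `sensor_values` holds the input-node values.

def evaluate_network(incoming, sensor_values, outputs):
    values = {n: 0.0 for n in incoming}     # non-sensor nodes start at 0 (inactive)
    values.update(sensor_values)
    for _ in range(len(incoming) + 1):      # stands in for "until all outputs are active"
        for node, conns in incoming.items():
            # sum the input for each non-sensor node, then take it as the output
            values[node] = sum(w * values[src] for src, w in conns)
    return {out: values[out] for out in outputs}

# Inputs 0,1 -> hidden 2 -> output 3 (activation functions omitted)
print(evaluate_network({2: [(0, 0.5), (1, 0.2)], 3: [(2, 0.4)]}, {0: 1.0, 1: 1.0}, [3]))
```

Because there is no recursion, a backwards connection simply reads whatever value its source node currently has (initially 0), much like the RESTART idea in the first answer.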
Upvotes: 3 <issue_comment>username_6: Okay, so instead of telling you to just not have recurrent connections, I'm actually going to tell you how to identify them.
First thing you need to know is that recurrent connections are calculated **after** all other connections *and* neurons. So which connection is recurrent and which is not depends on the **order of calculation** of your NN.
Also, the first time you put data into the system, we'll just assume that every connection carries zero; otherwise, some or all neurons couldn't be calculated.
Lets say we have this neural network:
[Neural Network](https://i.stack.imgur.com/TKiZF.png)
We divide this network into 3 layers (even though conceptually it has 4 layers):
```
Input Layer [1, 2]
Hidden Layer [5, 6, 7]
Output Layer [3, 4]
```
First rule: **All outputs from the output layer are recurrent connections**.
Second rule: **All outputs from the input layer may be calculated first**.
We create two arrays. One containing the order of calculation of all neurons *and* connections and one containing all the (potentially) recurrent connections.
Right now these arrays look somewhat like this:
```
Order of
calculation: [1->5, 2->7 ]
Recurrent: [ ]
```
Now we begin by looking at the output layer. Can we calculate neuron 3? No, because 6 is missing. Can we calculate 6? No, because 5 is missing. And so on. It looks somewhat like this:
```
3, 6, 5, 7
```
The problem is that we are now stuck in a loop. So we introduce a temporary array storing all the neuron IDs that we have already visited:
```
[3, 6, 5, 7]
```
Now we ask: Can we calculate 7? No, because 6 is missing. But we already visited 6...
```
[3, 6, 5, 7] <- 6
```
Third rule is: **When you visit a neuron that has already been visited before, set the connection that you followed to this neuron as a recurrent connection**.
Now your arrays look like this:
```
Order of
calculation: [1->5, 2->7 ]
Recurrent: [6->7 ]
```
Now you finish the process and in the end join the order of calculation array with your recurrent array so, that the recurrent array follows after the other array.
It looks somewhat like this:
```
[1->5, 2->7, 7, 7->4, 7->5, 5, 5->6, 6, 6->3, 3, 4, 6->7]
```
Let's assume we have `[x->y, y]`, where `x->y` is the calculation of `x * weight(x->y)`, and `y` is the calculation of `Sum(inputs to y)`; so in this case `Sum(x->y)`, or just `x->y`.
There are still some problems to solve here. For example: what if the only input of a neuron is a recurrent connection? But I guess you'll be able to solve this problem on your own...
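For what it's worth, the third rule amounts to cycle detection during a depth-first walk from the output layer back toward the inputs. A hypothetical Python sketch (the data layout is my own):

```python
# `incoming` maps each neuron to the neurons feeding into it.

def find_recurrent(incoming, inputs, outputs):
    recurrent = set()
    resolved = set(inputs)   # neurons whose value can already be calculated
    path = set()             # the temporary "visited" array for the current walk

    def visit(node):
        if node in resolved:
            return
        path.add(node)
        for src in incoming.get(node, []):
            if src in path:
                # Third rule: we followed a connection into a neuron that was
                # already visited, so that connection is recurrent.
                recurrent.add((src, node))
            else:
                visit(src)
        path.discard(node)
        resolved.add(node)

    for node in list(outputs) + list(incoming):  # start from the output layer
        visit(node)
    return recurrent

# The example network: connections 1->5, 2->7, 7->4, 7->5, 5->6, 6->3, 6->7
incoming = {5: [1, 7], 6: [5], 3: [6], 7: [2, 6], 4: [7]}
print(find_recurrent(incoming, {1, 2}, [3, 4]))  # {(6, 7)}
```

Note that which connection gets marked as recurrent depends on the traversal order; starting from the output layer reproduces the `[6->7]` result from the walkthrough above.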
Upvotes: 0
2018/05/02 <issue_start>username_0: The basis of Q-learning is recursive (similar to dynamic programming), where only the absolute value of the terminal state is known.
Shouldn't it make sense to feed the model a greater proportion of terminal states initially, to ensure that the predicted value of a step in terminal states (zero) is learned first?
Will this make the network more likely to converge to the global optimum?<issue_comment>username_1: If you have enough domain knowledge to be able to reliably, intentionally reach those terminal states often when generating experience, yeah, that could help.
Generally, the assumption in Reinforcement Learning is no domain knowledge other than the assumption that we're in a Markov Decision Process. This means we start learning from scratch, and before extensive learning we do not know how to reach terminal states. If we don't know how to reach terminal states, we also can't deliberately go to them to generate the experiences we want as you suggest.
Upvotes: 1 <issue_comment>username_2: >
> The basis of Q-learning is recursive (similar to dynamic programming), where only the absolute value of the terminal state is known.
>
>
>
This may be true in some environments. Many environments do not have a terminal state; they are continuous. Your statement may be true, for instance, in a board-game environment where the goal is to win, but it is false for e.g. the Atari games environment.
In addition, when calculating the value of the terminal state, it is always zero, so often a special hard-coded $0$ is used, and the neural network is not required to learn that. So it is only for deterministic transitions $(S,A) \rightarrow (R, S^T)$ where you need to learn that $Q(S,A) = R$ absolutely.
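In code, this hard-coded zero usually shows up as a `done` flag in the TD target; a minimal sketch (names are illustrative):

```python
# TD target for Q-learning: at terminal states the bootstrap term is a
# hard-coded 0 rather than something the network has to learn.

def td_target(reward, next_q_values, done, gamma=0.99):
    if done:
        return reward                       # Q(S, A) = R at terminal transitions
    return reward + gamma * max(next_q_values)

print(td_target(1.0, [0.5, 2.0], done=True))    # reward only
print(td_target(1.0, [0.5, 2.0], done=False))   # reward + gamma * max Q
```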
>
> Shouldn't it make sense to feed the model a greater proportion of terminal states initially, to ensure that the predicted value of a step in terminal states (zero) is learned first?
>
>
>
In a situation where you have and know the terminal state, then yes this could help a little. It will help most where the terminal state is also a "goal" state, which does not have to be the case, even in episodic problems. For instance in a treasure-collecting maze where the episode ends after a fixed time, knowing the value of the terminal state and transitions close to it is less important to optimal control than establishing the expected return for earlier parts of the path.
Focusing on "goal" states does not generalise to all environments, and is of minimal help once the network has approximated Q values close to terminal and/or goal states. There are more generic approaches than your suggestion for distributing knowledge of sparse rewards including episode termination:
* [Prioritised sweeping](https://link.springer.com/content/pdf/10.1023/A:1022635613229.pdf). This generalises your idea of selectively sampling where experience shows that there is knowledge to be gained (by tracking current error values and transitions).
* n-step temporal difference. Using longer trajectories to calculate TD targets increases variance but reduces bias and allows assignment of reward across multiple steps quickly. This is extended in TD($\lambda$) to allow parametric mixing of multiple length trajectories and can be done online using eligibility traces. Combining Q($\lambda$) with deep neural networks is possible - [see this paper for example](https://www.cs.mcgill.ca/%7Ejmerhe1/rnn_nips.pdf).
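As an illustration of the n-step idea (my own sketch, not taken from either paper), the target sums n discounted rewards and only bootstraps if the episode did not terminate within the window:

```python
# n-step TD target: rewards over n steps plus a bootstrapped value,
# with the bootstrap dropped if the trajectory ended inside the window.

def n_step_target(rewards, bootstrap_value, done, gamma=0.99):
    """rewards: the next n rewards; bootstrap_value: max_a Q(S_{t+n}, a)."""
    g = sum(gamma**i * r for i, r in enumerate(rewards))
    if not done:
        g += gamma**len(rewards) * bootstrap_value
    return g

print(n_step_target([1.0, 0.0, 1.0], 2.0, done=False, gamma=0.5))
# 1 + 0.5*0 + 0.25*1 + 0.125*2 = 1.5
```

Longer windows propagate the sparse terminal reward further back per update, which is the variance/bias trade-off mentioned above.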
Upvotes: 3 [selected_answer]
2018/05/03 <issue_start>username_0: Is it possible to use a VAE to reconstruct an image starting from an initial image instead of using `K.random_normal`, as shown in the “sampling” function of [this example](https://github.com/keras-team/keras/blob/master/examples/variational_autoencoder.py)?
I have used a sample image with the VAE encoder to get `z_mean` and `z_logvar`.
I have been given 1000 pixels in an otherwise blank image (with nothing in it).
Now, I want to reconstruct the sample image using the decoder with a given constraint that the 1000 pixels in the otherwise blank image remain the same. The remaining pixels can be reconstructed so they are as close to the initial sample image as possible. In other words, my starting point for the decoder is a blank image with some pixels that don’t change.
How can I modify the decoder to generate an image based on this constraint? Is it possible? Are there variations of the VAE that might make this possible, so that we can predict the latent variables starting from an initial point?<issue_comment>username_1: The thing is, the decoder samples from a latent mu and sigma, so you can't sample from a raw image directly. But if you're trying to put a random image into the encoder of a trained VAE to match it to some sample image (via the reconstruction loss), then your random input image will converge to the target sample.
This will work when the following VAE architecture constraints are satisfied:
1. The target sample is contained in the previously used training distribution.
2. The parameters of the VAE are frozen after training.
3. The input image values are “backpropagate-able”. (Interpret the input image as optimizable parameters.)
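To make the idea concrete, here is a toy numpy sketch in the spirit of constraint 3 (a frozen linear map stands in for a trained decoder; all names, shapes, and the clamping scheme are my own illustration, not a real VAE):

```python
import numpy as np

# With the "decoder" frozen, treat the latent code as free parameters and
# minimize the reconstruction error over the unconstrained pixels only,
# clamping the given pixels in the output.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))              # frozen "decoder" weights (pretend-trained)
target = W @ rng.normal(size=4)          # the sample image we want to match

known = np.zeros(8, dtype=bool)
known[:3] = True                         # the given pixels that must not change

z = np.zeros(4)                          # latent code: the only free parameters
lr = 1.0 / np.linalg.norm(W, 2) ** 2     # conservative step size for least squares
for _ in range(20000):
    residual = W @ z - target
    residual[known] = 0.0                # known pixels are clamped, not optimized
    z -= lr * (W.T @ residual)           # gradient of 0.5 * ||residual||**2 w.r.t. z

reconstruction = W @ z
reconstruction[known] = target[:3]       # paste the given pixels back in
print(np.allclose(reconstruction[known], target[:3]))  # True: the constraint holds
```

In a real VAE you would backpropagate this same loss through the frozen decoder (or encoder) with a deep-learning framework instead of the hand-written gradient above.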
Upvotes: 1 <issue_comment>username_2: You could use VAE as previously answered though it will not work well in practice.
I think a denoising autoencoder (DAE) is suitable for your problem, because during training the input is corrupted stochastically, so it must learn to guess the distribution of the missing information (i.e. reconstruct the clean original input).
We could argue that the VAE is better than the DAE at modeling p(x) because of the randomness introduced at the latent-space layer, while DAE-like algorithms inject noise starting from the input layer.
Suppose your data is concentrated on the 1-D curved manifold below. What a VAE does is pick some random latent value and output p(X|Z) (which is Gaussian, by the way), while a DAE would learn to map a corrupted data point x̃ back to the original data point x.
[](https://i.stack.imgur.com/axc8Z.png)
Upvotes: 0
2018/05/04 <issue_start>username_0: What are the mathematical prerequisites to be able to study artificial general intelligence (AGI) or strong AI?<issue_comment>username_1: I always recommend starting with game theory, combinatorial game theory, and algorithmic combinatorial game theory (but I'm potentially biased).
Combinatorics is a given--discrete mathematics is heavily utilized in computer science--and, with the advent of [Combinatorial Game Theory](https://en.wikipedia.org/wiki/Combinatorial_game_theory) (CGT), we gain the ability to determine whether a given choice is optimal ("perfect play"). CGT arises out of traditional [Game Theory](https://en.wikipedia.org/wiki/Game_theory), which we sometimes term "economic game theory" to make the distinction. Out of Game Theory also arise subfields such as [Evolutionary Game Theory](https://en.wikipedia.org/wiki/Evolutionary_game_theory), which is important in AI.
These fields relate to [rationality](https://en.wikipedia.org/wiki/Rationality), which is the basis for optimized decision making. Decision making algorithms seems to be the fundamental distinction of what constitutes an Artificial Intelligence.
From [minimax](https://en.wikipedia.org/wiki/Minimax) to [game trees](https://en.wikipedia.org/wiki/Game_tree), it's probably a good idea to have a basic grounding in these fields, even if the problem your AI is trying to solve isn't formally defined as a game.
All problems, from a fundamental standpoint, can be regarded either as puzzles--non-competitive context--or games--competitive context. This distinction is based on whether there is a single agent (puzzles) or multiple agents (games.)
Upvotes: 2 <issue_comment>username_2: Most of the answers are oriented towards statistical/probabilistic models. For more 'classic' AI I would say you would need some knowledge of **predicate calculus**. This is the more symbolic planning approach to AI problem solving.
You could argue it's a bit 'old school', but still relevant for certain aspects of AI.
Upvotes: 2 <issue_comment>username_3: Before proceeding and answering the actual question, it's worth noting that **AI** and **AGI** are not the same thing, although they were at the beginning, in 1956, as suggested in [the official proposal for the Dartmouth workshop](https://ojs.aaai.org//index.php/aimagazine/article/view/1904).
Nowadays, people that consider themselves "AI researchers" or "AI practitioners" (e.g. myself) typically are not trying to **directly** build an AGI, but are focusing on a specific AI approach, such as [reinforcement learning, which could, one day, be used to build an AGI](https://ai.stackexchange.com/q/17084/2444). The reason is that we have noticed that directly tackling the "AGI problem" (i.e. creating an AGI) is a lot more complex than was originally thought and [some do not think that this is even possible](https://www.nickbostrom.com/papers/survey.pdf). AGI is a sub-branch of AI that studies how to create an AGI (or human-like AI). Only a few people are still working on AGI.
<NAME>, who's one of the people that is still interested in and attempting to directly create an AGI, wrote a blog post about this topic: [AGI Curriculum](https://goertzel.org/agi-curriculum/). If he had to design a curriculum, it would be divided into 6 courses
1. History of AI
2. AI Algorithms, Structures and Methods
3. Neuroscience & Cognitive Psychology
4. Philosophy of Mind
5. AGI Theories & Architectures
6. Future of AGI
He then suggests multiple readings (books) for each of these courses/topics. Below, I will list one book for each of the courses (also based on their free availability online as pdfs). You can find more books in the blog post.
1. The book [What Computers Still Can't Do](https://mitpress.mit.edu/books/what-computers-still-cant-do) (1992) by <NAME>
2. The book [Artificial Intelligence A Modern Approach](https://cs.calvin.edu/courses/cs/344/kvlinden/resources/AIMA-3rd-edition.pdf) (AIMA) by Russell and Norvig, but Goertzel notes that this is not an AGI book, but gives an introduction to multiple AI topics that have been used in many cognitive architectures for AGI
3. The book [Neuroscience: Exploring the Brain](https://neurophysics.ucsd.edu/courses/physics_171/Neuroscience%20Exploring%20the%20Brain%20-%20Bear,%20Mark%20F.%20%5BSRG%5D.pdf) by Bear, Connors and Paradiso
4. The book [Being No One: The Self-model Theory of Subjectivity](https://static1.squarespace.com/static/58ed4773d482e9c96b524401/t/5b55ef3f575d1f0c911d8061/1532358645129/occulture_boris+Metzinger+BeingNoOne-SelfModelTheoryOfSubjectivity-.pdf) (2003) by <NAME>
5. The paper [Artificial General Intelligence: Concept, State of the Art, and Future Prospects](https://content.sciendo.com/view/journals/jagi/5/1/article-p1.xml?language=en) (2014) by <NAME>
6. The book [Singularity is Near](https://en.wikipedia.org/wiki/The_Singularity_Is_Near) (2005) by Kurzweil
So, to conclude, if you want to study **artificial general intelligence**, it's not sufficient to just read the typical machine learning or deep learning books, but you also need to have a more solid understanding of other aspects of artificial intelligence and even neuroscience in order to study and do research on AGI. Moreover, it's probably a good idea to also have a good background in all the traditional approaches, what they can or not do, the history of AI (why some approaches have failed or not), and understand the philosophical problems and, last but of course not least, read about the current approaches to AGI, such as **universalist** (e.g. [AIXI](https://jan.leike.name/AIXI.html)) or symbolic ones (all the cognitive architectures such as [OpenCog](https://wiki.opencog.org)).
To answer your question more directly, if you can read and understand the AIMA book, then you probably have if not all most of the mathematical prerequisites, which will probably include
* **logic**
* **discrete mathematics**
* **calculus**
* **optimization**
* **linear algebra**
* **probability theory**
* **theory of computation** (this will definitely be needed if e.g. you want to learn about [AIXI](https://jan.leike.name/AIXI.html), but you will also need a nice dose of *measure theory* and *algorithmic information theory* to understand all the mathematical details of the theory)
Note that, although these subjects (logic, probability theory, or theory of computation) are [**necessary**](https://en.wikipedia.org/wiki/Necessity_and_sufficiency) to understand the current approaches to AGI, they may not be sufficient to develop a full AGI, but this is a different story. Moreover, note that these mathematical subjects are not just required to understand the current approaches to AGI, but they would also be useful to understand any other AI sub-branch, such as machine learning (and that's probably why people may think that this answer is misleading, but it's not: if you have ever tried to learn something about AIXI, you will know that all the subjects above are more than required!)
In the future, if you also want to do serious research on AGI, having a degree in Computer Science, Cognitive Science, Neuroscience, Mathematics, and/or, of course, Artificial Intelligence, may be a good thing. By the way, <NAME> has a Ph.D. in math. [<NAME>](http://www.hutter1.net/), the inventor of AIXI, did his bachelor's and master's in CS with minors in mathematics, and one Ph.D. in theoretical particle physics and another Ph.D. in CS during basically the time that he developed AIXI.
Upvotes: 2
2018/05/04 <issue_start>username_0: I'm facing the problem of having images of different dimensions as inputs in a segmentation task. Note that the images do not even have the same aspect ratio.
One common approach that I found in general in deep learning is to crop the images, as it is also suggested [**here**](https://datascience.stackexchange.com/q/16601/10640). However, in my case, I cannot crop the image and keep its center or something similar, since, in segmentation, I want the output to be of the same dimensions as the input.
[**This**](https://pdfs.semanticscholar.org/d65b/cb276bc6fc9445a78fe26de76a43e09ccd27.pdf) paper suggests that in a segmentation task one can feed the same image multiple times to the network but with a different scale and then aggregate the results. If I understand this approach correctly, it would only work if all the input images have the same aspect ratio. Please correct me if I am wrong.
Another alternative would be to just resize each image to fixed dimensions. I think this was also proposed by the answer to [**this**](https://ai.stackexchange.com/q/2403/2444) question. However, it is not specified in what way images are resized.
I considered taking the maximum width and height in the dataset and resizing all the images to that fixed size in an attempt to avoid information loss. However, I believe that our network might have difficulties with distorted images as the edges in an image might not be clear.
1. What is possibly the best way to resize your images before feeding them to the network?
2. Is there any other option that I am not aware of for solving the problem of having images of different dimensions?
3. Also, which of these approaches you think is the best taking into account the computational complexity but also the possible loss of performance by the network?
I would appreciate it if the answers to my questions included a link to a source if there is one.<issue_comment>username_1: Assuming you have a large dataset and it's labeled pixel-wise, one hacky way to solve the issue is to preprocess the images to the same dimensions by inserting horizontal and vertical margins. For the labels, you add a dummy extra class for the margin pixels, so that when calculating the loss you can mask out the margins.
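A minimal numpy sketch of this padding idea (the `IGNORE` value and all names are illustrative): pad every image/label pair to a common size, marking margin pixels with a dummy label that the loss function can later mask out.

```python
import numpy as np

IGNORE = 255  # dummy label for margin pixels, to be masked out in the loss

def pad_pair(image, label, height, width):
    """Center-pad an image and its pixel-wise label map to (height, width)."""
    h, w = image.shape[:2]
    top, left = (height - h) // 2, (width - w) // 2
    padded_img = np.zeros((height, width) + image.shape[2:], image.dtype)
    padded_lbl = np.full((height, width), IGNORE, label.dtype)
    padded_img[top:top + h, left:left + w] = image
    padded_lbl[top:top + h, left:left + w] = label
    return padded_img, padded_lbl

img, lbl = pad_pair(np.ones((2, 3, 3)), np.ones((2, 3), np.uint8), 4, 5)
print(img.shape, (lbl == IGNORE).sum())  # (4, 5, 3) 14
```

Unlike resizing, padding preserves the aspect ratio and the exact pixel-label correspondence, at the cost of wasted computation on the margins.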
Upvotes: 1 <issue_comment>username_2: You could also have a look at the paper [Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition](https://arxiv.org/pdf/1406.4729.pdf) (2015), where the SPP-net is proposed. SPP-net is based on the use of a "spatial pyramid pooling" layer, which eliminates the requirement of having fixed-size inputs.
In the abstract, the authors write
>
> Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224×224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "**spatial pyramid pooling**", to eliminate the above requirement. The new network structure, called **SPP-net**, can generate a fixed-length representation regardless of image size/scale.
>
>
> Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based **image classification** methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-theart classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in **object detection**. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training
> the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102× faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.
>
>
>
Upvotes: 2 <issue_comment>username_3: As you want to perform image segmentation, you can use [U-Net](https://arxiv.org/pdf/1505.04597.pdf), which does not have fully connected layers, but it is a [fully convolutional network](https://arxiv.org/pdf/1411.4038.pdf), which makes it able to handle inputs of any dimension. You should read the linked papers for more info.
Upvotes: 1 <issue_comment>username_4: There are 2 problems you might face.
1. Your neural net (in this case a convolutional neural net) cannot physically accept images of different resolutions. This is usually the case if one has fully-connected layers; however, if the network is **fully-convolutional**, then it should be able to accept images of any dimension. Fully-convolutional implies that it doesn't contain fully-connected layers, but only convolutional, max-pooling, and batch normalization layers, all of which are invariant to the size of the image.
Exactly this approach was proposed in this ground-breaking paper [Fully Convolutional Networks for Semantic Segmentation](https://arxiv.org/pdf/1411.4038.pdf). Keep in mind that their architecture and training methods might be slightly outdated by now. A similar approach was used in widely used [U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/pdf/1505.04597.pdf), and many other architectures for object detection, pose estimation, and segmentation.
2. Convolutional neural nets are not scale-invariant. For example, if one trains on cats of the same size in pixels on images of a fixed resolution, the net will fail on images with smaller or larger cats. In order to overcome this problem, I know of two methods (there might be more in the literature):
1. multi-scale training of images of different sizes in fully-convolutional nets in order to make the model more robust to changes in scale; and
2. having a multi-scale architecture. A place to start is to look at these two notable papers: [Feature Pyramid Networks for Object Detection](https://arxiv.org/pdf/1612.03144.pdf) and [High-Resolution Representations for Labeling Pixels and Regions](https://arxiv.org/pdf/1904.04514.pdf).
Upvotes: 5 [selected_answer]<issue_comment>username_5: Try resizing the image to the input dimensions of your neural network architecture (keeping it fixed to something like 128\*128 in a standard 2D U-Net architecture) using **nearest neighbor interpolation**. This is because if you resize your image using any other interpolation, it may result in tampering with the ground-truth labels. This is particularly a problem in segmentation. You won't face such a problem when it comes to classification.
Try the following:
```
import cv2
resized_image = cv2.resize(original_image, (new_width, new_height),
interpolation=cv2.INTER_NEAREST)
```
Upvotes: 1
2018/05/06 <issue_start>username_0: I'm building a 5-class classifier with a private dataset. Each data sample has 67 features and there are about 40000 samples. Samples of a particular class were duplicated to overcome class-imbalance problems (hence the 40000 samples).
With a one-vs-one multi-class SVM, I am getting an accuracy of ~79% on the validation set. The features were standardized to get the 79% accuracy. Without standardization, the accuracy I get is ~72%. I got a similar result when I tried 50-fold cross-validation.
Now moving on to MLP results,
**Exp 1:**
* *Network Architecture:* [67 40 5]
* *Optimizer:* Adam
* *Learning Rate:* exponential decay of base learning rate
* *Validation Accuracy:* ~45%
* *Observation:* Both training accuracy and validation accuracy stop improving.
**Exp 2:**
Repeated **Exp 1** with batchnorm layer
* *Validation Accuracy:* ~50%
* *Observation:* Got 5% increase in accuracy.
**Exp 3:**
To overfit, I increased the depth of the MLP. A deeper version of the **Exp 1** network
* *Network Architecture:* [67 40 40 40 40 40 40 5]
* *Optimizer:* Adam
* *Learning Rate:* exponential decay of base learning rate
* *Validation Accuracy:* ~55%
Thoughts on what might be happening?<issue_comment>username_1: Assuming you have a large dataset, and it's labeled pixel-wise, one hacky way to solve the issue is to preprocess the images to have same dimensions by inserting horizontal and vertical margins according to your desired dimensions, as for labels you add dummy extra output for the margin pixels so when calculating the loss you could mask the margins.
Upvotes: 1 <issue_comment>username_2: You could also have a look at the paper [Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition](https://arxiv.org/pdf/1406.4729.pdf) (2015), where the SPP-net is proposed. SSP-net is based on the use of a "spatial pyramid pooling", which eliminates the requirement of having fixed-size inputs.
In the abstract, the authors write
>
> Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224×224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "**spatial pyramid pooling**", to eliminate the above requirement. The new network structure, called **SPP-net**, can generate a fixed-length representation regardless of image size/scale.
>
>
> Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based **image classification** methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-theart classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in **object detection**. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training
> the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102× faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.
>
>
>
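For intuition, the pooling trick itself is easy to sketch: max-pool the feature map into a fixed grid of bins at several pyramid levels, so the output length depends only on the channel count and the bin counts, never on the input resolution. A minimal NumPy sketch (not the paper's implementation; the function name and pyramid levels below are made up for illustration):

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map into a fixed-length vector.

    Output length is C * sum(n*n for n in levels), independent of H and W
    (both assumed >= max(levels))."""
    c, h, w = feature_map.shape
    pooled = []
    for n in levels:
        # Bin edges chosen so the n x n grid always covers the whole map.
        h_edges = np.linspace(0, h, n + 1).astype(int)
        w_edges = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                patch = feature_map[:, h_edges[i]:h_edges[i + 1],
                                       w_edges[j]:w_edges[j + 1]]
                pooled.append(patch.max(axis=(1, 2)))
    return np.concatenate(pooled)

# Two inputs of different spatial size give vectors of the same length.
v1 = spatial_pyramid_pool(np.random.rand(8, 13, 17))
v2 = spatial_pyramid_pool(np.random.rand(8, 32, 24))
assert v1.shape == v2.shape == (8 * (1 + 4 + 16),)
```

A fixed-length vector like this can then feed an ordinary fully-connected classifier, which is the point of the paper.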
Upvotes: 2 <issue_comment>username_3: As you want to perform image segmentation, you can use [U-Net](https://arxiv.org/pdf/1505.04597.pdf), which does not have fully connected layers, but it is a [fully convolutional network](https://arxiv.org/pdf/1411.4038.pdf), which makes it able to handle inputs of any dimension. You should read the linked papers for more info.
Upvotes: 1 <issue_comment>username_4: There are 2 problems you might face.
1. Your neural net (in this case convolutional neural net) cannot physically accept images of different resolutions. This is usually the case if one has fully-connected layers, however, if the network is **fully-convolutional**, then it should be able to accept images of any dimension. Fully-convolutional implies that it doesn't contain fully-connected layers, but only convolutional, max-pooling, and batch normalization layers all of which are invariant to the size of the image.
Exactly this approach was proposed in this ground-breaking paper [Fully Convolutional Networks for Semantic Segmentation](https://arxiv.org/pdf/1411.4038.pdf). Keep in mind that their architecture and training methods might be slightly outdated by now. A similar approach was used in widely used [U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/pdf/1505.04597.pdf), and many other architectures for object detection, pose estimation, and segmentation.
2. Convolutional neural nets are not scale-invariant. For example, if one trains on the cats of the same size in pixels on images of a fixed resolution, the net would fail on images of smaller or larger sizes of cats. In order to overcome this problem, I know of two methods (might be more in the literature):
1. multi-scale training of images of different sizes in fully-convolutional nets in order to make the model more robust to changes in scale; and
2. having a multi-scale architecture. A place to start is to look at these two notable papers: [Feature Pyramid Networks for Object Detection](https://arxiv.org/pdf/1612.03144.pdf) and [High-Resolution Representations for Labeling Pixels and Regions](https://arxiv.org/pdf/1904.04514.pdf).
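The size-agnostic behaviour of convolutional layers mentioned in point 1 is easy to verify: a "valid" convolution maps any H × W input to (H − k + 1) × (W − k + 1), so nothing in the layer fixes the input size — only a downstream fully-connected layer would. A toy NumPy check (illustrative only, not an efficient implementation):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2D convolution; accepts any input at least kernel-sized."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

k = np.ones((3, 3)) / 9.0  # a 3x3 box-blur kernel
# The same layer handles different input sizes; only the output size changes.
assert conv2d_valid(np.zeros((10, 10)), k).shape == (8, 8)
assert conv2d_valid(np.zeros((50, 70)), k).shape == (48, 68)
```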
Upvotes: 5 [selected_answer]<issue_comment>username_5: Try resizing the image to the input dimensions of your neural network architecture (keeping them fixed to something like 128\*128 in a standard 2D U-Net architecture) using the **nearest neighbor interpolation** technique. This is because if you resize your image using any other interpolation, you may end up tampering with the ground truth labels. This is particularly a problem in segmentation. You won't face such a problem when it comes to classification.
Try the following:
```
import cv2
resized_image = cv2.resize(original_image, (new_width, new_height),
                           interpolation=cv2.INTER_NEAREST)
```
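To see why nearest neighbor matters for label masks, here is a pure-NumPy sketch (the helper below is illustrative, not part of OpenCV): every output pixel copies an input pixel, so no new label values can appear, whereas bilinear interpolation would blend, e.g., labels 0 and 2 into a spurious 1.

```python
import numpy as np

def resize_nearest(mask, new_h, new_w):
    """Nearest-neighbour resize: each output pixel copies one input pixel."""
    h, w = mask.shape
    rows = (np.arange(new_h) * h // new_h).astype(int)
    cols = (np.arange(new_w) * w // new_w).astype(int)
    return mask[rows[:, None], cols[None, :]]

mask = np.random.randint(0, 4, size=(37, 53))  # labels in {0, 1, 2, 3}
resized = resize_nearest(mask, 128, 128)
# The resized mask contains only labels that already existed in the original.
assert set(np.unique(resized)) <= set(np.unique(mask))
```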
Upvotes: 1 |
2018/05/06 | 941 | 4,197 | <issue_start>username_0: According to [this blog post](https://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other), it seems that AI systems can lie. However, can an AI be programmed in such a way that it never lies (even after learning new things)?<issue_comment>username_1: If a machine learning-based AI is "sufficiently smart enough" to be able to lie, then there is nothing preventing it from lying. This does not mean it can't be persuaded from lying.
So just make the AI simple enough to not be able to lie.
The reasoning here is that in order for a system to be able to lie, it must be able to recognize an incentive to lie. Recognizing this incentive is a challenging function and would be impossible to code manually into a computer. Machine learning can be applied to problems such as these, where the function is hard to code manually. Although there has been promising work on understanding what the representations/features learned by machine learning actually represent, it may not be possible in general to understand what a lie looks like in the agent's representation. Because of this, having a hand-coded rule to catch when an agent is lying is not possible, and thus being able to prevent an agent from lying (or catch it when it does) isn't possible when using machine learning.
Upvotes: 2 <issue_comment>username_2: You may be interested in the utility functions of deception:
From the abstract of [Why Animals Lie: How Dishonesty and Belief can Coexist in a Signaling System (NIH, 2006)](https://www.ncbi.nlm.nih.gov/pubmed/17109314):
>
> We develop and apply a simple model for animal communication in which signalers can use a nontrivial frequency of deception without causing listeners to completely lose belief. This common feature of animal communication has been difficult to explain as a stable adaptive outcome of the options and payoffs intrinsic to signaling interactions. Our theory is based on two realistic assumptions. (1) Signals are "overheard" by several listeners or listener types with different payoffs. The signaler may then benefit from using incomplete honesty to elicit different responses from different listener types, such as attracting potential mates while simultaneously deterring competitors. (2) Signaler and listener strategies change dynamically in response to current payoffs for different behaviors. The dynamic equations can be interpreted as describing learning and behavior change by individuals or evolution across generations. We explain how our dynamic model differs from other solution concepts from classical and evolutionary game theory and how it relates to general models for frequency-dependent phenotype dynamics. We illustrate the theory with several applications where deceptive signaling occurs readily in our framework, including bluffing competitors for potential mates or territories. We suggest future theoretical directions to make the models more general and propose some possible experimental tests.
>
>
>
A degree of deceptive capability seems to be beneficial from the standpoint of evolution.
We humans are not always known for veracity, so the ability to understand deception might be a critical component in an Artificial General Intelligence's ability to interact with humans. (Specifically, you can't always believe what humans tell you.)
Based on recent human history, the recognition of the unreliability of humans (versus data and as-objective-as-possible analysis) may become critical to the survival of our own species.
**More importantly, it will be essential for strong AI to understand that the "data can lie"** (faulty parameters, inaccurate data, unawareness of incomplete information.)
JT's answer is a great functional overview on why it's not possible with current methods. **This answer might be regarded in the sense that, aside from very limited special cases such as solved games where true objectivity can be achieved, reality is subjective and "truth" is a subjective function of the parameters and data.**
Again, understanding that last bit is likely much more important than trying to code AI's not to "lie".
Upvotes: 0 |
2018/05/07 | 508 | 2,094 | <issue_start>username_0: How can we compare, in terms of similarity (and/or meaning), two pieces of text (or documents)?
For example, let's say that I want to determine whether a document is a plagiarized version of another document. Which approach should I use? Could I use neural networks to do this? Or are there other more suitable approaches?<issue_comment>username_1: It depends what you mean by "comparison", but in general I would think not really.
Neural networks operate on the sub-symbolic level, i.e. instead of handling discrete symbols (such as letters) they work with numerical values. These values can often be mapped onto symbols (e.g. through input or output nodes), which are typically letters or words.
If you want to compare texts, you are dealing with symbols, so it would probably be easier to operate on the symbolic level, by manipulating words directly, rather than translating them into numerical values and back, as that usually involves some loss of precision.
But as I said, it is hard to answer your question without knowing more detail about the exact nature of the comparison you're after.
Upvotes: 1 <issue_comment>username_2: There is more than one way of doing this:
1. You can compute the BLEU score between them if you are looking at the quality of a machine translation. Check this [link](https://en.wikipedia.org/wiki/BLEU).
2. You can convert them into 2 vectors using doc2vec and find the similarity between the vectors using cosine similarity.
3. [Siamese networks](https://www.researchgate.net/profile/Maarten_Versteegh/publication/304834009_Learning_Text_Similarity_with_Siamese_Recurrent_Networks/links/577c2e5d08aec3b743367064/Learning-Text-Similarity-with-Siamese-Recurrent-Networks.pdf?origin=publication_detail) are something similar to what you are asking. They are neural nets that use a distance metric for learning rather than a loss metric.
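As a rough illustration of option 2, here is a bag-of-words stand-in for the document vectors (real doc2vec embeddings capture far more meaning, but the cosine step is identical; the helper below is made up for this example):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between simple bag-of-words count vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

same = cosine_similarity("the cat sat on the mat", "the cat sat on the mat")
different = cosine_similarity("the cat sat", "quantum flux capacitor")
assert abs(same - 1.0) < 1e-9 and different == 0.0
```

Identical documents score ~1.0 and documents with no shared words score 0.0; plagiarized documents typically land somewhere high in between.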
I don't understand why you want to use a neural network for comparing two pieces of text. Generally, comparisons are done with some distance metric, not by a neural network.
Upvotes: 3 [selected_answer] |
2018/05/08 | 1,860 | 7,917 | <issue_start>username_0: This just popped into my head, and I haven't thought it through, but it feels like a sound question. The definition of intelligence might still be somewhat fuzzy, possibly a factor of our evolving understanding of "intelligence" in regard to algorithms, but rationality has some precise definitions.
* Are Rationality and Intelligence distinct?
If not, explain. If so, elaborate.
*(I have some thoughts on the subject and would be very interested in the thoughts of others.)*<issue_comment>username_1: I recall someone (my prof probably) saying that the difference is that intelligence is a problem-solving capability, while rationality refers more to the capability to apply one's intelligence.
ex: You are smart for knowing that sleeping late is bad for your health, but if you still sleep late then you are irrational.
In that sense then, rationality is like a meta-problem-solving skill perhaps?
Upvotes: 1 <issue_comment>username_2: From Norvig and Russell's definitions of rationality:
>
> * Thinking Rationally - The Greek philosopher Aristotle was one of the first to attempt to codify “right thinking,” that
> is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures
> that always yielded correct conclusions when given correct premises—for example, “Socrates
> is a man; all men are mortal; therefore, Socrates is mortal.” These laws of thought were
> supposed to govern the operation of the mind; their study initiated the field called logic.
> * Acting Rationally - An agent is just something that acts. Of course,
> all computer programs do something, but computer agents are expected to do more: operate
> autonomously, perceive their environment, persist over a prolonged time period, adapt to
> change, and create and pursue goals.
> A rational agent is one that acts so as to achieve the
> best outcome or, when there is uncertainty, the best expected outcome.
> In the “laws of thought” approach to AI, the emphasis was on correct inferences. Making
> correct inferences is sometimes part of being a rational agent, because one way to act
> rationally is to reason logically to the conclusion that a given action will achieve one’s goals
> and then to act on that conclusion. On the other hand, correct inference is not all of rationality;
> in some situations, there is no provably correct thing to do, but something must still be
> done. There are also ways of acting rationally that cannot be said to involve inference. For
> example, recoiling from a hot stove is a reflex action that is usually more successful than a
> slower action taken after careful deliberation.
>
>
>
Clearly and also intuitively, rationality is well defined.
**Intelligence as seen from a mathematical and computational approach:**
Intelligence can be the ability of an agent to make rational or irrational decisions on a varying time frame, and also to choose the level of rationality (strictly in a computational sense). For example, I have exams and I want to watch TV; on a time frame of a week/month, the rational decision would be to study, so that I can enjoy the fruits of my labor, which will be much more than the instantaneous pleasure of TV (also, I can watch reruns). But for a time frame of an hour, watching TV is definitely the most rewarding thing. So intelligence can be defined as the capability of deciding the length of the time frame over which to be rational (we call visionaries those who can see rewards far in the future).
Also, as game theory or economics suggests, we can have different definitions of rationality depending on our needs. Thus, watching TV to gain knowledge might be more important to someone than studying, so effectively he has a different rationality function (an arbitrary made-up term) to satisfy. Thus, intelligence can be deciding our rationality function, based on our needs and external experiences (learning, in a nutshell). Also, we may decide to minimize our rationality cost function or leave it in an intermediate state (which can be thought of as the minimum of a different rationality cost function; thus only the rationality functions are the true variables, not the intelligent decision to minimize them or not).
Let's take the example of bees (I am not sure whether this is the correct interpretation, though): bees can hardly be called intelligent (no foresight), but they are rational. They perform the task assigned to them with efficiency and toil (even though this does not reward the bee itself, it rewards the genes carried by the bees and ensures their survival through the queen - this can be thought of as evolutionarily coded intelligence). Bees perform these jobs in an apparently rational way, which has been decided by thousands of years of evolution. Though bees individually are of hardly any importance, together they create a truly intelligent community - taking smart decisions and actions, although their farsightedness spans only a smaller time frame compared to humans. Thus, in common terms, it can be thought that rationality almost always leads to intelligence, but the same cannot be said for the converse (as with intelligence you now have a choice of rationality function, and you can choose not to satisfy it, or choose an irrational function).
An important consequence of bees not being intelligent is that they are always performing rational actions as hard-coded in their genes, which causes the entire colony to behave in an intelligent and very energy-efficient way (but there may be better strategies; we can never know whether they follow the best strategy unless we account for all the variables).
**TL;DR:** Intelligence can be thought of as the ability of an agent to choose the amount of rationality it wishes to satisfy. Using mathematics, we can always find one or more completely rational methods of solving a problem, with the variables being time and the environment. But intelligent beings can add more variables, like their experience, needs and motivation. Intelligent beings can, to some extent, single-handedly manipulate the external environment to suit their needs.
**From psychological viewpoint:**
[Here](https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligent_systems.htm) are a few definitions of different types of intelligence and learning - quite good and concise.
IQ - In science, the term intelligence typically refers to what we could call academic or cognitive intelligence. In their book on intelligence, professors Resing and Drenth (2007)\* answer the question 'What is intelligence?' using the following definition: "The whole of cognitive or intellectual abilities required to obtain knowledge, and to use that knowledge in a good way to solve problems that have a well described goal and structure."
[Intelligence - Wikipedia](https://en.wikipedia.org/wiki/Intelligence) - Actually has some good definitions.
[Intelligence Quotient - Wikipedia](https://en.wikipedia.org/wiki/Intelligence_quotient)
Emotional Intelligence - Emotional intelligence is the ability to identify and manage your own emotions and the emotions of others. It is generally said to include three skills: emotional awareness; the ability to harness emotions and apply them to tasks like thinking and problem solving; and the ability to manage emotions, which includes regulating your own emotions and cheering up or calming down other people.
[Emotional Intelligence - PsychCentral](https://psychcentral.com/lib/what-is-emotional-intelligence-eq/)
[Emotional Intelligence - Wikipedia](https://en.wikipedia.org/wiki/Emotional_intelligence)
Hope this is of some insight!
Upvotes: 2 <issue_comment>username_3: You can solve something rationally or with emotions/intuition.
Intelligence can be rational or intuitive. Rationality is the newer, more precise form of intelligence.
Humans use both types of intelligences.
Upvotes: 1 |
2018/05/08 | 2,154 | 8,773 | <issue_start>username_0: I'm working on a Reinforcement Learning task where I use reward shaping as proposed in the paper [Policy invariance under reward transformations:
Theory and application to reward shaping](https://people.eecs.berkeley.edu/~russell/papers/icml99-shaping.pdf) (1999) by <NAME>, <NAME> and <NAME>.
In short, my reward function has this form:
$$R(s, s') = \gamma P(s') - P(s)$$
where $P$ is a **potential function**. When $s = s'$, then $R(s, s) = (\gamma - 1)P(s)$, which is non-positive, since $0 < \gamma \leq 1$.
But if $P(s)$ is relatively high (let's say $P(s) = 1000$), then $R(s, s)$ becomes large in magnitude as well (e.g. with $\gamma=0.99$, $R(s,s)=-10$), and if the agent stays in the same state for many steps, the cumulative reward becomes more and more negative, which might affect the learning process.
In practice, I solved the problem by simply dropping the term $P(s)$ when $s = s'$. But I have some doubts about the theoretical correctness of this "implementation trick".
Another idea could be to scale $\gamma$ appropriately in order to give a reasonable reward. Indeed, with $\gamma=1.0$, there is no problem, and, with $\gamma$ very near to $1.0$, the negative reward is tolerable. Personally, I don't like it because it means that $\gamma$ is somehow dependent on the reward.
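As a quick numerical check (with a hypothetical potential table), the shaping terms telescope, so the discounted sum of shaping rewards along any trajectory equals $\gamma^T P(s\_T) - P(s\_0)$, no matter how long the agent idles — which is exactly the invariance my "trick" might break:

```python
# Toy check of the telescoping property of potential-based shaping:
# sum_t gamma^t * (gamma * P(s_{t+1}) - P(s_t)) = gamma^T * P(s_T) - P(s_0).
gamma = 0.99
P = {0: 1000.0, 1: 1200.0, 2: 1500.0}  # hypothetical potentials

def shaping(s, s_next):
    return gamma * P[s_next] - P[s]

trajectory = [0, 0, 0, 1, 2]  # the agent idles in state 0 for two steps
total = sum(gamma ** t * shaping(s, s_next)
            for t, (s, s_next) in enumerate(zip(trajectory, trajectory[1:])))
T = len(trajectory) - 1
assert abs(total - (gamma ** T * P[trajectory[-1]] - P[trajectory[0]])) < 1e-9
```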
What do you think? |
2018/05/09 | 631 | 2,132 | <issue_start>username_0: In the paper [*Deterministic Policy Gradient Algorithms*](http://proceedings.mlr.press/v32/silver14.pdf), I am really confused about chapter 4.1 and 4.2 which is "On and off-policy Deterministic Actor-Critic".
I don't know what the difference between the two algorithms is.
I only noticed that equations 11 and 16 differ in the action argument of the Q function: it is $a\_{t+1}$ in equation 11 and $\mu(s\_{t+1})$ in equation 16. If that is what really matters, how can I calculate $a\_{t+1}$ in equation 11?

<issue_comment>username_1: The twist here is that the $a\_{t+1}$ in (11) and the $\mu(s\_{t+1})$ in (16) are the *same* and actually the $a\_t$ in the on-policy case and the $a\_t$ in the off-policy case are different.
The key to the understanding is that in on-policy algorithms you have to use actions (and generally speaking trajectories) generated by the policy in the updating steps (to improve the policy itself). This means that in the on-policy case $a\_i = \mu(s\_{i})$ (in equations 11-13).
Whereas in the off-policy case you can use *any* trajectory to improve your value/action-value functions, which means that the actions $a\_t$ can be generated by any distribution, $a\_t \sim \pi(s\_t, a\_t)$. In (16) the algorithm explicitly states, however, that the action-value function ($Q^w$) has to be evaluated at $\mu(s\_{t+1})$ (just like in the on-policy case) and not at $a\_{t+1}$, which was the actual action in the trajectory generated by the behaviour policy $\pi$.
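To make this concrete, here is a toy sketch (with a made-up linear critic and 1-D state/action; none of this is from the paper's experiments). In both variants the bootstrap term uses $\mu(s\_{t+1})$; only the executed action differs:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.5, -0.3])  # hypothetical linear critic weights
theta, gamma = 0.8, 0.99   # policy parameter and discount

def Q(s, a):               # critic: Q(s, a) = w . [s, a]
    return w @ np.array([s, a])

def mu(s):                 # deterministic target policy
    return theta * s

def td_error(s, a, r, s_next):
    # Both variants bootstrap with the target policy's action mu(s_next);
    # they differ only in how the executed action a was generated.
    return r + gamma * Q(s_next, mu(s_next)) - Q(s, a)

s, r, s_next = 1.0, 0.1, 1.5
a_on = mu(s)                          # on-policy: act with mu itself
a_off = mu(s) + rng.normal(0.0, 1.0)  # off-policy: noisy behaviour policy
delta_on = td_error(s, a_on, r, s_next)
delta_off = td_error(s, a_off, r, s_next)
# The two TD errors differ only through the executed-action term Q(s, a).
assert abs((delta_on - delta_off) - (Q(s, a_off) - Q(s, a_on))) < 1e-9
```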
Upvotes: 3 [selected_answer]<issue_comment>username_2: The main difference between on-policy and off-policy is how samples are obtained and which policy we optimize.
In off-policy deterministic actor-critic, the trajectories are sampled from a behavior policy $\beta$ (also called the behavior distribution), not from the policy we are optimizing (that is, $\mu\_{\theta}$). However, in the on-policy actor-critic, the action $a\_{t+1}$ is sampled from the target policy $\mu\_{\theta}$, and the policy we optimize is also $\mu\_{\theta}$.
Upvotes: 1 |
2018/05/09 | 2,460 | 10,299 | <issue_start>username_0: Imagine trying to create a simulated virtual environment that is complicated enough to create a "general AI" (which I define as a self aware AI) but is as simple as possible. What would this minimal environment be like?
For example, an environment that was just a chess game would be too simple: a chess program cannot be a general AI.
What about an environment with multiple agents playing chess and communicating their results to each other - would this constitute a general AI? (Can you say that a chess grandmaster who thinks about chess all day long has 'general AI'? During his time thinking about chess, is he any different from a chess computer?)
What about a 3D sim-like world? That seems too complicated. After all, why couldn't a general AI exist in a 2D world?
What would be an example of a simple environment but not too simple such that the AI(s) can have self-awareness?<issue_comment>username_1: General AI can absolutely exist in a 2D world, just that a generalized AI (defined here as "consistent strength across a set of problems") in this context would still be quite distinct from an [Artificial General Intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence), defined as "an algorithm that can perform any intellectual task that a human can."
Even there, the definition of AGI is fuzzy, because "which human?" (Human intelligence is a spectrum, where individuals possess different degrees of problem solving capability in different contexts.)
---
[Artificial Consciousness](https://en.wikipedia.org/wiki/Artificial_consciousness): Unfortunately, [self-awareness](https://en.wikipedia.org/wiki/Self-awareness) / [consciousness](https://en.wikipedia.org/wiki/Consciousness) is a heavily metaphysical issue, distinct from problem-solving capability (intelligence).
You definitely want to look a the "[Chinese Room](https://en.wikipedia.org/wiki/Chinese_room)" and rebuttals.
---
Probably worth looking at the [holographic principle](https://en.wikipedia.org/wiki/Holographic_principle): "a concept in physics whereby a space is considered as a hologram of n-1 dimensions." Certainly models and games can be structured in this way.
Another place to explore is theories of emergence of superintelligence on infinite [Conway's Game of Life](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life). (In a nutshell, my understanding is that once researchers figured out how to generate any number within the cellular automata, the possibility of emergent sentience given a gameboard of sufficient size is at least theoretically sound.)
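(As an aside, the cellular automaton itself takes only a few lines to simulate — a minimal sketch on a wrap-around board, just to make the "gameboard" concrete:)

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life on a toroidal (wrap-around) board."""
    # Count the 8 neighbours of every cell via shifted copies of the grid.
    neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    # A cell is alive next step with exactly 3 neighbours,
    # or with 2 neighbours if it is already alive.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A "blinker" (three cells in a row) oscillates with period 2.
g = np.zeros((5, 5), dtype=int)
g[2, 1:4] = 1
assert np.array_equal(life_step(life_step(g)), g)
assert not np.array_equal(life_step(g), g)
```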
Upvotes: 2 <issue_comment>username_2: I think the most important thing is that it has to have time simulated in some way. Think self aware chatbot. Then to be "self aware" the environment could be data that is fed in through time that can be distinguished as being "self" and "other". By that I suppose I mean "self" is the part it influences directly and "other" is the part that is influenced indirectly or not at all. Other than that it probably can live inside pretty abstract environments. The reason time is so important is without it the cognitive algorithm is just solving a math problem.
Upvotes: 2 <issue_comment>username_3: I will skip all the debate about "what is an AGI", "simulation game", ... These topics have been discussed for decades and nowadays they are, in my opinion, a dead end.
Thus, I can only answer with my personal experience:
It is a basic theorem in computing that any number of dimensions, including the temporal one, in a finite-size space, can be reduced to 1D.
However, in practical examples, the 1D representation becomes hard to analyze and visualize. It is more practical to work with graphs, which can be seen as an intermediate between 1D and 2D. Graphs allow the representation of all necessary facts and relations.
For example, if we try to develop an AGI able to work in the area of mathematics, any expression (which humans write in a 2D representation, with rationals, subscripts, integrals, ...) can be represented in 1D (as an expression written in program source code), but this 1D form must be parsed to reach the graph that can be analyzed or executed. Thus, the graph that results from parsing the expression is the most practical representation.
Another example: if we want an agent that travels across a 3D world, this world can be seen as an empty space with objects that have some properties. Again, after the initial stage of scene analysis and object recognition (the equivalent of the parser in the previous example), we reach a graph.
Thus, to really work in the area of AGI, I suggest skipping the problems of scene analysis, object recognition, and speech recognition (Narrow AI), and working directly on the representative graphs.
Upvotes: 3 <issue_comment>username_4: Of the answers so far, the one from @username_1 were the most provocative. For instance, the reference to the Chinese Room critique brings up Searle's contention that some form of intentionality might need support in the artificial environment. This might imply the necessity of a value system or a pain-pleasure system, i.e. something where good consequences can be "experienced" or actively sought and bad consequences avoided. Or some potential for individual extinction (death or termination) might need to be recognized. The [possibility of "ego-death"](https://ai.stackexchange.com/questions/2955/whats-the-term-for-death-by-dissolving-in-ai) might need to have a high negative value. That might imply an artificial world should include "other minds" or other agents, which the emerging or learning intelligent agent could observe (in some sense) and "reflect on", i.e recognize an intelligence like its own. In this sense the Cartesian syllogism "I think therefore I am" gets transmuted into: I (or rather me as an AI) see evidence of others thinking, and "by gawd, 'I' can, too". Those "others" could be either other learning systems (AGI's) or some contact with discrete inputs from humans mediated by the artificial environment. [Wikipedia discussion of the "reverse Turing test"](https://en.wikipedia.org/wiki/Reverse_Turing_test)
The mention of dimensionality should provoke a discussion of what would be the required depth of representation of a "physics" of the world external to the AI. Some representation of time and space would seem necessary, i.e., some dimensional substructure for progress to goal attainment. The Blocks World was an early [toy problem](https://en.wikipedia.org/wiki/Toy_problem) whose solution provoked optimism in the 60's and 70's of last century that substantial progress was being made. I'm not aware of any effort to program in any pain or pleasure in the SHRDLU program of that era (no dropping blocks on the program's toes), but all the interesting science fiction representations of AI's have some recognition of "physical" adverse consequences in the "real world".
Edit: I'm going to add a need for "entities with features" in this environment that could be "perceived" (by any of the "others" that are interacting with the AGI) as the data input to efforts at induction, identification, and inference about relationships. This creates a basis for a shared "experience".
Upvotes: 2 <issue_comment>username_5: Though a good answer by @username_3, I'd agree with @zooby that a graph might be too simplistic. If humans were in an environment where the options were drown or take 5000 unrelated steps to build a boat, we'd never have crossed any seas.
I think any graph, if designed by hand, would not be complex enough for the agent within it to be called a general AI. The world would need enough in-between states that it would no longer be best described as a graph, but at least as a multidimensional space.
I think there are 2 points you'd have to consider. What is "simple" and when would you recognise it as a "general AI". I don't find self aware AI satisfactory, as we can't measure anything called awareness; we can only see its state and its interaction with the environment.
For 1. I'd pose that the world we live in is actually fairly simple. There are 4 forces of nature, a few conservation laws, and a bunch of particle types that explain most of everything. It's just that there are many of these particles and this has led to a rather complex world. Of course, this is expensive to simulate, but we could take some shortcuts. People 200 years ago wouldn't need all of quantum mechanics to explain the world. If we replaced protons, neutrons and the strong force with the atoms in the periodic table, we'd mostly be fine. Problem is we replaced 3 more general laws with 100 specific instances. For the simulated environment to be complex enough I think this trend must hold. We could replace trillions of particles governed by general laws with thousands of instances that have different properties when interacting with the agent, and I think more importantly, when interacting with each other.
Which brings me to 2. I think we'd only truly be satisfied with the agent expressing general AI when it can purposefully interact with the environment in a way that would baffle us, while clearly benefiting from it (so not accidentally). Now that might be quite difficult or take a very long time, so a more relaxed condition would be to build tools that we'd expect it to build, thus showing mastery of its own environment. For example, [evidence of boats](https://en.wikipedia.org/wiki/Boat) have been found somewhere between 100k and 900k years ago, which is about the same time-scale when [early humans developed](https://en.wikipedia.org/wiki/Homo_sapiens). However although we'd consider ourselves intelligent, I'm not sure we'd consider a boat making agent to have general intelligence as it seems like a fairly simple invention. But I think we'd be satisfied after a few such inventions.
So I think we'd need a Sim like world, that's actually a lot more complicated than the game. With 1000s of item types, many instances of each item and enough degrees of freedom to interact with everything. I also think we need something that looks familiar to acknowledge any agent as intelligent. So a 3D, complicated, minecraft-like world would be the simplest world in which **we would recognise** the emergence of general intelligence.
Upvotes: 2 |
2018/05/10 | 287 | 1,238 | <issue_start>username_0: I want my neural network structure to not have a circular/looping structure something similar like a directed acyclic graph (DAG). How do I do that?<issue_comment>username_1: The naive way is to generate connections randomly as you would for a cyclic graph, but then perform a test to reject any connections that form a cycle. This is the current approach in SharpNEAT and there has been some effort directed at improving the performance of the cycle test in the work-in-progress refactor branch.
One alternative would be to track the depth of all nodes, store a list of node IDs sorted by depth, and sample the connection endpoint nodes in such a way that the target node's depth is always higher than the source node's. Now that I think about it, that's probably the better method.
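A minimal sketch of that depth-based sampling idea (the `depths` dictionary is a hypothetical node representation for illustration, not SharpNEAT's actual data structure):

```python
import random

def sample_feedforward_connection(depths, rng=random):
    """Sample a (source, target) pair with depths[source] < depths[target].

    `depths` maps node id -> depth (inputs at depth 0).  Only sampling
    "downhill" pairs means a cycle can never be created, so no rejection
    test on the graph itself is needed.  Assumes not all depths are equal.
    """
    nodes = list(depths)
    while True:
        src, dst = rng.sample(nodes, 2)
        if depths[src] < depths[dst]:
            return src, dst

# two inputs (depth 0), two hidden (depth 1), one output (depth 2)
depths = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2}
src, dst = sample_feedforward_connection(depths)
assert depths[src] < depths[dst]
```

Note the remaining bookkeeping cost: whenever a mutation adds a node, the depths downstream of it may need to be recomputed.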
Upvotes: 2 <issue_comment>username_2: I also struggled with this when I was implementing NEAT.
What worked for me was cycle detection using DFS search in this video <https://www.youtube.com/watch?v=tg96sZqhXyU>
Simply put, I did DFS from all my input nodes, recording all the nodes visited; if I encounter a node I've already visited then there's a cycle, so I have my NEAT discard this connection and attempt to make another one.
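A minimal sketch of that DFS cycle test (the plain edge-list representation here is a stand-in for whatever connection genes a real NEAT implementation uses):

```python
def creates_cycle(connections, new_src, new_dst):
    """Return True if adding the connection new_src -> new_dst would close a cycle.

    Equivalent test: is new_src already reachable from new_dst?
    `connections` is a plain list of (src, dst) pairs.
    """
    if new_src == new_dst:
        return True
    adjacency = {}
    for s, d in connections:
        adjacency.setdefault(s, []).append(d)
    stack, visited = [new_dst], set()
    while stack:                       # iterative DFS from the proposed target
        node = stack.pop()
        if node == new_src:            # got back to the source: adding the edge closes a loop
            return True
        if node not in visited:
            visited.add(node)
            stack.extend(adjacency.get(node, []))
    return False

edges = [(0, 2), (1, 2), (2, 3)]
assert creates_cycle(edges, 3, 0)       # 3 -> 0 would close the loop 0 -> 2 -> 3 -> 0
assert not creates_cycle(edges, 0, 3)   # 0 -> 3 stays feed-forward
```

Calling this before accepting each mutated connection keeps the genome a DAG by construction.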
Upvotes: 1 |
2018/05/11 | 843 | 3,565 | <issue_start>username_0: I read about softmax from this [article](http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/). Apparently, these 2 are similar, except that the probability of all classes in softmax adds to 1. According to their last paragraph for `number of classes = 2`, softmax reduces to LR. What I want to know is other than the number of classes is 2, what are the essential differences between LR and softmax. Like in terms of:
* Performance.
* Computational Requirements.
* Ease of calculation of derivatives.
* Ease of visualization.
* Number of minima in the convex cost function, etc.
Other differences are also welcome!
I am asking for relative comparisons only, so that at the time of implementation I have no difficulty in selecting which method of implementation to use.<issue_comment>username_1: As written, SoftMax is a generalization of Logistic Regression.
Hence:
1. Performance: If the model has more than 2 classes then you can't compare. Given `K = 2` they are the same.
2. Computational Requirements: Hard to compare in the abstract — the requirements depend on the data, on having enough memory to hold it, and on enough time to let the training run.
3. Ease of Calculation of Derivatives: The cost function is a summation, hence once you do it for one element you do it for all.
4. Ease of Visualization: Well, it is easy to visualize the Confusion Matrix even for `K = 10` classes. So no issue here.
5. Cost Function: The cost function is convex, yet not strictly convex, hence there are an infinite number of minima.
Upvotes: 2 <issue_comment>username_2: I don't think that it's useful to differentiate logistic regression and softmax based on your terms. This is because you don't choose one or the other based on performance/computational requirements/ease of calculation of derivatives/...
The fact is that you use one or the other based on which is your problem.
If you need to recognize cat pictures vs. non-cat pictures you will use logistic regression (even with a very complex NN, the last step will always be a logistic regression). Of course, you could use softmax, but the outputs will be redundant, i.e. one output will always be one minus the other.
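As a quick check of that redundancy claim — and of the article's point that two-class softmax reduces to logistic regression — here is a minimal sketch in plain Python (the scores are arbitrary illustration values):

```python
import math

def softmax(zs):
    exps = [math.exp(z) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

z_cat, z_not_cat = 2.0, -0.5
p = softmax([z_cat, z_not_cat])

# Redundancy: the two outputs always sum to 1, so the second adds no information.
assert abs(sum(p) - 1.0) < 1e-12
# K = 2 reduction: two-class softmax is logistic regression on the score difference.
assert abs(p[0] - sigmoid(z_cat - z_not_cat)) < 1e-12
```

So with two classes the second softmax output is fully determined by the first, which is exactly why a single sigmoid suffices.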
If you need to recognize cat pictures vs. dog pictures vs. other pictures you will use softmax. Note that, in order to use softmax, you need to have only mutually exclusive classes. Mutually exclusive classes mean that an example cannot belong to multiple classes. What if a picture represents both a dog and a cat? In this case, it should be marked as *other picture*. If you want to avoid these you could use one more class to denote pictures with both cats and dogs.
However, if you want to recognize cats, dogs, birds, fishes, boats, houses, etc., the number of mixed classes that you need to include will grow very fast. When you are dealing with non-mutually exclusive classes you should use multitask learning. In this case, the sum of the outputs is no longer 1. In the simplest case, you could think of multitask learning as a shared NN where the last step is made by multiple logistic regressions. In more complex cases the last step could be made by a combination of different softmax and logistic regressions.
In conclusion:
* If you need to use non-mutually exclusive classes use **multitask learning**. Within multitask learning you will eventually use softmax regression and/or logistic regression.
* If you need to use more than two mutually exclusive classes use **softmax regression**.
* If you need to use only two exclusive classes use **logistic regression**.
Upvotes: 2 |
2018/05/12 | 2,089 | 8,135 | <issue_start>username_0: Why is it a bad idea to have a momentum factor greater than 1? What are the mathematical motivations/reasons?<issue_comment>username_1: If [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent) is like walking down a slope, momentum would be the literal momentum of the agent traversing the [hyperplane](https://en.wikipedia.org/wiki/Hyperplane).
Under that analogy, the momentum factor plays the role of one minus the [friction coefficient](https://simple.wikipedia.org/wiki/Coefficient_of_friction): a factor of 0 corresponds to maximum friction (the velocity is wiped out every step) and a factor of 1 to no friction at all.
You should be able to see why the factor can't leave that range: at exactly 1 there is no damping at all, so by conservation of energy gradient descent keeps rolling and never settles into a local [minimum](https://en.wikipedia.org/wiki/Maxima_and_minima); below 0 the velocity would flip direction every step; and above 1 the velocity grows on its own, so the trajectory blows up.
Upvotes: 1 <issue_comment>username_2: Let's talk about gradient descent!
Analogy:
--------
So you're standing on a mountain side, and you want to get to the lowest part of this mountain. You have a notepad with you.
Although actual physics-momentum would be a good analogy here, I'm not gonna use it.
You're somewhere on this mountainside and you figure out which way is down\*, and you jump once a couple of meters in that direction. How big one jump is depends on how steep the hill is (the length of the gradient), and how much extra you push with your feet. The first time, you decide not to push that much with your feet. SGD momentum comes in here: you write down in your notepad which direction you went, and how far (e.g. south, 4 meters).
**Note:** here the PHYSICAL momentum would represent the length of the gradient.
You repeat this until, after some time, you come to a place where there are only ways upwards.
Does this mean you hit the bottom? Not necessarily; you might have gotten stuck in a valley, or "local minima". You really want to get out of this valley, but all directions are upwards, so which way should you jump?
You now take out your notebook and notice that you've been jumping south east the last 40 steps, and pretty far. You then reason that it is likely that you want to go south east. So you jump south east with a lot of thrust from your feet: This is the intuition on what momentum does;
If you have a clear "pattern" of which way is down, then this should also count!
**Note:** the momentum only depends on the previous step, but the previous step depends on the steps before that and so on. This is just an analogy.
Maths:
------
For the maths, you just add a term that is the last gradient, times some constant.
`Heading(t)=γ Heading(t-1)+η Gradient(t)`
Where γ is the momentum factor and η is the learning rate.
[<NAME> blog on gradient descent](http://ruder.io/optimizing-gradient-descent/index.html#momentum) is brilliant to learn more details of the maths of it.
γ ≷ 1
-----
For mathematical conclusions:
`Heading(t)=γ Heading(t-1)+η Gradient(t)`
γ > 1: From the "expression", you could infer that this case would generate echoes: the gradient of the previous step would contribute more than the actual gradient. For the upcoming step this effect gets enhanced, and 10 steps down the road you're stuck going in one direction.
γ < 1 makes it "converge" to a "terminal velocity", if you like. It would depend on the preceding steps less and less instead of more and more.
These effects are pretty clear in the equation you find at [Ruders blog](http://ruder.io/optimizing-gradient-descent/index.html#momentum)
If your momentum term was greater than one, then the notebook would overcome the actual gradient. After a few steps, you wouldn't even look at the hill; you'd go "I've only gone east so far, so I'll just continue east" with your jumps getting longer and longer. This is not good.
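The γ ≷ 1 behaviour above is easy to check numerically. Here is a minimal sketch on f(x) = x² (the learning rate, starting point, and step count are arbitrary illustration choices, not anything prescribed):

```python
def run_momentum(gamma, lr=0.1, steps=500):
    """Gradient descent with momentum on f(x) = x**2 (gradient 2*x).

    Returns the final x and the largest |x| seen along the way,
    so divergence is easy to detect.
    """
    x, velocity, worst = 5.0, 0.0, 5.0
    for _ in range(steps):
        # Heading(t) = gamma * Heading(t-1) + eta * Gradient(t)
        velocity = gamma * velocity + lr * (2.0 * x)
        x -= velocity
        worst = max(worst, abs(x))
    return x, worst

x_good, _ = run_momentum(gamma=0.9)
_, worst_bad = run_momentum(gamma=1.1)
assert abs(x_good) < 1e-6   # gamma < 1: settles into the minimum at x = 0
assert worst_bad > 1e6      # gamma > 1: the jumps feed on themselves and blow up
```

With γ just under 1 the trajectory oscillates but the oscillation is damped; with γ over 1 each jump is longer than the last, exactly as in the notebook analogy.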
In conclusion
-------------
A high momentum term would lead you in the wrong direction (blow up and always go in the same direction), and/or oscillate around the global minima (making you jump too far).
Hope it helps :)
\*: Strictly speaking, we're finding the "up" direction and go the opposite way. The "up" direction is the gradient.
Upvotes: 0 <issue_comment>username_3: If you want only the answer to your question in particular, you can skip to the last part of the answer. To answer in detail momentum is a technically incorrect term, I would rather call it [inertial](https://en.wikipedia.org/wiki/Inertia) learning.
**Inertia -** Inertia is the resistance of any physical object to any change in its position and state of motion.
First the equation of weight change in the momentum learning method at a particular iteration is given by the equation:
[](https://i.stack.imgur.com/l8jkA.png)
Where **`beta`** will be the momentum term. If we expand the expression we get something like:
[](https://i.stack.imgur.com/QTtc9.png)
Courtesy: [Stochastic Gradient Descent with momentum](https://towardsdatascience.com/stochastic-gradient-descent-with-momentum-a84097641a5d)
Here **`S_t`** are the gradients or **`dels`** for a particular training example. Clearly this is for a 3 example training set.
Now why do we use momentum?
As @username_2 has noted, the [link](http://ruder.io/optimizing-gradient-descent/index.html#momentum) lets you easily delve into the mathematics of its usage. But to build a more intuitive sense, here are a few points to note:
* The exponentially weighted terms can be thought of as past memory of what you learned. You don't want to forget it completely, so you keep on revising it with weight-age of revising it decreasing over time. The vector updation for an already iterated training example keeps getting smaller and smaller, whereas it is not present altogether in normal gradient descent.
* The momentum term can be thought to play a damping role, it is not allowing the new training example to have its way completely. You can visualize this by taking 2 points and a straight line, with the updation scheme directly proportional to the distance between the line and the points and then check for both normal and momentum gradient descent methods. Thus gradient descent with momentum is a damped oscillation and thus always has a higher chance of converging.
* The inertial learning also helps when you come to a point in your loss curve where slope is 0. Normal learning will result in very small weight updates in this position, but with inertial learning this position will be easily crossed.
As for your original question of why momentum term <1, here are a few points which most answers have missed:
* First and foremost, if **`beta > 1`**, the weightage of previous training examples will increase exponentially (like **`1.01^1000 = 20959`** after just 1000 iterations). That may be handled by increasing the **`learning rate`** accordingly, but not only would it require a lot of extra computation, it is almost mathematically impossible.
* Second, a geometric series with common ratio **`r >= 1`** never converges; it just keeps getting bigger. Also, if you can draw parallels with continuous functions, this is what we call a function which is not an [absolutely integrable function](https://en.wikipedia.org/wiki/Absolutely_integrable_function).
* Also, as per our previous intuition, why would one want to give high weightage to things learned long ago? It may not even be important if you follow an online learning method (you look at each training example only once due to the high number of training examples).
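The geometric growth and decay in the first two bullets is easy to verify numerically, including the `1.01^1000 = 20959` figure quoted above — a minimal sketch:

```python
def momentum_weight(beta, steps_ago):
    """Weight the expanded momentum sum assigns to a gradient seen
    `steps_ago` iterations back: simply beta**steps_ago."""
    return beta ** steps_ago

# beta > 1: the weight on old gradients grows without bound
# (this is the 1.01^1000 figure from the first bullet).
assert round(momentum_weight(1.01, 1000)) == 20959

# beta < 1: old gradients fade, and the total weight sum(beta**k)
# stays finite at the geometric-series limit 1 / (1 - beta).
total = sum(momentum_weight(0.9, k) for k in range(10_000))
assert abs(total - 1 / (1 - 0.9)) < 1e-9
```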
All this leads to a single conclusion: if **`beta >= 1`**, there will be a large amount of oscillation and the error will keep increasing exponentially (this can probably be proven by rigorous mathematical analysis). Although it might work for **`beta = 1`** (due to the [Perceptron Convergence Theorem](http://www.cs.ubbcluj.ro/~csatol/kozgaz_mestint/4_neuronhalo/PerceptConvProof.pdf))
Upvotes: 1 |
2018/05/14 | 2,149 | 8,430 | <issue_start>username_0: I need to design an algorithm such that it handles the request for shift swapping.
The algorithm will recommend a list of people who are more likely to swap that shift with the person by analyzing previous data.
Can anyone list the techniques that will help me to do this or a good starting point?
I was thinking about training a Naive Bayes Classifier and using Mahout for generating recommendations.
2018/05/15 | 1,700 | 6,970 | <issue_start>username_0: I know that one of the recent fads right now is to train a neural network to generate screenplays and new episodes of e.g. the Friends or The Simpsons, and that's fine: it's interesting and might be the necessary first steps toward making programs that can actually generate sensible/understandable stories.
In this context, can neural networks be trained specifically to study the structures of stories, or screenplays, and perhaps generate plot points, or steps in the Hero's Journey, etc., effectively writing an outline for a story?
To me, this differs from the many myriad plot-point generators online, although I have to admit the similarities. I'm just curious if the tech or the implementation is even there yet and, if it is, how one might go about doing it.<issue_comment>username_1: As far as I am aware, this has not been done yet.
I see several problems with this. A neural network is basically a classifier, which matches an input to an output. Both input and output are usually numerical values, though they could be matched to concepts or words.
To train a NN you provide an appropriately encoded input, and the corresponding output. The NN learns the associations between the two, and can then classify unseen input accordingly. This has recently been used to transform images in a particular style etc.
What would the input and output be for generating screenplays? You could use previous scripts as inputs, but what would the output be? It could be narrative 'moves' of some sort, perhaps. So you could train an NN to recognise narrative elements from screenplays.
However, you are still not creating anything, but just recognising stuff. You would need some other input. I guess you could train an NN on "The Simpsons", get a narrative structure, and then present it with an Episode of "Friends" and see what happens. It won't be a new episode of a screenplay, though.
The other way round might work: you feed it narrative moves (a kind of story skeleton), and get a script out. But it would need a lot of (human) post-editing to be at all useful.
I think an NN is the wrong tool to use here. There has been work done with generating stories and screenplays, even way back in the early days of AI. But that was all based on symbolic AI, not on the kind of ML which seems to currently be *en vogue*. Have a look at [James Ryan's website](https://www.jamesryan.world/projects#/storygen-history/); he has recently written an overview over historic approaches to story (and screenplay) generation.
Upvotes: 1 <issue_comment>username_2: As far as I know, there isn't any system like you describe yet. However, there are some interesting approaches to narrative intelligence that can be found at the University of New Orleans Narrative Intelligence Lab site: <https://nil.cs.uno.edu/>
Hopefully those can be helpful in guiding a deep-learning approach to narrative generation problems.
Upvotes: -1 <issue_comment>username_3: Ideally, yes. Ideally, because the network should be fed with the words of an entire book (wich vary around 100k words). With an hypotetical amount of processing power, you should be able to just train the NN with like thousands of books.
It might be possible to be trained with quantum computers.... who knows...
For smaller stories, I think that the major problem is to know in what "shape" should the story be generated. Because if it simply outputs some words, then the first thing the network should be able to do is speaking, that means the model should evolve from a pretrained NLP model, and (from what I know) we still have some problems with that.
So.... I really think that to do such kind of things, the approach we take to make NNs learn should be changed.
The fact that humans exist proves that genetic algorithms would work 100%. But we obviously don't have 3+ billion years to evolve a "brain" from scratch, that's why we use training algorithms: we *force* them to learn from something.
But back to the question: humans do a lot of work by thinking about what outcome to choose. To just make a network generate an outcome, without imitating humans, it would be easy to just randomly choose some aspects of this outcome. For example, a randomly chosen outcome might be "outcome: Dennis dies, Morty kills Eminem, sad scene, happy scene, the end".
That means that the NN or any ML model doesn't actually produce an outcome to the story. In fact, what it does is to connect some generated "checkpoints" about that story.
Actually, you might train a model to generate such checkpoints, but this is just a random idea from a newbie, so I've got no clue about how to actually implement it.
I'm italian b.t.w., sorry about my english :)
Upvotes: 0 <issue_comment>username_4: The answer is yes, an AI can be trained to write even a whole story. I just want to tell you right off the bat that an AI already did something even more difficult than generating a story. I'm talking about that thing at the end of my explanation.
All the links in my explanation lead to external sources that I found; you can go check them. Without further ado, here are the main reasons why I think AIs can generate the outline of a story:
1. AIs are really good at recognizing patterns and generating things that are similar to others. Surprisingly, there are a lot of [patterns in stories](https://blog.writingacademy.com/story-patterns/). Stories are always structured, so this part isn't the real problem. There's a great Wikipedia article about the [seven basic plots](https://en.wikipedia.org/wiki/The_Seven_Basic_Plots).
2. But even if an AI can generate a good story structure, can it make a story appealing? Well, it depends on how big the "brain" of the AI is, because it turns out that the more neurons and synapses an AI has, the more it can "understand" human language or emotions. So, if an AI has a big enough brain, it can generate text that makes sense. Here's the best example of an AI being able to generate human-like text: <https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html>.
As for the how, I think the training data matters a great deal: to train an AI like that, we need a lot of examples. This is possible because movie screenplays are public and can be downloaded by anyone, so an AI can easily learn from this huge amount of screenplays. Here are some examples of websites where we can get screenplays of movies: <https://stephenfollows.com/resource/sites-to-find-movie-scripts/>, <https://www.simplyscripts.com/movie-screenplays.html>.
After that, we just need to format the data, so we can give it to our AI. In my opinion, it's completely possible to make a good AI that writes good stories, because Google already did something similar. I think that the [chatbot Meena](https://www.youtube.com/watch?v=3Wppf_CNvD0), created by Google, is the proof that an AI can learn way more than just pattern recognition.
Upvotes: 0 |
2018/05/18 | 357 | 1,545 | <issue_start>username_0: To train an RNN, you need to unroll it and feed in the history of inputs and the history of expected outputs.
This doesn't seem like a realistic picture of the brain, since it would require, for example, the brain to store a perfect history of every sensory input for many time steps.
So is there an alternative to RNNs that doesn't require this history? Perhaps storing differences or something? Or storing some accumulator?
Perhaps there is a way to calculate with RNNs that doesn't require keeping hold of this history?<issue_comment>username_1: Recurrent neural networks act on a sequence of inputs, which does not need to be a time sequence; for example, consider a sequence of characters, like a passage or a book. Once trained on a sequence of inputs, you could predict the previous and next values of an input vector at a certain time step.
Upvotes: 0 <issue_comment>username_2: You don't necessarily need to roll out the inputs to an RNN; doing so makes it easier to optimize computation (if the sequence length is the same each batch), but it's not a necessity. Furthermore, RNNs (and, incidentally, the brain) don't necessarily remember the input history as-is; rather, the history is encoded via the RNN's cell state (or states, in the case of LSTMs and other RNN cell architectures with multiple states). Neural Turing Machines (NTMs) and Differentiable Neural Computers expand on that concept by also using a larger "memory" storage (in the form of a matrix).
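To make the state-encoding point concrete, here is a minimal vanilla-RNN sketch in numpy (the sizes and random weights are arbitrary, purely illustrative): at each step only the current input and the carried hidden state are needed, never the full input history.

```python
import numpy as np

rng = np.random.default_rng(0)

# A minimal vanilla RNN cell: the entire input history is compressed
# into the hidden state h, so each step only needs (x_t, h).
n_in, n_hidden = 3, 4
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
b_h = np.zeros(n_hidden)

def step(x_t, h):
    """One forward step: no stored history, just the carried state."""
    return np.tanh(W_xh @ x_t + W_hh @ h + b_h)

h = np.zeros(n_hidden)      # initial state
for t in range(10):         # stream inputs one at a time
    x_t = rng.normal(size=n_in)
    h = step(x_t, h)        # h now summarizes everything seen so far

print(h.shape)  # (4,)
```

Training still needs gradients through time (hence unrolling in practice), but inference only ever carries this fixed-size state forward.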
Upvotes: 1 |
2018/05/19 | 568 | 2,303 | <issue_start>username_0: What is the current research in artificial intelligence and machine learning in the field of data compression?
I have done my research on the [PAQ series of compressors](https://en.wikipedia.org/wiki/PAQ), some of which use neural networks for context mixing.<issue_comment>username_1: Recurrent neural networks can be trained on character level data to generate sentences which are very similar to human language. Go through this [link](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). You could experiment with them for compressing text.
Upvotes: 0 <issue_comment>username_2: It is already combined.
Adaptive entropy techniques are already used in most of the best compression encoders. This is true for file encoders, video encoders, and audio encoders. We use it in the solar lab to optimize sample rates in data acquisition.
In fact, pattern recognition and compression are very tightly coupled if you consider autoencoders and other feature extraction schemes and compare them mathematically with what compression does. See [*Data Compression – A Generic Principle of Pattern Recognition?*, <NAME>, <NAME>, VISIGRAPP 2008](https://link.springer.com/chapter/10.1007/978-3-642-10226-4_16)
Zlib and LZ4 do something more like hyper-parametric learning; however, they don't persist what they learn. This work is interesting: [*Adaptive On-the-Fly Compression*, <NAME>, <NAME>, IEEE Parallel and Distributed Systems, v17 n1, January 2006](https://www.cs.ucsb.edu/~ckrintz/papers/acejournal.pdf).
**Suggested Project:**
Create a theoretical framework and software POC that learns correlations between these two sets:
1. Quickly ascertainable features of documents or audio or video streams (i.e file path components, media titles, date, file type, and first N bytes)
2. The parameters that existing open-source compression software learns while running its pattern recognition algorithms
Persisting those correlations between compression invocations may considerably improve file transfer, kernel operations (since LZ4 is now native in kernels like Linux), and media streaming.
How much effort is made to persist features extracted (pattern recognition) between frames in media streaming is worth investigating too.
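On a much smaller scale, zlib's preset-dictionary API already allows persisting "learned" redundancy between compression invocations: patterns taken from previously seen, similar data can prime later compressions. A hedged sketch (the corpus and message below are invented for illustration):

```python
import zlib

# "Learn" from previously seen, similar documents by using their common
# content as a preset dictionary for future compressions.
training_corpus = b"for i in range(10): print(i) def main(): return 0 import os import sys "
message = b"def main(): return 0"

plain = zlib.compress(message, 9)

co = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS, zdict=training_corpus)
primed = co.compress(message) + co.flush()

print(len(plain), len(primed))  # the primed stream is smaller here

# Decompression needs the same dictionary, i.e. the persisted "knowledge".
do = zlib.decompressobj(zdict=training_corpus)
assert do.decompress(primed) == message
```

The proposed project generalizes this: instead of a hand-picked dictionary, the "dictionary" would be chosen automatically from quickly ascertainable features of the input.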
Upvotes: 1 |
2018/05/19 | 659 | 2,809 | <issue_start>username_0: Let's say you want to do AI research and publish some papers just on your own. Would you send them to an AI journal using just your name? Which AI journals are recommended?<issue_comment>username_1: Let me preface this by acknowledging that this question is prone to opinion. This answer is, insofar as possible, primarily *observation*-based.
---
My understanding is that, when it comes to publishing in a journal, doing so as an individual (without backing from an institution) is, in general, going to be frowned upon.
As you may know: scientific research, including AI research, is generally subject to [peer review](https://en.wikipedia.org/wiki/Scholarly_peer_review). When it comes to journals, this is mandatory. The review process attempts to enhance and preserve the integrity of the information published. As an additional safeguard, submissions will often be expected to be backed by an endorsement from someone within the academic community (typically an institution).
**However**, and as Pasaba correctly points out (in his comment on the OP), research and publishing are not necessarily the same thing. Furthermore, not having an endorsement does not stop you from making a contribution to the field.
For example, you can publish code and/or articles on websites such as GitHub, and engage with communities of professional and hobbyist researchers around the web (e.g. this Stack Exchange).
Note that there is also some scope for endorsement without being a direct member of a research institution. For example: arXiv, whilst not strictly a journal, is an open archive that supports endorsement by [request](https://arxiv.org/help/endorsement).
Without knowing your circumstances, it's hard to know exactly what to do. However, my general advice is to find and engage with communities, and build a network of collaborative peers, rather than trying to succeed in an isolated fashion.
Upvotes: 3 [selected_answer]<issue_comment>username_2: To complement username_1's response and maybe address the "*using just your name*" part of your question, I would like to acknowledge that biased towards submissions do exist. Reviewers may be biased due to several reasons, such us authors' age, publications record, gender, nationality [(Lotfi and Mahian, 2014)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4214948/).
If you are concerned about these aspects, rest assured that there are mechanisms to ensure that the authors' reputation does not influence reviewers' judgments. A good example is the "[Double-Blind Review](https://www.journals.elsevier.com/social-science-and-medicine/policies/double-blind-peer-review-guidelines)" process, which means that identities from the author(s) and reviewer(s) are concealed throughout the review process.
Upvotes: 2 |
2018/05/19 | 938 | 3,475 | <issue_start>username_0: The [ReLU](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) activation function is defined as follows
$$y = \operatorname{max}(0,x)$$
And the linear activation function is defined as follows
$$y = x$$
The ReLU nonlinearity just clips values less than 0 to 0 and passes everything else. Then why not use a linear activation function instead, as it will pass all the gradient information during backpropagation? I do see that parametric ReLU (PReLU) provides this possibility.
I just want to know if there is a proper explanation for using ReLU as the default, or whether it is just based on the observation that it performs better on training sets.<issue_comment>username_1: The ReLU is a **non-linear** activation function. Check out this [question](https://ai.stackexchange.com/questions/5601/mathematical-intuition-for-the-use-of-re-lus-in-machine-learning) for the intuition behind using ReLUs (also check out the comments). There is a very simple reason why we do not use a linear activation function.
Say you have a feature vector $x\_0$ and weight matrix $W\_1$. Passing through a layer in a neural net will give the output as
$W\_1^T \* x\_0 = x\_1$
(dot product of weights and input vector). Now passing the output through next layer will give you
$W\_2^T \* x\_1 = x\_2$
So expanding this we get
$x\_2 = W\_2^T \* W\_1^T \* x\_0 = (W\_1 \* W\_2)^T \* x\_0 = W\_{compact}^T \* x\_0$
Thus as you can see there is a linear relationship between input and output, and the function we want to model is generally non-linear, and so we cannot model it.
You can check out my [answer](https://ai.stackexchange.com/questions/5493/what-is-the-purpose-of-an-activation-function-in-neural-networks/5521#5521) here on non-linear activation.
Parametric ReLU has a few advantages over normal ReLU. Here is a great [answer](https://datascience.stackexchange.com/questions/5706/what-is-the-dying-relu-problem-in-neural-networks) by @NeilSlater on the same. It basically tells us that if we use ReLUs we may end up with a lot of redundant or dead nodes in a neural net (those with negative pre-activations), which do not contribute to the result and pass no gradient. Thus, to approximate a function we would require a larger NN, whereas parametric ReLUs absolve us of this problem (allowing a comparatively smaller NN), as negative-output nodes do not die.
**NOTE:** `alpha = 1` will be a special case of parametric ReLU. There must be a balance between the amount of liveliness you want in the negative region and the linearity of the activation function.
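The collapse described above is easy to verify numerically. A minimal sketch with random matrices (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
x0 = rng.normal(size=5)

W1 = rng.normal(size=(5, 4))   # layer 1: 5 inputs -> 4 units
W2 = rng.normal(size=(4, 3))   # layer 2: 4 inputs -> 3 units

# Two linear layers applied in sequence...
x2 = W2.T @ (W1.T @ x0)

# ...are exactly one linear layer with a "compact" weight matrix.
W_compact = W1 @ W2
assert np.allclose(x2, W_compact.T @ x0)

# With a nonlinearity (ReLU) in between, no such collapse exists.
relu = lambda v: np.maximum(0, v)
x2_nl = W2.T @ relu(W1.T @ x0)
print(np.allclose(x2_nl, W_compact.T @ x0))  # False unless the ReLU happened to pass everything through
```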
Upvotes: 4 [selected_answer]<issue_comment>username_2: A multi-layer network in which all units have linear activation functions can always be collapsed to an equivalent network with two layers of units. That is why it is essential to use nonlinear unit activation functions.
The underlying reason for using nonlinear activation functions involves a remarkable theorem of Cybenko (1989), which states that one layer of nonlinear hidden units is sufficient to approximate any mapping from input to output units. Actually, I think there is a later proof which specifies that the nonlinearity can be any non-polynomial function (e.g. sigmoidal).
This text is based on the book: [Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning](https://jim-stone.staff.shef.ac.uk/BookBayes2012/books_by_jv_stone/).
Upvotes: 2 |
2018/05/21 | 910 | 2,114 | <issue_start>username_0: I am trying to track LIDAR objects using Kalman filter. The problem is that the innovation has the value 0, which makes the Kalman gain be Infinity. [Here](https://en.wikipedia.org/wiki/Kalman_filter) is a link with the Kalman equations. The values with which I initialized the measurement and process covariance matrix are listed below. The update code is also shown below. When I debug the code everything is fine until the innovation becomes 0.
```
this->lidar_R << std_laspx_, 0, 0, 0,
0, std_laspy_, 0, 0,
0, 0, 0, 0,
0, 0, 0, 0;
this->lidar_H << 1.0, 0.0, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0;
P_ << 1000, 0, 0, 0, 0,
0, 1000, 0, 0, 0,
0, 0, 1000, 0, 0,
0, 0, 0, 1000, 0,
0, 0, 0, 0, 1000;
MatrixXd PHt = this->P_ * H.transpose();
//S becomes 0
MatrixXd S = H * PHt + R;
//S_inv becomes INFINITY
MatrixXd S_inv_ = S.inverse();
MatrixXd K = PHt * S_inv_;
VectorXd y = Z - Hx;
this->x_ = this->x_ + K*y;
MatrixXd I = MatrixXd::Identity(x_.size(), x_.size());
this->P_ = (I - K * H) * this->P_;
```<issue_comment>username_1: Check whether the determinant of S is zero before you do the inverse. If it is, use the pseudo-inverse instead.
Upvotes: 0 <issue_comment>username_2: From your matrix definitions, the issue is that S is always singular, so it can never be inverted.
I reimplemented the computation with numpy and here are the numbers
```
H = np.array([[1.0, 0.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0]])
P = np.identity(5) * 1000
R = np.array([[100, 0, 0, 0], [0, 100, 0, 0], [0,0,0,0], [0,0,0,0]])
S=H.dot(P.dot(np.transpose(H)))+R
```
and `S` is
```
array([[1100., 0., 0., 0.],
[ 0., 1100., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.]])
```
Basically, the all-zero rows in the definition of `H` cancel part of the information (making those states unobservable), and you also do not have any observation noise component for them.
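Continuing the numpy example, one possible fix (a sketch, not the only option) is to keep only the rows of `H` and `R` that correspond to actual measurements, here px and py, so that `S` becomes full rank and the gain is well-defined:

```python
import numpy as np

P = np.identity(5) * 1000

# Keep only the rows that correspond to real measurements (px, py).
H = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0, 0.0]])
R = np.diag([100.0, 100.0])      # measurement noise for px, py only

S = H @ P @ H.T + R              # now 2x2 and invertible
K = P @ H.T @ np.linalg.inv(S)   # well-defined Kalman gain

print(S)
# [[1100.    0.]
#  [   0. 1100.]]
```

The measurement vector then also shrinks to the two observed components, and the remaining states are updated only through their correlations in `P`.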
Upvotes: 2 [selected_answer] |
2018/05/22 | 1,884 | 8,217 | <issue_start>username_0: I used the example at - <https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/5_DataManagement/tensorflow_dataset_api.py> - to create my own classification model. I used different data but the basic outline of datasets was used.
It was important for my data type to shuffle the data and then create the training and testing sets. The problem, however, comes as a result of the shuffling.
When I train my model with the shuffled train set I get a +- 80% accuracy for train and +- 70% accuracy for the test set. I then want to input all the data (i.e. the set that made the training and test set) into the model to view the fully predicted output of this data set that I have.
If this data set is shuffled as the training and testing set was I get an accuracy of around 77% which is as expected, but then, if I input the unshuffled data (as I required to view the predictions), I get a 45% accuracy. How is this possible?
I assume it's due to the fact that the model is learning incorrectly and that it learns that the order of the data points plays a role in the prediction of those data points. But this shouldn't be happening as I am simply trying to (like the MNIST example) predict each data point separately. This could be a mini-batch training problem.
In the example mentioned above, using data sets and batches to train, does the model learn from the average of all the data points in the mini-batch or does it think one mini-batch is one data point and learn in that manner (which would mean order matters of the data)?
Any other suggestions are welcome.<issue_comment>username_1: >
> if I input the unshuffled data (as I required to view the predictions) I get a 45% accuracy. How is this possible?
>
>
>
When you build your dataset and then split it into training, test, and validation sets, you have to make sure the training set covers the same aspects of the data as the test and validation sets (and more). That's why shuffling the data is important.
In the case where you didn't shuffle your dataset, your NN was confronted with objects it had never seen before (i.e. not in the training set) and therefore couldn't classify them properly.
>
> does the model learn from the average of all the data points in the mini-batch or does it think one mini-batch is one data point and learn in that manner (which would mean order matters of the data)?
>
>
>
When using mini-batches, you decide the size (`b_size`) of the mini-batch and the number of mini-batches (`n_batch`). Then, `n_batch` times, you draw `b_size` random indices from the `X_train` and `y_train` arrays, build `X_batch` and `y_batch` from those indices, and tune your model parameters on these two arrays.
So shuffling your dataset and taking random parts of it to train your model ensures that you get rid of correlations between any two points of your dataset and achieve better generalization.
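As a small sketch of the sampling procedure just described (the data here is synthetic; `X_train`/`y_train` stand in for your real arrays):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the training arrays mentioned above.
X_train = rng.normal(size=(1000, 8))
y_train = rng.integers(0, 2, size=1000)

b_size, n_batch = 32, 5
for _ in range(n_batch):
    # Draw b_size random indices, then slice out the mini-batch.
    idx = rng.choice(len(X_train), size=b_size, replace=False)
    X_batch, y_batch = X_train[idx], y_train[idx]
    # ... tune model parameters on (X_batch, y_batch) here ...
    assert X_batch.shape == (b_size, 8)
```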
Upvotes: 1 <issue_comment>username_2: **Problem Statement**
These are the features of the runs.
* CNN class prediction using two 2D convolutions with associating max pooling
* mini-batch execution approach
* fixed shuffling process used
These are the results.
* shuffling obtained 80%/70% train/test accuracy
* shuffling for full set obtained 77% accuracy
* no shuffling for full set obtained 45% accuracy
**Listed Causes**
These are the potential Causes for the apparent anomaly that were listed in the question.
* model is learning incorrectly
* learns that the order of the data points plays a role in their prediction
* data point not predicted separately because of mini-batching
**Other Causal Possibilities not Listed**
Notice that both of these additional possibilities are related to an insufficiency in the simulation of randomness, just as can be the case in cryptographic protocols.
* The CNN learns the shuffling system or some aspect of it so that when the shuffling is removed, the training no longer applies to the input patterns
* How the training and testing samples are drawn is not sufficiently random
**Additional Questions**
Does the model learn from the average of all the data points in the mini-batch? — Yes.
Does it think one mini-batch is one data point? — No. It doesn't think, and the loop does not average the data points before the propagation. Mini-batch simply aggregates the results before back-propagating the correction signal to the parameter tensors.
Does order matter? — Order cannot matter in a stateless system, but often does if there is state remembered between discrete events. Mini-batch requires averaging, which requires statefulness to accumulate the addends. But that is not the likely cause. How the batches are selected from the sample is a more likely factor affecting accuracy.
**Principles to Comprehend**
The convergence of artificial networks in general is based on the statistical characteristics of the training scenario matching the statistical characteristics of the usage scenario. In other words, to use PAC (probably approximately correct) framework terminology, how the training sample is drawn from the total population must be identical to how the validation sample is drawn from the total population. Therefore, if the training sample is not drawn with sufficient randomness from the total population, convergence cannot be guaranteed.
**Questions to Consider**
* How am I deciding the individual operations within the shuffling?
* How am I drawing the train and test samples?
* How am I deciding what samples go in what batch?
* What natural order is in the data examples, and is it really a sequence rather than a set?
* If a sequence, then is a classic CNN, not designed out of the box to handle temporal sequences, the correct network design to apply?
Answering these questions and gaining a full conceptual understanding of the probability and statistics aspects of the approach should occur prior to thinking about normalization, which could fix your problem accidentally, but cannot be the root cause of the anomaly.
Upvotes: 0 <issue_comment>username_3: The example followed in the question uses a relatively straightforward Convolutional Neural Network. These are not stateful, so the order in which predictions for instances from a test set are queried should have no influence on those predictions.
In a comment written by the author of the question on their own question, it was mentioned that the use of Batch Normalization appears to have been confirmed to be the cause of the issue. Given this info, one possible cause of the issue described in the question is incorrect usage of the `training` flag of TensorFlow's Batch Normalization implementation. The [official documentation](https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization) contains the following info on this flag:
>
> **`training`**: Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (normalized with statistics of the current batch) or in inference mode (normalized with moving statistics). **NOTE**: make sure to set this parameter correctly, or else your training/inference will not work properly.
>
>
>
If this is incorrectly set to `True` rather than `False` outside of the training phase (i.e. when evaluating performance), predictions can be expected to be poor. This alone doesn't explain why specifically the order of test data would matter though, if this alone were the issue then we'd expect test performance to be poor regardless of order.
A different possible explanation can be that there is a mistake in the code that still causes `moving_mean` and `moving_variance` ops of the Batch Normalization to be updated during the testing/evaluation. These should only be updated during the training phase, as explained in the documentation linked to above. If they are still getting updated during the test phase, and **if there is a meaningful structure in the unshuffled ordering of the test set** (i.e. unshuffled test set ordered by class, or ordered according to certain features, etc.), **then we would expect precisely the issue described in the question to occur**.
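To illustrate why the flag matters independently of TensorFlow's particular implementation, here is a minimal, framework-agnostic sketch of batch normalization (no learned scale/shift; class name, momentum, and numbers are invented for illustration):

```python
import numpy as np

class BatchNorm1D:
    """Minimal batch norm (no scale/shift) to illustrate the training flag."""
    def __init__(self, dim, momentum=0.9, eps=1e-5):
        self.moving_mean = np.zeros(dim)
        self.moving_var = np.ones(dim)
        self.momentum, self.eps = momentum, eps

    def __call__(self, x, training):
        if training:
            mean, var = x.mean(axis=0), x.var(axis=0)
            # Moving statistics are updated ONLY in training mode.
            self.moving_mean = self.momentum * self.moving_mean + (1 - self.momentum) * mean
            self.moving_var = self.momentum * self.moving_var + (1 - self.momentum) * var
        else:
            mean, var = self.moving_mean, self.moving_var
        return (x - mean) / np.sqrt(var + self.eps)

rng = np.random.default_rng(1)
bn = BatchNorm1D(4)
for _ in range(200):                       # "training": accumulate moving stats
    bn(rng.normal(loc=5.0, size=(32, 4)), training=True)

test_batch = rng.normal(loc=5.0, size=(8, 4))
frozen = bn(test_batch, training=False)    # correct: uses frozen moving stats
leaky = bn(test_batch, training=True)      # wrong at test time: uses batch stats
                                           # AND corrupts the moving stats
```

If test batches are ordered (e.g. by class), the "wrong" path normalizes each batch by its own skewed statistics, which is exactly the order-dependent degradation described in the question.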
Upvotes: 1 |
2018/05/22 | 583 | 2,529 | <issue_start>username_0: Various texts on using CNNs for object detection in images talk about how their translation invariance is a good thing. Which makes sense for tasks where the object could be anywhere in the image. Let's say detecting a kitten in household images.
But let's say, you already have some information about the likely position of the object of interest in the image. For example, for detecting trees in a dataset of images of landscapes. Here in *most cases*, the trees are going to be in the bottom half of the image while in *some cases* they might be at the top (because it's on a hill or whatever). So you want your neural network to learn that information -- that trees are likely connected to the bottom part of the image (ground). Is this possible using the CNN paradigm?
Thank you<issue_comment>username_1: I think your assumption about the location of trees in images is quite incorrect. Just do a Google image search for "landscape" (if you haven't already) and you will see almost as many images where the trees occupy the top part of the image as images where they lie only in the middle and bottom parts.
Talking about the CNN, it automatically learns (that's the beauty!) the properties of an object present in the training images. By properties I mean the object's likely position, its shape, color, etc. If you visualize a CNN layer's output (mostly later layers) using [class activation maps](https://jacobgil.github.io/deeplearning/class-activation-maps), you can see what the CNN has learnt and is paying attention to. Also, you can [visualize the filters](https://jacobgil.github.io/deeplearning/filter-visualizations) (or kernels) that are learned by the CNN.
Upvotes: 1 <issue_comment>username_2: According to your example:
Trees will likely be in the bottom half of the image. Still, you will not know whether there will be one, two, or five trees. Thanks to the translation invariance property of CNNs, each tree will activate the filters responsible for tree detection. You still need to handle those few exceptions where trees are on a hill.
To achieve better results in this particular case, you might want to consider some kind of focus mechanism that tries to get rid of unwanted parts of the picture when, for example, there are no hills. Take a look at [Spatial Transformer Networks](https://arxiv.org/abs/1506.02025). During training, such a network learns to predict a spatial transformation (for example, a zoom) that helps the "main" classifier predict the class of the image.
Upvotes: 2 |
2018/05/23 | 2,607 | 10,659 | <issue_start>username_0: I want to use computer vision to allow my robot to detect the corners of a soccer field based on its current position. Matlab has a detectHarrisFeatures feature, but I believe it is only for 2D mapping.
The approach that I want to try is to collect the information of the lines (using line detection), store them in a histogram, and then see where the lines intersect based on their angles.
My questions are:
>
> 1. How do I know where the lines intersect?
> 2. How do I find the angles of the lines using computer vision?
> 3. How do I update this information based on my coordinates?
>
>
>
I am in the beginning stages of this task, so any guidance is much appreciated!<issue_comment>username_1: Finding lines in an image often leads to the Hough line transform. Many libraries implement it, including [OpenCV](https://docs.opencv.org/3.4.1/d6/d10/tutorial_py_houghlines.html). Getting the lines should answer the subsequent questions (and if it doesn't, please consider having one question per post; some other sites on [StackExchange](https://stackexchange.com/) may be better suited than AI.SE).
Alternative approaches based on Machine Learning may also exist. It may be interesting at this point to look into libraries that do human pose/gait recognition. Some library may be repurposed to recognize the "field gait", assuming the field outer mark is its "gait"-equivalent.
Upvotes: 2 <issue_comment>username_2: I assume that you are familiar with homogeneous transformations and the meaning of *global* and *local* coordinate frames. If not, global frame is the fixed frame; a reference frame for your whole problem, such as the starting position of your robot. Local frame should be placed anywhere **on** your robot, preferably in the middle-point of the virtual line (called "*robot base*") that connects the two actuating wheels in the back of your robot (given that you follow the differential drive setup). If not, just place the local frame anywhere that makes sense on the robot, such as its geometrical center.
Answering your questions:
>
> How do I know where the lines intersect?
>
>
>
The accepted answer in [this](https://stackoverflow.com/questions/12747955/how-do-i-quickly-find-the-closest-intersection-in-2d-between-a-ray-and-m-polylin "This") is **by far** the best I have seen around, which I have also used successfully for a project regarding robotic exploration in an unknown maze.
>
> How do I find the angles of the lines using computer vision?
>
>
>
You **DON'T** need computer vision for that. For every line, pick 2 points expressed in global frame (x1,y1),(x2,y2) and calculate the slope of the line as:
```
lambda = (y2-y1) / (x2-x1)
```
Then the angle of the line is `atan(lambda)`, in global frame. Do this for all lines and then subtract the angles of any two lines to find their relative angle (pay attention to the sign).
Alternatively, I would personally use the [RANSAC](https://en.wikipedia.org/wiki/Random_sample_consensus "RANSAC") algorithm to de-noise the detected points and give me the line equation based on the consensus of all points. This line equation should already have the slope in it:
```
y = ax + b
```
where *a* is the slope and *b* is the vertical offset. Then do the aforementioned steps, i.e. atan(a) and subtraction to find the relative angle between two lines.
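A quick sketch of the slope/angle computation (and the line intersection from the first question), assuming non-vertical, non-parallel lines; the sample points are invented:

```python
import math

def line_through(p1, p2):
    """Return (a, b) of y = a*x + b through two points (vertical lines excluded)."""
    (x1, y1), (x2, y2) = p1, p2
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1

def intersection(l1, l2):
    """Intersection point of y = a1*x + b1 and y = a2*x + b2 (non-parallel)."""
    (a1, b1), (a2, b2) = l1, l2
    x = (b2 - b1) / (a1 - a2)
    return x, a1 * x + b1

l1 = line_through((0.0, 0.0), (4.0, 4.0))   # slope 1  -> 45 degrees
l2 = line_through((0.0, 4.0), (4.0, 0.0))   # slope -1 -> -45 degrees

angle1 = math.degrees(math.atan(l1[0]))
angle2 = math.degrees(math.atan(l2[0]))
print(angle1 - angle2)        # relative angle: 90.0
print(intersection(l1, l2))   # (2.0, 2.0)
```

In practice, feed the de-noised line parameters from RANSAC into the same two functions.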
If you **explicitly** want to use computer vision, maybe train a neural network on known angles and then classify images of lines to output their angles. This approach will be by far the most painful and I do NOT recommend it at all.
>
> How do I update this information based on my coordinates?
>
>
>
This can be quite tricky. If the line points are detected by your on-board camera, you first of all need to convert them from the camera's coordinate frame (found in the camera's datasheet) to your robot's local frame, as explained above. To do this, you need to calculate the static transformation between the two frames (static because it will never change if the camera is fixed on the robot, you only define it once).
Then you need to convert the detected line equation from your robot's frame to the global frame. In order to do so, you need to keep track of the robot's pose (position + orientation) while it moves in the 2D or 3D space. This is generally a hard task because it needs an excellent implementation of your localization algorithms, such as an Extended Kalman Filter or Particle Filter. The localization algorithm will provide you with the information that you need at any point in time in order to convert the lines/points from your robot frame to the global frame and visualize them.
In short: you need to transform the detected lines / points from **camera frame -> robot (local) frame -> global frame**. The first transformation is static (calculated once and never changes) whereas the second one is dynamic (changes every time the robot moves or turns). The first one is fairly easy to calculate but the second one can be a real pain.
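In 2D, the chain **camera frame -> robot (local) frame -> global frame** can be sketched with homogeneous transforms as follows (all poses and offsets below are invented for illustration):

```python
import numpy as np

def transform_2d(theta, tx, ty):
    """Homogeneous 2D transform: rotate by theta, then translate by (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0,  1]])

# Static transform: camera frame -> robot frame (computed once).
T_robot_cam = transform_2d(0.0, 0.2, 0.0)        # camera 20 cm ahead of base

# Dynamic transform: robot frame -> global frame (from localization, per step).
T_global_robot = transform_2d(np.pi / 2, 1.0, 2.0)

p_cam = np.array([0.5, 0.0, 1.0])                # detected point, camera frame
p_global = T_global_robot @ T_robot_cam @ p_cam
print(p_global[:2])
```

The static matrix is built once from the camera mounting; the dynamic one is rebuilt from the (x, y, theta) estimate that your EKF or particle filter produces at each step.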
Upvotes: 2 [selected_answer]<issue_comment>username_3: I suggest you first consider your coordinate systems. There are two.
**Field Coordinate Axis**
Field boundary corners are in field coordinates (for example): { (-50.0, -35.0, 0), (50.0, -35.0, 0), (-50.0, 35.0, 0), (50.0, 35.0, 0) }, all values in meters.
At any moment in time the camera in the robot: is at (x, y, z) and oriented relative to north by angle theta, measured clockwise when looking from above the field. The value of z may be 2.0 (for example).
**Image Coordinate Axis**
The coordinate axes of the camera images is (w, h).
You have frames in time (perhaps every 33 ms), each containing a grid in the w-h coordinate axes with 1080 x 960 pixels (for example), providing an index range of (<0, 1079>, <0, 959>).
**Maintaining Orientation of Short Robots (small Z)**
You are correct that the Harris feature detection may not work because z (the distance from the surface of the field to the center of the camera lens) may not be sufficient for that algorithm unless the robot is near a corner. The rectangle of the field boundary is not at all rectangular in the camera's w-h focal plane. For the same reasons, finding lines and then locating their intersections is not the optimal approach either.
Pretend you are the robot. As the robot surveys the field, it can assemble a model of the 360-degree periphery. What it sees is a gradually curved line with four upside-down V shapes representing the field corners. Unless the robot is almost on top of one of the corners, all four features that correspond to the corners of the field boundary will only vaguely appear to be corners at all.
**Mathematics of Obtuse Corner Detection**
Two tangent lines stem from each corner. They intersect at a discontinuity of the line's derivative, dw/dh, the slope in 2D phase space of the camera frame. The angle found between these two tangent lines will usually be closer to 175 degrees than 90 degrees, yet they are still detectable because the rest of line has no other like discontinuities of slope. From a Fourier transform perspective, the 360 degree line is actually a periodic waveform primarily comprised of the 4th, 12th, 20th, 28th, and 36th harmonics. If you are good with that level of mathematics and you record past frames, you can exploit Fourier's series and FFTs for high accuracy in corner detection.
As you develop your theory and your software, you may find that other aspects of play need to be considered. It may be best to think of those aspects now. Fortunately, if another player or official blocks a portion of the field's boundary line, it will create a discontinuity in the line itself, but not in the slope of the line in the w-h plane of the camera's image. Your implementation will need to differentiate between those two types of discontinuity, which is hardly an insurmountable problem. Discontinuity in a line and discontinuity in its derivative are naturally mathematically distinct.
**Redundancy in Feedback Channels**
If the robot can sense its location and orientation in other ways and know x, y, z, and theta above with some degree of reliability, the expected locations of the obtuse angles and the detected ones can be compared to determine the probability that the robot is properly detecting its orientation.
**Questions in This Context**
In this context the questions you listed need some reorientation.
**How do I know where the lines intersect?**
The line has two edges that may lie on the same pixel in many cases, so that is not easy to detect in an image with many other lines. If the line is of a particular color, hue detection can assist in line detection. If the above corroborative data analysis is employed, then misinterpretation of edges can be corrected quickly in real time. Once the lines are found, the value of dh/dw at any given point on the line can be estimated using linear regression of segments and windowing (looking at short segments one at a time). When an otherwise relatively stable slope quickly shifts 5 or 10 degrees in angle between windows, you have a high probability that you've found a distant field corner. A shift of 70 to 80 degrees combined with a lower h value in the frame is indicative of a corner in close proximity.
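To make the windowing idea concrete, here is a toy sketch (plain NumPy; the synthetic boundary, window size, and threshold logic are made up for illustration): estimate dh/dw on short segments via linear regression, then flag the window where the slope jumps.

```python
import numpy as np

def window_slopes(w, h, win=5):
    # Estimate dh/dw on short segments ("windows") via linear regression.
    slopes = []
    for i in range(0, len(w) - win + 1):
        seg_w, seg_h = w[i:i + win], h[i:i + win]
        slopes.append(np.polyfit(seg_w, seg_h, 1)[0])  # fitted slope of the window
    return np.array(slopes)

# Synthetic boundary: gentle slope 0.2, then an abrupt change to 0.6 (a "corner") at w = 20.
w = np.arange(40, dtype=float)
h = np.where(w < 20, 0.2 * w, 0.2 * 20 + 0.6 * (w - 20))

slopes = window_slopes(w, h)
corner = int(np.argmax(np.abs(np.diff(slopes))))
print(15 <= corner <= 20)  # True: the slope jump is detected near the corner
```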
**How do I find the angles of the lines using computer vision?**
Edge detection, systematic elimination of candidate edges that are not likely field boundaries, and then linear regression of the best candidates.
**How do I update this information based on my coordinates?**
Just save them in an appropriate array of x, y, z, theta vectors, indexed by frame number. You will probably want to keep track of what you think your robot's x, y, z, and theta values are and constantly test your assumptions against your most recent inputs. Otherwise, your robot can become disoriented. The more ways you can detect location and orientation, the higher reliability you will have in the overall system. If your vision can detect some feature at each goal that will not change during the game, it may help. Ultimately your x, y, z, and theta are the parameters in a model and the use of gradient descent and auto-correlation and other auto-correction techniques need to be applied to keep your robot's orientation model continuously updated.
**Recommend Diving Into the Math First**
The 3D trig to work all of the above out in detail is initially daunting but not that far beyond high school trig if the researcher develops some clear diagrams first and then takes the time to resurrect any rusty mathematics skills or hone some new ones.
Upvotes: 1 |
2018/05/24 | 882 | 3,591 | <issue_start>username_0: How do I decrease the accuracy value when training a model using [Keras](https://keras.io/)? Which parameters can I change to decrease the value?
My objective is not to actually decrease it, but just to know which parameters influence the accuracy.
```
from keras import optimizers

sgd = optimizers.SGD(lr=1e-2)
```<issue_comment>username_1: There are **many** things affecting accuracy. I'm gonna assume a lot here because you don't say anything about the model, what you're trying to achieve or how many classes you have. You're not even saying whether you're classifying or not. Also, you're not saying which accuracy you're using (classification, AUC, F1 etc.).
I'm gonna assume here that you have some classification problem.
**Accuracy** is the measure of how many classifications you got correct. In my experience 99% is a warning sign because it's too good to be true, and a result like that is often due to [overfitting](https://en.wikipedia.org/wiki/Overfitting). Since this, in my experience, is the main reason you'd actually want the accuracy down, this is what I'm going to assume is your problem.
Overfitting occurs when you train "too much" and the model only learns things that are within your training set, and fails on everything else. That is: it generalizes badly.
To prevent this, there are a number of things you could do:
**1) Data segmentation**
The most common is to split your data into three bulks: training (~70% of the data), validation(~20%) test (~10%). These percentages are indications and would vary depending on how much data you have, and the balancing.
The idea is that you train on the training data, you run the validation through the network, and calculate the accuracy. When this accuracy, call it validation accuracy, is satisfactory, then you stop the training and run the test data through it. The latter accuracy (test accuracy) is the one that most papers publish (combined with [AUC](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve) and [F1 score](https://en.wikipedia.org/wiki/F1_score)).
*Important*: When you have split the data into these bulks you should put away the test, and **not use it during training** at all. You only use this at the very end to do an extra check that you haven't overfitted.
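A minimal sketch of such a split (plain NumPy; the 70/20/10 sizes are just the indicative percentages above, and the toy data is random):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 100 samples, 4 features each (values are arbitrary).
data = rng.normal(size=(100, 4))
labels = rng.integers(0, 2, size=100)

# Shuffle once, then cut into ~70% train / ~20% validation / ~10% test.
idx = rng.permutation(len(data))
train_idx, val_idx, test_idx = np.split(idx, [70, 90])

x_train, y_train = data[train_idx], labels[train_idx]
x_val, y_val = data[val_idx], labels[val_idx]
x_test, y_test = data[test_idx], labels[test_idx]  # held out until the very end

print(len(x_train), len(x_val), len(x_test))  # 70 20 10
```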
**2) Regularization**
There are many types of [regularization](http://www.deeplearningbook.org/contents/regularization.html). Two very popular regularization methods for preventing overfitting are the L2-regularization (see previous link) and the [dropout](https://arxiv.org/pdf/1102.4807.pdf) methods.
Without going into detail, these methods prevent the model weights from becoming too large. This is a good thing since the model won't rely too much on one feature, which in turn attenuates overfitting.
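As a toy illustration of both ideas (plain NumPy with made-up values — not the actual Keras implementations): the L2 penalty added to the loss, and "inverted" dropout that zeroes units and rescales the rest.

```python
import numpy as np

def l2_penalty(weights, lam=0.01):
    # L2 regularization adds lam * sum(w^2) to the loss, penalizing large weights.
    return lam * sum(float(np.sum(w ** 2)) for w in weights)

def dropout(activations, rate, rng):
    # "Inverted" dropout: randomly zero a fraction `rate` of units during
    # training and rescale the survivors so the expected activation is unchanged.
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

weights = [np.array([[3.0, -4.0]]), np.array([[1.0], [2.0]])]
print(round(l2_penalty(weights), 6))  # 0.3, i.e. 0.01 * (9 + 16 + 1 + 4)

rng = np.random.default_rng(0)
dropped = dropout(np.ones(1000), rate=0.5, rng=rng)
print(abs(dropped.mean() - 1.0) < 0.15)  # True: the expectation is preserved
```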
---
I hope you learned something, and the most important lesson is that you should know what you're doing. If not, you could end up with a model that is not behaving like you thought. In the case of overfitting, you'd end up with a model that only works on your training data, which doesn't really do much good.
I really recommend the book by Goodfellow: [deeplearningbook.org](http://www.deeplearningbook.org/).
Upvotes: 2 [selected_answer]<issue_comment>username_2: If you want to decrease the accuracy given the same optimizer/epochs/batch, you could add more layers and increase the number of parameters; it should now take a bit longer to converge. Hence, for the same epochs, you would get lower accuracy. You could also initialise your parameters in a nonsensical way.
Upvotes: 0 |
2018/05/24 | 947 | 4,052 | <issue_start>username_0: I would like to know how to teach an agent for performing prediction of the severity of disease and also for alerting patients using machine learning methods.
I found the [model-based reflex agent](https://ai.stackexchange.com/q/3243/2444) can be used in medical diagnosis in some literature.
May I know which architecture will be good, to make such an agent?<issue_comment>username_1: Your question is quite broad; each disease has its own characteristics, and sometimes it takes a domain expert or a pathologist to predict the severity of a certain disease. You can't predict the severity of all diseases with one algorithm (or method). But in some cases you can use machine-learning methods to get assistance. I suggest you go through "grand-challenge.org" competitions and read the writeups of teams which participated in those competitions to get a basic idea.
Upvotes: 1 <issue_comment>username_2: The question is really broad---as stated by @username_1---so difficult to give a meaningful answer. The following is about a clarification about the problem, and some directions.
A model-based reflex agent is a blueprint describing the key components necessary to build that agent. It is an *abstract* architecture to guide the creation of concrete agents. Whether software or hardware, the target agent should have a bit serving as "sensors" (a HW camera, or a software API), "actuators", memory, etc. So if you aim at a HW agent, you need to decide what components you need, and collect them. A SW agent would be the definition of a [software architecture](https://en.wikipedia.org/wiki/Software_architecture), or the use of some modelling framework.
The Machine Learning part is about endowing the agent with some skill. It is the "smarts" programming part, what the "brains" will do in the instance of the blueprint. Please note that it does *not* have to be Machine Learning. It could be another "style" of programming, such as a rule-based system, or a plain hand-crafted program.
---
How to teach such an agent is currently an open question. First steps usually start with drawing the agent as a black box, its inputs (e.g. symptom data), its desired output (diagnosis and alert). Then we detail the black box in terms of what sensors the agent needs to process the input, what actuators for the output, and what it needs to learn and memorize. Depending on the available input data and diagnosis/alert output system (email?), the next stage aims at refining step by step, until a good idea of what needs to be implemented emerges.
At this point, all pieces are in place, except the internal model---the piece that really pertains to Machine Learning (if you choose ML for that). As the input data is available, and the output format decided, the "final" stage (before implementation) is to define the model. It really depends on the actual data and the goal (here we assume prediction). Labelled data (we can teach with input *and* output, as we already know them) usually leads to [supervised learning](https://en.wikipedia.org/wiki/Supervised_learning). Unlabelled data (we can teach with the input only) leads to unsupervised learning or reinforcement learning. Once one understands the situation, we can choose some algorithm like SVM, decision trees, neural networks, etc. Note that the final decisions (before implementation) require studying the data beforehand (Is it regular? Are there missing bits? Is it in a format useful for ML? Etc.) to make appropriate choices.
---
Final note: The question may get closed on this site, because it is too broad. Way more useful to ask narrow questions, with a clear answer or set of answers. As you see here, this long answer is just the tip of how to teach an agent (I could not sleep anyway). And no implementation yet. In fact, all this could be summed up as "typical system development, with an ML component". Two guiding principles: (1) isolate the ML component(s), so specialists can dig and make the best of them, and (2) keep it simple.
Upvotes: 3 [selected_answer] |
2018/05/25 | 1,202 | 5,074 | <issue_start>username_0: I am looking to detect thin objects, like pens, pencils, and surgical instruments. The bounding box is not important, but I am looking to see if I can train a model to detect both the object as well as its orientation.
Typical object detection networks, like R-CNN, YOLO, and SSD encode the class name and bounding boxes. Instead of bounding boxes, I'm looking to encode only 2 points, one starting $x,y$ point and one ending $x,y$ point. The start point for objects is where one would grip the object. For instance:
* The pencil eraser(start point) is pointed 50 degrees to the top right.
* The surgical instrument is 10 degrees from the x-axis and the handle is pointed to the bottom right.
* Pen tip (endpoint) is pointing vertically upwards.
* Fork, the start point would be the grip handle part, and the endpoint would be in the middle where the 4 prongs are.
As long as I can encode the start and endpoints, then I can determine the orientation. I would need to define these points during training.
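For example, recovering the orientation from such a start/end pair is straightforward (a tiny sketch; the helper name is made up):

```python
import math

def orientation_degrees(start, end):
    # Angle of the object's axis measured from the x-axis, given a grip
    # point (start) and a tip point (end), each as (x, y).
    dx, dy = end[0] - start[0], end[1] - start[1]
    return math.degrees(math.atan2(dy, dx))

print(orientation_degrees((0, 0), (1, 1)))  # 45.0
```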
The question is whether there is an existing model (mobile net/inception/RCNN) that I can encode this information in? One potential way I was thinking was to use YOLO and for the bounding box, the top left $x,y$ would be the starting point $x,y$ (handle), whereas the bounding box's width and height would be replaced with the endpoint $x,y$ (pencil writing tip, fork prongs). |
2018/05/29 | 910 | 4,113 | <issue_start>username_0: In chapter 8 of "Reinforcement Learning: An Introduction" by Sutton and Barto, it is stated that Dyna needs a model to simulate the environment.
But why do we need a model? Why can't we just use the real environment itself? Wouldn't it be more helpful to use the real environment instead of a fake one?<issue_comment>username_1: Unlike algorithms presented in other chapters of Sutton and Barto, Dyna is a *planning* algorithm. That means that it makes decisions online, in a real environment, that attempt to be as optimal as possible given some constraints such as current knowledge and time available to compute between time steps. This differs from learning-only online algorithms which typically take a small step towards optimality on each piece of new experience as it happens.
A planning algorithm can only do its job well if it is allowed to "look ahead" at the consequences of its behaviour whilst learning online. In fact, this is the *definition* of planning - to choose an action based on reasoning about consequences of that action.
For an algorithm to look ahead before taking an action, it needs a model of how the environment will respond to that action. That model does not need to be coded up directly - e.g. you don't necessarily need to write a physics engine to predict the real world (although a basic one might be a good prior or pre-training step). Instead it can be a *learned* model, and typically in e.g. Dyna-Q, that is what you use.
There is a strong relation between Dyna-Q, and regular Q-learning with experience replay. In the most basic forms, they are essentially the same algorithm with a different framing. However, you can take the planning ideas further e.g. focus improvements around the currently experienced state and paths to a goal state in Dyna-Q, perhaps making it closer to MCTS conceptually.
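A minimal tabular Dyna-Q sketch of that idea (illustrative only — the states, actions, and hyperparameters here are made up, and a real agent would run this inside an action-selection loop): one direct update from the real transition, one model update, then a few planning updates replayed from the learned model.

```python
import random
from collections import defaultdict

ACTIONS = [0, 1]  # hypothetical action set

def dyna_q_update(Q, model, s, a, r, s_next, alpha=0.1, gamma=0.95, n_planning=5):
    # Direct RL: one Q-learning update from the real transition.
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
    # Model learning: remember what the environment just did after (s, a).
    model[(s, a)] = (r, s_next)
    # Planning: replay n transitions sampled from the learned model.
    for _ in range(n_planning):
        (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
        Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps_next, b)] for b in ACTIONS) - Q[(ps, pa)])

Q = defaultdict(float)
model = {}
dyna_q_update(Q, model, s=0, a=1, r=1.0, s_next=2)
print(Q[(0, 1)] > 0)  # True: the rewarded action's value has increased
```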
>
> Wouldn't it be more helpful to use real env instead of fake one?
>
>
>
Most real environments do not let you take actions, see the consequences and then rewind in order to re-try. Essentially that is what planning algorithms are making up for - they try to predict consequences. This is important when mistakes made during training have real consequences, for example for a physical robot navigating an environment where there might be a possibility of a fall or collision that damaged something. Whilst online learning algorithms such as SARSA will also help with this in different ways (in SARSA by changing policy to allow for exploratory moves), typically Q-learning will be weaker than Dyna-Q when it comes to learning quickly from mistakes. With the usual caveat: Much still depends on the specific problem and choices of hyperparameters.
Upvotes: 2 <issue_comment>username_2: Sutton's Dyna has been shown to be more effective for many problem spaces than learning systems that work without a model, yet it requires fewer processing cycles than certainty-equivalence methods. It is advanced in that it, in parallel, builds a model and adjusts behavioral policy based on both incoming information and the model. The goal was to integrate both identified capabilities of the human brain.
*Why do we need Model in Dyna?*
The capacity to model is an essential part of the Dyna architecture and may, over time, prove to be an essential component for achieving greater effectiveness. Many think so, myself included. In other words, for many problem sets there may be no mechanism as effective as building and maintaining a model.
*Why can't we just utilize a real environment itself? Wouldn't it be more helpful to use real environment instead of fake one?*
The real environment cannot be placed in memory for many reasons. Primarily, it would not fit. Furthermore, not much of it can be acquired. Only images of the environment can be acquired and placed in memory. The most important characteristic of images, whether they be stock prices, temperature readings, or streamed video, is that they are grossly sparse and undifferentiated representations of the environment on which we are attempting to operate.
Upvotes: 0 |
2018/05/29 | 388 | 1,494 | <issue_start>username_0: What are feature embeddings in the context of convolutional neural networks? Is it related to bottleneck features or feature vectors?<issue_comment>username_1: >
> Feature embeddings are basically anything that can act as a hidden representation for a given object.
>
>
>
In the case of images, a CNN architecture is built to create such a hidden representation. Usually, the outcome of the bottleneck layer is flattened (and sometimes converted to an even lower-dimensional space by adding one more dense layer) and used as the feature embedding.
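As a toy sketch of that last step (a random array stands in for a real bottleneck activation; the shapes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend output of a CNN bottleneck layer: a 4x4 spatial map with 8 channels.
bottleneck = rng.normal(size=(4, 4, 8))

# Flatten it to obtain a 128-dimensional feature embedding for the image.
embedding = bottleneck.reshape(-1)
print(embedding.shape)  # (128,)
```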
Upvotes: 2 <issue_comment>username_2: The term *feature embedding* appears to be a synonym for *feature extraction, feature learning* etc. I.e. a form of embedding/dimension reduction (with the caveat the goal may not be a lower dimensional representation but one of equal dimensionality, but more meaningfully expressed):
>
> Feature embedding is an emerging research area which intends to transform features from the original space into a new space to support effective learning.
>
>
>
* [Generalized Feature Embedding for Supervised, Unsupervised, and Online Learning Tasks](https://link.springer.com/article/10.1007/s10796-018-9850-y) (2019)
>
> Feature embedding aims to learn a low-dimensional vector
> representation for each instance to preserve the information
> in its features.
>
>
>
* [Large-Scale Heterogeneous Feature Embedding](http://people.tamu.edu/~xhuang/Xiao_AAAI19.pdf) (2019)
Upvotes: 3 |
2018/05/29 | 438 | 1,934 | <issue_start>username_0: I study AI by myself with the book "Artificial Intelligence: A Modern Approach". I've just finished the chapters about the Bayesian network and probabilities, and I found them very interesting. Now, I want to implement different algorithms and test them in different cases and environments.
Is it worth it to spend time on these techniques?<issue_comment>username_1: "AI, A Modern Approach" was given that title to break from previously narrow approaches to duplicating desirable qualities of human thinking.
Although Bayesian networks require somewhat resource intensive computational elements, the importance of Bayesian inference and probability are still of paramount importance in that some of the highest scientific thinking require mastery of them. Furthermore, developing in silicon dies (or possibly graphene nanites) the machinery to perform elementary probability computations in massively parallel architectures may arise over the next few decades. The use of video DSP circuits to implement ANNs is a notable segue into this kind of development.
I would not dismiss the techniques you just read about. If your intention is to capitalize on the recent crazes, you may enter the river of wannabees chasing every current trend, implement many systems that other people will rewrite later, and have a meaningless yet profitable career. I would recommend following your inquisitiveness instead.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The **chapters** for Bayesian Networks are:
13. Quantifying Uncertainty
14. Probabilistic Reasoning
15. Dynamic Bayesian Networks
don't forget:
20. Naive Bayes, hidden variables, Markov
Maybe helpful:
Are We Going in the Right Direction? ... p. 1049
If you find them interesting, then invest more time in them. You might improve them and break new scientific ground.
The recent trend is toward **Deep Convolutional Neural Networks** (e.g. AlphaGo).
Upvotes: 1 |
2018/05/30 | 397 | 1,622 | <issue_start>username_0: I'm having trouble grasping how to output word embeddings from an LSTM model. I'm seeing many examples using a softmax activation function on the output, but for that I would need to output one hot vectors as long as the vocabulary (which is too long). So, should I use a linear activation function on the output to get the word embeddings directly (and then find the closest word) or is there something I'm missing here?<issue_comment>username_1: In the research papers, it is not clear how they do that. From what I understood, you need to add a dense layer after your RNN layer. This dense layer is the size of your vocabulary. From my experience, this works even for a large vocabulary (30 000 - 40 000 for me) if you have enough data. Here you don't try to reconstruct the embedding but a one-hot vector of the current word. You can then use a cross-entropy loss. This last layer will have a lot of parameters.
You will see several implementations which are using the MSE loss directly on the word embedding output. Personally I didn't succeed with this approach but if other people could share their experiences, it could be great.
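A rough NumPy sketch of that final dense + softmax projection (the vocabulary and hidden sizes are hypothetical, and a random vector stands in for the LSTM's hidden state):

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, hidden_size = 10000, 256  # hypothetical sizes

# The LSTM's hidden state at one time step (random here, for illustration).
h = rng.normal(size=hidden_size)

# Dense projection to vocabulary-sized logits, then softmax -> word probabilities.
W = rng.normal(scale=0.01, size=(vocab_size, hidden_size))
b = np.zeros(vocab_size)
logits = W @ h + b
probs = np.exp(logits - logits.max())  # subtract max for numerical stability
probs /= probs.sum()

predicted_word_id = int(np.argmax(probs))
print(probs.shape)  # (10000,) -- one probability per vocabulary word
```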
Upvotes: 0 <issue_comment>username_2: Actually, LSTM is not used to get word2vec. Indeed, word2vec is extracted from a corpus of words using an MLP (Multi-Layer Perceptron). There is a great tutorial on how to extract word2vec:
<http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/>
After representing words as vectors, you feed your text to an LSTM in a deep architecture in which the last layer must be softmax to categorize your text.
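To give a flavor of the skip-gram setup from that tutorial, here is a toy generator of (target, context) training pairs (the window size and sentence are made up):

```python
def skipgram_pairs(tokens, window=2):
    # (target, context) training pairs as used by skip-gram word2vec:
    # each word predicts its neighbors within `window` positions.
    pairs = []
    for i, target in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

pairs = skipgram_pairs(["the", "quick", "brown", "fox"], window=1)
print(pairs[:2])  # [('the', 'quick'), ('quick', 'the')]
```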
Upvotes: 1 |
2018/05/30 | 603 | 2,390 | <issue_start>username_0: There seems to be a major difference in how the terminal reward is received/handled in self-play RL vs "normal" RL, which confuses me.
I implemented TicTacToe the normal way, where a single agent plays against an environment that manages the state and also replies with a new move. In this scenario, the agent receives a final reward of $+1$, $0$ and $-1$ for a win, draw, and loss, respectively.
Next, I implemented TicTacToe in a self-play mode, where two agents perform moves one after the other, and the environment only manages the state and gives back the reward. In this scenario, an agent can only receive a final reward of $+1$ or $0$, because, after his own move, he will never be in a terminal state in which he lost (only agent 2 could terminate the game in such a way). That means:
1. In self-play, episodes end in such a way that only one of the players sees the terminal state and terminal reward.
2. Because of point one, an agent cannot learn that he made a bad move that enabled his opponent to win the episode, simply because he never receives a negative reward.
This seems very weird to me. What am I doing wrong? Or if I'm not wrong, how do I handle this problem? |
2018/05/30 | 1,984 | 7,183 | <issue_start>username_0: I've been reading Google's DeepMind Atari [paper](http://web.stanford.edu/class/psych209/Readings/MnihEtAlHassibis15NatureControlDeepRL.pdf "RL") and I'm trying to understand the concept of "experience replay". Experience replay comes up in a lot of other reinforcement learning papers (particularly, the AlphaGo paper), so I want to understand how it works. Below are some excerpts.
>
> First, we used a biologically inspired mechanism termed experience replay that randomizes over the data, thereby removing correlations in the observation sequence and smoothing over changes in the data distribution.
>
>
>
The paper then elaborates as follows (I've taken a screenshot, since there are a lot of mathematical symbols that are difficult to reproduce):
[](https://i.stack.imgur.com/eOOTH.png)
What is experience replay and what are its benefits in laymen's terms?<issue_comment>username_1: In reinforcement learning (RL), an *agent* interacts with an *environment* in time steps. At each time step $t$, the agent and the environment are in some state $s\_t$. From that state $s\_t$, the agent chooses and executes an action $a\_t$ and the environment emits a reward $r\_t$ (which values the just taken action $a\_t$). Finally, the agent and the environment move to the next state, $s\_{t+1}$. This interaction proceeds until either the agent dies or some other termination criterion is met.
The goal of the agent is to obtain the highest amount of reward in the long run (that is, not just in the next time step, but in all successive time steps). To do that, ideally, the agent needs to find a way of behaving "optimally". In RL, the behaviour of the agent is called a "policy". An optimal policy is a policy that allows the agent to obtain the (expected) highest amount of reward in the long run.
In this context, we can describe a full and finite (in terms of time steps) interaction between an agent and an environment as a sequence (sometimes called a "rollout" or a "trajectory") of states, actions, rewards and next states. So, a rollout might look like this $$(s\_t, a\_t, r\_t, s\_{t+1}, a\_{t+1}, r\_{t+1}, s\_{t+2}, \dots, s\_{T-1}, a\_{T-1}, r\_{T-1}, a\_{T}),$$ where $T$ is the last time step of the interaction between the agent and the environment. During the interaction between the agent and the environment, the agent might decide to store this "experience" in a "buffer" (e.g. an array), so that it can use it later (you will see below a use case).
The elements of this type of sequence are often temporally *correlated*. What does this mean? For example, suppose that states are frames of a video game (that is, each frame is a different state). In this context, successive frames (or states) are *similar* to each other, which mathematically means that they are correlated.
It turns out that neural networks (NNs) are able to approximate (almost) any function. In RL, a policy is also a function: it is a function from a state to an action (or [probability distribution over actions](https://ai.stackexchange.com/q/12274/2444)). So, we can represent a policy using a NN. (Deep RL is essentially a combination of traditional RL algorithms, like Q-learning, with NNs).
Moreover, it also turns out that training a NN using back-propagation with data that is temporally correlated might lead the NN not to capture the essential characteristics of the data, which, in practice (during training), means that we are not able to find the NN that represents the optimal policy (or another function, e.g. $Q(s, a)$, used later to retrieve the policy). In such cases, we often say that the training of the NN is not stable.
In the case of RL, the data used to train such types of neural networks (which represent the policy, or other functions that are used in RL) are the "rollouts", which contain elements that are often temporally correlated. Hence, we can't just feed the NN with a rollout, in the same order that the elements (states, actions, rewards and next states) are collected. So, in order to use uncorrelated data to train an NN, we can randomly take tuples of the form $\langle s\_h, a\_h, r\_h, s\_{h+1} \rangle$ from the rollout (where $h$ is some time step between $t$ and $T$). For example, suppose we take (or "sample") $3$ tuples $\langle s\_7, a\_7, r\_7, s\_{8} \rangle$, $\langle s\_{97}, a\_{97}, r\_{97}, s\_{98} \rangle$ and $\langle s\_{2}, a\_{2}, r\_{2}, s\_{3} \rangle$. Given that these elements have been observed at quite different points in time, they are likely to be less correlated than e.g. $\langle s\_7, a\_7, r\_7, s\_{8} \rangle$, $\langle s\_{8}, a\_{8}, r\_{8}, s\_{9} \rangle$ and $\langle s\_{9}, a\_{9}, r\_{9}, s\_{10} \rangle$ (which are successive "tuples of experience").
In this context, "experience replay" (or "replay buffer", or "experience replay buffer") refers to this technique of feeding a neural network using tuples (of "experience") which are less likely to be correlated (given that "random sampling" procedure). The "buffer" part refers to a data structure (e.g. an array or list) that stores the trajectory (or rollout), that is, it stores the "experience" of the agent (hence the name "experience"). The "replay" refers to the fact that this "experience" is reused (or "replayed") by randomly sampling from it to train the NN.
See also this question [Why exactly do neural networks require i.i.d. data?](https://ai.stackexchange.com/q/10839/2444) regarding the fact that NNs often require i.i.d. data.
Upvotes: 2 <issue_comment>username_1: ### What is it?
An experience replay (ER) buffer is an array/list (or buffer) $D = [e\_1, \dots, e\_N ]$ where you store the transitions that the agent collects while interacting with the environment. These transitions are usually represented as tuples of the form $e\_t = (s\_t, a\_t, r\_t, s\_{t+1})$, where
* $s\_t$ is the state of the agent at time step $t$,
* $a\_t$ is the action taken by the agent when in state $s\_t$,
* $r\_t$ is the reward the agent receives from the environment after having taken action $a\_t$ in $s\_t$
* $s\_{t+1}$ is the next state the agent ended up in after that action
The agent then samples (e.g. uniformly) transitions from this ER buffer $D$ to perform an update of the value function $\hat{q}(s, a)$. The ER buffer $D$ can thus be thought of as a dataset.
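To make that concrete, here is a minimal sketch of such a buffer (an illustrative toy, not code from the DQN paper):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity buffer D of transitions e_t = (s_t, a_t, r_t, s_next)."""

    def __init__(self, capacity):
        # When full, the oldest transitions are dropped automatically.
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform sampling, as in vanilla DQN.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

# Fill the buffer with dummy transitions and draw a batch for an update.
buffer = ReplayBuffer(capacity=1000)
for t in range(50):
    buffer.store(t, 0, 0.0, t + 1)
batch = buffer.sample(8)
```

The sampled `batch` would then be used to compute targets for the update of $\hat{q}(s, a)$.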
### Why do we need it?
The motivation for experience replay (in the [DQN paper](https://www.cs.toronto.edu/%7Evmnih/docs/dqn.pdf) that popularized this technique) is that learning becomes more stable. In fact, the authors of the DQN paper write
>
> To alleviate the problems of correlated data and non-stationary distributions, we use an experience replay mechanism [[13](http://isl.anthropomatik.kit.edu/pdf/Lin1993.pdf)] which randomly samples previous transitions, and thereby smooths the training distribution over many past behaviors.
>
>
>
or
>
> By using experience replay the behavior distribution is averaged
> over many of its previous states, smoothing out learning and avoiding oscillations or divergence in the parameters.
>
>
>
Upvotes: 0 |
2018/05/31 | 2,404 | 9,298 | <issue_start>username_0: How will we recognize a conscious machine (or AI)? Is there any consciousness test? For example, if a machine is aware of its previous experiences, can it be considered conscious?<issue_comment>username_1: I think general artificial intelligence will only be possible with some form of self awareness included. Many aspects of human communication do not work if one of the communicating partners does not have [self awareness](https://en.wikipedia.org/wiki/Self-awareness). A good example are many of today's chat bots. They seem to not even hear what they say and only rarely seem to have episodic memory.
Advances in machine to human communication and collaboration will eventually create systems with an ever increasing complex inner model that allows the system to interact in a natural way with humans and to fulfill tasks which require human level intelligence and flexibility. However, unless we have developed a very advanced understanding of consciousness it will be hard to judge how similar or different such a machine consciousness is compared to human consciousness.
Upvotes: 2 <issue_comment>username_2: "Consciousness" does not have a universal definition. However, if you are really into "consciousness", you should probably read about [Searle's Chinese Room experiment](https://en.wikipedia.org/wiki/Chinese_room) or Marvin Minsky's [society of mind](https://en.wikipedia.org/wiki/Society_of_Mind).
In my opinion, there are many more fundamental obstacles in current AI research that we have to tackle first.
Furthermore, a more formal question would be about how an [artificial general intelligence (AGI)](https://en.wikipedia.org/wiki/Artificial_general_intelligence) would emerge. Even for that, there is no clear roadmap, since we are still very new to understanding the true power of neural networks or other successful AI methods.
[<NAME>](https://twitter.com/fchollet/status/837188765500071937) said in a tweet
>
> For all the progress made, it seems like almost all important questions in AI remain unanswered. Many have not even been properly asked yet
>
>
>
Upvotes: 2 <issue_comment>username_3: There are two main subjects you need to look at to understand the problem:
**The Turing Test**
>
> The Turing test, developed by <NAME> in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
> [wiki](https://en.wikipedia.org/wiki/Turing_test#Imitation_game)
>
>
>
See also: [Turing Test (Stanford Philosophical Dictionary)](https://www.iep.utm.edu/art-inte/#H2)
There is a linguistic element, which is that intelligence can be interpreted in many legitimate ways. Some maintain that Artificial Intelligence has not yet been achieved; others feel [a simple automated switch constitutes the most basic form of AI](https://ai.stackexchange.com/q/3847/1671).
Bear in mind that "intelligence" is distinct from consciousness in the sense of "self awareness", but the generalized Turing Test can also be understood as a gauge of the appearance of consciousness.
This leads to **Searle's Chinese Room Argument**. I highly recommend reading the [Stanford Philosophy link](https://www.iep.utm.edu/chineser/), but the [wiki](https://en.wikipedia.org/wiki/Chinese_room) gives a simpler synopsis:
>
> The Chinese room argument holds that a program cannot give a computer a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave.
>
>
>
* The real problem may be, how does one know an algorithm is truly conscious and not merely simulating consciousness?
<NAME> approaches from the opposite direction in [Electric Sheep](https://en.wikipedia.org/wiki/Do_Androids_Dream_of_Electric_Sheep%3F), where one of the conclusions is that "life is life" whether organic or artificial. This might be said to lead to the "Duck Test" for consciousness: "If it looks like a duck, and quacks like a duck, then it's probably a duck." (Dick's philosophy was heavily influenced by Christianity, and his view on artificial life may be thought of as radically humanist.)
(A parallel philosophical argument might be "how is the appearance of free will distinct from the actuality of free will?" [Free will has been fruitlessly argued about](https://en.wikipedia.org/wiki/Free_will#Western_philosophy) for millennia, but whether the universe is actually deterministic or not, we perceive ourselves to have free will. If the universe turns out not to be strictly deterministic, it wouldn't functionally change anything. To make matters more fuzzy, true randomness in nature is only found at the quantum level and then only within certain models. Quantum uncertainty has been proposed as a basis for free will, but we don't know. Possibly uncertainty is a condition of individuality, which must be subjective in relation to a system or other subjectivities. Nevertheless, how do we know we are not "robots" acting out a pre-determined sequence of actions, which we perceive as decisions?)
I personally very much like the idea of **recursion as function of self awareness**.
>
> At the phenomenal level, consciousness can be described as a singular, unified field of recursive self-awareness, consistently coherent in a particular way; that of a subject located both spatially and temporally in an egocentrically-extended domain, such that conscious self-awareness is explicitly characterized by I-ness, now-ness and here-ness.
> SOURCE: Peters, Frederic [*Consciousness as Recursive, Spatiotemporal Self-Location*](http://precedings.nature.com/documents/2444/version/2)
>
>
>
I'd also posit that being able to read and interpret your own code is a form of self-awareness.
This brings me to the idea that understanding (Chinese Room) is a red herring, and what we're really talking about is interpretation.
We are trapped in subjectivity, humans and automata, and perfect certainty is only achievable in a very limited set of circumstances, such as [solved games](https://en.wikipedia.org/wiki/Solved_game). (There is also the idea of [completeness of a system](https://en.wikipedia.org/wiki/Complete_theory) vs. [consistency](https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems).)
When we talk about understanding in the sense of anything abstract, by which I means [semantics](https://en.wikipedia.org/wiki/Semantics) as opposed to [syntactics](https://en.wikipedia.org/wiki/Syntax_(programming_languages)), meaning vs. form, one can say that understanding is a function of interpretation, regardless of the accuracy of the interpretation.
--------------------------------------
--------------------------------------
Fun Speculation:
I've learned never to underestimate the insight of artists, and the best speculative fiction authors are narrative philosophers in the tradition of Plato. Dick related memory to identity, regardless of whether the memories are real or artificial. This might be thought of as the narrative conception of the self--I am a product of my experience. The real "me" is not merely my body, but "the story of me"--the subjective history that led to this moment of me-ness.
I think it is a not unreasonable assumption that artificial consciousness may arise out of understanding of narrative. Words are just symbols, and there are all kinds of [semantic](https://en.wikipedia.org/wiki/Semantics) issues, but actions have a concrete aspect. Game theory studies actions in the form of choices, not only in the sense of [equilibria](https://en.wikipedia.org/wiki/Nash_equilibrium), but also as a form of communication. (See: [iterated dilemma](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma#The_iterated_prisoner's_dilemma)) The choices agents make in iterated dilemmas constitute a narrative history than can be analyzed and understood mathematically, and these analyses are used for decision making.
It seems to me that the idea of consciousness can be related to decision making. If you're just a transistor, that consciousness is quite limited, more akin to a cell than a complex organism. It may come down to whether one considers intelligence and consciousness to be [spectrums](https://en.wikipedia.org/wiki/Spectrum) as opposed to [thresholds](https://en.wikipedia.org/wiki/Threshold_model).
If you believe consciousness to be a spectrum, then limited artificial consciousness has already been achieved. Consciousness analogous to human-level self-awareness is still in the future.
See Also:
[Definitions of the Self (wiki)](https://en.wikipedia.org/wiki/Philosophy_of_self#Definitions_of_the_self); [Self Knowledge (wiki)](https://en.wikipedia.org/wiki/Philosophy_of_self#Self-knowledge)
Upvotes: 2 <issue_comment>username_4: As I see it, the key to this answer lies in the area of unsupervised learning.
Why?
How do we define consciousness? Probably being aware of our existence. The key to understanding our existence starts by asking questions or finding answers which may not have any questions. This all may sound quite philosophical, but in terms of AI, it may just be finding patterns and logic from what we observe around us.
Just a theory but worth giving it a thought.
Upvotes: 0 |
2018/06/04 | 1,334 | 5,358 | <issue_start>username_0: I'm doing a project for my last university examination but I'm having some troubles! I'm making an expert system who should be able to assemble a computer after asking some questions to the user. It works but according to my teacher I need to define more rules, could you give me some suggestions please? I have facts like these:
```
processor(P, Proc_price, Price_range),
motherboard(M, Motherboard_price, Price_range),
ram(R, Ram_price, Price_range),
case(C, Case_price, Price_range),
ali(A, Ali_price, Price_range),
video_card(V, Vga_price, Price_range),
ssd(S, Ssd_price, Price_range),
monitor(D, Monitor_price, Price_range),
hdd(H, Hdd_price, Price_range).
```
I ask the user these questions:

1. choose the price range
2. choose the display size
3. choose the hard disk size

Then I ask 3 questions about computer utilization to define the user:

1. do you surf the internet?
2. do you play games?
3. do you use editing programs?
```
use(gaming) :- ask("Do you play games? (y/n)").
use(editing) :- ask("Do you use editing programs? (y/n)").
use(surfing) :- ask("Do you surf internet?(y/n)").
user(base) :-
use(surfing), \+ use(gaming), \+ use(editing).
user(gamer) :-
use(gaming), use(surfing), \+ use(editing).
user(professional) :-
use(editing), \+ use(gaming), use(surfing).
```
I should ask more questions so the user definition becomes more fine-grained, and add more rules accordingly. Please help me, I'm desperate!
1) What do you use your computer for?
(a) Watching videos [leads to: "Streaming or Downloaded HD?" b/c downloading requires more local volume, and potentially a better video card.]
(b) Do you play Games [leads to: "High-end video games or simple games?" b/c playing AAA FPS requires much more powerful video cards. If they play words with friends or Tetris, a low-end card will be sufficient.]
(c) Do you use editing programs? [leads to: "photo editing? video editing? what size files?" b/c editing HD vid and high resolution photos is onerous with an underpowered system.]
You might want to ask in general what they use the computer for, because if it's just email, web surfing, Facebook and Youtube, etc., they can probably get by with a Windows surface *(I'm a regular critic of MS, but my understanding is you can get the fully functional Office Suite on Surface, which has utility value.)*
--------------------------
--------------------------
I think you're on the right track, but you should step back and think like a Product Manager here, as opposed to a developer. These questions might help you clarify your intent, and expand your initial template to a more fully featured system your teacher is looking for:
* Who is the customer for my system?
* What level of technical knowledge will the average user have?
* Am I covering every aspect with the proper amount of detail?
* Is the order of my questions correct? (Why ask price range before monitor size?)
* Is there any mechanism to help a user if they don't know what answer to choose?
Again, these are just some initial thoughts. Only you know what product you want to create, and what the capabilities should be.
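As a rough sketch of how those finer-grained questions might become rules (everything below is hypothetical and only mirrors the `use/1` / `user/1` style of your existing code — adjust predicate and value names to your program):

```prolog
% Finer-grained usage questions (hypothetical).
use(streaming)     :- ask("Do you watch streamed HD video? (y/n)").
use(aaa_gaming)    :- ask("Do you play high-end (AAA) games? (y/n)").
use(video_editing) :- ask("Do you edit HD video? (y/n)").

% Refined user profiles built from the answers.
user(casual_gamer)       :- use(gaming), \+ use(aaa_gaming).
user(hardcore_gamer)     :- use(gaming), use(aaa_gaming).
user(video_professional) :- use(editing), use(video_editing).

% Profiles can then constrain component choices, e.g. the video card.
needs_high_end_gpu(hardcore_gamer).
needs_high_end_gpu(video_professional).
```

Each extra question multiplies the distinctions your system can draw, which is exactly the granularity your teacher is asking for.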
Upvotes: 1 [selected_answer]<issue_comment>username_2: We cannot do homework for students in this network, however I can suggest that several items affecting cost and several usage patterns are missing and the number of rules is shy by an order of magnitude. I wholeheartedly agree with the educational directives you received.
Consider first developing your lists further to include peripherals like DVD burner, USB devices, and audio. Whether the user does scientific programming, watches movies on the monitor, develops software, and other specific usage scenarios is also more specific and therefore will produce a better tailored system than the answer to the question of whether the user is a professional.
It is not the metric of the number of rules that is of most importance. It is the number of operations contained in the rule set that is the guiding metric. This is because rules in Prolog can be aggregated. The rough estimate of rule operator count to complete a system is sqrt(i\*o)/4, where i is the number of input permutations and o is the number of output permutations.
(This is the application of Shannon Information Theory, that the number of bits n = log2(P'/P), where P' and P are the *a posteriori* and *a priori* probabilities respectively. The divisor of four is because there are about 16 = 2^4 operators normally used.)
You may end up with thirty or forty rules.
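As a worked example of that heuristic (the counts `i` and `o` below are invented purely for illustration):

```python
import math

# Hypothetical problem size: suppose the questions admit ~1400 distinct
# input permutations and the catalogue ~14 distinct build configurations.
i = 1400   # input permutations
o = 14     # output permutations

# Rough rule-operator estimate from the heuristic above: sqrt(i * o) / 4.
estimate = math.sqrt(i * o) / 4
print(round(estimate))  # 35 operators, i.e. on the order of thirty to forty rules
```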
Create some use cases that exercise the extremes as well as some of the typical cases from among the permutations in inputs and outputs. Run your system on those cases and observe the system behavior. Learn how to debug by outputting intermediate results or stepping through rule execution.
There are no shortcuts to researching and developing other than not wasting time worrying about how much time it will take. You can also optimize your homework time by learning the tools and then stepping back, taking a deep breath, and saying, "I can do this!"
Upvotes: 2 |
2018/06/04 | 3,675 | 16,287 | <issue_start>username_0: We are doing a research design project on autonomous vehicles and have some questions on AV Levels 4/5; specifically on the roles, impacts and consequences of AV on society, government, users and other stakeholders.
We're currently stuck on this main question:
Q: What functionally, does control look like in AV levels 4 and 5?
For example, is the whole purpose of a level 4/5 that a user has no input into the control?
Could a driver in AV (level 5) stop in an emergency, or say they want to "take corners harder, speed up, slow down"?
Could I choose to change the equi-distance between my AV and the others around me because I like space?
We're wondering about what functionally, does AV level 4/5 offer a user; and what it looks like?
---
Context:
Our remit is within the world of design (design thinking), not specifically technology, or expert system functionality. We're looking at the issue from a design perspective; who does it impact, who are the stakeholders, what are the consequences and impacts. What role does a driver have an in level 5? Could an auto-manufacturer want to give drivers control in level 5? How do emergency services act in these situations? What are the touchpoints to society and whom does it impact and what does it say about the design of AV for the future of society.<issue_comment>username_1: **Automation Levels**
Most cars have some Level 1 automation, such as cruise control and various skid/flip probability reduction systems. Most high volume passenger vehicles have higher levels. Some military and private air, land, and sea equipment are already at Level 5.
Level 4 requires that driving be automated during normal driving conditions, with manual override. However, to my knowledge, no one has published a mathematically terse and comprehensive distinction between normal and abnormal to aid in testing Level 4 vehicles, so testing to a standard is probably not yet possible.
For legal and political reasons, Level 5 is essentially a statistical criterion. For fully automatic to be viable as a market product for general public use, the safety data for passengers and pedestrians must indicate at least the level of safety for manually driven vehicles. Although this will likely suffice for law and public relations purposes, it is inadequate as a quality standard for automation engineering and testing. The ambiguities are numerous.
* Statistical criteria required to pass the test (i.e. sample size, duration, randomization, and double or single blindness)
* Mathematically terse and comprehensive scenarios for the test
* Allowable proportion of level 2, 3, or 4 vehicles in the control group.
* Probably others
There will be no vehicle driver in what they are calling Level 5 — only passengers. The idea is to give no power to the occupants of the AV other than destination or change in destination.
This has been the safety norm in other sub-sectors of transportation for a century. Passengers cannot talk to the pilot of a jet or the engineer of a train. In the majority of cases, safety is compromised whenever a person that has not undergone the discipline of intense safety training has control over any aspect of the vehicle's operation.
That is the primary impetus behind AV from the forward thinkers in government and academia.
**Specificity and Insight**
It is of paramount importance that researchers define system criteria more specifically and scientifically. Systems architecture, software engineering, safety evaluation, and quality control policies and procedures of the automated system driving the AV requires such.
With a billion plus lives at stake, the design should progress with the diligence and care as if designing a human-occupied drone aircraft or a civilian Mars lander from the ground up, even if your first phase is to only achieve what is being called Level 4 for basic passenger cars.
**Target Reliability and Safety**
Humans eat food, have emotional conversations, text each other, ingest mind altering substances, and fall asleep while driving. Judging the safety statistics of an AV by comparing with those of humans driving may sound practical, but it is absurd. It will become clear just how absurd the popularized Level 5 criteria is as the parameters of design are enumerated.
Design should instead minimize the possibility of any accident ever. The goal should be zero mistakes, both at point of sale (the dealership) and at later points in the product life-cycle as the AV learns.
**Defining a Mistake**
A mistake should be defined as follows:
>
> Any less than optimal state indicated by the correction signal used to direct reinforced learning in any of the system's re-entrant or coincident training mechanisms
>
>
>
The adaptive (i.e. machine learning) portion of the system must permit re-entrant or coincident training (reinforced learning) because there is no possible way to predict the common routes of the buyer at the vehicle's point of sale.
To comprehend the complexity of the problem space for AVs and begin to simplify it, consider the dimension of conditions, controls, and priorities (embodied in feedback signals) related to driving on roads for any vehicles that use roads.
**Control Channels**
* Starters (there are two in the case of most hybrids)
* Engine stop
* Braking controls (there are two in the case of regenerative braking)
* Steering shaft or hydraulic control position
* Brake control position (three or four depending on emergency/parking brake design)
* Transmission planetary gear clutch state, traditional automatic transmission control state, or clutch and traditional manual transmission control state
* External lighting switch positions (Widely variable, but at least six for headlight, high beam, break light, left signal, right signal, and tail light)
* The content of any messages broadcasted, multicasted, or sent specifically to any other vehicles with compatible reception (if birds do it, so can electronic systems) either via light, sound, or RF (this will require the development of layers of inter-vehicle communications protocols)
* Horn
* Probably others
**Data Acquisition Channels**
* Wheel positions (there may be 2 or 4 positions to read with an encoder since one cannot assume perfect alignment)
* Brake torques (4 of them, which can be read by 16 redundant strain gauges)
* Brake metal temperatures (4)
* Torques and temperatures for any independent emergency brakes
* Accelerometers (two devices x three dimensions per device to detect acceleration/deceleration, centripetal force, and, with some math, tire skid velocity for all four tires in two dimensions)
* Tachometers (one before the transmission and one on each wheel)
* Engine and coolant temperature detectors
* Cameras (must be high resolution to recognize animals, humans, shopping carts, curbs, road signs, train signals, speed bumps/humps, and hazards, which can be IR or visible and the more angles covered the better)
* Wind turbulence resistant external microphones to detect horns and sirens (at least four to detect likely orientation of audio source)
* Suspension strain gauges (to detect vertical road force on each tire)
* The content of any incoming messages from compatible systems
* Battery voltages and currents (two batteries for regenerative breaking or hybrid startup and motive assist, and possibly several currents)
* Fluid levels, pressures, viscosity, and transparency (fuel, oil, steering, transmission, brake hydraulic, and possibly others)
* Probably others
A system can be operational and possibly reach what most would consider Level 5 with fewer channels of acquisition and control than above, but it would be poor technology planning to start designing systems with unnecessary limitations. Such limitations will also very likely increase the cost of engineering and the effectiveness of training while saving nothing.
**Why Vehicles Have Very Little Instrumentation Today**
A human cannot make use of all of the information above. Nor could a human control all the channels listed above without a high frequency of mistakes. A properly designed electromechanical learning system can.
It would be lazy systems architecture not to capitalize on the positive impact the additional instrumentation would have on safety, total cost of ownership, and other quality criteria for the AV buyer who can afford the extra sensors and computing power. Furthermore, after a few years of manufacturing for a mass market, the cost of the additional components will become small in comparison with the cost of metal and plastic.
**Operational Criteria and the Formalization of a Mistake**
The problem space contains at least nineteen dependent variables (output channels) and forty-six independent variables (input channels). Some are binary, some are floating point, some are streamed data, some are streamed audio, and some are streamed video.
Together they form a space in sixty-five dimensions. That is what must be optimized according to some predetermined and possibly re-programmable formalization of what is optimal.
Let's consider this idea of optimum safety, thrift, and comfort as quality control criteria. Real time quality control should follow TQM ideals, continuously ensuring quality in multiple dimensions and at multiple points in the total system.
* Maximal distance from other vehicles
* Maximal distance from stationary objects (bridge abutments, buildings)
* Maximal distance from pedestrians on foot, bike, wheelchair, ...
* Maximal distance from edges of pavement
* Minimal lane switching
* Minimal loads and torques on wheels
* Minimal fuel consumption
* Within operating parameters for tires, brakes, engine RPM, and dozens of other parts and subsystems
* Shortest distance for route to destination
* Shortest time distribution mean for route to destination
* Safest route to destination
* Least stops on route to destination
* Probably others
Most of this must be balanced, so optimization criteria must be aggregated. Such aggregation must go beyond the simplicity of a loss function. Summing squares will not work at all, so give up aggregating in such a simplistic way. A multivariate extension of the Pythagorean Theorem is fine for calculating distance in linear space, but driving cars is very non-linear. This kind of robotics system will require more thought in the formulation of balances, priorities, and the concept of emergency.
Further expanding on the above definition of a mistake, any real time control that does not optimize the sixty-five-plus dimensional surface is faulty. Now what is optimal must be defined. Consider the following quality control criteria, roughly in correct order.
* Pedestrian safety
* Passenger safety
* Mechanical system integrity
* Fuel conservation
* Vehicle external coating integrity
* Mechanical system wear
* Passenger comfort
* Time conservation in reaching destinations
* Probably others
**Applying Optimization in This Context**
Aggregation of the incoming signals acquired are not only based on multiple criteria but the prioritization is not always constant, implying the need for a vector of correction signals rather than a single floating point number.
A single dimension of signaling to feed the disparity between ideal operation and the current system behavior (called a loss function in gradient descent) will not suffice. There will, out of necessity, be a need for training and reinforcement with a complexity that involves the idea of preemption. Evolution has declared preemption the design of choice for nervous systems with brains.
For instance, the pedestrian safety feedback signaling must always preempt the fuel conservation feedback no matter how much fuel would be consumed in staying clear from pedestrians, in planning a route where pedestrian density is lower, steering the vehicle clear of people, choosing speed, and applying braking.
All biological systems have these preemption mechanisms — even bacteria. A turtle doesn't balance the transportation aspect with safety when it retracts under its shell. The behavioral interest in the turtle's destination is shelved (stored and temporarily forgotten) until the preemptive system that detected danger indicates the danger has passed.
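A toy sketch of that preemption idea (all names, behaviours, and priorities here are invented for illustration — real AV control stacks are vastly more involved): a fixed-priority arbiter in which the highest-priority behaviour that requests control wins, and lower-priority goals are shelved until it releases.

```python
# Behaviours ordered from highest to lowest priority, mirroring the
# quality-control ordering above (pedestrian safety preempts everything).
def pedestrian_safety(inputs):
    if inputs.get("pedestrian_near"):
        return "brake_hard"          # request control
    return None                      # no request: defer to lower priority

def fuel_conservation(inputs):
    return "coast"                   # always has a (low-priority) opinion

def arbitrate(behaviours, inputs):
    """Return the command of the highest-priority behaviour that fires."""
    for behaviour in behaviours:     # list is ordered by priority
        command = behaviour(inputs)
        if command is not None:
            return command
    return "maintain"

stack = [pedestrian_safety, fuel_conservation]
print(arbitrate(stack, {"pedestrian_near": True}))   # brake_hard
print(arbitrate(stack, {"pedestrian_near": False}))  # coast
```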
**Humans Should be Passengers on the Streets of the Future**
The reason humans are generally driving in a continuous state of mistaken control is because the priorities that maximize the parameters of transportation for society (above) are inconsistently followed by humans. Birds fly smarter than humans drive. The priorities of an emotional being that wants to get somewhere fast and while talking, texting, eating, and possibly getting high will often be mistaken.
Future people may look back at the period between the advent of Model-T market penetration and the complete transition to AV as a period of strange inequality. Stepping back, the worldwide interest in domestic security, airline safety, train and subway safety, and building code contrasts strongly against the cultural insistence of every household have instant access to drive anywhere, any time, and in any mental condition.
Upvotes: 3 [selected_answer]<issue_comment>username_1: Control should look like low numbers in highway and city accident report statistics.
There will be no drivers in Level 5 AV. In fact, there may be no driver's position in the vehicle as with train passenger cars and dining cars. This is quite different than Level 4. In fact, more levels will probably emerge because of the huge jump from 4 to 5 in the current understanding.
The AV will stop in an emergency because the AV technology to do so will have proven its ability to determine and react to an emergency condition more quickly and reliably than a human driver. The passengers will still be under control, just not THEIR control, except for their determining the destination address. Control should be a communal function, since the danger when control fails is communal.
This is the benefit of Level 5 over Level 4. Highway pileups and many other accident scenarios are caused by human inability to react to risks. Pileups are usually caused by human perception of a normal driving condition as an emergency and jamming on the brakes unnecessarily. Most drivers will not steer into a skid even though they've heard to do so. Rather than controlling the risk with skill, most people react in a way that reduces their tentative control of the vehicle when a skid begins.
Regarding the passengers liking space between cars, the AV will prefer space too because proximity creates risk, so that question is moot.
The primary impact of AV on society will be the gradual demolition of cultural values about engine power, which is reaching its time anyway because of the reduction of the burden of transportation on energy consumption that removing adrenaline and testosterone from the roads and highways will bring forth.
Government is pro AV simply because many humans like to "drive through" when traveling far, drive when on prescription medications that warn against operating heavy machinery (which cars are), and text in heavy traffic and in school zones.
The automotive, aeronautical, and transportation industry is not a primary driver of the AV technology. Their profit is maximized by occasional (but not frequent) accidents, stress on engine and body components, sales pitches about safety and power unsubstantiated with actual data, and marketing that equates freedom with control. With rare accidents, minimized wear, and accountability created by deeper scrutiny there may be significant profit lost.
Insurers will gain. Ticket and accident attorneys will lose. Parents of children in school will win. Body shops will lose. The electronics industry will win. Tire sales will drop. Government will be able to pocket more tax revenue. Gear heads may eventually be thought of like KKK members are in today's postmodernism. Custom wheel manufacturers will win. Towing services will lose. Bicyclers will win. Petroleum extractors will lose.
Just as the personal automobile changed the horse industry and the Internet changed publishing, AVs will lead to adjustments throughout government and the economy.
Upvotes: 1 |
2018/06/05 | 1,528 | 6,376 | <issue_start>username_0: I trained a DQN that learns tic-tac-toe by playing against itself with a reward of -1/0/+1 for a loss/draw/win. Every 500 episodes, I test the progress by letting it play some episodes (also 500) against a random player.
As shown in the picture below, the net learns quickly to get an average reward of 0.8-0.9 against the random player. But, after 6000 episodes, the performance seems to deteriorate. If I play manually against the net, after 10000 episodes, it plays okay, but by no means perfect.
Assuming that there is no hidden programming bug, *is there anything that might explain such a behavior? Is there anything special about self-play in contrast to training a net against a fixed environment?*
[](https://i.stack.imgur.com/y3xOx.png)
Here further details.
The net has two layers with 100 and 50 nodes (and a linear output layer with 9 nodes), uses DQN and a replay buffer with 4000 state transitions. The shown epsilon values are only used during self-play, during evaluation against the random player exploration is switched off. Self-play actually works by training two separate nets of identical architecture. For simplicity, one net is always player1 and the other always player2 (so they learn slightly different things). Evaluation is then done using the player1 net vs. a random player which generates moves for player2.<issue_comment>username_1: There are lots of ways that RL agents can fail to learn properly, so you are faced with a little bit of experimentation and maybe bug hunting unfortunately. However, from the description you have given in the question and comments, I can make a few observations and guesses about where to look:
* Your metric of average reward against a random player is sensible. In this case, you could also use a perfect player (one that ideally randomises its choice among equally optimal moves), where you would see a maximum averaged return of zero - this would be helpful to know if your agent had learned a fully optimal behaviour, because it would consistently score zero. In general for more complex games a perfect player is not available to test with, but as you are learning here it might help you.
* Your DQNs might be unable to fit the value function. You can test that in this case by getting the value function from an optimal self-play player (all the values will be -1, 0, or 1) and using a supervised learning approach, separately from your agent. You should be able to get a loss very close to zero - if you cannot do that, then something could be wrong with your network architecture.
* Whilst you are training, even though you are using a variation of Q-learning (which learns an optimal policy even whilst exploring other actions), your DQNs are not learning optimal play. That is because you have used two agents. In DQN, the algorithm is not aware that there are other learning agents, and it will treat any other agents as if they were part of the environment. Which means that the agents will spend some effort trying to set the game up for each other to make an exploration mistake. That could lead to non-optimal choices and a little bit of instability. Your decay of epsilon should help with that, although you are caught between a rock and a hard place here. You want to learn off-policy and explore, but are forced to reduce exploration. There are a couple of ways to resolve that, I will explain a bit further down . . .
* 10,000 games may not be enough. In the experiments I have done with TicTacToe agents, it seems between 20,000 and 50,000 games are required for a naive learner. More may be required if you have done something that makes learning inefficient. In addition, I found when adding more sophisticated learning approaches (in my case using eligibility traces) the agents appear to become close to optimal very quickly, but actually have flaws which take a long time to shake out, taking just as long as running a more naive algorithm. When the flaws got found and fixed, it upset the value function for a while and I saw fluctuations in my metrics similar to yours.
* Q-learning with NNs is inherently unstable. DQN implements some ideas to fix that, but it is not perfect. It is not uncommon to need to adjust the batch size and/or time steps between taking frozen copy of network for the TD target calculation. The initial stability followed by poor performance looks a lot like that instability too.
Regarding your use of two opposing agents, I can see two possible improvements:
1. Alternately train one or other agent in each game, don't train both at once. That will mean each agent is learning to play against the other agent playing its best without exploratory moves.
2. Combine networks into single agent description. As this is a zero-sum game, you can take player A's network for calculating values, and just have player B try to *minimise* the action value on its turn. That means use `min` and `argmin` functions for steps that represent player B's turn wherever player A would use `max` or `argmax`, including in the Q-value updates - this is typically easy to add to the inner loop of Q-learning, and should improve learning efficiency (essentially you are hard-coding knowledge that this is a zero-sum game and taking advantage of that symmetry).
Both of these ideas will free you up from caring about the value of epsilon, or decaying it - you can probably just leave it fixed at e.g. 0.1
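A minimal tabular sketch of idea 2 above (state and action names here are hypothetical; the same max/min asymmetry applies to the target-network calculation in a DQN):

```python
def q_update(Q, state, action, reward, next_state, next_actions,
             player_a_to_move_next, alpha=0.1, gamma=1.0):
    """One tabular Q-learning update for a zero-sum two-player game:
    player A maximises the shared action values, player B minimises them."""
    if not next_actions:  # terminal state: no bootstrap term
        target = reward
    else:
        next_qs = [Q.get((next_state, a), 0.0) for a in next_actions]
        # A picks max over Q, B picks min over the *same* Q table
        best = max(next_qs) if player_a_to_move_next else min(next_qs)
        target = reward + gamma * best
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (target - old)

# toy usage: after this move it is player B's turn, so the bootstrap
# target uses min(+1.0, -1.0) = -1.0
Q = {("s1", 0): 1.0, ("s1", 1): -1.0}
q_update(Q, "s0", 4, 0.0, "s1", [0, 1], player_a_to_move_next=False)
print(round(Q[("s0", 4)], 3))  # -0.1
```

With a single shared value table like this there is only one learner, so the non-stationarity caused by two independently learning agents disappears.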
Finally, as a test of whether your agent can cope with learning optimal play in general, you could have it learn against an already optimal agent. That is obviously not something you can do for more complex games, but might help you debug agent code and hyper-parameters of the network - it divides your problem up into "can it learn this at all" and "can it learn through self-play".
Upvotes: 2 <issue_comment>username_2: As epsilon is throttled down networks 1 and 2 can freely specialize to producing tic-tac-toe's well-known non-losing behaviour against quasi-perfect adversaries without encoding any non-losing or winning behaviour against random (in other words, bad) adversaries.
I suggest while training network 1 (1st mover) and reducing epsilon-1 you keep the 2nd mover's epsilon-2 at values distinctly above 0 (indeed, why not keep it fixed). Vice-versa for training the 2nd mover.
Upvotes: 0 |
2018/06/06 | 1,469 | 6,224 | <issue_start>username_0: In the paper [Progressive growing of gans for improved quality, stability, and variation](https://research.nvidia.com/sites/default/files/pubs/2017-10_Progressive-Growing-of/karras2018iclr-paper.pdf) (ICLR, 2018) by Nvidia researchers, the authors write
>
> Furthermore, we observe that mode collapses traditionally
> plaguing GANs tend to happen very quickly, over the course of a dozen minibatches. Commonly
> they start when the discriminator overshoots, leading to exaggerated gradients, and an unhealthy
> competition follows where the signal magnitudes escalate in both networks. We propose a mechanism to stop the generator from participating in such escalation, overcoming the issue (Section 4.2)
>
>
>
What do they mean by "the discriminator overshoots" and "the signal magnitudes escalate in both networks"?
My current intuition is that the discriminator gets too good too soon, which causes the generator to spike and try to play catch up. That would be the unhealthy competition that they are talking about. Model collapse is the side effect where the generator has trouble playing catch up and decides to play it safe by generating slightly varied images to increase its accuracy. Is this way of interpreting the above paragraph correct?
2018/06/07 | 1,416 | 6,139 | <issue_start>username_0: Disclaimer: I am a novice in the world of machine learning, so please excuse my ignorance.
My dataset consists of things like age, days since last visit, etc. This information is medical related. None of which is geometrical, just data pertaining to particular clients.
The goal is to classify my dataset into three labels. The dataset is not labeled, meaning I'm dealing with an unsupervised learning problem. My dataset consists of ~20,000 records, but this will linearly increase overtime. The data is nearly all floats, with some being strings that can easily be converted into a float. [Using this cheat sheet for selecting a solution from the scikit site](http://scikit-learn.org/stable/tutorial/machine_learning_map/index.html), a KMeans Cluster seems like potential solution, but I've been reading that having high dimensionality can render the KMeans Cluster unhelpful. I'm not married to a particular implementation either. I've currently got a KMeans Cluster implementation using TensorFlow in Python, but am open for alternatives.
My question is: what would be some solutions for me to further explore that might be more optimal for my particular situation?
2018/06/08 | 962 | 4,098 | <issue_start>username_0: I am reading a book about OpenCV, it speaks about some derivative of images like `sobel`. I am confused about image derivative! What is derived from? How can we derived from an image? I know we consider an image(1-channel) as a n\*m matrix with 0 to 255 intensity numbers. How can we derive from this matrix?
EDIT: a piece of text of the book:
>
> Derivatives and Gradients
>
>
> One of the most basic and important convolutions is computing derivatives
> (or approximations to them). There are many ways to do this, but only a few
> are well suited to a given situation.
>
>
> In general, the most common operator used to represent differentiation is the
> Sobel derivative operator. Sobel
> operators exist for any order of derivative as well as for mixed partial
> derivatives (e.g., ∂ 2 /∂x∂y).
>
>
><issue_comment>username_1: Imagine a line laid through the image. All pixels along the line count as values, so you can graph the pixels along the line like a function.
The derivative is of that 'function'. A black picture and a white picture have the same derivative (0), but a black-fading-to-grey image would have a constant derivative bigger or smaller than zero, depending on the direction of the line in relation to the fading. Hard contrasts have huge derivatives at the points in the line where the line crosses a white/black border. Usually the rows and columns are used as the lines, but you could also lay any oblique line, and some algorithms do.
The term 'derivative' is somewhat a misnomer in this case, as usually the pixel values do not get fitted by a function of which then a derivative is taken; instead the 'derivative' is taken directly by looking at the differences from one pixel to its neighbor.
There is a [thread](https://dsp.stackexchange.com/questions/16805/1d-first-order-derivative-mask) in dsp.stackexchange that deals with this, the following illustrative picture is from there:
[](https://i.stack.imgur.com/r2bUi.png)
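To make the "differences between neighbouring pixels" idea concrete, here is a small NumPy sketch of the horizontal Sobel operator applied to a toy one-channel image (a naive loop for clarity; OpenCV's `cv2.Sobel` does the same job with border handling and optimised code):

```python
import numpy as np

# horizontal Sobel kernel: responds to left-right intensity changes
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def correlate2d_valid(img, kernel):
    """Naive 'valid' 2-D correlation: slide the kernel over the image
    and sum the element-wise products (no padding)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# toy 1-channel image: black (0) on the left, white (255) on the right
img = np.zeros((5, 6))
img[:, 3:] = 255

gx = correlate2d_valid(img, sobel_x)
# gx is large and positive where the window straddles the black->white
# edge, and exactly zero in the flat black and white regions
print(gx[0])  # [   0. 1020. 1020.    0.]
```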
Upvotes: 3 [selected_answer]<issue_comment>username_2: The term **`Derivative of an Image`** in the context you mention has two meanings.
1. A matrix, image, or floating point number that is derived from an image via convolution, passing the image through a two dimensional NN, the application of an FFT analysis, or some other process. In this context, the word **Derivative** implies the direction of calculation: Image B is derived from image A.
2. A matrix or cube that represents the rate of change in the image being processed. The change is measured between only two adjacent pixels in a single dimension and only one direction at a time, but the applications of this technique are very limited, and such a sequence is one of differences, not at all a reasonable approximation of the derivative of light. What is more useful in real recognition systems are two dimensional or hexagonal windowing (Gaussian, Hamming, Hanning, trapezoidal, cosine, ...) across space and, for video, through time. The calculus term derivative should always reference the theoretical surface being approximated using these techniques, not the discrete matrix or cube that approximates the surface.
Such multidimensional convolution and neural network based approaches are less sensitive to capture noise and orientation nuances. Two dimensional whole image or windowed FFT techniques have met with much success because filtering the expected frequency range of features to be detected is merely an attenuation process. Two and three dimensional splines can also be tuned to be useful in the detection of features in an orientation independent way.
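As a small illustration of the windowing mentioned above, a 2-D Hann window can be built from the 1-D one via an outer product (NumPy sketch; the patch size of 8 is an arbitrary choice here):

```python
import numpy as np

def hann2d(n):
    """2-D Hann (Hanning) window built as the outer product of two
    1-D Hann windows; tapers smoothly to zero at all four borders."""
    w = np.hanning(n)
    return np.outer(w, w)

win = hann2d(8)
# multiply an 8x8 image patch by `win` before a 2-D FFT to reduce
# spectral leakage from the hard patch borders
print(win.shape, win[0, 0], round(win.max(), 3))
```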
In addition to gray scale analysis, color and transparency channels can be selected for independent or parallel analysis or added to the dimension of the fitting model from which the derivative is taken.
Advances in deep networks have blossomed into a new area of image processing and recognition research, bringing new hope to robotics, automated transportation, and cybernetics in general.
Upvotes: 2 |
2018/06/08 | 906 | 4,034 | <issue_start>username_0: I'm working with deep learning on some EEG data for classification, and I was wondering if there's any systematic/mathematical way to define the architecture of the networks, in order to compare their performance fairly.
Should the comparison be at the level of neurons (e.g. number of neurons in each layer), or at the level of weights (e.g. number of parameters for training in each type of network), or maybe something else?
One idea that emerged was to construct one layer for the MLP for each corresponding convolutional layer, based on the number of neurons after the pooling and dropout layers.
Any ideas? If there's any relative work or paper regarding this problem I would be very grateful to know.
Thank you for your time
Konstantinos
2018/06/09 | 1,198 | 4,995 | <issue_start>username_0: I have been looking at the Fibonacci series, the golden ratio, and its uses in nature, like how flowers and animals grow based on the series.
I was wondering whether we could use the Fibonacci series and the golden ratio in any way in AI, especially in evolutionary algorithms. Any ideas or insights?
Is this research material? If so where can we start?<issue_comment>username_1: [Face detection evaluation: A new approach based on the golden ratio Φ](https://www.researchgate.net/publication/236132090_Face_detection_evaluation_A_new_approach_based_on_the_golden_ratio_PH)
Abstract:
>
> Face detection is a fundamental research area in computer vision field. Most of the face-related applications such as face recognition and face tracking assume that the face region is perfectly detected. To adopt a certain face detection algorithm in these applications, evaluation of its performance is needed. Unfortunately, it is difficult to evaluate the performance of face detection algorithms due to the lack of universal criteria in the literature. In this paper, we propose a new evaluation measure for face detection algorithms by exploiting a biological property called Golden Ratio of the perfect human face. The new evaluation measure is more realistic and accurate compared to the existing one. Using the proposed measure, five haar-cascade classifiers provided by Intel©OpenCV have been quantitatively evaluated on three common databases to show their robustness and weakness as these classifiers have never been compared among each other on same databases under a specific evaluation measure. A thoughtful comparison between the best haar-classifier and two other face detection algorithms is presented. Moreover, we introduce a new challenging dataset, where the subjects wear the headscarf. The new dataset is used as a testbed for evaluating the current state of face detection algorithms under the headscarf occlusion
>
>
>
Upvotes: 0 <issue_comment>username_2: The use of the Golden Ratio is an interesting suggestion which has intrigued many lovers of the mathematical beauty represented in nature and in AI. The problem lies in the foundations of the AI applications. For example, in designing algorithms for recognizing naturally occurring phenomena, such as face recognition or human body movements (See <https://www.intechopen.com/books/machine-learning-and-biometrics/a-human-body-mathematical-model-biometric-using-golden-ratio-a-new-algorithm>) it is suitable. However, for non natural occurrences, the ratio is limited since the data is usually random or chaotic. However, in order to create a master algorithm for the future which encompasses all the best of the current AI algorithms, the use of mathematical concepts such as the golden ration and fractals will be vital. Watch this space...
Upvotes: 1 <issue_comment>username_3: I mean you can predict these sequences quite easily(with varying levels of accuracy) just by using LSTMs in the time series forecasting context. Obviously, as the number of digits you give it increases, it will increase the prediction accuracy of the next in the sequence(with some caveats), as we can think of neural networks more generally as connectionist function approximators(nonlinear in almost all cases).
As far as direct applications in AI, I suppose not beyond mathematical modeling and economic/financial modeling as these sorts of sequences emerge from a vast majority of pure and applied mathematical concepts. This research is quite relevant and ongoing([1](https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/95WR01955),[2](https://content.iospress.com/articles/algorithmic-finance/af176),[3](https://dl.acm.org/citation.cfm?id=1143891)).
Upvotes: 0 <issue_comment>username_4: It should be a short amount of time before we start seeing exponential complexity growth as the target of AI algorithms, likely involving the golden ratio.
<https://en.wikipedia.org/wiki/Golden_ratio#Relationship_to_Fibonacci_sequence>
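That relationship is easy to verify numerically: the ratio of consecutive Fibonacci numbers converges to φ = (1 + √5)/2 (a quick Python check):

```python
import math

def fib_ratios(n):
    """Yield the ratio F(k+1)/F(k) for n consecutive Fibonacci pairs."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
        yield b / a

phi = (1 + math.sqrt(5)) / 2  # ~1.6180339887
ratios = list(fib_ratios(20))
print(abs(ratios[-1] - phi) < 1e-6)  # True: the ratios converge to phi
```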
We are already using the golden ratio to perform quantum computations :
<https://www.quora.com/How-are-quantum-physics-and-the-golden-ratio-connected>
So, once we scale parallelization to GPU-like networks of quantum processors, we can then be sure we have entered territory where AI and the Golden Ratio are deeply intertwined.
As far as accelerating the learning models/algorithms with them, we can only hope something so fantastic would be found in the future of AI as well; who knows, maybe they will come about through necessity, much like the biological computers we all execute and replicate our code from, and their environment:
<https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6047800/>
Not to say it will be the only property present, as the article concludes the golden ratio is "most likely" only related by "chance". I'm sure it's the same random chance by which it relates to black hole entropy, and everything else it relates to :P.
Upvotes: 0 |
2018/06/10 | 1,489 | 6,066 | <issue_start>username_0: In our lives, we meet different people and judge their common sense based on how they act in a situation. For example, highly extroverted people are able to deal with others without any awkwardness; for them, knowing how to deal with people comes as common sense. But, in the case of scientists, the approach to solving a problem may be common sense that ordinary people cannot see.
How can we define common sense in an AI agent?<issue_comment>username_1: I came up with a few ideas I would argue are valuable in motivating the idea of common sense for a machine learning model.
* Common sense is retrospective. We define it in terms of a past (sensible) actions and conditions, and we can say someone has good common sense on the basis of their behavior, which we can view as the sum of their historical actions and the degree to which they were sensible.
* The actions alone are not sufficient for demonstrating common sense; the mental model(s) that generated those actions is important as well. Why did the individual take those sensible actions? Is their reasoning behind their action sensible as well, or did they merely get lucky (put another way, does their generally nonsensical mental model produce sensible actions in certain cases and nonsensical actions in most others)? This hints that common sense is contingent on the rationalization of the actions themselves.
* Given the former, common sense is contingent on the enumeration of possible actions and choosing the right one, given the *context* of the situation. For example, I’m walking on a path and see a snail. What are some actions? I could keep walking, stop and admire it for a while, step on it, or eat it. The first two options are sensible if I value the snail as a living being. The first option is sensible if I’m in a rush. The second and fourth may be sensible if I’m a chef exploring nature for new potential ingredients. The third option if I recognize the snail as invasive. We say someone has common sense when, given the context, the actions chosen are sensible and we can reason about them intuitively.
My guess is that, the *intuition* behind the perceived sense of an action is what you’re after. I’d argue that, ultimately, the intuition of common sense is defined by the person developing the model and will have to do with the formulation of the model (e.g. its assumptions, the objective function, etc.). After all, common sense is subjective and context specific.
Concretely, a model can have common sense if the developer bakes it in and we can use inferential methods to demonstrate this. For example, in a Word2Vec model, we might see that $\mathsf{Paris} \mapsto \mathsf{France}$ and would expect that $\mathsf{Tokyo} \mapsto \mathsf{Japan}$. To interrogate this, we might do some vector math and find that $\mathsf{France} - \mathsf{Paris} + \mathsf{Tokyo} \approx \mathsf{Japan}$. How and if the AI model develops the larger association between $\mathsf{Capital} \mapsto \mathsf{Country}$, however, comes down to how the developer built and trained the model to recapitulate their own common sense.
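The analogy arithmetic can be sketched with hand-made 2-d vectors (real embeddings are learned, high-dimensional, and only approximately satisfy such relations; everything below is contrived for illustration):

```python
import numpy as np

# Toy "embeddings": each country sits at a fixed offset from its capital.
emb = {
    "Paris":  np.array([1.0, 2.0]),
    "France": np.array([1.0, 5.0]),   # Paris + (0, 3)
    "Tokyo":  np.array([6.0, 1.0]),
    "Japan":  np.array([6.0, 4.0]),   # Tokyo + (0, 3)
}

# country - capital + other_capital should land near the other country
query = emb["France"] - emb["Paris"] + emb["Tokyo"]
nearest = min(emb, key=lambda w: np.linalg.norm(emb[w] - query))
print(nearest)   # Japan
```

In a trained Word2Vec model the nearest neighbour is found the same way, just over tens of thousands of learned vectors instead of four hand-picked ones.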
Upvotes: 2 <issue_comment>username_2: I define "common sense" in line with how it applies to human beings, and extend it to intelligent agents;
>
> An algorithmic agent which has the ability to make effective decisions in common situations, in line with the way humans perceive their environment.
>
>
>
According to <NAME>,
>
> The first artificial intelligence program proposed to address common sense was [Advice Taker](https://en.wikipedia.org/wiki/Advice_taker)
>
>
>
Currently, common sense is an unsolved problem in AI. For more information, see <NAME>'s
*[Programs with Common Sense](http://www-formal.stanford.edu/jmc/mcc59.pdf)*.
Upvotes: 2 <issue_comment>username_3: My sense is that common sense tends to be axiomatic. To avoid pitfalls, a degree of wisdom may also be required in that axioms may not apply in all contexts. [See [Axiomatic System]](https://en.wikipedia.org/wiki/Axiomatic_system).
A major problem is that science often demonstrates that intuition and "common sense" lead to incorrect conclusions. <NAME> covers this topic for the general public in his book [Death By Black Hole](https://en.wikipedia.org/wiki/Death_by_Black_Hole):
>
> Chapter 3, "Seeing Isn't Believing", hints at the pitfalls of generalizing from too little evidence. It begins by making the point that although we know the Earth is round, it appears flat when one observes only a small, local portion of it.
>
>
>
A very famous example comes from mathematician <NAME>:
>
> During World War II, Wald was a member of the Statistical Research Group (SRG) where he applied his statistical skills to various wartime problems. These included methods of sequential analysis and sampling inspection. One of the problems that the SRG worked on was to examine the distribution of damage to aircraft to provide advice on how to minimize bomber losses to enemy fire. There was an inclination within the military to consider providing greater protection to parts that received more damage but Wald made the assumption that damage must be more uniformly distributed and that the aircraft that did return or show up in the samples were hit in the less vulnerable parts. Wald noted that the study only considered the aircraft that had survived their missions—the bombers that had been shot down were not present for the damage assessment. The holes in the returning aircraft, then, represented areas where a bomber could take damage and still return home safely. Wald proposed that the Navy instead reinforce the areas where the returning aircraft were unscathed, since those were the areas that, if hit, would cause the plane to be lost.
> Source: [Abraham Wald (wiki)](https://en.wikipedia.org/wiki/Abraham_Wald)
>
>
>
My sense is that "confidence levels" may be the main technique driving toward algorithmic "common sense", specifically in that the algorithm is questioning its assumptions.
Upvotes: 1 |
2018/06/11 | 726 | 3,267 | <issue_start>username_0: I would like to develop a machine learning algorithm, given two photos, that can decide which image is more "artistic".
I am thinking about somehow combining the two images, feeding them to a CNN, and getting an output of 0 (the first image is better) or 1 (the second image is better). Do you think this is a valid approach? Or could you suggest an alternative way for this? Also, I don't know how to combine two images.
Thanks!
Edit: Let me correct "artistic" to "artistic according to me", but it doesn't matter; I am more interested in the architecture. You can even replace "artistic" with something objective. Let's say I would like to determine which photo belongs to a hotter day.<issue_comment>username_1: I think the real problem is: what defines a more artistic image? It's really subjective. Complexity of the work might be one aspect to consider, but still a subjective one; you might want to make the purpose of your ML algorithm more objective or well defined.
But then again, it's just my opinion.
Upvotes: 1 <issue_comment>username_2: You need to define a scoring function, which returns a value of whatever criterion you are interested in, be it 'artisticity' or 'heat'. This could be something you use a machine learning algorithm for, provided you have a set of training data with labelled images.
You then need to extract features from the images. In the case of 'heat' this could be a colour spectrum (i.e. the distribution of colours across the image), or whatever. If it's a reasonably small image, raw pixel values might be feasible. These features you can feed into your algorithm, and try to learn an association between the feature values and the (assigned by you) label of the image. You will end up with a classifier that takes image features as input, and returns the labels as output. The quality of the classifier depends on your images, the features you selected, and the task. If there is no structure in the data, then a classifier will not work properly.
If you have a continuous value (e.g. temperature in degrees C), you would run your two sample images through the classifier, and then compare the output values.
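A minimal sketch of that pipeline, with a crude histogram feature and an *untrained* placeholder scorer; in practice the weights `w` would be fitted to your labelled images (everything below is made up for illustration):

```python
import numpy as np

def features(img):
    """Crude feature vector: an 8-bin intensity histogram, normalised."""
    hist, _ = np.histogram(img, bins=8, range=(0.0, 1.0))
    return hist / hist.sum()

def score(img, w):
    """Scalar score for the chosen criterion ('heat', 'artisticity', ...)."""
    return float(features(img) @ w)

def pick(img_a, img_b, w):
    """0 if the first image scores higher, 1 otherwise."""
    return 0 if score(img_a, w) > score(img_b, w) else 1

rng = np.random.default_rng(0)
w = rng.standard_normal(8)            # placeholder for learned weights
a, b = rng.random((16, 16)), rng.random((16, 16))
print(pick(a, b, w))
```

The comparison itself is trivial once you have a scoring function; all the real work goes into choosing features and fitting `w` to labelled data.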
Upvotes: 2 <issue_comment>username_3: My feeling is that, because you are dealing with a subject that is highly subjective, you need to integrate whatever learning algorithm you use with human feedback.
This is to say, crowdsource human opinions on a data set of pictures, train the algorithm to try to intuit what qualities images with similar human rankings share. (Both the positive and negative rankings.) Run the algorithm on a new data set and crowdsource that data set to see if the algorithm gets it right. Rinse & repeat.
You may also want to utilize demographics in the human crowdsourcing. Art historians will likely have different criteria for what makes a photo artistic than the general public. Humans with different educational backgrounds will likely have different criteria for what makes a photo artistic.
Sans the crowdsourcing, the algorithm will have no ability to determine the aesthetic qualities other than what the programmer defines, and in that case, the algorithm will only be able to determine what photos are artistic to the programmer, limiting the utility.
Upvotes: 0 |
2018/06/14 | 371 | 1,426 | <issue_start>username_0: It seems that older RNNs have a limitation for their use cases and have been outperformed by other recurrent architectures, such as the LSTM and GRU.<issue_comment>username_1: These newer RNNs (LSTMs and GRUs) have greater memory control, allowing previous values to persist or to be reset as necessary for many sequences of steps, avoiding "gradient decay" or eventual degradation of the values passed from step to step. LSTM and GRU networks make this memory control possible with memory blocks and structures called "gates" that pass or reset values as appropriate.
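The gating idea can be sketched with a bare-bones GRU step (the weight matrices below are contrived placeholders that training would normally set): driving the update gate towards 0 makes the previous state pass through almost unchanged.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(Wz @ x + Uz @ h_prev)            # update gate: keep vs rewrite
    r = sigmoid(Wr @ x + Ur @ h_prev)            # reset gate: expose old state
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))
    return (1 - z) * h_prev + z * h_tilde        # Cho et al. (2014) formulation

n = 2
x, h_prev = np.ones(n), np.array([0.5, -0.3])
Wz = Uz = -20 * np.eye(n)                        # forces z ≈ 0 ("remember")
Wr = Ur = Wh = Uh = 0.1 * np.eye(n)
h_new = gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh)
print(h_new)                                     # ≈ [0.5, -0.3]: state preserved
```

With the gate driven the other way (z ≈ 1) the same cell rewrites its state from the candidate `h_tilde`, which is exactly the "pass or reset" behaviour described above.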
Upvotes: 3 [selected_answer]<issue_comment>username_2: [LSTMs](https://www.bioinf.jku.at/publications/older/2604.pdf) or [GRUs](https://arxiv.org/abs/1406.1078) are **computationally** more effective than the standard RNNs because they explicitly attempt to address the [vanishing and exploding gradient problems](http://ai.dinfo.unifi.it/paolo/ps/tnn-94-gradient.pdf), which are [numerical problems](https://en.wikipedia.org/wiki/Numerical_analysis) related to the vanishing or explosion of the values of the gradient vector (the vector that contains the partial derivatives of the loss function with respect to the parameters of the model) that arise when training recurrent neural networks with gradient descent and [back-propagation through time](http://axon.cs.byu.edu/~martinez/classes/678/Papers/Werbos_BPTT.pdf).
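The vanishing-gradient effect itself is easy to see numerically: back-propagation through time repeatedly multiplies the error signal by the recurrent Jacobian, so with a spectral radius below 1 it decays geometrically (toy numbers below):

```python
import numpy as np

W = 0.5 * np.eye(4)        # recurrent weight matrix, spectral radius 0.5
g = np.ones(4)             # gradient arriving at the final time step
for _ in range(50):        # propagate back through 50 time steps
    g = W.T @ g            # (a tanh-derivative factor <= 1 only shrinks it more)

print(np.abs(g).max())     # 0.5**50 ≈ 8.9e-16: the signal has vanished
```

With a spectral radius above 1 the same loop explodes instead; the gates in LSTMs and GRUs are precisely what lets the effective multiplier stay near 1 when the network needs to remember.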
Upvotes: 2 |
2018/06/15 | 2,108 | 8,938 | <issue_start>username_0: I read <NAME>'s [The Book of Why](https://rads.stackoverflow.com/amzn/click/com/046509760X), in which he mentions that deep learning is just a glorified curve fitting technology, and will not be able to produce human-like intelligence.
From his book there is this diagram that illustrates the three levels of cognitive abilities:
[](https://i.stack.imgur.com/vxTLx.png)
The idea is that the "intelligence" produced by current deep learning technology is only at the level of association. Thus the AI is nowhere near the level of asking questions like "how can I make Y happen" (intervention) and "What if I have acted differently, will X still occur?" (counterfactuals), and it's highly unlikely that curve fitting techniques can ever bring us closer to a higher level of cognitive ability.
I found his argument persuasive on an intuitive level, but I'm unable to find any physical or mathematical laws that can either bolster or cast doubt on this argument.
So, is there any scientific/physical/chemical/biological/mathematical argument that prevents deep learning from ever producing strong AI (human-like intelligence)?<issue_comment>username_1: It is a paradox, but a deep learning machine (defined as a NeuralNet variant) is unable to learn anything. It is a flexible and configurable hardware/software architecture that can be parametrized to solve a lot of problems. But the optimal parameters to solve a problem are obtained by an external system, i.e. the back-propagation algorithm.
The back-propagation subsystem uses conventional programming paradigms; it is not a neural net. This fact is in absolute opposition to the human mind, where learning and the use of knowledge are done by the same system (the mind).
If all the really interesting things are done outside the NN, it is difficult to claim that a NN (in any variant) can develop into an AGI.
It is also possible to find some more differences. Neural nets are strongly numerical in their interface and internals. From this point of view, they are an evolution of support vector machines.
Too many differences and restrictions to expect an AGI.
Note: I strongly disagree with the diagram included in the original question. "Seeing", "doing", "imagining" are levels that are absolutely wrong. It ignores basic and common software concepts such as "abstraction" or "program state" (of mind, in Turing's words); applied-AI ones such as "foresee"; and AGI ones such as "free will", "objectives and feelings", ...
Upvotes: 0 <issue_comment>username_2: <NAME>'s 2018 comment on ACM.org, in his [*To Build Truly Intelligent Machines, Teach Them Cause and Effect*](https://cacm.acm.org/news/227877-to-build-truly-intelligent-machines-teach-them-cause-and-effect/fulltext), is piercing truth.
>
> All the impressive achievements of deep learning amount to just curve fitting.
>
>
>
It may be less sensational and more technically correct to state that it is not, "Just curve fitting," but rather, "sophisticated surface fitting." Nonetheless, his general assessment indicates the need to look beyond tuning nonlinear functions to fit a surface in $\mathbb{R}^n$ and consider whether cognition is achievable with a deep network. The split in answers to this question is odd. We have two conflicting assertions, often strongly stated.
1. Artificial networks cannot perform logic.
2. Artificial networks are the best approach to AI.
How can rationality be excluded from the list of important human features of intelligence, which is what these two assertions, taken together, would imply?
Is the human brain a network of sophisticated curve fitters? Marvin Minsky's famous quote, "The brain happens to be a meat machine," was offered without proof, and neither a proof of his trivialization of the human brain nor a proof that the brain is beyond the reach of Turing computability has been offered since.
When you read these words, are your neural networks doing the following sequence of curve fits?
* Edges from retinal rods and cones
* Lines from edges
* Shapes from lines
* Letters from shapes
* Linguistic elements from groups of letters
* Linguistic structures from elements
* Understanding from linguistic structures
The case is strong for the affirmation that the first five are a convergence mechanism on a model, and all the machine learning structure is just a method to fit the data to the model.
Those last two bullet items are where the paradigm breaks down and where many AI researchers and authors have correctly stated that machine learning has significant limitations when based solely on layers of multi-layer perceptrons and convolution kernels. Furthermore, the last bullet item is grossly oversimplified in its current state, probably by orders of magnitude. Even if Minsky is correct that a computer can perform what the brain does, the process of reading and understanding this paragraph could easily have a thousand different kinds of unique process components in patterns of internal workflow with massive parallelism. Imaging technology indicates this probability. We have computers modelling only the simplest peripheral layers.
>
> Is there any scientific/mathematical argument that prevents deep learning from ever producing strong AI? — No. But there is no such argument that guarantees it either.
>
>
>
Other questions here investigate whether these sophisticated curve fitters can perform elements of cognition or reasoning.
* [Could a neural network detect primes?](https://ai.stackexchange.com/questions/3389/could-a-neural-network-detect-primes)
* [Can neural networks be used to prove conjectures?](https://ai.stackexchange.com/questions/7416/can-deep-networks-be-trained-to-prove-theorems)
The totem of three in the question's image, seeing, doing, and imagining, is not particularly complete, accurate, or insightful.
* There are at least five sensory paradigms in humans, not one
* Doing preceded human senses by billions of years — bacteria do
* Imagining is not a significantly higher process than scenario replay from models of past experience with some method to apply set functions to combine them and inject random mutations
* Creativity may just be imagining in the previous bullet item followed by weeding out useless imagination results with some market-oriented quality criteria, leaving the impressive creative products that sell
The higher forms are appreciation, a sense of realities beyond the scope of scientific measurement, legitimate doubt, love, sacrifice for the good of others or humanity.
Many recognize that the current state of AI technology is nowhere near the procurement of a system that can reliably answer, "How can I make Y happen?" or "If I have acted differently, will X still occur?"
There is no mathematical proof that some combination of small curve fitting elements can or cannot achieve the ability to answer those questions as well as a typical human being can, mostly because there is insufficient understanding of what intelligence is or how to define it in mathematical terms.
It is also possible that human intelligence doesn't exist at all, that references to it are based on a religious belief that we are higher as a species than other species. That we can populate, consume, and exterminate is not actually a very intelligent conception of intelligence.
The claim that human intelligence is an adaptation that differentiates us from other mammals conflicts with whether we adapt well. We have not been tested. Come the next meteoric global killer with a shock wave of the magnitude of that of the Chicxulub crater's meteor, followed by a few to a thousand years of solar winter, and we'll see whether it is our 160,000-year existence or bacteria's 4,000,000,000-year existence that proves more sustainable. In the timeline of life, human intelligence has yet to prove itself significant as an adaptive trait.
What is clear about AI development is that other kinds of systems are playing a role along with deep learners based on the multi-layer perceptron concept and convolution kernels which are strictly surface fitters.
Q-learning components, attention-based components, and long short-term memory components are all surface fitters too, but only by stretching the definition of surface fitting considerably. They have real-time adaptive properties and state, so they can be Turing complete.
Fuzzy logic containers, rules-based systems, algorithms with Markovian properties, and many other component types also play their role and are not surface fitters at all.
In summary, there are points made that have a basis in more than plausibility or a pleasing intuitive quality, however, many of these authors do not provide a mathematical framework with definitions, applications, lemmas, theorems, proofs, or even thought experiments that can be scrutinized in a formal way.
Upvotes: 2 |
2018/06/15 | 911 | 4,038 | <issue_start>username_0: I'm trying to detect the **visual attention** area in a given image and crop the image into that area. For instance, given an image of any size and a rectangle of say $L \times W$ dimension as an input, I would like to crop the image to the most important visual attention area.
What are the state-of-the-art approaches for doing that?
(By the way, do you know of any tools to implement that? Any piece of code or algorithm would really help.)
BTW, within a "single" object, I would like to get attention. So object detection might not be the best thing. I am looking for any approach, provided it's SOTA, but Deep Learning might be a better choice.<issue_comment>username_1: You can search for the following paper titles:
1. A Deep Multi-Level Network for Saliency Prediction.
2. Beyond Universal Saliency: Personalized Saliency Prediction with Multi-task CNN.
You can code in python using Pytorch framework.
Upvotes: 2 <issue_comment>username_2: "Attention" in a (visual) neural network is the area of the image where the network can find the largest number of features to classify it with high confidence. Based on your description, you are talking about "soft attention".
*Do we have any tools or SDK to implement that?* I don't think there are ready-made SDKs available. It is much better to train a model on your dataset with attention. Once you have your base model ready, it is easy to add an attention mechanism to it. I suggest you check <https://arxiv.org/pdf/1502.03044.pdf>.
Upvotes: 0 <issue_comment>username_3: To get a computer to detect and provide the bounding box or circle around a visual attention area in an image, the basis for attention must be determined. Then the method of getting the computer system to make choices based on that basis can be selected. First things first.
Is it a face or body or game character that is to be the object of interest? Will it be the most dynamic object in the frame in terms of movement? If it is a person, is it always the same person? In either case, will their face be exposed to the angle of the camera? Are there only still shots, or will the images be frames in a movie?
Once you know how YOU would distinguish the object requiring attention from other objects and background, then you can begin to see how a computer might simulate that recognition. When training a deep network that involves convolution kernels (called a CNN or convolutional neural network) and possibly long-short term memory cells (LSTM), there are stages to the recognition.
Usually edges of things are detected first. In movies, the movement of edges is tracked as a feature of the image. Identifying what kind of object each element belongs to comes second. For instance, a toy might be detected by the way plastic reflects light and the color types and shapes common to toys. A face might first be recognized by identifying eyes, nose, mouth, chin, and ears.
After parts are identified, then entire objects can be identified through another stage of feature extraction. Vision systems follow the same basic principles of recognition that our human visual system uses.
There are many frameworks and libraries to help with these tasks, but to use them, it is important to get a general picture of the process and to clarify what it is that sets the objects of importance apart from other objects that may be similar or completely different, so that attention can be focused the way you want.
Once you have $(h\_{min}, v\_{min}); (h\_{max}, v\_{max})$, the coordinates of the two corners of your cropping operation, which would be the goal of your network training, then any image manipulation library could handle the crop.
That's the state of the art. There is no high level SDK that allows one to command the computer to find the most important item in the frame without any clarification of what is meant by that and training operations to teach the software to find what you've decided to be important based on some criteria. Not yet anyway.
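As a crude, non-deep baseline for the final cropping step, a gradient-magnitude map can stand in for a learned saliency map (a trained model would replace `sal` below; the whole setup is a sketch with made-up data):

```python
import numpy as np

def saliency_crop(img, L, W):
    """Crop img to the L x W window with the highest total saliency.

    Gradient magnitude is used as a crude saliency proxy; an integral
    image makes every window sum an O(1) lookup."""
    gy, gx = np.gradient(img.astype(float))
    sal = np.abs(gx) + np.abs(gy)
    I = np.zeros((sal.shape[0] + 1, sal.shape[1] + 1))
    I[1:, 1:] = sal.cumsum(axis=0).cumsum(axis=1)
    best, (br, bc) = -1.0, (0, 0)
    for r in range(img.shape[0] - L + 1):
        for c in range(img.shape[1] - W + 1):
            s = I[r + L, c + W] - I[r, c + W] - I[r + L, c] + I[r, c]
            if s > best:
                best, (br, bc) = s, (r, c)
    return img[br:br + L, bc:bc + W], (br, bc)

rng = np.random.default_rng(0)
img = np.zeros((20, 20))
img[5:10, 8:13] = rng.random((5, 5))       # a small textured region
crop, (r, c) = saliency_crop(img, 5, 5)
print(r, c)                                 # window lands on the textured patch
```

Swapping the hand-made `sal` for the output of a trained saliency model (such as the papers cited in the other answer) gives the deep-learning version of the same crop.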
Upvotes: 0 |
2018/06/16 | 1,026 | 4,430 | <issue_start>username_0: I understand how neural networks work and have studied their theory well.
My question is: On the whole, is there a clear understanding of how mutation occurs within a neural network from the input layer to the output layer, for both supervised and unsupervised cases?
Any neural network is a set of neurons and connections with weights. With each successive layer, there is a change in the input. Say I have a neural network with $n$ parameters that does movie recommendations, and let $X$ be a parameter that stands for the movie rating on IMDB. In each successive stage, there is a mutation of input $X$ to $X'$ and further $X''$, and so on.
While we know how to mathematically talk about $X'$ and $X''$, do we at all have a **conceptual** understanding as to what this variable is in its corresponding $n$-dimensional parameter space?
To the human eye, the neural network's weights might be a set of random numbers, but they may mean something profound, if we could ever understand what they 'represent'.
[](https://i.stack.imgur.com/JFcKa.jpg)
What is the nature of the weights, such that, despite decades worth of research and use, there is no clear understanding of what these connection weights represent? Or rather, why has there been so little effort in understanding the nature of neural weights, in a non-mathematical sense, given the huge impetus in going beyond the black box notion of AI.<issue_comment>username_1: I don't know if my intuition is correct but I will give it a try.
You could see a weight as how important one thing is; the problem is to understand what that thing represents. When I say *thing* I'm referring to the output of a specific neuron. I don't think that we can say what the output of a neuron represents in the real world unless we directly relate it through an error function, or if the function used to compute that particular value has some meaning in the real world.
Edit:
If you want, you could actually build your neural network such that its neurons represent something. It's also very simple: you only have to write down all the equations relative to that particular topic. You could put them in a big system or, and this is better, you could put them in several systems such that the outputs of system 1 are the inputs of system 2 and so on. You could convert each system into a layer where each neuron represents an equation. Note that in this case, you would have the classical neuron with
```
z = dot(w.T, x) + b
a = g(z)
```
but a more complex equation for `z` (but still based on weights) and a linear activation function for `a`. In this case, you could name each neuron and say what they represent in the real world.
However, this isn't the purpose of a neural network. A neural network should have neurons with simple equations in order to be fast, thus the linear function `dot(w.T, x) + b` is the best choice (the fact that the activation function is almost always non-linear, and in some cases a non-trivial function, is due to other things and could be an interesting question in itself). A neural network should also be as general as possible, because it is usually built upon a system that you don't know completely.
So let me refine my answer slightly: it is not simply that you don't know what a neuron represents; excluding the ones in the output layer, you don't *want* them to have a meaning in the real world.
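As a tiny illustration of the "neurons as named equations" idea above, here is a single hand-wired neuron whose weights come from a made-up billing rule rather than from training, so every parameter has a real-world name:

```python
import numpy as np

# total = 2.5 * hours + 40.0 * distance_km + call_out_fee  (made-up rule)
w = np.array([2.5, 40.0])   # hourly rate, per-km rate: interpretable weights
b = 10.0                    # fixed call-out fee: interpretable bias
g = lambda z: z             # linear activation keeps the meaning intact

x = np.array([3.0, 1.5])    # 3 hours worked, 1.5 km travelled
a = g(w @ x + b)
print(a)                    # 77.5
```

The moment those weights are instead set by gradient descent on an opaque objective, the per-weight interpretation is exactly what gets lost.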
Upvotes: 2 <issue_comment>username_2: It's a bit of a challenge to answer your question, since you appear to be not really familiar with the basics. You're talking about mutations, and changes to the input.
No. The input is a vector of data, which initializes the values of the input nodes. The first layer of weights is then used to calculate the values for the next layer of nodes. This next layer is not a "mutation" of the input layer; that would suggest the second layer of nodes is similar but not exactly identical to the first layer.
In reality, it's very common that the second layer of nodes does not even have the same *shape* as the first layer.
You are even wondering if certain weights have a certain meaning. That's even easier to answer. We know these networks are quite robust. We can ignore a significant percentage of the weights, and the classifications will change only a little. This shows that no individual weight represents a specific aspect of the network.
Upvotes: 0 |
2018/06/18 | 1,510 | 6,346 | <issue_start>username_0: My knowledge
------------
Suppose you have a layer that is fully connected, and that each neuron performs an operation like
```
a = g(w^T * x + b)
```
where `a` is the output of the neuron, `x` the input, `g` our generic activation function, and finally `w` and `b` our parameters.
If both `w` and `b` are initialized with all elements equal to each other, then `a` is equal for each unit of that layer.
This means that we have symmetry, thus at each iteration of whichever algorithm we choose to update our parameters, they will update in the same way, so there is no need for multiple units, since they all behave as a single one.
In order to break the symmetry, we could randomly initialize the matrix `w` and initialize `b` to zero (this is the setup that I've seen more often). This way `a` is different for each unit so that all neurons behave differently.
Of course, randomly initializing both `w` and `b` would be also okay even if not necessary.
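(The symmetry claim is easy to check numerically; a tiny sketch with made-up sizes:)

```python
import numpy as np

x = np.array([0.3, -1.2, 0.7])
W = np.full((4, 3), 0.5)     # every weight identical
b = np.full(4, 0.3)          # every bias identical
a = np.tanh(W @ x + b)       # g = tanh

print(a)                     # all four units produce exactly the same output
```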
Question
--------
Is randomly initializing `w` the only choice? Could we randomly initialize `b` instead of `w` in order to break the symmetry? Is the answer dependent on the choice of the activation function and/or the cost function?
My thinking is that we could break the symmetry by randomly initializing `b`, since in this way `a` would be different for each unit and, since in the backward propagation the derivatives of both `w` and `b` depend on `a` (at least this should be true for all the activation functions that I have seen so far), each unit would behave differently. Obviously, this is only a thought, and I'm not sure that it is absolutely true.<issue_comment>username_1: `w` should be randomized to small (nonzero) numbers so that the adjustments made by backpropagation are more meaningful and each value in the matrix is updated by a different amount. If you start with all zeros, it will still work, but take longer to get to a meaningful result. AFAIK, this was found empirically by various researchers and became common practice.
Randomizing `b` does not have the same helpful effect, therefore most people do not bother.
This choice is one of many that is made by the architect of the network and theoretically you could use an infinite number of `w` matrix initializations. The one commonly used just happens to be tested and generally works.
This video is better at explaining than I am: [Lecture 8.4 — Neural Networks Representation | Model Representation-II — [<NAME>]](https://www.youtube.com/watch?v=iPNN805konI).
Upvotes: 0 <issue_comment>username_2: Most of the explanations given for choosing something or not choosing something (like hyperparameter tuning) in deep learning are based on empirical studies, like analysing the error over a number of iterations. So, [this answer](http://papers.nips.cc/paper/6662-convergence-analysis-of-two-layer-neural-networks-with-relu-activation.pdf) is the kind of answer that people on the deep learning side give.
Since you have asked for a mathematical explanation, I suggest you read the paper [Convergence Analysis of Two-layer Neural Networks with ReLU Activation](http://papers.nips.cc/paper/6662-convergence-analysis-of-two-layer-neural-networks-with-relu-activation.pdf) (2017, NIPS). It talks about the convergence of SGD to global minima subject to the weight initialisation being Gaussian, using ReLU as an activation function. The paper considers a neural net with no hidden layer, just input and output layers.
The very fact that analysis on such 'simple' network gets published in a very reputed and top conference itself suggests that the explanation you are seeking is not very easy and very few people work on the theoretical aspects of neural nets. IMHO, after some years as the research progresses, I might be able to edit this answer and give the necessary explanation that you sought. Till then this is the best I could do.
Upvotes: 2 <issue_comment>username_3: Randomising just `b` sort of works, but setting `w` to all zero causes severe problems with [vanishing gradients](https://en.wikipedia.org/wiki/Vanishing_gradient_problem), especially at the start of learning.
Using backpropagation, the gradient at the outputs of a layer `L` involves a sum multiplying the gradient of the inputs to layer `L+1` by the weights (and not the biases) between the layers. This will be zero if the weights are all zero.
A gradient of zero at `L`'s output will further cause all earlier layers (`L-1`, `L-2`, etc., all the way back to layer `1`) to receive zero gradients, and thus not update either weights or biases at the update step. So the first time you run an update, it will only affect the last layer. Then the next time, it will affect the two layers closest to the output (but only marginally at the penultimate layer), and so on.
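As a quick numeric check of this, here is a Python/NumPy sketch of a two-layer linear network with made-up shapes (not anyone's production code):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 3))       # a batch of 5 inputs with 3 features
W1 = np.zeros((3, 4))             # all-zero weights, layer 1
W2 = np.zeros((4, 2))             # all-zero weights, layer 2

h = x @ W1                        # hidden activations: all zero
y = h @ W2
grad_y = np.ones_like(y)          # pretend upstream gradient from the loss
grad_h = grad_y @ W2.T            # gradient reaching the hidden layer
grad_W1 = x.T @ grad_h            # update for the first layer's weights

print(bool(np.all(grad_h == 0)), bool(np.all(grad_W1 == 0)))  # True True
```

With `W2` all zero, nothing propagates back past the last layer, exactly as described above.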
A related issue is that a network with weights all zero, or all the same, maps all inputs, no matter how they vary, onto the same output. This also can adversely affect the gradient signal that you are using to drive learning - for a balanced data set you have a good chance of starting learning close to a local minimum in the cost function.
For deep networks especially, to fight vanishing (or exploding) gradients, you should initialise weights from a distribution that has an expected magnitude (after multiplying the inputs) and gradient magnitude that neither vanishes nor explodes. Analysis of values that work best in deep networks is how [Xavier/Glorot initialisation](https://prateekvjoshi.com/2016/03/29/understanding-xavier-initialization-in-deep-neural-networks/) were discovered. Without careful initialisation along these lines, deep networks take much longer to learn, or in worst cases never recover from a poor start and fail to learn effectively.
Potentially to avoid these problems you could try to find a good non-zero fixed value for weights, as an alternative to Xavier initialisation, along with a good magnitude/distribution for bias initialisation. These would both vary according to size of the layer and possibly by the activation function. However, I would suspect this could suffer from other issues such sampling bias issues - there are *more* weights, therefore you get a better fit to desired aggregate behaviour when setting all the weight values randomly than you would for setting biases randomly.
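For reference, here is a minimal NumPy sketch of the conventional setup discussed here, Xavier/Glorot-style uniform weights with zero biases (the layer sizes and function name are made up for illustration):

```python
import numpy as np

def glorot_uniform(n_in, n_out, seed=0):
    # Xavier/Glorot: limit chosen so activation/gradient variance is roughly preserved
    limit = np.sqrt(6.0 / (n_in + n_out))
    return np.random.default_rng(seed).uniform(-limit, limit, size=(n_in, n_out))

W = glorot_uniform(784, 256)  # weights: small, random, zero-mean
b = np.zeros(256)             # biases: plain zeros are the common default
print(W.shape, b.shape)       # (784, 256) (256,)
```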
Upvotes: 4 [selected_answer]
2018/06/19 <issue_start>username_0: So I built a CNN without any scientific libraries like [TensorFlow](https://www.tensorflow.org/) or [Keras](https://keras.io/) (only [NumPy](http://www.numpy.org/)). It is taking a huge amount of time to train. What are some of the tricks and tips followed by people to speed up training of a CNN? (I am not talking about division of jobs into different processors but subtle redundant codes i.e. giving pre-calculated results which is not visible to common programmers).
2018/06/19 <issue_start>username_0: The paper [*The Limitations of Deep Learning in Adversarial Settings*](https://arxiv.org/pdf/1511.07528.pdf) explores how neural networks might be corrupted by an attacker who can manipulate the data set that the neural network trains with. The authors experiment with a neural network meant to read handwritten digits, undermining its reading ability by distorting the samples of handwritten digits that the neural network is trained with.
I'm concerned that malicious actors might try hacking AI. For example
* Fooling autonomous vehicles to misinterpret stop signs vs. speed limit.
* Bypassing facial recognition, such as the ones for ATM.
* Bypassing spam filters.
* Fooling sentiment analysis of movie reviews, hotels, etc.
* Bypassing anomaly detection engines.
* Faking voice commands.
* Misclassifying machine-learning-based medical predictions.
What adversarial effect could disrupt the world? How can we prevent it?<issue_comment>username_1: I believe it is; no system is safe. However, I am not sure if I can still say this after 20-30 years of AI development/evolution. Anyway, there are articles that show humans fooling AI (computer vision):
<https://www.theverge.com/2018/1/3/16844842/ai-computer-vision-trick-adversarial-patches-google>
<https://spectrum.ieee.org/cars-that-think/transportation/sensors/slight-street-sign-modifications-can-fool-machine-learning-algorithms>
Upvotes: 2 <issue_comment>username_2: I concur with username_1 that no system is completely safe, but the takeaway is that AI systems are less prone to attacks compared with older systems, because of their ability to constantly improve.
As time passes, more people will enter the field bringing new ideas, and hardware will keep improving, moving these systems closer to "strong AI."
Upvotes: 1 <issue_comment>username_3: >
> How can we prevent it?
>
>
>
There are several works about AI verification. Automatic verifiers can prove robustness properties of neural networks. This means that if the input X of the NN is perturbed by no more than a given limit ε (in some metric, e.g. L2), then the NN gives the same answer on it.
Such verifiers have been built by:
* Stanford: <https://arxiv.org/pdf/1702.01135.pdf>
* ETHZ: <https://www.sri.inf.ethz.ch/papers/sp2018.pdf>
* Google: <https://arxiv.org/pdf/1803.06567.pdf>, <https://arxiv.org/pdf/1805.10265.pdf>
* Bosch: <https://arxiv.org/pdf/1805.10265.pdf>
This approach may help to check robustness properties of neural networks. The next step is to construct a neural network that has the required robustness. Some of the above papers also contain methods for doing that.
There are different techniques to improve the robustness of neural networks:
* adversarial training (see e.g. [<NAME> et al., ICLR 2017](https://arxiv.org/pdf/1611.01236.pdf))
* defensive distillation (see e.g. [<NAME> et al., SSP 2016](https://arxiv.org/pdf/1511.04508.pdf))
* MMSTV defence ([Madry et al., ICLR 2018](https://arxiv.org/pdf/1706.06083)).
At least the last one can provably make NN more robust. More literature can be found [here](https://github.com/TrustAI/Literature-on-DNN-Verification-and-Testing/blob/master/defences.md).
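To make the threat these defences address concrete, here is a toy FGSM-style (fast gradient sign method) perturbation against a hand-written logistic model in Python. The weights and input are invented, and this is only a sketch of the kind of attack the adversarial-training papers defend against:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -2.0, 0.5])     # made-up, fixed model weights
x = np.array([0.2, -0.1, 0.4])     # clean input, classified as class 1
y = 1.0

p = sigmoid(w @ x)                 # model confidence on the clean input
grad_x = (p - y) * w               # gradient of the logistic loss w.r.t. the input

eps = 0.5                          # perturbation budget (the ε above)
x_adv = x + eps * np.sign(grad_x)  # nudge every feature in the loss-increasing direction

print(round(float(p), 3), round(float(sigmoid(w @ x_adv)), 3))  # 0.668 0.214
```

A small, bounded change to every input feature is enough to collapse the model's confidence, which is exactly the property the verifiers above try to rule out.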
Upvotes: 3 <issue_comment>username_4: Programmer vs Programmer
------------------------
It's a "infinity war": Programmers vs Programmers. All thing can be hackable. Prevention is linked to the level of knowledge of the professional in charge of security and programmers in application security.
**eg** There are several ways to identify a user trying to mess up the metrics generated by Sentiment Analysis, but there are ways to circumvent those steps as well. It's a pretty boring fight.
Agent vs Agent
--------------
An interesting point that @DukeZhou raised is the evolution of this war, involving two artificial intelligences (agents). In that case, the battle goes to the most knowledgeable: which is the best-trained model, you know?
However, to achieve perfection on the vulnerability front, an artificial intelligence or artificial superintelligence would have to surpass the human ability to circumvent systems. It is as if the knowledge of all hacks to this day already existed in the mind of this agent and it began to develop new ways of circumventing its own system and developing protection. Complex, right?
I believe it's hard to have an AI that thinks: "Is the human going to use a photo instead of presenting his face to be identified?"
How we can prevent it
---------------------
Always have a human supervising the machine, and even then it will not be 100% effective. And this is disregarding the possibility that an agent can improve its own model on its own.
Conclusion
----------
So I think the scenario works this way: a programmer tries to circumvent the validations of an AI, and the AI developer, acquiring knowledge through logs and tests, tries to build a smarter and safer model, trying to reduce the chances of failure.
Upvotes: 3 <issue_comment>username_5: AI is vulnerable from two security perspectives the way I see it:
1. The classic method of exploiting outright programmatic errors to achieve some sort of code execution on the machine that is running the AI or to extract data.
2. Trickery through the equivalent of AI optical illusions for the particular form of data that the system is designed to deal with.
The first has to be mitigated in the same way as for any other software. I'm uncertain whether AI is any more vulnerable on this front than other software; I'd be inclined to think that the complexity may slightly heighten the risk.
The second is probably best mitigated by both the careful refinement of the system as noted in some of the other answers, but also by making the system more context-sensitive; many adversarial techniques rely on the input being assessed in a vacuum.
Upvotes: 4 <issue_comment>username_6: There are many ways to hack an AI. When I was a kid I figured out how to beat a chess computer: it always followed the same pattern, and once you learn a pattern you can exploit it. The world's best hacker is a 4-year-old who wants something; he will try different things until he establishes a pattern in his parents. Anyway, get an AI to learn the patterns of another AI, and given a particular combination you can figure out the outcome. There are also plain flaws or back doors in code, either on purpose or by chance. There is also the possibility that the AI will hack itself. It is called misbehaving; remember the small child again...
BTW, a simple way is to make the AI always fail safe... something people forget.
Upvotes: 0 <issue_comment>username_7: >
> Is Artificial Intelligence Vulnerable to Hacking?
>
>
>
Invert your question for a moment and think:
>
> What would make AI at less of a risk of hacking compared to any other
> kind of software?
>
>
>
At the end of the day, software is software and there will always be bugs and security issues. AIs are at risk to all the problems non-AI software is at risk to, being AI doesn't grant it some kind of immunity.
As for AI-specific tampering, AI is at risk of being fed false information. Unlike most programs, an AI's functionality is determined by the data it consumes.
For a real world example, a few years ago Microsoft created an AI chatbot called Tay. It took the people of Twitter less than 24 hours to teach it to say "We're going to build a wall, and mexico is going to pay for it":
[](https://i.stack.imgur.com/ZvUkp.jpg)
(Image taken from the Verge article linked below, I claim no credit for it.)
And that's just the tip of the iceberg.
Some articles about Tay:
* [BBC](https://www.bbc.co.uk/news/technology-35890188)
* [The Verge](https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist)
Now imagine that wasn't a chat bot, imagine that was an important piece of AI from a future where AI are in charge of things like not killing the occupants of a car (i.e. a self-driving car) or not killing a patient on the operating table (i.e. some kind of medical assistance equipment).
Granted, one would hope such AIs would be better secured against such threats, but supposing someone did find a way to feed such an AI masses of false information without being noticed (after all, the best hackers leave no trace), that genuinely could mean the difference between life and death.
Using the example of a self-driving car, imagine if false data could make the car think it needed to do an emergency stop when on a motorway. One of the applications for medical AI is life-or-death decisions in the ER, imagine if a hacker could tip the scales in favour of the wrong decision.
>
> How can we prevent it?
>
>
>
Ultimately the scale of the risk depends on how reliant humans become on AI. For example, if humans took the judgement of an AI and never questioned it, they'd be opening themselves up to all sorts of manipulation. However, if they use the AI's analysis as just one part of the puzzle, it would become easier to spot when an AI is wrong, be it through accidental or malicious means.
In the case of a medical decision maker, don't just believe the AI, carry out physical tests and get some human opinions too. If two doctors disagree with the AI, throw out the AI's diagnosis.
In the case of a car, one possibility is to have several redundant systems that must essentially 'vote' about what to do. If a car had multiple AIs on separate systems that must vote about which action to take, a hacker would have to take out more than just one AI to get control or cause a stalemate. Importantly, if the AIs ran on different systems, the same exploitation used on one couldn't be done on another, further increasing the hacker's workload.
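A sketch of that voting idea in Python (the "subsystems" here are just stand-in decisions; a real redundant setup would be far more involved):

```python
from collections import Counter

def majority_vote(decisions):
    """Return the action most subsystems agree on, or None on a stalemate."""
    counts = Counter(decisions).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # stalemate: no strict winner, fail safe instead
    return counts[0][0]

# Three redundant systems; one has been compromised into voting "brake".
print(majority_vote(["drive", "drive", "brake"]))  # drive
print(majority_vote(["drive", "brake"]))           # None
```

A compromised minority is simply outvoted, and a tie surfaces as an explicit stalemate that the system can treat as a fail-safe condition.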
Upvotes: 2
2018/06/20 <issue_start>username_0: I have a corpus, say an instruction manual. The text in this manual is grouped into chapters and each chapter is split up into sections. For example, Chapter 1/Section 1, Chapter 1/Section 2 and so on.
Assume the corpus has C chapters and each chapter has S sections. My goal is, given a sentence or question, to classify this sentence/question. In other words, I want to compute the three most probable chapters to which this sentence or question belongs.
I tried a MultinomialNB model using sklearn, but it did not give me the desired result. I want to try another approach, for example using a neural network, and compare it with the MultinomialNB model. I have Googled and found Doc2Vec but haven't tried it yet.
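For reference, the MultinomialNB baseline described above would typically look something like this in scikit-learn (a sketch; the toy corpus, labels, and query are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# invented toy corpus: sentences labelled with the chapter they come from
sentences = ["press the power button", "hold power for five seconds",
             "clean the filter monthly", "replace the filter yearly"]
chapters = ["ch1", "ch1", "ch2", "ch2"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(sentences, chapters)

# rank chapters by probability for a new question (top 3, or fewer if fewer classes)
probs = model.predict_proba(["how do I clean the filter"])[0]
top = sorted(zip(model.classes_, probs), key=lambda t: -t[1])[:3]
print(top[0][0])  # ch2
```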
Can anyone suggest a better or another possible approach so that I could try and compare? What is the standard approach to such kind of problem?
2018/06/21 <issue_start>username_0: Recently, I have been learning about new neural networks, which are used for specialized purposes, like speech recognition, image recognition, etc. The more I discover the more I get amazed by the cleverness behind models such as RNN's and CNN's. Questions about working, intuition, mathematics have been asked a lot in this community, all with vague answers and apparent understandings.
So, my question is: did the researchers come up with these specialized models accidentally, or did they follow particular steps to get to the model (like in a mathematical framework)? And how did they look at a particular class of problem and think "Yeah, a better solution might exist"?
Since the understanding of NN's is so vague, these are 'high risk, high reward' scenarios, since you might be chasing only the mirage (illusion) of a solution.<issue_comment>username_1: Researchers may follow specific mathematical frameworks and techniques to come up with amazing work, just like in any field, but I believe in ***Darwinian natural selection*** as a base theory for human discoveries as well as for evolutionary neural net architectures.
>
> *"Principle by which each slight variation [of a trait], if useful, is preserved".*
>
>
>
Upvotes: 1 <issue_comment>username_2: Although there is a strong element of "try and see" that has driven successful architectures, the drivers for what to try are often inspired by underlying theory or knowledge from other disciplines.
Specifically for basic CNN, which led to AlexNet and many of the best image processing, the concept of using local receptive fields in layers was inspired by [study of neurons in the cat visual system](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1359523/).
Modern RNNs also did not appear out of nowhere, there has long been an appreciation of the difference between a feed-forward network and a recurrently-connected one, and the different applications possible. The step change to LSTM was [deliberate response to analysis of problems training simplest forms of RNN](https://dl.acm.org/citation.cfm?id=1246450).
Like much of science, these things are also driven by success in the real world following the research. Many promising ideas have been tried and rejected. Some have been used for a while then superseded, e.g. using RBMs or stacked auto-encoders to pre-train deep networks before ReLUs and Xavier initialisation were discovered - although both RBMs and auto-encoders still have their niches.
Tweaks to architectures, such as variants of LSTM/GRU, may even be deliberately searched and assessed as part of research. That is done with the explicit knowledge that this part of finding a good design is best done as a search across possibilities.
Despite the evolution-like progress, presenting all such advances as completely random or pure GA-like search is ignoring the conscious effort and research that leads to the designs. If you search literature on any major successful design (such as the existence of RNNs or CNNs in the first place), and read the papers, you will often find that modern neural network architectures have deep roots in older research, plus have mathematical and/or scientific justifications for the choices made.
Upvotes: 3 [selected_answer]
2018/06/22 <issue_start>username_0: I thought I had implemented the code (from scratch, no library) for an **artificial neural network** (first endeavour in the field). But I feel like I am missing something very basic or obvious.
To make it short: code works for a single pair of in-/out-values but fails for sets of value pairs. I do not really understand the training process. So I want to get this issue out of the way first. The following is my improvised training (aka all that I can think of) in pseudocode.
```
trainingData = [{in: [0,0], out:[0]}, {in: [0,1], out:[0]}, ...];
iterations = 10000
network = graphNodesToNetwork()
links = graphLinksToNetwork()
randomiseLinkWeights(links)
while(trainingData not empty) {
  for(0 to iterations) {
    trainOnPair(network, trainingData[0])
  }
  drop trainingData[0]
}
```
Is this how it is supposed to work? Do you feed in your training data set by set (while-loop)?
*Edit 1: because my final comment did distract from the issue at hand.*
*Edit 2: less wordy, more code-y*<issue_comment>username_1: It looks like you are training your model 10,000 times on one piece of data, and then dropping that piece and moving to the next.
This will not work: the model will become extremely good at learning one piece of data, but will then forget about it when optimizing for the next piece.
Instead, either pick one example at random in each iteration, or compute the gradient for all 4 examples and just update in that direction instead.
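A sketch of the corrected loop in Python (the `train_step` helper is a placeholder for the asker's own forward/backpropagation code):

```python
import random

training_data = [
    {"in": [0, 0], "out": [0]}, {"in": [0, 1], "out": [0]},
    {"in": [1, 0], "out": [0]}, {"in": [1, 1], "out": [1]},
]

def train_step(network, example):
    # placeholder for: forward pass, backpropagation, weight update
    network["steps"] = network.get("steps", 0) + 1

network = {}                                # stands in for the real network object
random.seed(0)
for _ in range(10_000):                     # iterations, over the WHOLE data set
    example = random.choice(training_data)  # option 1: one random example per step
    train_step(network, example)

print(network["steps"])  # 10000
```

The key difference: the iteration loop is on the outside, and every example keeps getting revisited, so nothing learned earlier is forgotten.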
Upvotes: 1 <issue_comment>username_2: Your network must have something which persists, like **weights** and **biases**.
Your new implementation would be like this:
```
trainingData = [{in: [0,0], out:[0]}, {in: [0,1], out:[0]}, ...];
iterations = 10000
network = graphNodesToNetwork()
links = graphLinksToNetwork()
randomiseLinkWeights(links)
weights = []
while(trainingData not empty) {
  for(0 <= i < iterations) {
    weights = trainAndBackpropagate(network, weights, trainingData.first)
  }
  trainingData.removeFirst()
}
```
In short, retain the **weights** and backpropagate.
Upvotes: 2 |
2018/06/24 | 830 | 2,701 | <issue_start>username_0: I am a deep learning beginner recently reading this book "Deep learning with Python", the example explains the process of implementing a greyscale image classification using MNIST in keras, in the compilation step, it said,
>
> Before training, we’ll preprocess the data by reshaping it into the shape the network expects and scaling it so that all values are in the [0, 1] interval. Previously, our training images, for instance, were stored in an array of shape (60000, 28, 28) of type uint8 with values in the [0, 255] interval. We transform it into a float32 array of shape (60000, 28 \* 28) with values between 0 and 1.
>
>
>
The images are stored in an array of shape (60000, 28, 28) of type uint8 with values in the [0, 255] interval. As I understand it, the values are the per-pixel intensities between 0 and 255, stored as a 3D matrix. Can someone explain why we need to "transform" it into the shape the network expects, scaling it so that "all values are in the [0, 1] interval"?
Please also make suggestions if I didn't explain some parts correctly.<issue_comment>username_1: T'is easy, but misunderstood. What they mean is to map it from the range of 0-255 to the range of 0-1. This means that 0 would be 0, and 255 would be 1.
The code for this is as such in javascript:
```
function map (num, in_min, in_max, out_min, out_max) {
return (num - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}
```
Use the function, like this:
```
var num = 5;
console.log(map(num, 0, 255, 0, 1)); // 0.0196078431372549
var num = 150
console.log(map(num, 0, 255, 0, 1)); // 0.5882352941176471
```
Iterate over the entire image and use the function (or your programming language's equivalent) on each pixel.
By doing so, all values are in the 0,1 interval. Next, all you have to do is to feed it to the network.
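If you have numpy available, the same per-pixel mapping can be done in one vectorized step, together with the reshape the book describes (the random array here just stands in for the real MNIST images):

```python
import numpy as np

# Stand-in for the MNIST training images: uint8, shape (n, 28, 28)
images = np.random.randint(0, 256, size=(100, 28, 28), dtype=np.uint8)

# Reshape to (n, 28*28) and rescale from [0, 255] to [0.0, 1.0] in one go
flat = images.reshape((len(images), 28 * 28)).astype("float32") / 255.0

print(flat.shape)                            # (100, 784)
print(flat.min() >= 0.0, flat.max() <= 1.0)  # True True
```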
Upvotes: 0 <issue_comment>username_2: The purpose of rescaling in gradient descent problems is to reframe the problem for quicker convergence / calculation of the linear coefficient parameters. In the [Stanford video](http://openclassroom.stanford.edu/MainFolder/VideoPage.php?course=MachineLearning&video=03.1-LinearRegressionII-FeatureScaling&speed=100) series, Andrew Ng provides an explanation that helps one build an intuitive understanding.
Multivariate input regression gradient descent converges faster when the inputs are of the same order of magnitude. For example, suppose you predict house prices based on X1 = the number of rooms and X2 = the area of the home in square feet. X1 is on a scale of 0-6 bedrooms, while X2 is typically 1000-3000 square feet. Given the difference in magnitude, this problem is a good candidate for feature scaling.
Upvotes: 1 |
2018/06/25 | 236 | 1,042 | <issue_start>username_0: In a final project in diagnosing Attention deficit hyperactivity disorder (ADHD) using Machine Learning, we obtained parameters from real patients. We used this data and got much higher success rates with LDA than with SVM and Naive Bayes. We had only 100 examples in our training set. We are wondering why LDA specifically succeeded much more than the others?<issue_comment>username_1: If I had to guess(and it is nothing more), I would say it has quite a bit to do with the problem itself and the architectures involved. Simply, the problem is less suited to a bayesian approach(Highly dependent features, linear distribution).
Upvotes: 0 <issue_comment>username_2: It would be hard to tell without knowing what kind of data/problem you are working on, but LDA works well when the data are grouped in Gaussian blobs around class centroids, while vanilla SVM works well when the data are almost linearly separable, and naive Bayes works well when your features are relatively independent of each other.
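As an illustration of the "Gaussian blobs" regime, here is a minimal two-class LDA fit from scratch (class means plus a pooled covariance). The synthetic data and all names are illustrative, not from the question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian blobs around different centroids -- the regime where LDA shines
X0 = rng.normal(loc=[0.0, 0.0], size=(50, 2))
X1 = rng.normal(loc=[3.0, 3.0], size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# LDA fit: per-class means and one pooled (shared) covariance
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
cov = (np.cov(X0.T) + np.cov(X1.T)) / 2
w = np.linalg.solve(cov, mu1 - mu0)   # discriminant direction
c = w @ (mu0 + mu1) / 2               # decision threshold (equal priors)

accuracy = ((X @ w > c).astype(int) == y).mean()
print(accuracy)   # close to 1.0 for such well-separated blobs
```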
Upvotes: 2 |
2018/06/25 | 1,012 | 4,006 | <issue_start>username_0: I have an image dataset where objects may belong to one of the hundred thousand classes.
What kind of neural network architecture should I use in order to achieve this?<issue_comment>username_1: A large one!
In all seriousness, imagenet had roughly 1000 classes and did not require anything special from the top submissions. Depending on how deep(contextually) these classes are, you may want to do something like multi-label classification. Your biggest problems will likely be differentiating between classes, as well as class distribution.
Good luck!
Upvotes: 1 <issue_comment>username_2: As you can imagine and as it has already been said, a large one for your network to tune weights and biases. But I wanted to nuance this statement with two points
First: you can use an [Autoencoder](https://en.wikipedia.org/wiki/Autoencoder) to pre-process your images. It can reduce dimensionality and so improve the learning capability and efficiency (from a generalization point of view).
This kind of NN takes your images as inputs, encodes and then decodes them to provide a new representation of your initial images. Working with the encoded (compressed) dataset can allow you to use fewer hidden layers with fewer hidden nodes, and thus speed your work up.
Second: architecture certainly matters for image recognition, but you can also work on the input representation (which is what the autoencoder above is about). You can look at [PCA](https://en.wikipedia.org/wiki/Principal_component_analysis) (Principal Component Analysis). It reduces dimensionality to a certain number of components (that you specify). It is often used in face recognition, where inputs and targets vary widely.
All that to say that architecture certainly matters when dealing with large datasets, but there are also a few tools to reshape the inputs so that they can be more easily learnt.
And by doing so you can improve the capability of your network as much in terms of computation time as in quality and accuracy of prediction.
Upvotes: 1 <issue_comment>username_3: Classification tasks with a large number of classes are usually handled with [hierarchical softmax](http://ruder.io/word-embeddings-softmax/index.html#hierarchicalsoftmax) to reduce the complexity of the final layer. This is useful, for example, in applications such as word embedding where you have hundreds of thousands of classes (words), like in your case.
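The idea behind (two-level) hierarchical softmax can be sketched in a few lines: factor P(class) = P(cluster) × P(class | cluster), so each prediction only normalizes over roughly √K scores instead of K. This is an illustrative numpy sketch with made-up weights, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

K = 10_000        # total number of classes
n_clusters = 100  # ~sqrt(K) clusters with 100 classes each
d = 32            # feature dimension

h = rng.normal(size=d)                            # some hidden representation
W_cluster = rng.normal(size=(n_clusters, d))      # first-level (cluster) weights
W_class = rng.normal(size=(n_clusters, 100, d))   # second-level weights per cluster

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# P(class k) = P(cluster c) * P(k | cluster c): two softmaxes of size 100
# instead of one softmax of size 10,000.
p_cluster = softmax(W_cluster @ h)
c = int(p_cluster.argmax())
p_within = softmax(W_class[c] @ h)
p_best = p_cluster[c] * p_within.max()  # probability of the most likely class

# Sanity check: the implied full distribution over all K classes sums to 1.
full = p_cluster[:, None] * np.array([softmax(W_class[i] @ h) for i in range(n_clusters)])
print(full.shape, float(full.sum()))
```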
Upvotes: 2 <issue_comment>username_4: [Alexnet](https://www.cs.toronto.edu/~fritz/absps/imagenet.pdf) (2012), [Overfeat](https://arxiv.org/abs/1312.6229) (2013), [VGG](https://arxiv.org/abs/1409.1556) (2014) and [ResNet](http://openaccess.thecvf.com/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html) (2016) are cited in many image recognition or segmentation applications. There is also [GoogleLeNet](https://arxiv.org/abs/1409.4842) (2015).
The later the publication, the denser the network.
The ResNet publication comments on how the network density affects accuracy depending on the image data set size. The article tends to give a motivated answer to the question
>
> Is learning better networks as easy as stacking more layers?
>
>
>
You might want to consider the training time, since you have your own image data set, depending on the kind of hardware you can use ([see this benchmark](https://lambdalabs.com/blog/2080-ti-deep-learning-benchmarks/) for instance). The denser the network, the more time it will take.
You also have to consider the size of the training data set with respect to the expected accuracy. If the set is too small, the net will probably overfit. In that case you might consider a data augmentation strategy (one of the answers mentions *auto encoding*; I'm not sure, but this might help for this purpose).
All these publications refer to the [ImageNet](http://www.image-net.org/) database and the associated image classification/detection [contest](http://www.image-net.org/challenges/LSVRC/), which has 1000 classes.
Upvotes: 0 |
2018/06/26 | 1,162 | 4,410 | <issue_start>username_0: The problem of adversarial examples is [known](https://arxiv.org/pdf/1312.6199.pdf) to be critical for neural networks. For example, an image classifier can be manipulated by additively superimposing a different low amplitude image to each of many training examples that looks like noise but is designed to produce specific misclassifications.
[](https://i.stack.imgur.com/5rfe9.png)
Since neural networks are applied to some safety-critical problems (e.g. self-driving cars), I have the following question
>
> What tools are used to ensure safety-critical applications are resistant to the injection of adversarial examples at training time?
>
>
>
Laboratory research aimed at developing defensive security for neural networks exists. These are a few examples.
* adversarial training (see e.g. [<NAME> et al., ICLR 2017](https://arxiv.org/pdf/1611.01236.pdf))
* defensive distillation (see e.g. [<NAME> et al., SSP 2016](https://arxiv.org/pdf/1511.04508.pdf))
* MMSTV defence ([Madry et al., ICLR 2018](https://arxiv.org/pdf/1706.06083)).
However, do industrial-strength, production-ready defensive strategies and approaches exist? Are there known examples of applied adversarial-resistant networks for one or more specific types (e.g. for small perturbation limits)?
There are already (at least) two questions related to the problem of [hacking](https://ai.stackexchange.com/q/6800/2444) and [fooling](https://ai.stackexchange.com/q/92/2444) of neural networks. The primary interest of this question, however, is whether any **tools** exist that can defend against some adversarial example attacks.<issue_comment>username_1: >
> However, do industrial strength, production ready defensive strategies and approaches exist? Are there known examples of applied adversarial-resistant networks for one or more specific types (e.g. for small perturbation limits)?
>
>
>
I think it's difficult to tell whether or not there are any **industrial strength** defenses out there (which I assume would mean that they'd be reliable against all or most known methods of attacking). Adversarial Machine Learning is indeed a [highly active, and growing, area of research](https://github.com/yenchenlin/awesome-adversarial-machine-learning). Not only are new approaches for defending being published quite regularly, but there is also active research into different approaches for "attacking". With new attack methods being discovered frequently, it's unlikely that anyone can already claim to have approaches that would work reliably against them all.
>
> The primary interest of this question, however, is whether any **tools** exist that can defend against some adversarial example attacks.
>
>
>
The closest thing to a ready-to-use "tool" that I've been able to find is [IBM's Adversarial Robustness Toolbox](https://github.com/IBM/adversarial-robustness-toolbox), which appears to have various attack and defense methods implemented. It appears to be in active development, which is natural considering the area of research itself is also highly active. I've never tried using it, so I can't vouch personally for the extent to which it's easily usable as a tool for industry, or if it's maybe really only still suitable for research.
---
Based on comments by [Ilya](https://ai.stackexchange.com/users/16354/ilya-palachev), other frameworks that may be useful to consider are [Cleverhans](https://github.com/tensorflow/cleverhans) and [Foolbox](https://github.com/bethgelab/foolbox).
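For intuition about what these toolkits implement, here is the classic FGSM attack written by hand for a plain logistic model. The weights and input are random stand-ins; real libraries like the ones above handle gradients through deep networks for you:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def xent(q, y=1.0):
    # binary cross-entropy for a single prediction q
    return -(y * np.log(q) + (1 - y) * np.log(1 - q))

rng = np.random.default_rng(1)
w = rng.normal(size=16)   # weights of some "trained" linear classifier (stand-in)
b = 0.0
x = rng.normal(size=16)   # a clean input
y = 1.0                   # its true label

# Gradient of the loss w.r.t. the INPUT (not the weights):
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: take a small step in the sign of that gradient to increase the loss
eps = 0.25
x_adv = x + eps * np.sign(grad_x)
p_adv = sigmoid(w @ x_adv + b)

print(xent(p), xent(p_adv))   # the adversarial loss is strictly larger
```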
Upvotes: 3 [selected_answer]<issue_comment>username_2: Another point of view -
In safety-critical real world systems, this attack should be evaluated from other aspects as well.
In many systems the attack surface is somewhat limited to physical attacks only - for example, you can't add digital noise to a camera used for autonomous driving - you need to print an adversarial e.g. stop sign and place it somewhere it is still viewed and interpreted incorrectly from several points of view, angles, light and weather conditions, etc.
Given that, I think that the overall *current* risk of adversarial examples for scalable attacks on real-world mission-critical systems isn't very high *for now*.
That's why such work exists in companies at the research level, but not yet in production.
Upvotes: 1 |
2018/06/26 | 1,031 | 4,463 | <issue_start>username_0: I have a sort of mathematical problem and I'm not sure which model I should choose to make an LSTM neural network.
Currently in my country, there is a system in which certain groups of researchers upload information on products of scientific interest, such as research articles, books, patents, software, among others. Depending on the number of products, the system assigns a classification to each group, which can be A1, A, B and C, where A1 is the highest classification and C is the minimum.
The classification is done through a mathematical model whose entries are, the total number of each product, the total sum of all products, number of authors, among other indices that are calculated with the previous values.
Once the entries are obtained, these values are processed by a set of formulas and the final result is a single number.
This number is located in a range provided by the mathematical model and this is how the group is classified.
What I want to do is given the current classification of a group, give suggestions of different values to improve their classification.
For example, if there is a group with classification C, suggest how many products it should have, how many authors, what value should its indexes have, so that its category would be finally B.
I think the structure of my network should be:
- 1 input, which would be the classification you want to obtain.
- Multiple outputs, one for each product and index.
But I do not understand how to make the network take into account the current classification of the group, in addition to the number of products and the value of the current indexes.
If you have further questions about the problem, please feel free to ask.
I appreciate your suggestions.
2018/06/27 | 1,181 | 4,002 | <issue_start>username_0: I'm just beginning to understand neural networks and I've performed a couple of successful tests with numerical series where the NN was trained to find the odd one or a missing value. It all works pretty well.
The next test I wanted to perform was to approximate the solution of a Sudoku, which I thought could also be seen as a special kind of numerical series. However, the results are really confusing.
I'm using an MLP with 81 neurons in each of the three layers. All output neurons show a strong tendency to yield values that are close to either 0 or 1. I have scaled and truncated the output values. The result can be seen below:
```
Expected/Actual Solution: Neural Net's Solution:
6 2 7 3 5 0 8 4 1 9 0 9 9 9 3 0 0 3
3 4 8 2 1 6 0 5 7 0 9 9 0 0 0 9 9 0
5 1 0 4 7 8 6 2 3 0 9 1 9 9 0 2 0 4
1 6 4 0 2 7 5 3 8 0 0 5 0 0 9 0 0 7
2 0 3 8 4 5 1 7 6 0 0 0 0 0 9 9 0 9
7 8 5 1 6 3 4 0 2 9 9 9 9 0 6 2 9 0
0 5 6 7 3 1 2 8 4 0 0 0 0 9 9 0 9 0
4 3 1 5 8 2 7 6 0 9 9 0 0 0 0 9 0 9
8 7 2 6 0 4 3 1 5 9 9 0 9 9 0 9 0 9
```
The training set size is 100000 Sudokus while the learning rate is a constant 0.5. I'm using NodeJS/Javascript with the Synaptic library.
**I don't expect a perfect solution from you guys, but rather a hint if that kind of behavior is a typical symptom for a known problem, like too few/many neurons, small training set, etc.**<issue_comment>username_1: I think it is the wrong way to frame sudoku as a regression problem in neural networks.
Firstly, you have to understand what regression is. "Regression" is when you predict a value given certain parameters, where the parameters are related to the value you have to predict. This happens because at the core neural networks are "function approximators", they model the function by adjusting their weights using lots of data. They tend to form a highly non-linear boundary to separate classes internally in a high dimensional data-space.
The sudoku doesn't fit in this scenario; the combinatorial complexity of sudoku is way too high for a neural network even if you add many layers to it. It is a totally different problem in its own right. You simply can't "regress" the right values of a perfect sudoku here, they are not numbers like "pixel" intensities in images.
However, you could apply reinforcement learning techniques to learn an optimal policy to solve sudoku.
And you have mentioned an "approximate" solution for the sudoku, what do you mean by "approximate"? If you mean by this that only a few squares are out of place, then it is a wrong assumption, because neural networks are proven to be good image classifiers, as they are robust to translations (translational invariance), and that is not what you need here.
You could, however, do a small experiment to see what the neural network actually learns, replace the numbers by pixel value intensities and train a generative adversarial network on the sudoku images and see the images of sudokus produced by it, to see what actually the network can't learn.
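If you do reframe this as search or reinforcement learning as suggested above, you need some scoring signal; a simple one is counting constraint violations (0 for a valid grid). A sketch, assuming a 9x9 grid with digits 0-8 as in the question:

```python
def violations(grid):
    """Count duplicated digits across all rows, columns and 3x3 boxes.

    `grid` is a 9x9 list of lists holding digits 0-8 (as in the question);
    a valid solution scores 0, so -violations(grid) can serve as a reward.
    """
    def dupes(cells):
        return len(cells) - len(set(cells))

    total = 0
    for i in range(9):
        total += dupes([grid[i][j] for j in range(9)])   # row i
        total += dupes([grid[j][i] for j in range(9)])   # column i
    for bi in range(0, 9, 3):
        for bj in range(0, 9, 3):
            total += dupes([grid[bi + di][bj + dj]
                            for di in range(3) for dj in range(3)])
    return total

# A well-known shifting pattern that yields a valid grid, used as a smoke test
grid = [[(i * 3 + i // 3 + j) % 9 for j in range(9)] for i in range(9)]
v_valid = violations(grid)
grid[0][0] = grid[0][1]    # introduce a single clash
v_broken = violations(grid)
print(v_valid, v_broken)   # 0 and a positive number
```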
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can take a look at this [paper](https://arxiv.org/pdf/1905.12149.pdf) that solves your problem with a neural network. You can use the pytorch implementation of the satnet layer: [satnet layer API](https://github.com/locuslab/SATNet). In this supervised setup the layer also learns the boolean constraints of your model. You can find an example of a sudoku solver in the github repo.
Upvotes: 2 <issue_comment>username_3: I think DeepMind's AlphaZero could provide an answer. Meaning, like others have said, solving Sudoku is not a regression or classification problem. The net needs to learn the rules and then implement them, just like in chess.
Maybe you could learn something by reading up on it: <https://github.com/nikcheerla/deeplearningschool/deeplearningschool/2018/01/01/AlphaZero-Explained/>
Upvotes: 0 |
2018/06/27 | 903 | 3,837 | <issue_start>username_0: Would people go far with Artificial Intelligence and machine learning to the point where machines could learn during a long period of time to distinguish what's 'good' from 'bad' according to people living in a restricted geographical area, and then the machines take control and turn what was learned into a set of 'rules' and 'laws' (think of it as an effective machine of 'politics') that match the majority of the people's view of issues.
That should be accepted by everyone, since a contract set at the beginning says: "Everyone is ok".<issue_comment>username_1: Even if you could do that (which I believe is a long way off), what would be the point?
If I understand you correctly, you want an AI system to learn through observation of human beings what their 'rules of interaction' are. It sees a person killing someone else, and then that person is punished by the community, so the AI learns that killing people is not right. However, that is already codified in laws... So what it would pick up are social behaviours, which would hopefully be things like "be nice to other people", "don't do anybody any harm", "be honest and truthful", etc.
The first question then is how an AI could assess events[\*] as to whether they are 'good' or 'bad'. If everybody lies and steals, that would be learnt as normal behaviour, and the AI would not be able to pick up that most people would see this as 'bad'[\*\*]. Causal relations are also hard to grasp. Somebody steals something. Then later someone else buys him a drink. So stealing stuff means other people will buy you drinks? These are really hard problems to solve. You need to know about people's motivations, and the multitude of 'threads' of interaction happening at the same time, even in a very limited area.
So, recognising events and causal links, plus a moral evaluation of them, is pretty difficult. I don't think we will get there anytime soon. Unsupervised learning of behavioural rules is also pretty difficult, as you only have unlabelled observations and no real criteria to classify them. Plus, many actions are morally ambiguous. Killing people is generally seen as bad. What about the officers who tried to assassinate Hitler in 1944? Life is complex, and our artificial models are not anywhere near that.
So even if you were able to do all that, what would you end up with? An AI system that has picked up a lot of unwritten rules about human behaviour, and then postulates that as laws? So everyone has to behave the same way? I just don't see the point of that, even as a thought experiment.
[\*] leaving aside here the question how you determine what an 'event' is in the first place
[\*\*] Please note that even if people steal things, they can still view theft as a bad thing.
Upvotes: 1 [selected_answer]<issue_comment>username_2: The most interesting scientific field in terms of relevance to this question is probably "affective computing".
There are several problems with the model that you suggested. It is questionable if AGI should learn in the same way humans learn. In addition, there are several ethical problems surrounding this question, perhaps even metaethics, because the question peeks beyond human ethics.
A hardcoded ethics protocol might be possible to implement, similar to Isaac Asimov's "Three Laws of Robotics". The possibility of hacking is an issue that one would have to think proactively about - especially in this case.
If humanoid AGIs or robots are very similar to humans, they should probably be treated that way. For example, ethologist <NAME> has studied empathy and social behavior in monkeys and suggested that we are very similar in behavior, so we are probably similar in terms of feelings (if it quacks like a duck and so forth). Perhaps we need an ethology for androids too?
Upvotes: -1 |
2018/06/28 | 656 | 2,234 | <issue_start>username_0: In the data-sets like coco-text and total-text, the images are of different sizes (height\*width). I'm using these data sets for text detection. I want to create a DNN model for this. So the input data should be of same size. If I resize these images to a fixed size, the annotations given in the data-set, that is the location of the text in the images, will be changed.
So, how do I solve this problem?<issue_comment>username_1: Find the largest height and width amongst all the images. Let us call them H and W respectively. It is true that you cannot resize the images, but suppose you have an image of height `h` and width `w` where `h < H, w < W`. To the right of the image append `W - w` columns, and at the bottom of the image append `H - h` rows, having some constant value (0 is okay for grey-scale and B/W images, and 0 for each channel in case of colour images).
In this way all the images will be of the same size. Since you are appending at the right and bottom of the image, the annotations will not lose their meaning in the transformed image in terms of the position and content of the text to be recognised.
You could also try [pixelRNN](https://arxiv.org/pdf/1601.06759.pdf) kind of ideas after you are done with DNN. RNN can handle variable length inputs and in your case it will be sequence of pixels. Here you don't need to append rows and columns to the image.
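A sketch of the pad-to-largest idea for grayscale numpy arrays (the helper name is mine, and tiny arrays stand in for real images):

```python
import numpy as np

def pad_to(img, H, W, fill=0):
    """Append rows at the bottom and columns at the right so img becomes H x W."""
    h, w = img.shape
    return np.pad(img, ((0, H - h), (0, W - w)),
                  mode="constant", constant_values=fill)

# Tiny stand-ins for images of different sizes
images = [np.ones((3, 5)), np.ones((4, 2)), np.ones((2, 2))]
H = max(im.shape[0] for im in images)
W = max(im.shape[1] for im in images)
padded = [pad_to(im, H, W) for im in images]
print([p.shape for p in padded])   # [(4, 5), (4, 5), (4, 5)]
```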
Upvotes: 1 <issue_comment>username_2: Finally, I found an answer to the question. In the annotations, we have the X min & max and Y min & max of the bounding box. So take the width and height of the bounding box, and the center of the bounding box, relative to the image.
For example, let the image dimension be `500*500`, the bounding box co-ordinates be `(200, 200)` and `(300, 300)`. So, the center of bounding box is `(250, 250)` and the height and width is `100`. Now, make it relative to image size.
```
center=(250/500,250/500)=(.5,.5)
height=width=(100/500)=.2
```
If you rescale the image, with this encoding you can recover the bounding box in the rescaled image. If you enlarge the image to `1000*1000`, then,
```
center=(.5*1000,.5*1000)=(500,500)
height=width=(.2*1000)=200
```
Hope this helps someone.
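The arithmetic above can be wrapped in two small helpers (the names `encode`/`decode` are just illustrative):

```python
def encode(box, img_w, img_h):
    """(xmin, ymin, xmax, ymax) in pixels -> (cx, cy, w, h) relative to image size."""
    xmin, ymin, xmax, ymax = box
    return ((xmin + xmax) / 2 / img_w, (ymin + ymax) / 2 / img_h,
            (xmax - xmin) / img_w, (ymax - ymin) / img_h)

def decode(rel, img_w, img_h):
    """Inverse of encode, for a (possibly rescaled) image size."""
    cx, cy, w, h = rel[0] * img_w, rel[1] * img_h, rel[2] * img_w, rel[3] * img_h
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

# The example from the answer: a (200,200)-(300,300) box in a 500x500 image
rel = encode((200, 200, 300, 300), 500, 500)
print(rel)                      # (0.5, 0.5, 0.2, 0.2)
print(decode(rel, 1000, 1000))  # center (500, 500), size 200x200
```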
Upvotes: 0 |
2018/06/28 | 1,532 | 5,431 | <issue_start>username_0: I just read about deep Q-learning, which is using a neural network for the value function instead of a table.
I saw the example here: [Using Keras and Deep Q-Network to Play FlappyBird](https://yanpanlau.github.io/2016/07/10/FlappyBird-Keras.html) and he used a CNN to get the Q-value.
My confusion is on the last layer of his neural net. Neurons in the output layer each represent an action (flap, or not flap). I also see the [other projects](http://edersantana.github.io/articles/keras_rl/) where the output layer also represents all available actions (move-left, stop, etc.)
*How would you represent all the available actions of a chess game?* Every pawn has a unique and available movement. We also need to choose how far it will move (rook can move more than one square). I've read [Giraffe chess engine's](https://arxiv.org/abs/1509.01549) paper and can't find how he represents the output layer (I'll read once again).
I hope somebody here can give a nice explanation about how to design NN architecture in Q-learning, I'm new in reinforcement learning.<issue_comment>username_1: To model chess as a Markov decision problem (MDP) you can refer to the AlphaZero paper ([Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm](https://arxiv.org/abs/1712.01815)). The exact details can be found starting from the bottom of page 13.
Briefly, an action is described by picking a piece and then picking a move with it. The size of the board is 8 by 8 so there can be 8x8 possibilities for picking a piece. Then we can either pick linear movements (in 8 directions) and then pick the number of steps in that direction (maximum 7 steps) or we can make a knight movement (maximum 8 possibilities). So far that is 8x7 + 8. Furthermore, we also need to consider underpromotions (promoting a pawn into a non-queen piece). In this scenario we can have 3 types of pawn movements (forward, left diagonal or right diagonal capture) and 3 types of promotions (rook, knight, bishop) so that makes it 9. So the total dimension of the action space is 8x8x(8x7+8+9) and this will be the number of neuron outputs you will need to use.
Note that this action space representation covers every possible scenario, and for example at the start of the game the action of picking the tile E4 and promoting it to a bishop doesn't make sense (there are no pieces on tile E4 at the beginning of the game). Or if we pick a tile where there is a rook, we cannot make a knight movement with it. Therefore you will also need to implement a function that can return the set of possible actions in a given state and ignore all neural network outputs that are not contained in this set.
Obviously this action representation is not set in stone, so if you can come up with something better or more compact you can use that one too. You can also make restrictions to your game, for example by not allowing underpromotions.
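The "ignore illegal outputs" step is usually implemented by masking the logits before the softmax, so illegal actions get exactly zero probability. An illustrative numpy sketch (the legal-move indices here are made up):

```python
import numpy as np

def masked_policy(logits, legal_mask):
    """Softmax over legal actions only; illegal actions get probability 0."""
    masked = np.where(legal_mask, logits, -np.inf)
    z = np.exp(masked - masked.max())
    return z / z.sum()

n_actions = 8 * 8 * (8 * 7 + 8 + 9)   # 4672, the paper's action space size
logits = np.random.randn(n_actions)   # raw network outputs
legal = np.zeros(n_actions, dtype=bool)
legal[[10, 42, 1000]] = True          # pretend only three moves are legal here

p = masked_policy(logits, legal)
print(p[~legal].sum())                # 0.0
print(p[legal].sum())                 # ~1.0
```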
Upvotes: 4 [selected_answer]<issue_comment>username_2: The accepted answer severely overcounts the actual action space because of the assumption that any piece can move a maximum of 7 squares in any direction from any square on the board. The calculation 8x8x(8x7+8+9) = 4672 is even much more than the most naive estimate of 64x63 = 4032.
The actual action space to specify any possible legal move is **1924**. The difference is not negligible as this number is less than half of 4672. So, if using a dense final layer, this would save about 60% of the work for that layer.
Here is the Python script using [`chess`](https://pypi.org/project/chess/) I wrote to confirm it:
```
import chess
# Each chess piece moves like either a queen or a knight from one square to another square.
# All possible moves except pawn promotions can be specified by the "from" square and the "to" square
# (even castling, ex. e1->g1, or e1->c1).
action_space = 0
b = chess.BaseBoard.empty()
for square in range(64):
# Place queen and see where it attacks
b.set_piece_at(square, chess.Piece.from_symbol('Q'))
q_moves = b.attacks(square)
# Place knight and see where it attacks
b.set_piece_at(square, chess.Piece.from_symbol('N'))
n_moves = b.attacks(square)
# Logical or to combine bitmaps (ex. 1100 | 0101 = 1101)
all_moves = q_moves | n_moves
# Convert bitmap to list of bools, so the sum
# is exactly the # possible moves from this square
action_space += sum(all_moves.tolist())
b.remove_piece_at(square)
# Count underpromotions manually:
# 8 forward promotions, 7 right-capture promotions, 7 left-capture promotions
# which can all promote to 4 pieces (but 1 is already counted above) for 2 colors
action_space += (8+7+7) * (4-1) * 2
print('actual action space:', action_space)
print('naive action space:', 64 * 63)
print('accepted answer\'s action space:', 8*8*(8*7 + 8 + 9))
# actual action space: 1924
# naive action space: 4032
# accepted answer's action space: 4672
```
**Note:** This assumes that the same output neuron will be used to represent for example the moves Ra7a8 and a8=Q, as well as the moves Re1g1 and O-O (by white). If this is not desirable, the action space that counts queen promotions and castling moves as distinct actions has size 1972. You can see this by changing `(4-1)` to `4` in the final step that accounts for underpromotions, then adding 4 for the 2 castling directions by both colors.
Upvotes: 2
2018/06/28 | <issue_start>username_0: In traditional computer vision and computer graphics, the pose matrix is a $4 \times 4$ matrix of the form
$$
\begin{bmatrix}
r\_{11} & r\_{12} & r\_{13} & t\_{1} \\
r\_{21} & r\_{22} & r\_{23} & t\_{2} \\
r\_{31} & r\_{32} & r\_{33} & t\_{3} \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
and is a transformation to change viewpoints from one frame to another.
In the [Matrix Capsules with EM Routing](https://openreview.net/pdf?id=HJWLfGWRb) paper, they say that the "pose" of various sub-objects of an object are encoded by each capsule lower layer. But from the procedure described in the paper, I understand that the pose matrix they talk about doesn't conform to the definition of the pose matrix. There isn't any restriction on keeping the form of the pose matrix shown above.
1. So, is it right to use the word "pose" to describe the $4 \times 4$ matrix of each capsule?
2. Moreover, since the claim is that the capsules learn the pose matrices of the sub-objects of an object, does it mean they learn the viewpoint transformations of the sub-objects, since the pose matrix is actually a transformation?<issue_comment>username_1: Great question, and one that I think we could have done a better job of answering in the paper.
Essentially, the *pose matrix* of each capsule is set up so that it could learn to represent the affine transformation between the object and the viewer, but we are not restricting it to necessarily do that. So we talk about the output of a capsule as though it is an affine transformation matrix, but we can't ensure that it will be. We do things explicitly that make it more like such a matrix — like adding in the coordinates to the right-hand column — but we can't be sure. This somewhat embodies a large part of the capsule network theory — we set up scaffolding so that the network can learn to be equivalent to transformations that we think it ought to be invariant to, but we don't ensure that it is.
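For concreteness, the classical rigid-body pose matrix from the question is just a rotation block plus a translation column inside a homogeneous transform. A minimal NumPy sketch (my illustration, not code from the paper):

```python
import numpy as np

def make_pose(rotation, translation):
    """Assemble the 4x4 homogeneous pose matrix [[R, t], [0, 1]]."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

# 90-degree rotation about the z-axis followed by a translation
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 2.0, 3.0])
T = make_pose(R, t)

# Changing viewpoint for the homogeneous point (1, 0, 0, 1):
p = np.array([1.0, 0.0, 0.0, 1.0])
print(T @ p)  # rotates (1, 0, 0) to (0, 1, 0), then shifts by t
```

A learned capsule pose, by contrast, is an unconstrained $4 \times 4$ matrix: nothing forces its top-left block to stay a rotation.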
Upvotes: 3 [selected_answer]<issue_comment>username_2: I have tried to make it learn the affine transformation by giving it this as the label, and it works just fine. I'm really impressed and excited by capsule networks, and can't figure out why no one thought of this before, because it's so obvious and simple. Spiking neurons also tell us that information between neurons can't be one-dimensional only. It should be represented by vectors of some kind.
UPDATE:
In the above comment I claim that it works "fine" when I make a capsule network learn the affine transformation by giving it this as the label. This is not true. It doesn't work! I'm sorry, I was too quick there.
I assume the reason is that the affine 4x4 matrix representation is redundant. Also it is impossible to make sensible linear interpolations between such transformations, which will affect the gradient (it will not point in the direction of the minimum).
What I have succeeded in doing is making the capsule network learn a quaternion (rotation) and a 3d vector (position) - 7 parameters in all. These can be contained in a 3x3 matrix, when fixing 2 of the parameters. But training is slow and the network cannot encode skews etc. in this 3x3 setup.
Affine transformations from images using capsule networks (matrix capsules) can also be achieved by just making the network learn its own 4x4 pose representation through the decoder part. Then a small network can be trained to transform these poses into a 7d vector (quaternion and 3d vector), from which the affine 4x4 transformation can obviously be calculated. This I have also succeeded in doing. It seems like the rotation encoded in the pose has a more quaternion-like nature, which makes sense.
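For reference, the 7-parameter representation mentioned here (a unit quaternion plus a 3d position) expands into a 4x4 pose via the standard quaternion-to-rotation-matrix conversion. A generic sketch (my own illustration, not the poster's code):

```python
import numpy as np

def quat_to_pose(q, t):
    """4x4 pose from a unit quaternion (w, x, y, z) and a translation vector."""
    w, x, y, z = q / np.linalg.norm(q)
    rotation = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = t
    return pose

# Identity quaternion: the pose reduces to a pure translation
T = quat_to_pose(np.array([1.0, 0.0, 0.0, 0.0]), np.array([1.0, 2.0, 3.0]))
print(T)
```

Note that this parameterization can only produce rigid transforms, which matches the observation above that skews cannot be encoded.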
Upvotes: 1
2018/06/28 | <issue_start>username_0: Let's suppose I have an image with 16 channels that goes to a convolutional layer, which has 3 trainable $7 \times 7$ filters, so the output of this layer has depth 3.
How does the convolutional layer go from 16 to 3 channels? What mathematical operation is applied?<issue_comment>username_1: The reason why you go from 16 to 3 channels is that, in a 2d convolution, filters span the **entire depth** of the input. Therefore, your filters would actually be $7 \times 7 \times 16$ in order to cover all channels of the input.
Detailed procedure
------------------
The output of the convolution automatically has a depth equal to the number of filters (so in your case this is $3$) because you have an $m \times k$ filter matrix, where $m$ is the number of filters and $k$ is the number of elements in the **unrolled filter** (in your case, $m = 3$ and $k = 7 \times 7 \times 16 = 784$, so the filter matrix is $3 \times 784$).
The input is usually unrolled according to the im2col procedure, where each tile corresponding to a single filter location is stretched into a column equal to the **unrolled** filter size. This is repeated for each filter location, so you end up with a very large matrix of size $k \times n$, where $k$ is the same as $k$ above in the filter matrix, and $n$ depends on your padding and stride.
Multiplying the $m \times k$ filter matrix with the $k \times n$ input matrix gives you an $m \times n$ output matrix, where $m$ is the number of filters.
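To make the shapes concrete, here is a deliberately naive im2col sketch for the dimensions in this question (16-channel input, three 7x7 filters; real implementations are vectorized):

```python
import numpy as np

def im2col(x, k):
    """Unroll every k x k patch of x (channels, height, width) into a column."""
    channels, height, width = x.shape
    out_h, out_w = height - k + 1, width - k + 1  # no padding, stride 1
    cols = np.empty((channels * k * k, out_h * out_w))
    col = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, col] = x[:, i:i + k, j:j + k].ravel()
            col += 1
    return cols

x = np.random.randn(16, 10, 10)            # 16-channel input
filters = np.random.randn(3, 16 * 7 * 7)   # m = 3 filters, each unrolled to k = 784
cols = im2col(x, 7)                        # the k x n input matrix, here (784, 16)
out = filters @ cols                       # the m x n output matrix, here (3, 16)
print(out.shape)
```

Each row of `out` is one output channel, flattened over the spatial locations.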
Further reading
---------------
You can find some very nice visual explanations of the convolution procedure [here](https://cs231n.github.io/convolutional-networks/) and [here](https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/making_faster.html).
Upvotes: 3 [selected_answer]<issue_comment>username_2: Your input has 16 channels, each of dimension $m \times n$. There are 3 filters, namely $f\_1$, $f\_2$ and $f\_3$ of spatial dimensions $k \times h$.
We say that a filter is applied to a channel when it is superimposed on the image, starting at the left-most position: the weights of the filter are multiplied with the corresponding values in the image and summed up to a single value, and the filter is then moved to the right (and down when it reaches the rightmost part) across the image according to its stride.
When a filter, e.g. $f\_1$, is applied to a single channel, say $c$, it produces a single value per location. Now, applying it to all channels, we get 16 values, all of which are added up to a single value. $f\_1$ is moved according to the stride and the same operation is repeated to get an output with a single channel (the number of rows and columns is determined by the padding, stride, dilation, and kernel size of the filters).
The aforesaid process is done by all the 3 filters giving rise to 3 channels. In this way, the convolutional layer makes the input go from 16 to 3 channels.
More detailed explanations can be found [here](http://colah.github.io/posts/2014-07-Understanding-Convolutions/).
Upvotes: 0
2018/06/29 | <issue_start>username_0: In image classification, we are generally told that the main reason for using CNNs is that densely connected NNs cannot handle so many parameters ($10^6$ for a $1000 \times 1000$ image). My question is: is there any other reason why CNNs are used over DNNs (densely connected NNs)?
Basically, if we have infinite resources, will DNNs trump CNNs, or are CNNs inherently well suited for image classification, as RNNs are for speech? Answers based either on mathematics or on experience in the field are appreciated.<issue_comment>username_1: That is not the actual reason: "convolution" layers are inspired by cells in the visual system. This is derived from the work of Hubel and Wiesel.
For more information, check the Hubel-Wiesel experiments.
Upvotes: 0 <issue_comment>username_2: The keyword here is Parameter Sharing or Weight Sharing across various image portions.
If we take the simple example of a grayscale binary image of the letter 'F', it is a combination of multiple patterns. The patterns here are vertical lines and horizontal lines. These patterns are based on the relation between intensities of contiguous cells, and this relation is established using a weight matrix.
Also, for identifying multiple horizontal lines, we don't need multiple node-sets in a hidden dense layer, each trying to identify a different horizontal line in the image. The pattern is the same but present in different locations. Hence the sharing of weights comes into the picture.
In the 1st hidden layer, encode the pattern "horizontal line" in a weight matrix (learnt during training and used in testing). Place it over a small grid and check for the pattern's presence. As this matrix is slid and tested across the image, the presence of horizontal lines is marked in various locations. This weight matrix is called a kernel.
Combining the above points, a kernel provides a way of handling parameter/weight sharing between contiguous cells to identify patterns. A dense layer instead of kernels would solve the problem eventually, but would start in a random manner. Since an efficient way was identified, it is being used.
Next, to identify vertical lines, another kernel is needed and slid across the image.
Suppose next we have a dense layer as the 2nd hidden layer. This layer looks for the combination of patterns present ('p' horizontal lines and 'q' vertical lines in this case for 'F') and learns which combinations identify the output.
Just to compare with traditional programming: kernels are like regular expressions, dense layers are like loops. Just sharing my thoughts; any better explanation is welcome.
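To attach rough numbers to this weight-sharing argument, compare one dense layer against one convolutional layer on a 1000x1000 image (the hidden-unit and filter counts below are arbitrary, chosen only for illustration):

```python
# One dense layer: every one of the 10^6 pixels connects to every hidden unit
pixels = 1000 * 1000
hidden_units = 100
dense_weights = pixels * hidden_units   # 100,000,000 weights (plus biases)

# One conv layer: 100 filters of size 7x7, shared across all image positions
num_filters = 100
conv_weights = num_filters * 7 * 7      # 4,900 weights (plus biases)

print(dense_weights, conv_weights, dense_weights // conv_weights)
```

The shared kernels cut the weight count by four orders of magnitude while still covering the whole image.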
Upvotes: 2 <issue_comment>username_3: Convolutional Neural Networks can detect more of the spatial features compared to a Densely Connected Network. Consider this: in any given real-world image, the pixel values of neighboring cells do not vary highly. But when such an image is passed to a Densely Connected Neural network for training, the spatial relations between neighboring pixels are lost, as all other cells can heavily influence the training; whereas in Convolutional networks, thanks to the operation of convolution, local information is preserved - this is called local connectivity.
Upvotes: 2
2018/06/29 | <issue_start>username_0: I am learning about Monte Carlo algorithms and struggling to understand the following:
* If simulations are based on random moves, how can the modeling of the opponent's behavior work well?
For example, if I have a node with 100 children, 99 of which lead to an instant WIN, whereas the last one leads to an instant LOSS.
In reality, the opponent would never play any of the 99 losing moves for him (assuming they are obvious as they are the last moves), and would always play the winning one. But the Monte Carlo algorithm would still see this node as extremely favorable (99/100 wins for me), because it sees each of the 100 moves as equally probable.
Is my understanding wrong, or does it mean that in most games such situations do not occur and randomness is a good approximation of opponent behavior?<issue_comment>username_1: I will point out that the Monte Carlo Tree Search algorithm does not make completely random moves. Instead it usually uses some metric to balance between exploration and exploitation when deciding which branch to search (see Upper Confidence Bound and others).
That being said, you are correct in that a specific line of play which is incredibly troublesome may not be seen and could cause Monte Carlo to make a major mistake. This may have been a cause of AlphaGo losing to Lee Sedol in game 4.
> A disadvantage is that, faced in a game with an expert player, there may be a single branch which leads to a loss. Because this is not easily found at random, the search may not "see" it and will not take it into account. It is believed that this may have been part of the reason for AlphaGo's loss in its fourth game against Lee Sedol. In essence, the search attempts to prune sequences which are less relevant. In some cases, a play can lead to a very specific line of play which is significant, but which is overlooked when the tree is pruned, and this outcome is therefore "off the search radar"
Upvotes: 1 <issue_comment>username_2: First, we need to distinguish plain **Monte-Carlo** from **Monte-Carlo Tree Search**. They're different things.
**Monte-Carlo search**, in the context of game AI search algorithms, is typically understood to mean that we search randomly many times, and average the results, and nothing else. If this is all we're doing, then yes, your understanding is correct. This is also sometimes referred to as "plain Monte-Carlo (search)" or "pure Monte-Carlo (search)", to make it explicitly clear that we're not doing any tree search as in Monte-Carlo Tree Search (sometimes when we just say "Monte-Carlo" in the context of game AI, people will automatically assume Monte-Carlo Tree Search, due to how popular it is).
**Monte-Carlo Tree Search** does a lot more than just that though. It gradually builds up a search tree (through the *Expansion* step), and within that search tree (which is growing over time), it uses a much more sophisticated strategy for traversal than pure random (the *Selection* step).
> For example suppose i have a node with 100 children, 99 of which leading to an instant WIN, and the last one leading to an instant LOSS.
Suppose that this node you're talking about, the one with those 100 children, is relatively close to the root node. Then, it is likely that it and all 100 of its children will end up "growing into the search tree" that is slowly built up by the Expansion step. Once they have been added to the search tree, the Selection step will make sure that the vast majority of further iterations visiting this part of the tree will select the instant loss (assuming the opponent is to move in this node). *In the limit* (after an infinite amount of search time), this bias towards selecting the loss node will be so large that the average evaluation tends to the (correct) minimax evaluation.
---
Another way to view the idea of evaluation through many random sequences of play is the following; the idea is that if we're in a very strong position, a good game state, then we're more likely to win than to lose if both players start playing at random. Consider, for example, a game of chess where Player 1 has many pieces left, and Player 2 only has a few pieces left. Imagine both players were to play completely at random from that game state onwards. On average, which player would you expect to win more often? Probably Player 1.
When we're considering game states that are still far away from terminal states, this basic idea tends to work relatively well. Obviously not always correct, it's still a heuristic, but it can kind of work. When we're already very close to a terminal state that can be reached through one highly specific sequence of play, yeah, we might miss that through random actions; this is where we need the more informed policy of the *Selection* step.
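To sketch what that Selection policy looks like in practice, here is UCB1 (the rule used in UCT-style MCTS) applied to the question's 100-child scenario from the *opponent's* point of view; the visit statistics are invented for illustration:

```python
import math

def ucb1_select(children, c=math.sqrt(2)):
    """Pick the child maximising average value plus an exploration bonus."""
    total_visits = sum(child["visits"] for child in children)

    def score(child):
        if child["visits"] == 0:
            return float("inf")  # always try unvisited children first
        exploit = child["wins"] / child["visits"]
        explore = c * math.sqrt(math.log(total_visits) / child["visits"])
        return exploit + explore

    return max(children, key=score)

# 99 moves the opponent loses, one move the opponent wins
children = [{"wins": 0, "visits": 10} for _ in range(99)]
children.append({"wins": 10, "visits": 10})
best = ucb1_select(children)
print(best)  # the opponent's winning move dominates the selection
```

Because Selection is applied at the opponent's nodes too, further iterations pour into that single winning reply, and the parent's average evaluation is corrected toward the minimax value.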
Upvotes: 4 [selected_answer]<issue_comment>username_3: **The Two Questions**
* Why does Monte Carlo work when a real opponent's behavior may not be random?
* If simulations are based on random moves, how can the modeling of the opponent's behavior work well?
**Directed Graphs Over Trees**
Games (or game-like strategic scenarios) should not be represented as trees. If the process paths being represented have the Markov property in that each decision lacks knowledge of history, a particular game state can be approached by more than one path, and there may be cyclic paths where a game state is revisited. Trees have neither feature.
It is best to use directed graph structures to think about these problems. The state of a game is a vertex and vertices are connected by unidirectional edges. This is normally drawn as shapes connected by arrows. When two arrows enter one shape or there is a closed path, it is not a tree.
**The Scenario Outlined in the Question**
In the case of the scenario outline in this question, there is a vertex representing a game state with 100 outgoing edges representing possible moves for player A. Ninety-nine of the edges lead to an obvious instant game win for A. Exactly one leads to an obvious instant game win for B.
Playing the game back to just before the traversal of the edge into that vertex, it cannot be assumed that game play allowed player B the same 100 options. Even if the same 100 were available to B, they would not necessarily be of similar value from B's perspective when deciding that previous move. More than likely, B will have had a different set of outgoing edges from which to choose, bearing little or no obvious resemblance to A's subsequent options.
Any game where this is not true, where the options remain constant, would be trivial even in comparison with tic-tac-toe.
**The Monte Carlo Approach and Its Algorithm Development**
Regarding the specification of a singular Monte Carlo algorithm, it does not exist. Goodfellow, Bengio, & Courville correctly state in their *Deep Learning*, 2016, that Monte Carlo algorithms (not a singular algorithm) draw a normally correct conclusion but with a non-deterministic occurrence of incorrect conclusion. There are varieties of approach details and associated algorithms in the literature.
* Cross-entropy (CE) method proposed by Rubinstein in 1997
* Continuation multilevel Monte Carlo algorithm; <NAME>, <NAME>, & Tempone; 2000
* Sequential Monte Carlo algorithm; Drovandi, McGree, & Pettitt; 2012
* Distributed consensus approach from Bayes and Big Data: The Consensus Monte Carlo Algorithm; Scott1, Blocker, Bonassi, Chipman, George, & McCulloch; 2014
* Hamiltonian Monte Carlo, a Markov chain based algorithm designed to avoid, "The random walk behavior and sensitivity to correlated parameters;" Hoffman & Gelman; 2014
There are several more. All attempt to use chaotic perturbation to minimize duration and resource consumption of decisioning by approximating a Monte Carlo simulation from a Bayesian posterior distribution.
The simulation of stochastic nature is usually, in these approaches, accomplished by the injection of a chaotic sequence from a pseudo random number generator. They are generally not truly stochastic because acquiring entropy from within a digital system is another bottleneck presenting immense difficulties, but that's an entirely tangential topic.
**Direct Answer to the Question**
To correct the misconception in the question, this use of chaotic perturbation does not equalize the selection of moves (represented by edges in the game-play's directed graph). The probabilities of success for each available option are still roughly calculated and followed, but only roughly so because of the pseudo-noise injected by design.
These disturbances in the application of pure optimization achieve time and resource thrift for the majority of game states (represented by vertices) but concurrently sacrifice some reliability.
**An Overview of Why the Sacrifice Works**
The introduction of chaotic perturbations, mentioned above, modifies the conditions of the optimization search through the achievement of two very specific gains.
1. Faster coverage of the contour being searched by increasing entropy (being less organized by adding synthetic Brownian motion) across the set of trials.
2. Avoidance of local minima in convergence by being less presumptuous about the contour being searched (slightly less reliant on gradient and curvature hints).
This is true of both reinforced networks (containing real time feedback during actual use) or pre-trained networks of the supervised training type (with labelled data) or unsupervised training where convergence is determined by fixed criteria.
Upvotes: 2
2018/07/01 | <issue_start>username_0: What does it mean when it is said that Machine Learning algorithm results can be "generalized"?
I don't understand what "generalized" algorithms, routines or functions are.
I have searched dictionaries and glossaries, and cannot find an explanation. Also, can anyone tell me where a good source for this type of thing is? I am writing about AI and ML.<issue_comment>username_1: A machine learning model is said to "generalise" when it performs equally well on both train and test datasets.
For any supervised machine learning algorithm to work well, you train it on a large dataset (the train set) and evaluate its performance on a dataset whose probability distribution is similar to that of the train set but which is not part of it. This set is called the "test" set, and the performance is evaluated on it. If the train and test accuracies are almost the same, the model is said to generalise well; if the training accuracy is much higher than the test accuracy, the model in some way "memorises" the train set, which is called "overfitting".
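A toy numerical illustration of that train/test gap (pure curve-fitting, invented data, just to show the overfitting pattern described above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of an underlying sine function
x_train = np.linspace(0.0, 1.0, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.2, x_train.size)
x_test = np.linspace(0.01, 0.99, 50)
y_test = np.sin(2 * np.pi * x_test)

def errors(degree):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

# A degree-3 fit generalises reasonably; a degree-14 fit can interpolate all
# 15 training points, driving the train error toward zero while the test
# error typically grows: it "memorises" the noise.
for degree in (3, 14):
    print(degree, errors(degree))
```

The same pattern, train accuracy far above test accuracy, is what signals overfitting in a real model.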
Upvotes: 0 <issue_comment>username_2: Actually you have used 2 terminologies there:
* The first one is that Machine Learning algorithm results can be "generalised". This refers to how well your trained Machine Learning model will perform on previously unseen data (test set or implemented on field). This is particularly not easy as data trends may change over time resulting in loss of accuracy. There are various methods to implement this like (having a cross validation set and a test set, which comes under the broad scheme of [k-fold cross validation](https://machinelearningmastery.com/k-fold-cross-validation/))
* The second you mentioned is '"generalised" algorithms, routines or functions'. Most Machine Learning algorithms can be applied to a broad range of problems. For example the training of a NN is generally done by backprop which is universally applied to all NN's. Similarly, you can use CNN to find features of local interest (i.e. local dependencies) in anything that can be represented in a pictorial form (strings of DNA). Also combinations of CNN and RNN are being used to solve many problems. Thus, only a basic generalised algorithm is being applied to a lot of problems. NOTE: I have never seen any one use it in this context, but practically it happens.
Here are a few resources for general reading purposes (not mathematical):
* [Overfitting and Underfitting With Machine Learning Algorithms](https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/)
* [Generalization and Overfitting](https://www.coursera.org/lecture/big-data-machine-learning/generalization-and-overfitting-tOvMb)
Upvotes: 1 <issue_comment>username_3: The brief answer is: a generalized machine learning algorithm is an algorithm that can do well and give good results on new data that it has never seen before.
Upvotes: 1 <issue_comment>username_4: Using the adjective generalized with algorithms, routines, or functions is obscure. The more appropriate term is generic.
Generics began with early loaders, before full fledged operating systems were developed. The idea was that a module could be loaded into computer memory and the loader was able to perform its function whether the information being loaded was a program, a routine called from a program, an update of the loader, or data to be used by a program. This abstraction was mostly the brainchild of mathematicians such as <NAME> and <NAME>, both of whom contributed to the same kind of generic representations in the field of abstract algebra.
The ideas of generics spread to the development of file systems. When LISP was developed, the deliberate blurring of the distinction between programs and data was built into the low level functions for working with both. Functional programming developed into higher levels of abstraction in the LISP community.
The idea of generics was introduced into C++ by <NAME> using the term *idiom* to solve maintainability and readability issues with early clumsy attempts at generic programming using the C preprocessing directives. The first highly usable form of generics appeared in <NAME>'s standard templates library (STL), developed at SGI. Frameworks like CORBA and DCOM were designed to permit generic messaging.
Generic programming constructs spread to Java and then to other languages. Many algorithms are written generically, such that the actual object types with which the algorithm works are not known until the template is employed at compile time.
The multilayer perceptron is a generic learning component, parameterized by a specific numerical representation for forward signal propagation, a numerical representation for backward corrective signalling, layer depth, widths and activation functions for each layer, learning rate, and other hyper-parameters. (What are called network parameters are not parameters of the generic network but parameters that control the mix of inputs into each layer and become the primary data output of the learning process.)
The other artificial network types are similar in this respect.
Even the cell types can be a type parameter in a network. See [Neural Network Cell (Node) Types](https://ai.stackexchange.com/questions/5898/neural-network-cell-node-types).
The question, "What does it mean when it is said that Machine Learning algorithm results can be generalized?" cannot be intelligently answered. Generalizing results is too general a description of what can be done to results to mean anything. It may just be rhetorical.
There are no universally "good source[s] for this type of thing." There are many excellent sources for very specific things buried within a hundred times the volume of marginally correct and educationally barren sources. If you don't have the time to scrutinize what you are reading to carefully extract the gems, then it might be wise to attend some courses at a well respected university to get a more solid foundation from a more reliable source than the Internet.
Upvotes: 0
2018/07/01 | <issue_start>username_0: I want to detect drivers with or without seatbelts at crossroads. Since this is a real-time task, I am going to use the YOLO algorithm/model. To collect the training dataset (the images), I placed a camera; by recording and extracting images from it, I am getting rather noisy images.
Can I use these images for training? Also, which YOLO version should I use? What are the important points that I should consider for training datasets?
I want to use any version of YOLO compatible with TensorFlow.<issue_comment>username_1: It is much better to know the basic mechanics of convnets first, rather than diving straight into complicated models.
*For training data sets (the images) I need to collect, I placed a camera. By recording it and collecting images from there, I am getting images with more noise. Can I use these images for training? Also, which yolo version should I use? What are the important points that I should consider for training datasets?*
After you are good with the theory part, most of your questions will be answered; otherwise you would end up with nothing but buzzwords.
*I want to use any version of yolo compatible with tensorflow.*
TensorFlow is a framework for building neural networks, so in theory you can build any network with it; compatibility is not a problem at all.
Upvotes: 1 <issue_comment>username_2: As long as all training samples have the same type and level of noise, it shouldn't affect the outcome/accuracy.
As far as real-time performance is concerned, you should compare the "Params, FLOPs and Inference time @ B1 (Batch size 1)" numbers of different YOLO versions.
The lower the number, the faster the inference.
You can start by exploring yolov5 and Yolo-Fastestv2 at the links below:
<https://github.com/ultralytics/yolov5>
<https://github.com/dog-qiuqiu/Yolo-FastestV2>
Unfortunately most of the implementations are in PyTorch, but you can easily convert a PyTorch model to a Tensorflow one by first converting it to an ONNX format.
Upvotes: 0
2018/07/03 | <issue_start>username_0: I have a dataset of unlabelled emails that fall into distinct categories (around a dozen). I want to be able to classify them, along with new ones to come in the future, in a dynamic manner. I know that there are dynamic clustering techniques that allow the clusters to evolve over time ('dynamic-means' being one of them). However, I would also like to be able to start with a predefined set of classes (or clusters/centroids), as I know for a fact what the types of those emails will be.
Furthermore, I need some guidance in terms of what vectorisation technique to use for my type of data. Would creating a term matrix using TF-IDF be sufficient? I assume that the data I am dealing with could be differentiated on the basis of keyword occurrence, but I cannot tell to what degree. Are there more sophisticated vectorisation techniques based more on the text semantics? Are they worth exploring?<issue_comment>username_1: It sounds like you are trying to do some kind of [semi-supervised learning](https://en.wikipedia.org/wiki/Semi-supervised_learning). In semi-supervised learning, some data points are labelled (you know which class they belong to), and others are not. There are classification algorithms designed specifically for this kind of problem, like a [transductive-SVM](http://www.cs.cmu.edu/~guestrin/Class/10701-S06/Slides/tsvms-pca.pdf). I personally have not found these techniques to be more effective than simply discarding the unlabelled data and treating my problem as purely supervised, but YMMV.
TFIDF remains fairly popular, as do ngram-based approaches. A more modern vectorization to consider might be [word2vec](https://en.wikipedia.org/wiki/Word2vec), which translates something like a bag-of-words style vector into a more meaningful feature space for words.
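As a sketch of what the term matrix being discussed looks like, here is a bare-bones TF-IDF computed with the standard library only (one common weighting variant; libraries such as scikit-learn provide tuned implementations, and the "emails" here are invented):

```python
import math
from collections import Counter

docs = [
    "invoice payment due next week",
    "meeting scheduled next week",
    "invoice overdue please send payment",
]
tokenized = [doc.split() for doc in docs]
vocab = sorted({word for doc in tokenized for word in doc})

def tfidf_vector(doc):
    counts = Counter(doc)
    vector = []
    for word in vocab:
        tf = counts[word] / len(doc)                    # term frequency
        df = sum(word in other for other in tokenized)  # document frequency
        vector.append(tf * math.log(len(docs) / df))    # tf * idf
    return vector

matrix = [tfidf_vector(doc) for doc in tokenized]
print(len(matrix), len(matrix[0]))  # one row per email, one column per term
```

Terms that occur in every document get a weight of zero, which is exactly the keyword-occurrence discrimination the question asks about.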
Upvotes: 1 <issue_comment>username_2: >
> I would also like to be able to start with a predefined set of classes (or clusters/centroids) as I know for a fact what the types of those emails will be.
This is not a clustering problem, but a semi-supervised learning problem. If you don't have labeled data yet, then create some labels. You might also want to look into "active learning".
One approach is:
1. For each category, create 5 labeled samples
2. Train a classifier on them (e.g. tf-idf features and a small neural network)
3. Let the neural network label your dataset
4. Check the labels where it was most confident for all classes and the ones where the probabilities for all classes were most evenly spread. Use this to quickly create more labels.
5. Maybe Amazon mechanical Turk is an option to quickly generate more labels
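Step 4 of the list above ("most confident" vs "probabilities most evenly spread") can be made concrete with an entropy score over the classifier's predicted probabilities; the numbers below are invented for illustration:

```python
import numpy as np

def pick_for_labelling(probs, k=2):
    """Indices of the k most uncertain predictions (highest entropy first)."""
    probs = np.asarray(probs)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[-k:][::-1]

# Fake classifier outputs for 4 unlabelled emails over 3 classes
probs = [
    [0.98, 0.01, 0.01],  # confident: probably safe to auto-label
    [0.34, 0.33, 0.33],  # evenly spread: hand these to a human
    [0.90, 0.05, 0.05],
    [0.40, 0.35, 0.25],
]
print(pick_for_labelling(probs))  # -> [1 3]
```

Labelling the high-entropy examples first is the core idea of uncertainty-based active learning.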
Upvotes: 0
2018/07/03 | 1,000 | 3,411 | <issue_start>username_0: I am familiar with supervised and unsupervised learning. I did the SaaS course done by <NAME> on Coursera.org.
I am looking for something similar for reinforcement learning.
Can you recommend something?<issue_comment>username_1: Before that, ask yourself if you really want to learn about "reinforcement learning." Although there is much hype about reinforcement learning, its real-world applicability is almost non-existent. Most of the online courses teach you very little about machine learning, so it is much better to get thorough with that first, rather than proceeding towards reinforcement learning. Learning reinforcement learning is somewhat different from learning about unsupervised/supervised learning techniques.
Having said that, the fastest way to get a good grasp of reinforcement learning is as follows:
1. Read [<NAME>'s blog post "Pong from Pixels."](http://karpathy.github.io/2016/05/31/rl/)
2. Watch [Deep RL Bootcamp lectures](https://sites.google.com/view/deep-rl-bootcamp/lectures).
3. To understand the math behind these techniques, refer to [Sutton and Barto's *Reinforcement Learning: An Introduction*](http://incompleteideas.net/book/the-book.html).
4. Read relevant papers (game-playing etc.).
P.S.: Make sure that you are thorough with the basics of neural networks, as most of the current papers in RL involve using DNNs in some way or another as approximators.
Upvotes: 2 <issue_comment>username_2: I recently saw a course by Microsoft on edx. It is called 'Reinforcement Learning Explained'.
Here is the link:
<https://www.edx.org/course/reinforcement-learning-explained-0>
This is not quite comprehensive but at least gives a good starting point.
Upvotes: 2 <issue_comment>username_3: There's a Youtube playlist (in the [DeepMind channel](https://www.youtube.com/channel/UCP7jMXSY2xbc3KCAE0MHQ-A)) whose title is [*Introduction to reinforcement learning*](https://www.youtube.com/watch?v=2pWv7GOvuf0&list=PLqYmG7hTraZDM-OYHWgPebj2MfCFzFObQ), which is a course (of 10 lessons) on reinforcement learning by [<NAME>](https://www.wikiwand.com/en/David_Silver_(programmer)).
A person who followed and finished the course wrote (as a Youtube comment):
>
> Excellent course. Well paced, enough examples to provide a good intuition, and taught by someone who's leading the field in applying RL to games.
>
>
>
Upvotes: 2 <issue_comment>username_4: To the good answers here, I would add
* [A brief overview of RL](https://arxiv.org/pdf/1708.05866v1.pdf): Most essential concepts in one place.
* [Another brief overview](https://pemami4911.github.io/pdfs/deep-rl.pdf), in presentation format.
* <NAME>'s [An outsider's tour of RL](http://www.argmin.net/2018/06/25/outsider-rl/) is pretty comprehensive and accessible.
* [The Bellman equations](https://joshgreaves.com/reinforcement-learning/understanding-rl-the-bellman-equations/): central to the whole RL theory.
* [Policy gradients explained](https://karpathy.github.io/2016/05/31/rl/) by <NAME> (mentioned in other answers as "pong from pixels", this is the link).
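For quick reference, the Bellman expectation equation for the state-value function (standard notation, as used in the linked post) is:

```latex
v_\pi(s) = \sum_{a} \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a) \left[ r + \gamma \, v_\pi(s') \right]
```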
These barely scratch the surface of RL, but they should get you started.
Upvotes: 4 [selected_answer]<issue_comment>username_5: I would say this post is a must to read:
>
> <https://rubenfiszel.github.io/posts/rl4j/2016-08-24-Reinforcement-Learning-and-DQN.html>
>
>
>
Upvotes: 1 |
2018/07/04 | 2,629 | 10,610 | <issue_start>username_0: Most humans are not good at chess. They can't write symphonies. They don't read novels. They aren't good athletes. They aren't good at logical reasoning. Most of us just get up. Go to work in a factory or farm or something. Follow simple instructions. Have a beer and go to sleep.
What are some things that a clever robot can't do that a stupid human can?<issue_comment>username_1: Survival, Imagining, Moral Reasoning
====================================
When you said "the stupidest human", the first thing that came to mind was a new-born, and even a new-born already has some basic "survival instincts". It will avoid pain, consume food, and quickly learn to distinguish "safe" and "dangerous" conditions and people.
We have computer programs that can learn chess and calculate the optimal move in a split second, but playing chess is, in itself, a bit pointless. Merely being able to play a board game is of little value from a survival perspective, industrial perspective, or economic perspective.
There are programs that can do things that are very helpful for the modern world, but as far as I know, they just don't have survival instincts. A self-learning robot, left in a forest with all the tools it needs to generate power for, build duplicates of, maintain, and defend itself, probably wouldn't be able to learn how to do so in time to ensure its survival. Our current self-learning programs would need to be able to identify when they have succeeded or failed to improve their survival odds. A child of two may learn fast enough to survive if the conditions are not too severe and non-toxic food and some form of shelter is nearby.
A financially poor, marginally educated person with lower than average aptitude working at a farm or factory might not be able to play chess well, but they would definitely be able to tell if someone is murdering someone else, and know to flee and seek the authorities. A robot that can play chess would not.
Furthermore, humans can continue to learn when separated from the problem by thinking about the problem. The ability to construct arbitrary models and run thought experiments is currently unique to humans.
That said, I do hope that we will soon have programs that well replicate the human mind, and demonstrate some of the aspects of what we call consciousness.
Upvotes: 2 <issue_comment>username_2: **First Question**
To treat this question in a scientific way, because I think it is a reasonable enough question that draws on the realities of postmodern culture in post industrialized societies to be treated scientifically, we should define some things.
The most difficult is intelligence, which is the realm in which smartness, cleverness, and stupidity reside.
Let's go through the list.
* Most humans are not good at chess, but humans invented the game.
* Most humans can't write symphonies, but humans invented them.
* Most humans don't read novels, but digital computers can't write them yet and can't learn ethical balance through the reading of them like humans can.
* Most humans are not Olympic level athletes, yet humans developed Olympics and robots are not yet Olympiads.
* Most humans (whether or not they are good at logical reasoning) don't employ it much other than to "get by."
The question is fine until it devolves into the dismissal of the intelligence that people apply to their method of earning income, which for many if not most people requires more than just getting up, commuting in, following some simple instructions, and going to sleep mildly inebriated. Let's replace this last part with this.
* Most humans do not work with the intention of optimizing quality of the product or service by measuring their own quality and seeking educational resources to improve the velocity, reliability, or accuracy of their work output (unless programs are instituted to incentivize these things in the workplace.)
If we define intelligence as the union of these things, for simplicity's sake, we have this (which is subject to change as AI develops).
* Playing chess: AI wins
* Designing games: Humans win
* Writing symphonies: Humans win
* Writing novels: Humans win
* Absorbing ethics from stories: Humans win
* Olympic gold: Humans win
* Logical consistency: AI wins
We must, to interpret the above list correctly concede two things:
1. Machines may have the ability to do something according to specific quality standards but not be configured, trained, or connected appropriately to prevail.
2. Humans may have the ability to do something according to specific quality standards but not be educated, trained, or be properly motivated to prevail.
**Second Question**
What are some things that a clever robot can't do that a stupid human can? These are a few, but they are of particular importance from certain perspectives.
* Love their family and friends
* Have compassion without reason
* Decide what to learn
* Hunt
* See a future danger approaching
* Entertain others
* Pray
I would not dismiss these human propensities as irrelevant, even from a scientific perspective. I would also not dismiss the possibility that these things are beyond the capabilities of silicon based entities.
Upvotes: 2 <issue_comment>username_3: The "[baseline](http://anathem.wikia.com/wiki/Sline) humans" you describe have been historically described in the media industry as "[the lowest common denominator](https://en.wikipedia.org/wiki/Lowest_common_denominator#Colloquial_usage)" (LCD).
The LCD is the broadest possible audience for content, traditionally for [network television shows](https://en.wiktionary.org/wiki/nobody_ever_went_broke_underestimating_the_intelligence_of_the_American_people). (Before the age of cable, there were only 3 to 4 networks and all video content was broadcast over the airwaves--no way to specifically target audience segments so content had to appeal to the LCD.)
* [Captcha](https://en.wikipedia.org/wiki/CAPTCHA)
Because captchas have to be solvable by the LCD, but trick bots. As long as captcha are viable, by definition they will always be something that baseline humans can do better than AI.
Upvotes: 2 <issue_comment>username_4: I do not know the precise definition of intelligence, but from my interactions with many people, they regard someone as intelligent in a **particular field** if and only if:
* They are able to take **`split second correct decisions`** in a situation in that particular field.
Let us see where AI have succeeded in this case:
1. [<NAME>’s Dota 2 AI beats the professionals at their own game](https://arstechnica.com/gaming/2017/08/ai-bot-takes-on-the-pros-at-dota-2-and-wins/)
2. [AlphaZero AI beats champion chess program after teaching itself in four hours](https://www.theguardian.com/technology/2017/dec/07/alphazero-google-deepmind-ai-beats-champion-program-teaching-itself-to-play-four-hours)
These are a few famous cases. If we examine these cases carefully, we see that computers are outperforming humans only due to:
* Huge memory available.
* Fast memory access.
* Due to high processor speeds, split-second correct decisions (although the algorithms for those decisions are developed by humans).
So AIs are actually workhorses, working without fatigue and without any limitations. Human brains do not excel in the field of decision making or speed. Here is a comparison: [What makes animal brain so special?](https://ai.stackexchange.com/questions/5239/what-makes-animal-brain-so-special)
Human brains excel at creativity. We can learn how to make symphonies. Can an AI do the same? Possibly, with correct programming. Much of our intelligence comes from its distributed nature. We learn from other people's mistakes, and we improve on them. Large numbers of humans combined with record keeping have made this possible. Although scientists like Tesla, Einstein, Newton and Feynman discovered calculus on their own, think of the possibilities for new inventions had they been made aware that calculus already existed and that a lot had been done to develop it. Check this: [Swarm intelligence vs Normal Human Intelligence.](https://ai.stackexchange.com/questions/5258/swarm-intelligence-vs-normal-human-intelligence)
So our intelligence and experience come from a huge shared source of information rather than from huge personal resources. As of now, we can think of abstract concepts, which an AI cannot (i.e. we can create entirely new things, not new artworks or music made by mixing existing things up, as an AI does).
For example, it has been seen if you keep many deaf babies together and isolated they develop their own form of sign language, completely unique. Points to note here are:
* They were completely isolated.
* They worked **as a group** to develop the sign language.
So although machines might be performing well due to their algorithmic complexity and immense power they still have some catching up to do to be compared to even **stupidest humans**.
The main problem is that we do not yet know the capacity of a brain. Some people can perform exceptional feats with their brain when the **need arises**. Someone did this during WW2 to find his family: [Grandmaster plays 48 games at once, blindfolded while riding exercise bike](https://www.theguardian.com/sport/2017/feb/10/timor-gareyev-48-chess-games-blindfolded-riding-exercise-bike-leonard-barden). But how is this suddenly possible? No one knows, and no one will until we have uncovered our own **mind** fully.
Upvotes: 2 <issue_comment>username_5: Some fields in which humans are born with advantages:
1. Fast and precise image processing ability. Even the stupidest human can tell the edge of two different objects precisely, e.g. which part of the image is a dog and which is a cat.
2. Fuzzy learning ability. Humans don't need to see all kinds of cats to identify a cat. As long as we see some cats (real ones or pictures or videos) we can identify a cat easily.
3. Reasoning. Current machine learning methods are mostly statistics-based, high-dimensional model approximation. Beyond finding a solution or a pattern, I have never seen any AI entity that can generate new ideas based on current facts.
4. Abstraction. GANs and other AI techniques can now create vivid drawings, yet I cannot find any model that can do abstract drawings. E.g. a human can doodle a cat from a real picture of cats, while AI currently can't do that.
There are more skills of this kind that humans are born with, encoded in our genes by millions of years of evolution. That said, I believe that in the future we'll have better AI entities, with better algorithms, that will eventually overcome these human advantages.
Upvotes: 2 |
2018/07/05 | 587 | 2,233 | <issue_start>username_0: Typical AI these days are question-answering machines. For example, Siri, Alexa and Google Home. But it is always the human asking the questions and the AI answering.
Are there any good examples of an AI that is curious and asks questions of its own accord?<issue_comment>username_1: You are referring to 'proactive AI' as opposed to 'reactive AI' like Alexa, Cortana, Siri, Bixby, Google Assistant, and others. There hasn't been much progress in this area of AI. [Google's recent demonstration of Duplex](https://www.youtube.com/watch?v=bd1mEm2Fy08&t=1s) addresses this to some extent. Some chatbots are proactive. [Genesys](https://www.genesys.com/customer-experience/innovations/meet-kate-blended-ai) provides such capability. Check out their [video](https://www.youtube.com/watch?v=vdOprvic_20)
Azure's bot service has a page on [how to implement proactivity](https://docs.microsoft.com/en-us/azure/bot-service/nodejs/bot-builder-nodejs-proactive-messages?view=azure-bot-service-3.0) and there is another video that walks through the whole process: [Learn to build Proactive Bot in 30 Minutes](https://www.youtube.com/watch?v=Am1NPUceFvA).
Upvotes: 3 <issue_comment>username_2: One of the simplest examples that I can think of is "Akinator". At the heart it uses decision trees to narrow down the search. It is not a "questioning" model like QA models used in Alexa, but it does asks questions.
Upvotes: 1 <issue_comment>username_3: It's certainly possible to create AI systems that ask questions. Various forms of expert systems and diagnostic support system applications already do that. As to the question of whether or not they are *curious*, that's one I'll leave to the philosophers. But it is absolutely possible to create an AI that attempts to reason out a solution to a problem, find that it is unable to generate an acceptable answer, and then prompts the user for more information.
One context where this can be done is Abductive Inference systems for medical decision making. I'd refer you to *Abductive Inference Models for Diagnostic Problem Solving* by <NAME> Peng, or *Computer Assisted Medical Decision Making 1* by Reggia and Tuhrim for more on that specific point.
Upvotes: 1 |
2018/07/06 | 2,701 | 11,990 | <issue_start>username_0: Are there possible models that have the potential to replace neural networks in the near future?
And do we even need that? What is the worst thing about using neural networks in terms of efficiency?<issue_comment>username_1: Neural networks require lots of data and training. For most tabular-format datasets it is much better to use decision-tree-based models. Most of the time, simple models are enough to give good accuracy. However, neural networks have stood the test of time. It has only been five to six years since the deep learning revolution started, so we still do not know the true potency of deep learning.
Upvotes: 0 <issue_comment>username_2: This is going backwards, but it kind of follows the logic of the arguments.
In terms of efficiency, I can see a few major problems with classical neural networks.
Data collection and preprocessing overhead
------------------------------------------
Large neural networks require *a lot* of data to train. The amount can vary depending on the size of the network and the complexity of the task, but as a rule of thumb it is usually proportional to the number of weights. For some supervised learning tasks, there simply isn't enough high-quality labelled data. Collecting large amounts of specialised training data can take months or even years, and labelling can be cumbersome and unreliable. This can be partially mitigated by data augmentation, which means "synthesising" more examples from the ones you already have, but it is not a panacea.
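For intuition, here is what the simplest kind of data augmentation looks like on a tiny 3x3 "image" (pure-Python sketch; real pipelines also add rotations, noise, colour jitter, and so on):

```python
def hflip(img):
    """Mirror a 2D 'image' (list of rows) left-to-right."""
    return [row[::-1] for row in img]

def crops(img, size):
    """All size x size windows of img - a cheap positional augmentation."""
    h, w = len(img), len(img[0])
    return [[row[x:x + size] for row in img[y:y + size]]
            for y in range(h - size + 1) for x in range(w - size + 1)]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]

# One original image yields several distinct training samples:
augmented = crops(img, 2) + crops(hflip(img), 2)
print(len(augmented))  # 8
```

Even this trivial scheme multiplies the dataset eightfold, which is why augmentation helps when labelled data is scarce, though it cannot fully replace genuinely new examples.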
Training time vs. energy tradeoff
---------------------------------
The learning rate is usually pretty small, so the training progress is slow. A large model that could take weeks to train on a desktop CPU can be trained in, say, two hours by using a GPU cluster which consumes several kW of power. This is a fundamental tradeoff due to the nature of the training procedure. That said, GPUs are getting increasingly efficient - for example, the new [nVidia Volta](https://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf) GPU architecture allows for 15.7 TFLOPs while consuming less than 300 W.
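The tradeoff can be felt even on a toy problem: gradient descent on f(x) = x^2 with a smaller learning rate simply needs many more steps to reach the same tolerance (an illustrative sketch, not a real training run):

```python
def steps_to_converge(lr, tol=1e-6):
    """Minimise f(x) = x**2 from x = 1.0 by gradient descent;
    count the steps until |x| < tol."""
    x, steps = 1.0, 0
    while abs(x) >= tol:
        x -= lr * 2 * x   # df/dx = 2x
        steps += 1
    return steps

for lr in (0.4, 0.1, 0.01):
    print(lr, steps_to_converge(lr))
```

The step counts grow sharply as the learning rate shrinks, and in real training each "step" is a full pass over a batch, which is where the wall-clock and energy costs come from.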
Non-transferrability
--------------------
Right now, virtually every different problem requires a custom neural network to be designed, trained and deployed. While the solution often works, it is kind of *locked* into that problem. For example, [AlphaGo](https://deepmind.com/research/alphago/) is brilliant at Go, but it would be hopeless at driving a car or providing music recommendations - it was just not designed for such tasks. This overwhelming redundancy is a major drawback of neural networks in my view, and it is also a major impediment to the progress of neural network research in general. There is a whole research area called [transfer learning](https://machinelearningmastery.com/transfer-learning-for-deep-learning/) which deals with finding ways of applying a network trained on one task to a different task. Often this relates to the fact that there might not be enough data to train a network from scratch on the second task, so being able to use a pre-trained model with some extra tuning is very appealing.
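The usual fine-tuning recipe from transfer learning can be caricatured in a few lines: freeze the "pretrained" part and fit only a small head on the new task. Everything below is a toy stand-in, not a real network:

```python
def features(x):
    """Stand-in for a frozen pretrained feature extractor - never updated."""
    return [x, x * x]

def train_head(data, lr=0.02, epochs=1000):
    """Train only the linear head on top of the frozen features."""
    w = [0.0, 0.0]                      # the only learned parameters
    for _ in range(epochs):
        for x, y in data:
            f = features(x)
            err = w[0] * f[0] + w[1] * f[1] - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
    return w

data = [(1.0, 2.0), (2.0, 8.0), (0.5, 0.5)]   # target function: y = 2 * x**2
w = train_head(data)
print(w)  # w[1] approaches 2.0, w[0] approaches 0.0
```

Because only the head is trained, far fewer examples and far less compute are needed than training everything from scratch, which is exactly the appeal of reusing a pre-trained model.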
---
The first part of the question is more tricky. Leaving purely statistical models aside, I haven't seen any prominent approaches to machine learning that are *radically* different from neural networks. However, there are some interesting developments that are worth mentioning because they address some of the above inefficiencies.
Neuromorphic chips
------------------
A bit of background first.
[Spiking neural networks](https://homepages.cwi.nl/%7Esbohte/publication/es2014-13Gruning.pdf) have enormous potential in terms of computational power. In fact, it has been [proven](https://igi-web.tugraz.at/PDF/85a.pdf) that they are *strictly more powerful* than classical neural networks with sigmoid activations.
Added to that, spiking neural networks have an intrinsic grasp of **time** - something that has been a major hurdle for classical networks since their inception. Not only that, but spiking networks are **event-driven**, which means that neurons operate only if there is an incoming signal. This is in contrast to classical networks, where each neuron is evaluated regardless of its input (again, this is just a consequence of the evaluation procedure usually being implemented as a multiplication of two dense matrices). So spiking networks employ a **sparse** encoding scheme, which means that only a small fraction of the neurons are active at any given time.
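The event-driven behaviour is easy to see in a toy leaky integrate-and-fire neuron, which stays silent most of the time and emits a spike only when its membrane potential crosses a threshold (the constants here are illustrative, not from any particular model):

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: integrate input, leak a little each step,
    and spike (then reset) when the potential reaches the threshold."""
    v, spikes = 0.0, []
    for t, i in enumerate(inputs):
        v = leak * v + i
        if v >= threshold:
            spikes.append(t)   # an event is emitted only here
            v = 0.0            # reset after the spike
    return spikes

# A neuron driven by a constant weak input fires sparsely, not every step:
print(lif_neuron([0.4] * 10))  # [2, 5, 8]
```

The output stream is sparse, three events out of ten timesteps, which is the property neuromorphic hardware exploits: no input, no spike, no computation.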
Now, the sparse spike-based encoding and event-driven operation are suitable for hardware-based implementations of spiking networks called **neuromorphic chips**. For example, IBM's [TrueNorth](https://www.ibm.com/blogs/research/2016/09/deep-learning-possible-embedded-systems-thanks-truenorth/) chip can simulate **1 million neurons** and **256 million connections** while drawing only about **100 mW** of power on average. This is **orders of magnitude** more efficient than the current nVidia GPUs. Neuromorphic chips may be the solution the training time / energy tradeoff I mentioned above.
Also, [memristors](http://www.memristor.org/reference/research/13/what-are-memristors) are a relatively new but very promising development. Basically, a memristor is a fundamental circuit element very similar to a resistor but with **variable** resistance proportional to the total amount of current that has passed through it over its entire lifetime. Essentially, this means that it maintains a "memory" of the amount of current that has passed through it. One of the exciting potential applications of memristors is modelling synapses in hardware extremely efficiently.
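A caricature of that behaviour: a state variable integrates the charge that has passed through, and the resistance drifts between an "off" and an "on" value accordingly. All device constants below are made up for illustration:

```python
def memristor(currents, dt=1.0, r_on=100.0, r_off=16000.0):
    """Toy memristor: resistance moves from r_off toward r_on in
    proportion to the total charge that has passed through it."""
    charge_for_full_swing = 10.0   # hypothetical device constant
    q = 0.0
    resistances = []
    for i in currents:
        q = min(max(q + i * dt, 0.0), charge_for_full_swing)
        frac = q / charge_for_full_swing
        resistances.append(r_off + (r_on - r_off) * frac)
    return resistances

rs = memristor([1.0] * 5)
print(rs[0], rs[-1])  # resistance falls as charge accumulates
```

The "memory" is the persistent state `q`: stop driving current and the resistance stays where it is, which is what makes the device attractive as a hardware synapse whose weight is its conductance.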
Reinforcement learning and evolution
------------------------------------
I think these are worth mentioning because they are promising candidates for addressing the problem of non-transferrability. These are not restricted to neural networks - being reward-driven, RL and evolution are theoretically applicable in a generic setting to any task where it is possible to define a reward or a goal for an agent to attain. This is not necessarily trivial to do, but it is much more generic than the usual error-driven approach, where the learning agent tries to minimise the difference between its output and a ground truth. The main point here is about transfer learning: ideally, applying a trained agent to a different task **should be** as simple as changing the goal or reward (they are not quite at that level yet, though...).
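The "just change the reward" point can be sketched with a tiny epsilon-greedy bandit learner; note that nothing in the learner itself is task-specific (toy code with invented reward functions):

```python
import random

def train(reward, episodes=2000, eps=0.1, lr=0.1, n_actions=3):
    """Generic reward-driven learner: the only task-specific input is `reward`."""
    q = [0.0] * n_actions
    rng = random.Random(0)   # fixed seed for reproducibility
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(n_actions)                   # explore
        else:
            a = max(range(n_actions), key=q.__getitem__)   # exploit
        q[a] += lr * (reward(a) - q[a])                    # value update
    return max(range(n_actions), key=q.__getitem__)

# "Transferring" the agent to a new task is just swapping the reward function:
best_a = train(lambda a: 1.0 if a == 2 else 0.0)
best_b = train(lambda a: 1.0 if a == 0 else 0.0)
print(best_a, best_b)  # 2 0
```

The same code learns two different goals with zero changes to the learner, which is the kernel of the transferability argument, even if real RL tasks are far harder to reward-engineer than a bandit.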
Upvotes: 3 <issue_comment>username_3: **Replacing Neural Nets**
There may exist new algorithms that have the potential to replace neural nets. However, one of the characteristics of neural nets is that they employ simple elements, each with low demands on computing resources in geometric patterns.
Artificial neurons can be run in parallel (without CPU time sharing or looping) by mapping the computations to DSP devices or other parallel computing hardware. That the many neurons are essentially alike is thus a strong advantage.
**What Would We Be Replacing?**
When we consider algorithmic replacements to neural nets, we imply that a neural net design is an algorithm. It is not.
A neural net is an approach to converging on a real time circuit to perform a nonlinear transformation of input to output based on some formulation of what is optimal. Such a formulation may be the minimization of a measure of error or disparity from some defined ideal. It may be a measure of wellness that must be maximized.
The source of the fitness determination for any given network behavior may be internal. We call that unsupervised learning. It may be external, which we call supervised when the external fitness information is coupled with input vectors in the form of desired output values, which we call labels.
Fitness may also originate externally as a scalar or vector not coupled with the input data but rather real time, which we call reinforcement. Such requires re-entrant learning algorithms. Net behavioral fitness may alternatively be evaluated by other nets within the system, in the case of stacked nets or other configurations such as Laplacian hierarchies.
The selection of algorithms has little to do with comparative intelligence once the mathematical and process designs are selected. Algorithm design is more directly related to minimizing demands for computing resources and reducing time requirements. This minimization is hardware and operating system dependent too.
**Is a Replacement Indicated?**
Sure. It would be better if networks were more like mammalian neurons.
* Sophistication of activation
* Heterogeneity of connection patterns
* Plasticity of design, to support meta-adaptation
* Governed by many dimensions of regional signaling
By regional signaling is meant the many chemical signals beyond signal transmission across synapses.
We can even consider going beyond mammalian neurology.
* Combining parametric and hypothesis-based learning
* Learning of the form employed when microbes pass DNA
**Neural Net Efficiency**
Efficiency cannot be quantified in some universal scale as temperature can be quantified in degrees Kelvin. Efficiency can only be quantified as a quotient of some measured value over some theoretical ideal. Note that it is an ideal, not a maximum, in the denominator. In thermodynamic engines, that ideal is the rate of energy input, which can never be fully transferred to the output.
Similarly, neural nets can never learn in zero time. A neural net cannot achieve zero error over an arbitrarily long time in production either. Therefore information is in some ways like energy, a concept investigated by <NAME> of Bell Labs during the dawn of digital automation, and the relationship between information entropy and thermodynamic entropy is now an important part of theoretical physics.
There can be no bad learning efficiency or good learning efficiency. There can be neither bad performance nor good performance, if we wish to think in logical and scientific terms — only relative improvement of some system configuration with respect to some other system configuration for a very specific set of performance scenarios.
Therefore, without an unambiguous specification of the two hardware, operating system, and software configurations and a fully defined test suite used for relative evaluation, efficiency is meaningless.
Upvotes: 1 <issue_comment>username_4: We do have some hope lurking on that front. As of now we have [capsule networks](https://arxiv.org/pdf/1710.09829) by J. Hinton, which use a different non-linear activation called the 'squash' function.
1. Hinton calls max-pooling in CNNs a 'big mistake', as CNNs look only for the presence of objects in an image rather than the relative orientation between them. So they lose spatial information while trying to achieve translation invariance.
2. Neural nets have fixed connections, whereas a capsule in a capsule network 'decides' to which other capsule it has to pass its activation during every epoch. This is called 'routing'.
3. The activation of every neuron in neural nets is a scalar, whereas the activation of a capsule is a vector capturing the pose and orientation of an object in an image.
4. CNNs are considered to be bad representations of the human visual system. By human visual system I mean the eyes and the brain/cognition together. We could identify the Statue of Liberty from any pose, even if we have looked at it from only one pose. CNNs in most cases cannot detect the same object in different poses and orientations.
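For reference, the 'squash' non-linearity mentioned above maps a vector s to (|s|^2 / (1 + |s|^2)) * s/|s|, shrinking the norm into [0, 1) while preserving the direction; a plain-Python sketch:

```python
import math

def squash(s):
    """Capsule 'squash': scales the vector's norm into [0, 1),
    keeping its direction unchanged."""
    norm_sq = sum(x * x for x in s)
    norm = math.sqrt(norm_sq)
    if norm == 0:
        return [0.0 for _ in s]
    scale = norm_sq / (1.0 + norm_sq) / norm
    return [scale * x for x in s]

v = squash([3.0, 4.0])     # input norm 5 -> output norm 25/26
print(math.hypot(*v))
```

The squashed norm acts as the probability that the entity the capsule represents is present, while the direction encodes its pose, which is the key departure from scalar activations.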
Capsule networks themselves have some shortcomings. So there has been work in the direction of looking beyond neural nets. You can read this [blog](https://medium.com/ai%C2%B3-theory-practice-business/understanding-hintons-capsule-networks-part-i-intuition-b4b559d1159b) for a good understanding before you read the paper by J.Hinton.
Upvotes: 2 |
2018/07/07 | 2,767 | 12,233 | <issue_start>username_0: I am very new to machine learning. I am following the course offered by <NAME>. I am very confused about how we train our neural network for multi-class classification.
Let's say we have $K$ classes. For $K$ classes, we will be training $K$ different neural networks.
But do we train one neural network at a time for all features, or do we train all $K$ neural networks at a time for one feature?
Please, explain the complete procedures.
Upvotes: 0 <issue_comment>username_2: This is going backwards, but it kind of follows the logic of the arguments.
In terms of efficiency, I can see a few major problems with classical neural networks.
Data collection and preprocessing overhead
------------------------------------------
Large neural networks require *a lot* of data to train. The amount can vary depending on the size of the network and the complexity of the task, but as a rule of thumb it is usually proportional to the number of weights. For some supervised learning tasks, there simply isn't enough high-quality labelled data. Collecting large amounts of specialised training data can take months or even years, and labelling can be cumbersome and unreliable. This can be partially mitigated by data augmentation, which means "synthesising" more examples from the ones you already have, but it is not a panacea.
Training time vs. energy tradeoff
---------------------------------
The learning rate is usually pretty small, so the training progress is slow. A large model that could take weeks to train on a desktop CPU can be trained in, say, two hours by using a GPU cluster which consumes several kW of power. This is a fundamental tradeoff due to the nature of the training procedure. That said, GPUs are getting increasingly efficient - for example, the new [nVidia Volta](https://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf) GPU architecture allows for 15.7 TFLOPs while consuming less than 300 W.
Non-transferrability
--------------------
Right now, virtually every different problem requires a custom neural network to be designed, trained and deployed. While the solution often works, it is kind of *locked* into that problem. For example, [AlphaGo](https://deepmind.com/research/alphago/) is brilliant at Go, but it would be hopeless at driving a car or providing music recommendations - it was just not designed for such tasks. This overwhelming redundancy is a major drawback of neural networks in my view, and it is also a major impediment to the progress of neural network research in general. There is a whole research area called [transfer learning](https://machinelearningmastery.com/transfer-learning-for-deep-learning/) which deals with finding ways of applying a network trained on one task to a different task. Often this relates to the fact that there might not be enough data to train a network from scratch on the second task, so being able to use a pre-trained model with some extra tuning is very appealing.
---
The first part of the question is more tricky. Leaving purely statistical models aside, I haven't seen any prominent approaches to machine learning that are *radically* different from neural networks. However, there are some interesting developments that are worth mentioning because they address some of the above inefficiencies.
Neuromorphic chips
------------------
A bit of background first.
[Spiking neural networks](https://homepages.cwi.nl/%7Esbohte/publication/es2014-13Gruning.pdf) have enormous potential in terms of computational power. In fact, it has been [proven](https://igi-web.tugraz.at/PDF/85a.pdf) that they are *strictly more powerful* than classical neural networks with sigmoid activations.
Added to that, spiking neural networks have an intrinsic grasp of **time** - something that has been a major hurdle for classical networks since their inception. Not only that, but spiking networks are **event-driven**, which means that neurons operate only if there is an incoming signal. This is in contrast to classical networks, where each neuron is evaluated regardless of its input (again, this is just a consequence of the evaluation procedure usually being implemented as a multiplication of two dense matrices). So spiking networks employ a **sparse** encoding scheme, which means that only a small fraction of the neurons are active at any given time.
Now, the sparse spike-based encoding and event-driven operation are suitable for hardware-based implementations of spiking networks called **neuromorphic chips**. For example, IBM's [TrueNorth](https://www.ibm.com/blogs/research/2016/09/deep-learning-possible-embedded-systems-thanks-truenorth/) chip can simulate **1 million neurons** and **256 million connections** while drawing only about **100 mW** of power on average. This is **orders of magnitude** more efficient than the current nVidia GPUs. Neuromorphic chips may be the solution the training time / energy tradeoff I mentioned above.
Also, [memristors](http://www.memristor.org/reference/research/13/what-are-memristors) are a relatively new but very promising development. Basically, a memristor is a fundamental circuit element very similar to a resistor but with **variable** resistance proportional to the total amount of current that has passed through it over its entire lifetime. Essentially, this means that it maintains a "memory" of the amount of current that has passed through it. One of the exciting potential applications of memristors is modelling synapses in hardware extremely efficiently.
Reinforcement learning and evolution
------------------------------------
I think these are worth mentioning because they are promising candidates for addressing the problem of non-transferability. These are not restricted to neural networks - being reward-driven, RL and evolution are theoretically applicable in a generic setting to any task where it is possible to define a reward or a goal for an agent to attain. This is not necessarily trivial to do, but it is much more generic than the usual error-driven approach, where the learning agent tries to minimise the difference between its output and a ground truth. The main point here is about transfer learning: ideally, applying a trained agent to a different task **should be** as simple as changing the goal or reward (they are not quite at that level yet, though...).
Upvotes: 3 <issue_comment>username_3: **Replacing Neural Nets**
There may exist new algorithms that have the potential to replace neural nets. However, one of the characteristics of neural nets is that they employ simple elements, each with low demands on computing resources in geometric patterns.
Artificial neurons can be run in parallel (without CPU time sharing or looping) by mapping the computations to DSP devices or other parallel computing hardware. That the many neurons are essentially alike is thus a strong advantage.
**What Would We Be Replacing?**
When we consider algorithmic replacements to neural nets, we imply that a neural net design is an algorithm. It is not.
A neural net is an approach to converging on a real time circuit to perform a nonlinear transformation of input to output based on some formulation of what is optimal. Such a formulation may be the minimization of a measure of error or disparity from some defined ideal. It may be a measure of wellness that must be maximized.
The source of the fitness determination for any given network behavior may be internal. We call that unsupervised learning. It may be external, which we call supervised when the external fitness information is coupled with input vectors in the form of desired output values, which we call labels.
Fitness may also originate externally as a scalar or vector not coupled with the input data but rather real time, which we call reinforcement. Such requires re-entrant learning algorithms. Net behavioral fitness may alternatively be evaluated by other nets within the system, in the case of stacked nets or other configurations such as Laplacian hierarchies.
The selection of algorithms has little to do with comparative intelligence once the mathematical and process designs are selected. Algorithm design is more directly related to minimizing demands for computing resources and reducing time requirements. This minimization is hardware and operating system dependent too.
**Is a Replacement Indicated?**
Sure. It would be better if networks were more like mammalian neurons.
* Sophistication of activation
* Heterogeneity of connection patterns
* Plasticity of design, to support meta-adaptation
* Governed by many dimensions of regional signaling
By regional signaling is meant the many chemical signals beyond signal transmission across synapses.
We can even consider going beyond mammalian neurology.
* Combining parametric and hypothesis-based learning
* Learning of the form employed when microbes pass DNA
**Neural Net Efficiency**
Efficiency cannot be quantified in some universal scale as temperature can be quantified in degrees Kelvin. Efficiency can only be quantified as a quotient of some measured value over some theoretical ideal. Note that it is an ideal, not a maximum, in the denominator. In thermodynamic engines, that ideal is the rate of energy input, which can never be fully transferred to the output.
Similarly, neural nets can never learn in zero time. A neural net cannot achieve zero error over an arbitrarily long time in production either. Therefore information is in some ways like energy, a concept investigated by <NAME> of Bell Labs during the dawn of digital automation, and the relationship between information entropy and thermodynamic entropy is now an important part of theoretical physics.
There can be no bad learning efficiency or good learning efficiency. There can be neither bad performance nor good performance, if we wish to think in logical and scientific terms — only relative improvement of some system configuration with respect to some other system configuration for a very specific set of performance scenarios.
Therefore, without an unambiguous specification of the two hardware, operating system, and software configurations and a fully defined test suite used for relative evaluation, efficiency is meaningless.
Upvotes: 1 <issue_comment>username_4: We do have some hope lurking in that front. As of now we have [capsule networks](https://arxiv.org/pdf/1710.09829) by J.Hinton which uses a different non-linear activation called the 'squash' function.
1. Hinton calls max-pooling in CNN as a 'big mistake', as CNN look only for presence objects in an image rather than the relative orientation between them. So they lose the spatial information while trying to achieve translation invariance.
2. Neural nets have fixed connections, whereas a capsule in a capsule network 'decides' to which other capsule it has to pass its activation during every epoch. This is called 'routing'.
3. The activation of every neuron in neural nets is a scalar. Whereas the activation of capsule is a vector capturing the pose and orientation of an object in an image.
4. CNN are considered to bad representations of human visual system. By human visual system I mean eyes and the brain/cognition together. We could identify Statue of Liberty from any pose, even if we have looked at it from one pose. CNN on most of the cases cannot detect same object in different poses and orientations.
Capsule networks themselves have some shortcomings. So there has been work in the direction of looking beyond neural nets. You can read this [blog](https://medium.com/ai%C2%B3-theory-practice-business/understanding-hintons-capsule-networks-part-i-intuition-b4b559d1159b) for a good understanding before you read the paper by J.Hinton.
Upvotes: 2 |
2018/07/08 | 1,230 | 5,517 | <issue_start>username_0: A neural network is usually programmed to learn from datasets to solve a specific problem. Essentially, they perform non-linear regression.
*Could a neural network be programmed to receive input from a human, like a terminal, to begin to grow and learn (similar to how a child learns)?*
A program that neither knows its purpose nor specific data sets but is given enough information to learn based on the input, ponder the input, and ask questions. A child discovers their purpose (in destiny based philosophy) through experience. Thus, could an AI be created that would learn its purpose over time?
Grow both by continued development, maybe adding extensions that add image recognition, speech analysis, etc., and through user interaction. Eventually learning "moral imperatives" or simple the do's and don't's and how to interact with data.
A case scenario would be a Question & Answer session with the neural network and a large data set. Where the human operator knows the answers. At first, the question and the answer are supplied to the neural network. Giving it the ability to find the answer supplied through deep learning. A guaranteed confidence score of (1) - as the question is pondered the closer it gets to the answer the more it "learns".
The next step is supplying the question and waiting for the answer. The human still knows these answers but is testing the "learning machine" to see if it is truly learning and not "repeating the answer". The answer is supplied by the machine and the human returns with either a percentage that the machine is right (hopefully and eventually matching its confidence score). and after an amount of failure provides the right answer to the machine to repeat the first step and improve learning.
The last step is being able to have the machine answer the question with the human not knowing the solution, thus completing the learning cycle. The human would test the solution and report the results to the machine and the machine would adapt the process and continue learning. However, this time it would begin learning from a data set of results. Hopefully learning "data mining" during its question and answer session.<issue_comment>username_1: Technological advancement has historically been measured by the processing speed of computing machines. Cogitative behavior psychology is proving to correct human processor disorders such as ADHD, anxiety disorders, addiction, and other psychological disorders preventing humans from normal human interactions and social learning. Cognitive processing speed is very different from the speed per second of programmed computations. Just as humans are learning that a new program for energetic children works better than amphetamines, we will learn a new way to teach AI. AI however is capable of the exponentially growing fast speed of computations per second. AI will indeed and is indeed guiding humanity to base the direction of machine learning along the human path to enlightenment. From providing a computer opponent for chess to face recognition robotics, human learning is based on our amazingly complex brain power and we will always be necessary for advancement of technology.
Parkaire Consultants, (2012, February 24). Cognitive Processing Speed. Retrieved July 7, 2018, from <http://parkaireconsultants.com/cognitive-processing-speed/>
<NAME>. (2018, June 21). The Human Brain vs. Supercomputers... Which One Wins? » Science ABC. Retrieved July 7, 2018, from <https://www.scienceabc.com/humans/the-human-brain-vs-supercomputers-which-one-wins.html>
Upvotes: 2 <issue_comment>username_2: In principle, yes, what you are proposing can be done. The exact details of *how* to do it are an open research question. The details would also depend on exactly what your goals for the system are. If you're just trying to build some domain specific system that learns a very specific kind of knowledge, then that's probably going to be easier than building an AGI that learns like a child does.
What I will suggest, although this is not proven, is that building a really powerful system of this sort will probably require more than deep learning. I would also caution anybody interested in AI against thinking that deep learning is the "be all, end all" of AI techniques. My guess is that doing this well will ultimately require a multi-agent system, maybe something like Minsky's "Society of Mind" approach, or a Blackboard model, with collaborating agents, each specialized for various aspects of intelligence. My feeling is that you will, indeed, need deep learning for classification / pattern matching, but possibly also things like Case Based Reasoning, K-Lines, a Semantic Network, Rule Learning, BDI, and other techniques, working together.
Upvotes: 2 <issue_comment>username_3: It might be possible, but, in my opinion, it won't be very successful, given that you need to somehow specify a purpose, even if that purpose is something like trying to be like humans. The most important thing of an intelligent machine is that it follows a goal, that is the very essence of intelligence.
Upvotes: 2 <issue_comment>username_4: You might also be looking for [active learning](https://en.wikipedia.org/wiki/Active_learning_(machine_learning)), where the machine learning algorithm interactively queries the user to label certain unlabeled training examples. Active learning is similar to semi-supervised learning, in which there are labeled and unlabeled examples.
Upvotes: 2 |
2018/07/09 | 1,219 | 5,023 | <issue_start>username_0: I choose the activation function for the output layer depending on the output that I need and the properties of the activation function that I know. For example, I choose the *sigmoid* function when I'm dealing with probabilities, a *ReLU* when I'm dealing with positive values, and a linear function when I'm dealing with general values.
In hidden layers, I use a *leaky ReLU* to avoid dead neurons instead of the ReLU, and the *tanh* instead of the *sigmoid*. Of course, I don't use a linear function in hidden units.
However, the choice for them in the hidden layer is mostly due to trial and error.
Is there any rule of thumb of which activation function is likely to work well in some situations?
Take the term *situations* as general as possible: it could be referring to the depth of the layer, to the depth of the NN, to the number of neurons for that layer, to the optimizer that we chose, to the number of input features of that layer, to the application of this NN, etc.
The more activation functions I discover the more I'm confused in the choice of the function to use in hidden layers. I don't think that flipping a coin is a good way of choosing an activation function.<issue_comment>username_1: It seems to me that you already understand the shortcomings of ReLUs and sigmoids (like dead neurons in the case of plain ReLU).
You may want to look at [**ELU**](https://arxiv.org/abs/1511.07289) (exponential linear units) and [**SELU**](https://arxiv.org/abs/1706.02515) (self-normalising version of ELU). Under some mild assumptions, the latter has the nice property of self-normalisation, which mitigates the problem of vanishing and exploding gradients. In addition, they *propagate* normalisation - i.e., they guarantee that the input to the next layer will have zero mean and unit variance.
However, it would be incredibly difficult to recommend an activation function (AF) that works for all use cases, although I believe that SELU was designed so that it would do the right thing with pretty much any input.
There are many considerations - how difficult it is to compute the derivative (if it is differentiable at all!), how quickly a NN with your chosen AF converges, how smooth it is, whether it satisfies the conditions of the [universal approximation theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem), whether it preserves normalisation, and so on. You may or may not care about some or any of those.
The bottom line is that there is no universal rule for choosing an activation function for hidden layers. Personally, I like to use sigmoids (especially *tanh*) because they are nicely bounded and very fast to compute, but most importantly because they *work for my use cases*. Others [recommend](https://www.coursera.org/lecture/ai/how-to-choose-the-correct-activation-function-foyh8) *leaky ReLU* for the input and hidden layers as a go-to function if your NN fails to learn. You can even [mix and match](https://arxiv.org/pdf/1703.07122.pdf) activation functions to evolve NNs for [fancy applications](http://picbreeder.org/).
At the end of the day, you are probably going to get as many opinions as there are people about the right choice of activation function, so the short answer should probably be: start with the AF of the day (leaky ReLU / SELU?) and work your way through other AFs in order of decreasing popularity if your NN struggles to learn anything.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I don't know what kind of neural networks you are working on. But one should also consider tanh activation functions when dealing with recurrent neural network. The why is to avoid exploding gradient issues since the tanh function is bounded at the difference of the RELU function for instance.
Upvotes: 0 <issue_comment>username_3: \*\*\*Take my answer as a side note to that given by username_1:
If one can verify that an activation function perform well in some cases, that good behavior often extrapolates to other problems.
Thus, by testing activation functions on a few different problems, one can often infer how well (or badly) it will perform on most problems.
The following video shows how different activation functions perform in different problems:
<https://www.youtube.com/watch?v=Hb3vIYUQ_I8>
One can verify that an activation function usually perform well in all cases, or the other way around: it does it poorly in all cases.
As username_1 says, I would recommend always starting with leaky ReLU: it is simple, efficient, and generally produces nice results in a wide variety of problems.
It also evades the dying ReLU problem, and does not suffer from the vanishing gradient problem.
The only thing to keep in mind is the exploding gradient problem if the neural network is too deep, or if it is a recurrent neural network, which are essentially the same concept.
The video shows that other activation functions worth trying (in addition to leaky ReLU) are Gaussian, Sinusoid, or Tanh.
Upvotes: 2 |
2018/07/09 | 343 | 1,383 | <issue_start>username_0: I am training LSTM neural networks with Keras on a small mobile GPU. The speed on the GPU is slower than on the CPU. I found some articles that say that it is hard to train LSTMs (and, in general, RNNs) on GPUs because the training cannot be parallelized.
Is this true? Is LSTM training on large GPUs, like 1080 Ti, faster than on CPUs?<issue_comment>username_1: From Nvidia www (<https://developer.nvidia.com/discover/lstm>):
>
> ### Accelerating Long Short-Term Memory using GPUs
>
>
> The parallel processing capabilities of GPUs can accelerate the LSTM training and inference processes. GPUs are the de-facto standard for LSTM usage and deliver a 6x speedup during training and **140x higher throughput during inference when compared to CPU implementations**. cuDNN is a GPU-accelerated deep neural network library that supports training of LSTM recurrent neural networks for sequence learning. TensorRT is a deep learning model optimizer and runtime that supports inference of LSTM recurrent neural networks on GPUs. Both cuDNN and TensorRT are part of the NVIDIA Deep Learning SDK.
>
>
>
Upvotes: 3 <issue_comment>username_2: I found that there are cuDNN accelerated cells in Keras, for example, <https://keras.io/layers/recurrent/#cudnnlstm>.
They are very fast. The normal LSTM cells are faster on CPU than on GPU.
Upvotes: 4 [selected_answer] |
2018/07/10 | 380 | 1,615 | <issue_start>username_0: We have AI's predicting images, predicting objects in an image. Understanding audio, meaning of the audio if it is a spoken sentence.
In humans when we start seeing a movie halfway through, we still understand the entire movie (although this might be attributed to the fact that future events in movies have a link to past events). But even if we see a movie by skipping lots of bits in-between we still understand the movie.
So can a Machine Learning AI do this? Or do humans have some inherent experiences in life which makes AI incapable of performing such a feat?<issue_comment>username_1: From Nvidia www (<https://developer.nvidia.com/discover/lstm>):
>
> ### Accelerating Long Short-Term Memory using GPUs
>
>
> The parallel processing capabilities of GPUs can accelerate the LSTM training and inference processes. GPUs are the de-facto standard for LSTM usage and deliver a 6x speedup during training and **140x higher throughput during inference when compared to CPU implementations**. cuDNN is a GPU-accelerated deep neural network library that supports training of LSTM recurrent neural networks for sequence learning. TensorRT is a deep learning model optimizer and runtime that supports inference of LSTM recurrent neural networks on GPUs. Both cuDNN and TensorRT are part of the NVIDIA Deep Learning SDK.
>
>
>
Upvotes: 3 <issue_comment>username_2: I found that there are cuDNN accelerated cells in Keras, for example, <https://keras.io/layers/recurrent/#cudnnlstm>.
They are very fast. The normal LSTM cells are faster on CPU than on GPU.
Upvotes: 4 [selected_answer] |
2018/07/10 | 555 | 2,099 | <issue_start>username_0: [Maxout networks](https://arxiv.org/abs/1302.4389) were a simple yet brilliant idea of Goodfellow et al. from 2013 to max feature maps to get a universal approximator of convex activations. The design was tailored for use in conjunction with dropout (then recently introduced) and resulted of course in state-of-the-art results on benchmarks like CIFAR-10 and SVHN.
Five years later, dropout is definitely still in the game, but what about maxout? The paper is still widely cited in recent papers according to Google Scholar, but it seems barely any are actually using the technique.
So is maxout a thing of the past, and if so, why — what made it a top performer in 2013 but not in 2018?<issue_comment>username_1: Basically, if you read the full paper (especially, the abstract and the section 7), you find that the main accomplishment remains a marginal contribution on top of dropout.
If you see the empirical results on Table 5 (of the page 5) of the maxout's [original paper](https://arxiv.org/pdf/1302.4389.pdf), you find that the misclassification rate is only very, very slightly lower than that of dropouts. (2.47 % instead of 2.78%)
That could explain the relatively lower interest in the work.
Upvotes: 2 <issue_comment>username_2: A few years later, stumbling on my own question, I feel I could share my view on that point.
I think it boils down eventually to a question of efficiency. There is nothing wrong per se with maxout; it is a distant cousin of ReLU that is in theory more capable. But the crux of the matter is that to produce a single scalar output, maxout consumes $k$ neurons, $k\geq 2$. The minimum price to pay to replace a standard activation function like ReLU with maxout while keeping the latent dimension is doubling the width of the scalar product. It is rather steep.
Though, a nice property of maxout is that it tends to keep the signal very much alive, instead of killing it like ReLU does. That property might prove to be important enough for maxout to flourish in some niche market despite its inherent wastefulness.
Upvotes: 0 |
2018/07/12 | 1,352 | 5,609 | <issue_start>username_0: Lets say I have a list of **100k medical cases** from my hospital, **each row = patient** with symptoms (such as **fever** , **funny smell**, **pain** etc.. ) and my labels are medical conditions such as **Head trauma**, **cancer** , etc..
The patient come and say *"I have fever"* and I need to predict his medical condition according to the symptoms.According to my data set I know that both fever and vomiting goes with condition **X**. So i would like to ask him if he is vomiting to increase certainty in my classification.
What is the best algorithmic approach to find the right question (generating question from my data set of historical data). I thought about trying active learning on the features but I am not sure that it is the right direction.<issue_comment>username_1: **Feature Extraction**
<NAME> Gibson's *Deep Learning, A Practitioner's Approach, O'Reiley, 2017* states, "Convolutional Neural Networks (CNNs) ... consistently top image classification competitions," which is consistent with our experience in the lab. If your data is multi-dimensional in that pain is on a scale from one to ten, fever is in degrees, and smell can be a result of blood components which can be quantified in lab reports, you can have a hypercube that can be treated just as frames in a movie can. Movie learning is in ℝ4, the third being frame index and the fourth being sample index. With subjective pain, digital thermometer temperature, and three blood component concentrations, you have {P, T, C1, C2, C3} and learning in ℝ6 for your CNN design.
**Selecting Input Channels**
Asking 100 questions and taking 10 blood panels is probably prohibitive. So you will need to stuff all the data from limited questioning and panels into a hyper-cube and find what will similarly extract features from sparse data input. Then the weighting leading from input to feature layers will identify the questions from which the most important features can be extracted. By searching scholarly articles for, "Feature extraction sparse data," a large number of options will be presented.
[Breast cancer diagnosis based on feature extraction using a hybrid of K-means and support vector machine algorithms, <NAME>, <NAME>, SS Lam - Expert Systems with Applications, 2014 - Elsevier](https://www.sciencedirect.com/science/article/pii/S0957417413006659) may be particularly interesting, given the common domain.
**Outcomes Analysis**
The above is a limited approach because the loop is not closed. Only if the outcomes of treatment are used to produce labels or a real time (over the course of months or years) reinforcement will the system produce an optimization that is meaningful. Unsupervised learning for this particular problem is not likely to produce any significant improvement in treatment efficacy.
Upvotes: 1 <issue_comment>username_2: The problem you're trying to address can, in some sense, be viewed as a **Feature Selection** problem. If you look for literature using only those words, you're not going to find what you're looking for though. In general, "Feature Selection" simply refers to the problem where you already have a large amount of features, and you're simply deciding to select which ones to keep and which ones to throw away (because they're not informative or you don't have the processing power to try training with all features for example).
I'd recommend looking around for a combination of "**Feature Selection**" and "**Cost-Sensitive**". This is because, in your case, there are costs associated with selecting features; values may be costly to obtain for some features. Searching for this combination leads to publications which look to be interesting for you, such as:
* [Cost-sensitive feature selection using random forest: Selecting low-cost subsets of informative features](https://www.sciencedirect.com/science/article/pii/S0950705115004372)
* [Cost-sensitive Dynamic Feature Selection](https://hhexiy.github.io/docs/papers/dynafea_ws.pdf)
* [Cost-Effective Feature Selection and
Ordering for Personalized Energy Estimates](http://www.aaai.org/ocs/index.php/WS/AAAIW16/paper/download/12572/12369)
* and probably much more...
I cannot personally vouch for any of those techniques since I've never used them, but those papers certainly look relevant for your problem.
---
When you're looking around for more literature, terms like "cost", "cost-based", maybe "budgeted" are crucial to include. If you don't include those, you're just going to get papers on problems like:
* **Feature Selection**: given a set of features/columns, which ones am I going to use *across all samples/instances/rows*?
* **Feature Extraction**: given data (typically without clear human-defined features, like images, sound, etc.), how am I going to extract relevant features from this?
* **Active Learning**: given a bunch of samples without labels but feature values already assigned, which one would I like an oracle/human expert/etc. to have a look at so that they can tell me what the true label is?
Those kinds of problems all do not really appear to be relevant in your case. *Active Learning* may be somewhat interesting in that it is about trying to figure out which *rows* would be valuable to learn from, whereas your problem is about which *columns* would be valuable to learn from. There does seem to be a connection there, Active Learning techniques might to some extent be able to inspire techniques for your problem, but just that; inspire, they likely won't be 100% directly applicable without additional work.
Upvotes: 3 [selected_answer] |
2018/07/16 | 1,408 | 5,958 | <issue_start>username_0: How do I choose the best algorithm for a board game like checkers?
So far, I have considered only three algorithms, namely, minimax, alpha-beta pruning, and Monte Carlo tree search (MCTS). Apparently, both the alpha-beta pruning and MCTS are extensions of the basic minimax algorithm.<issue_comment>username_1: If you have to choose between minimax and alpha-beta pruning, you should choose alpha-beta. It is more efficient and fast because it can prune a substantial part of your exploration tree. But you need to order the actions from the best to the worst depending on max or min point of view, so the algorithm can quickly realize if the exploration is necessary.
Upvotes: 2 <issue_comment>username_2: **tl;dr:**
* None of these algorithms are practical for modern work, but they are good places to start pedagogically.
* You should always prefer to use Alpha-Beta pruning over bare minimax search.
* You should prefer to use some form of heuristic guided search if you can come up with a useful heuristic. Coming up with a useful heuristic usually requires a lot of domain knowledge.
* You should prefer to use Monte Carlo Tree search when you lack a good heuristic, when computational resources are limited, and when mistakes will not have outsize real-world consequences.
**More Details:**
In minimax search, we do not attempt to be very clever. We just use a standard dynamic programming approach. It is easy to figure out the value of difference moves if we're close to the end of the game (since the game will end in the next move, we don't have to look very far ahead). Similarly, if we know what our opponent will do in the last move of the game, it's easy to figure out what we should do in the second last move. Effectively we can treat the second last move as the last move of a shorter game. We can then repeat this process. Using this approach is certain to uncover the best strategies in a standard extensive-form game, but will require us to consider every possible move, which is infeasible for all but the simplest games.
Alpha-Beta pruning is a strict improvement on Minimax search. It makes use of the fact that some moves are obviously worse than others. For example, in chess, I need not consider any move that would give you the *opportunity* to put me in checkmate, even if you could do other things from that position. Once I see that a move might lead to a lose, I'm not going to bother thinking about what else might happen from that point. I'll go look at other things. This algorithm is also certain to yield the correct result, and is faster, but still must consider most of the moves in practice.
There are two common ways you can get around the extreme computational cost of solving these kinds of games exactly:
1. Use a Heuristic (A\* search is the usual algorithm for pedagogical purposes, but Quiescence search is a similar idea in 2 player games). This is just a function that gives an *estimate* of the value of a state of the game. Instead of considering all the moves in a game, you can just consider moves out to some finite distance ahead, and then use the value of the heuristic to judge the value of the states you reached. If your heuristic is consistent (essentially: if it always *overestimates* the quality of states), then this will still yield the correct answer, but with enormous speedups in practice.
2. Use Rollouts (like Monte Carlo Tree Search). Basically, instead of considering every move, run a few thousand simulated games between players acting randomly (this is faster than considering all possible moves). Assign a value to states equal to the average win rate of games starting from it. This may not yield the correct answer, but in some kinds of games, it performs reliably. It is often used as an extension of more exact techniques, rather than being used on its own.
Upvotes: 6 [selected_answer]<issue_comment>username_3: >
> So far, I have considered only three algorithms, namely, minimax, alpha-beta pruning, and Monte Carlo tree search (MCTS). Apparently, both the alpha-beta pruning and MCTS are extensions of the basic minimax algorithm.
>
>
>
**Given this context, I would recommend starting out with Minimax**. Of the three algorithms, Minimax is the easiest to understand.
**Alpha-Beta**, as others have mentioned in other answers, is a strict improvement on top of Minimax. Minimax is basically a part of the Alpha-Beta implementation, and a good understanding of Alpha-Beta requires starting out with a good understanding of Minimax anyway. If you happen to have time left after understanding and implementing Minimax, I'd recommend moving on to Alpha-Beta afterwards and building that on top of Minimax. Starting out with Alpha-Beta if you do not yet understand Minimax doesn't really make sense.
**Monte-Carlo Tree Search** is probably a bit more advanced and more complicated to really, deeply understand. In the past decade or so, MCTS really has been growing to be much more popular than the other two, so from that point of view understanding MCTS may be more "useful".
The connection between Minimax and MCTS is less direct/obvious than the connection between Minimax and Alpha-Beta, but there still is a connection at least on a conceptual level. **I'd argue that having a good understanding of Minimax first is still beneficial before diving into MCTS**; in particular, understanding Minimax and its flaws/weak points can provide useful context / help you understand why MCTS became "necessary" / popular.
---
To conclude, in my opinion:
* Alpha-Beta is strictly better than Minimax, but also strongly related / built on top of Minimax; so, start with Minimax, go for Alpha-Beta afterwards if time permits
* MCTS has different strengths/weaknesses, is often better than Alpha-Beta in "modern" problems (but not always), a good understanding of Minimax will likely be beneficial before starting to dive into MCTS
Upvotes: 3 |
2018/07/19 | 1,186 | 4,563 | <issue_start>username_0: I'm trying to create and test non-linear SVMs with various kernels (RBF, Sigmoid, Polynomial) in scikit-learn, to create a model which can classify anomalies and benign behaviors.
My dataset includes 692703 records and I use a 75/25% training/testing split. Also, I use various combinations of features whose dimensionality is between 1 and 14 features. However, the training processes of the various SVMs take much too long. Is this reasonable?
I have also examined the ensemble `BaggingClassifier` in combination with non-linear SVMs, by configuring the `n_jobs` parameter to `-1`; nevertheless, the training process proceeds again too slowly.
How can I speed up the training processes?<issue_comment>username_1: The most likely explanation is that you're using too many training examples for your SVM implementation.
SVMs are based around a [kernel function](https://en.wikipedia.org/wiki/Kernel_method). Most implementations explicitly store this as an NxN matrix of distances between the training points to avoid computing entries over and over again.
In your case, with 75% of ~693,000 examples (roughly 520,000 training points), this matrix would require on the order of 1 to 2 TB of RAM to store, which is far more than you're likely to have in consumer hardware.
If your SVM implementation can avoid caching the values, you might get a speedup that way, or you might not (you'll waste a lot of time recomputing them).
A much better way to deal with this is to just [not use all](https://stats.stackexchange.com/questions/122409/why-downsample) of the data, since most of it will be redundant from the SVM's perspective (it only benefits from having more data near the decision boundaries). A good starting place would be to randomly discard 90% of the training data, and see what performance looks like.
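The random-discard suggestion above is a one-liner. A small sketch with synthetic data standing in for the real ~520k-row training set (the array shapes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the real training set (synthetic here).
X = rng.normal(size=(10000, 14))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Keep a random 10% of the training rows, as suggested above.
keep = rng.choice(len(X), size=len(X) // 10, replace=False)
X_small, y_small = X[keep], y[keep]
print(X_small.shape)  # the kernel matrix shrinks by ~100x: (n/10)^2 entries

# The SVM is then trained on the subsample as usual, e.g. with scikit-learn:
#   from sklearn.svm import SVC
#   SVC(kernel="rbf").fit(X_small, y_small)
```

If the subsampled model performs acceptably, you can increase the kept fraction until training time becomes the binding constraint again.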
Upvotes: 3 <issue_comment>username_2: I think you should use a linear kernel, because training an SVM with a linear kernel is faster than with other kernels, especially for text classification. Good luck!
<https://www.svm-tutorial.com/2014/10/svm-linear-kernel-good-text-classification/>
Upvotes: 1 <issue_comment>username_3: Since you use various combinations of features with dimensionality between 1 and 14, a linear SVM (linear kernel) might work well for your problem. You could try the LIBLINEAR library, but the data should be linearly separable; otherwise test accuracy will be very low.
Upvotes: 0 <issue_comment>username_4: To train the SVM more quickly, you can try using a linear SVM or scaling the data.
sources: <https://www.researchgate.net/publication/2926909_A_Practical_Guide_to_Support_Vector_Classification_Chih-Wei_Hsu_Chih-Chung_Chang_and_Chih-Jen_Lin>
Upvotes: 1 <issue_comment>username_5: You can speed up the training time by doing several steps:
1. scale the values of your features
2. use only a limited number of features, because this affects the training time; i.e. when you use 14 features, your model has 14 dimensions, which makes the computation more complex and slower
3. choose a proper kernel, linear SVM kernel usually give the fastest result
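Step 1 (scaling) is cheap and often the biggest single win. A minimal sketch of feature standardization (what `sklearn.preprocessing.StandardScaler` does; implemented here directly in numpy so it is self-contained):

```python
import numpy as np

def standardize(X):
    """Scale each feature to zero mean and unit variance (step 1 above).
    Features with wildly different ranges slow down SVM convergence."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0  # avoid division by zero for constant features
    return (X - mu) / sigma

rng = np.random.default_rng(1)
# Two features on very different scales (illustrative data).
X = rng.normal(loc=[0.0, 500.0], scale=[1.0, 30.0], size=(1000, 2))
Xs = standardize(X)
print(Xs.mean(axis=0).round(6), Xs.std(axis=0).round(6))
```

Remember to compute `mu` and `sigma` on the training split only and reuse them on the test split, otherwise the test set leaks into preprocessing.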
Upvotes: 0 <issue_comment>username_6: The non-linear kernel SVMs can be slow if you have too many training samples. This is due to the fact that the algorithm creates an NxN matrix as @username_1 answered.
Now there are a few ways to speed up the non-linear kernel SVMs:
* Use the [SGDClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html?highlight=sgdclassifier#sklearn.linear_model.SGDClassifier) instead and provide proper parameters for loss, penalty etc. to make it behave like an SVM. The optimisation process is different than libsvm though.
* Use a kernel approximator like [Nystroem](https://scikit-learn.org/stable/modules/generated/sklearn.kernel_approximation.Nystroem.html#sklearn.kernel_approximation.Nystroem)
* Since you are yourself trying out feature combinations, a Linear SVM can also be good and fast :)
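To make the Nystroem suggestion concrete, here is a small self-contained sketch of the idea (this is not sklearn's implementation; landmarks are simply the first rows for reproducibility, whereas `sklearn.kernel_approximation.Nystroem` samples them randomly): build explicit features whose inner products approximate the RBF kernel, so a fast *linear* model such as `SGDClassifier` can stand in for a kernel SVM.

```python
import numpy as np

def rbf(A, B, gamma=0.1):
    """Exact RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystroem_features(X, n_components=50, gamma=0.1):
    """Nystroem sketch: Z = K(X, L) @ W^{-1/2}, where L are landmark
    points and W = K(L, L). Then Z @ Z.T approximates K(X, X)."""
    L = X[:n_components]                      # landmark points
    W = rbf(L, L, gamma)                      # m x m landmark kernel
    vals, vecs = np.linalg.eigh(W)
    vals = np.clip(vals, 1e-12, None)         # guard tiny eigenvalues
    W_inv_sqrt = (vecs * vals ** -0.5) @ vecs.T
    return rbf(X, L, gamma) @ W_inv_sqrt

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Z = nystroem_features(X)
print(Z.shape)
# On the landmark block the approximation is exact:
K_landmarks = rbf(X[:50], X[:50])
print(np.allclose(Z[:50] @ Z[:50].T, K_landmarks, atol=1e-5))
```

Training a linear classifier on `Z` then costs time linear in the number of samples, instead of the quadratic-or-worse cost of the exact kernel SVM.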
Upvotes: 0 <issue_comment>username_7: SVM scales rather badly with the number of training samples, from $O(n^2)$ to $O(n^3)$, as discussed in this answer: <https://stackoverflow.com/questions/16585465/training-complexity-of-linear-svm>.
The vanilla approach requires inversion of an $n \times n$ matrix, which takes $O(n^3)$ operations in general.
As suggested in the other answers, the most apparent way to reduce the computational and storage complexity is the reduction of number of training samples.
I am even surprised that all this data fits into memory.
Upvotes: 0 |
2018/07/20 | 2,018 | 8,782 | <issue_start>username_0: In the brain, some synapses are stimulating and some inhibiting. In the case of artificial neural networks, ReLU erases that property, since in the brain inhibition doesn't correspond to a 0 output, but, more precisely, to a negative input.
In the brain, the positive and negative potential is summed up, and, if it passed the threshold, the neuron fires.
There are 2 main non-linearities which came to my mind in the biological unit:
* the potential change is more exponential than linear: a small number of open ion channels is sufficient to start a chain reaction of further channel activations, which rapidly changes the neuron's global potential.
* the threshold of the neuron is also non-linear: the neuron fires only when the sum of its positive and negative potentials passes a given (positive) threshold
**So, is there any idea how to implement negative input to the artificial neural network?**
I gave examples of non-linearities in biological neurons because the most obvious positive/negative unit is just a linear unit. But, since it doesn't implement non-linearity, we may consider implementing non-linearities somewhere else in the artificial neuron.<issue_comment>username_1: **The Degree to Which Inhibition is in Common Use**
What could loosely be considered an inhibitory effect occurs in MLPs (multilayer perceptrons) as they are normally designed and implemented already.
The gradient descent scheme implemented within a larger back propagation algorithm can produce a parameter adjustment delta that is either positive or negative.
* A positive value decreases the attenuation of that parameter's signal path, thereby increasing the signal strength there.
* A negative value increases the attenuation of that path, thereby
decreasing signal strength through that connection.
A decrease in a parameter's value as a result of back propagation bears some similarity to the inhibition of a neural signal path, however, you may already be aware of the significant differences in the signaling between biological neurons and the signalling between layers in the type of artificial networks commonly in machine learning.
The term inhibition is, as mentioned, only loosely applicable.
* One cannot inhibit a pulse through a MLP because there ARE NO PULSES in MLPs.
* One cannot alter the signal attenuation between neurons by varying a numeric parameter either, since there is no numeric parameter array in a biological net.
Stimulation and inhibition in the brains of mammals are also different in that neuro-chemistry impacts the network regionally, so the terms stimulation and inhibition are a bit ambiguous, since we have agonists and antagonists ranging from dopamine to serotonin and from cannabanoids to oxytocin receptors and from endorphins to other classes.
**Changes from Former Textbook Themes**
The former thinking was that a pulse travelling through a biological signal path strengthened that connection. No one in neurology research adheres to that simplistic a conception today.
For example, it is known that a signal pathway may be in common use but may close down after repeated sharp pains following its use. Although I am not well trained in electro-chemical processes in neural pathways, I recall *in vitro* experiments supporting that this is neither neuro-plastic nor electrical, but related to regional chemical feedback.
The current view of addiction as a brain disease is that a breakdown of the interrelationship between chemical state change and learned inhibition or transmission is causal. Inhibition or transmission is no longer decided upon based on organism survival and socialization but on the addictive stimuli, leading to behavioral dysfunction.
It may be useful to point out that, stimulation and inhibition are not strictly antonyms. The opposite of inhibiting a signal is the transmission of it. The opposite of stimulation is the lack of stimulation (no signal).
**Attempting Analogy in Largely Dissimilar Circuit Models**
It may not be an aid to general understanding to draw parallels between ReLU activation functions and the functions of synapses with their sensitivity to regional brain chemistry and with cell-level retention functions orchestrated by organelles.
Neural nets are not neural. They are a mathematical conception sharing only the ideal of learning as convergence on some ideal network behavior. Nothing else of significance is in common.
**Adding to the Disparity Between MLPs and Biology**
In a sense inhibition in the brain occurs at multiple architectural levels, inside the cell, between cells, and over structures of cells, and the alignment of pulses temporally (in the time domain) is not simulated at all in conventional machine learning constructs.
Some researchers have deviated entirely from the multilayer perceptron design and favored a pulse based system that requires specialized hardware. Follow the money there. It is not an inexpensive research avenue yet. But it may become one if they have success.
**Curvature in Functions**
Brief note on terminology: Nth degree polynomials fall under linear algebra, so the best term to use is 'curved functions' so as to not fall into the ambiguity of the term non-linear.
Nonetheless, you are correct that there are non-linearities of different types in biological neural circuits. Potential change is not only curved, but its function's curvature changes quickly. It is temporally sensitive.
On the longer time frame, the memory in a cell forms through neural plasticity and the cell behavior changes internally (within the cell membrane) employing cytoplasm and the suspended organelles. That memory function also attenuates at a roughly inverse exponential rate with respect to time but some have hypothesized based in empirical evidence that forgotten cellular function can be recalled. Again, this is at the cellular level.
The second non-linearity is not a sum of potentials. The surface of the function that aggregates incoming signals is not flat. It is curved. Also, as mentioned, the temporal alignment presents a complexity, since perfect pulse alignment is not treated the same as pulses not perfectly aligned in time.
(I brought up the absurdity of using an additive adjustment in MLP back propagation in a question I wrote for this site. The responses to the challenge to the status quo not particularly well understood by the majority of machine learning practitioners were not outstanding.)
**Linear Thinking Prevails Currently**
To a large degree linear thinking (in the wider sense of the term) pervades mainstream machine learning and data science today, the activation functions being a notable and welcomed exception.
Over time, I expect that will improve. I see current leading edge research going beyond that linear thinking and considering short and long term memory as in the LSTM and attention based networks, the simulation of the curved surfaces that represent pulse propagation in mammalian nets and the consideration of various applications of exponential decay here and there in the latest literature.
**Gratitude for the Question**
Questions like this one may help widen the mainstream understanding too.
Upvotes: 2 <issue_comment>username_2: In biology, when the presynaptic neuron releases a neurotransmitter (a positive amount of it, obviously), the neurotransmitter reaches the postsynaptic receptors, causing an excitatory (depolarizing) or inhibitory (hyperpolarizing) effect depending on the kind of receptor on the next cell's dendrites. If the total amount of depolarization (across all dendrites) sufficiently exceeds the hyperpolarization, the neuron triggers an action potential or similar signal, continuing the chain.
In the artificial neural-net parallel, when the previous layer's activation function produces an output (say a positive one), this value is multiplied by the weights of the next layer's cell. If the weight is positive, the effect is excitatory; if the weight is negative, the effect is inhibitory.
Thus the two models are functionally equivalent (the same excitatory/inhibitory behavior is covered); just draw the analogy between the kind of postsynaptic receptor and the sign of the input weight in the artificial neuron.
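The weight-sign analogy can be shown in two lines of numpy (a toy illustration I made up; the numbers are arbitrary): the same positive presynaptic outputs either drive the postsynaptic activation up or get cancelled, depending only on the sign of the weight.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two presynaptic outputs, both positive (like released neurotransmitter).
inputs = np.array([0.8, 0.8])

# The sign of each weight decides its effect on the postsynaptic sum:
# positive weight = excitatory synapse, negative weight = inhibitory one.
excitatory_only = np.array([1.5, 1.5])
with_inhibition = np.array([1.5, -1.5])

print(sigmoid(inputs @ excitatory_only))  # excitation drives activation up
print(sigmoid(inputs @ with_inhibition))  # inhibition cancels the excitation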
Upvotes: 2 <issue_comment>username_3: [Principles of Computational Modelling in Neuroscience](http://www.biologia.buap.mx/ANTOLOGIA%20BIOFISICA%20I.pdf) by <NAME>, <NAME>, <NAME> and <NAME> discusses this in Chapter 7 (The synapse) and also in Chapter 8 (Simplified models of neurons). In chapter 8 especially, they discuss how to add excitatory or inhibitory synapses to an integrate-and-fire neuron.
There are various ways to add an inhibitory synapse: for example, subtracting from the membrane voltage or injecting a negative current.
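The negative-current route can be sketched with a minimal leaky integrate-and-fire neuron (the parameter values below are common illustrative choices, not taken from the book): inhibition is literally a negative term added to the injected current.

```python
import numpy as np

def lif(I, v_rest=-65.0, v_th=-50.0, v_reset=-65.0, tau=10.0, R=10.0, dt=0.1):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - v_rest) + R * I(t).
    Returns the number of spikes fired over the input current trace I."""
    v, spikes = v_rest, 0
    for i in I:
        v += dt / tau * (-(v - v_rest) + R * i)
        if v >= v_th:          # threshold crossed: spike and reset
            v, spikes = v_reset, spikes + 1
    return spikes

steps = 5000                    # 500 ms at dt = 0.1 ms
excite = np.full(steps, 2.0)    # excitatory drive (steady-state +20 mV)
inhib = np.full(steps, -1.0)    # inhibitory synapse = negative current

print(lif(excite))              # fires repeatedly
print(lif(excite + inhib))      # net drive +10 mV < 15 mV gap: silenced
```

Note how the inhibitory input never produces a "negative spike"; it only lowers the membrane potential so the threshold is not reached, which is exactly the asymmetry the question points at.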
Upvotes: 2 |
2018/07/20 | 1,434 | 5,783 | <issue_start>username_0: first of all I want to specify the data available and what needs to be achieved: I have a huge amount of vacancies (in the millions). The information about the **job title** and the **job description** of each vacancy are stored separately. I also have a **list of professions** (around 3000), to which the vacancies shall be mapped.
**Example**: *java-developer, java web engineer and java software developer* shall all be mapped to the profession *java engineer*.
Now about my current researches and problems: Since a lot of potential training data is present, I thought a machine learning approach could be useful. I have been reading about different algorithms and wanted to give neural networks a shot.
Very fast I faced the problem, that I couldn't find a satisfying way to **transform text of variable length to numerical vectors of constant size** (needed by neural networks). As discussed [here](https://stackoverflow.com/questions/14783431/processing-strings-of-text-for-neural-network-input), this seems to be a non trivial problem.
I dug deeper and came across [Bag of Words (BOW) and Term Frequency - Inverse Document Frequency (TFIDF)](https://skymind.ai/wiki/bagofwords-tf-idf), which seemed suitable at first glance. But here I faced other problems: If I feed all the job titles to TFIDF, the resulting word-weight vectors will probably be very large (in the tens of thousands). The search term on the other hand will mostly consist of between 1 and 5 words (we currently match the job title only). Hence, the neural network must be able to reliably map an ultra sparse input vector to one of a few thousand basic jobs. This sounds very difficult to me, and I doubt the classification quality would be good.
Another problem with BOW and TFIDF is that they (I believe) cannot handle typos or previously unseen words: such tokens are not in the TFIDF vocabulary, which results in a vector filled with zeros. To sum it up: I was first excited to use TFIDF, but now think it doesn't work well for what I want to do.
Thinking more about it, I now have doubt if neural networks or other machine learning approaches are even good solutions for this task at all. Maybe there are much better algorithms in the field of natural language processing.
This moment (before digging into NLP) I decided to first gather the opinions of some more experienced AI users, so I don't miss the best solution.
So **what would be a useful approach to this in your opinion** (best would be an approach that is capable of handling synonyms and typos)? Thanks in advance!
p. s.: I am currently thinking about **feeding the whole job description** into the TFIDF and also do matches for new incoming vacancies with the whole document (instead of job title only). This will expand the size of the word-weight-vector, but it will be less sparse. Does this seem logical to you?<issue_comment>username_1: This question has a number of parts to it.
First, you have a *representation* problem: what is the correct way to present textual data to your machine learning algorithm?
In this case, you chose to apply Bag-of-Words and then TFIDF scores. For English, this might be expected to produce on the order of 100,000 features, with each instance having only a few non-zero features.
If you want to go this route, you would typically also do some kind of [feature selection](https://en.wikipedia.org/wiki/Feature_selection) to eliminate unimportant features from consideration. Depending on your task, you may be able to reduce the size of your input vectors quite dramatically while still getting good performance (for some tasks, to just 100 or so).
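To make the BOW + TFIDF + feature-elimination pipeline concrete, here is a deliberately tiny, self-contained sketch (a toy re-implementation of what e.g. sklearn's `TfidfVectorizer` does, without its normalization details; the documents are made up):

```python
import numpy as np

def tfidf(docs):
    """Minimal bag-of-words followed by TF-IDF weighting."""
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: j for j, w in enumerate(vocab)}
    tf = np.zeros((len(docs), len(vocab)))
    for i, d in enumerate(docs):
        for w in d.split():
            tf[i, index[w]] += 1
    df = (tf > 0).sum(axis=0)          # document frequency per term
    idf = np.log(len(docs) / df)
    return tf * idf, vocab

docs = ["cheap meds now", "cheap meeting at noon", "cheap meds cheap"]
X, vocab = tfidf(docs)

# Crudest possible feature selection: drop terms occurring in every
# document, since their idf (and hence their weight) is already zero.
keep = X.any(axis=0)
print(X.shape, "->", X[:, keep].shape)
```

Real feature selection (chi-squared, mutual information, etc.) is just a better-informed version of this last step: score each column against the labels and keep the top k.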
You're right that this might not be the most promising approach however.
My choice for this problem would be to use a compression classifier, like [DMC](https://plg.uwaterloo.ca/~gvcormac/dmcspam.pdf). These have the advantage that they do not need any feature selection or pre-processing, and can easily handle new words or typos. They give state-of-the-art performance on tasks like spam-email classification.
Upvotes: 1 <issue_comment>username_2: This is a huge growth area in the impact of AI on HR -- see all the companies we've found that [do candidate matching for instance](https://cognitionx.com/directory/products?f%5Buc%5D%5B0%5D=Candidate%20Matching) (disclaimer: I work for CognitionX). Under the hood, there are techniques that don't rely on vocabulary, such as Facebook's FastText, but these need more training data.
Here are some other resources
[Job matching using unsupervised learning](https://github.com/JAIJANYANI/Automated-Resume-Screening-System) (k-nearest neighbour)
[](https://i.stack.imgur.com/1CFA9.png) [see paper](https://arxiv.org/abs/1810.04040)
Upvotes: 0 <issue_comment>username_3: This is definitely a simpler task than full NLP or machine learning; it is **keyword** based. And I think you have a slightly wrong view of it. See your example:
*Example: java-developer, java web engineer and java software developer shall all be mapped to the profession java engineer.*
Not at all: *java web* and plain *java* are **different** jobs, while *java software developer* = *java-developer* (the word *software* adds nothing, since *java* already implies software). This information cannot be mined from texts like job applications, because nothing in them links a title to its meaning. You are better off just creating the mapping by hand; for ~3000 professions that is not too long. Then simply look for the keywords in the text and ignore the other words.
Upvotes: 0 <issue_comment>username_4: I've been tackling a similar job title classification problem and used this paper as the basis for my approach: <https://web.stanford.edu/~gavish/documents/phrase_based.pdf>
Might find it useful.
Upvotes: 0 |
2018/07/22 | 3,474 | 14,125 | <issue_start>username_0: Is it possible to make a neural network that uses only integers by scaling input and output of each function to [-INT\_MAX, INT\_MAX]? Is there any drawbacks?<issue_comment>username_1: It is possible in principle, but you will end up emulating floating point arithmetic using integers in multiple places, so it is unlikely to be efficient for general use. Training is likely to be an issue.
If the output of a layer is scaled to [-INT\_MAX, INT\_MAX], then you need to multiply those values by weights (which you will also need to be integers with a large enough range that learning can progress smoothly) and sum them up, and then feed them into a non-linear function.
If you are restricting yourself to integer operations only, this will involve handling multiple integers to represent high/low words in a larger int type, that you then must scale (which introduces a multi-word integer division). By the time you have done this, it is unlikely there will be much benefit to using integer arithmetic. Although perhaps it is still possible, depending on the problem, network architecture and your target CPU/GPU etc.
There are instances of working neural networks used in computer vision [with only 8-bit floating point (downsampled after training)](https://petewarden.com/2015/05/23/why-are-eight-bits-enough-for-deep-neural-networks/). So it is definitely possible to simplify and approximate some NNs. An integer-based NN derived from a pre-trained 32-bit fp one could potentially offer good performance in certain embedded environments. I found [this experiment on a PC AMD chip](http://sharpneat.sourceforge.net/research/integer-neuralnet/integer-neuralnet.html) which showed a marginal improvement even on PC architecture. On devices without dedicated fp processing, the relative improvement should be even better.
Upvotes: 0 <issue_comment>username_2: Some people might argue we can use `int` instead of `float` in NNs, as a `float` can easily be represented as `int / k`, where `k` is a multiplying factor, say `10 ^ 9`; e.g. 0.00005 can be converted to 50000 by multiplying by `10 ^ 9`.
**From a purely theoretical viewpoint**: This is definitely possible, but it will result in a loss of precision, since `int` falls in the `INTEGER` set of numbers whereas `floats` fall in the `REAL NUMBER` set. The loss is especially severe if you were using a high-precision type such as `float64`. Real numbers have an uncountable infinity, whereas integers have a countable infinity, and there is a well-known argument called Cantor's diagonalization argument which proves this. [Here](https://www.coopertoons.com/education/diagonal/diagonalargument.html) is a beautiful illustration of the same. After understanding the difference, you will intuitively see why converting floats to ints without loss is not tenable.
**From a practical viewpoint:** The best-known activation function is the sigmoid activation (tanh is very similar). The main property of these activations is that they squash numbers to between `0 and 1` or `-1 and 1`. If you convert a floating point number to an integer by multiplying by a large factor, which almost always results in a large number, and pass it to such a function, the output will almost always be at one of the extremities (i.e. `1 or 0`).
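The squashing effect is easy to demonstrate directly (a small sketch; the rescaled weight value 50000 reuses the `10^9` example above):

```python
import math

def sigmoid(z):
    # Guard against overflow in exp() for very negative inputs.
    return 1.0 / (1.0 + math.exp(-z)) if z > -700 else 0.0

# A float like 0.00005 rescaled by 10^9 becomes 50000. Feeding such
# magnitudes into a squashing activation pins the output at the extremes,
# destroying all gradient information.
for z in (0.5, 50000, -50000):
    print(sigmoid(z))
```

Only the first value produces an output in the informative middle range; the two rescaled values saturate completely.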
Coming to algorithms: algorithms similar to backpropagation with momentum cannot run on `int`s. This is because, since you will be scaling the `int` to a large number, and momentum algorithms typically use some sort of **`momentum_factor^n`** formula, where `n` is the number of examples iterated already, you can imagine the result if **`momentum_factor = 100 and n = 10`**.
The only place where scaling might work is the `relu` activation. The problem with this approach is that the data will probably not fit such a model very well, so the errors will be relatively high.
**Finally:** All NNs do is approximate a `real valued` function. You can try to magnify this function by multiplying it by a factor, but whenever you switch from a real-valued function to integers, you are basically representing the function as a series of `steps`. Something like this happens (image for representation purposes only):
[](https://i.stack.imgur.com/1X5o2.gif)
You can clearly see the problem here: each binary number represents a step; to get better accuracy you have to increase the number of binary steps within a given length, which in your problem translates to very high values of the bounds [-INT\_MAX, INT\_MAX].
Upvotes: 2 <issue_comment>username_3: TL;DR
=====
Not only it's possible, it even gets done and is [commercially available](https://cloud.google.com/blog/big-data/2017/05/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu). It's just impractical on a commodity HW which is pretty good at FP arithmetic.
Details
=======
It is definitely possible, and you *might* get some speed-up for a lot of trouble.
The nice thing about floating point is that it works without you knowing the exact range. You can scale your inputs to the range `(-1, +1)` (such a scaling is pretty commonplace as it speeds up the convergence) and multiply them by `2**31`, so they used the range of signed 32-bit integers. That's fine.
You can't do the same to your weights as there's no limit on them. You can assume them to lie in the interval `(-128, +128)` and scale them accordingly.
If your assumption was wrong, you get an overflow and a huge negative weight where a huge positive weight should be or the other way round. In any case, a disaster.
You could check for overflow, but this is too expensive. Your arithmetic gets slower than FP.
You could *check for possible overflow from time to time* and take a corrective action. The details may get complicated.
You could use [saturation arithmetic](https://en.wikipedia.org/wiki/Saturation_arithmetic), but it's implemented only in some specialized CPUs, not in your PC.
Now, there's a multiplication. With use of 64-bit integers, it goes well, but you need to compute a sum (with a possible overflow) and scale the output back to same sane range (another problem).
*All in all, with fast FP arithmetic available, it's not worth the hassle.*
It might be a good idea for a custom chip, which could do saturation integer arithmetic with much less hardware and much faster then FP.
---
Depending on what integer types you use, there may be a precision loss when compared to the floating point, which may or may not matter. Note that TPU (used in AlphaZero) has 8-bit precision only.
Upvotes: 4 [selected_answer]<issue_comment>username_4: **Floating Point Hardware**
There are three common floating point formats used to approximate real numbers in digital arithmetic circuitry. These are defined in IEEE 754, a standard adopted in 1985 with a revision in 2008. These mappings of bit-wise layouts to real-number representations are designed into CPUs, FPUs, DSPs, and GPUs either in gate-level hardware, firmware, or libraries1.
* binary 32 has a 24 bit mantissa and an 8 bit exponent
* binary 64 has a 53 bit mantissa and an 11 bit exponent
* binary 128 has a 113 bit mantissa and a 15 bit exponent2
**Factors in Choosing Numerical Representations**
Any of these can represent signals in signal processing, and all have been experimented with in AI for various purposes related to three things:
* Value range — not a concern in ML applications where the signal is properly normalized
* Averting saturation of the signal with rounding noise — a key issue in parameter adjustment
* Time required to execute an algorithm on a given target architecture
The balance in the best designed AI is between these last two items. In the case of back-propagation in neural nets, the gradient-based signal that approximates the desired corrective action to apply to the attenuation parameters of each layer must not become saturated with rounding noise.3
**Hardware Trends Favor Floating-point**
Because of the demand of certain markets and common uses, scalar, vector, or matrix operations using these standards may, in certain cases, be faster than integer arithmetic. These markets include ...
* Closed loop control (piloting, targeting, countermeasures)
* Code breaking (Fourier, finite, convergence, fractal)
* Video rendering (movie watching, animation, gaming, VR)
* Scientific computing (particle physics, astrodynamics)
**First Degree Transforms to Integers**
On the opposing end of numerical range, one can represent signals as integers (signed) or non-negative integers (unsigned).
In this case, the transformation between the set of real numbers, vectors, matrices, cubes, and hyper-cubes in the world of calculus4 and the integers that approximate them is a first degree polynomial.
The polynomial can be represented as $n = r(as + b)$, where $a \ne 0$, $n$ is the binary number approximation, $s$ is the scalar real, and $r$ is the function that rounds real numbers to the nearest integer. This defines a super-set of the concept of fixed point arithmetic because of $b$.
Integer based calculations have also been examined experimentally for many AI applications. This gives more options:
* two's complement 16 bit integer
* 16 bit non-negative integer
* two's complement 32 bit integer
* 32 bit non-negative integer
* two's complement 64 bit integer
* 64 bit non-negative integer
* two's complement 128 bit integer
* 128 bit non-negative integer
**Example Case**
For instance, if your theory indicates the need to represent the real numbers in the range $[-\pi, \pi]$ in some algorithm, then you might represent this range as a 64 bit non-negative integer (if that works to the advantage of speed optimization for some reason that is algorithm and possibly hardware specific).
You know that $[-\pi, \pi]$ in the closed form (algebraic relation) developed from the calculus needs to be represented in the range $[0, 2^{64} - 1]$, so in $n = r(as + b)$, $a = 2^{61}$ and $b = r(2^{61}\pi)$. Choosing $a = \frac {2^{63}} {\pi}$ would use the full integer range exactly, but would likely create the need for more lost cycles in multiplication, when a simple manipulation of the base-two exponent is much more efficient.
The upper end of the range for that real number would then be 1100,1001,0000,1111,1101,1010,1010,0010,0010,0001,0110,1000,1100,0010,0011,0101 in binary, and the number of bits wasted by keeping the relationship based on powers of two will be $\log\_2 4 - \log\_2 \pi$, which is approximately 0.3485 bits. That's better than 99% conservation of information.
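The example case can be checked numerically. A small sketch (the offset here is my choice, $r(2^{61}\pi)$, which centers the interval and matches the wasted-bits figure above):

```python
import math

BITS = 64
a = 2 ** 61                  # power-of-two scale: multiplication is a shift
b = round(a * math.pi)       # offset so s = -pi maps to 0

def quantize(s):
    """First-degree map n = r(a*s) + b from [-pi, pi] into 64-bit range
    (identical to r(a*s + b) since b is already an integer)."""
    return round(a * s) + b

lo, hi = quantize(-math.pi), quantize(math.pi)
print(lo, hi < 2 ** BITS)

# Bits wasted by insisting on a power-of-two scale:
wasted = BITS - math.log2(hi - lo)
print(round(wasted, 4))      # ~0.3485, matching the figure above
```

With the exact-fill scale $2^{63}/\pi$ instead, `wasted` would be zero, at the cost of a true multiply per sample.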
**Back to the Question**
The question is a good one, and is hardware and domain dependent.
As mentioned above, hardware is continuously developing toward IEEE floating-point vector and matrix arithmetic, particularly 32- and 64-bit floating-point multiplication. For some domains and execution targets (hardware, bus architecture, firmware, and kernel), the floating-point arithmetic may grossly outperform the gains obtainable in 20th-century CPUs by applying the above first-degree polynomial transformation.
**Why the Question is Relevant**
In contrast, if the product manufacturing price, the power consumption, and PC board size and weight must be kept low to enter certain consumer, aeronautic, and industrial markets, the low cost CPUs may be demanded. By design, these smaller CPU architectures do not have DSPs, and the FPU capabilities don't usually have hardware realization of 64 bit floating point multiplication [5].
**Handling Number Ranges**
Care in normalizing signals and picking the right values for *a* and *b* is essential, as mentioned, and more so than with floating point, where the diminution of the exponent can eliminate many cases where saturation would be an issue with integers [6]. Augmentation of the exponent can avert overflow automatically too, up to a point, of course.
In either type of numeric representation, normalizing is part of what improves convergence rate and reliability anyway, so it should always be addressed.
The only way to deal with saturation in applications requiring small signals (such as with gradient descent in back-propagation, especially when the data set ranges are over an order of magnitude) is to carefully work out the mathematics to avoid it.
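As a toy illustration of the saturation issue, here is a sketch of saturating (clamping) 16 bit fixed-point arithmetic; the Q8.8-style format and the constants are assumptions for the example, not something prescribed above.

```python
# Saturating 16-bit signed arithmetic: instead of wrapping around on overflow
# (the raw two's complement behaviour), results are clamped to the
# representable range, which is usually the lesser evil for signal processing.
INT16_MIN, INT16_MAX = -2 ** 15, 2 ** 15 - 1

def sat(x: int) -> int:
    return max(INT16_MIN, min(INT16_MAX, x))

def sat_add(a: int, b: int) -> int:
    return sat(a + b)

def sat_mul(a: int, b: int, frac_bits: int = 8) -> int:
    # fixed-point multiply: the product carries 2*frac_bits fractional bits,
    # so shift back down before clamping
    return sat((a * b) >> frac_bits)

print(sat_add(30000, 10000))  # clamps at 32767 instead of wrapping negative
```

Avoiding the clamp ever firing, by normalizing inputs and choosing scales carefully, is exactly the "work out the mathematics" point made above.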
This is a science by itself, and only a few people have the scope of knowledge to handle hardware manipulation at the circuitry and assembly language level along with the linear algebra, calculus, and machine learning comprehension. The interdisciplinary skill set is rare and valuable.
---
**Notes**
[1] Libraries for low level hardware operations such as 128 bit multiplication are written in cross assembly language or in C with the -S option turned on so the developer can check the machine instructions.
[2] Unless you are calculating the number of atoms in the universe, the number of permutations in possible game-play for the game Go, the course to a landing pad in a crater on Mars, or the distance in meters to reach a potentially habitable planet revolving around Proxima Centauri, you will likely not need larger representations than the three common IEEE floating point representations.
[3] Rounding noise naturally arises when digital signals approach zero and the rounding of the least significant bit in the digital representation begins to produce chaotic noise of a magnitude that approaches that of the signal. When this happens, the signal is saturated in that noise and cannot be used to reliably communicate the signal.
[4] The closed forms (algebraic formulae) to be realized in software driven algorithms arise out of the solution of equations, usually involving differentials.
[5] Daughter boards with GPUs are often too pricey, power hungry, hot, heavy, and/or packaging unfriendly in non-terrestrial and consumer markets.
[6] The zeroing of feedback is skipped in this answer because it points to either one of two things: (A) Perfect convergence or (B) Poorly resolved mathematics.
Upvotes: 2 |
2018/07/22 | 495 | 2,145 | <issue_start>username_0: I would like to know if having a really good evaluation function is as good as using any of the extensions of alpha-beta pruning, such as killer moves or quiescence search?<issue_comment>username_1: A perfect evaluation function would mean that you only had to do a local search - i.e. maximise over the next set of decisions - in order for an agent to behave optimally in an environment.
As such, if you could somehow create that function, it would make a search with alpha-beta pruning redundant.
In practice, evaluation functions for complex environments are usually approximate, and significant improvement can be made by adding a deeper search.
Optimisations in search algorithms and improvements in evaluation function work together to make more efficient and closer-to-optimal solutions overall. An evaluation function provides global/general knowledge about the environment and goals. A tree search function provides local focus on solving a relatively small subset of the optimisation problem that is currently relevant.
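This interplay can be sketched as a depth-limited negamax search with alpha-beta pruning, where `evaluate` stands in for the evaluation function being discussed. All the hooks below are placeholders, not any particular game's implementation: with a perfect `evaluate`, depth 1 would suffice; a weaker one is compensated for by searching deeper.

```python
def alphabeta(state, depth, alpha, beta, evaluate, moves, apply_move):
    """Depth-limited negamax with alpha-beta pruning.

    `evaluate`, `moves` and `apply_move` are game-specific hooks; `evaluate`
    scores a state from the perspective of the player to move.
    """
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state)
    best = float("-inf")
    for m in ms:
        score = -alphabeta(apply_move(state, m), depth - 1, -beta, -alpha,
                           evaluate, moves, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # prune: the opponent will avoid this line anyway
    return best

# toy: a depth-2 game tree as nested lists; leaves are scores for the
# player to move at the leaf
tree = [[3, 5], [2, 9]]
value = alphabeta(
    tree, 4, float("-inf"), float("inf"),
    evaluate=lambda s: s if not isinstance(s, list) else 0,
    moves=lambda s: list(range(len(s))) if isinstance(s, list) else [],
    apply_move=lambda s, m: s[m])
print(value)  # minimax value of the toy tree
```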
Upvotes: 2 <issue_comment>username_2: To build on Neil's answer a bit, you're right that the better your evaluation function gets, the less work your optimization function will need to perform. If your evaluation function gets good enough, you won't need to search at all.
This is not just an academic idea though! It's actually fairly widely used, and has been key to solving several games.
The first example I'm aware of is Tesauro's [TD-Gammon player](http://enzodesiage.com/wp-content/uploads/2017/08/tesauro-tdgammon-1995.pdf), from 1995. Tesauro used the ideas of reinforcement learning and self-play to train a Neural Network to act as an evaluation function. TD-Gammon played with just a 2-move lookahead using the best evaluation function that was found, and was deemed better than most (all?) human expert players at the time.
More recently, AlphaGo Zero used similar techniques to master Go, but learning both an evaluation function and (separately) a function to randomize over possible moves.
Upvotes: 2 |
2018/07/23 | 951 | 4,163 | <issue_start>username_0: As it can be easily pointed out that true random numbers cannot be generated fully by programming and some random seed is required.
On the other hand, humans can easily generate any random number independently of other factors.
Does this suggest that absolute random number generation is an AI concept?<issue_comment>username_1: Such a great question. I would concur with <NAME>'s comment that humans are not great at thinking of random numbers (just think about any card trick). However, we are very good at creating randomness through our actions.
If you consider moving a computer mouse, trading in the stock market, or playing a lottery, humans are very good at creating randomness through our constructs or actions.
I would pose that maybe randomness needs to be part of an AGI to make it more able to jump out of suboptimal valleys or change its topology, rather than always sticking to a local minimum.
Upvotes: 2 <issue_comment>username_2: >
> As it can be easily pointed out that true random numbers cannot be generated fully by programming and some random seed is required.
>
>
>
This is true. In fact, it is impossible to solve using software. No software-only technique can generate randomness *without* an initial random seed or support from hardware.
This is also true for AI software. No AI design that uses deterministic software can do this - e.g. any Turing machine without a magic unexplained "random" function can be shown to remain deterministic, no matter how complex. That's because any combination of deterministic functions is deterministic. It may not be predictable without following the process exactly, it may be "chaotic" and depend critically on initial conditions, but it is 100% deterministic and repeatable.
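This determinism is easy to demonstrate with any pseudo-random generator: seed two instances identically and they produce identical "random" sequences forever.

```python
import random

# Two PRNGs seeded identically are 100% deterministic and repeatable:
# the "randomness" is entirely a function of the seed.
a, b = random.Random(42), random.Random(42)
seq_a = [a.random() for _ in range(5)]
seq_b = [b.random() for _ in range(5)]
print(seq_a == seq_b)  # True
```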
>
> On the other hand, humans can easily generate any random number independently of other factors.
>
>
>
[Typically not high quality randomness](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0041531). It is not clear *how* we make random decisions, but it is entirely possible that internally we rely effectively on noisiness from the environment or our own internal "hardware" doing something simple, such as racing decisions between neurons (timing will vary due to speed of electrical impulses, diffusion time of neuro-transmitters across synapses, etc.)
>
> Does this suggest that absolute random number generation is an AI concept?
>
>
>
I think it is an orthogonal issue. We can already produce very high quality artificial randomness - better than the quality humans use for conscious decisions (such as "choose a random number between 1 and 10"). These artificial random number generating systems are part of modern cryptography and are tested thoroughly.
Essentially "true" artificial randomness is a solved problem using hardware, and does not involve anything that has traditionally been called AI.
In reverse, AI systems often rely on stochastic functions in order to break symmetry, break ties, regularise models etc. So it does look like some kind of RNG is necessary within an artificial agent. However, even pseudo-random number generators (PRNGs) seem to be fine for this purpose. Mersenne Twister is a very common choice for generating random numbers inside neural networks for weight initialisation, dataset shuffling, dropout regularisation, or when simulating environments for RL, or taking exploratory actions. Despite the fact that it is not "true" random, a PRNG will work just fine for these purposes.
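For illustration, Python's built-in `random` module is itself a Mersenne Twister, and covers the bookkeeping uses listed above; the layer sizes and probabilities below are arbitrary.

```python
import random

# Python's `random` module uses the Mersenne Twister PRNG: pseudo-random,
# yet perfectly adequate for these ML bookkeeping tasks.
rng = random.Random(0)

# small random weight initialisation for a 4x3 layer
weights = [[rng.gauss(0, 0.01) for _ in range(3)] for _ in range(4)]

# dataset shuffling
indices = list(range(10))
rng.shuffle(indices)

# dropout mask: drop each unit with probability 0.5
dropout_mask = [rng.random() > 0.5 for _ in range(12)]
```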
---
Working definition of "true" randomness that I use: Even if you know the state of a system as accurately as modern physics allows, the output cannot be predicted with better accuracy than a fixed guess.
Human randomness already fails this test. If you ask someone to choose a number randomly between 1 and 9, you will generally have a better than 1 in 9 chance of guessing the correct value, based on statistical analysis. If we were able to take good state measurements of brains, it might be possible to predict with high accuracy - although this is unknown and not possible with current technology.
Upvotes: 4 [selected_answer] |
2018/07/30 | 2,810 | 12,131 | <issue_start>username_0: I want suggestions on literature on Reinforcement Learning algorithms that perform well with **asynchronous feedback** from the environment. What I mean by asynchronous feedback is: when an agent performs an action, it gets feedback (reward or regret) from the environment after some time, not immediately. I have only seen algorithms with immediate feedback and asynchronous updates. I don't know if literature on this problem exists. This is why I'm asking here.
My application is fraud detection in banking. My understanding is that when a fraud occurs, it takes 15-45 days for the system to flag it as such; sometimes, until the customer complains, the system doesn't know it's fraud.
How would I go about designing a real-time system using reinforcement learning to flag transactions that are fraudulent or normal?
Maybe my understanding is wrong; I'm learning on my own, so if someone could help me, I would be grateful.
The reason I'm looking at reinforcement learning instead of supervised learning is, it's hard to get ground truth data in the banking scenario. Fraudsters are always up-to-date or exceeding the state of the art in fraud detection. So I've decided that reinforcement learning would be an optimal direction to look for solutions to this problem.<issue_comment>username_1: I have been looking for a while into pretty much precisely the problem you describe (including the same application domain), but haven't been able to find much.
The most obvious, **mathematically "correct" solution** would be to simply delay your standard Reinforcement Learning update rule (of whatever algorithm you choose to implement) by 45 days; if it still wasn't reported as a fraud then, assume it was genuine. This leads to some problems though;
* Need lots of memory to store experiences that were not yet used for updates
* Learning only starts after a significant delay, in which you don't learn anything at all yet and likely therefore run a suboptimal policy for a long time
* Very slow to adapt to new strategies of the fraudsters
* What to do with people who already report fraud cases earlier, like after 10 days? Delay them for the full 45 days anyway, or trigger updates immediately (and potentially mess up the ordering in which experiences actually occurred)?
---
**A quick and dirty "solution"** is the following;
* When a transaction occurs, immediately trigger a learning update under the assumption that it was a genuine transaction (for example, with a reward of `R = +1`).
* If that transaction is later reported as a fraud, trigger an additional update (with same `(state, action)` pair), but with the negation of the reward that was previously assigned erroneously *on top of* the normal negative reward for a fraudulent case. For example, if you would normally give `R = +1` for genuines, and `R = -100` for frauds, give a reward of `R = -101` now. This reward will not correct for the previously assigned wrong reward in completely the right way (potentially wrong position in sequence of updates, discounting due to `gamma` and maybe `lambda` depending on algorithm used, etc.), but it should be somewhat close (especially if `gamma` and `lambda` are close to `1.0`).
This is certainly not ideal, has very little theoretical basis and probably breaks quite a bit of Reinforcement Learning theory, but at least it is efficient in terms of computation and memory and in my experience it works alright in practice.
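A minimal sketch of this quick-and-dirty scheme, using a simple tabular exponential-average value estimate rather than any particular full RL update; the states, actions, and constants are illustrative only.

```python
from collections import defaultdict

# Update immediately assuming genuine (R = +1); if the transaction is later
# reported fraudulent, issue a corrective update whose reward negates the
# earlier wrong one on top of the normal fraud penalty (R = -100 - 1).
ALPHA = 0.1
R_GENUINE, R_FRAUD = 1.0, -100.0
Q = defaultdict(float)  # value estimate per (state, action) pair

def update(state, action, reward):
    key = (state, action)
    Q[key] += ALPHA * (reward - Q[key])

def on_transaction(state, action):
    update(state, action, R_GENUINE)      # optimistic immediate update

def on_fraud_report(state, action):
    update(state, action, R_FRAUD - R_GENUINE)  # corrective update

on_transaction("ctx1", "approve")
on_fraud_report("ctx1", "approve")
print(Q[("ctx1", "approve")])  # pulled strongly negative after the report
```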
---
If you're using **off-policy** RL algorithms, you can use **Experience Replay** buffers (very popular in DQN-style things in Deep RL these days, but can also be used in tabular RL / RL with linear function approximation etc.). If you already have historical data generated through some non-RL policy in the past (which is typically the case in fraud detection / banking applications, they do have lots of data even if they don't always share it), you can use this to fill your experience replay buffer. In the case of the first solution (at the top of this answer), this can be used for training during the initial delay of 45 days.
Since you expect there to be concept drift though (fraudsters adapting their behaviour over time), you'll want to be careful with experience replay. Old data will become less useful.
---
A very different style of solution is to assume that you have a team of **human experts available who can investigate a very small portion of incoming transactions relatively quickly**. This tends to be true for large companies in practice ("investigating" often means a phone-call to a card holder). This enables you to generate accurate feedback for a small portion of your data more quickly, so that you can also do Reinforcement Learning with much less of a delay (albeit only on a small percentage of your experience).
You can read more about this idea in the following paper (**disclaimer:** I'm an author on it):
* <NAME>, <NAME>, <NAME>, <NAME>, and <NAME> (2018). [“Adapting to Concept Drift in Credit Card Transaction Data Streams Using Contextual Bandits and Decision Trees”](https://dennissoemers.github.io/assets/publications/SoemersCreditCardIAAI.pdf). In Thirtieth Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-18), pp. 7831-7836. AAAI Press.
Apart from that idea you might furthermore find it interesting for references to other related work, links to data you could use, etc.
---
I feel like it should be possible to **extend the existing Reinforcement Learning theory** with proper algorithms that can properly;
1. Take immediate learning steps with an assumed, default, potentially incorrect reward, and
2. Retroactively correct for previous incorrect updates if the reward turns out to be something else than previously assumed in hindsight.
I'm not aware of existing literature in which this is done though, and it certainly doesn't seem trivial; it will require starting pretty much from "first principles" (e.g., Bellman operator).
Intuitively, I also expect doing this completely correctly will always require a significant amount of memory (memory of all previous transactions of a card holder, such that state-action pairs can be re-generated if necessary). Banks likely already store that kind of data anyway for every customer, so it may not be a problem in practice.
If anyone's planning to work on this, feel free to contact me, I'll likely be happy to collaborate :D
Upvotes: 2 <issue_comment>username_2: That this question uses the word *feedback* and makes reference to more than one channel of feedback, "reward and regret," indicates a comprehension of corrective signaling. Some of the reinforcement learning literature that appears scientific lacks that understanding, so beware of that.
The temporal delay of fed back information is not unique to the case of banking fraud detection. It is central to security breach detection in general, including web site hosting and telecommunications hacking. It is also central to many other technology domains ranging from cyber-combat to chemical engineering to petroleum exploration.
Early control systems were of the PID form used in speed or direction governors. In those, temporal elements were only analyzed to avert oscillation, overshoot, and undershoot. Those are still relevant in fraud detection systems, but there are more requirements on the control system, specifically non-linearity in multiple dimensions.
Consequently, control theory has been extended more in the direction of measuring behavioral wellness. Early temporal elements in digital systems included random access memory for applications and persistent memory for programs and data. With the emergence of production ready AI, the temporal elements include acquired rules, fuzzy rule weights, convergence of network parameters corresponding to machine learning components, and other learned information.
The proof of concept in financial fraud detection is the same as for many other domains where the feedback can occur minutes, hours, days, or months after a decision was made or a signal propagated through an artificial learning network: The neural networks of higher life forms, where asynchronous adaptation extends DNA based evolutionary adaptation, pain feedback is augmented by more abstract forms of feedback. In humanoid and primate species, social satisfaction involves a specific signaling that involves neuro-compounds such as serotonin and oxytocin.
This kind of adaptation fits in asynchronicity between reflex and DNA adaptation, in the realm that ranges from Pavlov's conditioned response to the social phenomenon of commitment. The importance of these capabilities is a result of the fact that not all sensory input that provides useful feedback about a behavior exhibited by the biological or artificial control system immediately after it is exhibited.
There is some suggested reading below, and you may want to examine Bayes' Theorem and some of the software you can download in nearly every common programming language that implements what is called Naive Bayesian Categorization. It is through the mathematics of probability theory that the best causal models can be realized. What you probably want to do is learn the key elements of modeling causality with numbers FIRST and then consider how basic probabilistic causality modeling might be augmented with artificial networks.
Although <NAME> and <NAME>'s *Reinforcement learning: An introduction* (1998 MIT Press) is considered an excellent overview, the early comparative works provide a more direct path to answer questions about algorithms.
When you embark on algorithm development that involves both learning and asynchronicity, it is important to know at the onset that real time programming, such as is now used in high speed trading, is not for the faint at heart. Real time processing places two reliability centered requirements on algorithms, and they should be addressed stringently if you want a stable, low maintenance system that works.
* State-safe — In machine learning, functions that process feedback must not alter a set of interrelated parameters while in use by the forward propagation of the circuit.
* Re-entrant — In machine learning, an interrupt from an incoming signal and a change of state must not frustrate the intent of the algorithm interrupted upon its resuming.
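As a toy sketch of the state-safe requirement, a lock (or a double-buffered parameter copy) can keep feedback processing from mutating interrelated parameters while a forward pass is reading them; this is illustrative, not a prescription for any particular framework.

```python
import threading

class SafeModel:
    """Tiny linear model whose parameter updates are atomic w.r.t. reads."""

    def __init__(self, weights):
        self._weights = list(weights)
        self._lock = threading.Lock()

    def forward(self, x):
        with self._lock:                # read a consistent snapshot
            w = list(self._weights)
        return sum(wi * xi for wi, xi in zip(w, x))

    def apply_feedback(self, grads, lr=0.01):
        with self._lock:                # update ALL parameters atomically
            self._weights = [wi - lr * g
                             for wi, g in zip(self._weights, grads)]

m = SafeModel([0.5, -0.25])
y = m.forward([1.0, 2.0])
m.apply_feedback([0.1, 0.1])
```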
Regarding attacks on banking systems, there will be escalation. The countermeasures the banks take will be met by the countermeasures of the thieves. It is a game, and the banking industry is wise to employ researchers and engineers that understand that learning is feedback dependent.
You may not find the best final designs in the literature for this reason. Banks naturally employ nondisclosure agreements (NDAs) to keep attackers from gaining knowledge about defensive strategies through web searches. (If it is on the web, it is probably already hacked.)
As the researchers and engineers they employ, we are wise to use asynchronous feedback and real time learning in fraud detection systems and to seek a more informed position to stay ahead of engineers that don't value property rights for anyone but themselves.
**Suggested Literature**
[A Unified Analysis of Value-Function-Based Reinforcement-Learning Algorithms, <NAME>, <NAME>, October 27, 1998](https://www.researchgate.net/profile/Csaba_Szepesvari/publication/12723295_A_Unified_Analysis_of_Value-Function-Based_Reinforcement-Learning_Algorithms/links/09e4150cb771cd4eab000000.pdf)
[Asynchronous Methods for Deep Reinforcement Learning, <NAME> al, University of Montreal, 2016](http://proceedings.mlr.press/v48/mniha16.pdf)
[Deep Reinforcement Learning for Robotic Manipulation with
Asynchronous Off-Policy Updates, <NAME>, <NAME>, <NAME>, <NAME>, 2016](http://proceedings.mlr.press/v48/mniha16.pdf)
[Dynamic causal modelling, <NAME>, <NAME>, and <NAME>, Institute of Neurology, UK, 2003](http://web.mit.edu/swg/ImagingPubs/connectivity/Dcm_Friston.pdf)
Upvotes: 0 |
2018/07/30 | 279 | 1,137 | <issue_start>username_0: I want to implement a real-time system for image comparison (e.g. compare a face with a reference one) on an Odroid. I would like to know what are the most suitable architectures for this task. I started with methods based on triplet loss (like Facenet) but I realized that a real-time solution is not feasible. Are there good, light alternatives?<issue_comment>username_1: The problem might not be caused by your loss function. Deep learning models tend to be computationally demanding. Mobile devices are not usually prepared for handling models with high throughput. That being said, you might try to:
1. Prepare a smaller model - fewer layers, less computation
2. Mobile models optimization - Google provides some materials on optimizing Tensorflow models for mobile inference:
<https://www.tensorflow.org/mobile/prepare_models>
<https://www.tensorflow.org/mobile/optimizing>
Upvotes: 1 <issue_comment>username_2: For face ID, Apple is using Siamese Network. You may get a better idea here <https://towardsdatascience.com/one-shot-learning-face-recognition-using-siamese-neural-network-a13dcf739e>
Upvotes: 0 |
2018/07/30 | 1,996 | 8,123 | <issue_start>username_0: What are the mathematical prerequisites for understanding the core part of various algorithms involved in artificial intelligence and developing one's own algorithms?
Please, refer to some specific books.<issue_comment>username_1: **Good Mathematics Foundation**
Begin by ensuring full competency with intermediate algebra and some other foundations of calculus and discrete math, including the terminology and basic concepts within these topics.
* Infinite series
* Logical proofs
* Linear algebra and matrices
* Analytic geometry, especially the distinction between local and global extremes (minima and maxima), saddle points, and points of inflection
* Set theory
* Probability
* Statistics
**Foundations of Cybernetics**
<NAME>, Cybernetics, 1948, MIT Press, contains time series and feedback concepts with clarity and command not seen in subsequent works; it also contains an introduction to information theory beginning with Shannon's log2 formula for defining the amount of information in a bit. This is important to understand the expansion of the information entropy concept.
**Calculus**
Find a good calculus book and make sure you have clarity around key theory and application in these categories.
* Time series
* Infinite series
* Convergence — Artificial networks ideally converge to an optimum during learning.
* Partial differentials
* Jacobian and Hessian matrices
* Multivariate math
* Boundary regions
* Discrete math
Much of that is in [*Calculus*, Strang, MIT, Wellesley-Cambridge Press](https://ocw.mit.edu/ans7870/resources/Strang/Edited/Calculus/Calculus.pdf). Although the PDF is available on the web, it is basic and not particularly deep. The one in our laboratory's library is [*Intermediate Calculus*, Hurley, <NAME> & Winston, 1980](https://rads.stackoverflow.com/amzn/click/com/0030567831). It is comprehensive and in some ways better laid out than the one I have in my home library, which Princeton uses for sophomores.
Ensure you are comfortable working in spaces beyond ℝ² (beyond 2D). For instance, RNNs are often in spaces such as ℝ⁴ through ℝ⁷ because of the horizontal, vertical, pixel depth, and movie frame dimensions.
**Finite Math**
It is unfortunate that no combination of any three books I can think of has all of these.
* Directed graphs — Learn this BEFORE trees or circuits (artificial nets) because it is the superset topology of all those configurations
* Abstract symbol trees (ASTs)
* Advanced set theory
* Decision trees
* Markov chains
* Chaos theory (especially the difference between random and pseudo-random)
* Game Theory starting with <NAME> and Morgenstern's *Theory of Games and Economic Behavior*, the seminal work in that field
* Convergence in discrete systems especially the application of theory to signal saturation in integer, fixed point, or floating-point arithmetic
* Statistical means, deviations, correlation, and the more progressive concepts of entropy, relative entropy, and cross-entropy
* Curve fitting
* Convolution
* Probability especially Bayes' Theorem
* Algorithmic theory (Gödel's incompleteness theorems and Turing completeness)
**Chemistry and Neurology**
It is good to recall chemical equilibria from high school chemistry. Balance plays a key role in more sophisticated AI designs. Understanding the symbiotic relationship between generative and discriminative models in GANs will help a student further this understanding.
The control functions within biological systems remain a primary source of proofs of concept in artificial intelligence research. As researchers become more creative in imagining forms of adaptation that do not directly mimic some aspect of biology (still a distance off as of this writing) creativity may play a larger role in AI research objective formulation.
Even so, AI will probably remain a largely interdisciplinary field.
Upvotes: 5 [selected_answer]<issue_comment>username_2: As far as simple algorithms like Gradient Descent are concerned, you need to have a good grasp of partial derivatives. Especially if you want to implement neural networks.
Also most algorithms are vectorised to improve computing speed and so you need to be comfortable with matrix math. This involves being really quick and comfy with dimensions of matrices, dimensions of products, multiplication of matrices, transpose and so on.
Very rarely, you might use matrix calculus to directly arrive at optimal solutions, so a few results from this area should do.
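As a small illustration of the vectorised matrix bookkeeping and the (rarely needed) closed form, here is gradient descent on a least-squares problem; the data and learning rate are arbitrary.

```python
import numpy as np

# Vectorised gradient descent for least-squares y ≈ X w, illustrating the
# partial-derivative and matrix-shape bookkeeping discussed above.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # (n_samples, n_features)
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                          # (n_samples,)

w = np.zeros(3)
lr = 0.1
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)  # d/dw of 0.5 * mean((Xw - y)^2)
    w -= lr * grad

# The rarely-needed closed form (normal equations) for comparison:
w_closed = np.linalg.solve(X.T @ X, X.T @ y)
print(w, w_closed)
```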
Moving on, you need to understand some function analysis. This is needed to get an intuition for what activation functions like sigmoid, tanh, and log are doing.
A grasp of probability and expectations is also really useful.
You should also be clear with orthogonal vectors and inner products.
That being said, I would suggest you grasp basic calculus and matrix operations and try learning AI concepts. If you can't figure something out, explore the math.
Note: again this is only for starting.
Upvotes: 2 <issue_comment>username_3: I work as a professor, and recently designed the mathematics requirements for a new AI major, in consultation with many of my colleagues at other institutions.
The other answers, particularly [this one](https://ai.stackexchange.com/a/7353/2444) do a good job of cataloging all the specific topics that might be useful *somewhere* in AI, but not all of them are equally useful for understanding core topics. In other cases, understanding the topic is essentially the same as understanding the related AI algorithms, so we usually just teach them together instead of assuming prerequisite knowledge. For instance, *Markov Decision Processes* aren't hard to teach to someone who already knows the basics of *graph theory* and *probabilities*, so we usually just cover them when we teach *reinforcement learning* in an AI course, rather than as a separate topic in a mathematics course.
The mathematics requirements we settled on look like:
* A one or two semester course in discrete mathematics. This is as much to establish comfort with proof and mathematical rigor as with any specific topic in the area. It's mostly just "foundational" knowledge, but bits of it turn out to be very useful. Comfort with infinite summations, the basics of graphs, combinatorics, and asymptotic analysis are perhaps the most directly applicable parts. I like [<NAME>'s book](https://rads.stackoverflow.com/amzn/click/com/0495391328).
* A one or two semester course in linear algebra, which is useful across a wide variety of topics in AI, especially machine learning and data mining. [Lay & Lay](https://rads.stackoverflow.com/amzn/click/com/032198238X) is an okay book, but probably not the absolute best. [Shilov](https://rads.stackoverflow.com/amzn/click/com/048663518X) is a recommendation from Ian Goodfellow and others, but I've not tried it myself.
* A course in probability, and possibly a modern course in statistics (i.e. with a Bayesian focus). An older course in statistics, or one targeting social scientists, is not very useful though. My statistician colleagues are using [Lock5](http://www.lock5stat.com/) right now, and having good experiences with it.
* At least differential and integral calculus, and preferably at least partial derivatives in vector calculus, but perhaps the whole course. This is useful in optimization, machine learning, and economics-based approaches to AI. [Stewart](https://www.amazon.ca/Calculus-Early-Transcendentals-James-Stewart/dp/1285741552/ref=sr_1_1?s=books&ie=UTF8&qid=1533150713&sr=1-1&keywords=stewart%20calculus%208th%20edition) is the most common textbook. It's comprehensive and can be used for all three courses, but its explanations aren't always the very best. I'd still recommend it though.
Those are the core topics. If you don't *also* have a traditional background in programming, then a course in graph theory and the basics of asymptotic complexity or algorithm design and analysis might be good supplements. Usually AI'ers come from a standard computer science background though, which covers all those things very well.
Upvotes: 3 |
2018/07/31 | 966 | 3,615 | <issue_start>username_0: I'm now learning about reinforcement learning, but I just found the word "trajectory" in [this answer](https://datascience.stackexchange.com/a/24924/8432).
However, I'm not sure what it means. I read a few books on Reinforcement Learning, but none of them mentioned it. Usually, these introductory books mention agent, environment, action, policy, and reward, but not "trajectory".
So, what does it mean?
According to [this answer](https://www.quora.com/In-the-context-of-reinforcement-learning-what-is-the-difference-between-a-trajectory-and-a-policy-Also-what-is-the-difference-between-trajectory-optimization-and-policy-optimization) over Quora:
>
> In reinforcement learning terminology, a trajectory $\tau$ is the path of the agent through the state space up until the horizon $H$. The goal of an on-policy algorithm is to maximize the expected reward of the agent over trajectories.
>
>
>
Does it mean that the "trajectory" is the total path from the current state the agent is in to the final state (terminal state) that the episode finishes at? Or is it something else? (I'm not sure what the "horizon" means, either.)<issue_comment>username_1: In the answer that you linked, I may have used an informal definition of "trajectory", but it means essentially the same thing as the quote. A "trajectory" is the sequence of what has happened (in terms of state, action, reward) over a set of contiguous time steps, from a single episode, or a single part of a continuous problem.
So $(s\_3, a\_3, r\_4, s\_4, a\_4, r\_5, s\_5, a\_5, r\_6, s\_6)$ taken from any scenario where an agent was used in the problem environment would be a trajectory - at least as I intended it in the answer. This could be from real-world data, or a simulation. It could involve a totally random or untrained agent, or a fully-optimised policy.
In the other definition that you have found, the focus on states and a *horizon* could make it slightly different, but actually I suspect that it is the same thing, as it is not that useful to only know the states. The Quora answer is probably just using "path of the agent through the state space" as shorthand to describe the same data.
A "horizon" in reinforcement learning is a future point relative to a time step, beyond which you do not care about reward (so you sum the rewards from time $t$ to $t+H$). Fixed horizons can be used as an alternative to a discount factor for limiting sums of reward in continuous problems. They may also be used in other approaches, but basically mean the same thing - a time step beyond which you don't account for what happens.
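To make both terms concrete, here is a small sketch in plain Python (all values are made up for illustration): a trajectory stored as a list of `(state, action, reward)` tuples, matching the sequence $(s\_3, a\_3, r\_4, s\_4, \dots)$ above, and a return that is summed only up to a horizon:

```python
# A trajectory as a list of (state, action, reward) tuples,
# matching the sequence (s3, a3, r4, s4, a4, r5, s5, a5, r6) above.
# All values here are made up for illustration.
trajectory = [
    ("s3", "a3", 4.0),  # taking a3 in s3 yielded reward r4 = 4.0
    ("s4", "a4", 5.0),  # r5
    ("s5", "a5", 6.0),  # r6
]

def horizon_return(trajectory, t, horizon):
    """Sum rewards from step t up to (but not beyond) t + horizon."""
    return sum(r for (_, _, r) in trajectory[t:t + horizon])

print(horizon_return(trajectory, 0, 2))   # 4.0 + 5.0 = 9.0
print(horizon_return(trajectory, 0, 99))  # a horizon past the end just sums everything
```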
Upvotes: 5 [selected_answer]<issue_comment>username_2: Neil's answer is good, but I observe that, strictly translating the following statement from the Quora answer,
>
> In reinforcement learning terminology, a trajectory $\tau$ is the path of the agent through the state space up until the horizon
>
>
>
we get $\tau = \{ s\_{t} \}\_{t \in [t\_{0}, t\_{H}]} \quad s\_{t} \in \mathcal{S}$ with
* $\mathcal{S}$ the State Space
* $t\_{0}$ initial time
* $t\_{H} > t\_{0}$ time associated to a certain event $H$
So, according to the author of the Quora answer, it should just be a temporal sequence of states (no actions or rewards).
Upvotes: 2 <issue_comment>username_3: A trajectory is just a **sequence of states and actions**. In RL, the goal is to maximize the reward by finding the right trajectories.
$$
\operatorname{max}\_\tau R(\tau)
$$
This means maximizing not the immediate reward (caused by one action from a state), but the cumulative reward (over all states and actions, i.e. the whole trajectory).
Upvotes: 2 |
2018/07/31 | 918 | 3,616 | <issue_start>username_0: In a project using a neural network with an input layer, 4 hidden layers and an output layer, I used mini-batch gradient descent. I noticed that the randomly initialised weights seemed to perform well and gave a low error. As the model started training, after about 200 iterations there was a large jump in error, and then it came down slowly from there. I have also noticed that sometimes the cost just increases over a set of consecutive iterations.
Can anyone explain why this happens? It is not as if there are outliers or a new distribution, since every iteration exposes the model to the entire dataset.
I used a learning rate of 0.01 and a regularisation parameter of 10. I also tried regularisation parameters of 5 and 1.
And by the cost I mean the sum of squared errors over all minibatches divided by 2m, plus the regularisation term.
Further, if this happens and my cost after, say, the 10000th iteration is higher than my cost when I initialised with random weights (lol), can I just take the initial values? Those weights seem to be doing better.
The large jumps are the most puzzling.
This is the code
Any help would be greatly appreciated. Thanks |
2018/07/31 | 454 | 1,788 | <issue_start>username_0: Does an AI exist that can automatically write software based on a formal specification of the software?<issue_comment>username_1: I think that the answer to your question is yes. In the article [New A.I. application can write its own code](https://www.futurity.org/artificial-intelligence-bayou-coding-1740702/), the authors state
>
> Computer scientists have created a deep-learning, software-coding application that can help human programmers navigate the growing multitude of often-undocumented application programming interfaces, or APIs.
>
>
> Designing applications that can program computers is a long-sought grail of the branch of computer science called artificial intelligence (AI). The new application, called Bayou, came out of an initiative aimed at extracting knowledge from online source code repositories like GitHub. Users can try it out at askbayou.com.
>
>
>
The paper [Neural Sketch Learning for Conditional Program Generation](https://arxiv.org/pdf/1703.05698.pdf) may also be useful.
Upvotes: 3 <issue_comment>username_2: There's Neural Program Synthesis, which can be used to generate a piece of code. Please, have a look at the article [Neural Program Synthesis](https://www.microsoft.com/en-us/research/project/neural-program-synthesis/) by Microsoft for an overview of the field.
Upvotes: 3 <issue_comment>username_3: The other answers cover modern work on this, but it's not even a new topic!
[Koza's work](http://www.genetic-programming.org/gpbook1toc.html) in Genetic Programming (1992) led to whole sub-fields doing this. The techniques are widely used, robust, and well understood. They're just very computationally expensive. Enough so that most of the time you're better off just hiring a programmer to do it.
Upvotes: 2 |
2018/08/01 | 438 | 1,929 | <issue_start>username_0: Is it possible to build a neural network that learns the connection between two images?
Let's say I have a number of X images that are related to Y images. How can I build a neural network that takes an image as input and outputs (generates) the output image?
The Y images are generated by applying some function to the X images.
Do I need a generative neural network for that? Are conventional neural networks capable of classification only?<issue_comment>username_1: It is possible to have both input and output be images that differ in a predictable way. For example, architectures similar to autoencoders have been used to remove blur, change weather conditions, change between day and night photos, etc. In these architectures, the training data is matching pairs of images. If your goal is to replicate some image enhancement, then often the input is artificially processed, e.g. to reduce its quality in a hard-to-reverse way. A good example of this would be to remove distortion or noise from an image.
You can also use generative models. These are harder to get working, but can be more flexible in that you don't need image pairs in order to train, just a set of images labelled with the traits that you want to learn. Converting an image using a generative model involves using an encoder stage to get its embedding, altering the embedding based on label you require and then feeding the new embedding into the decoder stage. This is how you might alter a face portrait from male to female, or young to old, because it is not possible to find good natural image pairs for that task.
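The embedding-editing step described above can be sketched abstractly. In this illustrative snippet the encoder, decoder, and trait direction are placeholder linear maps (a trained generative model would learn all three):

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder linear encoder/decoder standing in for trained networks.
E = rng.normal(size=(8, 3))           # "image" (8 pixels) -> embedding (3 dims)
D = np.linalg.pinv(E)                 # decoder: pseudo-inverse, embedding -> image

def encode(x):
    return x @ E

def decode(z):
    return z @ D

# A direction in embedding space associated with some labelled trait
# (e.g. "day -> night"); here just a made-up unit vector.
trait_direction = np.array([1.0, 0.0, 0.0])

x = rng.normal(size=8)                # a made-up input "image"
z = encode(x)
z_edited = z + 2.0 * trait_direction  # move the embedding along the trait
x_edited = decode(z_edited)           # decode the shifted embedding back

print(np.round(z_edited - z, 2))      # the edit only moved the first coordinate
```

The point is that the edit happens entirely in embedding space; the decoder then turns the shifted embedding back into an image.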
Upvotes: 2 <issue_comment>username_2: Sure, for example for removing visual noise.
Look up Noise2Noise by NVIDIA. They made a net that is capable of removing almost all noise from a single picture.
<https://hothardware.com/news/nvidia-noise2noise-machine-learning-ai-magically-restores-your-grainy-photos>
Upvotes: 1 |
2018/08/01 | 454 | 1,789 | <issue_start>username_0: I need to train a convolutional neural network to classify snake images. The problem is that I have only a small number of images available for some snake types.
So, what is the best approach to train a neural network for image classification using a small data set?<issue_comment>username_1: Use Fine Tuning
---------------
You can simply use a pre-trained model on [ImageNet](https://en.wikipedia.org/wiki/ImageNet), as this data set has multiple snake classes.
Then you can fine tune the model with your own small data set and outputs.
See this for further understanding :
[Fine Tuning in Keras](https://flyyufelix.github.io/2016/10/03/fine-tuning-in-keras-part1.html)
(if you don't use Keras, there are other tutorials on the internet using other machine learning frameworks)
The idea is just to remove the last layer (1000 outputs if you use a model pre-trained with ImageNet) and add a layer of your choice with random weights and a custom number of outputs (the number of your classes).
Then you retrain your network; in general, only the last layers are retrained (as the first layers capture more general features).
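As a sketch of the underlying idea (plain numpy with random placeholder data, not actual Keras code; the frozen random projection stands in for a pre-trained convolutional base):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained base: a fixed random projection
# plus ReLU. Its weights are never updated ("frozen").
W_frozen = rng.normal(size=(64, 16))
X = rng.normal(size=(100, 64))            # made-up "snake image" data set
F = np.maximum(X @ W_frozen, 0.0)         # extract features once
F = (F - F.mean(axis=0)) / F.std(axis=0)  # standardise the frozen features

# Made-up labels that are learnable from the frozen features.
true_w = rng.normal(size=16)
y = (F @ true_w > 0).astype(float)

# New classification head, trained from scratch with plain gradient descent.
W_head = np.zeros(16)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-np.clip(F @ W_head, -30.0, 30.0)))  # sigmoid output
    W_head -= 0.5 * F.T @ (p - y) / len(y)                        # only the head moves

p = 1.0 / (1.0 + np.exp(-np.clip(F @ W_head, -30.0, 30.0)))
accuracy = float(np.mean((p > 0.5) == (y > 0.5)))
print(accuracy)
```

Only `W_head` is ever updated, which is exactly what freezing the base and retraining the last layer amounts to.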
Upvotes: 2 <issue_comment>username_2: Besides using the transfer learning described in the other answer, you should consider using a siamese network.
This type of network is used in cases when one does not possess many examples of the objects one wants to distinguish. The general idea is that instead of "telling" the network "This is a cobra", you provide information like: "This is a cobra, and that is a rattlesnake; learn the difference".
There is a whole subject dedicated to your problem, and it is called one-shot learning.
Take a look at this tutorial:
<https://hackernoon.com/one-shot-learning-with-siamese-networks-in-pytorch-8ddaab10340e>
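To make the "learn the difference" idea concrete, here is a minimal numpy sketch of the contrastive loss commonly used to train siamese networks; the embedding vectors below are made up, standing in for the outputs of the twin networks:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_class, margin=1.0):
    """same_class = 1 pulls a pair together; 0 pushes it at least `margin` apart."""
    d = np.linalg.norm(emb_a - emb_b)
    return same_class * d ** 2 + (1 - same_class) * max(0.0, margin - d) ** 2

# Made-up embeddings for three snake images.
cobra_1 = np.array([0.1, 0.9])
cobra_2 = np.array([0.2, 0.8])
rattler = np.array([0.9, 0.1])

print(contrastive_loss(cobra_1, cobra_2, same_class=1))  # small: same snake, close embeddings
print(contrastive_loss(cobra_1, rattler, same_class=0))  # zero: different snakes, already far apart
```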
Upvotes: 2 |
2018/08/02 | 1,931 | 7,929 | <issue_start>username_0: Can we make a chatbot that really "understands" (rather than just replies to) questions based on the database/options of replies that it has? I mean, can it come up with correct/non-stupid replies/communications that don't exist in its database?
For example, can we make it understand the words "but", "if", and so on? So, whenever it gets a question/order, it "understands" it based on "understanding". Like the movie [Her](https://en.wikipedia.org/wiki/Her_(film)), if you have watched it.
And all of this without using too much code, just the basics to "wake it up" and let it learn from YouTube videos and Reddit comments and other similar data sources.<issue_comment>username_1: **Of course you can** (read through to the end). You just need to teach it how it is taught to a baby. But first, you need to create the baby's brain. So you need to build a brain that learns from videos, poll videos, and not just understands but practices and understands other people's reactions.
Sorry, but that's not enough. You would have the same work as God (if it really does exist and took this job). You would have to raise a baby so he could grow up. If we can raise a baby in code, it will learn much faster than a human.
I've been studying and searching since creating this "baby." I've made some babies, but none of them have been good enough. But it's what I chose to do with my life (one of the things). So I'm still raising a baby; from time to time it gets smarter.
It's easy to build a robot that goes into reddit to read and extract feeling from what people write. You can watch YouTube videos and differentiate objects, humans, colors, etc. But that is what we would codify for this "baby" to do.
Perhaps the first step would be to rebuild a brain through code. We are already creating some pieces; we have been studying synapses, neural networks, etc. for many years. But there is still a whole brain that we do not understand. And I'm talking about the human (biological) brain.
When I say you can, it's an incentive. I told myself I can. I'm going down this road. If I really can, I do not know.
One tip I give you: Google is far from successful. But you are trying and reaching for it. That's enough, right?
Upvotes: 0 <issue_comment>username_2: Defining what it means to understand something is a complex philosophical question, with answers that can split the AI community into different camps.
Clearly, an algorithm that associates the ASCII characters of a word like "if" with a set of numbers, based on statistics of where it appears in a corpus of reference texts, is missing the essence of the subjective experience that you or I might feel when reading it.
The related terms you should explore are <https://en.m.wikipedia.org/wiki/Qualia> and <https://en.m.wikipedia.org/wiki/Chinese_room>, which explore subjective experience and whether an artificial system can possess it.
With current knowledge of how our own minds create understanding, it is very hard to tell what is required. It may just be multi modal learning, so that words are associated with sensory experience. Experiments with virtual or real robots that experience an environment and need to communicate about it are one way to explore the subject.
In short, what it means to understand something, whether it is possible to replicate artificially, and whether it is an important trait of an AGI, are all open questions at the cutting edge of AI research.
Upvotes: 2 <issue_comment>username_3: This question has been studied academically for decades, and is really an extension of the work on Philosophy of Mind that was done in the two or three centuries before that.
A good resource is [Mind Design II](https://mitpress.mit.edu/books/mind-design-ii), though it's getting a little bit old now.
The modern schools of thought are:
1. **Cognitivism**. This is in decline, but was extremely popular in the 70's and 80's, and still fairly widespread in one form or another in the AI research community. It says that human brains really are just computers, and that they're probably running a sort of symbolic reasoning algorithm like [unification](https://en.wikipedia.org/wiki/Unification_(computer_science)) (although I think it's hard to find anyone who really thinks it's unification anymore). This is the idea underpinning work like [SOAR](https://soar.eecs.umich.edu/). The main bottleneck, as Dreyfus pointed out in the 1970's, is that you need to write down all the facts about something for a machine to "understand" it. "All the facts" rapidly turns into an infinite number for anything more complex than the smallest "microworlds" that you could deploy an AI program in. Searle also proposed his Chinese Room argument in response to this group, but it holds for Connectionist approaches as well (more on that later...).
2. **Connectionism** The connectionists hold that the complexity of our brains comes from massively parallel computation consisting of messages passed between billions of neurons in our heads. They think the correct approach to general AI is likely to involve simulations of similar architectures. It turns out that many of the things that are incredibly hard for Cognitivist projects (e.g. vision) are easy to solve with these approaches. The main criticisms from Cognitivists are that we don't have a very good idea of what these things are doing, so the claim that they help us understand intelligence is false, and that using them to solve practical problems might be dangerous. These are both somewhat fair, in my view. Older Cognitivist arguments, put forth most elegantly by Jerry Fodor, have now been discredited. Fodor argued that properties like language could never be understood as statistical artifacts of parallel computation, but he was wrong: all the best computational systems for language are now connectionist, and no one's ever made a cognitivist one that's even half as convincing. This is the dominant paradigm behind most of the modern advances in the field. [Hinton's](http://www.cs.toronto.edu/~hinton/) work forms the basis of most of the recent advances.
3. **Dynamics** Searle's argument was rooted in the idea that mapping inputs to outputs *couldn't* be what's happening in our heads, and that such a system couldn't be called intelligent. This also seems to be an implicit assumption in your question. The Dynamicists believe a variety of things, but I'd characterize them as collectively rejecting this idea. Authors like Paul Churchland argue that Searle's argument is rooted in a sort of pre-enlightenment "folk psychology". It's a bit like the theories that predated modern chemistry. Everyone was sure that fire was a substance that lived inside wood. If you heated up the wood properly, it could get out of the wood, making more heat. On the surface this seems pretty reasonable, but of course, it's wrong: The fire is actually a mixture of the wood with oxygen in the air, forming a new gas. There's no fire inside the wood. Similarly, Churchland would argue that there's no "Consciousness" inside us, allowing us to control our actions in the way we popularly imagine. Subjective experience is more likely to be "along for the ride", and entirely or mostly separate from the intelligent behaviors we observe. Some researchers think it could be described by a sub-system that maps observations of what the rest of the brain does into "stories" for the rest of the brain to receive as a sort of summary digest. Active research in this area tends to focus on things like the insect metaphor, and the interaction of the machine with its environment. It was fairly popular in the 1990's, but the phenomenal success of connectionist approaches in the 2000's has led to its decline. Probably the best known experiments were the work of [Rodney Brooks](https://people.csail.mit.edu/brooks/).
Upvotes: 3 [selected_answer] |
2018/08/02 | 1,524 | 5,417 | <issue_start>username_0: I'm struggling to understand the difference between actor-critic and advantage actor-critic.
At least, I know they are different from asynchronous advantage actor-critic (A3C), as A3C adds an asynchronous mechanism that uses multiple worker agents, each interacting with its own copy of the environment and reporting the gradients to a global agent.
But what is the difference between the actor-critic and advantage actor-critic (A2C)? Is it simply with or without *advantage* function? But, then, does the actor-critic have any other implementation except for the use of advantage function?
Or maybe are they synonyms and actor-critic is just a shorthand for A2C?<issue_comment>username_1: Actor-Critic is not just a single algorithm, it should be viewed as a "family" of related techniques. They're all techniques based on the policy gradient theorem, which train some form of critic that computes some form of value estimate to plug into the update rule as a lower-variance replacement for the returns at the end of an episode. They all perform "bootstrapping" by using some sort of prediction of value.
**Advantage Actor-Critic** specifically uses estimates of the advantage function $A(s, a) = Q(s, a) - V(s)$ for its bootstrapping, whereas "actor-critic" without the "advantage" qualifier is not specific; it could be a trained $V(s)$ function, it could be some sort of estimate of $Q(s, a)$, it could be a variety of things.
In practice, the critic of Advantage Actor-Critic methods actually can just be trained to predict $V(s)$. Combined with an empirically observed reward $r$, they can then compute the advantage estimate $A(s, a) = r + \gamma V(s') - V(s)$.
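As a tiny sketch of that last computation (the critic values below are made-up numbers standing in for a trained $V$ network):

```python
def advantage(r, v_s, v_next, gamma=0.99):
    """One-step advantage estimate: A(s, a) = r + gamma * V(s') - V(s)."""
    return r + gamma * v_next - v_s

# The action earned reward 1.0 and led to a state the critic values at 2.5,
# more than the 2.0 it predicted for the current state: positive advantage.
print(advantage(r=1.0, v_s=2.0, v_next=2.5))  # 1.0 + 0.99 * 2.5 - 2.0 = 1.475
```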
Upvotes: 5 [selected_answer]<issue_comment>username_2: According to Sutton and Barto, they are the same thing. Note 13.5-6 (page 338) of their *Reinforcement Learning: An Introduction, 2nd Edition* book:
>
> Actor-critic methods are sometimes referred to as advantage actor-critic methods in the literature
>
>
>
Upvotes: 2 <issue_comment>username_3: Though the word "Advantage" in the actor-critic realm has been used to refer to the difference between the state value and the state-action value, A2C brings in the ideas of A3C. In A3C, several worker networks interact with different copies of the environment (asynchronous learning) and update a master network after a set number of steps. This was meant to solve instability issues associated with both the temporal-difference update method and the correlations within neural-network-generated prediction and target values. However, it was noticed by OpenAI that there was no need for the asynchrony, i.e. there was no practical benefit to having different worker networks. Instead, they had the same copy of the network interact with different copies of the environment (one working from the beginning, another working backwards from the end) and update all at once, without a master lagging behind as in A3C. The removal of asynchrony gave rise to A2C.
Upvotes: 0 <issue_comment>username_4: So, this is the formula for updating the weights $\theta$ of your policy network using the policy gradient theorem:
$$ \nabla\_\theta J(\theta) = E\_{a, s \sim \pi\_\theta} \Big[ \nabla \log \pi\_\theta(a|s) R(s, a) \Big]. $$
Obviously your policy network is the actor. The question is how do you evaluate $R$.
You could do a simple Monte-Carlo estimate:
$$ R(s\_t, a\_t) = \sum\_{i=t}^{T} r\_{i+1} $$
This is **not** an actor-critic algorithm.
Or, you could do for example a one-step bootstrap using a value network with weights $\phi$:
$$ R(s\_t, a\_t) = r\_{t+1} + V\_\phi(s\_{t+1}) $$
This is an actor-critic algorithm. In this case, the value network is the critic. It essentially assigns a score to each of the actions taken by the actor.
Or, you could learn a q-value network with weights $\psi$ and use that:
$$ R(s\_t, a\_t) = Q\_\psi(s\_t, a\_t) $$
This is again an actor-critic algorithm and your q-value network is the critic. Same reasoning as above.
So, actor-critic algorithms are a set of algorithms where you have an actor (the policy network) that selects actions for the rollout and a critic (a second model) that is used to compute the returns.
But instead of using the return, you could estimate the policy gradient using the advantage:
$$ \nabla\_\theta J(\theta) = E\_{a, s \sim \pi\_\theta} \Big[ \nabla \log \pi\_\theta(a|s) A(s, a) \Big], $$
where $A(s,a) = R(s,a) - V\_\phi(s)$. Usually you will have a value network for estimating the value of the state.
Now, for the Monte-Carlo estimate $A(s\_t, a\_t) = \sum\_{i=t}^{T} r\_{i+1} - V\_\phi(s\_t)$ you get what is known as "policy gradient with baseline" - not an actor-critic algorithm. In the other two cases:
* one-step bootstrap $A(s\_t, a\_t) = r\_{t+1} + V\_\phi(s\_{t+1}) - V\_\phi(s\_{t})$,
* q-value network $A(s\_t, a\_t) = Q\_\psi(s\_t, a\_t) - V\_\phi(s\_t)$,
you get an Advantage Actor-Critic algorithm.
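The difference between these estimators is easy to see in code; a minimal sketch with made-up rewards and critic values (discounting omitted, as in the formulas above):

```python
def mc_return(rewards, t):
    """Monte-Carlo estimate: sum every reward from step t to the end."""
    return sum(rewards[t:])

def bootstrap_return(rewards, t, v):
    """One-step bootstrap: observed reward plus critic's value of the next state."""
    return rewards[t] + v[t + 1]

def advantage(rewards, t, v):
    """One-step advantage: bootstrap return minus the current state's value."""
    return rewards[t] + v[t + 1] - v[t]

rewards = [1.0, 0.0, 2.0]   # made-up rewards r_{t+1} for t = 0, 1, 2
v = [1.5, 2.0, 1.0, 0.0]    # made-up critic values V(s_t), terminal state worth 0

print(mc_return(rewards, 0))            # 3.0
print(bootstrap_return(rewards, 0, v))  # 1.0 + 2.0 = 3.0
print(advantage(rewards, 0, v))         # 1.0 + 2.0 - 1.5 = 1.5
```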
You could also use a neural network that models the advantage directly:
$ A(s\_t, a\_t) = A\_\zeta(s\_t, a\_t)$. Obviously, you also get an advantage actor-critic algorithm.
So, advantage actor-critic algorithms are a set of algorithms where again you have an actor and a critic, but now you estimate the gradient using the advantage instead of the return.
I recommend reading this [blog post](https://username_4.github.io/posts/actor-critic/).
Upvotes: 0 |
2018/08/02 | 1,137 | 4,717 | <issue_start>username_0: What is a support vector machine (SVM)? Is an SVM a kind of neural network, meaning it has nodes and weights, etc.? What is it best used for?
Where can I find information about these?<issue_comment>username_1: I find that the chapter on machine learning from [Russell & Norvig](https://rads.stackoverflow.com/amzn/click/0136042597) is a pretty good place to start with SVMs. I think this is Chapter 18?
One way to understand an SVM is as a kind of neural network, but this is not usually an intuitive approach for a beginner (unless your NN knowledge is already quite good).
A better way to understand SVMs is as consisting of three simple ideas rolled into one algorithm. Here's an attempt at a "For Dummies" answer though:
1. **Maximum Margin Classification.** SVMs are usually used to find a pattern in a set of data. Often, the data allow an infinite set of possible patterns that are all equally descriptive. For example, maybe the relationship is "Lives within 5 miles of a Coast -> Income High". It's easy to imagine that this pattern is just as good as "Lives within 5.0001 miles of a Coast -> Income High" or "Lives within 4.999 miles of a Coast -> Income High". There might actually be a lot more play than that in the data (e.g. 3 miles might work out too). If all these are equally good, then the maximum margin idea says you should pick the one that's "in the middle" of the data. So maybe all values between 4.8 and 5.5 are equally good. In that case, we might pick 5.15 (in the middle). This example is super simplified. Real-world data would have a lot more variables, and the idea of "in the middle" ends up being a little more complex, but this is the intuition. It turns out that finding the maximum margin pattern is easy when the patterns are *linear*. That is, when they can be represented by drawing straight lines through a plot of the dataset.
2. **Projection into higher dimensions.** This one needs a bit of math to visualize. Consider a dataset consisting of a circular pattern (for instance, maybe the pattern is that higher incomes are found in the middle of the city). There is no *linear* relationship that captures this pattern. That is, you can't draw a straight line through the data, and say something meaningful about all the values on one side or the other. However, if you add a new feature to your data that is the square of the original coordinates, it's easy to find such a pattern. Basically, if you pre-compute "circular" functions of the original data, you can add them to the dataset, and then find a pattern that is a linear function of this new feature. This idea generalizes: if you compute a complex enough function of your original data, and then apply the maximum margin idea, you can learn any pattern you like. The problem is that it's slow: adding more features makes it take longer to find the patterns you want.
3. **The Kernel Trick**. The thing that made SVMs useful was the kernel trick: finding the maximum margin didn't depend on anything except the product of the coordinates of the various points. It turned out that this product could be computed first, and then run through certain functions to produce a problem that was *identical* to the one you'd get by first adding extra features and then doing the multiplication. However, computing the problem this way didn't require adding any new features! This made SVMs one of the first reliable, well understood, and fast methods for finding non-linear patterns in data.
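The kernel trick in point 3 can be checked numerically: for 2-D points, the degree-2 polynomial kernel $(x \cdot z + 1)^2$ computed on the raw data equals the dot product of the explicitly expanded 6-dimensional feature vectors. A small numpy sketch:

```python
import numpy as np

def phi(x):
    """Explicit degree-2 feature map for a 2-D point (x1, x2)."""
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, np.sqrt(2) * x1 * x2, x2 ** 2])

def poly_kernel(x, z):
    """Same quantity, computed without ever building the 6-D features."""
    return (np.dot(x, z) + 1.0) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

print(poly_kernel(x, z))  # (4 + 1)^2 = 25.0
print(phi(x) @ phi(z))    # same value, via the explicit feature map
```

The kernel does the work of the higher-dimensional projection without paying for the extra features.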
Hope that provides a starting point. Consider reading Russell & Norvig as a next starting point, or [Bishop](https://rads.stackoverflow.com/amzn/click/0387310738) if you want to go deeper.
Upvotes: 4 [selected_answer]<issue_comment>username_2: In case this is still relevant to you, I can share my tutorial on SVMs, with a Python implementation in a Jupyter notebook:
[Primer to support vector machines](https://github.com/username_2/Primer-to-support-vector-machines/blob/master/Primer_to_SVM.ipynb)
The tutorial assumes some mathematics and programming background knowledge. The SVM code uses no external machine learning packages and aims to teach the reader to build an SVM model him-/herself.
I hope it helps you!
Upvotes: 1 <issue_comment>username_3: Support vector machines are supervised learning models with associated learning algorithms that analyze data and are used for classification and regression analysis.
Here is a link where you can learn more about it at an introductory level: ["Support Vector Machine — Introduction to Machine Learning Algorithms" (Medium)](https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47)
Upvotes: 1 |