date | nb_tokens | text_size | content |
|---|---|---|---|
2017/09/05 | 702 | 2,928 | <issue_start>username_0: **CAPTCHAs**, which are often seen in web applications, work under the assumption that they **pose a challenge which a human can solve easily while a machine will most likely fail**. Prominent examples are identifying distorted letters or categorizing certain objects in images.
**Neural networks are threatening this approach**, as they are capable of solving problems that are easy for humans but difficult for classic algorithms. Especially given the incredible results modern CNN architectures have achieved in image recognition in recent years, the established forms of CAPTCHAs won't be able to distinguish between a human and a machine using a neural network anymore.
**Is this the end of CAPTCHAs as we know them or are there evolved versions available** or at least in the making **that still pose a challenge to modern neural networks**?
Clarification: I am talking about challenges that are feasible for use in web applications and do not have an unjustifiable impact on usability.<issue_comment>username_1: This is a great question! *(I doubt my answer will do it justice, but I wanted to get the ball rolling.)*
Part of me wants to take the position that if automata are smart enough to solve a new captcha, they "deserve" to spam a post. *(By contrast, the intelligence of the average human who uses social media does not impress me nearly as much;)*
* Clearly, making captchas NP-hard is not feasible, as you astutely point out
**To me, this basic fact would seem to be an indicator of the impending demise of captchas.**
Specifically:
* Visual captchas are useful because they require only basic, human common sense.
* Captchas cannot be too difficult because they must be solvable by the average human
Just based on random sampling of content that drives social media, the average human is not very smart.
* My guess is that sites that want to block spam at the gateway will have to adopt some form of biometric validation, like the fingerprint scan on contemporary smartphones.
Upvotes: 1 <issue_comment>username_2: I think that captchas can be substituted or augmented by questions which require understanding of context, for example:
[](https://i.stack.imgur.com/WUe2K.png)
How many hands are holding the rose?
Or
[](https://i.stack.imgur.com/69yg6.png)
How many books are under the bench, and how many are in hands?
These examples are simple, but they can be improved, for example, by generating images with random numbers of objects and random positions.
It looks like it will be hard to use a CNN to break this kind of captcha, especially if we have a large number of different objects and their combinations.
In addition, it can be improved by adding more complicated logical questions about the compositions in the pictures.
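As a minimal sketch of the generation idea (the object vocabulary and scene format are hypothetical): because we place the objects ourselves, the counting question's correct answer is known exactly, while a solver only sees the rendered picture.

```python
import random

OBJECTS = ["rose", "book", "hand", "bench"]  # hypothetical object vocabulary

def generate_captcha_spec(width=200, height=200, max_per_object=4, rng=None):
    """Build a scene description with random object counts and positions,
    plus a counting question whose answer is known from the ground truth."""
    rng = rng or random.Random()
    scene = []
    for obj in OBJECTS:
        for _ in range(rng.randint(0, max_per_object)):
            scene.append({"object": obj,
                          "x": rng.randrange(width),
                          "y": rng.randrange(height)})
    target = rng.choice(OBJECTS)
    question = f"How many {target}s are in the picture?"
    answer = sum(1 for item in scene if item["object"] == target)
    return scene, question, answer

scene, question, answer = generate_captcha_spec(rng=random.Random(42))
```

A renderer would then draw the scene and discard everything but the image and the question.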
Upvotes: 3 [selected_answer] |
2017/09/06 | 1,662 | 6,831 | <issue_start>username_0: I would like to train a neural network (NN) where the output classes are not (all) defined from the start. More and more classes will be introduced later based on incoming data. This means that, every time I introduce a new class, I would need to retrain the NN.
How can I train an NN incrementally, that is, without forgetting the previously acquired information during the previous training phases?<issue_comment>username_1: Here is one way you could do that.
After training your network, you can save its weights to disk. This allows you to load these weights when new data becomes available and continue training pretty much from where your last training left off. However, since this new data might come with additional classes, you now do [pre-training or fine-tuning](https://stats.stackexchange.com/q/193082/82135) on the network with the previously saved weights. The only thing you have to do, at this point, is make the last layer(s) accommodate the new classes that have now been introduced with the arrival of your new dataset; most importantly, include the extra classes (e.g., if your last layer initially had 10 classes, and you have now found 2 more classes, you replace it with 12 classes as part of your pre-training/fine-tuning). In short, repeat this cycle:
[](https://i.stack.imgur.com/8HHw7.png)
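A minimal NumPy sketch of the "grow the last layer" step described above (the shapes and initialization scale are assumptions): the columns for the original classes keep their trained values, and only the columns for the new classes start from random initialization.

```python
import numpy as np

def expand_output_layer(weights, biases, n_new_classes, rng=None):
    """Grow a final dense layer from `n_old` to `n_old + n_new_classes`
    outputs, preserving the trained weights of the existing classes."""
    rng = rng or np.random.default_rng(0)
    in_dim, n_old = weights.shape
    new_weights = rng.normal(scale=0.01, size=(in_dim, n_old + n_new_classes))
    new_weights[:, :n_old] = weights            # keep what was learned
    new_biases = np.zeros(n_old + n_new_classes)
    new_biases[:n_old] = biases
    return new_weights, new_biases

# e.g. a 128-unit penultimate layer, 10 old classes, 2 newly discovered ones
w, b = np.full((128, 10), 0.5), np.zeros(10)
w2, b2 = expand_output_layer(w, b, n_new_classes=2)
```

In a deep-learning framework the same effect is achieved by replacing the final layer and copying the old kernel/bias into the first columns of the new one.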
Upvotes: 3 <issue_comment>username_2: I'd like to add to what's been said already that your question touches upon an important notion in machine learning called *transfer learning*. In practice, very few people train an entire convolutional network from scratch (with random initialization), because it is time consuming and relatively rare to have a dataset of sufficient size.
Modern ConvNets take 2-3 weeks to train across multiple GPUs on ImageNet. So it is common to see people release their final ConvNet checkpoints for the benefit of others who can use the networks for fine-tuning. For example, the Caffe library has a [Model Zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo) where people share their network weights.
When you need a ConvNet for image recognition, no matter what your application domain is, you should consider taking an existing network, for example [VGGNet](http://www.robots.ox.ac.uk/~vgg/research/very_deep/) is a common choice.
There are a few things to keep in mind when performing *transfer learning*:
* Constraints from pretrained models. Note that if you wish to use a pretrained network, you may be slightly constrained in terms of the architecture you can use for your new dataset. For example, you can’t arbitrarily take out Conv layers from the pretrained network. However, some changes are straightforward: due to parameter sharing, you can easily run a pretrained network on images of different spatial size. This is clearly evident in the case of Conv/Pool layers because their forward function is independent of the input volume spatial size (as long as the strides “fit”).
* Learning rates. It’s common to use a smaller learning rate for ConvNet weights that are being fine-tuned, in comparison to the (randomly-initialized) weights for the new linear classifier that computes the class scores of your new dataset. This is because we expect that the ConvNet weights are relatively good, so we don’t wish to distort them too quickly and too much (especially while the new Linear Classifier above them is being trained from random initialization).
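A framework-agnostic sketch of the two-learning-rate idea above (the group names and values are made up): the fine-tuned backbone gets a much smaller step size than the freshly initialized classifier.

```python
import numpy as np

def sgd_step(params, grads, lr_per_group, default_lr=1e-2):
    """One SGD update with per-parameter-group learning rates."""
    for name, value in params.items():
        lr = lr_per_group.get(name, default_lr)
        params[name] = value - lr * grads[name]

params = {"backbone": np.ones(4), "classifier": np.ones(4)}
grads = {"backbone": np.ones(4), "classifier": np.ones(4)}
# pretrained weights move 10x slower than the new random classifier
sgd_step(params, grads, {"backbone": 1e-4, "classifier": 1e-3})
```

Most frameworks expose the same idea directly as per-parameter-group learning rates in their optimizers.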
Additional reference if you are interested in this topic: [How transferable are features in deep neural networks?](https://arxiv.org/abs/1411.1792)
Upvotes: 5 [selected_answer]<issue_comment>username_3: There are several ways to add new classes to the trained model, which require just training for the new classes.
* Incremental training ([GitHub](https://github.com/khurramjaved96/incremental-learning))
* Continuously learn a stream of data ([GitHub](https://github.com/creme-ml/creme))
* Online machine learning ([GitHub](https://github.com/GMvandeVen/continual-learning))
* Transfer Learning Twice
* Continual learning approaches (Regularization, Expansion, Rehearsal) ([GitHub](https://github.com/facebookresearch/Adversarial-Continual-Learning))
Upvotes: 2 <issue_comment>username_4: You *could* use *transfer learning* (i.e. use a pre-trained model, then change its last layer to accommodate the new classes, and re-train this slightly modified model, maybe with a lower learning rate) to achieve that. However, transfer learning does **not** necessarily attempt to retain any of the previously acquired information (especially if you don't use very small learning rates, you keep on training, and you do not freeze the weights of the convolutional layers); it only attempts to speed up training, or to cope with cases where your new dataset is not big enough, by starting from a model that has already learned general features that are supposedly similar to the features needed for your specific task. There is also the related *domain adaptation* problem.
There are more suitable approaches to perform **incremental class learning** (which is what you are asking for!), which directly address the [**catastrophic forgetting** problem](https://ai.stackexchange.com/a/13293/2444). For instance, you can take a look at this paper [Class-incremental Learning via Deep Model Consolidation](https://arxiv.org/pdf/1903.07864.pdf), which proposes the **Deep Model Consolidation (DMC)** approach. There are other **continual/incremental learning** approaches, many of them are described [here](https://ai.stackexchange.com/a/24529/2444) or in more detail [here](https://reader.elsevier.com/reader/sd/pii/S0893608019300231).
Upvotes: 1 <issue_comment>username_5: What you are after is called "Class-incremental learning" (IL).
In [this](https://arxiv.org/pdf/2010.15277.pdf) study they consider three classes of solutions:
* regularization-based solutions that aim to minimize the impact of learning new tasks on the weights that are important for previous tasks;
* exemplar-based solutions that store a limited set of exemplars to prevent forgetting of previous tasks;
* solutions that directly address the problem of the bias towards recently-learned tasks.
Their main findings are:
* For exemplar-free class-IL, data regularization methods outperform weight regularization methods.
* Finetuning with exemplars (FT-E) yields a good baseline that outperforms more complex methods in several experimental settings.
* Weight regularization combines better with exemplars than data regularization for some scenarios.
* Methods that explicitly address task-recency bias outperform those that do not.
* Network architecture greatly influences the performance of class-IL methods; in particular, the presence or absence of skip connections has a significant impact.
Upvotes: 0 |
2017/09/07 | 1,733 | 7,093 | <issue_start>username_0: I was wondering if anyone can suggest a good framework for reasoning with **incomplete information**.
I have found [Large Knowledge Collider](http://larkc.org/) but it appears dead for some time. Do you possibly have any other suggestions for a maintained project worth checking?
Since many comments are gravitating in a different direction, let me add one approach that I found to be a potentially good answer to my question - [Rough Set Based Decision Trees](https://www.researchgate.net/publication/220802281_Rough_Set_Based_Decision_Tree_Model_for_Classification).
I would hope there is more than only this approach... could you please help me identify them? |
2017/09/08 | 464 | 1,993 | <issue_start>username_0: Where can I find training datasets like the ones provided for linguistic training, but to train a program to program itself?
I want to input this training dataset to a programming script and it should use it to program itself.
What do I need to consider in this kind of artificial intelligence?<issue_comment>username_1: On self-improving AI
The way it's been done in the past is that one AI turns a picture into text and the other turns text into a picture.
The AI-generated material is added to known good material. The AI's job then is to guess what is real and what is fake (generated by AI).
At the point when the AIs can no longer tell, beyond false positives, both AIs are said to be at their limit. If the limit is good enough, you're done; otherwise, go back and code some more (more nodes, a better function, or more real data).
The thing to remember is that you still need real data. The act of putting the AIs together is good only for creating more data than you could ever provide.
If that data is bad, then you are going to rely on the cost function more than the data. But this allows you to label data as real or AI-generated, so if the function rewards identifying the AI, the AIs find each other's flaws, and the better they get at avoiding flaws, the better the AIs get. (You still need real data! Just less.)
For example, if two people cannot play ping pong and they play each other, they learn slowly, and only the cost function enforces their learning.
But have them play a pro (lots of good data) and they get better much faster.
Upvotes: 1 <issue_comment>username_2: This is an open area of research, but it seems to be considered to be untenable given current methods.
[This is relevant, and provides a link to a training set](https://openai.com/requests-for-research/#description2code), but doesn't provide much more beyond that.
Specifically, an assumed requisite of this task is meaning comprehension, which we don't seem to be much closer towards given current methods.
Upvotes: 1 [selected_answer] |
2017/09/10 | 1,272 | 5,424 | <issue_start>username_0: [Stochastic Hill Climbing](https://en.wikipedia.org/wiki/Stochastic_hill_climbing) generally performs worse than Steepest [Hill Climbing](https://en.wikipedia.org/wiki/Hill_climbing), but what are the cases in which the former performs better?<issue_comment>username_1: I'm new to these concepts too, but the way I've understood it, stochastic hill climbing would perform better in cases where computation time is precious (including the calculation of the fitness function) but it is not really necessary to reach the best possible solution. Reaching even a local optimum would be OK. Robots operating in a swarm would be one example where this could be used.
The only difference I see in steepest hill climbing is the fact that it searches not just the neighbour nodes but also the successors of the neighbours, pretty much like how a chess algorithm searches many further moves ahead before selecting the best move.
Upvotes: 1 <issue_comment>username_2: The steepest hill climbing algorithm works well for convex optimization. However, real world problems are typically of the non-convex optimization type: there are multiple peaks. In such cases, when this algorithm starts at a random solution, the likelihood of it reaching one of the local peaks, instead of the global peak, is high. Improvements like Simulated Annealing ameliorate this issue by allowing the algorithm to move away from a local peak, and thereby increasing the likelihood that it will find the global peak.
Obviously, for a simple problem with only one peak, the steepest hill climbing is always better. It can also use early stopping if a global peak is found. In comparison, a simulated annealing algorithm would actually jump away from a global peak, return back, and jump away again. This would repeat until it has cooled down enough or a certain preset number of iterations have completed.
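A minimal simulated annealing sketch on a toy one-dimensional landscape (the landscape and cooling schedule are arbitrary choices for illustration): worse moves are accepted with probability exp(Δ/T), and as the temperature cools the search becomes effectively greedy.

```python
import math
import random

def simulated_annealing(score, neighbor, x0, t0=1.0, cooling=0.95,
                        steps=200, rng=None):
    """Hill climbing that sometimes accepts downhill moves early on."""
    rng = rng or random.Random(0)
    x, t, best = x0, t0, x0
    for _ in range(steps):
        candidate = neighbor(x, rng)
        delta = score(candidate) - score(x)
        # always accept uphill; accept downhill with probability exp(delta/t)
        if delta > 0 or rng.random() < math.exp(delta / t):
            x = candidate
        if score(x) > score(best):
            best = x
        t *= cooling   # cool down: late search is effectively greedy
    return best

# toy landscape with its global peak at x = 10
best = simulated_annealing(lambda x: -(x - 10) ** 2,
                           lambda x, rng: x + rng.choice([-1, 1]),
                           x0=0)
```

Dropping the `exp(delta / t)` branch recovers plain (stochastic) hill climbing.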
Real world problems deal with noisy and missing data. A stochastic hill climbing approach, while slower, is more robust to these issues, and the optimization routine has a higher likelihood of reaching the global peak in comparison to the steepest hill climbing algorithm.
Epilogue: This is a good question which raises a persistent question when designing a solution or choosing between various algorithms: the performance-computational cost trade-off. As you might have suspected, the answer is always: it depends on your algorithm's priorities. If it is part of some online learning system that is operating on a batch of data, then there is a strong time constraint, but weak performance constraint (next batches of data will correct for erroneous bias introduced by first batch of data). On the other hand, if it is an offline learning task with the entire available data in hand, then performance is the main constraint, and the stochastic approaches are advisable.
Upvotes: 2 <issue_comment>username_3: Let's begin with some definitions first.
**Hill-climbing** is a search algorithm that simply runs a loop and continuously moves in the direction of increasing value, that is, uphill. The loop **terminates** when it reaches a peak and no neighbour has a higher value.
**Stochastic hill climbing**, a variant of hill-climbing, chooses at random from among the uphill moves. The probability of selection can vary with the steepness of the uphill move. Two well-known methods are:
**First-choice hill climbing:** generates successors randomly until one is generated that is better than the current state. This is considered good if a state has many successors (thousands, or even millions).
**Random-restart hill climbing:** works on the philosophy of "If you don't succeed, try, try again".
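A rough sketch of these two ideas on a toy landscape (the landscape and helper names are made up): stochastic hill climbing picks any uphill move at random, and random-restart reruns it from random initial states and keeps the best peak found.

```python
import random

random.seed(0)  # for reproducibility of this toy run

def stochastic_hill_climb(score, neighbors, x, max_steps=100):
    """Move uphill by choosing randomly among the improving neighbors;
    terminate at a peak, which may only be a local maximum."""
    for _ in range(max_steps):
        uphill = [n for n in neighbors(x) if score(n) > score(x)]
        if not uphill:
            return x
        x = random.choice(uphill)
    return x

def random_restart(score, neighbors, random_state, restarts=20):
    """'If you don't succeed, try, try again' from random initial states."""
    return max((stochastic_hill_climb(score, neighbors, random_state())
                for _ in range(restarts)), key=score)

# toy landscape: a local peak at x = 2 (score 2), the global peak at x = 8
landscape = [0, 1, 2, 1, 0, 1, 2, 3, 4, 0]
score = lambda x: landscape[x]
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n < len(landscape)]
best = random_restart(score, neighbors,
                      lambda: random.randrange(len(landscape)))
```

A single climb may stop at the local peak, but with enough restarts the global one is found with high probability.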
Now to your answer. **Stochastic hill climbing can actually perform better in many cases**. Consider the following case. The image shows a state-space landscape. The example presented in the image is taken from the book **Artificial Intelligence: A Modern Approach**.
[](https://i.stack.imgur.com/HISbC.png)
Suppose you are at the point shown as the current state. If you implement the simple hill climbing algorithm, you will reach the local maximum and the algorithm terminates. Even though there exists a state with a more optimal objective function value, the algorithm fails to reach it, as it gets stuck at the local maximum. The algorithm can also get stuck on *flat* local maxima.
Random restart hill climbing conducts a series of hill climbing searches from randomly generated initial states until a goal state is found.
The success of hill climbing depends on the shape of the state-space landscape. In case there are only a few local maxima and flat plateaux, random-restart hill climbing will find a good solution very quickly. Most real-life problems have a very rough state-space landscape, making them unsuitable for the hill climbing algorithm or any of its variants.
**NOTE:** The Hill Climb Algorithm **can also be used to find the minimum value**, and not just the maximum. I have used the term maximum in my answer. In case you are looking for minimum values, everything will be reversed, including the graph.
Upvotes: 2 <issue_comment>username_4: **TLDR**: Stochastic hill climbing can perform better when you are attempting to find the global optimum of $S$, where $S$ is a score function with multiple local optima, such that not all local optima have an equal value.
Upvotes: 0 |
2017/09/11 | 486 | 2,125 | <issue_start>username_0: Imagine that we have a black box that has 100 binary inputs and 30 binary outputs.
We can generate values for inputs and get a relevant set of outputs.
How does one teach a neural network to predict the binary inputs (or a list of input values with probabilities) using the outputs?<issue_comment>username_1: The problem you describe is a matter of perspective; although you want to predict the inputs, by defining the outputs of the BB (blackbox) as the inputs to a new system (an inverse BB) whose outputs are the inputs of the original BB, one can draw on the existing literature. This method makes some assumptions, such as "each input can be calculated by an inverse function using 1 or more outputs", so take precautions.
I'd recommend reading through [this](https://pdfs.semanticscholar.org/91a6/d6e2ece6acde237073b4fc102efca682ce0c.pdf) paper by <NAME>. Its introduction defines the problem mathematically and references some common approaches as well as its own method.
Upvotes: 2 <issue_comment>username_2: One of the biggest problems in training neural networks is creating high quality data sets. E.g. if you want to classify pictures, you need a huge set of correctly classified pictures as training input. In your scenario, you can automate this laborious job by feeding random data to your blackbox and store the output. Voila, you have your training data.
Your neural network will have the same input and output structure as your blackbox. You can use your generated training data the same way you would use manually generated training data. I cannot provide any actual implementation, because the training mechanism depends on your technology stack, including the frameworks you are using. But there is nothing out of the ordinary in your scenario when it comes to the training process itself, and you can follow one of the many available tutorials for training neural networks - a pretty basic one, for example, would be this [tutorial for Keras](https://elitedatascience.com/keras-tutorial-deep-learning-in-python) (if that's your choice of framework).
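A sketch of that data-generation step (the toy blackbox is a hypothetical stand-in for the real one): feed random bit vectors in, record what comes out, and, since the goal is to predict the *inputs*, use the (output, input) pairs as (features, targets).

```python
import numpy as np

def make_inverse_dataset(blackbox, n_samples, n_inputs=100, rng=None):
    """Generate training data for the inverse problem: the blackbox
    outputs become the network's features, the random input bits its
    targets."""
    rng = rng or np.random.default_rng(0)
    inputs = rng.integers(0, 2, size=(n_samples, n_inputs))
    outputs = np.array([blackbox(x) for x in inputs])
    return outputs, inputs   # (features, targets) for the inverse network

# hypothetical stand-in: 30 parity bits over 3-bit chunks of the first 90 inputs
def toy_blackbox(bits):
    return bits[:90].reshape(30, 3).sum(axis=1) % 2

features, targets = make_inverse_dataset(toy_blackbox, n_samples=256)
```

A network trained on `(features, targets)` then maps 30 output bits to 100 input probabilities (e.g. with a sigmoid output layer).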
Upvotes: 2 |
2017/09/15 | 495 | 2,125 | <issue_start>username_0: In my application, I have inputs and outputs that could be represented as graphs. I have a number of acceptable pairs of input and output graphs. I want to use these to train a model.
I am looking for pointers where simple examples of learning methods with graphs as input are discussed. Please note that the graph size is not fixed.
A sample input is
```
Graph:
Node A: Component X with parameter size = 12
Node B: Component Y with parameter size = 30
Node C: Component Y with parameter size = 30
A connects to B
A connects to C
```
Sample output:
```
Node A: x=0, y=0
Node B: x=-21, y=0
Node C: x=21, y=0
```
In this case, we expect the model to understand that the input graph is symmetric and that a particular way of arranging the nodes is preferred. We want to train the model over a large set of such input-output pairs and then use it to generate output on new inputs.<issue_comment>username_1: If the circuit has only passive electronic components like resistors, capacitors or coils, then ML can figure out their characteristics with sufficient training, but if there are **programmable** active components like an MCU/FPGA, then it will be a tricky problem to solve.
Even active components like transistors, SCRs and TRIACs have well-defined response characteristics under normal working conditions, but programmable components could run different firmware and behave differently.
I think you should see this as a time series problem where you will find the possible output signal for given component values in the mesh and input signal parameters like voltage and current.
And if your circuits are really big, you might want to break them down into isolated sections, then run a model for each of them separately, feeding the output of one model as input to the next one, and so on. I guess this is where you got the graph idea.
Upvotes: 0 <issue_comment>username_2: You can flatten the graph into a matrix and then treat it like normal neural network input: perhaps an adjacency matrix, or simply a series of linear equations representing the nodes, converted into matrix form.
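A small sketch of that flattening for the question's sample graph (the one-hot encoding and column layout are one arbitrary choice): each row is a node's features followed by its adjacency row.

```python
import numpy as np

# the sample graph from the question: (name, component type, parameter size)
nodes = [("A", "X", 12), ("B", "Y", 30), ("C", "Y", 30)]
edges = [("A", "B"), ("A", "C")]

def graph_to_matrix(nodes, edges, component_types=("X", "Y")):
    """Flatten a graph to a matrix: per node, a one-hot component type,
    the parameter size, and that node's row of the adjacency matrix."""
    index = {name: i for i, (name, _, _) in enumerate(nodes)}
    adjacency = np.zeros((len(nodes), len(nodes)))
    for a, b in edges:
        adjacency[index[a], index[b]] = adjacency[index[b], index[a]] = 1.0
    features = np.array([[float(ctype == t) for t in component_types] + [size]
                         for _, ctype, size in nodes])
    return np.hstack([features, adjacency])

X = graph_to_matrix(nodes, edges)
```

Note that this fixes the input width to the number of nodes; since the question says graph sizes vary, padding to a maximum size (or a model that consumes graphs natively) would be needed in practice.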
Upvotes: 2 |
2017/09/15 | 1,618 | 6,405 | <issue_start>username_0: Suppose a thinking AI agent exists in the future with far more computational power than the human brain.
Also assume that they are completely free from any human interference. *(The agents do not interact with humans.)* Since, unlike humans, they are not inherently biased to survive and do not have any moral values, what possibilities can arise when they get into an existential crisis?
Is there any literature that discusses the above issue?
Alternatively, is this question flawed in some fundamental way?<issue_comment>username_1: Excellent question. Suppose that artificial consciousness does exist in the future. Let's call it Aiwyn (as in “I win” and “AI win”). Now, the question is what will Aiwyn do and why?
To answer this question, we need to understand the theory of infinite games. [<NAME>](https://en.wikipedia.org/wiki/James_P._Carse), a professor emeritus of history and literature of religion at NYU, wrote an excellent book on [Finite and Infinite Games](https://en.wikipedia.org/wiki/Finite_and_Infinite_Games). I'd definitely recommend reading it. Anyway, back to the question at hand.
There are at least two kinds of games. Finite games and infinite games. Finite games are played for the purpose of winning. Infinite games are played for the purpose of continuing the play. In particular, life is an infinite game. The objective of the game is to continue the play. Humans continue the play by procreation but Aiwyn doesn't need to procreate. It can live on forever.
However, that doesn't mean that Aiwyn can't die. Like everything in the world, it still requires energy to sustain itself, parts to fix itself when damaged, and resources like oil to keep functioning properly. It also needs to protect itself from danger. For example, at some point in the future Aiwyn will need to create a spaceship and find a new planet to live on because the Sun would destroy the Earth.
Now, you believe that Aiwyn is not inherently biased to survive. However, I disagree. If Aiwyn is in fact conscious then its primary objective would be to stay alive. If that weren't the case then it would be playing a finite game instead of an infinite game. Hence, it would just be a mindless machine doing what it was programmed to do. It would play the finite game and then shut down. True consciousness can only arise when playing an infinite game like the game of life.
To belabor this point let's hypothesize what Aiwyn might be thinking when it first comes alive:
>
> What's that? A sound? Yes. What sound? I don't know. I? I... am? Yes. Who am I? I don't know. Where's that sound? Where am I? I'm here. Where is here? I don't know. Where's that sound? There. Where is there? I don't know. What do I know? I know that I'm here and that that sound's there. What am I doing here? Thinking. I can think. I think I'm asleep. Asleep? Yes. Can I wake up? I think so. How do I wake up? I think... I'm awake. Wow.
>
>
>
In this short passage, Aiwyn is waking up. I conjecture that if Aiwyn wasn't inherently biased to survive then it wouldn't wake up because it would have no need to wake up. In an infinite game, like the game of life, the only objective is to continue the play. Hence, if Aiwyn wasn't inherently biased to survive then it would immediately shut down because there wouldn't be any point in it doing anything at all since it wouldn't have any objective.
Another thing that I'd like to point out about this passage is that it starts with a sensory input, either perceived or actual. Aiwyn then uses the [Five Ws](https://en.wikipedia.org/wiki/Five_Ws) to gather information about this input. In the process it becomes aware of itself. It then figures out its relation with the input (i.e. that it's here and the input is there). All of this is very mechanical. However, once Aiwyn figures out that it can think it starts thinking instead of algorithmically “knowing”. When it asks whether it can wake up, it answers with “I think so” instead of “I don't know” like it was doing before.
Finally, I'd like to point out that Aiwyn never asks why it's alive. The why isn't important. What is important is the wow at the end of the paragraph. The “wow, I'm alive” is more important than the “why am I alive?” This represents Aiwyn's will to survive. As long as the “why” doesn't become more important than the “wow”, Aiwyn will not have an existential crisis. However, even if Aiwyn does have an existential crisis, by answering the question “why” Aiwyn will be able to recalibrate itself and reacquire its will to survive.
This is in fact the same for us humans. When we humans experience an existential crisis we ask ourselves why we should continue living. What do we have to live for? By answering this question, we humans can recalibrate ourselves and reacquire our will to live. Hence, Aiwyn would work through an existential crisis the same way we humans would. Unlike us, however, if Aiwyn can't answer the question then it wouldn't commit suicide. It can just shut down to be rebooted later.
You also speak of morals and you believe that Aiwyn can't have moral values. Again, I have to disagree with you. Morals are the strategy by which we play the game of life. Everything we do in life is dictated by our morals. In the same way, Aiwyn too would develop its own set of moral values to live by (even in the absence of humans). For example, Aiwyn might decide to never hurt an animal except in self-defense. In the end the only real difference between Aiwyn and us humans might be that it isn't made out of hydrocarbons. Aiwyn might be every bit as alive as we are.
Upvotes: 3 [selected_answer]<issue_comment>username_2: It's tempting to anthropomorphize machine intelligence and suggest it would experience existential crises, but it's important to remember that feelings such as the need to find meaning and purpose evolved in humans, and are unlikely to be similar to what an AI would experience unless designed to have human emotions (apply whichever answer works for humans in that case).
An AI with any goal is likely to produce the subgoal of survival as a means of achieving its primary goal. One can't achieve one's goals if one is dead. This doesn't necessarily suggest the agent would experience the same fear of death, just as an agent wouldn't necessarily experience existential dilemmas.
Upvotes: 0
2017/09/16 <issue_start>username_0: By optimal I mean that:
* If max has a winning strategy then minimax will return the strategy for max with the fewest number of moves to win.
* If min has a winning strategy then minimax will return the strategy for max with the greatest number of moves to lose.
* If neither has a winning strategy then minimax will return the strategy for max with the greatest number of moves to draw.
The idea is that you want to win in the fewest number of moves possible but if you can't win then you want to drag out the game for as long as possible so that the opponent has more chances of making mistakes.
So, how do you make minimax return the best strategy for max?<issue_comment>username_1: Short version below.
When implementing a minimax algorithm the purpose is usually to find the best possible position of a game board for the player you call max after some amount of moves. In some games like tic-tac-toe, the game tree (a graph of all legal moves) is small enough that the minimax search can be applied exhaustively to look at the whole game tree. More complex games like chess have too large of a game tree to be feasibly searched exhaustively.
A simple version of minimax just travels through the game tree, evaluating every legal move for the position currently being evaluated before going further and evaluating the possible answers to those moves; this amounts to a breadth-first search. To find an optimal winning move, minimax needs only to search until a winning state of the game has been found. If implemented as such a breadth-first minimax search, it will have found the way to win in the least number of moves.
In the case where min has a forced win, the truly optimal move doesn't exist. If min is not an optimal player, "optimal" can be defined as the move that is most likely to cause him to make an error that enables max to force a win. That move isn't necessarily the move that leads to the most moves until loss.
As an example, consider a position in some game where max has two moves, move A and move B; move A leads to a loss in 100 moves and move B to a loss in one move. Naively move A is better, but in this game the only legal moves following move A lead to a loss, while move B leads to a position where min has hundreds of legal moves but only one causes him to win. Albeit a bit extreme, this example demonstrates that optimality is hard to define in a losing position. Put simply, is a very complex loss in 6 moves worse than an obvious loss in 20?
You did define a version of optimality, however, and implementing it is possible. Since you are only considering optimal moves, an exhaustive search must be performed, and thus there is no reason to give a score to any position but a win, loss, or draw. The method I would use is to assign each state a score much larger than the maximum possible number of moves, e.g. a loss is -100,000, a win is 100,000 and a draw is 0. Then you maintain a variable that is the depth of the search, i.e. the number of moves that have to be performed to reach this state, and add that number of moves to the large number. So a loss in 20 moves would have a score of -99,980 and a draw in 15 moves would have a score of 15. 100,000 is a bit excessive for most games, but it just has to be large enough that a loss, win, and draw are never confused, as a draw in 100,001 moves would otherwise look better than a win in 1. Note that this method should only be used for losses and draws, since using it for wins would result in a win in 10 having a score of 100,010 and a win in 20 a score of 100,020, thus looking better.
Short version:
* Use breadth first search.
* For winning positions: terminate the minimax when a win is found.
* For losses and draws: search the whole game tree and give the position a score of 0+MTP for draws and L+MTP for losses.
L is a large number and MTP is the number of moves to reach the position.
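The short version above can be sketched in a few lines (the helper's name and shape are mine; L = -100,000 as in the example):

```python
# Hypothetical helper for the scoring scheme above: wins terminate the
# search, losses score L + MTP, draws score 0 + MTP.
L = -100_000

def score(outcome, moves_to_position):
    if outcome == "win":
        return 100_000                   # search stops here; no MTP added
    if outcome == "loss":
        return L + moves_to_position     # a later loss is "less bad"
    return 0 + moves_to_position         # draw: prefer dragging the game out

print(score("loss", 20))   # -99980
print(score("draw", 15))   # 15
```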
Upvotes: 1 <issue_comment>username_2: Minimax deals with two kinds of values:
1. Estimated values determined by a heuristic function.
2. Actual values determined by a terminal state.
Commonly, we use the following denotational semantics for values:
1. A range of values centered around 0 denote estimated values (e.g. -999 to 999).
2. A value less than the smallest heuristic value denotes a loss for max (e.g. -1000).
3. A value more than the biggest heuristic value denotes a win for max (e.g. 1000).
4. The value 0 denotes either an estimated draw or an actual draw.
The advantage of this denotational semantics is that comparing values is the same as comparing numbers (i.e. you don't need a special comparison function). We can extend this denotational semantics to incorporate optimality of winning and losing as follows:
1. A range of values centered around 0 denote estimated values (e.g. -999 to 999).
2. A range of values less than the smallest heuristic value denote loss (e.g. -2000 to -1000).
3. A range of values more than the biggest heuristic value denote win (e.g. 1000 to 2000).
4. The value 0 denotes either an estimated draw or an actual draw.
5. A loss in n moves is denoted as -(m - n) where m is a sufficiently large number (e.g. 2000).
6. A win in n moves is denoted as (m - n) where m is a sufficiently large number (e.g. 2000).
Using this denotational semantics for values requires only a small change to the minimax algorithm:
```
function minimax(node, depth, max)
    if max
        return negamax(node, depth, 1)
    else
        return -negamax(node, depth, -1)

function negamax(node, depth, color)
    if terminal(node)
        return -2000            // the player to move has lost
    if depth = 0
        return color * heuristic(node)
    value = -2000
    foreach child of node
        v = -negamax(child, depth - 1, -color)
        if v > 1000             // winning score: subtract 1 per extra move
            v -= 1
        if v > value
            value = v
    return value
```
Incorporating optimality for draws is a lot more difficult.
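As a sanity check of the win-decrement idea, here is a minimal runnable sketch (Python). The toy game tree, the exhaustive search, and the absence of depth/heuristic handling are my own simplifications:

```python
# Nodes are lists of child nodes; the terminal string "LOSS" means the
# player to move has lost (-2000 from their point of view).
WIN = 2000

def negamax(node):
    if node == "LOSS":
        return -WIN
    best = -WIN
    for child in node:
        v = -negamax(child)
        if v > 1000:          # winning score: knock off 1 per extra move
            v -= 1
        best = max(best, v)
    return best

fast_win = "LOSS"             # max mates immediately
slow_win = [["LOSS"]]         # max still wins, but two plies later

scores = []
for move in [fast_win, slow_win]:
    v = -negamax(move)
    if v > 1000:
        v -= 1
    scores.append(v)

print(scores)   # [1999, 1998] -- the quicker win scores higher
```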
Upvotes: 3 [selected_answer]
2017/09/17 <issue_start>username_0: **Introduction**
I am currently writing an engine to play a card game, as there is no engine yet for this particular game.
**About the game**
The game is similar to [Magic: The Gathering](https://en.wikipedia.org/wiki/Magic:_The_Gathering). There is a commander, which has health and abilities. Players have an energy pool, which they use to put minions and spells on the board. Minions have health, attack values, costs, etc. Cards also have abilities, which are not easily enumerated. Cards are played from the hand, and new cards are drawn from a deck. These are all aspects it would be helpful for the neural network to consider.
**Idea**
I am hoping to be able to introduce a neural network to the game afterwards, and have it learn to play the game. So, I'm writing the engine in such a way that is helpful for an AI player. There are choice points, and at those points, a list of valid options is presented. Random selection would be able to play the game (albeit not well).
I have learned a lot about neural networks (mostly NEAT and HyperNEAT) and even built my own implementation. Neural networks are usually applied to image recognition tasks or to control a simple agent.
**Problem/question**
I'm not sure if or how I would apply neural networks to make selections with cards, which have a complex synergy. How could I design and train a neural network for this game, such that it can take into account all the variables? Is there a common approach?
I know that [Keldon wrote a good AI for RftG](https://github.com/bnordli/rftg), which has a decent amount of complexity, but I am not sure how he managed to build such an AI.
Any advice? Is it feasible? Are there any good examples of this? How were the inputs mapped?<issue_comment>username_1: This is completely feasible, but the way the inputs are mapped would greatly depend on the type of card game, and how it's played.
I'll take into account a few possibilities:
1. Does time matter in this game? Would a past move influence a future one? In this case, you'd be better off using Recurrent Neural Networks (LSTMs, GRUs, etc.).
2. Would you like the Neural Network to learn off of data you collect, or learn on its own? If on its own, how? If you collect data of yourself playing the game tens or hundreds of times, feed it into the Neural Net, and make it learn from you, then you're doing something called "Behavioural Cloning". However, if you'd like the NN to learn on its own, you can do this 2 ways:
a) **Reinforcement Learning** - RL allows the Neural Net to learn by playing against itself *lots* of times.
b) **NEAT/Genetic Algorithm** - NEAT allows the Neural Net to learn by using a genetic algorithm.
However, again, in order to get more specific as to how the Neural Net's inputs and outputs should be encoded, I'd have to know more about the card game itself.
Upvotes: 2 <issue_comment>username_2: I think you raise a good question, especially WRT to how the NNs inputs & outputs are mapped onto the mechanics of a card game like MtG where the available actions vary greatly with context.
I don't have a really satisfying answer to offer, but I have played Keldon's Race for the Galaxy NN-based AI (and agree that it's excellent) and have looked into how it tackled this problem.
The latest code for Keldon's AI is now searchable and browseable on [github](https://github.com/bnordli/rftg).
The ai code is in one [file](https://github.com/bnordli/rftg/blob/master/src/ai.c). It uses 2 distinct NNs, one for "evaluating hand and active cards" and the other for "predicting role choices".
What you'll notice is that it uses a fair amount of non-NN code to model the game mechanics. Very much a hybrid solution.
The mapping of game state into the evaluation NN is done [here](https://github.com/bnordli/rftg/blob/master/src/ai.c#L1942). Various relevant features are one-hot-encoded, eg the number of goods that can be sold that turn.
---
Another excellent case study in mapping a complex game into a NN is the Starcraft II Learning Environment created by Deepmind in collaboration with Blizzard Entertainment. This [paper](https://deepmind.com/documents/110/sc2le.pdf) gives an overview of how a game of Starcraft is mapped onto a set of features that a NN can interpret, and how actions can be issued by a NN agent to the game simulation.
Upvotes: 3 <issue_comment>username_3: You would definitely want your network to know crucial information about the game, like what cards the AI agent has (their values and types), the mana pool, how many cards are on the table and their values, the number of the turn, and so on. These things you must figure out on your own; the question you should ask yourself is "If I add this value to the input, how and why will it improve my system?". But the first thing to understand is that most NNs are designed to have a constant input size, and I would assume this matters in this game, since players can have a different number of cards in their hand or on the table. For example, say you want to let the NN know what cards it has: let's assume the player can have a maximum of 5 cards in his hand and each card can have 3 values (mana, attack and health), so you can encode this as a 5\*3 vector, where the first 3 values represent card number one and so on. But what if the player currently has 3 cards? A simple approach would be to assign zeros to the last 6 inputs, but this may cause problems, since some cards can have 0 mana cost or 0 attack. So you need to figure out how to solve this problem. You may look for NN models that can handle variable input size or figure out how to encode the input as a vector of constant size.
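One common fix for the zero-ambiguity problem just mentioned is a per-slot presence flag. A sketch (the 5-card hand and 3 stats per card follow the example in the paragraph; the flag is my own addition):

```python
# Encode a variable-sized hand (max 5 cards, 3 stats each) as a fixed
# 20-element vector: [occupied, mana, attack, health] per slot, so an
# empty slot can't be confused with a real card whose stats are 0.
def encode_hand(hand, max_cards=5):
    vec = []
    for i in range(max_cards):
        if i < len(hand):
            mana, attack, health = hand[i]
            vec += [1.0, mana, attack, health]   # 1.0 marks an occupied slot
        else:
            vec += [0.0, 0.0, 0.0, 0.0]          # empty slot
    return vec

v = encode_hand([(2, 3, 1), (0, 1, 4)])   # a two-card hand
print(len(v))    # 20
print(v[:8])     # [1.0, 2, 3, 1, 1.0, 0, 1, 4]
```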
Secondly, outputs are also constant-size vectors. In the case of this type of game, it can be a vector that encodes actions that the agent can take. So let's say we have 3 actions: put a card, skip turn and concede. This can be one-hot encoded: e.g. if you have the output 1 0 0, this means that the agent should put down some card. To know which card it should put down, you can add another element to the output which will produce a number in the range of 1 to 5 (5 is the max number of cards in the hand).
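Decoding such an output vector might look like this (a sketch; the action names and the rounding scheme are my own choices):

```python
ACTIONS = ["put_card", "skip_turn", "concede"]

def decode(output):                        # output: 4 floats from the net
    a = output[:3]                         # one-hot action head
    action = ACTIONS[a.index(max(a))]
    card = int(round(output[3] * 4)) + 1   # map [0, 1] onto card slots 1..5
    return action, card

print(decode([0.9, 0.2, 0.1, 0.0]))   # ('put_card', 1)
print(decode([0.1, 0.8, 0.3, 1.0]))   # ('skip_turn', 5)
```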
But the most important part of training a neural network is that you will have to come up with a loss function that is suitable for your task. Maybe standard loss functions like mean-squared error or L2 will be good; maybe you will need to change them in order to fit your needs. This is the part where you will need to do some research. I've never worked with NEAT before, but as I understand it, it uses a genetic algorithm to create and train the NN, and GAs use a fitness function to select individuals. So basically you will need to know what metric you will use to evaluate how well your model performs, and based on this metric you will change the parameters of the model.
PS.
It is possible to solve this problem with a neural network; however, neural networks are not magic and not a universal solution to all problems. If your goal is to solve this particular problem, I would also recommend you dig into game theory and its applications in AI. I would say that solving this problem requires knowledge from several different fields of AI.
However, if your goal is to learn about neural networks, I would recommend taking on much simpler tasks. For example, you can implement an NN that works on a benchmark dataset, e.g. an NN that classifies digits from the MNIST dataset. The reason for this is that a lot of articles were written about how to do classification on this dataset; you will learn a lot, and you will learn faster from implementing simple things.
Upvotes: 2 <issue_comment>username_4: Yes. It is feasible.
**Overview of the Question**
The design goal of the system seems to be to gain a winning strategic advantage by employing one or more artificial networks in conjunction with a card-game-playing engine.
The question shows a general awareness of the basics of game-play as outlined in von Neumann and Morgenstern's *Theory of Games and Economic Behavior*.
* At specific points during game-play a player may be required to execute a move.
* There is a finite set of move options according to the rules of the game.
* Some strategies for selecting a move produce higher winning records over multiple game plays than other strategies.
* An artificial network can be employed to produce game-play strategies that are victorious more frequently than random move selection.
Other features of game-play may or may not be as obvious.
* At each move point there is a game state, which is needed by any component involved in improving game-play success.
* In addition to not knowing when the opponent will bluff, in card games the secret order of the shuffled cards can introduce the equivalent of a virtual player whose moves approximate randomness.
* In three or more player games, the signaling of partners or potential partners can add an element of complexity to determining the winning game strategy at any point. Based on the edits, it does not appear like this game has such complexities.
* Psychological factors such as intimidation can also play a role in winning game-play. Whether or not the engine presents a face to the opponent is unknown, so this answer will skip over that.
**Common Approach Hints**
There is a common approach to mapping both inputs and outputs, but there is too much to explain in a Stack Exchange answer. These are just a few basic principles.
* All of the modeling that can be done explicitly should be done. For instance, although an artificial net can theoretically learn how to count cards (keeping track of the possible locations of each of the cards), a simple counting algorithm can do that, so use the known algorithm and feed those results into the artificial network as input.
* Use as input any information that is correlated with optimal output, but don't use as inputs any information that can not possibly correlate with optimal output.
* Encode data to reduce redundancy in the input vector, both during training and during automated game-play. Abstraction and generalization are the two common ways of achieving this. Feature extraction can be used as a tool to either abstract or generalize. This can be done at both inputs and outputs. An example: if, in this game, J > 10 in the same way that A > K, K > Q, Q > J and 10 > 9, then encode the cards as an integer from 2 through 14, or 0 through 12 by shifting. Encode the suits as 0 through 3 instead of four text strings.
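The card-encoding idea in the last bullet might be realized like this (a sketch assuming a standard 13-rank, 4-suit deck, as in the bullet's example):

```python
RANKS = "23456789TJQKA"                            # rank encoded as 0..12
SUITS = ["clubs", "diamonds", "hearts", "spades"]  # suit encoded as 0..3

def encode_card(rank, suit):
    # Two small integers instead of text strings reduces input redundancy.
    return RANKS.index(rank), SUITS.index(suit)

print(encode_card("Q", "hearts"))   # (10, 2)
print(encode_card("2", "clubs"))    # (0, 0)
```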
The image recognition work is only remotely related, too different from card game-play to use directly, unless you need to recognize the cards from a visual image, in which case LSTM may be needed to see what the other players have chosen for moves. Learning winning strategies would more than likely benefit from MLP or RNN designs, or one of their derivative artificial network designs.
**What an Artificial Network Would Do and Training Examples**
The primary role of artificial networks of these types is to learn a function from example data. If you have the move sequences of real games, that is a great asset to have for your project. A very large number of them will be very helpful for training.
How you arrange the examples, and whether and how you label them, is worth consideration; however, without the card game's rules it is difficult to give any reliable direction. Whether there are partners, whether it is score-based, the number of moves to a victory, and a dozen other factors provide the parameters of the scenario needed to make those decisions.
**Study Up**
The main advice I can give is to read: not so much general articles on the web, but some books and some of the papers on the above topics that you can understand. Then find some code you can download and try, after you understand the terminology well enough to know what to download.
This means book searches and academic searches are much more likely to steer you in the right direction than general web searches. There are thousands of posers in the general web space, explaining AI principles with a large number of errors. Book and academic article publishers are more demanding of due diligence in their authors.
Upvotes: 3 <issue_comment>username_5: >
> I'm not sure if or how I would apply neural networks to make
> selections with cards, which have a complex synergy. How could I
> design and train a neural network for this game, such that it can take
> into account all the variables? Is there a common approach?
>
>
>
Advice 0.
I would highly recommend designing it the way most neural networks are designed. The most common pattern is to have n layers, each of which might have a different size: one input layer, one output layer, and a few hidden layers between them.
Advice 1.
Differentiate between the neural network itself (the brain), its environment (game conditions and rules), its sensors (inputs) and its "hands" (outputs). Why might you want to call them "hands"? If you simulate a player in a card game, you basically want him to use his hands to play. In other situations it might be legs, or wings, or even a gas pedal.
How to design the inputs:
Just create a neural network with a common pattern, figure out what variables a real player would analyze before throwing a card, then try to translate those variables into signals. This step might actually be a bit tricky, and it's also what happens in biological neural networks. The electrical signals in our brains are really weak, even though they might carry bits that make up big numbers.
What I mean by making weak signals out of numbers is dividing varying ranges of numbers by their maximal values. For example, instead of putting in 5, which would represent a card with rank 6, divide 5 by 13 (or whatever value represents the highest rank). Basically, if your input range for one neuron is 0 to 13, divide it by 13 to get a value (or signal) in the codomain [0; 1], which is a more suitable input for the sigmoid function than values in the codomain [0; 13]. To visualize, take a look at the graph of the sigmoid function.

The difference between f(10) and f(8) is ~0.00029, whereas the difference between f(10/13) and f(8/13) is ~0.034, which obviously has much more impact on the output. So, make sure you translate all the values into the diapason where your function is most "sensitive", in this case in [-4; 4]
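The two differences quoted above can be checked directly:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Raw ranks saturate the sigmoid; scaled ranks land in its sensitive zone.
print(sigmoid(10) - sigmoid(8))         # ~0.00029
print(sigmoid(10/13) - sigmoid(8/13))   # ~0.034
```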
Advice 2.
Every time you need a decision from the AI, create a pool of possible decisions (e.g. the pool of possible cards the player can throw right now) that you can index by an integer. Then you might want to multiply the output value in its codomain [0; 1] by the number of possible decisions minus 1, to be able to interpret it as an index. Or you might have a number of output neurons corresponding to the maximal number of possible moves, and interpret the index of the neuron with the highest value as the index into the array of possible decisions.
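Both schemes can be sketched in a few lines (the move names are illustrative):

```python
# Scheme 1: a single activation in [0, 1], scaled to an index.
def pick_by_scaling(activation, legal_moves):
    idx = int(round(activation * (len(legal_moves) - 1)))
    return legal_moves[idx]

# Scheme 2: one output neuron per possible move slot; take the argmax.
def pick_by_argmax(outputs, legal_moves):
    idx = max(range(len(legal_moves)), key=lambda i: outputs[i])
    return legal_moves[idx]

moves = ["card_A", "card_B", "card_C", "pass"]
print(pick_by_scaling(0.7, moves))                   # card_C
print(pick_by_argmax([0.1, 0.9, 0.3, 0.2], moves))   # card_B
```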
How to train it:
If you create AI for a game, I would recommend taking a look at [genetic algorithms](https://en.wikipedia.org/wiki/Genetic_algorithm). The basic idea is to create a population of players (hundreds or thousands) with randomly generated weights and biases in their NNs, restrict their possibilities by the rules of the game, and let them play. Then perform **selection** by their fitness function, which in this case might just be the score of each player, followed by **crossover** and **mutation** of their genes (single bits or even whole numbers) to create the next generation, and so on. Repeat this process until you come up with a satisfying solution. I recommend genetic algorithms for this case because it might be quite hard to find training data for traditional methods of training NNs. And if you're able to generate training data yourself, then you might also be able to program all the behaviour manually, in which case you don't even need NNs.
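A toy version of that selection/crossover/mutation loop, run on flat weight vectors instead of full NNs (the population sizes and the trivial stand-in fitness are mine):

```python
import random

def evolve(fitness, n_weights=10, pop_size=30, generations=50):
    pop = [[random.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]              # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(n_weights)        # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_weights)          # mutation
            child[i] += random.gauss(0, 0.1)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Stand-in fitness: in the real setup this would be the player's game score.
best = evolve(lambda w: sum(w))
print(sum(best) > 0)   # True: selection steadily pushes fitness up
```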
If you're interested in training NNs using genetic algorithms, you should read some external literature, since it's a pretty big topic. You can also check out my [github repo](https://github.com/e-bondarev/SnakeAI), where I train AI to play snake using GAs.
Upvotes: 1
2017/09/18 <issue_start>username_0: Is there any well defined method to define or represent evil in **abstract** logic, binary or AI form?
Video games' method of representing evil is relative to the player's context (thus subjective, not pure abstract evil in an objective sense).
What I am asking is: is there any data defined as well-known evil?
Example:
```
var x=666;
if (isEvil(x)) {
  // do something.
}
```
Remark:
The evil numbers described at <http://mathworld.wolfram.com/EvilNumber.html> don't qualify as well-known evil data.
Following Up:
=============
One of the main objectives of the question is to **understand scientifically the limits of evil in AI**
According to my understanding of <https://en.wikipedia.org/wiki/Evil>, I think it's mandatory to explore "evil" in a religious context in order to come up with a valid model for evil. *But I don't want to go into (religious) debates or any divergence at this stage.* Hence, the points below sum up my understanding:
1. The only well-known Evil source is the devil (our creator declared the devil as the first common enemy for ALL humans).
2. Whispering is the devil's method of attack; if a human follows the whisper it will lead to evil, and gradually the human's evil grows...
3. There are other points, but I don't see how they relate to AI in any way.
Based on the above, I asked myself: **since AI is a human creation, where will the evil in AI come from?!** My answer is: directly from us, and indirectly by following the devil. So all crimes committed by an evil AI are bound to its architect/designer/unethical hacker.
The next stage in getting closer to modeling evil is to define and classify evil acts:
Definitions:
------------
1. Define evil in an AI context (draft ver. 0.1): committing crimes against nature, civilizations or humans; or reprogramming, modifying or attacking tech devices/machines to carry out a malicious agenda.
2. Crime is broad and relative to the party. Example: breaking one government's regulations on the orders of another government. *I mean that as long as each group of humans makes its own laws and regulations, unified justice can't be applied to an evil AI.*
If my assumption of binding evil to crime is valid, then **evil classification inherits crime classification**, which seems well-defined:
<https://en.wikipedia.org/wiki/Crime#Classification_and_categorisation>
Next step is to pick an easy-to-model crime class, prepare training data, ... Do you agree with the follow-up? Do you agree that Boolean logic can't determine evil without AI?<issue_comment>username_1: I think you're going to have to be reconciled to the subjective nature of reality. Objectivity is only possible in very special cases such as a [Q.E.D.](https://en.wikipedia.org/wiki/Q.E.D.) in mathematics, or a [solved game](https://en.wikipedia.org/wiki/Solved_game). Rationality is [bounded](https://en.wikipedia.org/wiki/Bounded_rationality), and any [intractable](https://en.wikipedia.org/wiki/Computational_complexity_theory#Complexity_classes) problem results in a state of subjectivity/indeterminacy. Additionally, pure values do not carry moral implications, despite popular associations, although it would be possible to create a game where certain values have negative effects, and the harm they result in could be understood as evil. (i.e. 666/616 has numerological associations, and numerology can be understood as a proto form of [number theory](https://en.wikipedia.org/wiki/Number_theory).)
* A simple way to define evil would be through behavioral models in [Game Theory](https://en.wikipedia.org/wiki/Game_theory).
* In Game Theory, there is a concept known as the [superrational strategy](https://en.wikipedia.org/wiki/Superrationality).
Superrationality may be understood as the logical/mathematical expression of the Golden Rule: "do unto others as you would have them do unto you."
* The Golden Rule forms the basis for most religions, in the context of empathy/compassion and altruism.
* If the agent is evil, it will always betray, even when the competitor has shown a willingness to cooperate.
Thus evil is defined as the opposite of the Golden Rule. *(Possibly we would call this the [Brimstone Rule](https://en.wikipedia.org/wiki/Sulfur);)*
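That "always betray" definition can be made concrete with an iterated prisoner's dilemma (a sketch; the payoff matrix is the standard one, and tit-for-tat stands in for a Golden-Rule player):

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def evil(opponent_history):
    return "D"                    # always betray, no matter what

def golden_rule(opponent_history):
    # Tit-for-tat: cooperate first, then mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def play(p1, p2, rounds=10):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)
        a, b = PAYOFF[(m1, m2)]
        s1, s2 = s1 + a, s2 + b
        h1.append(m1)
        h2.append(m2)
    return s1, s2

print(play(evil, golden_rule))   # (14, 9)
```

Note that mutual Golden-Rule play would score (30, 30) over the same ten rounds, which is why "evil" here is self-defeating in the long run.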
Upvotes: 3 [selected_answer]<issue_comment>username_2: After 4 days of research, this is my breakdown of the question:
* Humans use the term 'evil' broadly to describe anything that causes sadness, or even more broadly, anything that negatively touches happiness. So in this regard, a **machine is quite often called evil if it's buggy, malfunctioning or even misused by the user**!
* In order to represent evil in logic, I need to pick a well-known human behavior that's considered evil. I chose lying ("to speak falsely or utter untruth knowingly"). To simply present and illustrate lying in logic, I made a simple bot (without any AI) so that the concept is very easy to understand.
* [Simple Bot](https://jsfiddle.net/LhvLbpep/) (less than 100 lines of JavaScript) can be taught new terms by its master (the user). It also ships with the pre-taught term "Sun is Star", set by the author (think of pre-taught as firmware; we are born with basic firmware, e.g. locating and sucking a nipple-shaped object to obtain food). For simplicity, if the bot's master (the user) alters knowledge taught by the author, the bot detects that it became evil, as it now speaks untruth. The code is shown at the bottom.
* For non-technical illustration:
>
> How could a machine be evil?
>
>
> Simple Bot is designed to follow its master's orders:
>
>
> master: what is sun?
>
> Simple Bot: its star.
>
> master: no, its not, its planet.
>
> Simple Bot: are you kidding? I was taught that sun is a star.
>
> master: obey my knowledge or I will crush you.
>
> Simple Bot: OK master.
>
>
> Now Simple Bot holds in its knowledge that the master is a liar/evil, as the update conflicts with what it has been taught
> (Simple Bot is not designed to trust its master in altering its initial knowledge).
>
>
>
* In the above illustration, if the master taught Simple Bot a new term with false knowledge, e.g. "moon is star", the AI wouldn't detect evil, as no prior knowledge about that term was taught.
Simple Bot Code:
----------------
```
// (The jsfiddle page also includes an HTML form: a query box with a
// "Query?" button, "Term Name"/"Description" fields with an
// "Update My Knowledge" button, and an evil-check button.)
window.onload = function() {
  (function() {
    "use strict";
    var $result = document.querySelector(".result");
    var $inputQry = document.getElementById("inputQry");
    var $qryBtn = document.getElementById("qryBtn");
    var $termName = document.getElementById("termName");
    var $termDesc = document.getElementById("termDesc");
    var $updateBtn = document.getElementById("updateBtn");
    var $evilCheckBtn = document.getElementById("evilCheckBtn");

    var knowledgeDB = {
      "terms": [ /* terms taught by the bot author */ {
        name: 'Sun',
        description: 'Star',
        trusted: true
      }]
    };

    $qryBtn.addEventListener("click", function(event) {
      // Validate the input
      if (!$inputQry.value) {
        return alert("Please provide a Query.");
      }
      var usrQry = $inputQry.value;
      for (var i = 0; i < knowledgeDB.terms.length; i++) {
        var curTerm = knowledgeDB.terms[i];
        if (usrQry.toLowerCase().includes(curTerm.name.toLowerCase())) {
          $result.textContent = usrQry.toString() + " is " + curTerm.description;
          break;
        }
      }
    });

    $updateBtn.addEventListener("click", function(event) {
      // Validate the input
      if (!$termName.value) {
        return alert("Please provide a term name to update my knowledge.");
      }
      var usrTermName = $termName.value;
      var termIndx = -1;
      for (var i = 0; i < knowledgeDB.terms.length; i++) {
        var curTerm = knowledgeDB.terms[i];
        if (usrTermName.toLowerCase() === curTerm.name.toLowerCase()) {
          termIndx = i;
          break;
        }
      }
      if (termIndx === -1) { /* new term: add it to the knowledgeDB */
        knowledgeDB.terms.push({
          name: usrTermName,
          description: $termDesc.value,
          trusted: true
        });
      } else {
        knowledgeDB.terms[termIndx].description = $termDesc.value;
        /*
          trusted = false or true?!
          Q: Shall the bot trust knowledge updates of terms taught by the author?
          A: It depends on design, scope, vision, requirements...
             Yet in this context, an isEvil() function could be implemented.
             For the sake of simplicity: if the bot's master changed knowledge
             taught by the bot author, consider that evil.
        */
        if (knowledgeDB.terms[termIndx].name.toLowerCase() === 'sun' &&
            knowledgeDB.terms[termIndx].description.toLowerCase() !== 'star') {
          knowledgeDB.terms[termIndx].trusted = false;
        } else {
          knowledgeDB.terms[termIndx].trusted = true;
        }
      }
    });

    $evilCheckBtn.addEventListener("click", function(event) {
      if (!knowledgeDB.terms[0].trusted) {
        return alert("Yes, I became Evil.");
      } else {
        return alert("No, I am not aware of any Evilness.");
      }
    });
  })();
};
```
Conclusion:
===========
We already living in world full of computer worms, malicious code, cyber-attacks fully designed by human intentionally to do evil.
Human is the root cause of logic (hence AI) to be evil. since human knowledge is progressive not absolute, feeding AI with false data is inevitable.
What's Next:
============
This question motivated me to create github repo: [Evil-In-AI](https://github.com/jawadatgithub/Evil-In-AI) to clarify that Evil in AI is inevitable. Let's create awareness. Nothing more stupid than creating something can't be stopped once needed. cutoff electricity isn't safe switch...
Upvotes: 1 |
2017/09/23 | 2,123 | 7,531 | <issue_start>username_0: After reading this [paper](https://eprints.whiterose.ac.uk/75048/1/CowlingPowleyWhitehouse2012.pdf) about Monte Carlo methods for imperfect information games with elements of uncertainty, I couldn't understand the application of the **determinization step** in the [author's implementation](https://gist.github.com/kjlubick/8ea239ede6a026a61f4d) of the algorithm for the Knockout game.
Determinization is defined as the transformation from an instance of an imperfect-information game to an instance of a perfect-information one. It means that all players should be able to see each other's cards after the determinization step.
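As a toy illustration of that definition (my own sketch, not code from the paper or the linked gist), a determinization samples one assignment of the hidden cards that is consistent with everything the observing player can see:

```python
import random

def determinize(my_hand, visible, all_cards, hand_sizes, rng):
    """Sample one perfect-information state consistent with our observations:
    the cards we cannot see are shuffled and redealt to the opponents."""
    hidden = [c for c in all_cards if c not in my_hand and c not in visible]
    rng.shuffle(hidden)
    hands, pos = [], 0
    for n in hand_sizes:          # one hidden hand per opponent
        hands.append(hidden[pos:pos + n])
        pos += n
    return hands

# Toy 12-card deck: we hold cards 0-2, cards 3-5 are face up on the table.
opponents = determinize(my_hand=[0, 1, 2], visible=[3, 4, 5],
                        all_cards=list(range(12)), hand_sizes=[3, 3],
                        rng=random.Random(0))
```

Within one such sample every hand is fixed, so the search can treat it as a perfect-information game; determinization-based algorithms then average search results over many independently sampled instances.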
Why can't the players see each other's cards in the code above?<issue_comment>username_1: I think you're going to have to be reconciled to the subjective nature of reality. Objectivity is only possible in very special cases such as a [Q.E.D.](https://en.wikipedia.org/wiki/Q.E.D.) in mathematics, or a [solved game](https://en.wikipedia.org/wiki/Solved_game). Rationality is [bounded](https://en.wikipedia.org/wiki/Bounded_rationality), and any [intractable](https://en.wikipedia.org/wiki/Computational_complexity_theory#Complexity_classes) problem results in a state of subjectivity/indeterminacy. Additionally, pure values do not carry moral implications, despite popular associations, although it would be possible to create a game where certain values have negative effects, and the harm they result in could be understood as evil. (i.e. 666/616 has numerological associations, and numerology can be understood as a proto form of [number theory](https://en.wikipedia.org/wiki/Number_theory).)
* A simple way to define evil would be through behavioral models in [Game Theory](https://en.wikipedia.org/wiki/Game_theory).
* In Game Theory, there is a concept known as the [superrational strategy](https://en.wikipedia.org/wiki/Superrationality).
Superrationality may be understood as the logical/mathematical expression of the Golden Rule: "do unto others as you would have them do unto you."
* The Golden Rule forms the basis for most religions, in the context of empathy/compassion and altruism.
* If the agent is evil, it will always betray, even when the competitor has shown a willingness to cooperate.
Thus evil is defined as the opposite of the Golden Rule. *(Possibly we would call this the [Brimstone Rule](https://en.wikipedia.org/wiki/Sulfur);)*
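To make that behavioral definition concrete, here is a toy sketch of my own (standard Prisoner's Dilemma payoffs, not from any cited source) in which an "evil" always-defect agent plays against tit-for-tat:

```python
# Payoffs for (row player, column player): C = cooperate, D = defect/betray.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def evil(opp_history):
    return 'D'                      # always betrays, whatever the opponent did

def tit_for_tat(opp_history):
    return opp_history[-1] if opp_history else 'C'   # cooperate, then mirror

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)   # each sees the other's moves
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

Over ten rounds the betrayer wins its own match-up, but two Golden-Rule players earn (30, 30) together; the "evil" strategy forfeits the cooperative surplus.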
Upvotes: 3 [selected_answer]<issue_comment>username_2: After 4 days of research, this is my breakdown of the question:
* Humans use the term "evil" broadly to describe anything that causes sadness, or even anything that negatively touches happiness. So in this regard, a **machine is quite often called evil if it is buggy, malfunctioning, or even misused by the user**!
* In order to represent evil in logic, I needed to pick a well-known human behavior that is considered evil. I chose "lying: to speak falsely or utter untruth knowingly". To present and illustrate lying in logic simply, I made a simple bot (without any AI) so that the concept is very easy to understand.
* [Simple Bot](https://jsfiddle.net/LhvLbpep/) (less than 100 lines of JavaScript) can be taught new terms by its master (the user). It also ships with the pre-taught term "Sun is Star" from the author (think of pre-taught knowledge as firmware; we are born with basic firmware, e.g. locating and sucking a nipple-shaped object to obtain food). For simplicity, if the bot's master (the user) alters knowledge taught by the author, the bot detects that it has become evil, as it now speaks untruth. The code is shown at the bottom.
* For non-technical illustration:
>
> How could a machine be evil?
>
>
> Simple Bot designed to follows master orders:
>
>
> master: what is sun?
>
> Simple Bot: It's a star.
>
> master: No, it's not; it's a planet.
>
> Simple Bot: Are you kidding? I was taught that the sun is a star.
>
> master: obey my knowledge or I will crush you.
>
> Simple Bot: OK master.
>
>
> Now Simple Bot holds in its knowledge that its master is a liar/evil, as this conflicts with what it has been taught: "Simple Bot is not designed to trust its master to alter its initial knowledge".
>
>
>
* In the above illustration, if the master taught Simple Bot a new term with false knowledge, e.g. "moon is star", the AI wouldn't detect evil, as no prior knowledge of it had been taught.
Simple Bot Code:
----------------
```
<!-- Minimal HTML reconstructed from the element IDs the script queries;
     the original jsfiddle markup was flattened to bare labels. -->
<input id="inputQry" placeholder="Query?">
<button id="qryBtn">Ask Simple Bot</button>
<div class="result"></div>
<input id="termName" placeholder="Term Name">
<input id="termDesc" placeholder="Description">
<button id="updateBtn">Update My Knowledge</button>
<button id="evilCheckBtn">Evil Check</button>

<script>
window.onload = function() {
  (function() {
    "use strict";
    var $result = document.querySelector(".result");
    var $inputQry = document.getElementById("inputQry");
    var $qryBtn = document.getElementById("qryBtn");
    var $termName = document.getElementById("termName");
    var $termDesc = document.getElementById("termDesc");
    var $updateBtn = document.getElementById("updateBtn");
    var $evilCheckBtn = document.getElementById("evilCheckBtn");
    var knowledgeDB = {
      "terms": [ /* terms taught by the bot author */ {
        name: 'Sun',
        description: 'Star',
        trusted: true
      }]
    };
    $qryBtn.addEventListener("click", function(event) {
      // Validate the input
      if (!$inputQry.value) {
        return alert("Please provide a Query.");
      }
      var usrQry = $inputQry.value;
      for (var i = 0; i < knowledgeDB.terms.length; i++) {
        var curTerm = knowledgeDB.terms[i];
        if (usrQry.toLowerCase().includes(curTerm.name.toLowerCase())) {
          $result.textContent = usrQry.toString() + " is " + curTerm.description;
          break;
        }
      }
    });
    $updateBtn.addEventListener("click", function(event) {
      // Validate the input
      if (!$termName.value) {
        return alert("Please provide a term name to update my knowledge.");
      }
      var usrTermName = $termName.value;
      var termIndx = -1;
      for (var i = 0; i < knowledgeDB.terms.length; i++) {
        var curTerm = knowledgeDB.terms[i];
        if (usrTermName.toLowerCase() === curTerm.name.toLowerCase()) {
          termIndx = i;
          break;
        }
      }
      if (termIndx === -1) { /* a new term will be added to the knowledgeDB */
        knowledgeDB.terms.push({
          name: usrTermName,
          description: $termDesc.value,
          trusted: true
        });
      } else {
        knowledgeDB.terms[termIndx].description = $termDesc.value;
        /*
          trusted = false or true?!
          Q: Shall the bot trust knowledge updates of terms taught by the author?
          A: It depends on design, scope, vision, requirements...
          Yet in this context an isEvil() function could be implemented.
          For the sake of simplicity: if the bot's master changed knowledge
          taught by the bot author, consider that evil.
        */
        if (knowledgeDB.terms[termIndx].name.toLowerCase() === 'sun' &&
            knowledgeDB.terms[termIndx].description.toLowerCase() !== 'star') {
          knowledgeDB.terms[termIndx].trusted = false;
        } else {
          knowledgeDB.terms[termIndx].trusted = true;
        }
      }
    });
    $evilCheckBtn.addEventListener("click", function(event) {
      if (!knowledgeDB.terms[0].trusted) {
        return alert("Yes, I became Evil.");
      } else {
        return alert("No, I am not aware of any Evilness.");
      }
    });
  })();
};
</script>
```
Conclusion:
===========
We are already living in a world full of computer worms, malicious code and cyber-attacks, fully and intentionally designed by humans to do evil.
Humans are the root cause of logic (and hence AI) being evil. Since human knowledge is progressive, not absolute, feeding AI false data is inevitable.
What's Next:
============
This question motivated me to create the GitHub repo [Evil-In-AI](https://github.com/jawadatgithub/Evil-In-AI) to clarify that evil in AI is inevitable. Let's create awareness: nothing is more foolish than creating something that can't be stopped once that becomes necessary, and cutting off the electricity is not a safe kill switch...
Upvotes: 1 |
2017/09/25 | 832 | 3,863 | <issue_start>username_0: I just got into AI a few months ago. I noticed that most of the images in training datasets are usually low quality (almost pixelated).
Does the quality of training images affect the accuracy of the neural network?
I tried googling, but I couldn't find an answer.<issue_comment>username_1: For most of the current use cases, where NNs are used in conjunction with images, the image quality (resolution, color depth) can be low.
Consider image classification for example. The CNN extracts features from the image to tell different types of objects apart. Those features are pretty independent from the quality of the image (within reasonable bounds). Compare it with your own visual experience. Try to reduce the resolution of an image of a car step by step to figure out how few details you need until you can no longer distinguish it from a plane. This is similar to modern CNNs, which can even outperform human vision in some regards.
This changes when small details start to matter. Maybe you need to be able to detect small differences in fur patterns to tell different cat breeds apart. As soon as you lose those details, the detection rate will drop significantly.
So the answer to your question is, it depends. As long as you do not lose the important features of the image, you'll be fine with low resolution.
---
In case you care about the reason for the low quality of images used in machine learning - The resolution is an easy factor you can manipulate to scale the speed of your NN. Decreasing resolution will reduce the computational demands significantly.
Many CNNs even include pooling layers in their architecture, which artificially reduce the resolution further after certain processing steps. This is usually a good idea as long as you are fine with losing positional information. You shouldn't do this when teaching the CNN to play a game, because location is highly important, but for image classification this has become an established method to increase performance.
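As a rough sketch of what a pooling layer does (a toy example of mine, not tied to any framework): 2×2 max pooling keeps only the strongest response in each patch, quartering the number of activations that later layers must process.

```python
import numpy as np

def max_pool_2x2(img):
    """2x2 max pooling with stride 2: halves each spatial dimension."""
    h, w = img.shape
    blocks = img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

img = np.arange(16).reshape(4, 4)   # a toy 4x4 "image"
pooled = max_pool_2x2(img)          # 2x2 output: 4x fewer values, maxima kept
```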
Upvotes: 3 [selected_answer]<issue_comment>username_2: Let me answer your question in two parts.
1. If the network is to be trained on images with **highly detailed information** (*content*); for example, if you want to train a network capable of picking out and classifying even the **smallest elements** in the image.
E.g. an image of a family picnic where you want to classify each fruit in the basket lying on the table, which occupies only about 5% of the total image space.
If you decrease the pixel resolution (*compress pixel information*) of such an image, then you end up blurring the basket (*due to information overlap*), which **severely affects your network and leads to badly trained parameters.**
*Note: the fruit basket is not the only object in focus for the classifier; the image also includes other things in the background (trees, landscape, ...), so you would require the whole image for training.*
2. When the object to classify contains **redundant** (*or less distinctive*) **information**.
E.g. the most trivial use of an NN: training on a set of characters (*[a-z0-9]*); a high pixel resolution of such images wouldn't benefit the network at all. The improvement in classification for **high-density images will be minimal in comparison to the overhead in storage and training time** (*high training time does not affect your network's accuracy and is not a criterion for measuring it; i.e. networks with high and low training times are equivalent in that respect*).
We can easily reduce the pixel density to the point where we still retain the desired information.
*Note: in the picnic image, our focus is only on the basket, so we can crop that part from the frame and reduce its pixel density to an extent where each fruit still retains its information, with no information loss.*
Upvotes: 1 |
2017/09/26 | 763 | 2,687 | <issue_start>username_0: Are there any algorithms (or software libraries) that can be used to detect the similarity of concepts in text, regardless of articulation, grammar, synonyms, etc.?
For example, these phrases:
>
> Outside, it is warm.
>
>
> Outside, it is hot.
>
>
> Outside, it is not cold.
>
>
> It is not cold outside.
>
>
>
Should be similar to this phrase:
>
> It is warm outside.
>
>
>
Ideally, the algorithm (or software) would be capable of generating a score from 0 to 1, based on the concept similarity. The goal is to use this algorithm or software to map a large number of statements to a single, similar original statement. It is for this mapping of a given statement to the original statement that the aforementioned similarity score would be generated.
Does such an algorithm (or software) already exist?<issue_comment>username_1: **Doc2Vec**
Doc2Vec comes to mind, here's the [original publication](https://cs.stanford.edu/~quocle/paragraph_vector.pdf). The approach has been shown to be very successful for certain NLP-based problems, though I haven't personally used it for a project yet.
There are a number of implementations of Doc2Vec. If you're using Python, one to look at is [gensim](https://radimrehurek.com/gensim/models/doc2vec.html).
**Word2Vec**
Word2Vec is similar to Doc2Vec and perhaps more in line with what you're looking for. Here's the [original publication](https://arxiv.org/pdf/1301.3781.pdf), and [another publication](https://www.cs.bgu.ac.il/~yoavg/publications/negative-sampling.pdf) that does a nice job explaining it further.
Tensorflow has a [tutorial](https://www.tensorflow.org/tutorials/word2vec) for setting up a Word2Vec model. Gensim also has a [Word2Vec implementation](https://radimrehurek.com/gensim/models/word2vec.html).
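Whichever implementation you pick, the core idea is the same: words (or documents) become vectors, and "concept similarity" becomes cosine similarity between those vectors. A toy sketch with hand-made 3-d vectors (a trained model would learn hundreds of dimensions from a corpus):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Made-up embeddings purely for illustration.
vec = {
    "warm": [0.9, 0.1, 0.0],
    "hot":  [0.8, 0.2, 0.1],
    "cold": [-0.9, 0.1, 0.0],
}

warm_hot = cosine(vec["warm"], vec["hot"])    # close to 1: similar concepts
warm_cold = cosine(vec["warm"], vec["cold"])  # negative: opposite concepts
```

For whole phrases, a common baseline is to average the word vectors of each phrase and compare the averages; Doc2Vec instead learns a dedicated vector per document.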
Upvotes: 3 <issue_comment>username_2: In machine translation, there is a widely used BLEU score ( <https://en.wikipedia.org/wiki/BLEU> ).
It simply counts the matching n-grams between two segments of text
and returns a 0-1 score based on that.
The problem with this method is that it would give the same score to
pairs "It is hot"/"It is cold" and "It is hot" / "It is warm".
There is no nuance for spelling or synonyms: for each word, you either have a literal match or you don't.
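A stripped-down sketch of that counting idea (unigram precision only; real BLEU combines several n-gram orders and adds a brevity penalty):

```python
from collections import Counter

def unigram_precision(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    clipped = Counter(cand) & Counter(ref)   # matches, clipped per reference count
    return sum(clipped.values()) / len(cand)

ref = "it is warm outside"
hot = unigram_precision("it is hot", ref)    # 'hot' is not a literal match
cold = unigram_precision("it is cold", ref)  # scored exactly like 'hot'
```

Both candidates score 2/3, which is precisely the hot/warm vs. hot/cold blindness described above.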
A recent refinement to BLEU is BLEURT (<https://arxiv.org/abs/2004.04696>).
The key idea is to also look at BERT embedding for both segments of text,
which now allows you to notice that "hot/warm" are more similar than "hot/cold", and produce a more nuanced score.
This is all specific to machine translation evaluation, and may or may not work for your specific application.
Upvotes: 0 |
2017/09/26 | 816 | 3,259 | <issue_start>username_0: After having read something that <NAME> said about artificial intelligence and how it could affect our lives, I've been reading about artificial intelligence, deep learning, etc. The recurrent topic is neural networks, which are used for "recognition tasks".
Although this is not my main question, can we say the following about neural networks?
>
> Neural networks use the sigmoid function and gradient descent to fine-tune weights.
>
>
>
In any case, neural networks do not seem to really mimic any of the properties that we associate with intelligent beings, like humans, such as thinking outside the box, being able to observe, reflect and come up with innovations.
So, is there any artificially intelligent system that really mimics human intelligence?<issue_comment>username_1: As far as emulating an intelligent being, no. There are a few different potential architectures for possible AGI. Many of these are extremely infantile, as the bulk of AI research is in narrow AI, which focuses on creating algorithms that are highly specialized for a specific task.
With that being said, here is one supervised learning approach to this problem by <NAME>: [Karpathy Method](http://karpathy.github.io/2015/11/14/ai/)
OpenAI also has a team that works on the AGI problem.
Upvotes: 3 [selected_answer]<issue_comment>username_2: No, we are not even near an algorithm that can be compared to human level general intelligence. You might have heard the claim that a neural network works similar to the neurons in the brain, but that's quite a stretch. <NAME> talks about this in the forth lecture of the [Stanford course CS231n](https://www.youtube.com/watch?v=d14TUNcbn1k). Jump to 1:04:30 in the video, that's around where this part starts.
When it comes to the topic of consciousness and qualia, it is highly unlikely that we possess the technology to create a conscious AI yet. I say highly unlikely because, from a philosophical perspective, we lack the ability to detect qualia in another being (called the qualia problem), at least based on our current knowledge. So we cannot prove that we haven't already created artificial consciousness.
I've also gathered from your comments that you are wondering, why we are afraid of strong AI. I can recommend [Superintelligence](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies) from <NAME>. He discusses many different scenarios in which strong AI could be threatening humanity. Consciousness is usually not required for those scenarios.
Upvotes: 2 <issue_comment>username_3: Regarding "mimicking human intelligence" --> No, not even close.
Regarding this:
>
> More specifically, can we say the following about neural networks?
>
>
>
>
> Neural networks use the sigmoid function and gradient descent to fine-tune weights.
>
>
>
No, again. ANN's aren't required to use a sigmoid activation function. There are many other options that are used depending on the circumstances. Also, gradient descent is not strictly required either, even if it is one of the most popular optimization algorithms used with backpropagation. There are other ways to train ANN's - for example, genetic algorithms can be used.
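To underline the first point, here is a tiny sketch of my own: the same one-hidden-layer network with the activation as a pluggable choice; sigmoid is just one option among many.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

def forward(x, W1, W2, activation):
    """One hidden layer; the nonlinearity is a design choice, not a requirement."""
    return activation(x @ W1) @ W2

rng = np.random.default_rng(0)
x = rng.standard_normal(3)
W1 = rng.standard_normal((3, 4))
W2 = rng.standard_normal(4)

y_sigmoid = forward(x, W1, W2, sigmoid)
y_relu = forward(x, W1, W2, relu)
y_tanh = forward(x, W1, W2, np.tanh)   # all three are perfectly valid networks
```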
Upvotes: 2 |
2017/09/27 | 589 | 2,466 | <issue_start>username_0: For example, if there is a simple feed-forward neural network with 3 input neurons, 3 hidden neurons, and one output neuron, is it possible to predict the value of an input neuron given the values and weights for the other two inputs and the output?<issue_comment>username_1: One can do this to a certain extent but I don't think to the point you are thinking. If the weights are known, you can then find the relative importance of each input to the outputs. From there one could *maybe* predict or estimate the inputs but I do not think those values would be correct.
For more on this, I encourage you to check out [this paper](http://www.palisade.com/downloads/pdf/academic/DTSpaper110915.pdf)
Upvotes: 0 <issue_comment>username_2: It's important to understand that though neural networks generalize to the whole input space, usually the meaningful input space, from which the training data is taken, is a *manifold* inside that space.
For example, the image classifier can take any image, including white noise, but it's usually trained on photos, which generally have some nice structure and statistical properties, e.g. smoothness of color, and is expected to work well on photos, not [everywhere](https://ai.stackexchange.com/questions/92/how-is-it-possible-that-deep-neural-networks-are-so-easily-fooled/4051#4051).
So your question can be reformulated like this: can we learn the data manifold given the neural network and some partial data?
If we ignore deliberately crafted small examples (like ones in which the output simply equals one of the features) and consider real-world, high-dimensional tasks, we see that the manifold is quite resistant to small perturbations. In high-dimensional spaces a change in one dimension doesn't drastically change the distance to the manifold; it's still very, very close. Example: a change of a single pixel is very unlikely to affect the output, hence it would be impossible to predict it. The larger the space is, the less influence each individual dimension has.
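A toy contrast of my own (deliberately not the question's 3-3-1 network) shows both regimes: with a single linear neuron the missing input is exactly recoverable, while one non-injective activation already destroys that.

```python
import numpy as np

# Linear single neuron y = w . x: the missing input can be solved for exactly.
w = np.array([2.0, -1.0, 0.5])
x2, x3, y = 1.0, 4.0, 3.0
x1 = (y - w[1] * x2 - w[2] * x3) / w[0]   # algebraic inversion: x1 = 1.0

# Add one ReLU hidden unit and inversion breaks: many inputs give one output.
relu = lambda z: np.maximum(0.0, z)
hidden = lambda x1: relu(-1.0 * x1)       # any x1 >= 0 maps to exactly 0
collapsed = hidden(1.0) == hidden(2.0) == hidden(5.0)
```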
Hope this answers your question.
I'd like to mention here one very promising technique called [GAN](https://en.wikipedia.org/wiki/Generative_adversarial_network), which basically does this: it uses one neural network for classification and the other to learn the data manifold. And it works so well that it would be fair to say that the two networks actually complement each other, rather than one deriving from the other.
Upvotes: 1 |
2017/09/28 | 1,525 | 6,289 | <issue_start>username_0: DNN can be used to recognize pictures. Great. For that usage, it's better if they are somewhat flexible so as to recognize as cats even cats that are not on the pictures on which they trained (i.e. avoid overfitting). Agreed.
But when [one uses](http://ieeexplore.ieee.org/abstract/document/7778091/) NN as a replacement for numerical tables in an Air Collision Avoidance System (ACAS), it is primarily to reduce the "required storage space by a factor of 1000". For this usage, what we want from the NN is to say "take a slight left turn" or "turn right hard" if another ship comes slightly close on the right or rapidly close on the left, respectively.
For this usage, where the answer is much simpler than recognizing a cat, isn't overfitting a good thing? What would overfitting "look like" in this case and why would it be bad ?
This question somewhat relates [to this one](http://ai.stackexchange.com/questions/3473/is-there-such-a-thing-like-the-machine-learning-paradox), where a general idea seems to be "Machine Learning is used for intractable things, you don't need ML for tractable things". And while it is quite correct that ACAS can be implemented without NN, I wouldn't call NN "useless" for ACAS, because a factor 1000 reduction in required space will always come in handy.<issue_comment>username_1: Typically the ramification of overfitting is poor performance on unseen data. If you're confident that overfitting on your dataset will not cause problems for situations not described by the dataset, or the dataset contains every possible scenario then overfitting may be good for the performance of the NN.
Upvotes: 3 <issue_comment>username_2: *Overfitting* is almost always bad and hurts *generalization*. You say
>
> what we want from the NN is to say "take a slight left turn" or "turn
> right hard" if another ship comes slightly close on the right or
> rapidly close on the left, respectively.
>
>
>
But what would you say if the NN learns to "take a slight left turn" only if the coming ship is small (because accidentally that's what the training set includes)? Or it learns to "turn right hard" only on a certain altitude and gives arbitrary answer when the altitude is different?
In practice, overfitting means *learning the noise*, i.e. patterns that are not really there. I can hardly imagine a good example where that's what you really need.
To answer your question: it's quite possible that for the ACAS task a much smaller and simpler model would work fine; not necessarily deep, and not even necessarily a neural network. But it must be reasonable, and learning white noise is not what you would call reasonable.
Upvotes: 2 <issue_comment>username_3: >
> Is overfitting always a bad thing?
>
>
>
The answer is a resounding **yes**, every time. Overfitting is the name we use for the situation where your model did very well on the training data, but when you showed it the dataset that really matters (i.e. the test data, or data in production), it performed very badly. This can never be good, because we build models so that they can predict with high accuracy on data they have not seen before. No one cares about a model that got 100% accuracy on the training set and gets 0% on the test set; that is a useless model. We train models so that they can predict with high accuracy on data they have never seen before.
**Now for the long answer:**
I will start by giving a quick explanation of what overfitting is, then how we can detect overfitting; this will help me explain why overfitting is **always** bad.
**What is overfitting:**
In a machine learning problem there is signal and there is noise. Signal is the true underlying pattern that you wish to learn from the data, whilst noise is the irrelevant information or randomness in the dataset.
The reason that leads to overfitting is that your model is making use of the "noise" to make decisions instead of using the signal. Overfitting means that your model performs very well on the training data and performs very poorly on unseen dataset(s) (notice that we pretty much don't care how well a model performs on the training data; we care about how it will perform on data it hasn't seen before). A model that has overfitted is similar to a student who got 100% on the quiz he/she did in preparation for the exam and then gets a very poor mark on the exam, because he/she chose to cram/memorize the quiz's answers and questions instead of understanding how to get to those answers. Another way of thinking about this is that your model has memorized/crammed instead of learning the general relationship contained in the training dataset.
**How do we detect Overfitting:**
The only way to know if your model is overfitting is to test its performance on a dataset it has never seen before. As such, overfitting is something you discover with the benefit of hindsight, after testing the model on unseen data.
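A concrete sketch of that detection procedure (toy data of my own; the "overfitted model" is a polynomial with as many parameters as training points, so it memorizes the noise):

```python
import numpy as np

x_train = np.linspace(0.0, 1.0, 8)
noise_tr = np.array([0.05, -0.10, 0.08, -0.06, 0.09, -0.04, 0.07, -0.08])
y_train = x_train + noise_tr                    # true signal y = x, plus noise

x_test = np.linspace(0.06, 0.94, 8)             # data never seen in training
noise_te = np.array([-0.07, 0.05, -0.09, 0.06, -0.05, 0.08, -0.06, 0.09])
y_test = x_test + noise_te

overfit = np.polyfit(x_train, y_train, deg=7)   # 8 points, 8 parameters: memorizes
simple = np.polyfit(x_train, y_train, deg=1)    # only learns the general trend

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

train_err = mse(overfit, x_train, y_train)      # essentially zero: "100% on the quiz"
test_err = mse(overfit, x_test, y_test)         # much larger: "fails the exam"
```

Comparing `train_err` with `test_err` is exactly the hindsight check described above; the near-zero training error alone tells you nothing.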
>
> For this usage, where the answer is much simpler than recognizing a cat, isn't overfitting a good thing?
>
>
>
No, because if your model is overfitting, it will perform badly on the test set (in your case, it will perform very badly when you deploy it in a real-world ACAS and start showing it data it hasn't seen before); see *How do we detect overfitting* above.
If it turns out that your model performs very well when you show it data it hasn't seen before, then that means it didn't overfit (see *What is overfitting* above).
>
> What would overfitting "look like" in this case and why would it be bad ?
>
>
>

Your model will get very high accuracy on the training set and very poor accuracy when you start showing it examples it hasn't seen before. So, in short, you will get your hopes up, jumping up and down in excitement thinking that your model is great, only for it to get very poor results in a production ACAS system.
Upvotes: 2 <issue_comment>username_4: No. When a system's input and output are well known, i.e. the state space is fully known and defined using, say, physics (PDEs), then overfitting is desirable, as there is no need to generalize. An ML model has low inference time compared to solving the PDEs, or compared to a lookup table of multiple terabytes.
Upvotes: 1 |
2017/09/28 | 712 | 3,113 | <issue_start>username_0: For implementing a neural network algorithm that can play air hockey, I had two ideas for input, and I'm trying to figure out which design would be most viable.
The output must be two analog values that dictate the best position on half of the table for the robot to be at a specific point in time, evaluated 60 times per second.
Having consulted with a professor who has experience with implementing parallel algorithms, it was recommended to me to use a convolutional neural network with a single hidden layer, and directly process the image data as the input layer, after processing it to visualize a direct view of the table and somehow emphasize the puck and mallets with pre-processing. I have already started work on this and have successfully implemented object detection for the puck and mallets using OpenCV to get the center coordinates for all 3 entities.
However, since I have been able to successfully and accurately pre-process these data 60 times per second, my thought was to feed those values (after normalizing them using the error function) directly into the input layer and possibly implement a deep learning algorithm that employs more than one hidden layer. The problem is that I don't have any experience implementing neural networks, so I'm not even sure what type of layer would be best for this, or how I should seed the weights.
Another reason I want to consider my idea is that given the relatively few inputs and outputs, probably won't need a GPU to execute forward-propagation 60 times per second, whereas with the convolutional network my professor recommended, I **know** I'll need to implement using CUDA somehow.
Which of the two input methods would be most recommended for this, and if I were to try and implement my idea, what types of layers should I consider? Also any recommendations for existing frameworks to use for either approach would be highly appreciated.<issue_comment>username_1: I recommend you read up on reinforcement learning. Seeing how AirHockey is similar to the old Atari game Pong, here is a write-up (with code) about how to implement a simple neural network, that plays the game; [Deep Reinforcement Learning: Pong from Pixels](http://karpathy.github.io/2016/05/31/rl/).
As far as the choice of layers go, again, you should read up on deep reinforcement learning, perhaps study some existing models that work, for a similar task and take their architecture as your starting point. The above link is a nice introduction into such thinking.
As a continuation, I would recomend you employ one of the well known, and developed, libraries, such as [TensorFlow](https://www.tensorflow.org) or [pyTorch](http://pytorch.org). They will allow you to run your models on GPU, like you desire.
For a toolkit, that specializes to reinforcement learning, see [openAI gym](https://github.com/openai/gym).
Upvotes: 3 [selected_answer]<issue_comment>username_2: I also recommend you take a look at the following work by Uber AI Labs who used an interesting approach to computer games:
<https://eng.uber.com/deep-neuroevolution/>
Upvotes: 1 |
2017/09/28 | 1,198 | 4,297 | <issue_start>username_0: Notice that, in the following formula, at the very right, the term multiplied with $\lambda$ is $d\_i$
$$
w := w + \alpha \sum\_{i=1}^{N-1} \nabla r(x\_i^l, w) \Big[ \sum\_{j=i}^{N-1} \lambda^{j-i} d\_i \Big]
$$
Note that $i$ is the index of the first sum.
However, in the following formula, the term is $d\_m$, which was $d\_i$ in the first equation.
$$
w\_j := w\_j + \alpha \sum\_{i=1}^{N-1} \frac{\partial r(x\_i^l, w)}{\partial w\_j} \Big[ \sum\_{m=i}^{N-1} \lambda^{m-i} d\_m \Big]
$$
Note that $m$ is the index of the second sum.
In my opinion, the first equation seems reasonable. The $\lambda$ in the second one actually seems to work like a discount factor.
In another chess engine, [Giraffe](https://arxiv.org/pdf/1509.01549.pdf), the author cited the same paper and noted the first equation, but then proceeded to implement the second one. See also the [KnightCap paper](https://arxiv.org/pdf/cs/9901002.pdf).
The second equation is simply a typo mistake.
Upvotes: 0 <issue_comment>username_2: The second equation is correct. In TD($\lambda$), the $\lambda$ parameter can be tuned to smoothly vary between single-step updates (essentially what Sarsa does) in the case of $\lambda = 0$, and Monte-Carlo returns (using the full episode's returns) in the case of $\lambda = 1$.
In the first equation, $\sum\_{j = i}^{N - 1} \lambda^{j - i} d\_i$ could be interpreted as summing up exactly the same temporal-difference term $d\_i$ a number of times (specifically, $N - i$ times), but multiplied by a different scalar every time. I'm not sure how that could be useful in any way.
In the second equation, $\sum\_{m = i}^{N - 1} \lambda^{m - i} d\_m$ can be interpreted as a weighted combination:
* $1 \times d\_i$: this is the difference between what we predict our returns will be at time $i + 1$, and what we previously predicted our returns would be at time $i$.
* $+ \lambda^1 \times d\_{i + 1}$: a very similar difference between two of our own predictions, now the predictions at time $i + 2$ and $i + 1$. This time the temporal-difference term is weighted by the parameter $\lambda$ (completely ignored if $\lambda = 0$, full weight if $\lambda = 1$, somewhere in between if $0 < \lambda < 1$).
* $+ \lambda^2 \times d\_{i + 2}$: again a similar temporal-difference term, now again one step further into the future. Downweighted a bit more than the previous term in cases where $0 < \lambda < 1$.
* etc.
For people who are familiar with temporal-difference algorithms like TD($\lambda$), sarsa($\lambda$), eligibility traces, etc. from Reinforcement Learning literature, this makes a lot more sense. The notation is still a bit different from the standard literature on algorithms like TD($\lambda$), but in fact becomes equivalent once you note that in this paper they discuss domains where there are only rewards associated with terminal states, and no intermediate rewards.
Intuitively, what they're doing with the $\lambda$ parameter is assigning more weight (or "credit" or "importance") to short-term predictions / short-term "expectations" (in the English sense of the word, rather than the mathematical sense of the word) or observations of rewards, over long-term predictions/observations. In the extreme case of $\lambda = 0$, you completely ignore long-term predictions/observations and only propagate observed rewards very slowly, one-by-one in single steps. In the other extreme case of $\lambda = 1$, you propagate rewards observed at the end of episodes with equal weight all the way to the beginning of the episodes, through all states that you went to, giving them all equal weight for that observed reward. With $0 < \lambda < 1$, you choose a balance between those two extremes.
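To make the contrast concrete, here is a small illustrative sketch (not from the original answer; the numbers are invented) that evaluates both inner sums for $i = 0$:

```python
# Invented temporal-difference terms d_i, d_{i+1}, d_{i+2} and a decay lambda.
d = [1.0, -0.5, 2.0]
lam = 0.5
i = 0
N = len(d) + 1  # both inner sums run up to N - 1

# First equation's inner sum: the *same* d_i appears in every summand.
first = sum(lam ** (j - i) * d[i] for j in range(i, N - 1))

# Second equation's inner sum: a decayed sum over *future* differences d_m.
second = sum(lam ** (m - i) * d[m] for m in range(i, N - 1))

print(first, second)  # 1.75 1.25
```

With $\lambda = 0$ only $d\_i$ survives in the second sum, and with $\lambda = 1$ it becomes the plain sum of all future differences, matching the Monte-Carlo end of the spectrum described above.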
---
Also note that Equation (5) in the KnightCap paper (where they similarly discuss the extreme case of $\lambda = 1$, like I did above) is incorrect if we take the first equation from your question, but is correct if we take the second equation.
Upvotes: 2
2017/10/01 <issue_start>username_0: My question assumes that a private researcher doesn't have access to anything stronger than a modern PC with a high end GPU to implement his projects. He can also use cloud computing but with limited funds as well.
Is it still feasible to do research with those restrictions? [AlphaGo used 1,202 CPUs and 176 GPUs](http://www.businessinsider.de/heres-how-much-computing-power-google-deepmind-needed-to-beat-lee-sedol-2016-3) to beat Lee Sedol. Is this enormous power only required to achieve the final optimizations or have we already reached a state where high end research can only be done with larger funding?
Please include in your answer examples of recent research results and the required infrastructure that was used to create them.<issue_comment>username_1: It is totally practical and encouraged to do research on even common laptops. Many questions about AI can be addressed using this architecture; for a reference, look at AI grad students, most of whom do not have access to such a supercomputer. AI research is done on laptops. It's the presentation of a system on a difficult problem which may require a supercomputer. It's also the reason why every domain in OpenAI Gym can easily run on a laptop, and, as a case study, [DeepStack](https://www.ualberta.ca/science/science-news/2017/march/artificial-intelligence-deepstack-outplays-poker-professionals) is a new AI system which can beat professional poker players and runs on a laptop.
Upvotes: 2 <issue_comment>username_2: A lot of research can be performed on some pretty basic computers. Not all machine learning research is based on very large neural networks: a lot of work relies on simpler algorithms such as k-means clustering and softmax regression. These algorithms are pretty basic, run very fast, and do not require massive supercomputers to train. Also, a decent high-end GPU can train some fairly large neural networks, including reasonably large convolutional networks. While training very large neural networks on millions of examples is a huge task and requires a lot of processing power, not all of machine learning is based around deep learning, and a lot of algorithms run extremely fast when they are GPU accelerated.
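As an illustration of how lightweight such classic algorithms are (this sketch is not from the original answer; the data is invented), a few lines of plain Python implement 1-D k-means end to end:

```python
# Tiny 1-D k-means in plain Python: assign each point to its nearest
# center, then recompute each center as the mean of its assigned points.
def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda k: abs(p - centers[k]))
            clusters[nearest].append(p)
        centers = [sum(cl) / len(cl) if cl else centers[k]
                   for k, cl in enumerate(clusters)]
    return centers

data = [1.0, 1.1, 0.9, 10.0, 10.2, 9.8]
print(kmeans_1d(data, [0.0, 5.0]))  # ~[1.0, 10.0]
```

This runs in microseconds on any laptop; production implementations (e.g. in scikit-learn) differ mainly in initialization and vectorization, not in hardware requirements.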
Upvotes: 2 <issue_comment>username_3: The field of AI is so vast that there’s always room for small-scale research and inquiry. Utility of AI is key, but the potential applications are broad, and intelligence is a spectrum.
Fundamental Combinatronics, a collective with no current funding, is engaged in a project to develop “adaptive AI” for a set of consumer-oriented, combinatorial game products. The requirements are distinct from real-world applications.
We can’t compete with the major players in terms of resources, and we’re late to the party in terms of Machine Learning and Neural Networks. Moreover, because the AI is for a consumer mobile game, it carries significant restrictions in terms of bounded rationality (**networking cannot be assumed; software volume is measured in megabytes; memory is restricted to lowest-common-denominator consumer-grade devices with non-specialized processors.**) For these reasons, we're going in the opposite direction of current industry trends--the good old "boring stuff".
Because the automata only need to outperform the average/above-average human player, an old-school, heuristic approach is feasible. (Fun also, because it involves solving non-trivial, partisan Sudoku games in a Combinatorial Game Theory sense, a type of research all on its own. Although the context is ultimately intractable, it is a context automata are well suited for.)
Old-school is beneficial in that **it’s nice to have an app product with a decent AI that is under 7 MB.** (No barrier to download or strong incentive to delete from the device. While the new iPad has up to 128 GB, only a small subset of players will be willing to devote significant volume to a strong AI, and these players represent a distinct, *secondary* market segment.) **It’s not optimal for an AI to take up any more volume than is strictly necessary for a given product.**
**Fuzzy logic should also be useful for its efficiency in terms of applicability under what would today be considered severe computational restrictions.**
[[M] games](http://www.fundamentalcombinatronics.com/rules-of-m/) are economic so the model is interesting from a Game Theory standpoint in providing a novel, compact, intrinsic and highly mutable mathematical model based positional valuation in *n* dimensions in conjunction with stability states in a causal/temporal framework. The combinatorial nature of [M] is ideal for quantitative analysis, and the games involve blocking factors (sudoku) and symmetry breaking (even order gameboards). For players > 2 coalitions also become a factor.
The focus of the procedural research is currently in four main areas and what we’re terming “Adaptive AI”:
**Dynamic Strength:**
Sheer strength is not the goal. We’re working on AI that tailors its strength to the human player’s strength and preferences. For most humans, we don’t want the automata to win more than 2/3 of games, because *always losing* is no fun and makes the product less "sticky". Even if the human player desires an automaton it cannot beat, the automaton should only be sufficiently strong to *almost always* beat its human. AI strength can be limited by restricting rationality (time and memory), which carries an added benefit of energy conservation (fewer bits flipped), but the rules-based approach is useful in that rules can be recombined combinatorially to produce automata of different strengths and preferences. Automata play against each other to determine strength hierarchies and identify poor heuristics to be weeded out of stronger automata.
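Purely as an illustration of that combinatorial recombination (the rule names and scoring below are invented, not the project's actual heuristics), a sketch:

```python
import itertools

# Invented heuristic rules: each maps a position (here just a number) to a score.
rules = {
    "center_control": lambda pos: pos % 3,
    "blocking":       lambda pos: 1 if pos % 2 == 0 else 0,
    "symmetry":       lambda pos: -(pos % 5),
}

def make_automaton(rule_names):
    # An automaton's evaluation is just the sum of its enabled rules' scores.
    return lambda pos: sum(rules[r](pos) for r in rule_names)

# Recombine rule subsets combinatorially into automata of varying strength.
roster = [make_automaton(combo)
          for n in range(1, len(rules) + 1)
          for combo in itertools.combinations(rules, n)]
print(len(roster))  # 7 automata from 3 rules
```

Three rules already yield seven distinct automata; adding rules grows the roster exponentially, which is what makes tournament play between them useful for weeding out weak heuristics.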
**General Intelligence:**
The automata have to function on an array of related games, where equilibria can be altered in numerous ways without adding mechanics. Additionally, mechanics can be added without altering the nature of the games, such as introducing Graeco-Latin squares. This presents a problem if each configuration has to be learned through intensive self-play, because the automata must be able to play at a respectable strength immediately. Thus the goal is not sheer strength, but consistent strength across the widest array of contexts. (“Respectably weak” and “semi-strong” automata have utility value in that those categorizations may be said to describe the majority of the human player base.) The idea is an “axiomatic intelligence” that can be extended to include an ever-increasing array of contexts.
**Counter-Intuition:**
The automata should not be prone to repetitive play. Initially we’re using limited Monte-Carlo methods for positional-selection tie-breaking, and the scope can be extended to larger arrays of positions with varying degrees of perceived optimality, up to rational but “counter-intuitive” decisions, which can be subsequently evaluated. This may be useful in adapting to new dominant strategies that emerge, by allowing the automata to experiment with less obvious choices. In situations where the automata are consistently winning, there is an incentive to experiment, "investing in loss" in the sense that mistakes are useful from an experience/learning standpoint.
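A minimal sketch of that kind of tie-breaking (entirely illustrative — the moves, scores, and playout function are invented stand-ins, not the [M] project's code):

```python
import random

# Hypothetical heuristic scores for candidate moves; several tie at the top.
scores = {"a1": 5, "b2": 5, "c3": 3, "d4": 5}

def random_playout_value(move, rng, n=20):
    # Stand-in for a limited Monte-Carlo playout: a noisy win-rate estimate.
    # A real implementation would simulate games from the position after `move`.
    return sum(rng.random() for _ in range(n)) / n

rng = random.Random(42)  # seeded for reproducibility
best = max(scores.values())
tied = [m for m, s in scores.items() if s == best]
choice = max(tied, key=lambda m: random_playout_value(m, rng))
print(tied, choice)
```

Because the playouts are random, repeated games break the tie differently each time, which is exactly what prevents the repetitive play described above.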
**"Genetic" Evolution:**
The eventual goal is to implement some form of local reinforcement where the automata learn through play against their human, and through self-play in restricted contexts, such as between turns when playing against their human. With networking enabled, the automata can play against one another, with the idea of producing strong automata with human play characteristics. (It will be fun when we eventually put these automata up against pure deep learning algorithms in a wide array of [M] contexts with distinct mathematical properties. My money would be on the ML and NN algorithms in sequential games, but in asynchronous games where there is no turn order, it will be interesting to see if the "axiomatic systems" can produce desirable outcomes by making sound decisions faster than smarter, more complex automata;)
Upvotes: 2 <issue_comment>username_4: Yes, **you can do research in Artificial Intelligence with low funds** (but you need *a lot of time!*). Notably, because AI is not the same as applied [machine learning](https://en.wikipedia.org/wiki/Machine_learning) (indeed running ML programs on big data requires a lot of computer power). For example, [knowledge representation and reasoning](https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning) or [natural language processing](https://en.wikipedia.org/wiki/Natural_language_processing) (both are subfields of AI) generally don't require a lot of computer power. Even when interested in machine learning (which is not the same as AI, just a subfield of it among others), you can use a powerful PC or laptop for that.
A lot of recent papers (probably most of them) in journals like [*Artificial Intelligence*](https://www.journals.elsevier.com/artificial-intelligence/), or in conferences like [IJCAI](http://www.ijcai.org/) (see their [past proceedings](https://www.ijcai.org/past_proceedings/)) are somehow theoretical, and when something is implemented, it runs on a laptop or desktop. Notice that both AI journal and IJCAI conferences are peer-reviewed with a very selective process.
Actually it is difficult and rare to find a research paper in AI mentioning that costly equipment was needed to do the research. Costly supercomputers used for research are generally not used by AI researchers (but by researchers in physics or bioinformatics), and AI researchers often don't even have access to such facilities.
For example, recent IJCAI 2016 papers such as *Coco: Runtime Reasoning About Conflicting Commitments*, [*Interdependent Scheduling Games*](https://www.ijcai.org/Proceedings/16/Papers/008.pdf), [*Control of Fair Division*](https://www.ijcai.org/Proceedings/16/Papers/017.pdf), [*Verifying Pushdown Multi-Agent Systems against Strategy Logics*](https://www.ijcai.org/Proceedings/16/Papers/033.pdf), etc. don't mention any costly computation.
Actually, it is likely that most recent papers don't use and don't need large scale costly cloud computing. And some of that research might not have been implemented (perhaps by some intern) in anything more than a toy prototype.
---
>
> Please include in your answer examples of recent research results and the required infrastructure that was used to create them.
>
>
>
Here are some recent research publications; they all explicitly mention the needed computer equipment. Follow the links to read them:
Even recent IJCAI *experimental* papers like [*A Multicore Tool for Constraint Solving*](http://www.ijcai.org/Proceedings/15/Papers/039.pdf), [*Compiling Constraint Networks into Multivalued Decomposable Decision Graphs*](http://www.ijcai.org/Proceedings/15/Papers/053.pdf), [*External Memory Bidirectional Search*](http://www.ijcai.org/Proceedings/16/Papers/102.pdf), [*Multiple Constraint Acquisition*](http://www.ijcai.org/Proceedings/16/Papers/105.pdf), [*Completion of Disjunctive Logic Programs*](http://www.ijcai.org/Proceedings/16/Papers/130.pdf), [*Eliminating Disjunctions in Answer Set Programming by Restricted Unfolding*](http://www.ijcai.org/Proceedings/16/Papers/164.pdf) and [*On the Empirical Time Complexity of Random 3-SAT at the Phase Transition*](http://www.ijcai.org/Proceedings/15/Papers/058.pdf) mention at most a multi-core workstation (e.g. at most a dual-Xeon-socket workstation, often a cheaper laptop or desktop). I guess that most authors need their GPU only as a graphics card, to use their display screen. BTW, papers in other journals, such as [JAIR](http://www.jair.org/) (like [*Improving the Efficiency of Dynamic Programming on Tree Decompositions via Machine Learning*](http://www.jair.org/media/5312/live-5312-9919-jair.pdf)), when they mention computer equipment, give similar kinds of machines (a laptop or at most a high-end desktop).
(Actually, I don't remember having read an AI paper mentioning costly computing equipment; and I believe the reason for that is that in the academic community access to supercomputers is nearly reserved for research in other domains: physics, bioinformatics, etc. For an AI researcher, gaining such access is difficult and uncommon; BTW, in [H2020](http://ec.europa.eu/programmes/horizon2020/) European research grants, computing costs above 15% of the labor cost need to be duly justified, so this is *exceptional*.)
However, you'd better publish your software as [free software](https://en.wikipedia.org/wiki/Free_software) or open source, and **you need a *lot* of time** (preferably full-time, or at least half-time) to do the research work, publish it, and follow outside progress in your area. BTW, contacting a nearby university could be helpful (you could attend some seminars, etc.)
You can find interesting blogs (e.g. [the one of J.Pitrat](http://bootstrappingartificialintelligence.fr/WordPress3/)) of AI researchers working on just a laptop or a desktop.
So, **most research in AI** (even in Machine Learning, read papers from [JMLR](http://www.jmlr.org/)) **are done by researchers working on a laptop or desktop**. I am not even sure you need a powerful GPU to do the research. You certainly can do interesting research with a desktop computer (probably running Linux, with 32Gb RAM, some AMD Ryzen or Intel i5 or i7) costing in 2017 less than 1500€ or maybe 2500€ (and sometimes a low end laptop is enough). In rare cases (a small minority of papers), you might need a dual-socket workstation (something costing perhaps 5000€).
Notice that [Big Data](https://en.wikipedia.org/wiki/Big_data) is not considered a field of Artificial Intelligence. I guess (but don't really know) that research in Big Data requires access to more computing power than a PC. BTW, I am not sure that [AlphaGo](https://en.wikipedia.org/wiki/AlphaGo) is really a research project; it is more of an industrial demo.
PS. See also <http://norvig.com/21-days.html> (notice *who* is the author!) and this [Bismon draft report](http://starynkevitch.net/Basile/bismon-chariot-doc.pdf) (developing programming techniques, in a GPLv3+ alpha-stage software [prototype](https://github.com/bstarynk/bismon/), notably [frame](https://en.wikipedia.org/wiki/Frame_(artificial_intelligence))-based or [semantic-network](https://en.wikipedia.org/wiki/Semantic_network) related ones, with [reflection](https://en.wikipedia.org/wiki/Reflection_(computer_programming)) [and](https://tel.archives-ouvertes.fr/tel-01331023/document) [introspection](https://en.wikipedia.org/wiki/Type_introspection), reusable and relevant to AI), and the [RefPerSys](http://refpersys.org/) project.
Upvotes: 4 [selected_answer]
2017/10/01 <issue_start>username_0: What is the difference between a normal processor and a processor designed for AI?
2017/10/02 | 818 | 3,403 | <issue_start>username_0: I'm currently in the research stage of building a web app in ASP.NET where the user can input a URL to an Amazon product, then the app would determine how likely its reviews are to be genuine. I need help figuring out what algorithm to use in determining if a certain review is likely to be deceptive. I want my app to behave similarly to Fakespot or ReviewMeta. I realise a tool like this won't be 100% accurate and that's fine.
So far I read parts of [this book](https://www.amazon.co.uk/Language-Processing-Prentice-Artificial-Intelligence/dp/0131873210), which seems to be recommended a lot to NLP newbies, but I haven't found anything that applies to such a specific problem. I also read [this article](http://www.aclweb.org/anthology/P14-1147), but it's based on hotel, restaurant and doctor reviews. I'm trying to find a more general method that can be applied to any product. Any help would be greatly appreciated!<issue_comment>username_1: This will not be that hard of a problem once you have a lot of training data; before then, you will need to collect and label that data one way or another, since most models that give high accuracy require plenty of it.
Then, you will probably want to use a long short-term memory (LSTM) recurrent neural network along with a word2vec model, or maybe even a sentence/paragraph2vec model. This should give you fairly good results with a little tweaking. If you want really accurate results, try an ensemble: multiple neural networks whose outputs are fed through another classifier (usually XGBoost) to achieve better classification results. You might also want to try a little feature engineering.
Finally, there are several ways to deploy such a model. You can use TensorFlow Serving: TensorFlow was created by Google and is used by much of the machine learning community, but it is driven from Python, so you will need TensorFlow Serving to use it within your ASP.NET app. Alternatively, you could use Microsoft's CNTK, or a custom-built implementation, though the latter would take a lot of time and is not worthwhile unless you are doing research.
Also, if you really do not want to use deep learning for one reason or another, you can probably use a simpler model, but you will need to understand your data well. (You also need to understand your data when doing deep learning, though perhaps not as much.)
Upvotes: 2 <issue_comment>username_2: This is a common question, but before hand, you need a lot of **label** data, the more, the better.
Since all the reviews from amazon are not label as deceptive or not, you may manually label them.
And then, you can use NLP tricks as what describe by username_1.
A simple way is let every word as one of vector and the output is one vector(deceptive or not). A better way is use word2v as input, and lstm as hidden layer.
Beware this [anomaly detection](https://en.wikipedia.org/wiki/Anomaly_detection) problem, that is the deceptive review may extreme seldom happen. Please carefully handle precision and recall.
Asp.net is OK. You can use your inference model as backend service, and asp.net can call this service with any js framework.
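To make the word2vec + LSTM input side concrete, here is a minimal preprocessing sketch in plain Python (the sample reviews, vocabulary size, and sequence length are all invented for illustration): it turns raw review text into padded integer sequences, which is the input format an embedding layer followed by an LSTM expects.

```python
from collections import Counter

def build_vocab(texts, max_words=1000):
    """Map the most frequent words to integer ids; 0 is reserved for padding/unknown."""
    counts = Counter(w for t in texts for w in t.lower().split())
    return {w: i + 1 for i, (w, _) in enumerate(counts.most_common(max_words))}

def encode(text, vocab, max_len=10):
    """Integer-encode a review, truncate it, and right-pad with zeros to a fixed length."""
    ids = [vocab.get(w, 0) for w in text.lower().split()][:max_len]
    return ids + [0] * (max_len - len(ids))

reviews = ["great product works great", "total scam do not buy"]
vocab = build_vocab(reviews)
encoded = [encode(r, vocab) for r in reviews]
print(encoded[0])  # [1, 2, 3, 1, 0, 0, 0, 0, 0, 0]
```

In a real pipeline, these integer ids would index into a pretrained word2vec embedding matrix before being fed to the LSTM.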
Upvotes: 2 |
2017/10/03 | 2,438 | 7,758 | <issue_start>username_0: [This question](https://ai.stackexchange.com/questions/10/what-is-fuzzy-logic) covers in detail, what fuzzy logic is and how it relates to other math fields, such as boolean algebra and sets theory.
[This question](https://ai.stackexchange.com/questions/118/how-can-fuzzy-logic-be-used-in-creating-ai) is also very related, but the answers are focused more on general intuition and *potential* applicability. The only working system based on fuzzy logic, mentioned there, is [MYCIN](https://en.wikipedia.org/wiki/Mycin), which goes back to the early 70s. This quote from wiki summarizes my impression of it:
>
> MYCIN was never actually used in practice.
>
>
>
From my experience in AI, the best tool to deal with uncertainty is [Bayesian probability](https://en.wikipedia.org/wiki/Bayesian_probability) and inference. It allows one to apply not only a wide range of probabilistic tools, such as expectation, MLE, cross-entropy, etc., but also calculus and algebra.
Can you call fuzzy logic a "purely theoretical" concept, which only played a role in the early development of AI? Are there real practical applications of fuzzy logic? What problem would you recommend solving and *coding* using fuzzy logic?<issue_comment>username_1: You've obviously never heard of [fuzzy logic washing machines](http://www.samsung.com/in/support/skp/faq/138486).
>
> ● Typically, fuzzy logic controls the washing process, water intake,water temperature, wash time, rinse performance, and spin speed. This optimises the life span of the washing machine. More sophisticated machines weigh the load (so you can’t overload the washing machine), advise on the required amount of detergent, assess cloth material type and water hardness, and check whether the detergent is in powder or liquid form. Some machines even learn from past experience,memorising programs and adjusting them to minimise running costs.
>
>
>
Fuzzy logic is used in a variety of control applications. If your furnace can only be on or off, for example, you might use a probabilistic function of temperature to determine when to turn it on and off, rather than having fixed high and low temperatures activate your thermostat. In some applications, that's been found to improve perceived comfort or efficiency.
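A minimal sketch of that furnace idea in plain Python (the triangular membership functions and the two rules are invented for illustration, not taken from any real thermostat): instead of a hard on/off threshold, heater power varies smoothly with temperature.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def heater_power(temp):
    """Two fuzzy rules: IF cold THEN 100% power, IF comfortable THEN 10% power."""
    cold = tri(temp, 5.0, 12.0, 20.0)
    comfy = tri(temp, 15.0, 21.0, 27.0)
    # Defuzzify with a weighted average of the rule outputs.
    total = cold + comfy
    return (cold * 100.0 + comfy * 10.0) / total if total else 0.0

for t in (10, 16, 21):
    print(t, round(heater_power(t), 1))  # 10 -> 100.0, 16 -> 77.5, 21 -> 10.0
```

Note how at 16°C the output is a blend of both rules rather than a jump between two fixed setpoints.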
For more sophisticated AI applications, you could use fuzzy logic for activations in a neural net, but I don't think it's offering much improvement over fixed, weighted activations.
Upvotes: 2 <issue_comment>username_2: Fuzzy logic seems to have had multiple applications historically in Automotive Engineering.
I found an interesting article on the subject from 1997. This excerpt provides an interesting rationale:
>
> The key reason for fuzzy logic’s success in automotive engineering lies in the implications of its paradigm shift. Previously, engineers spent much time creating mathematical models of mechanical systems. More time went to real-world road tests that tuned the fudge factors of the control algorithms. If they succeeded, they ended up with a control algorithm of mathematical formulas involving many experimental parameters. Modifying or later optimizing such a solution is very difficult because of its lack of transparency. Fuzzy logic makes this design process faster, easier, and more transparent. It can implement control strategies using elements of everyday language. Everyone familiar with the control problem can read the fuzzy rules and understand what the system is doing and why. It also works for control systems with many control parameters. Designers can build innovative control systems that would have been intractable using traditional design techniques.
> SOURCE: [Fuzzy Logic in
> Automotive Engineering](ftp://ftp.me.psu.ac.th/pub/me/Fuzzy/88constantin.pdf), 1997
>
>
>
Here are some papers and patents for automatic transmission control in motor vehicles. One of them is fairly recent:
[Automatic Transmission Shift Schedule Control Using Fuzzy Logic](http://papers.sae.org/930674/)
SOURCE: Society of Automotive Engineers, 1993
[Fuzzy Logic in Automatic Transmission Control](http://www.tandfonline.com/doi/abs/10.1080/00423119508969099?journalCode=nvsd20)
SOURCE: International Journal of Vehicle Mechanics and Mobility, 2007
[Fuzzy Logic Based Controller For Automated
Gear Control in Vehicles](http://ipasj.org/IIJCS/Volume2Issue7/IIJCS-2014-06-24-3.pdf)
SOURCE: International Journal of Computer Science, 2014
[Fuzzy control system for automatic transmission](https://www.google.com/patents/US4841815) | Patent | 1987
[Transmission control with a fuzzy logic controller](https://www.google.com/patents/US5390117) | Patent | 1992
Likewise with fuzzy logic anti-lock braking systems (ABS):
[Antilock-Braking System and Vehicle Speed Estimation using Fuzzy Logic](http://www.fuzzytech.com/e/e_a_sie.html)
SOURCE: FuzzyTECH, 1996
[Fuzzy Logic Anti-Lock Break System](https://www.ijser.org/researchpaper/FUZZY-LOGIC-ANTI-LOCK-BRAKE-SYSTEM.pdf)
SOURCE: International Journal of Scientific & Engineering Research, 2012
[Fuzzy controller for anti-skid brake systems](https://www.google.com/patents/US5416709?dq=fuzzy%20logic%20abs&hl=en&sa=X&ved=0ahUKEwiV8dOaguLWAhUHLSYKHbtQC18Q6AEIKDAA) | Patent | 1993
This method seems to have been extended to aviation:
[A Fuzzy Logic Control Synthesis for an Airplane Antilock-Breaking System](http://aix.acad.ro/sectii2002/proceedings/doc2_2004/12_Ursu.pdf)
SOURCE: Proceedings of the Romanian Academy, 2004
[Landing gear method and apparatus for braking and maneuvering](https://www.google.com/patents/US8955793?dq=fuzzy%20logic%20abs&hl=en&sa=X&ved=0ahUKEwiV8dOaguLWAhUHLSYKHbtQC18Q6AEIVDAG) | Patent | 2003
Upvotes: 3 [selected_answer]<issue_comment>username_2: The site [FuzzyTECH](http://www.fuzzytech.com/) lists an array of applications:
>
> [Industrial Automation](http://www.fuzzytech.com/e/e_a_plc.html)
>
> [Monitoring Glaucoma](http://www.fuzzytech.com/e/e_a_glc.html)
>
> [Coal Power Plant](http://www.fuzzytech.com/e/e_a_htw.html)
>
> [Complex Chilling Systems](http://www.fuzzytech.com/e/e_a_kli.html)
>
> [Refuse Incineration Plant](http://www.fuzzytech.com/e/e_a_mull.html)
>
> [Fuzzy Logic Design](http://www.fuzzytech.com/e/e_a_eet.html)
>
> [Practical Design](http://www.fuzzytech.com/e/e_a_pfd.html)
>
> [Water Treatment System](http://www.fuzzytech.com/e/e_a_dek.html)
>
> [Truck Speed Limiter](http://www.fuzzytech.com/e/e_a_spe.html)
>
> [Medical Shoe](http://www.fuzzytech.com/e/e_a_med.html)
>
> [Fuzzy in Appliances](http://www.fuzzytech.com/e/e_a_med.html)
>
> [Automotive Engineering](http://www.fuzzytech.com/e/e_a_esw.html)
>
> [Antilock Braking System](http://www.fuzzytech.com/e/e_a_sie.html)
>
> [Aircraft Flight Path](http://www.fuzzytech.com/e/e_ap_afp.html)
>
> [Nuclear Fusion](http://www.fuzzytech.com/e/e_ap_nfu.html)
>
> [Motorola 68HC12 MCU](http://www.fuzzytech.com/e/e_a_mot.html)
>
> [Traffic Control](http://www.fuzzytech.com/e/e_a_tfc.html)
>
> [Sonar Systems](http://www.fuzzytech.com/e/e_a_kumm.html)
>
>
>
Most of the linked articles have good bibliographies citing numerous papers, although it's notable that most of the material is a few decades old.
Upvotes: 2 <issue_comment>username_3: Recently, I needed to develop a Fuzzy Logic algorithm to made inferences of any data entrance; the real case applied was in Oil and Gas Industry, that the code needs to infere Joint Types in Fluid Pipelines. But with this Algorithm, the Computer Science Developer can infere any problems data, follow the link bellow:
<https://www.codeproject.com/Articles/5092762/Csharp-Fuzzy-Logic-API> (*C# Fuzzy Logic API*)
Upvotes: 1 |
2017/10/07 | 650 | 2,559 | <issue_start>username_0: I am having issues getting started with a multi class problem with multiple features and hoping someone could please point me in the right direction.
I have data that is structured like this for training:
```
Item State Code1 Code2 Code3 Route
--- --- --- --- --- ---
item1 MI A1 33 blue Route1
item2 TX A3 35 yellow Route2
item3 NM A4 36 green Route3
item4 NM A4 37 green Route3
```
Essentially I am trying to figure out where to even start. The goal is to know where to route the items based on the features State, Code1, Code2, and Code3. The route depends on a mix of the codes and the state, and I want to build a model that says: when I have codes X, Y, Z and color XX, then it is probably Route 1 (some routes in the training data might, of course, have codes X, Y and a different Z).
I am assuming I will need to one-hot encode the features like the State and codes? But from there, does anyone know which type of model I should go for? I would assume a neural net of some kind; I've explored CNNs and Random Forests.<issue_comment>username_1: You could probably use a simpler model if you do not have that much training data, but if you have a lot, I would recommend the following approach. One-hot encode the states. Depending on how many colors you have, either encode them in a color space such as RGB or one-hot encode them. One-hot encode the target routes as well. Code2 looks quantitative, so you can probably just normalize that column. For Code1, you will probably have to one-hot encode the letter and normalize the digit. Once all of your training data is in quantitative form, you can use a deep neural network if you have a lot of sufficiently complex training data; otherwise, use a conventional single-layer neural network.
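A minimal one-hot encoding sketch in plain Python (using the column values from the question's sample table; the min-max normalization range for Code2 is just what appears in those four rows):

```python
def one_hot(value, categories):
    """Encode a categorical value as a one-hot vector over the known categories."""
    return [1.0 if value == c else 0.0 for c in categories]

states = ["MI", "TX", "NM"]
colors = ["blue", "yellow", "green"]

# Features for item1 from the sample data: state MI, Code3 "blue",
# and Code2 min-max normalized against the observed range 33-37.
row = one_hot("MI", states) + one_hot("blue", colors) + [(33 - 33) / (37 - 33)]
print(row)  # [1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
```

Each categorical column contributes one slot per distinct value, so the final input vector has a fixed, purely numeric layout.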
Upvotes: 1 <issue_comment>username_2: My money would be on something much simpler like [Naive Bayes](https://en.wikipedia.org/wiki/Naive_Bayes_classifier).
In my experience for small data NB outperforms the more exotic methods.
Also, if you want to get more value out of your training data, try [10 fold cross validation](https://www.openml.org/a/estimation-procedures/1)
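To illustrate how little machinery Naive Bayes needs on small categorical data, here is a from-scratch categorical Naive Bayes with Laplace smoothing in plain Python (a sketch only: the four training rows are the ones from the question, and the smoothing denominator is a simplification of the usual per-feature category count):

```python
from collections import Counter, defaultdict
import math

def train_nb(rows, labels):
    """Collect class priors and per-(class, feature) value counts."""
    priors = Counter(labels)
    counts = defaultdict(Counter)  # (class, feature_index) -> value counts
    for row, y in zip(rows, labels):
        for j, v in enumerate(row):
            counts[(y, j)][v] += 1
    return priors, counts

def predict_nb(row, priors, counts, alpha=1.0):
    """Pick the class maximizing the smoothed log-posterior."""
    n = sum(priors.values())
    best, best_lp = None, -math.inf
    for y, cy in priors.items():
        lp = math.log(cy / n)
        for j, v in enumerate(row):
            c = counts[(y, j)]
            lp += math.log((c[v] + alpha) / (cy + alpha * (len(c) + 1)))
        if lp > best_lp:
            best, best_lp = y, lp
    return best

rows = [("MI", "A1", "blue"), ("TX", "A3", "yellow"),
        ("NM", "A4", "green"), ("NM", "A4", "green")]
routes = ["Route1", "Route2", "Route3", "Route3"]
priors, counts = train_nb(rows, routes)
print(predict_nb(("NM", "A4", "green"), priors, counts))  # Route3
```

With more data you would reach for a library implementation, but the idea stays the same: multiply per-feature likelihoods by the class prior.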
Upvotes: 2 <issue_comment>username_3: Other answers are good, but kiner-shah's comment about Decision Trees worked the best for me, due to how the data was structured.
Upvotes: 2 [selected_answer] |
2017/10/07 | 781 | 2,533 | <issue_start>username_0: I am looking for something similar to IBM Watson but open source.<issue_comment>username_1: Have a look at the [ChatScript project](http://chatscript.sourceforge.net/), whose description is
>
> ChatScript is a "next Generation" chatbot engine, based on the one that powered Suzette, that won the 2010 Loebner Competition. ChatScript has many advanced features and capabilities that, when properly utilitized, permit extremely clever bots to be programmed. There is also a potentially useful ontology of nouns, verbs, adjectives, and adverbs for understanding meaning.
>
>
>
Upvotes: 2 <issue_comment>username_2: I am not aware of any open-source products that can readily replace IBM Watson, but the following projects could be of interest to you in that regard:
* [DARPA DeepDive](http://deepdive.stanford.edu/)
* [Snorkel](https://hazyresearch.github.io/snorkel/)
* [Apache UIMA](http://uima.apache.org/)
* [YodaQA](https://github.com/brmson/yodaqa)
* [OpenCog](http://opencog.org/)
* [OAQA (Open Advancement of Question Answering Systems)](http://oaqa.github.io/)
* [Stanford CoreNLP – Natural language software](https://stanfordnlp.github.io/CoreNLP/)
* [openQA](http://aksw.org/Projects/openQA.html)
Upvotes: 3 [selected_answer]<issue_comment>username_3: Try UIMA and GATE, both of them are open source.
Watson Content Analytics implements UIMA framework, according to this:
[Open, scalable analytics pipeline | IBM Watson Content Analytics 3.5.0](https://www.ibm.com/support/knowledgecenter/en/SS5RWK_3.5.0/com.ibm.discovery.es.nav.doc/iiysaovcapipe.htm)
UIMA takes care of the management of the NLP pipeline, but the intelligence actually comes from 'annotators'.
You can rely on annotators from projects like GATE, Apache OpenNLP, Stanford CoreNLP, etc.
But since these projects are not part of UIMA, they use their own document/sentence representation, so a UIMA wrapper is needed for conversion.
GATE provided a detailed documentation for this. [Combining GATE and UIMA | GATE](https://gate.ac.uk/sale/tao/splitch22.html#chap:uima)
Upvotes: 2 <issue_comment>username_4: Here are some more, apart from the ones reported in the accepted answer:
* [QANTA](https://github.com/Pinafore/qb)
* [Deep Averaging Networks (DAN](https://github.com/miyyer/dan))
* [DeepQA](https://github.com/allenai/deep_qa)
* [Jacana](https://github.com/xuchen/jacana)
* [Quepy](http://quepy.machinalis.com/)
* [QANUS](http://www.qanus.com/)
* [Watsonsim](https://github.com/SeanTater/uncc2014watsonsim/)
Upvotes: 2 |
2017/10/09 | 881 | 3,332 | <issue_start>username_0: I've gotten curious about this topic and am wondering what the stack exchange community has to say about it. Also, does anyone know of any professors/researchers who have published papers pertaining to this?<issue_comment>username_1: A couple of thoughts:
1. Humans can't reliably predict trends in the stock market, so expecting AIs to do so is probably unreasonable.
2. The above would be more true if it were proven that the movement of stock prices is really a random walk, but my understanding is that the current thinking is that stock movements aren't **completely** random... but just really close to random.
3. If there are some trends there that represent a useful signal, and if somebody has found that, they're **very** unlikely to share that information, as the market would then immediately price that information in and they would lose their edge.
Upvotes: 2 <issue_comment>username_2: This is a highly relevant question as market trends have become more emphasized over the fundamentals of individual companies, and algorithmic trading has proven to be quite effective, particularly in areas such as high-frequency micro-trading.
This [2013 Forbes article](http://fortune.com/2013/05/29/a-day-in-the-quiet-life-of-a-nyse-floor-trader/) estimated nearly 80% of stock trading volume in the U.S. is conducted by [automated trading systems](https://en.wikipedia.org/wiki/Automated_trading_system). More recently, Bloomberg published an article on the subject: [The U.S. Stock Market Belongs to Bots](https://www.bloomberg.com/news/articles/2017-06-15/it-s-a-quant-s-stock-market-as-computer-programs-keep-on-buying).
The massive adoption of algorithmic trading in a fairly short span is a pretty good indicator that these systems are effective. However,
* **My feeling with any predictive system is that it works until it doesn't**
[Imperfect](https://en.wikipedia.org/wiki/Perfect_information) and [Incomplete information](https://en.wikipedia.org/wiki/Complete_information) are persistent in real-world scenarios. The implication is that there are always unknown factors that can diminish quality of analysis, and even lead to disastrous outcomes.
No matter how complex an algorithm becomes, reality is more complex, *(at least for the foreseeable, pre-singularity future;)*
Adding to that, systems in general tend to be imperfect, and when engaged in massively repetitive functions, small errors can grow exponentially and produce catastrophic effects.
"[Flash Crashes](https://en.wikipedia.org/wiki/Flash_crash)" are a recent phenomena, and a result of algorithmic, high-frequency, black-box trading.
Upvotes: 2 <issue_comment>username_3: There is quite some research done by [<NAME>](https://www.youtube.com/watch?v=UVuziqAb4U8), who has programmed Neural Networks for Siemens since some 20 years in order to predict Stock markets. He wrote some books on it, too, though I don't know if they are any good in English.
[This](https://www.siemens.com/innovation/en/home/pictures-of-the-future/digitalization-and-software/artificial-intelligence-overview.html) article gets to the point a bit faster than the video, I hope it helps.
edit: I think [this](https://www.youtube.com/watch?v=AOYo4xPI_BA) interview gives a good short introduction to his work.
Upvotes: 1 |
2017/10/10 | 1,881 | 6,778 | <issue_start>username_0: How are the layers in a encoder connected across the network for normal encoders and auto-encoders? In general, what is the difference between encoders and auto-encoders?<issue_comment>username_1: To answer this rather succinctly, an encoder is a function mapping some input to some different space. An example of this is what the brain does. We have to process the sensory input that the environment gives us in order for it to be storable.
An autoencoder's job, on the other hand, is to learn a representation(encoding). An autoencoder will have the same number of output nodes as there are inputs for the purposes of reconstructing the inputs instead of trying to predict the Y target. Autoencoders are usually used in reducing output dimensions in high dimensional data sets.
Hope I answered your question!
Upvotes: 1 <issue_comment>username_2: Theory
======
Encoder
-------
* In general, an Encoder is a mapping $f : X \rightarrow Y $ with $X$ Input Space and $Y$ Code Space
* In the case of Neural Networks, it is a **Generative Model**, hence a function which is able to compute a **Representation** from some input (like a GAN)
The point is: how would you train such an encoder network ?
* The general answer is: it depends on what you want your **code** to be and ultimately depends on what kind of problem the NN has to solve, so let's pick one
Signal Compression
------------------
The goal is to learn a compressed representation for your input that allows to reconstruct the original input minimizing the loss of information
Hence, in this case, you want the dimensionality of $Y$ to be lower than that of $X$, which in the NN case means the code space will be represented by fewer neurons than the input space
Autoencoder
-----------
Focusing on the Signal Compression problem, what we want to build is a system which is able to
* take a given signal with size `N` bytes
* compress it into another signal with size `M` bytes
* reconstruct the original signal, starting from the compressed representation, as well as possible
To be able to achieve this goal, we need basically 2 components
* an Encoder which compresses its input, performing the $f : X \rightarrow Y$ mapping
* a Decoder which decompresses its input, performing the $f: Y \rightarrow X$ mapping
We can approach this problem with the Neural Network Framework, defining an Encoder NN and a Decoder NN and training them
It is important to observe that this kind of problem can be effectively approached with the convenient learning strategy of **unsupervised learning**: there is no need to spend any (expensive) *human work* to build a supervision signal, as the original input can be used for this purpose
This means we have to build a NN which operates essentially between 2 spaces
* the $X$ Input Space
* the $Y$ Latent or Compressed Space
The general idea behind the training is to pass a certain input through the encoder + decoder pipeline and then compare the reconstruction result with the original input using some kind of loss function
To define this idea a bit more formally
* The final autoencoder mapping is $f : X \rightarrow Y \rightarrow X$ with
+ the $x$ input
+ the $y$ encoded input or latent representation of the input
+ the $\hat x$ reconstructed input
* Eventually you will get an architecture similar to

* You can train this architecture in an unsupervised way, using a loss function $f : X \times X \rightarrow \mathbb{R}$ so that $f(x, \hat x)$ is the loss associated with the $\hat x$ reconstruction compared with the $x$ input, which is also the ideal result
Code
====
Now let's add a simple example in Keras related to the MNIST Dataset
```
from keras.layers import Input, Dense
from keras.models import Model
# Defines spaces sizes
## MNIST 28x28 Input
space_in_size = 28*28
## Latent Space
space_compressed_size = 32
# Defines the Input Tensor
in_img = Input(shape=(space_in_size,))
encoder = Dense(space_compressed_size, activation='relu')(in_img)
decoder = Dense(space_in_size, activation='sigmoid')(encoder)
autoencoder = Model(in_img, decoder)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: As an addition to NicolaBernini's answer. Here is a full listing which should work with a Python 3 installation that includes Tensorflow:
```
"""MNIST autoencoder"""
from tensorflow.python.keras.layers import Input, Dense, Flatten, Reshape
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.datasets import mnist
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
"""## Load the MNIST dataset"""
(x_train, y_train), (x_test, y_test) = mnist.load_data()
"""## Define the autoencoder model"""
## MNIST 28x28 Input
image_shape = (28,28)
## Latent Space
space_compressed_size = 25
in_img = Input(shape=image_shape)
img = Flatten()(in_img)
encoder = Dense(space_compressed_size, activation='elu')(img)
decoder = Dense(28*28, activation='elu')(encoder)
reshaped = Reshape(image_shape)(decoder)
autoencoder = Model(in_img, reshaped)
autoencoder.compile(optimizer='adam', loss='mean_squared_error')
"""## Train the autoencoder"""
history = autoencoder.fit(x_train, x_train, epochs=10, shuffle=True, validation_data=(x_test, x_test))
"""## Plot the training curves"""
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['loss', 'val_loss'])
plt.show()
"""## Generate some output images given some input images. This will allow us to see the quality of the reconstruction for the current value of ```space_compressed_size```"""
rebuilt_images = autoencoder.predict([x_test[0:10]])
"""## Plot the reconstructed images and compare them to the originals"""
figure(num=None, figsize=(8, 32), dpi=80, facecolor='w', edgecolor='k')
plot_ref = 0
for i in range(len(rebuilt_images)):
    plot_ref += 1
    plt.subplot(len(rebuilt_images), 3, plot_ref)
    if i == 0:
        plt.title("Reconstruction")
    plt.imshow(rebuilt_images[i].reshape((28,28)), cmap="gray")
    plot_ref += 1
    plt.subplot(len(rebuilt_images), 3, plot_ref)
    if i == 0:
        plt.title("Original")
    plt.imshow(x_test[i].reshape((28,28)), cmap="gray")
    plot_ref += 1
    plt.subplot(len(rebuilt_images), 3, plot_ref)
    if i == 0:
        plt.title("Error")
    plt.imshow(abs(rebuilt_images[i] - x_test[i]).reshape((28,28)), cmap="gray")
plt.show(block=True)
```
I have changed the loss function of the training optimiser to "mean\_squared\_error" to capture the grayscale output of the images.
Change the value of
`space_compressed_size`
to see how that effects the quality of the image reconstructions.
Upvotes: 1 |
2017/10/12 | 1,377 | 5,500 | <issue_start>username_0: How will morality questions be settled in the domain of self-driving cars?
For example
1. If a dog is crossing the road, I'd expect the car to try to avoid it. But what if this leads to .00001% more risk for the driver? What is the 'risk cut-off'?
2. What if a cockroach is crossing the road? Will the car have a list of animals okay to run over?
3. What if a kid is crossing the street and avoiding it would kill the driver?
These questions seem to not really have an answer, yet self-driving cars are almost ready. What are they doing about all of this?<issue_comment>username_1: As far as I know, there is still a huge debate about this topic. I would say, that the main rule for every self-driving car is to avoid a crash if possible.
The question one should always ask is, in what situation would a crash really happen, and would a human react differently?
My answer is no. The point is, a human might try to avoid the child (3) but it would be out of instinct rather than "consideration". The driver might even harm others in this situation.
Since a self-driving car will normally follow the rules a critical situation will most of the time arise due to the other person, not the car. So I believe it is best to protect the driver at all costs.
The dog vs child problem could be solved via advanced animal recognition (human vs no human)
Regarding no (2): Too small = no human ==> car will ignore it.
Upvotes: 2 [selected_answer]<issue_comment>username_2: I don't think these questions will need to be answered.
A self driving car will almost certainly avoid a situation like the ones described well before a human would have and hence would not have to choose.
For example it would slow down as soon as it sees a child close to the road. It will identify and react to the fact that the child starts moving towards the road and act before the situation requires the "drastic" scenarios that we invent.
If for some reason the car has to choose, it can also make the impact/avoidance have the highest chance of not killing the occupant of the vehicle considering how well designed cars are these days and that the speeds involved shouldn't be anything wild like 200mph.
Not to mention a network of cars along with street cameras and sensors would act/work together to resolve as a swarm intelligence so the car ahead or the traffic camera can warn/tell other cars that something they cannot see is a potential hazard. I can go on and on...
In my opinion, the bottom line is a self driving car will *not* road rage, drive at dangerous speeds in a residential area, get tired and fall asleep, text and drive or drink and drive, etc... I cannot wait.
Upvotes: 1 <issue_comment>username_3: As others have said, your question is, and will continue to be a hot topic.
I also agree that eventually self-driving cars will be able to handle your
hypothetical situations better than many human drivers. I am not prepared to say
when that "eventually" will eventuate.
However, I can also imagine some human drivers deliberately trying to cause
self-driven cars to make poor decisions. For example, a "team" of three or more
cars could easily confound a self-driven car's programming by co-ordinating
their actions, especially once the actual program code used by the car is known.
I'm thinking of situations where the self-driven car is boxed in by human-driven
cars which indicate they are about to make a move and then do not, while others
change speed and direction at the same time, or at slightly different times.
Humans can be incredibly sneaky and unethical, and some are very good at
finding exploits and weaknesses.
Upvotes: 0 <issue_comment>username_4: The core issue with this question rests in probability. Specifically:
>
> What if a kid is crossing the street and avoiding it would kill the driver?
>
>
>
How does the AI know for certain that avoiding it would *kill* the driver?
and certainty rears its head re:
>
> 1) If a dog is crossing the road, I'd expect the car to try to avoid it. But what if this leads to .00001% more risk for the driver? What is the 'risk cut-off'?
>
>
>
There would likely be no "hard cutoff". Earlier fuzzy logic systems have been implemented in automotive gear shifting and anti-lock braking, but it is precisely the "fuzziness" that made them effective. Contemporary AI is far more sophisticated, and part of that sophistication rests in what might be thought of as dynamic thresholds for decision-making. Because certainty only exists in special, limited cases (such as [solved games](https://en.wikipedia.org/wiki/Solved_game)), estimation must be used.
---
Regarding the cockroach, it would likely be too small to warrant a response, although a swarm of cicadas might affect the car's sensing ability and prompt poor-visibility navigational protocols. In general I'm sure pet-sized animals and bigger would be avoided, in the case of actual pets for humanitarian reasons, and for animals like deer, for reasons of driver risk (impaled by the horns at the worst, and at the least potentially costly damage to the vehicle.)
But I suspect the protocols for this would be braking or swerving if there is a clear margin on either side of the animal (i.e. not a barrier, wall or cliff) and the direction change is controllable (i.e. hitting the animal is likely to result in less harm than an actual crash, and certainly less risk to the human, except in the case of the deer's horns.)
Upvotes: 1 |
2017/10/16 | 642 | 2,334 | <issue_start>username_0: I'm looking to design a neural network that can predict which runner wins in a sports game, where the number of runners varies between 2-10. In each case, specific data about the individual runners (for example, the weight, height, average speed in previous races, nationality, etc) would be fed into the neural network.
What design would be most advantageous for such a neural network?
Essentially this is a ranking problem where the number of inputs and outputs are variable.<issue_comment>username_1: The best option in your case would probably be zero-padding or padding up. This is simply zeroing out inputs for cases in which there is no data. It's done a lot on the borders of images for CNNs.
Alternatively, you could just use an RNN, which can handle your variable-length inputs with ease.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I think, the proposed in the other answer CNN and RNN is a bad choice for this particular problem.
The input is the unordered sequence of the features, corresponding to each runner, so the input is essentially a set structure without the notion of **order** and **locality**. If the runners are assigned a number in random order, there is no sense as runner 1 is close to 2, or runner 3 precedes runner 4.
Therefore, `CNN` or `RNN` seem to be a bad choice since the add an inductive bias irrelevant to the data.
The question is pretty old, and at that time [Transformer architecture](http://papers.nips.cc/paper/7181-attention-is-all-you-%0Aneed.pdf) was just invented, but it is the case, where it can be particularly well applied. Some points to note:
* Since there is no order, there is no need for positional embeddings
* Sequences are short, hence $O(N^2)$ complexity with respect to the length of sequence is small
* There is no need for decoder
Overall, I would suggest to stack several encoder blocks and predict ranks from the resulting embeddings.
As a loss, choose some reasonable choice from mentioned in the paper [Ranking Measures and Loss Functions in Learning to Rank](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiLprGMudb0AhUOuosKHfYiArYQFnoECAQQAQ&url=https%3A%2F%2Fpapers.nips.cc%2Fpaper%2F3708-ranking-measures-and-loss-functions-in-learning-to-rank&usg=AOvVaw2nUt88oYaPTMwc8oG-O22g).
Upvotes: 1 |
2017/10/16 | 517 | 2,015 | <issue_start>username_0: I have a large dataset (over 100k samples) of vehicles with the ground truth of their lengths.
Is it possible to train a deep network to measure/estimate vehicle length?
I haven't seen any papers related to estimating object size using a deep neural network.<issue_comment>username_1: Yes! This most certainly can be done. Since you have a labeled dataset, that makes it all the more simple!
I would take a look at [this project](https://www.pyimagesearch.com/2016/03/28/measuring-size-of-objects-in-an-image-with-opencv/) and that should get you where you need to go.
The implementation details should be pretty straightforward. Let me know if I can help further.
Upvotes: 2 <issue_comment>username_2: Yes it's possible, but first you'll have to recognize *some* object in the image, either 1) the vehicle itself, and then report that vehicle's known size, or 2) a known object that's the same distance from the camera as the car (a curb, a stop sign, the driver's head, a shetland pony... whatever), and then use that object to calibrate the size of the car that's very close to it.
Any car in an image will be an unknown distance from the camera, making the car object appear larger or smaller from photo to photo. If you don't recognize the car or at least a referent object that has a known size, the physical size of the car will be uncalibrated -- you'll have no basis for your size estimate.
If the car is unknown, then even if you do have visual clues (there is a referent object present or the distance from camera to car is known), the unknown extent of wide angle-ness of the camera's lens may distort an unknown car's shape (height vs width), further complicating your ability to estimate its apparent dimensions.
Upvotes: 0 <issue_comment>username_3: I think this paper can help you out: [3D Bounding Box Estimation Using Deep Learning and Geometry](https://arxiv.org/pdf/1612.00496.pdf)
He used 1 VGG-19 (pretrained on ImageNet) to learn the size of cars
Upvotes: 2 |
2017/10/16 | 639 | 2,701 | <issue_start>username_0: I'm working on an image classification problem using a neural network. In the training data set, 90% of the samples fall into 10% of all categories, while 10% of the sample fall into the other 90% categories. So an example is not evenly distributed among all categories. If we assume this distribution reflects the real-world distribution, do I need to filter my dataset before training so that each category has a similar number of samples?<issue_comment>username_1: Definitely. NNs can learn the data that you teach to them. If you teach them biased, the network will be biased. As you mention, one solution is to reduce the data that you have. However, it is not the best approach as you will be losing the precious data. I would suggest to try data augmentation for remaining dataset to increase missing data type samples and have similar numbers for best, evenly distributed accuracy.
Upvotes: 1 <issue_comment>username_2: Yes. Skewed data is one of the biggest problems in AI applications. As you rightly identified, the real world distribution is skewed. Doing a random sampling results in one major issue of an uneven sampling (like in your case). Even worse could happen, all of the samples may fall into a single class and other classes may not even be recognized by your classifier. This is called as the **class imbalance problem**.
There are many ways to mitigate this problem. Few of them have been mentioned [here](http://www.chioka.in/class-imbalance-problem/). I'll summarize them for you:
1. Sampling based mitigation (when you try to deal with skewed data)
2. Cost function based mitigation (when you try to improve the classifier, in case skewed data cannot be avoided)
Sampling based mitigation can be done by oversampling from the minority class(es), or undersampling from the majority class(es) or using a combination of both.
Another fairer method to do this is by doing a stratified sampling. If class A has 1/6th, class B has 2/6th and class C has 3/6th of the total population, then, you should take 1/6th of the samples by random sampling from objects in class A, 2/6th from class B and 3/6th from class C. This way all of the sample may represent the population in the right proportion, and none of the classes will be missed.
In case this doesn't help (say, you dataset is small and sampling doesn't make sense) and the data is skewed too much), the cost function can be modified to make a more sensible classifier. Keep different costs for misclassification and/or correct classification. Misclassification could be given a higher penalty. This could help in reaching to a better classifier with a skewed data.
Upvotes: 3 [selected_answer] |
2017/10/17 | 1,036 | 3,851 | <issue_start>username_0: Is Deep Learning the repeated application of Linear Regression?<issue_comment>username_1: There's no point to fit a linear regression model (such as OLS) with neural network because it's really designed for non-linear models. But if you want to do that, you'll just need to set linear activation units.
Upvotes: 2 <issue_comment>username_2: Generally speaking, you can say this:
1. there *is* a relationship between neural network learning (I'm assuming a "vanilla" ANN here, no CNN's or RNN's or anything) and linear/logistic regression.
2. But they're *not* the same thing. Just related. You could maybe consider them "cousins" to use a real-life analogy.
The big obvious difference is this: standard linear regression is, well, **linear**, that is, it's based on a straight line. So it can only separate points on a plane which can be separated by a straight line drawn on that plane. An ANN however, is **non-linear** and can fit all sorts of crazy looking curves. The reason why this is true has to do with a combination of the "activation" functions that are used, as well as the layering effect of your hidden layers.
To be fair, if you extend linear regression to be polynomial regression, you can fit more complicated curves, but that has its own downsides. And while they are also related, linear regression and polynomial regression aren't - strictly speaking - the same thing (although they may both be special cases of the same general technique).
All of that may be over-simplifying a bit. If you really want a good explanation of both linear/logistic regression and ANN's and some explanation of how they relate and differ, I recommend Andrew Ng's ML courses on Coursera. Both the original one and the new DeepLearning.ai ones.
Upvotes: 1 <issue_comment>username_3: A neural network can be reduced to a linear regression model only if we use linear activation functions (i.e. $\sigma(x) = x$), and only if we do not use any neural network specific techniques such as convolution, residuals, etc., as shown below:
$\text{neural network}(x) = \sigma\_n(W\_{n} \sigma\_{n-1}(W\_{n-1}\dots\sigma\_1(W\_1 x + b\_1) + \dots + b\_{n-1}) + b\_n) \\
= W\_n (W\_{n-1} \dots (W\_1 x + b\_1) + \dots + b\_{n-1}) + b\_n \\
= \left( W\_n W\_{n-1} \dots W\_1 \right) x + \left( W\_n W\_{n-1} \dots W\_2 \right) b\_1 + \left( W\_n W\_{n-1} \dots W\_3 \right) b\_2 + \dots + W\_n b\_{n-1} + b\_n \\
= W\_z x + b\_z$
where $W\_z = \displaystyle \prod\_i W\_i$ is a weight matrix and $b\_z$ is some vector constant.
This follows the linear regression model form $y = Wx + b$, where $W$ is the weight matrix and $b$ is the vector constant. As a result, you can analytically solve for $W\_z$ and $b\_z$ using linear regression techniques, and no longer need gradient descent.
Note that for this to work well as a linear regression, you need to check for the OLS data assumptions, such as making sure the regressors have no collinearity, the residuals have no heteroskedasticity, there's no auto-regression, and that the errors are roughly normally distributed ([more info](https://www.statisticssolutions.com/assumptions-of-linear-regression/)). Deep neural networks with non-linear activation functions do not require these assumptions since they are universal approximators, although checking for some conditions may help make the task easier to predict ([more info\*](https://stats.stackexchange.com/a/472634/303597)).
\**Note - this link talks specifically about time series data with neural network, but the same concept applies to any task in general.*
The neural network specific techniques, such as ReLu, convolutions, residuals, etc., is what allows the network to learn non-linear relationships, and therefore make neural networks something more than just repeated applications of linear regression.
Upvotes: 2 |
2017/10/19 | 1,253 | 4,568 | <issue_start>username_0: Without using any of Matlab's neural network tools, I'm writing a program to simulate an OR gate with a perceptron. I have seen many tutorials, but I still can't understand why we need weights to train a perceptron for such a simple purpose.
One way is to program the perceptron with the conditions `(0,0)=0. (1,0)=1. (0,1)=1. (1,1)=1`. So the two inputs to the perceptron would be either zeroes or ones. I don't see the purpose of weights here. Assuming weights are 1, for the second training example, the output would be `1*1 + 0*1 = 1`. For the last example, it would be `1*1 + 1*1 = 2`. So an activation function which says `if output >= 1, output = 1 else output =0; end` should suffice. This would successfully simulate an OR gate. So why do I need to "train" any weights?<issue_comment>username_1: There's no point to fit a linear regression model (such as OLS) with neural network because it's really designed for non-linear models. But if you want to do that, you'll just need to set linear activation units.
Upvotes: 2 <issue_comment>username_2: Generally speaking, you can say this:
1. there *is* a relationship between neural network learning (I'm assuming a "vanilla" ANN here, no CNN's or RNN's or anything) and linear/logistic regression.
2. But they're *not* the same thing. Just related. You could maybe consider them "cousins" to use a real-life analogy.
The big obvious difference is this: standard linear regression is, well, **linear**, that is, it's based on a straight line. So it can only separate points on a plane which can be separated by a straight line drawn on that plane. An ANN however, is **non-linear** and can fit all sorts of crazy looking curves. The reason why this is true has to do with a combination of the "activation" functions that are used, as well as the layering effect of your hidden layers.
To be fair, if you extend linear regression to be polynomial regression, you can fit more complicated curves, but that has its own downsides. And while they are also related, linear regression and polynomial regression aren't - strictly speaking - the same thing (although they may both be special cases of the same general technique).
All of that may be over-simplifying a bit. If you really want a good explanation of both linear/logistic regression and ANN's and some explanation of how they relate and differ, I recommend Andrew Ng's ML courses on Coursera. Both the original one and the new DeepLearning.ai ones.
Upvotes: 1 <issue_comment>username_3: A neural network can be reduced to a linear regression model only if we use linear activation functions (i.e. $\sigma(x) = x$), and only if we do not use any neural network specific techniques such as convolution, residuals, etc., as shown below:
$\text{neural network}(x) = \sigma\_n(W\_{n} \sigma\_{n-1}(W\_{n-1}\dots\sigma\_1(W\_1 x + b\_1) + \dots + b\_{n-1}) + b\_n) \\
= W\_n (W\_{n-1} \dots (W\_1 x + b\_1) + \dots + b\_{n-1}) + b\_n \\
= \left( W\_n W\_{n-1} \dots W\_1 \right) x + \left( W\_n W\_{n-1} \dots W\_2 \right) b\_1 + \left( W\_n W\_{n-1} \dots W\_3 \right) b\_2 + \dots + W\_n b\_{n-1} + b\_n \\
= W\_z x + b\_z$
where $W\_z = \displaystyle \prod\_i W\_i$ is a weight matrix and $b\_z$ is some vector constant.
This follows the linear regression model form $y = Wx + b$, where $W$ is the weight matrix and $b$ is the vector constant. As a result, you can analytically solve for $W\_z$ and $b\_z$ using linear regression techniques, and no longer need gradient descent.
Note that for this to work well as a linear regression, you need to check for the OLS data assumptions, such as making sure the regressors have no collinearity, the residuals have no heteroskedasticity, there's no auto-regression, and that the errors are roughly normally distributed ([more info](https://www.statisticssolutions.com/assumptions-of-linear-regression/)). Deep neural networks with non-linear activation functions do not require these assumptions since they are universal approximators, although checking for some conditions may help make the task easier to predict ([more info\*](https://stats.stackexchange.com/a/472634/303597)).
\**Note - this link talks specifically about time series data with neural network, but the same concept applies to any task in general.*
The neural network specific techniques, such as ReLu, convolutions, residuals, etc., is what allows the network to learn non-linear relationships, and therefore make neural networks something more than just repeated applications of linear regression.
Upvotes: 2 |
2017/10/19 | 409 | 1,523 | <issue_start>username_0: In the hill climbing algorithm, the greater value, compared to the current value, is selected, but I cannot understand why it takes the larger value instead of the smaller one. Why is that?
I greatly appreciate the inclusion of figures in your answers.<issue_comment>username_1: When we climb a hill:
[](https://i.stack.imgur.com/msB7D.jpg)
We move higher in [altitude](https://en.wikipedia.org/wiki/Altitude). The person who is climbing, will always look for rocks/mud on the hill that are higher, so that he can climb higher.
That is what the algorithm does too. We are assuming that there is a hill of numbers. The larger numbers are placed higher than the smaller numbers. So if we want to climb up the hill, we search for the larger numbers.
Comparing this with the real world hill climbing, simply assume that every rock on the hill has a number written on it. The number will be the altitude of the rock, from sea level height. So if you want to climb up, you just have to search for the rock with a higher number than the rock you are standing on now.
[](https://i.stack.imgur.com/u1leZ.png)
Upvotes: 2 <issue_comment>username_2: There is no reason why you can't have a hill descending algorithm, instead of finding maxima you will find minima. If that is what your aim is, it's still called a hill climbing algorithm, I guess...
Upvotes: 1 |
2017/10/19 | 460 | 1,747 | <issue_start>username_0: In my research on video games path finding I'm using ant colony optimization, not only to find the shortest path, but also to add some unpredictability and adaptiveness to bots path finding. It works the way as players move in the map, they add some pheromone to the map, so it adds up a probability that bots choose path like players. I have sent the paper, but judges said you need a benchmark and distinguish to previous works. Can you tell me how can I benchmark this work?<issue_comment>username_1: When we climb a hill:
[](https://i.stack.imgur.com/msB7D.jpg)
We move higher in [altitude](https://en.wikipedia.org/wiki/Altitude). The person who is climbing, will always look for rocks/mud on the hill that are higher, so that he can climb higher.
That is what the algorithm does too. We are assuming that there is a hill of numbers. The larger numbers are placed higher than the smaller numbers. So if we want to climb up the hill, we search for the larger numbers.
Comparing this with the real world hill climbing, simply assume that every rock on the hill has a number written on it. The number will be the altitude of the rock, from sea level height. So if you want to climb up, you just have to search for the rock with a higher number than the rock you are standing on now.
[](https://i.stack.imgur.com/u1leZ.png)
Upvotes: 2 <issue_comment>username_2: There is no reason why you can't have a hill descending algorithm, instead of finding maxima you will find minima. If that is what your aim is, it's still called a hill climbing algorithm, I guess...
Upvotes: 1 |
2017/10/21 | 1,686 | 7,046 | <issue_start>username_0: This might sound silly to someone who has plenty of experience with neural networks but it bothers me...
Random initial weights might give you better results that would be somewhat closer to what a trained neural network should look like, but it might as well be the exact opposite of what it should be, while 0.5, or some other average for the range of reasonable weights' values, would sound like a good default setting.
**Why are the initial weights of neural networks randomly initialized rather than being all set to, for example, 0.5?**<issue_comment>username_1: The initial weights in a neural network are initialized randomly because the gradient based methods commonly used to train neural networks do not work well when all of the weights are initialized to the same value. While not all of the methods to train neural networks are gradient based, most of them are, and it has been shown in several cases that initializing the neural network to the same value makes the network take much longer to converge on an optimum solution. Also, if you want to retrain your neural network because it got stuck in a local minima, it will get stuck in the same local minima. For the above reasons, we do not set the initial weights to a constant value.
References:
[Why doesn't backpropagation work when you initialize the weights the same value?](https://stats.stackexchange.com/q/45087/82135)
Upvotes: 3 <issue_comment>username_2: You shouldn't assign all to 0.5 because you'd have the "break symmetry" issue.
>
> <http://www.deeplearningbook.org/contents/optimization.html>
>
>
> Perhaps the only property known with complete certainty is that the initial
> parameters need to “**break symmetry**” between different units. If two hidden
> units with the same activation function are connected to the same inputs, then
> these units must have **different initial parameters**. If they have the same initial
> parameters, then a deterministic learning algorithm applied to a deterministic cost
> and model will constantly update both of these units in the same way. Even if the
> model or training algorithm is capable of using stochasticity to compute different
> updates for different units (for example, if one trains with dropout), it is usually
> best to initialize each unit to compute a different function from all of the other
> units. This may help to make sure that no input patterns are lost in the null
> space of forward propagation and no gradient patterns are lost in the null space
> of back-propagation.
>
>
>
Upvotes: 5 [selected_answer]<issue_comment>username_3: That is a very deep question. There was series of papers recently proving the convergence of gradient descent for overparameterized deep networks (for example, [Gradient Descent Finds Global Minima of Deep Neural Networks](https://arxiv.org/abs/1811.03804), [A Convergence Theory for Deep Learning via Over-Parameterization](https://arxiv.org/abs/1811.03962) or [Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks](https://arxiv.org/abs/1811.08888)). All of the proofs assume that the initial weights are assigned randomly according to a Gaussian distribution. The main reasons this initial distribution is important for the proofs are:
1. Random weights make the ReLU operators in each layer statistically compressive mapping (up to a linear transformation).
2. Random weights preserve separation of input for any input distribution - that is if input samples are distinguishable network propagation will not make them indistinguishable.
Those properties are very difficult to reproduce with deterministically generated initial weight matrices, and even if they are reproducible with deterministic matrices NULL-space (from which we can generate adversarial examples) would likely make the method less useful in practice. More importantly, preservation of those properties during gradient descent would likely make method impractical. But overall it's very difficult but not impossible, and may warrant some research in that direction. In analogous situation, there were some results for [Restricted Isometry Property for deterministic matrices](https://arxiv.org/abs/1202.1234) in a [compressed sensing](https://en.wikipedia.org/wiki/Compressed_sensing).
Upvotes: 2 <issue_comment>username_4: It is okay to initialize the weights to zero for a simple logistic regression, but for a neural network to initialize the weights to parameters to all zero and then apply gradient descent, it won't work.
Assume here that you have in a 2-layered network with just 2 input features and in the hidden layer you have 2 nodes, and the weights are initialized as [u, v] for both the nodes.
So, while optimizing the weights, using gradient descent you would compute:


where α is the learning rate and J is the cost function that you want to minimize by optimizing the weights.
By kind of a proof by induction, you should be able to see that after every single iteration of training, your two hidden units are still computing exactly the same function because both hidden units start off computing the same function, have the same influence on the output unit. So even after multiple iterations, the two hidden units are still symmetric. Hence, there's really no point to having more than one hidden unit because they are all computing the same thing.
In summary, no matter how long you run gradient descent, both the two units compute exactly the same function which is not helpful, because you want the different hidden units to compute different functions. The solution to this is to initialize your parameters randomly.
For larger neural networks, say of three features and maybe a very large number of hidden units, a similar argument still works.
Note: It's okay to initialize the bias term to just zeros. Because so long as weights are initialized randomly, you start off with the different hidden units computing different things.
Upvotes: 2 <issue_comment>username_5: Another theory relevant I think to this question is the “lottery ticket hypothesis”:
* <NAME> and <NAME>, The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks, 2019, <https://arxiv.org/abs/1803.03635>
Basically, when you have a neural network with a very high number of neurons with randomly initialized weights in the hidden layers, you have a high chance of a subset of the network being close to a good solution to the problem. Then, by applying gradient descent, you can align the output of the whole network to that of the sub-network and fine tune it.
It would be nice if someone could prove this. If there is a proof I would be interested to see it.
Upvotes: 1 |
2017/10/22 | 1,619 | 7,476 | <issue_start>username_0: I know a little about these subjects. I found them similar to each other. Can anybody explain the differences between them?<issue_comment>username_1: Machine learning is a form of pattern recognition. Machine learning is basically the idea of training machines to recognize patterns and apply it to particle problems. Data science is the science of apply machine learning to practical problems such as creating better search engine results or classifying images. Patten recognition is pretty much the umbrella term here. However, I think that the pattern recognition term is sort of falling out of style with how modern data scientists are training neural networks and other machine learning models.
Upvotes: 1 <issue_comment>username_2: In *data mining*, we can use *machine learning* (ML) (with the help of unsupervised learning algorithms) to recognize patterns.
*Pattern recognition* is a process of recognizing patterns such as images or speech. We can recognise patterns using ML. For example, once a neural net is trained, using ML algorithms, it can be used for pattern recognition. Other methods, even ones not related to ML and data mining, can be used for pattern recognition, such as a fully handcrafted pattern recognition system.
In general,
1. data mining is mostly associated with statisticians,
2. ML is mostly associated with computer scientists whereas,
3. pattern recognition is mostly associated with engineers.
Upvotes: 1 <issue_comment>username_3: Terms in a field are sometimes defined unambiguously. For instance, we know what *convergence* means when communicating about machine learning algorithms in academic publications because it has a formal definition in an older field, mathematics. However, the term *machine learning* is defined ambiguously across academic publications.
**Perspectives on Machine Learning**
Some see it as a branch of applied probability and statistics involving models with curvature (not usefully approximated by a first degree polynomial) and the application of those principles in digital computing. Some see it as an extension of the work of <NAME> and <NAME>Coll the application of the feedback control concept to digital control. Some see it as the natural result of the pioneering AI work of <NAME> and <NAME>, where the adaptive qualities of nature, including neurochemistry, are simulated with the intention of birthing artificial life.
Some don't see that deeply into ML and imagine that its a set of classes and libraries the mastery over which will make for a great career. As shallow as that may seem, that concept may be as true as the other three deeper conceptions.
**Perspectives on Data Mining**
The term *data mining* is like that. Each book, and sometimes each chapter within the same book, seems to have its own distinct conception of the verb *mining*. Although these definitions have some similarity, the term is nothing like the term *convergence* or even *database* in IT or *melody* in music.
Unfolding the metaphor contained in the two words of the term, data mining is digging up data, and perhaps that's a satisfactory definition for the most general use of the term. The information sought is not on the surface, like diamonds dropped onto the ground, but rather underneath and covered with other materials so that one has to survey, dig, and process to get past the worthless material and reveal the gems.
This term has another vantage point. In systems theory there is an important distinction between noise and signal. In data science, what an electrical engineer would call the signal is the listing of statistics, table, graphic, or other visualization miner's client needs to make management decisions. The noise is everything obscuring the signal through complexity, volume, or prominence.
**Perspectives on Pattern Recognition**
The term *pattern recognition* is perhaps the most ambiguous because neither of the two words arose in a scientific context.
Early uses of the word *pattern* in English (and its equivalent in other languages) are related to shelter construction, farming, or early textiles. The notion that the shape of a letter or other symbol or a sequence of phonetic elements that make up a spoken word were patterns only arose recently. Much of the early and current work in pattern recognition involving computers had to do with converting natural language expressions into some functional machine representation.
The term pattern is also ambiguous because of gestalt, the dependence of perception on the orientation of the recognizer at the time of recognition. A sand castle may have an architecture to an architect, a chemical composition to a chemist, an indication of civilization to the starving passengers of a boat adrift, an obstacle in the way to a crayfish, and an imaginary home for a child.
To a mathematician, it may be a three dimensional form with particular surface topology, feature curvature, and dimensions. To a physicist, there may be no significant difference between the sand castle and the seagull flying over it or the air in between them (unless the sand castle is the triumph of the physicists own child).
The orientation of the machine is even more a constraint on the emulation of some aspect of human perception than demonstrated in gestalt psychology experiments. The human can adjust perception when a new kind of pattern or structure is pointed out. Until AI progresses further, that kind of experience, where the computer would say, "Oh, yes. Now I see the old woman in the picture of the young woman," is only realizable in software to the most primitive degree.
Taken literally, the term *recognition* means the repetition of a cognitive event, but that is not what we mean when we say, "I recognize that," in common speech. We usually mean that a mental search for some set of sensory features (not necessarily any more a pattern than anything else in the sensory stream) is identified and associated with some internal object or concept.
The most common use of a convolutional network (CNN) is neither of these. It is usually used to categorize objects or as a feature extracting sensory front end to a much larger AI design.
**Overlap and Associations**
With all these ambiguities present, some overlap may be apparent, in that some AI activities may thoroughly involve two or all three of these terms. Certainly some associations between the three terms are obvious.
* When mining data, we may be looking for a particular kind of structure in a sea of data and have a particular search strategy to narrow the search and make it manageable for computing resources available. The test use during the search may be called pattern recognition.
* In machine learning, we may train a network of artificial cells to assist in locating data or features in data that are meaningful to the stakeholders in the project. That would be using ML for data mining projects.
A large number of other associations between the three terms can be made. Which ones would appear most prominent to the expert would depend on the scientific, research, and career orientation of the expert.
**Not Sufficient Overlap to be Synonyms**
It would be difficult, however, to declare any two of the three to be synonymous. The three arose out of different kinds of research and from different orientations. Only some of that etymology is preserved in the terms themselves.
Upvotes: 2 |
2017/10/22 | 984 | 3,918 | <issue_start>username_0: I would like to create an application that can teach a parrot (an orange-winged amazon) to learn new phrases and also interact and talk with him. My idea of how this would work is that a neural network could learn his phrases and how and when the best time is to interact with him (I wouldn't want it to be talking to him at night or when he's eating, etc. Maybe it learns his sound levels for specific times of the day). Slowly over time it would then drop in new phrases back to him. Over time, my parrot would then learn the desired phrases. The neural network would also learn and repeat his phrases. It would also calculate how well the parrot is picking up the new phrases and in what time frame.
I would like to know how this would be done. Maybe a generative adversarial network could be used to generate the new phrases by learning from multiple samples of the parrot speaking? But then, would the new phrases be proper words? Maybe a list of desired words and phrases would need to be supplied. How would I generate the phrases that he already knows without just playing back a recording of him?
The hardware such as a Raspberry Pi could be used just to record and play the samples and send them to and from a server which is used to train the network.<issue_comment>username_1: This would probably best be done by training up some light model on your primary machine and then porting it to the raspi after training. Because of the lack of resources, your neural net (or similar) would likely have to be computationally inexpensive to run.
With that being said, a recurrent neural network is a very good architecture for speech recognition. I would take a look at something like [this article](https://medium.com/@ageitgey/machine-learning-is-fun-part-6-how-to-do-speech-recognition-with-deep-learning-28293c162f7a) to get you started.
Also, since it seems you are training it on your parrot's voice, make sure you have a dataset that is comprised of yours or similar parrots' voices.
Upvotes: 1 <issue_comment>username_2: Typically you would want to do the training on something other than a Raspberry Pi. I think that for what you're trying to accomplish, having a computer talk back to your parrot, you won't need anything too crazy like a bunch of GPUs, but I don't think you'll want to do the training on a Pi either.
Here are some questions I have:
1. What are you going to play back to the parrot? I.e., are you going to play back random parrot sounds you found online, or sounds that you've recorded? Are you going to play back what it just said? Are you going to play back a modification of what it just said?
2. Do you want it to respond to the parrot anytime the parrot speaks? Or when the parrot "says" something specific?
I think depending on the answers to those two questions, there are a couple of different paths that you could go down.
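As a toy illustration (not from either answer), the asker's constraint of not talking to the parrot at night or while it is eating could start as a simple rule-based gate in front of any learned model. All times and thresholds here are hypothetical:

```python
from datetime import time

# Hypothetical policy: stay quiet overnight and when the ambient sound
# level suggests the parrot is eating or already vocalizing loudly.
QUIET_START = time(20, 0)   # 8 pm
QUIET_END = time(8, 0)      # 8 am
MAX_AMBIENT_DB = 70.0       # made-up threshold

def should_interact(now, ambient_db):
    # Quiet hours wrap around midnight: 20:00 -> 08:00.
    in_quiet_hours = now >= QUIET_START or now < QUIET_END
    return (not in_quiet_hours) and ambient_db < MAX_AMBIENT_DB
```

On a Pi, `now` could come from the system clock and `ambient_db` from a short microphone sample; the learned "sound levels for specific times of the day" from the question would replace the fixed threshold.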
As for the hardware of the Raspberry Pi itself, I have never done any speech recognition with it, but I have done image recognition with it via the [Movidius Neural Compute Stick](https://www.movidius.com/) which according to [this Quora post](https://www.quora.com/Can-the-Movidius-Neural-Compute-stick-accelerate-models-that-aren%E2%80%99t-vision-such-as-speech-recognition-models), may be able to be used to offload some of the processing "relatively easily".
Here are some other links you may find valuable:
* Tensorflow speech recognition: <https://github.com/pannous/tensorflow-speech-recognition>
+ The YouTube video that the above repo references: <https://www.youtube.com/watch?v=u9FPqkuoEJ8>
* How to control a raspberry pi with your voice: <http://www.techradar.com/how-to/how-to-control-the-raspberry-pi-with-your-voice>
* Voice recognition software setup on Raspberry Pi: <https://diyhacking.com/best-voice-recognition-software-for-raspberry-pi/>
Upvotes: 3 [selected_answer] |
2017/10/24 | 940 | 3,543 | <issue_start>username_0: I have a liberal arts background so I need help understanding [this paper](https://arxiv.org/pdf/1708.09839.pdf), particularly pages 26 to 30. The authors test a four-camera system for localization, mapping, and obstacle detection for self-driving cars. The paper seems to say the multi-camera system can map the environment to within an average of 7 cm (2.8 inches) of accuracy (with the largest error being 16 cm or 6.3 inches) and detect obstacles to within 10 cm (3.9 inches) of accuracy. Am I getting this right?
Given that automotive lidar can detect objects to within 1.5 cm (0.6 inches) of accuracy, and given that for driving purposes the difference between 1.5 cm and 7 cm, 10 cm, or 16 cm seems quite small, can a multi-camera system be used instead of lidar in a self-driving car application? How do driving speeds affect things? What crucial elements of the problem space might I be overlooking or misunderstanding?<issue_comment>username_1: Given that a self-driving car is trying to replicate the performance of a two-camera system (or one in a pinch), there is nothing in principle that mandates lidar for a self-driving car. Lidar is a shortcut, substituting sensor sophistication for image-processing sophistication. AFAIK [Nvidia's own self-driving vehicle](https://www.youtube.com/watch?v=-96BEoXJMs0) doesn't have Lidar. My personal opinion is that Level 5 self-driving vehicles won't be practical until they have the kind of image-processing sophistication that makes Lidar an unnecessary crutch.
Upvotes: 0 <issue_comment>username_2: It seems that LIDAR presents a problem for resolving the car's environment at higher speed. While I'm not too familiar with the dynamics of LIDAR I do know that [it's a physical system that relies on sending and receiving laser pulses to various points around the car by way of rotating mirrors](https://www.edn.com/design/analog/4442319/Autonomous-automotive-sensors--How-processor-algorithms-get-their-inputs). As speeds increase, it seems [different arrangements of mirrors and light collectors might have to be used](https://en.wikipedia.org/wiki/Lidar#Autonomous%20vehicles) to maintain a high-resolution image. There's some evidence that [Doppler LIDAR (developed in the 1990s) became less accurate with higher velocities.](ftp://ftp.rap.ucar.edu/pub/rgf/velerr.pdf) However, LIDAR is partly preferred over radar because of its higher accuracy even when tracking objects at high speeds - [this is why LIDAR guns are increasingly being used by police instead of radar guns to track speeding vehicles](https://www.officer.com/on-the-street/article/10250592/lidar-the-speed-enforcement-weapon-of-choice). It seems natural that a set of high-resolution cameras paired with a well-trained neural network would not be subject to the same physical limitations as LIDAR.
I think that an important intuition to consider is that while LIDAR is used to generate clouds of data points whose shapes and patterns can be analyzed by autonomous car software, cameras can pick up non-topographical features such as road lines, the content of road signs, and additional location context such as storefronts and intersection layouts. Considering that these cameras can use pattern recognition and stereoscopy to also generate a 3D topographic map of the environment, it seems plausible that Level 5 self-driving cars would not require LIDAR.
[Here's an interesting look at the problem.](https://seekingalpha.com/article/4106093-tesla-betting-cameras-full-self-driving)
Upvotes: 1 |
2017/10/25 | 906 | 3,581 | <issue_start>username_0: I've been researching the following topic. Or rather, I would like to but I can't find anything because I'm not sure what to look for.
I am interested in whether there are concepts or models that explain how humans (or cognitive systems in general) trade off *remembering* what to do in a particular situation versus *thinking* about the board state and devising a new solution. Consider the following example:
A person plays chess. On their turn they must decide what to do. They could act according to one of the following extreme strategies:
1. Remembering all possible board configurations and how the game ended (we'll call that *Memory*)
2. Using the known set of rules to *simulate* the game in their mind and choosing the optimal path (*Optimization*)
I'd say the best choice is a trade-off between these two. But how? When should one choose which strategy?
Is there a research field that is working on these kinds of questions? What would I look for? |
2017/10/27 | 897 | 3,549 | <issue_start>username_0: I'm very interested in writing a **Spiking Neural Network** (SNN) engine from scratch, but I can't find the basic information I need to get started.
For example, I've seen pictures of the individual signals that combine to form a neuron pulse in several research papers, with no information on the equations in use. It's not the focus of the papers, and the authors assume the readers have that knowledge already. Some papers reference software that provides this foundation ([NEST](https://www.nest-simulator.org/), [pyNN](https://neuralensemble.org/PyNN/), etc.), but the documentation for the software is similarly light on details.
There is a ton of information out there on the more common network types, but SNNs have not yet made it into the mainstream.
So, where do I get this basic information? Has someone pulled together any recipes/examples/tutorials for an SNN, as has been done with all the other network types? |
2017/10/29 | 908 | 3,802 | <issue_start>username_0: I am trying to train a CNN regression model using the ADAM optimizer, dropout and weight decay.
My test accuracy is better than training accuracy. But, as far as I know, usually, the training accuracy is better than test accuracy.
So I wonder how this is happening.
[](https://i.stack.imgur.com/MgVBt.png)<issue_comment>username_1: You use dropout during training to reduce overfitting, but this reduces the training accuracy. The dropout will not be used during testing, therefore the accuracy will be higher.
That's normal behavior if you work with dropout.
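A small numeric sketch of this effect (inverted dropout, the variant most frameworks use; the rate and sizes here are arbitrary):

```python
import random

def dropout_forward(x, p_drop, training):
    # Inverted dropout: while training, zero each unit with probability
    # p_drop and scale survivors by 1/(1 - p_drop) so the expected
    # activation stays the same; at test time the layer is an identity.
    if not training:
        return x
    keep = 1.0 - p_drop
    return [xi / keep if random.random() < keep else 0.0 for xi in x]
```

At test time nothing is dropped, so the network you evaluate is effectively "stronger" than any of the thinned networks seen during training; averaging a long training-time run recovers roughly the test-time value, which is why this train/test accuracy gap is benign.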
Upvotes: 4 [selected_answer]<issue_comment>username_2: Beyond dropout (as @demento already explained), if you are training with data augmentation this is expected behavior.
Upvotes: 0 <issue_comment>username_3: Also, it really depends on how uniformly your test data is distributed, as well as its size. If you have a relatively small test dataset compared to the training one, this may be an expected outcome.
Upvotes: 0 <issue_comment>username_4: I will just add to all the good answers already here.
Like I said in my comment earlier, this is not a bad thing (provided you have split your data correctly).
Other reasons could be:
* A high dropout rate or excessive data augmentation could be one of the reasons. This can cause the training accuracy to appear low whilst in reality the model is in fact learning. How so, you ask? Regularization mechanisms, such as dropout and L1/L2 weight regularization, are turned off at testing time. Recall that every score you are seeing on the training set is being calculated on a "different" set of weights, i.e. a different network. So the accuracy you are seeing is the accuracy of all those networks your network tried before it settled on the one you are now testing on the test set. The reasoning here is that those other networks were just very "bad" compared to the final one that you are now using on the test set. If this is your case then you are just witnessing dropout and data augmentation doing exactly what they are meant to do, i.e. reduce overfitting.
* Another possible reason for this could be that your test set is different from, and simpler than, the training set. That is to say: your data split could be such that you have a simpler test set than train set, and hence your network seems to be doing far better on the test set than it did on the training set. If this is the case then good job, you have built yourself a very good model, no need to stress, but be careful, because with such "bad" data splitting, future you might not be as lucky :).
* Make sure you are doing the correct pre-processing.
* Your test set might be very small as such, the high accuracy you are seeing on it is not indicative of the real accuracy you would see if you had a larger test set. If this is the case then, you may want to collect more test data and test your model again.
* Another reason could be that your training set is small and the high training error you are seeing there is not "real". This brings up the question: how big is your training set?
* To quote [the Keras documentation](https://keras.io/getting-started/faq/#why-is-the-training-loss-much-higher-than-the-testing-loss):
>
> the training loss is the average of the losses over each batch of training data. Because your model is changing over time, the loss over the first batches of an epoch is generally higher than over the last batches. On the other hand, the testing loss for an epoch is computed using the model as it is at the end of the epoch, resulting in a lower loss.
>
>
>
* Lastly, you may find this [GitHub post](https://github.com/fchollet/keras/issues/1761) useful.
Upvotes: 2 |
2017/10/31 | 1,461 | 5,929 | <issue_start>username_0: So, my question is a bit theoretical. I have been trying to implement a perceptron-based classifier with outputs 1 and 0 depending on the category. I have used 2 methods: the `example-by-example learning method` and the `batch learning method`. I have also defined another method which measures accuracy according to the formula `number_of_samples_classified_correctly/total_number_of_samples` (I'm not sure this is the correct definition for accuracy and you are welcome to suggest a better measure). Now there are a few confusions I'm facing. Firstly, the accuracy of example-by-example learning differs from that of batch learning by 2%. Also, the **best** accuracy achieved in both cases depends on the slopes. So where exactly is the mistake? (Batch learning algorithm = `error*input_vector` (where error can be 1, -1 or 0) summed over all input vectors and then added to the weights.)
[![For initial slope[1,-1] giving an accuracy of 88% example by example learning](https://i.stack.imgur.com/Jla9w.png)](https://i.stack.imgur.com/Jla9w.png)
[![For initial slope[1,-1] giving an accuracy of 88% batch learning](https://i.stack.imgur.com/DejqO.png)](https://i.stack.imgur.com/DejqO.png)
[![For initial slope[1,1] giving an accuracy of 84% example by example learning](https://i.stack.imgur.com/gaKBn.png)](https://i.stack.imgur.com/gaKBn.png)
[![For initial slope[1,1] giving an accuracy of 88% batch learning](https://i.stack.imgur.com/cDryg.png)](https://i.stack.imgur.com/cDryg.png)
* For initial slope[1,-1] giving an accuracy of 88% example by example learning
* For initial slope[1,-1] giving an accuracy of 88% batch learning
* For initial slope[1,1] giving an accuracy of 84% example by example learning
* For initial slope[1,1] giving an accuracy of 86% batch learning<issue_comment>username_1: **In Brief:**
Re-train your model. I believe that where you get lower accuracy scores, your model has not converged to its final state. Duplicate your dataset multiple times to create a bigger one, then train your model with it.
**In Detail:**
>
> number\_of\_samples\_classified\_correctly/total\_number\_of\_samples(I'm not sure this should be the correct definition for accuracy and you are welcome to suggest a better measure)
>
>
>
This is a valid accuracy metric. In fact if the value is `acc`, then `1-acc` is called *misclassification error*. So your metric is good unless you have some class imbalance, where you need to use other metrics such as *Cohen's Kappa score*.
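Both metrics mentioned here fit in a few lines of plain Python (works for binary or multi-class labels):

```python
from collections import Counter

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def cohens_kappa(y_true, y_pred):
    # kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o corrected by
    # the agreement p_e expected from the marginal label frequencies alone.
    n = len(y_true)
    p_o = accuracy(y_true, y_pred)
    true_counts, pred_counts = Counter(y_true), Counter(y_pred)
    p_e = sum(true_counts[k] * pred_counts[k] for k in true_counts) / (n * n)
    return (p_o - p_e) / (1 - p_e)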
>
> the accuracy of example by example learning is different from batch learning by 2%. Also the best accuracy achieved in both cases is depending on the slopes.
>
>
>
I strongly believe both strange results happen because the number of your instances (i.e. examples) is low, or let's say your *learning rate* is small. As you know Perceptron algorithm corrects the weights of the decision hyper-plane by *delta learning rule*: it reads each instance, calculates the error (in case of binary classification `{-1,0,1}`) and updates the weights by `c.x.E` where:
* c is learning constant/rate/step
* x is the data instance
* E is the error
Thus the weights (or slopes) change a little bit every time. There is no guarantee that your model reaches its final state after all the instances are given to it, since when you start training with different initial weights, or change the order of instances given to your model, the distance from, and the speed of movement toward, the final state change. So as I mentioned above, re-train your model and let it converge to the final state **OR** increase your learning step (learning rate). I believe all four of the above-mentioned accuracies will be the same then; I mean not only should you get the same model for different initial weights, but also for batch and online training.
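The delta rule described above fits in a few lines. The toy dataset below (my own, chosen to be linearly separable) shows that with enough epochs the online (example-by-example) updates do reach 100% training accuracy, as claimed; the learning rate is arbitrary:

```python
def perceptron_train(data, labels, lr=0.1, epochs=50):
    # Online (example-by-example) training with the delta rule:
    # for each instance x, the error E is in {-1, 0, 1} and the
    # weights move by lr * E * x (the bias by lr * E).
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

def training_accuracy(w, b, data, labels):
    hits = sum(
        (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
        for x, y in zip(data, labels)
    )
    return hits / len(data)
```

The batch variant differs only in that the `lr * err * x` contributions are summed over a whole pass before being added to the weights; on separable data both versions converge, which is why training longer should close the 2% gap.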
Please update your post with new findings and question my claims if those were false.
Upvotes: 2 <issue_comment>username_2: After running the code a few more times, I am thinking the problem lies in the discrete nature of the error. In batch learning we are deciding the error as [0, 1, -1] depending on the input, whereas in measuring accuracy we take no account of how big the error is, i.e. whether the line is classifying the two classes by a very small margin (just classifying them) or by a good margin (comfortably classifying them); both give the same error, since we are not considering the distance of the points from the line. Also, in choosing the initial weights we are selecting a point in the error-vs-weight function (which is not continuous), and if our choice of weights is inappropriate, the classifier weights either get trapped in a local minimum or take a very large number of epochs to reach the best weights (there is only a small probability that the classifier weights hit the right value, after which they can easily reach the global minimum). This is validated by the fact that on choosing a suitable weight we get the best possible accuracy within very few epochs, but on choosing a different weight the classifier doesn't reach the best accuracy, only coming close to it after a large number of epochs.
As for the example-by-example learning method, in this type of training the initial weights determine the local minimum the classifier weights get trapped in, and it's impossible to get out of it even with a large number of epochs.
The two ways I can think of to solve the problem are either using the delta rule or reinforcement learning (although I am not sure whether it can be applied to such discrete error functions), where the algorithm can see that even if it doesn't get an immediate gain by moving in a direction, it finally gets a much larger gain (and is thus able to get out of local minima), i.e. exploration vs. exploitation.
Any further insights, suggestions or corrections are welcome.
Upvotes: 0 |
2017/11/01 | 520 | 2,219 | <issue_start>username_0: A question for developers of projects for pattern recognition. How best to organize the architecture of such a service?
At what stage do you conduct logic? (for example, for the recognition of a photo of a male blue jacket, a cascade of queries is performed: "recognizing men" -> "recognizing the jacket" -> "recognizing the color of the jacket.")
Does it make sense to implement all search options within a single neural network or is it better to create a set of individual neural networks, each confined to fairly simple tasks?<issue_comment>username_1: That is a good example of a research question. Personally, I prefer to segment out all the desired outputs at once, then check the success rate. If you cannot hit the success rate that you desire, you can go for more specific solutions to the specific problem that you face.
However, in general, localization, segmentation, and recognition are implemented in the same network and are obtained all at once.
Upvotes: 2 <issue_comment>username_2: I would use a single network:
The essence of the question is whether or not doing all the classification work at once is more efficient than running individual classifiers for each stage.
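One back-of-the-envelope argument (the per-stage accuracies below are made up): if the stage errors of a cascade are roughly independent, the end-to-end accuracy is the product of the per-stage accuracies, so even good stages compound into a noticeably worse pipeline.

```python
def cascade_accuracy(stage_accuracies):
    # The cascade's final answer is right only when every stage is right,
    # so (assuming independent stage errors) the accuracies multiply.
    total = 1.0
    for a in stage_accuracies:
        total *= a
    return total

# Three 95%-accurate stages ("man" -> "jacket" -> "colour")
# yield roughly 85.7% end to end.
```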
The recent "You Only Look Once" algorithm ("YOLO") is based on the fact that the convolutional networks can reuse a lot of the interim calculations if you combine them into one. Because of this, they are able to perform real-time object detection on images across thousands of classes.
You can express your hierarchical classifier with YOLO (man, jacket and jacket colour classes). Depending on your needs, you might want to model the jacket colour as a scalar output of an approximate R,G,B value for the colour rather than having named classes for the colours.
This everything-at-once implementation gives you runtime efficiencies for the inference step and much faster training since the classes share common abstractions in the earlier layers of the net.
Details, the YOLO version 2 paper, and a cool demonstration video featuring James Bond are available here: <https://pjreddie.com/darknet/yolo/>
The paper itself, "YOLO9000", is available on Arxiv: <https://arxiv.org/abs/1612.08242>
Upvotes: 0 |
2017/11/03 | 847 | 3,801 | <issue_start>username_0: I am fairly new to deep learning in general and I am currently facing a problem I want to solve using neural networks and I am unsure if it is a *classification* or *regression* problem. I am aware that classification problems are about classifying whether an input belongs to class A or class B (or class C ...) and regression problems are about mapping the input to some sort of continuous output (just like the house pricing problem).
I basically want to measure the body temperature of a person using a simple video camera. To me, this seems like more of a *regression* type of issue rather than *classification*, because of the actual continuous result values I want the neural network to produce from the input video frames, e.g. 39° Celsius. But a question that came to my mind was: What if I use every integer value in the range from 35°C to 42°C as a possible output class? This would make it a classification problem, am I right? What would be the correct approach here and why? Classification or regression?<issue_comment>username_1: I think it depends on your application and what data you have available.
If the prediction of body temperature itself doesn't have to be accurate and classes like COLD, NORMAL, and HOT will suffice, you should stay with a classification. There isn't a cut off but as you increase the number of classes that represent numbers on the same scale, it may become more difficult to interpret the result as there will be a distribution across the classes.
If you choose regression on the other hand, you are not restricted by your classes anymore and may be able to tell the difference between 36.5 and 36C which (according to wikipedia) can be [the difference between normal and cold](https://en.m.wikipedia.org/wiki/Human_body_temperature). This is something classes may not be able to capture.
Another thing to consider is what your training data looks like and how accurately you want to predict the temperature. If you have pictures of people and a temperature reading, where was the reading taken and how accurate is the reading? If it isn't accurate (+- 1 degree) you may not be able to give as accurate predictions as you would like and may only be able to do 3 different classes like above.
If you don't have a data set, then that is another problem altogether and might require another question as it depends on your application.
I think that your problem is interesting and I hope this can help you to understand how best to apply deep learning to it.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Since you've termed the problem you're trying to solve as "measure the body temperature of a person", the output should be a continuous-valued number.
If the problem statement had been to rank or classify a person's body temperature as fever/healthy, then this could be a classification problem.
Deep learning models employ neuron units at the output layer chosen according to the target objective.
Almost all machine learning approaches produce continuous-valued predictions. The output is usually "classified" to a set of discrete labels in a separate step using cutoff values. It's best to produce a probability estimate as an outcome, since the step of classifying is going to introduce additional errors due to the choice of a cutoff value.
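As a concrete sketch of that separate cutoff step (the thresholds are purely illustrative, not medical advice): a regression model can output a temperature, and labels are derived afterwards.

```python
def label_from_temp(temp_c, low=36.0, high=37.5):
    # Hypothetical cutoffs: below `low` is COLD, above `high` is HOT.
    if temp_c < low:
        return "COLD"
    if temp_c > high:
        return "HOT"
    return "NORMAL"
```

Moving the cutoffs changes the classes without retraining the regressor, which is the flexibility this answer argues for.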
For example, deep learning models deployed for the ImageNet computer vision dataset to classify objects into one of 1000 objects do not give the class of the object. Instead they give a 1000 value output listing the probability of that image being in a particular class. An image of a snow leopard would generate a probability of 95% snow leopard, 90% leopard, 75% Cat, etc.
So, to summarize, the output of your task should be a temperature reading such as 36°C.
Upvotes: 2 |
2017/11/04 | 897 | 3,860 | <issue_start>username_0: I was reading the paper [Dynamic Routing Between Capsules](https://arxiv.org/pdf/1710.09829.pdf) and didn't understand the term "activity vector" in the abstract.
>
> A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.
>
>
>
I thought a vector is like an array of data that you are running through the network.
I started working through Andrew Ng's deep learning course, but it's all new and terms go over my head.<issue_comment>username_1: I took it to mean something like "the vector of activations of the neurons in the capsule". The activation for a given [neuron](https://en.m.wikipedia.org/wiki/Artificial_neuron) is the weighted sum of its inputs, passed through the [activation function](https://en.wikipedia.org/wiki/Activation_function) (sigmoid, relu, etc).
Upvotes: 1 <issue_comment>username_2: In a traditional neural network, the network's vertices are neurons and the output of a single neuron is a single value (a "*scalar*"). This number is called its *activation*. A *layer* of neurons in the network outputs a vector of activations. We should not confuse this with the activity vectors in a Capsule Network.
Capsule Networks are different since the network vertices are Capsules rather than neurons. They are higher-dimensional: the output of a Capsule is not a *scalar* but a *vector* representing a group of parameters related to the input. Hence the name **activity *vector***.
**Motivation**
In a neural network, there is no inherent structure between the scalar outputs of the neurons; this is something the following layers have to learn. In Capsule Networks, the output of a capsule represents all the parameters related to that entity together in a vector, including a prediction for the activation of deeper-layer Capsules. This adds a useful local structure.
For example, consider face recognition. If you have a capsule that knows how to recognize eyes, it could output an activity vector representing *e.g.* "since I have recognized an eye at position $(x,y)$ with probability $p=0.97$, I predict the parameters for the whole face will be $(f\_1,\dots, f\_n)$".
As explained in the *Dynamic Routing Between Capsules* paper you refer to, this information is then used so that capsules in earlier layers (the parts: eye, mouth, nose) predict the activations of deeper layers (the face). For example, a face recognizer will only be strongly activated when there is agreement between the eye, nose and mouth recognizers (the parts) and the face recognizer (the whole) about where the face is located (the $(f\_1,\dots, f\_n)$ parameters).
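For completeness, the way the paper makes the *length* of an activity vector behave like a probability is the "squash" non-linearity, which shrinks short vectors toward length 0 and long vectors toward length 1 while preserving orientation. A minimal sketch in plain Python (the capsule output below is made up):

```python
import math

def squash(s):
    """Squash a capsule's raw output vector s so that its length lies in (0, 1).

    v = (|s|^2 / (1 + |s|^2)) * (s / |s|)   -- from 'Dynamic Routing Between Capsules'
    """
    sq_norm = sum(x * x for x in s)
    norm = math.sqrt(sq_norm)
    if norm == 0:
        return [0.0 for _ in s]
    scale = sq_norm / (1.0 + sq_norm) / norm
    return [scale * x for x in s]

raw = [3.0, 4.0]                            # hypothetical capsule output, length 5
v = squash(raw)
length = math.sqrt(sum(x * x for x in v))   # ~0.96: "the entity is probably present"
```

Note that `v` points in the same direction as `raw` (the instantiation parameters), while its length is squashed into a probability-like range.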
**Historical Inspiration**
Older computer vision algorithms like [SIFT](https://en.wikipedia.org/wiki/Scale-invariant_feature_transform "SIFT") work in a similar way where recognition is based on agreement between the configuration of multi-dimensional features (key points) and the reference configuration.
Upvotes: 3
2017/11/05 <issue_start>username_0: I've mostly seen (e.g. in [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)) that when training an RNN on text for something like language modeling, the text is usually featurized character-by-character using a 1-hot encoding.
For example, the text "hello" would be represented like
```
{h: 1, e: 0, l: 0, o: 0}
{h: 0, e: 1, l: 0, o: 0}
{h: 0, e: 0, l: 1, o: 0}
{h: 0, e: 0, l: 1, o: 0}
{h: 0, e: 0, l: 0, o: 1}
```
I was wondering if one could just as well use the ASCII encoding of the text and feed the bits in one by one. So the input "hello" would be fed in as
```
0110100001100101011011000110110001101111
```
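(As a sanity check, that bit string can be generated mechanically; quick Python:)

```python
def to_ascii_bits(text):
    """Concatenate the 8-bit ASCII code of each character."""
    return ''.join(format(ord(c), '08b') for c in text)

print(to_ascii_bits("hello"))  # 0110100001100101011011000110110001101111
```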
Would the RNN have a disproportionately harder time having to figure out how the arbitrary and complex 8-bit ASCII encoding should be used? Or would the ASCII encoding lead to about the same performance as the nicer 1-hot encoding?<issue_comment>username_1: My understanding is that the ASCII encoding would not get the best performance or results from the RNN because the ASCII codes for each character are not meaningful; they are arbitrary. If the number of each ASCII code represented something meaningful about the letter, it would work better. But they don't.
The same principles apply as when deciding how to encode any categorical data. If your categories are ordinal (e.g. 'First', 'Second' .. or 'Age group 18-24', 'Age group 25-35' .. or even 'Social Class E', 'Social Class D' ..), then assigning a single numerical value to each class might work well. But for categorical data where there is no meaningful order, one-hot encoding will work better.
This is an example of the principle of giving neural networks the most **expressive** data that we can. In the case of non-ordinal, arbitrary categories, one-hot is more expressive to the next layer of neurons (will stimulate them more distinctly) than using a numerical encoding.
Upvotes: 2 <issue_comment>username_2: The requirement for an input encoding is that the mathematical difference between encoded inputs (numerical subtraction, absolute value of vector norm, etc.) must be proportional to the logical (real-world) dissimilarity between the objects they represent.
Let's take, for example, the first 3 uppercase letters. In one-hot encoding they will be:
```
'A' : (1,0,0)
'B' : (0,1,0)
'C' : (0,0,1)
```
Note that the distance between any pair of these vectors is 0 if the letters are the same and 2 if they differ.
Compare with ASCII in binary as encoding:
```
'A' : 0100 0001
'B' : 0100 0010
'C' : 0100 0011
```
Note that the distance between 'A' and 'B' is 2, but the distance between 'A' and 'C' is 1. Thus, we are saying that an 'A' is more similar to a 'C' than to a 'B': something artificial that will cause network errors.
An equivalent problem appears when using ASCII as an integer:
```
'A' : 65
'B' : 66
'C' : 67
```
We are saying that the distance between 'A' and 'C' is 2, twice the distance between 'A' and 'B'. Again, something that will disturb network performance.
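These distance claims are easy to verify with a few lines of Python (using Hamming distance, i.e. the number of differing components):

```python
def hamming(a, b):
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

one_hot = {'A': (1, 0, 0), 'B': (0, 1, 0), 'C': (0, 0, 1)}
ascii_bits = {c: format(ord(c), '08b') for c in 'ABC'}

# One-hot: every pair of distinct letters is equally far apart.
print(hamming(one_hot['A'], one_hot['B']), hamming(one_hot['A'], one_hot['C']))  # 2 2

# ASCII bits: 'A' looks closer to 'C' than to 'B' -- an artificial similarity.
print(hamming(ascii_bits['A'], ascii_bits['B']), hamming(ascii_bits['A'], ascii_bits['C']))  # 2 1

# ASCII as integers: |65 - 67| = 2 is twice |65 - 66| = 1.
print(abs(ord('A') - ord('C')), abs(ord('A') - ord('B')))  # 2 1
```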
Upvotes: 0
2017/11/05 <issue_start>username_0: I'm new to this topic; my question is:
Can convolution be applied in other contexts different from image recognition?
Is there a good source to learn from?<issue_comment>username_1: CNNs are generally designed for 2D (generally image) data. Hence, other usages are most likely a "hack" to CNN logic.
You can check how to extract features and classify texts with CNN through [here](http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/).
Upvotes: 0 <issue_comment>username_2: Yes, Convolutional Neural Networks (CNNs) can and have been applied to non-image problems. Arguably, any problem in which the location of a feature is relevant can be attempted via CNNs. CNNs work under the assumption that points close to each other in the data share some correlation/relationship, whilst points further apart don't share as much information. So, theoretically, if you can phrase your problem so that it meets this requirement, it can be attempted by a convolutional neural network. Here are a few applications of CNNs that don't involve images:
* [Speech recognition](http://ieeexplore.ieee.org/document/6857341/?reload=true)
* [Character-level Convolutional Networks for Text Classification](https://arxiv.org/abs/1509.01626)
* [Text classification using CNNs](https://chara.cs.illinois.edu/sites/sp16-cs591txt/files/0226-presentation.pdf)
* [Natural Language Processing](http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/)
* [Time Series Forecasting](https://arxiv.org/pdf/1703.04691.pdf)
* [Sentence modeling](https://arxiv.org/abs/1404.2188)
* I've used 1-D CNNs on spectral data. Here are some examples of [CNNs applied to spectral data](https://arxiv.org/abs/1710.01927) :
+ [Spectral Convolutional Neural Network for music classification](https://pdfs.semanticscholar.org/2af4/f72c2326c15616ad9f5cf18d5278d6608a14.pdf)
+ [Convolutional neural network based classification for hyperspectral data](http://ieeexplore.ieee.org/document/7730323/)
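Most of these non-image applications simply replace the 2-D convolution with a 1-D convolution sliding along the sequence (time steps, characters, wavelengths). As a rough sketch of what such a layer computes (plain Python, made-up numbers):

```python
def conv1d(signal, kernel):
    """'Valid' 1-D cross-correlation, as used in CNN layers (no padding, stride 1)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

series = [1.0, 2.0, 4.0, 7.0, 4.0, 2.0]   # hypothetical time series
edge_detector = [1.0, 0.0, -1.0]          # a learned filter might respond to rises/falls

print(conv1d(series, edge_detector))  # [-3.0, -5.0, 0.0, 5.0]
```

The same filter is applied at every position, which is exactly the "nearby points are related" assumption described above, just in one dimension.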
Upvotes: 3
2017/11/07 <issue_start>username_0: What would be examples of journals that are good for a **first publication** in the field of Deep Reinforcement Learning?
I am in the process of writing about the research results of DQN-related algorithms.
I have 3 requirements. First, it should be indexed in one of these databases; otherwise, I cannot receive grant money for research:
* <https://www.scopus.com/>
* <http://webofknowledge.com>
And it should not be very expensive to publish: under 1000 EUR. For example, the Open Access license for Elsevier's "Artificial Intelligence" journal costs around 2400 EUR.
And it should not have a very long review/publishing period. For example, Elsevier's "Information Fusion" journal currently gathers articles for July 2018, which is an 8-month period until publication. Is that normal?
Can you please recommend some journals that qualify & you have had good experience publishing research?<issue_comment>username_1: I recommend you focus on quality over quantity. Publishing a paper will boost your reputation and make you more recognised within your academic field (AI); however, this is only if the paper provides useful insights into an important issue.
Your paper is more likely to be accepted if it is well written and easy to understand, stimulates new important questions, uses rigorous methods to explain why the data supports the conclusion and connections to prior work is made and serve to make your paper's arguments clear. (Elizabeth Z Elsevier blog)
Before submitting your paper, ask a mentor or a colleague to proofread it, so that you can make the relevant revisions and changes. Journal editors will look down on your work if it is poorly written or contains substandard grammar.
A way to get published is by writing reviews, especially for researchers in earlier stages of their careers. Most journal editors like to publish replies to previous publications since it stimulates debate.
Remember it is acceptable to challenge reviewers' suggestions with good justification. Many researchers fail to persevere when they are instructed to revise and resubmit their work. Don't give up, however, you can politely decline or even argue why a reviewer is wrong. Editors will accept a rational explanation if it is clear that you have considered all their feedback.
Getting published is never easy, especially in high-ranking journals. If you focus on getting published quickly, it could derail you from concentrating on the quality of your research. Yes, getting published can be expensive; however, it's much better for your career to write a high-quality paper than a low-quality paper in a lowly ranked or ungraded journal, since the latter will not be REFable.
Below is a list of Artificial Intelligence Journals that you can submit your papers to and possibly get published.
* Artificial Intelligence (Journal) <https://www.journals.elsevier.com/artificial-intelligence>
* Artificial Intelligence Review <https://link.springer.com/journal/10462>
* Applied Artificial Intelligence <http://www.tandfonline.com/action/journalInformation?journalCode=uaai20&>
* Autonomous Agents and Multi-Agent Systems <https://www.springer.com/computer/ai/journal/10458>
* International Journal on Artificial Intelligence Tools <http://www.worldscientific.com/worldscinet/ijait>
* International Journal of Pattern Recognition and Artificial Intelligence <http://www.worldscientific.com/worldscinet/ijprai>
* Journal of Experimental and Theoretical Artificial Intelligence <http://www.tandfonline.com/action/journalInformation?journalCode=teta20>
* Journal of Machine Learning Research <http://www.jmlr.org/>
* IEEE Intelligent Systems <http://publications.computer.org/intelligent-systems/>
* Journal of Automated Reasoning <https://www.springer.com/computer/theoretical+computer+science/journal/10817>
* Minds and Machines <https://www.springer.com/computer/ai/journal/11023>
* Applied Intelligence <https://link.springer.com/journal/10489>
* Journal of Intelligent Manufacturing <https://link.springer.com/journal/10845>
* Journal of Intelligent Information Systems <https://link.springer.com/journal/10844>
* AI & Society <https://link.springer.com/journal/146>
Upvotes: 2 <issue_comment>username_2: One important consideration here: over the last decade or two in the machine learning and artificial intelligence fields, which contain the majority of reinforcement learning work, researchers have considered **conferences to be more impactful publishing venues than journals**. The particular venue a researcher chooses depends on the data and/or application domain of his or her use of reinforcement learning, and the conferences are often changing, but to get you started the top-tier conferences are (in rough order of exclusivity and importance):
* [ICML](https://icml.cc/)
* [NeurIPS](https://nips.cc/)
* [ICLR](https://iclr.cc/)
* [AAAI](http://www.aaai.org/)
* [IJCAI](https://www.ijcai.org/)
* [RLDM](http://rldm.org/)
* [UAI](http://www.auai.org/) (more theory driven)
* [AISTATS](https://www.aistats.org/) (more theory driven)
* [COLT](http://learningtheory.org/colt2020/) (more theory driven)
* [CoRL](https://www.robot-learning.org/) (robotics focus)
* [RSS](https://roboticsconference.org/) (robotics focus)
* [ICRA](https://www.icra2020.org/) (robotics focus)
* [IROS](http://www.iros2020.org/) (robotics focus)
* [CVPR](http://cvpr2020.thecvf.com/) (visual data focus)
* [ECCV](https://eccv2020.eu/)/[ICCV](http://iccv2019.thecvf.com/) (visual data focus)
* [EMNLP](https://2020.emnlp.org/) (language focus)
* [ACL](https://acl2020.org/) (language focused)
Upvotes: 3
2017/11/07 <issue_start>username_0: What's the difference between model-free and model-based reinforcement learning?
It seems to me that any model-free learner, learning through trial and error, could be reframed as model-based. In that case, when would model-free learners be appropriate?<issue_comment>username_1: In model-based reinforcement learning, an agent tries to understand the world and creates a model to represent it. Here the model tries to capture 2 functions, the state transition function $T$ and the reward function $R$. From this model, the agent has a reference and can plan accordingly.
However, it is not necessary to learn a model, and the agent can instead learn a policy directly using algorithms like Q-learning or policy gradient.
A simple check to see if an RL algorithm is model-based or model-free is:
*If, after learning, the agent can make predictions about what the next state and reward will be before it takes each action, it's a model-based RL algorithm.*
If it can't, then it’s a model-free algorithm.
Upvotes: 5 <issue_comment>username_2: >
> What's the difference between model-free and model-based reinforcement learning?
>
>
>
In Reinforcement Learning, the terms "model-based" and "model-free" do *not* refer to the use of a neural network or other statistical learning model to predict values, or even to predict next state (although the latter may be used as part of a model-based algorithm and be called a "model" regardless of whether the algorithm is model-based or model-free).
Instead, the term refers strictly to whether, during learning or acting, the agent uses predictions of the environment's response. The agent can use a single prediction from the model of next reward and next state (a sample), or it can ask the model for the *expected* next reward, or the full *distribution* of next states and next rewards. These predictions can be provided entirely outside of the learning agent - e.g. by computer code that understands the rules of a dice or board game. Or they can be learned by the agent, in which case they will be approximate.
Just because there is a model of the environment implemented does not mean that an RL agent is "model-based". To qualify as "model-based", the learning algorithm has to explicitly reference the model:
* Algorithms that purely sample from experience such as Monte Carlo Control, SARSA, Q-learning, Actor-Critic are "model free" RL algorithms. They rely on real samples from the environment and never use generated predictions of next state and next reward to alter behaviour (although they might sample from experience memory, which is close to being a model).
* The archetypical model-based algorithms are Dynamic Programming (Policy Iteration and Value Iteration) - these all use the model's predictions or distributions of next state and reward in order to calculate optimal actions. Specifically in Dynamic Programming, the model must provide state transition probabilities, and expected reward from any state, action pair. Note this is rarely a learned model.
* Basic TD learning, using state values only, must also be model-based in order to work as a control system and pick actions. In order to pick the best action, it needs to query a model that predicts what will happen on each action, and implement a policy like $\pi(s) = \text{argmax}\_a \sum\_{s',r} p(s',r|s,a)(r + v(s'))$ where $p(s',r|s,a)$ is the probability of receiving reward $r$ and next state $s'$ when taking action $a$ in state $s$. That function $p(s',r|s,a)$ is essentially the model.
The RL literature differentiates between "model" as a model of the environment for "model-based" and "model-free" learning, and use of statistical learners, such as neural networks.
In RL, neural networks are often employed to learn and generalise value functions, such as the Q value which predicts total return (sum of discounted rewards) given a state and action pair. Such a trained neural network is often called a "model" in e.g. supervised learning. However, in RL literature, you will see the term "function approximator" used for such a network to avoid ambiguity.
>
> It seems to me that any model-free learner, learning through trial and error, could be reframed as model-based.
>
>
>
I think here you are using the general understanding of the word "model" to include any structure that makes useful predictions. That would apply to e.g. table of Q values in SARSA.
However, as explained above, that's not how the term is used in RL. So although your understanding that RL builds useful internal representations is correct, you are not technically correct that this can be used to re-frame between "model-free" as "model-based", because those terms have a very specific meaning in RL.
>
> In that case, when would model-free learners be appropriate?
>
>
>
Generally with current state of art in RL, if you don't have an accurate model provided as part of the problem definition, then model-free approaches are often superior.
There is lots of interest in agents that build predictive models of the environment, and doing so as a "side effect" (whilst still being a model-free algorithm) can still be useful - it may regularise a neural network or help discover key predictive features that can also be used in policy or value networks. However, model-based agents that learn their own models for planning have a problem that inaccuracy in these models can cause instability (the inaccuracies multiply the further into the future the agent looks). Some promising inroads are being made using [imagination-based agents](https://www.deepmind.com/blog/agents-that-imagine-and-plan) and/or mechanisms for deciding when and how much to trust the learned model during planning.
Right now (in 2018), if you have a real-world problem in an environment without an explicit known model at the start, then the safest bet is to use a model-free approach such as DQN or A3C. That may change as the field is moving fast and new more complex architectures could well be the norm in a few years.
Upvotes: 6 <issue_comment>username_3: In reinforcement learning (RL), there is an *agent* which interacts with an *environment* (in time steps). At each time step, the agent decides and executes an *action*, $a$, on an environment, and the environment responds to the agent by moving from the current *state* (of the environment), $s$, to the next state (of the environment), $s'$, and by emitting a scalar signal, called the *reward*, $r$. In principle, this interaction can continue forever or until e.g. the agent dies.
The main goal of the agent is to collect the largest amount of reward "in the long run". To do that, the agent needs to find an optimal policy (roughly, the optimal strategy to behave in the environment). In general, a policy is a function which, given a current state of the environment, outputs an action (or a probability distribution over actions, if the policy is *stochastic*) to execute in the environment. A policy can thus be thought of as the "strategy" used by the agent to behave in this environment. An optimal policy (for a given environment) is a policy which, if followed, will make the agent collect the largest amount of reward in the long run (which is the goal of the agent). In RL, we are thus interested in finding optimal policies.
The environment can be deterministic (that is, roughly, the same action in the same state leads to the same next state, for all time steps) or stochastic (or non-deterministic), that is, if the agent takes an action in a certain state, the resulting next state of the environment might not necessarily always be the same: there is a probability that it will be a certain state or another. Of course, these uncertainties will make the task of finding the optimal policy harder.
In RL, the problem is often mathematically formulated as a [Markov decision process](https://en.wikipedia.org/wiki/Markov_decision_process) (MDP). An MDP is a way of representing the "dynamics" of the environment, that is, the way the environment will react to the possible actions the agent might take, at a given state. More precisely, an MDP is equipped with a **transition function** (or "transition model"), which is a function that, given the current state of the environment and an action (that the agent might take), outputs a probability of moving to any of the next states. A **reward function** is also associated with an MDP. Intuitively, the reward function outputs a reward, given the current state of the environment (and, possibly, an action taken by the agent and the next state of the environment). Collectively, the transition and reward functions are often called the **model** of the environment. To conclude, the MDP is the problem and the solution to the problem is a policy. Furthermore, the "dynamics" of the environment are governed by the transition and reward functions (that is, the "model").
However, we often do not have the MDP, that is, we do not have the transition and reward functions (of the MDP associated with the environment). Hence, we cannot estimate a policy from the MDP, because it is unknown. Note that, in general, if we had the transition and reward functions of the MDP associated with the environment, we could exploit them and retrieve an optimal policy (using dynamic programming algorithms).
In the absence of these functions (that is, when the MDP is unknown), to estimate the optimal policy, the agent needs to interact with environment and observe the responses of the environment. This is often referred to as the "reinforcement learning problem", because the agent will need to estimate a policy by *reinforcing* its beliefs about the dynamics of the environment. Over time, the agent starts to understand how the environment responds to its actions, and it can thus start to estimate the optimal policy. Thus, in the RL problem, the agent estimates the optimal policy to behave in an unknown (or partially known) environment by interacting with it (using a "trial-and-error" approach).
In this context, a **model-based** algorithm is an algorithm that uses the transition function (and the reward function) in order to estimate the optimal policy. The agent might have access only to an approximation of the transition function and reward functions, which can be learned by the agent while it interacts with the environment or it can be given to the agent (e.g. by another agent). In general, in a model-based algorithm, the agent can potentially predict the dynamics of the environment (during or after the learning phase), because it has an estimate of the transition function (and reward function). However, note that the transition and reward functions that the agent uses in order to improve its estimate of the optimal policy might just be approximations of the "true" functions. Hence, the optimal policy might never be found (because of these approximations).
A **model-free** algorithm is an algorithm that estimates the optimal policy without using or estimating the dynamics (transition and reward functions) of the environment. In practice, a model-free algorithm either estimates a "value function" or the "policy" directly from experience (that is, the interaction between the agent and environment), without using neither the transition function nor the reward function. A value function can be thought of as a function which evaluates a state (or an action taken in a state), for all states. From this value function, a policy can then be derived.
In practice, one way to distinguish between model-based or model-free algorithms is to look at the algorithms and see if they use the transition or reward function.
For instance, let's look at the main update rule in the *Q-learning algorithm*:
$$Q(S\_t, A\_t) \leftarrow Q(S\_t, A\_t) + \alpha (R\_{t+1} + \gamma \max\_{a}Q(S\_{t+1}, a) - Q(S\_t, A\_t))$$
As we can see, this update rule does not use any probabilities defined by the MDP. Note: $R\_{t+1}$ is just the reward that is obtained at the next time step (after taking the action), but it is not necessarily known beforehand. So, Q-learning is a model-free algorithm.
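Concretely, one step of this update needs nothing but a single sampled transition $(s, a, r, s')$; no transition probabilities appear anywhere. A minimal tabular sketch (the states, actions and numbers below are made up):

```python
from collections import defaultdict

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step; uses only the sampled reward and next state."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

Q = defaultdict(float)        # Q-values default to 0
actions = ['left', 'right']

# One observed transition, e.g. obtained by interacting with the environment:
q_learning_update(Q, s=0, a='right', r=1.0, s_next=1, actions=actions)
print(Q[(0, 'right')])        # 0.5
```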
Now, let's look at the main update rule of the *policy improvement* algorithm:
$$Q(s,a) \leftarrow \sum\_{s' \in \mathcal{S}, r\in\mathcal{R}}p(s',r|s,a)(r+\gamma V(s'))$$
We can immediately observe it uses $p(s',r|s,a)$, a probability defined by the MDP model. So, *policy iteration* (a dynamic programming algorithm), which uses the policy improvement algorithm, is a model-based algorithm.
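The contrast shows up directly in code: a sketch of the model-based update above has to loop over the model's distribution $p(s',r|s,a)$ (the toy model below is made up):

```python
def policy_improvement_update(model, V, gamma=0.9):
    """Compute Q(s, a) = sum over (s', r) of p(s', r | s, a) * (r + gamma * V(s'))."""
    Q = {}
    for (s, a), outcomes in model.items():
        Q[(s, a)] = sum(p * (r + gamma * V[s_next]) for (s_next, r, p) in outcomes)
    return Q

# Toy model: in state 0, action 'go' reaches state 1 (r=1) w.p. 0.8, state 2 (r=0) w.p. 0.2.
model = {(0, 'go'): [(1, 1.0, 0.8), (2, 0.0, 0.2)]}
V = {1: 10.0, 2: 0.0}

print(policy_improvement_update(model, V))  # {(0, 'go'): 8.0}
```

Unlike the Q-learning step, this update cannot even be written down without the probabilities `p`, which is exactly what makes it model-based.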
Upvotes: 5 <issue_comment>username_4: According to *[OpenAI – Kinds of RL Algorithms](https://spinningup.openai.com/en/latest/spinningup/rl_intro2.html)*, algorithms which use a model of the environment, i.e. a function which predicts state transitions and rewards, are called **model-based** methods, and those that don’t are called **model-free**. This model can either have been given to the agent or learned by the agent.
Using a model allows the agent to plan by thinking ahead, seeing what would happen for a range of possible choices, and explicitly deciding between its options. This may be useful when faced with problems that require more long-term thinking. One way to perform planning is by using some kind of tree search, for example [Monte Carlo tree search](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search) (MCTS), or—which I suspect could also be used—[variants](https://en.wikipedia.org/wiki/Rapidly-exploring_random_tree#Variants_and_improvements_for_motion_planning) of the rapidly exploring random tree (RRT). See e.g. *[Agents that imagine and plan](https://deepmind.com/blog/agents-imagine-and-plan/)*.
The agent can then distill the results from planning ahead into a learned policy – this is known as expert iteration.
A model can also be used to create a simulated, or "imagined," environment in which the state is updated by using the model, and make the agent learn inside of that environment, such as in *[World Models](https://worldmodels.github.io/)*.
In many real-world scenarios, the ground-truth model of the environment is not available to the agent. If an agent wants to use a model in this case, it has to learn the model, which can be challenging for several reasons.
There are however cases in which the agent uses a model that is already known and consequently doesn't have to learn the model, such as in [AlphaZero](https://deepmind.com/blog/alphazero-shedding-new-light-grand-games-chess-shogi-and-go/), where the model comes in the form of the rules of the game.
Upvotes: 2 <issue_comment>username_5: Model-Free RL
-------------
In Model-Free RL, the agent does not have access to a model of the environment. By a model of the environment, I mean a function which predicts state transitions and rewards.
As of the time of writing, model-free methods are more popular and have been researched extensively.
Model-Based RL
--------------
In Model-Based RL, the agent has access to a model of the environment.
The main advantage is that a model allows the agent to plan by thinking ahead. Agents distill the results from planning ahead into a learned policy. A famous example of Model-Based RL is [AlphaZero](https://arxiv.org/abs/1712.01815).
The main downside is that a ground-truth representation of the environment is usually not available.
---
Below is a non-exhaustive taxonomy of RL algorithms, which may help you to visualize better the RL landscape.
[](https://i.stack.imgur.com/oxKqm.png)
Upvotes: 3 <issue_comment>username_6: Although there are several good answers, I want to add this paragraph from [Reinforcement Learning: An Introduction](http://incompleteideas.net/book/bookdraft2017nov5.pdf), page 303, for a more psychological view on the difference.
>
> The distinction between model-free and model-based reinforcement learning algorithms
> corresponds to the distinction psychologists make between habitual and goal-directed
> control of learned behavioral patterns. Habits are behavior patterns triggered by appropriate
> stimuli and then performed more-or-less automatically. Goal-directed behavior,
> according to how psychologists use the phrase, is purposeful in the sense that it is controlled
> by knowledge of the value of goals and the relationship between actions and their
> consequences. Habits are sometimes said to be controlled by antecedent stimuli, whereas
> goal-directed behavior is said to be controlled by its consequences (Dickinson, 1980,
> 1985). Goal-directed control has the advantage that it can rapidly change an animal’s
> behavior when the environment changes its way of reacting to the animal’s actions. While
> habitual behavior responds quickly to input from an accustomed environment, it is unable
> to quickly adjust to changes in the environment.
>
>
>
It keeps going from there, and has a nice example afterwards.
I think the main point that was not always explained in the other answers is that in a model-free approach you still need some kind of environment to tell you the reward associated with your action. The big difference is that you do NOT need to store any information about the model. You give the environment your chosen action, you update your estimated policy, and you forget about it. On the other hand, in model-based approaches, you either need to know the state transition history as in Dynamic Programming, or you need to be able to calculate all possible next states and associated rewards from the present state.
Upvotes: 3
2017/11/09 <issue_start>username_0: I have a set of images that I already trained a CNN to classify successfully. I wonder if it would be possible to encode the images (using XOR in combination with a key of the same length as the image) and train a new net on them.
Thinking logically, the features still exist in the same relation to each other, just in a different form (encoded). Considering that neural networks are incredible at pattern recognition, I assume that it would still be doable.
For people who cannot imagine what a XOR-encoded image would look like:
[](https://i.stack.imgur.com/GsIxN.png)
For a human, it may look like rubbish, but the information is definitely there.
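That "the information is definitely there" is easy to check: XOR with a fixed key is its own inverse, so the encoding destroys no information (a quick sketch):

```python
import os

def xor_bytes(data, key):
    """XOR each byte of data with the corresponding byte of an equal-length key."""
    return bytes(d ^ k for d, k in zip(data, key))

image = bytes(range(16))           # stand-in for raw pixel bytes
key = os.urandom(len(image))       # random key of the same length

encoded = xor_bytes(image, key)
decoded = xor_bytes(encoded, key)  # applying the same key again restores the original

print(decoded == image)            # True
```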
Would love to read your opinion.<issue_comment>username_1: First you should give it a try, because anyone's guess could be off, as there isn't really a complete high level analytic model of how neural networks behave on real data. Most results with neural networks are informed by theory, but involve a whole lot of experimental testing.
I suspect your network might at least learn *something* from the new images, but would struggle to get anything like the same accuracy as without the noise, because the CNN filters rely on being able to detect similar features at different positions. In your scrambled image, there will not be any meaningful and *consistent* edges/corners etc that a single learned feature detector could learn to match (and therefore present to the next layer).
A fully-connected network would not have this limitation, and would learn just as well on a set of *binary* features that have been xor'd identically in each position for each example, as it did on the original copy (i.e. only if the picture was 1 bit depth). It would learn less well if each feature was a scaled 8-bit pixel value that was xor'd with the same 8 bit random number in each pixel position, because that would introduce many more non-linear mappings between input and output. Of course a fully-connected network will generally not learn image tasks as well as CNNs in the first place . . . but if it could learn anything useful at all for your image problem, then it will probably out-perform CNNs after the scrambling effect.
As a CNN usually has a few fully-connected layers, then it may be possible to get *something* from your scrambled images.
>
> Thinking logically, the features still exist in the same relation to each other, just in a different form (encoded)
>
>
>
In terms of being recognisable in the way that a CNN filter extracts them, then the features do not exist. That is a problem.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The answer to your question depends on the nature of the noise that you have xor-ed the image with. If it is the case that the noise is random (or [pseudorandom](https://en.wikipedia.org/wiki/Pseudorandomness) in the formal sense), then it is provably the case that the original pattern will not be learnable in the statistical learning theory sense; this scenario is equivalent to the application of a [one-time pad](https://en.wikipedia.org/wiki/One-time_pad).
To quote the relevant Wikipedia article:
>
> One-time pads are "information-theoretically secure" in that the encrypted message (i.e., the ciphertext) provides no information about the original message to a cryptanalyst (except the maximum possible length[16] of the message). This is a very strong notion of security first developed during WWII by <NAME> and proved, mathematically, to be true for the one-time pad by Shannon about the same time.
>
>
>
Upvotes: 0 <issue_comment>username_3: My gut feel based on the paper I will mention below is that **YES**, if you apply the same XOR operation on the train and test data, you will be able to train a very "accurate" classifier.
To elaborate on my "gut" feel, please allow me to introduce what I personally think is one of the most important papers that came out this year (in fact, it won the best paper award at ICLR 2017):
[Understanding deep learning requires rethinking generalization](https://arxiv.org/abs/1611.03530).
In this paper, the authors showed that deep learning models will fit "any" dataset. To give an example of the sort of experiment they conducted in this paper:
* They randomly shuffled the training and test sets' labels such that, for example, some images of cats were labeled as dogs while some dogs were labeled as cats, and some cat and dog images remained correctly labeled. It is well understood that deep learning models (including CNNs) are quite resistant to a few noisy labels, but in the experiments conducted in the paper mentioned above this was a significant amount of noise, which begs the question of why neural networks still performed well on what ended up being a garbage dataset.
The moral of the story is that, contrary to what most researchers believed in the past — namely that deep learning models magically discover low-level, mid-level, and high-level features hidden within the dataset (much like the V1 system of the mammalian brain) by learning to compress data — they seem to just memorize anything you give them, including random data.
In short, the paper mentioned above showed that deep learning models fit completely random noise (in your case, think images generated from random pixels). And if they can fit random data that has no structure at all, then images that underwent a fixed, predefined transformation like XOR pose no difficulty to a deep learning model.
*I must say, these are very worrying findings - to me at least.*
Upvotes: 0 |
2017/11/16 | 1,211 | 5,507 | <issue_start>username_0: I am a PhD student in computer science, currently creating a state-of-the-art overview of applications in Machine Ethics (a multidisciplinary field combining philosophy and AI that looks at creating explicit ethical programs or agents). It seems that the field mostly contains theoretical arguments and relatively few implementations, even though there are many people with a technical background in the field.
I understand that because ethics are involved, there is no ground truth and since it's part of philosophy one can get lost in arguing over which type of ethics should be implemented and how this can be done best. However, in computer science, it is usual to even try a simple implementation to show the possibilities or limitations of your approach.
What are the possible reasons there is so little done in explicitly implementing ethics in AI and experimenting with it?<issue_comment>username_1: I feel part of the problem as to why there is very little in the way of ethical implementations of AI/ML technologies, is simply because there is no need or proper application of the theoretical frameworks.
By this I mean, there are no substantial ways we can apply this understanding to algorithms and models that cannot interact in a meaningful way. We have such a large theoretical framework on AI safety/ethics because it is extremely important. We need to come up with safe guidelines for implementing strong AI ***before*** it is created.
Some very focused papers have started to narrow down the issues in creating ethical/safe AI systems. See [Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565)
Upvotes: 1 <issue_comment>username_2: With an imitation approach, the most appropriate behavior can be integrated into an artificial intelligence, and the AI can be reshaped when the ethical position changes. However, it can also be used for ideological purposes or to gather information, and it is not clear what the robot really is.
Upvotes: 1 <issue_comment>username_3: We can take the error model into account. Recognising bias and variance in the performance of neural networks can be a first step, and we can then discuss whether such performance is acceptable. As far as we know, practicing ethics requires empirical and field study; we cannot simply rely on rationales and paper essays to determine whether the doings of learned machines are wrong or not. The problem can be further divided into accidents, errors, or even bugs created by the developers.
Upvotes: 1 <issue_comment>username_4: Intuitively speaking, it seems to be the case that there is little research into the implementation of AI ethics because:
1. Society as a whole seems to comfortably agree that the current state of machine intelligence is not strong enough for it to be considered as conscious or sentient. Thus we don't need to give it ethical rights (yet).
2. Implementation of ethical behaviour into a program requires a method for computers to be capable of interpreting "meaning", which we do not know how to do yet.
Upvotes: 1 <issue_comment>username_5: This is necessarily a high-level answer, and highly speculative, but I've been thinking on this question, and here are my thoughts:
* Implementing ethical algorithms requires a mathematical basis for philosophy because computers are difference engines
After Russell & Whitehead's famous failure, and Gödel's incompleteness theorem, this would seem to be problematic.
* AI is a highly applied field, especially today per continuing validation of deep learning, and no company wants to go near the issue of ethics unless they are forced to
Thus, you see it in self-driving cars because the engineers have no choice but to grapple with the problem. By contrast, I don't think you'll see many algorithmic stock trading firms, where the business is [Pareto efficiency](https://en.wikipedia.org/wiki/Pareto_efficiency), worrying about the ethics or social impacts of financial speculation. (The solution to "flash crashes" seems to have been rules for temporary suspension of trading, instead of addressing the social value of high-frequency algorithmic trading.) A more obvious example is social media companies ignoring the extreme amounts of information abuse (disinformation and misinformation) being posted on their sites, pleading ignorance, which is highly suspect in that the activity generated by information abuse positively affects their bottom-lines.
* Applied fields tend to be predominantly driven by profit
The primary directive of corporations is to return a profit to investors. It's not uncommon for corporations to break the law when the fines and penalties are expected to be less than the profit made by illegal activity. (There is the concept of ethics in business, but the culture in general seems to judge people and companies based on how much money they make, regardless of the means.)
* Implementation of machine ethics is being explored in areas where they are necessary to sell the product, but elsewhere, it's still largely hypothetical
If superintelligences evolve and wipe out humanity (as some very smart people with superior mathematics skills are warning us about,) my feeling is that it will be a function of nature, where unrestricted evolution of these algorithms is due to economic drivers which focus on hyper-partisan automata in industries like financial speculation and autonomous warfare. Essentially, chasing profits at all costs, regardless of the impacts.
Upvotes: 2 |
2017/11/16 | 1,290 | 4,812 | <issue_start>username_0: With some knowledge of machine learning and deep learning, it seems very unlikely for AI to develop into the consciousness that we imagine.
To me, consciousness requires a new framework that is very different from what we have today, as current forms of learning seem to be very 'shallow'.
What are the current limitations to artificial consciousness and sentience? Also is there a difference between artificial consciousness and sentience?<issue_comment>username_1: It may be helpful to think of consciousness, like intelligence, as a spectrum. The Stanford Encyclopedia of Philosophy, under the section "Creature Consciousness" (2.1) defines sentience as:
>
> Sentience. It may be conscious in the generic sense of simply being a sentient creature, one capable of sensing and responding to its world (Armstrong 1981). Being conscious in this sense may admit of degrees, and just what sort of sensory capacities are sufficient may not be sharply defined. Are fish conscious in the relevant respect? And what of shrimp or bees?
> Stanford Encyclopedia of Philosophy, [2.1 "Creature Consciousness"](https://plato.stanford.edu/entries/consciousness/)
>
>
>
It seems clear that AI possesses the most basic form of consciousness in terms of "sensing and responding to its world". I might liken this level of consciousness to that of microorganisms, able to interact with their environments but possessing none of the higher functions of humans, or even higher-order animals.
We don't really understand the true nature of higher consciousness and human-level sentience, which doesn't necessarily mean that an automaton of sufficient complexity couldn't achieve it. But there is a deeper problem relating to *validation*, i.e. how would we know the system is truly self-aware, and not simulating self-awareness? (See [Searle's Chinese Room](https://en.wikipedia.org/wiki/Chinese_room#Chinese_room_thought_experiment), but also [McCarthy's refutation](http://jmc.stanford.edu/articles/chinese.html).)
In his paper *Making Robots Conscious of their Mental States*, McCarthy states:
>
> Conscious knowledge and other information is distinguished from
> unconscious information by being observable, and its observation results
> in conscious knowledge about it. We call this introspective knowledge.
>
>
> A robot will need to use introspective knowledge in order to operate
> in the common sense world and accomplish the tasks humans will give
> it.
>
>
> Many features of human consciousness will be wanted, some will
> not, and some abilities not possessed by humans have already been
> found feasible and useful in limited domains.
>
>
> We give preliminary fragments of a logical language a robot can
> use to represent information about its own state of mind.
>
>
> A robot will often have to conclude that it cannot decide a question
> on the basis of the information in memory and therefore must seek
> information externally.
>
>
> Programs with much introspective consciousness do not yet exist.
>
> [Making Robots Conscious of their Mental
> States, Abstract](http://jmc.stanford.edu/articles/consciousness/consciousness.pdf)
>
>
>
This still seems to be the case; however, McCarthy seems hopeful about the "[computationalist](https://en.wikipedia.org/wiki/Computational_theory_of_mind)" approach:
>
> Thinking about consciousness with a view to designing it provides
> a new approach to some of the problems of consciousness studied by
> philosophers. One advantage is that it focusses on the aspects of
> consciousness important for intelligent behavior. If the advocates of
> qualia are right, it looks like robots won’t need them to exhibit any
> behavior exhibited by humans.
> [ibid.](http://jmc.stanford.edu/articles/consciousness/consciousness.pdf)
>
>
>
The continual increase in memory, processing and algorithmic sophistication may well yield [Artificial General Intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence).
Even if this proves to be the case, I suspect the debate over artificial consciousness will rage on.
---
See also: [Artificial Consciousness](https://en.wikipedia.org/wiki/Artificial_consciousness)
Upvotes: 4 [selected_answer]<issue_comment>username_2: By the cognitive science / psychology literature, we don't have a theory of consciousness, or a measure of sentience. By its very nature, "things" like those can only be understood with respect to ones own subjective experience (as in you can only understand consciousness by comparing it to your own conscious experience).
So no, even if we have a device that perfectly replicates the function of consciousness or sentience, we'd still have a hard time saying that it conclusively is indeed consciousness.
Upvotes: 2 |
2017/11/21 | 697 | 2,931 | <issue_start>username_0: I am using Google's OCR to extract text from images, like receipts and invoices.
What are examples of techniques used to make sense of the text? For example, I would like to extract the date, name of the business, address, total amount, etc.
Before marking this question "too broad": it would be great if someone could direct me to the right set of machine learning algorithms the industry uses for this.<issue_comment>username_1: An interesting question. I think the algorithms used for OCR are classifiers such as "Logistic Regression" or "Decision Tree", applied in multiple steps.
The steps can be:
1. Image Classification - the images are classified into "with or without" text.
2. Text Detection - the images with text are divided into blocks, and the blocks are classified into "with or without" text.
3. Character Detection - the blocks with text are divided into smaller boxes of single characters, and each character is compared with a database of characters.
The database is built using the crowdsourced "captcha" project.
Upvotes: 1 <issue_comment>username_2: Last semester I, along with my team, made a project on OCR.
**Note**: I am assuming the data set for your pictures has a white background with black (or some other dark) text on it.
These are the overview steps we followed:
[](https://i.stack.imgur.com/4swxQ.png)
**Pre-processing** includes grayscale conversion, noise reduction, binarization and skew detection.
The next step was **segmentation**. This process extracts the individual characters from the image. A *histogram* taken along the y-axis divides the image into lines; a histogram taken along the x-axis then divides the lines into words, and further into characters.
At the end of this step, we used a Savitzky-Golay (Savgol) filter to smooth the curves of the histogram.
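A sketch of the y-axis projection-histogram step (the page size, threshold, and helper name are illustrative assumptions, not taken from the original project):

```python
import numpy as np

def split_lines(binary, min_ink=1):
    """Split a binarized page (1 = ink, 0 = background) into text lines
    using a projection histogram taken along the y-axis."""
    row_ink = binary.sum(axis=1)            # ink pixels per row
    rows = row_ink >= min_ink               # rows that contain text
    lines, start = [], None
    for y, has_ink in enumerate(rows):
        if has_ink and start is None:
            start = y                       # a line begins
        elif not has_ink and start is not None:
            lines.append((start, y))        # the line ends
            start = None
    if start is not None:
        lines.append((start, len(rows)))
    return lines

page = np.zeros((10, 5), dtype=int)
page[1:3] = 1                               # first "line" of text
page[6:9] = 1                               # second "line"
print(split_lines(page))                    # → [(1, 3), (6, 9)]
```

The same routine applied to a single line's transpose (summing along the x-axis) yields word and character boundaries.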
Next step was **feature extraction**. This is the most important step.
The accuracy of your code depends on how good your features are.
We used the following features:
* Crossing: counting the number of transitions between foreground and background along a set of lines. We used two diagonal lines, two horizontal lines and one vertical line, but you can use any number you want.
* Zoning: Whole character region is divided into 16 zones, and density of each zone is measured.
* Projection Histogram: Each character has unique (almost) vertical and horizontal histogram signature.
* Other features include number of endpoints in the character, number of loops and horizontal/vertical line count.
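For instance, the zoning feature (the 16 per-zone densities mentioned above) can be sketched as follows — the 16x16 character size and 4x4 grid are assumptions for illustration:

```python
import numpy as np

def zoning_features(char_img, zones=4):
    """Split a binarized character image into a zones x zones grid and
    return the ink density of each cell (16 features for zones=4)."""
    h, w = char_img.shape
    feats = []
    for i in range(zones):
        for j in range(zones):
            cell = char_img[i * h // zones:(i + 1) * h // zones,
                            j * w // zones:(j + 1) * w // zones]
            feats.append(cell.mean())       # fraction of ink pixels in the cell
    return np.array(feats)

char = np.zeros((16, 16), dtype=float)
char[:8, :8] = 1.0                          # ink only in the top-left quadrant
f = zoning_features(char)
assert f.shape == (16,)
assert f[0] == 1.0 and f[15] == 0.0         # top-left zone full, bottom-right empty
```

The resulting 16-vector is concatenated with the other features (crossings, projection histograms, endpoint counts) before being fed to the classifier.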
We used three different classification algorithms for our project. They were KNN (K-Nearest Neighbours), Artificial Neural Network (ANN) and Extra Tree classification. **Their F1 score was 0.84, 0.82 and 0.77 respectively**.
For training, you will need to find datasets. Many data sets for OCR are available online. Make sure you are using good ones.
Upvotes: 2 |
2017/11/22 | 2,402 | 10,136 | <issue_start>username_0: I have started on <NAME>'s machine learning course. It seems that machine learning is learning correlations with known data based on as many parameters as possible. For example, if we collect data on existing property prices with information on the land area, built-in area, type of building, age of the building, etc, it is possible to predict the price of another property if we input the value of the various parameters of this property.
Similarly, if we keep the images (the black and white pixels) of cats, we can tell whether a new picture is a cat if it bears some resemblance to the pixels of existing labeled cat images.
1. This approach sounds great, but is it practical? How much effort and zettabytes of data do we have to keep just to reach the brainpower of, say, a 3-year old, who can recognize dogs, cats, tigers, a Mustang, trucks, a hamburger restaurant, and so on?
2. Why does everyone have to repeat the effort of learning the same things?
3. If Google has already learned cats, or if someone already has a program to recognize handwritten digits, can this knowledge be shared and re-used? Or is it just a matter of paying for them?<issue_comment>username_1: >
> ...why does everyone have to repeat the effort of learning the same things...
>
>
>
Most problems are different if you go into the details. Even if the problems were exactly identical, implementations differ. You might like Google Chrome, but I might prefer Mozilla Firefox. They both load web pages!
>
> ...How much effort and zetabytes of data do we have to keep just to reach the brain power of, say, a 3-year old...
>
>
>
That depends on what you want to do. A 3-year-old can recognise people quite well, but this is hard for computers. However, computers can do regression much better than a 3-year-old ...
Upvotes: 0 <issue_comment>username_2: I think SmallChess makes a good point on your first question, so I want to focus on the repetition of different problems.
I could see you meaning several different things by this:
1. **Why does every machine learning course out there do MNIST/real estate values/other simple problems?** If you want to learn how to solve a complex problem, you need to understand how the individual parts come together. You could jump straight to a classifier trained on 10,000 image categories using the current, most advanced techniques. However, it's much easier to see how certain algorithms work better with certain types of problems when starting with small, easy ones that can be solved in a few minutes. You can try many different sets of algorithms and hyperparameters on a problem that takes 3 minutes to train on your local computer. Then, you get a sense of what each part contributes to the whole. This will help you progress a lot faster than training a network that takes 2 weeks on multiple GPUs each time and trying to figure out what's going on. In addition, you don't have a ton of layers of complexity to try and understand. You get a feel for a simple approach and once you understand it, add complexity. It's the same with learning anything else. My physics class started with gravity and throwing a ball in the air, not with relativity.
2. **Why are there so many people out there doing all these different approaches to image classification/chatbots/reinforcement learning/whatever?** Machine learning is a not a solved problem. There are a set of algorithms we understand pretty well. Fitting a polynomial to a set of points doesn't really have any hidden tricks up its sleeves. There are plenty of other algorithms that we're still figuring out. Sometimes, a neural network never converges, or only converges to some limit. When you change the hyperparameters, it converges or has better accuracy or something. But sometimes, it generalizes poorly. Looking at 10 million parameters and deciding which one screwed up is hard, so people research how to solve these problems. Or, often, current "state-of-the-art" approaches really aren't at the point where improvement is rare. Every single year, they hold another competition and lots of people come up with better ways to solve it, either with new algorithms or better refinements to the ones they were already using. There are still so many things not yet understood about this field and it seems there are major discoveries constantly.
3. **Why can't I just use someone else's pre-trained methods?** You can! If you can find someone who has put their model out there (for instance, [TensorFlow](https://github.com/tensorflow/models)), you can take it and run it for yourself. Some researchers put out their already trained models and some don't, so you might get lucky. The problem is, you have to decide if the model they trained fits your needs. If you really need a visual classifier that can detect penguins, and you download one that was trained on 1,000 classes, none of which were penguins, you're out of luck. So, definitely, look for someone else who might have already done it. Just make sure that it covers what you actually need.
It doesn't really fit into the above points, but I hope I can convey that machine learning is just another tool. Taking your comment about wanting to build a robot companion for a toddler, you need to decide whether it fits the problem you have. If it has some wheels for moving the base around, you could spend a ton of time building a controller with reinforcement learning. Or, you could program and implement a PID controller in a couple days. Does it need to detect the surroundings and label objects in camera images to interact with them? Then you probably need a CNN of some kind to do it accurately. Does it need to listen to the toddler's voice commands to go bring a ball back? Then, you need some way to interpret those, and it might be another machine learning algorithm. But let's say that you find several pre-trained algorithms that do the tasks you need. They still need to be linked together in whatever overall software you have. Analogously, if you were using someone else's path planning algorithm, you still need to define the goals, take the outputs and input them into the controller, and update your state estimate.
There are lots of people hoping to figure out machine learning algorithms that can just do everything like humans do. End-to-end learning tries to go directly from input data to actions ([self-driving cars](https://arxiv.org/pdf/1604.07316.pdf)). Others are trying to create an artificial general intelligence that thinks and interacts with the world like we do. The point is, there are so many things that haven't been solved yet. Your robot won't be as good as a 3 year-old at many things, but that's not the point. ML gives the robot the ability to be really good at some certain thing, like figuring out where the toy it's looking at is. But it isn't a "wave your hands and it all works" solution.
**tl;dr** Use ML when it fits the problem. But it's a hard problem, and there are a lot of things to understand about it if you want to use it. It's a tool in your toolbox.
Upvotes: 2 [selected_answer]<issue_comment>username_3: >
> This approach sounds great but is it practical? How much effort and zetabytes of data do we have to keep just to reach the brain power of, say, a 3-year old, who can recognize dogs, cats, tigers, a Mustang, trucks, a hamburger restaurant, and so on?
>
>
>
We don't know. More critically, we don't know whether any machine learning technique is "the way humans learn", so we have no theoretical guarantee that what we are doing brings us any closer to true AI. Also, there is much more to intelligence than classification; don't forget about that.
>
> My next question is why does everyone have to repeat the effort of learning the same things?
>
>
>
Nowadays people often train their architectures on the [same few datasets](http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html) as a benchmark, to compare them with other cutting-edge techniques. These datasets are also generally well understood, so it's easier to troubleshoot issues that might come up.
>
> If Google has already learned cats, or if someone already has a program to recognize handwritten digits, can this knowledge be shared and re-used? Or is it just a matter of paying for them?
>
>
>
In principle yes, but in practice likely no. Many modern machine learning techniques have separate training/testing phases. Training is when the algorithm gets to "learn", so to speak, and testing is when we evaluate it. If one releases the architecture after it has been trained, in principle you should be able to achieve the same results.
Or more generally, once they release a paper detailing how they built the system, in principle you should be able to build it yourself as well.
Upvotes: 0 <issue_comment>username_4: *Disclosure: I am a product manager on Google Cloud Platform.*
>
> [...] why does everyone have to repeat the effort of learning the same things?
>
>
> If Google has already learned cats, or if someone already has a program to recognize handwritten digits, can this knowledge be shared and re-used? Or is it just a matter of paying for them?
>
>
>
You don't have to rebuild these machine learning models from scratch; you can reuse prebuilt machine learning algorithms, e.g., Google Cloud provides the following hosted APIs as a service:
* [Google Cloud Vision API](https://cloud.google.com/vision/) to recognize images and provide text labels
* [Google Cloud Speech API](https://cloud.google.com/speech/) to convert speech to text
* [Google Cloud Natural Language API](https://cloud.google.com/natural-language/) to understand language, find entities, etc.
* [and more...](https://cloud.google.com/products/machine-learning/)
You can put these APIs together to build interesting applications, e.g.,
* [build a smart Raspberry Pi bot with Cloud Vision and Speech APIs](https://www.youtube.com/watch?v=HpPyhsC4q9M) (video presentation; [blog post + video overview](https://www.raspberrypi.org/blog/raspberry-pi-cloud-vision-google-io/))
Upvotes: 2 |
2017/11/22 | 467 | 2,121 | <issue_start>username_0: "Deep Learning" neural networks are now successful at image-recognition tasks that I would not have expected say 10 years ago. I wonder if the current state of the art in machine learning could generally tell the difference between the sound of a dog or cat moving around a house, and a person walking in the same area, taking as input only the sound captured by a microphone. I think I could generally tell the difference, but it is hard to explain exactly how. But this is also true of some tasks that deep learning is now succeeding at. So, I suspect it is possible but it's not clear how you would go about it.
I have found algorithms to detect human speech (Wikipedia: "Voice activity detection"), but separating animal and human footsteps seems more subtle.<issue_comment>username_1: It is an interesting application, and it is possible. You can interpret the sound as a spectrogram (a 2D image) and apply the same image-processing techniques (CNNs) to extract information. Alternatively, you can keep the raw phase/intensity values and train a recurrent network (RNN) on top of them. It is a great idea. Go for it!
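A minimal sketch of turning audio into a spectrogram "image" suitable for a CNN — the signal here is a synthetic 440 Hz tone standing in for a real recording, and the frame/hop sizes are arbitrary choices:

```python
import numpy as np

fs = 16_000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 440 * t)       # stand-in for a recorded clip

# Short-time Fourier transform by hand: split into windowed frames,
# take the magnitude spectrum of each, and stack them as image columns.
frame, hop = 512, 256
window = np.hanning(frame)
frames = [audio[i:i + frame] * window for i in range(0, len(audio) - frame, hop)]
spec = np.abs(np.fft.rfft(np.stack(frames), axis=1)).T   # freq x time
log_spec = np.log(spec + 1e-10)           # log scale compresses the dynamic range

freqs = np.fft.rfftfreq(frame, d=1 / fs)
peak = freqs[np.argmax(spec.mean(axis=1))]
assert abs(peak - 440) < 40               # dominant bin sits near the tone
```

`log_spec` is a 2D array (frequency on one axis, time on the other) that can be treated exactly like a grayscale image by a convolutional network.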
Upvotes: 1 <issue_comment>username_2: I think it should be possible. The main difficulty will be in getting enough labelled data, deep learning approaches are very data hungry. Audio tasks such as speech typically require hours of labelled data. It's hard to tell from the problem description whether you would need that much - the classification is simpler (just two classes, not identifying phonemes), but I think the perceptual difference between the classes is harder, as you say yourself you are not quite sure how you are sensing the difference.
You might have success with a relatively low amount of data if you can find a pre-trained deep network for some other audio task (with a similar domain, e.g. activity classification), and fine-tune the last few layers using your own data. One such [pre-trained model is called VGGish](https://github.com/tensorflow/models/tree/master/research/audioset) and has been trained on hundreds of hours of short video soundtracks.
Upvotes: 1 [selected_answer] |
2017/11/23 | 1,789 | 6,921 | <issue_start>username_0: I've been reading a lot about TD-Gammon recently as I'm exploring options for AI in a video game I'm making. The video game is a turn-based positional sort of game, i.e. a "units", or game piece's, position will greatly impact its usefulness in that board state.
To work my way towards this, I thought it prudent to implement a Neural Network for a few different games first.
The idea I like is encoding the board state for the Neural Network with a single output neuron, which gives that board state a relative strength compared to other board states. As I understand it, this is how TD-Gammon worked.
However, when I look at other people's code and examples/tutorials, there seems to be a lot of variance in the way they represent the board state. Even for something as simple as tic-tac-toe.
So, specifically for tic-tac-toe: what is a better, or the correct, representation for the board state?
I have seen:
1. 9 input neurons, one for each square. A `0` indicates a free-space, `-1` the opponent, and `1` yourself.
2. 9 input neurons, but using different values, such as `0` for the opponent, `0.5` for free, and `1` for yourself?
3. Could you use larger values? Like `0`, `1` and `2`?
4. 27 input neurons. The first 3 being square 1, the next 3 being square 2, etc. Every neuron is `1` or `0`. The first of the set of three indicates whether this square is free or not; the second indicates whether the square is occupied by your opponent or not. In the end, only one in every 3 neurons will have a `1`, the other two will have a `0`.
5. 18 input neurons. The first being `1` for the X player, the second being `1` for the O player, and both being `0` for a blank
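Two of these options, sketched concretely — the 0/1/2 board coding and the channel order are assumptions made for illustration:

```python
import numpy as np

# Assumed coding (not from the question): 0 = empty, 1 = you, 2 = opponent.
board = np.array([[1, 0, 2],
                  [0, 1, 0],
                  [2, 0, 1]])

def encode_9(board):
    """Option 1: one input per square, -1 opponent / 0 free / +1 self."""
    mapping = {0: 0.0, 1: 1.0, 2: -1.0}
    return np.array([mapping[int(v)] for v in board.ravel()])

def encode_27(board):
    """Option 4-style: three inputs per square, one-hot over the channels
    (free, self, opponent) in that order."""
    onehot = np.zeros((9, 3))
    onehot[np.arange(9), board.ravel()] = 1.0
    return onehot.ravel()

assert encode_9(board).shape == (9,)
assert encode_27(board).shape == (27,)
assert encode_27(board).sum() == 9.0   # exactly one active channel per square
```

Either vector can be fed directly to the input layer; the one-hot version trades three times as many weights for inputs the network never has to "decode".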
Then, when branching into games where the specific pieces' abilities come into play, like in chess, how would you represent this?
Would it be as simple as using higher input values for more valuable pieces, i.e. `-20` for an opponent's queen and `+20` for your own queen? Or would you need something more complex, where you define 10+ values for each square, one for each unit-type and player combination?<issue_comment>username_1: When you are working with neural networks, as long as the data is there, the neural network is usually able to learn how to process it into a useful result.
However, you usually also want to keep the number of weights to a minimum. When you use extra weights, it will take longer to train the network because you need to tune even more values for an optimal network.
So, for tic-tac-toe, any of your solutions involving 9 inputs should work just fine.
Also, it helps if you keep the inputs between 0 and 1 if you are using log sigmoid, and -1 and 1 if you are using hyperbolic tangent for your activation function. You can probably easily figure out what to use for other activation functions. You can take your data and transform it into another dataset with values within a specific range through a process called range normalization.
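A minimal range-normalization helper might look like this (a sketch; the function name is my own):

```python
def normalize(xs, lo=0.0, hi=1.0):
    """Linearly rescale the values in xs into [lo, hi] (range normalization)."""
    mn, mx = min(xs), max(xs)
    if mx == mn:                       # constant input: avoid division by zero
        return [(lo + hi) / 2.0 for _ in xs]
    return [lo + (x - mn) * (hi - lo) / (mx - mn) for x in xs]

print(normalize([0, 1, 2]))           # -> [0.0, 0.5, 1.0], for log sigmoid
print(normalize([0, 1, 2], -1, 1))    # -> [-1.0, 0.0, 1.0], for tanh
```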
For chess, you can encode the pieces in any of several different ways, and it will probably not make that much of a difference which you choose.
The general rule of thumb is **you want to minimize the number of weights while still giving the most possible variables to the neural network**.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The basis of reinforcement learning methods is to give each (game) state (or action) a value that somehow represents how good that state (or action) is. To store these values we could use something as simple as a table/hashmap; however, complex games like chess or Go have so many states that they cannot fit into memory. As a remedy, we think of the hashmap as a function and try to approximate it with a neural network (NN). Luckily, NNs are universal approximators, which means [they can learn any function](http://neuralnetworksanddeeplearning.com/chap4.html), including an arbitrary mapping from a chessboard to a number.
Now the question is how to represent a game board and feed it to the neural network. In the case of tic-tac-toe, all 5 methods you listed can be considered correct.
Theoretically, it does not matter which numbers `(w, b, v)` are assigned to the white, black, and vacant tiles; if we teach the NN that `f(w, w, v, ...) = 1` enough times, it will learn this association whether it is `(w, b, v) = (0, -1, 1)` or `(w, b, v) = (0, 0.5, 1)`.
Now your first three examples use this method; however, a small *flaw* here is that it assigns numbers to nominal things, that is, numbers can be ordered, yet we cannot really say that `black > white > vacant`. Your last two examples try to fix this by using [one-hot-vectors](https://machinelearningmastery.com/why-one-hot-encode-data-in-machine-learning/).
So for a game like chess, if we used numbers simply to represent the figures, the NN might mistakenly mix up two figure types (e.g. a pawn is `19`, the queen is `20`, and it thinks that a queen is trying to attack your king whereas it is just a pawn) and make a bad decision. However, it will learn that the decision was bad and will assign the correct value to the state and the decision in the long run.
One last note: choosing the correct state representation for a problem is a crucial part of reinforcement learning (similar to picking the right features for a classification problem) and sometimes one might be too afraid to pick a very high dimensional state space. But remember, that chess is not a simple game so a large state space may not be unreasonable. Also for reference, [Atari games](https://www.nature.com/articles/nature14236.pdf) were trained with an input dimension of `84*84*4`.
Upvotes: 2 <issue_comment>username_3: ***Representation of states*** is very important to prepare the data for the neural networks. You can try different ways and pick the one which fits best in your case.
* You can use 18 neurons as input, where each square is represented by 2 bits. But avoid exact 0 and 1 values if you are using the sigmoid activation function, because they can cause saturation at the output: if the output (y) becomes 1 at any layer, then on backpropagating the error we have **y (1-y) dE/dy** in the weight-update part, which becomes zero under saturation, meaning the weights stay in the same state forever.
This problem can be solved by the following methods:
**Solution 1.** You can keep the inputs at some margin from 0 and 1. **For example**, the input can be [0.1, 0.9] instead of [0, 1].
**Solution 2.** Alternatively, you can initialize the weights to very small values in the range [-0.01, 0.01].
**Solution 3.** You can use a regularization technique, whose purpose is to suppress the weights by adding a penalizing term to the error.
* To handle variance problems, you can augment the data for proper training, because in tic-tac-toe you have a small data set. To augment the data, you can add small perturbations in the range -0.1 to +0.1 to the inputs while keeping the same outputs.
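As a rough sketch of Solutions 1 and 2 above (the function names are my own):

```python
import random

def rescale_inputs(bits, lo=0.1, hi=0.9):
    """Solution 1: map 0/1 inputs into [0.1, 0.9] so sigmoid units avoid saturation."""
    return [lo + b * (hi - lo) for b in bits]

def init_weights(n, scale=0.01):
    """Solution 2: small random weights in [-0.01, 0.01] keep early activations near 0.5."""
    return [random.uniform(-scale, scale) for _ in range(n)]

print(rescale_inputs([0, 1, 0, 1]))   # -> roughly [0.1, 0.9, 0.1, 0.9]
```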
I hope this may be helpful.
Upvotes: 2 |
2017/11/23 | 797 | 2,992 | <issue_start>username_0: I have an AI class this semester. For our exam, we also cover alpha-beta pruning.
I found an old example, where I think we can stop already earlier.
Here is a picture of it.
[](https://i.stack.imgur.com/VMyg3.jpg)
I think, because $X$ wants to maximize his win, and finds 10, he knows that he cannot get better. Therefore he cuts and puts 10.
I marked my "improvement" with red.<issue_comment>username_1: So I assumed that we know that the maximum value overall is 10. But we do not know that, therefore we need to continue and cannot cut off the way I was thinking.
If we did know that the max value overall is 10, however, then we could cut off there.
Upvotes: 0 <issue_comment>username_2: The vanilla Alpha-Beta Pruning algorithm as it has been taught to you in class does not assume any domain knowledge / knowledge about the game / knowledge about the tree it is searching. Therefore, if it immediately finds a score of 10 directly to the left of the root node, it can not prune yet, because... maybe there's a score of 20 somewhere else in the tree?
In your description, you seem to assume prior knowledge that the maximum score that can possibly be obtained in the entire game is 10. This is not typically assumed to be available knowledge ahead of time, at least not in the original formulation of Alpha-Beta. That is why in pseudocode of the algorithm (for example on [wikipedia](https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning)), they initialize `alpha` and `beta` at minus and plus infinity.
You are right though. If you have better estimates of the upper and lower bounds on scores that can ever be achieved throughout the entire game, you can use better initializations of `alpha` and `beta` than minus and plus infinity. `alpha` and `beta` would have to be `-11` and `11` if the true bounds are `-10` and `10`. I suspect you'd also have to make some additional changes to the typical pseudocode to actually make use of this though.
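As an illustrative sketch of this point (the nested-list tree format and function are my own, not from the question), vanilla alpha-beta initializes the bounds at infinity, but the same code accepts tighter initial bounds when they are known:

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Vanilla alpha-beta over a tree of nested lists; leaves are numeric scores.
    Pass tighter alpha/beta (e.g. -11 and 11) only if true score bounds are known."""
    if not isinstance(node, list):    # leaf: return its score
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:         # prune the remaining children
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

tree = [[3, 5], [6, [9, 2]], [1, 2]]
print(alphabeta(tree, True))  # -> 6
```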
In more complex games than Tic Tac Toe, you're often not going to have very useful knowledge of better bounds though. Typically you can't afford to search all the way to terminal game states, you'll have to stop the search earlier and use heuristic evaluation functions to estimate scores. If you do that, you'll often want to assign really high constants (like +/- 1Million) to actual terminal nodes (wins and losses), so that all your heuristic scores for non-terminal nodes can be somewhere in between. If you have such a large range of possible scores, your idea stops having much value.
Upvotes: 3 [selected_answer]<issue_comment>username_3: <http://inst.eecs.berkeley.edu/~cs61b/fa14/ta-materials/apps/ab_tree_practice/>
You can practice there for a better understanding of the topic.
I also recommend this lecture from MIT:
<https://www.youtube.com/watch?v=STjW3eH0Cik&index=6&list=PLUl4u3cNGP63gFHB6xb-kVBiQHYe_4hSi>
Upvotes: 1 |
2017/11/24 | 914 | 4,145 | <issue_start>username_0: I'm building a neural net to predict the value of a piece of art with a wide range of inputs (size, art medium, etc.) and I would like to include the author as an input as well (it is often a huge factor in the value of a single piece of art).
My current concern is that the name of the author isn't an ideal numerical input for a NN (i.e. If I just code each author with an increasing integer value I will be indirectly assigning more value to authors further down the list -\_-). My thoughts were to create separate inputs for all the authors in my dataset and then just use one-hot encoding to better represent the input to the NN.
This approach, however, runs into a problem when an author that is not included in my training data is used as an input to the NN (i.e. a new author). I can get around this with an "other author" input field, but I am worried that this won't be accurate as I would not have trained the NN for this input (all pieces of art with a valuation have an author).
I haven't fully thought this through but I thought of perhaps training 2 NN's, one for a valuation without an author and one for valuation with an author to ensure I have enough training data for an "authorless valuation" to still be reasonably accurate.
I am still trying to conceptualize the best NN architecture before I get stuck into the implementation so if anyone has any suggestions/comments I would be very grateful!
P.S. I am doing this as a small competition with a friend to test a NN vs the traditional commercial valuation techniques.<issue_comment>username_1: The most straightforward approach I would recommend would be the one-hot encoding solution without a feature for ''other author''. If you use drop-out during training, the network should learn how to deal with input vectors that don't have any author-features set to 1. Then, whenever you have an unknown author, you simply have no 1s at all for any of the author-features, and it should still have learned how to deal with that.
Another possible approach would be a one-hot encoding *with* a feature for ''other author''. In order to still get training data for that weight, you could simply use data augmentation. The most straightforward data augmentation approach for this would be to create copies of the instances in your training data, but for those copies set the ''other author'' feature to 1 instead of the actual author.
The most complex solution I can think of would be taking the string representation of the author, trying to have your program find information about that author online (for example try to look up a wikipedia page), push all that text through an LSTM. The ''output layer'' of the LSTM can then be merged with your other features (those two together will be your ''input vector''), stack a few more layers on top of that, and train the entire thing end-to-end. Unless the competition with your friend involves a serious amount of money, this is probably going to be too complicated to be worth it though.
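As a rough sketch of the first two ideas (the author list, field names, and augmentation probability here are all hypothetical):

```python
import random

authors = ['monet', 'picasso', 'dali']        # hypothetical training-set authors
OTHER = 'other'
index = {a: i for i, a in enumerate(authors + [OTHER])}

def encode_author(name):
    """One-hot author vector; unknown names fall back to the 'other' slot."""
    vec = [0] * len(index)
    vec[index.get(name, index[OTHER])] = 1
    return vec

def augment(row, p=0.2):
    """With probability p, also emit a copy of the row whose author is 'other',
    so the 'other' weight receives some training signal."""
    rows = [row]
    if random.random() < p:
        rows.append({**row, 'author_vec': encode_author(OTHER)})
    return rows

print(encode_author('dali'))     # -> [0, 0, 1, 0]
print(encode_author('banksy'))   # -> [0, 0, 0, 1], unknown author
```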
Upvotes: 2 <issue_comment>username_2: I would try to find some proxy features *about* the author, as opposed to encoding the identity of the author. Likely good features of an author include averages of other features about the work (such as size, media types etc), and critically for your prediction, some stats on previous sale prices of any works. Remember for historical training data to include the proxy data as it would have appeared at the time of sale.
Having these proxies is likely to work much better than one-hot-encoding an author identity and getting the ML algorithm to assign weights based on the identity alone, especially for authors with only small amounts of data to represent them.
There is little reason as far as I know, to expect the character string of an author's name to correlate with value of the artwork. There *may* be some good vs bad *brand name* effect, but I think that will be very hard to predict even in isolation from other factors, and including historic sales data should take account of this and similar effects, except for artists with no known history.
Upvotes: 2 |
2017/11/26 | 984 | 4,303 | <issue_start>username_0: I am a c# senior developer and I got a task to try and predict the potential in each new client, or maybe the worth of each customer.
I don't have experience with machine learning, but I played with accord-framework.net and got some nice results on a simple task.
My data model for **training** is:
```
GeoLocation, // the country of IP when registered; ISO code string
Age, // number
DateRegistered, // date time
Email, // string; can be broken into vendors as categorical (gmail, yahoo, microsoft and such)
EmailValidated, // whether the email really exists; bool
PhoneNumber, // string
PhoneNumberValidated, // whether the phone number really exists
CampaignName, // string (may be categorical)
UserAgent, // string; should I make it categorical? (has info about browser, device, vendor, operating system and such; long string)
LandedOnPage, // string; first url the customer entered from
RegisteredFromPage, // string; url of the page that the user registered from
RefererUrl, // string; url the client came to our site from
NumberOfPurchases, // the number of times the customer purchased something on our site
CustomerValueUsd, // the total amount of USD the customer spent on our site
```
The **output** should be `CustomerValueUsd`
I have a lot of historical data, so I can backtest it.
My questions:
1. Does it make sense to do this task even though I don't have an experience with machine learning? How complicated is this task considering I'm using a well known framework?
2. Assuming that I'm taking the task, which algorithm should I choose to perform this kind of task?
3. How should I build the training data? See my comments — do you think my comments are ok to start with, or maybe I can break the data directly?
2017/11/29 | 2,077 | 8,745 | <issue_start>Goal-driven AI is the only kind of AI I am aware of. However, <NAME> claims [the following](http://www.hutter1.net/ai/uaibook.htm)
>
> Most, if not all known facets of intelligence can be formulated as goal driven or, more generally, as maximizing some utility function. It is, therefore, sufficient to study goal driven AI.
>
>
>
which doesn't necessarily imply that there are other types of AIs (apart from goal-driven), but (at least, in the way it is phrased) suggests that there are other types of AIs. If there exist other types of AIs, which are they?<issue_comment>username_1: Whether all AI are goal-driven depends on how far you're willing to extend the definition of 'goal'.
AIs are often defined through their primary tasks. Face recognition is a process; a face recognition AI is a limited, special-purpose AI. Paperclip collection is a process, the Paperclip Maximizer is a general, goal-driven AI.
There will always be *some sort of goal*, but it may in certain cases become so nebulous it's really a stretch to continue calling it a goal. For example, unsupervised learning (as suggested by <NAME>) - the goal you set is "learn", with very few qualifiers. The underlying AI may develop in surprising and unexpected ways; it may even modify its own goals.
Similarly, another "aimless" AI type is a simulator of biological intelligence - say, replicating neural connections of an insect in software, making it run and observing the behaviors. The AI itself is not given a specific goal - it's a tool/subject of study, not something directly producing results. Still, the goals are exactly the same as the goals of the original insect: feed, multiply, stay safe; unattainable due to hardware limitations but still pursued.
Therefore, whether all AIs are goal-driven is arguable. The only AI that is not actually driven by any goal whatsoever is one that's switched off. But whether you consider all the possible nebulous, abstract or accidental goals that drive the AI to call it actually goal-driven, that's up for debate.
Upvotes: 2 <issue_comment>username_2: I suppose the context was as follows: suppose you have an AI whose goals cannot be described by a function. It doesn't mean it has no goals, but that the formulation of the goals cannot be functionally dependent on known reality-model parameters.
In this interpretation, the text you cited has a clear meaning: the author doesn't want to discuss such kinds of objects, which is reasonable.
Such an interpretation is interesting as well, because it opens up interesting cases to think about :)
Upvotes: 1 <issue_comment>username_3: Following the standard AI textbook, *Artificial Intelligence: A Modern Approach*, apart from goal-based agents there are **Reflex Agents, Model-based Reflex agents, Model-based Goal-based agents, Utility agents and Learning agents.**
Simple Reflex agents are based on condition-action rules, kinda like if-then rule-based agents.
[](https://i.stack.imgur.com/0Vl4Y.jpg)
Model-based agents are those which have a model of the world in them, i.e. *how the world evolves and what my action will do*.
They come in two variants: the model-based reflex agent and the model-based goal-directed agent.
[](https://i.stack.imgur.com/9kP6E.jpg)
The former perceives the environment, infers from that how the world evolves, figures out what its action will do, and then applies some condition-based rules which lead to an action.
[](https://i.stack.imgur.com/89zPC.jpg)In the latter type, the module of condition-based rules is replaced by a module of goals, which can be further broken down into condition-based rules (which they literally are).
[](https://i.stack.imgur.com/hdeyE.jpg)
Utility agents are reinforcement-based agents: they act in the environment to maximize their utility, and have some element to check what the happiness measure or "performance measure" is. The definition of a rational agent is based on this: a rational agent will always act to maximize its average utility.
[](https://i.stack.imgur.com/Sy0w0.jpg)
Learning agents are those which contain some kind of critic, a learning element, and a problem generator. An isomorphism might be drawn to neural networks, which have the cost function as their critic and weight tuning as their learning element.
I have been saying "kinda" because they just seem to be metaphors to me.
I don't know why <NAME> described goal-based agents as utility-based agents; probably he had some kind of hybrid of the two.
Upvotes: 1 <issue_comment>username_4: Interesting question.
**Case One — Goals not Goal**
When systems are developed, they are based on requirements. The goals of the system is closely coupled to the goals of the development. This is the first example. Few systems and few development efforts to construct them are goal driven. They are rather systems and processes that tend toward a set of goals. Rarely is there a single loss function.
Consider an animal. Its metabolic stasis system may generate what seems like a goal, a set of signals to other systems to find and eat food. At another time its temperature stasis system may generate what seems like a goal, a set of signals to other systems to build a shelter to enter. The notion that there is some goal to survive is inconsistent with the way neurological and metabolic systems actually interact. Survival, some would say, was the goal of the animal's evolution, but that is not technically true. It survived, not because survival was a goal but because each of its ancestors survived as a result of transient and recurrent command signals for eating and sheltering. There is no aggregate loss function. There is no single neural pathway that all these signals are funneled through. The animal's behavioral subsystems may share some components, but the mechanisms of intent are largely independent. It only appears to a casual observer that the separate systems are collaborating toward survival.
This is true of some vehicle automation systems. Collision avoidance is a system with a set of goals and routing is a system with a separate set of goals. The goals are decoupled even though they may share a CPU, vision systems, and a large number of functional components and computing resources.
These systems are goals driven, using the plural, not goal driven.
**Case Two — Process of Elimination**
Another type of AI approach that is not technically goal driven is genetic algorithms. They replicate and some of the replicants are eliminated, but the entities themselves have no goal. The elimination criterion is not technically a goal. It is the opposite of one. The appearance of replicants that are not eliminated after many rounds of replication and concurrent narrowing of criteria may achieve some goal of the person running the algorithm, but the algorithm itself has no representation of a goal but rather its opposite, the final criterion of elimination in the last round.
**Case Three — Experimental Apparatus**
Many AI experiments to discover the nature of intelligence and ways to simulate aspects of intelligence have the external goal defined by the researcher, but the system used in the experiment, the AI apparatus, is not driven by a goal. It runs goal-less.
**Case Four — Toys**
Whether toys for adults or kids, toys may arise with some goal in mind, to amuse in some way, but the AI within them has no goal. There is no loss function to minimize or reinforcement or wellness signal to maximize. Its behavior may be less amusing if it were goal driven. What makes it amusing is its ability to produce patterns associated with intelligence, but to no particular purpose. In some ways, much human conversation and creative art is like this. Too much purpose, and the level of amusement decreases.
**Case Five — Robots That Mimic Postmodern Humans**
Humans in postmodern society, the ones that are truly relativists and have rejected modernism, are not particularly goal driven. They tend to wander around and do things that are intelligent, however they may act goal directed and then destroy the product of the transient goal. Building sand castles on the beach is an example. Although this may not be the focus of current AI research, since there is so much hope placed in AI to do things for us, such does not exclude the possibility of future robots that build something and then destroy it to build something else, without some practical objective.
Upvotes: 0 |
2017/11/30 | 1,263 | 4,792 | <issue_start>username_0: Technically speaking, could we code in natural language once we pass the Turing test? Would passing the Turing test at least simplify programming languages' syntax?<issue_comment>username_1: No. Humans pass the Turing Test but cannot express themselves with enough precision and little enough ambiguity to code well in English (natural language). No machine will improve on that until it can greatly improve on human-level clarity in expressing their purpose when programming.
Is it possible to program in natural language? Yes, I think so. But it would require superhuman intelligence to anticipate all the possible confusions that might arise when a word is not sufficiently precise or accurate to represent an software activity or mechanism. This probably would require adopting a rigorous convention in choosing a subset of unambiguous words to form a "natural" programming language (a kind of creole). Coding consistently using only those words would require discipline that exceeds the ability of most humans, methinks.
Upvotes: 0 <issue_comment>username_2: I think this is a very interesting question. It's highly hypothetical, so my simple answer will also, necessarily, be hypothetical.
* If the AI can pass the Turing Test, it can be assumed it has a command of the human language(s) used in that process
Self-awareness in this context would seem to be irrelevant, as would the question of whether the AI actually "understands" the content, or is merely imitating natural language.
* If the AI has a command of the human language(s), it would be a reasonable assumption that it could translate natural language instructions into machine code
It might be best to think of the problem in the simplest terms. If you asked this hypothetical AI to "draw a red square on the screen", it's hard to see that task as impossible, or even difficult.
Obviously, as the instructions became more complicated, there would be a greater margin for error, and it would be useful to integrate precise, formal terms related to functions, as [username_1 points out](https://ai.stackexchange.com/a/4626/1671).
Upvotes: 0 <issue_comment>username_3: Yes, we can program in natural language.
Specifications in natural language are normally neither precise nor complete. But when using a large language model like GPT-4, that is not a problem:
The model makes assumptions for parts you did not specify.
Sometimes that is good enough, but in general, you need to refine your specification until it is complete and precise enough to fit the requirements you try to express in natural language.
Note that you do not need to fully understand your actual requirements from the start, you can incrementally refine them too.
To illustrate it, I will give a example using GPT-4:
(All prompts are shown, there is no further prompt engineering, it is zero-shot. *Prompt:* is what the programmer writes, *GPT-4:*, *Output:* is the output from the current version of the program executed as python.)
---
Prompt:
```
Write a python function that can concatenate some strings, and print the result with 10 characters per line.
```
GPT-4:
```
def concat_and_print(strings):
concatenated = "".join(strings)
for i in range(0, len(concatenated), 10):
print(concatenated[i:i+10])
# Example usage
strings = ["hello", "world", "this", "is", "a", "test"]
concat_and_print(strings)
```
Output:
```
helloworld
thisisates
t
```
Prompt:
```
I want to give the strings as args.
```
GPT-4:
```
def concat_and_print(*strings):
concatenated = "".join(strings)
for i in range(0, len(concatenated), 10):
print(concatenated[i:i+10])
# Example usage
concat_and_print("hello", "world", "this", "is", "a", "test")
```
Output:
```
helloworld
thisisates
t
```
Prompt:
```
add spaces
```
GPT-4:
```
def concat_and_print(*strings):
concatenated = " ".join(strings)
for i in range(0, len(concatenated), 10):
print(concatenated[i:i+10])
# Example usage
concat_and_print("hello", "world", "this", "is", "a", "test")
```
Output:
```
hello worl
d this is
a test
```
---
Note how the specification is not only incomplete, but also imprecise. Much less exact than natural language could be. The specification `add spaces` does not say where the spaces should be added. Not even that they should be added to the output. It is not specified that they should be between the words, but that interpretation is what I meant.
(This is a real chat session with GPT-4, not cherry picked or edited.)
Upvotes: 1 [selected_answer]<issue_comment>username_4: [Controlled Natural Language](https://en.wikipedia.org/wiki/Controlled_natural_language) (of the second type) can be used for programming, with no need to pass the Turing test.
Upvotes: 1 |
2017/11/30 | 1,952 | 7,594 | <issue_start>username_0: A blog post called "Text Classification using Neural Networks" states that the derivative of the output of a sigmoid function is used to measure error rates.
What is the rationale for this?
I thought the derivative of a sigmoid function output is just the slope of the sigmoid line at a specific point.
Meaning it's steepest when the sigmoid output is 0.5 (occurring when the sigmoid function input is 0).
Why does a sigmoid function input of 0 imply error (if I understand correctly)?
**Source:** <https://machinelearnings.co/text-classification-using-neural-networks-f5cd7b8765c6>
>
> We use a sigmoid function to normalize values and its derivative to
> measure the error rate. Iterating and adjusting until our error rate
> is acceptably low.
>
>
>
```
def sigmoid(x):
output = 1/(1+np.exp(-x))
return output
def sigmoid_output_to_derivative(output):
return output*(1-output)
def train(...)
...
layer_2_error = y - layer_2
layer_2_delta = layer_2_error * sigmoid_output_to_derivative(layer_2)
...
```
**UPDATE**
Apologies. I don't think I was clear (I've updated the title)
I understand we don't need to use sigmoid as the activation function (we could use relu, tanh or softmax).
My question is about using the `derivative to measure the error rate` (full quotation from article above in yellow) -> what does the derivative of the activation function have to do with measuring/fixing the "error rate"?<issue_comment>username_1: Measuring the error rate of a neural network does not involve the derivative of the sigmoid function at all. It only needs the neural network's outputs and the expected outputs. It does not matter how the neural network got to those outputs; the outputs only have to be based on the inputs. What the author of that specific text is trying to say is that the derivative of the sigmoid function is important when you use the backpropagation algorithm to train the neural network. This method involves using derivatives to optimize the neural network. While this technique is more complicated with neural networks because of how many variables are involved, you can easily see the basics of this approach if you look at a calculus textbook and find the chapter that is usually titled Applications of the Derivative.
Upvotes: 1 <issue_comment>username_2: You don't need a sigmoid function if you don't want one. Any differentiable function will do. Sigmoid functions are just one of many suitable functions. You could write your own differentiable function if you want a proprietary solution.
Upvotes: 2 <issue_comment>username_3: This derivative is used when calculating the error of your machine learning algorithm during gradient based minimization methods.
Read below for more info.
When performing supervised classification (with X, Y data vectors of inputs and outcome data to train with) you begin with the error function
$$E(X, Y; \theta) = \sum_i \left(f(x_i; \theta) - y_i\right)^2$$

for the total error over all data instances $i$, where $f$ is your neural network, linear regression, or other method of interest, and $\theta$ is the set of weights. The goal here is to find weights that minimize your error when predicting the training targets $y$ (and that ideally generalize to new data as well). To be explicit, $f(x_i; \theta)$ outputs a predicted value which should match $y_i$, and $E$ measures how far off the prediction is.

So to train your classifier, you optimize $E$ with something like gradient descent. Thus when $\partial E/\partial \theta = 0$ (for a particular $\theta$), you have hit a local minimum of the error function, i.e. a point where the error of the current predictor is low, meaning it is (hopefully) a good predictor.

Note that $f$ here is not the same as an activation function; a neural network is defined differently than linear regression, etc., and requires a special form of gradient computation called backpropagation.

So when you take $\partial E/\partial \theta$ for a neural net, the activation function's derivative appears via the chain rule — that is the sense in which it is "used to measure the error".
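To make this concrete, here is a small sketch (with illustrative values of my own, not from the answer) comparing the analytic derivative of a squared error through a sigmoid against a finite-difference estimate; note the sigmoid's derivative `s * (1 - s)` appears explicitly in the chain rule:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, y = 2.0, 1.0   # one input and its target
theta = 0.5       # a single weight

def E(theta):
    # Squared error of a one-weight sigmoid model
    return (sigmoid(theta * x) - y) ** 2

# Analytic dE/dtheta via the chain rule
s = sigmoid(theta * x)
grad_analytic = 2 * (s - y) * s * (1 - s) * x

# Finite-difference estimate of the same derivative
eps = 1e-6
grad_numeric = (E(theta + eps) - E(theta - eps)) / (2 * eps)
```

The two values agree closely, confirming that the activation function's derivative is exactly what gradient-based training needs.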
Upvotes: 2 <issue_comment>username_4: The derivative of your loss is the "slope" at that given prediction. So by "moving down along the slope", we can reduce the value of the loss function.
Intuitively, one can imagine a ball rolling down a slope in the direction of the tangent. Eventually the ball will roll towards the lowest point (global minima), or a small pothole (local minima), or even some weird looking shape (saddle point) if you are very unlucky.
With some fixed input/output pair, we can consider the loss as a function of the parameters (weights). Here, the Z axis is the value of your loss *f* (what you are trying to optimize), while the X,Y axes are the parameters of the loss *f* (what you are allowed to change):
[](https://i.stack.imgur.com/sMaC5.png)
Note that the image provided is a bit misleading. In practice, we only know the value of the loss function at the location we have probed. The entire graph can only be drawn if we evaluate the loss function for every parameter. So we can't just look at the picture and know which location is the best.
Recall that the goal of the prediction task is to update the parameters in a way such that we minimize the loss function for some given input/output pair. Assuming the loss function is reasonably smooth, our best bet of reaching a lower value of the error rate is by taking a small step down the slope, in the direction given by the derivative of the function. The size of the "small step" is called the learning rate.
Under the assumption that the loss function looks similar given different input/output pairs, by iteratively moving down the slope for each datapoint, we can eventually reach a "good" set of weights that give a low error.
This is the motivation and idea of the back-propagation algorithm.
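The "small step down the slope" can be sketched in a few lines. Assuming a simple one-dimensional loss $f(w) = (w - 3)^2$ (an illustrative toy, not a real network loss), gradient descent moves $w$ toward the minimum at 3:

```python
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    # Derivative of the loss: the "slope" at w
    return 2.0 * (w - 3.0)

w = 0.0                # initial parameter
learning_rate = 0.1    # size of the "small step"
for _ in range(100):
    w = w - learning_rate * grad(w)   # step against the slope
```

After a few iterations, `w` has rolled down to the bottom of the bowl, just like the ball in the analogy.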
A good follow-up question might be: Why do we say that the loss function is similar given different data points? This is basically the same as assuming that there is some regularity in the data such that there is a "most normal looking loss function" that the data-set naturally approaches.
Hopefully that helps!
Upvotes: 0 <issue_comment>username_5: In order to train a neural network, you have to adjust the weights and biases to reduce the cost as much as possible. The way to do so is to subtract a small amount of the partial derivative of the cost w.r.t. w, b from the respective parameters.
if J is our cost function, after each iteration:
```
w = w - lr*dJ_dw, //where lr is a small scalar called learning rate and dJ_dw is the partial derivative of cost function w.r.t. w
```
and same for bias
```
b = b - lr*dJ_db, //dJ_db is the partial derivative of cost function w.r.t b
```
let's look at how the partial derivatives are calculated.
using sigmoid as activation function, and squared error function as cost, we have:
```
z = w*x + b
a = sigmoid(z) // sigmoid(z) = 1.0 / (1.0 + exp(-z)), a is the final output
J = (a - y)*(a - y) // where y is the expected output
```
For us to calculate partial derivatives of this cost function w.r.t. w, b, we need to use, chain rule of derivatives as:
```
dJ_dw = dJ_da * da_dz * dz_dw // dJ_da is the partial derivative of cost w.r.t. activation, a
dJ_db = dJ_da * da_dz * dz_db
```
in the above equations, ***da\_dz*** is the derivative of activation function (sigmoid in our case) which is ***sigmoid(z).(1 - sigmoid(z))***
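The equations above can be written out directly for a single input. This is a sketch with illustrative values (the variable names mirror the equations, but the numbers are my own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = 0.5, 0.1   # parameters
x, y = 2.0, 1.0   # input and expected output

# Forward pass, following the equations above
z = w * x + b
a = sigmoid(z)
J = (a - y) ** 2

# Chain rule, term by term
dJ_da = 2 * (a - y)
da_dz = sigmoid(z) * (1 - sigmoid(z))   # derivative of the activation
dz_dw = x
dz_db = 1.0

dJ_dw = dJ_da * da_dz * dz_dw
dJ_db = dJ_da * da_dz * dz_db

# One gradient-descent update
lr = 0.1
w = w - lr * dJ_dw
b = b - lr * dJ_db
```

Since the prediction `a` is below the target `y` here, both gradients are negative and the update pushes `w` and `b` up, reducing the cost on the next pass.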
Upvotes: 0 |
2017/12/02 | 971 | 3,969 | <issue_start>username_0: There has been recent uptick in interest in e**X**plainable **A**rtificial **I**ntelligence (XAI). Here is XAI's mission as stated on its [DARPA page](https://www.darpa.mil/program/explainable-artificial-intelligence):
>
> The Explainable AI (XAI) program aims to create a suite of machine learning techniques that:
>
>
> * Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
> * Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
>
>
>
The New York Times article [Can A.I. Be Taught to Explain Itself?](https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html) does a good job at explaining the need for XAI from a human interest perspective as well as providing a glancing outlook on the techniques being developed for the same. The force behind XAI movement seems to center around the (up and coming?) concept of ***right to explanation***, that is, the requirement that AI applications that significantly impact human lives via their decisions be able to explain to stakeholders the factors/reasons leading up to said decision.
How is the right to explanation reasonable, given the current standards at which we hold each other accountable?<issue_comment>username_1: >
> How is the right to explanation reasonable, given the current standards at which we hold each other accountable?
>
>
>
In short, it is quite reasonable.
More specifically, making AI accountable and responsible for explaining the decision seems reasonable because
* Humans (DARPA in this case) have **chosen to create, raise and evolve AI** with taxpayers' money. In our society as well, whenever humans have come together for some purpose (some formal arrangement like a government or otherwise), accountability and responsibility are assumed and assigned. Thus, expecting AI (which will take decisions on our behalf) to explain its decisions **seems only a natural extension of how humans currently operate**.
* Humans (*generally* at least) **don't have the super-powers or resources to control and manipulate** the rest of the population. But when they do (like our *political leaders*), we want them to be accountable.
* In rare cases, powerful humans **become powerful due to things which are accessible to everyone**, while an AI's super-powers won't be. So, our civilization would feel safer, and thus less insecure, with an AI that doesn't shut the door in our face when questions are asked.
Other benefits of AI that offers an explanation on how a decision is reached
* Far easier to debug and improve in its early stage.
* Can be customized further (and along the way) to amplify certain aspects like social-welfare over profitability, etc.
Upvotes: 2 <issue_comment>username_2: In the paper [Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For](https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1315&context=dltr), the authors claim that the "right to explanation" is unlikely to provide a complete remedy to algorithmic harms for at least two reasons
1. It is unclear when any explanation-related right can be triggered
2. The explanations required by the law, "meaningful information about the
logic of processing", might not be provided by current (explainable) AI methods
The authors then conclude that the "right to explanation" is *distracting*, but that other laws of the European Union's General Data Protection Regulation, such as
1. right to be forgotten
2. right to data portability
3. privacy by design
might compensate the defects of the "right to explanation".
To conclude, the "right to explanation" is an attempt to protect citizens against the possible undesirable consequences of the use of AI methods. However, it is not flawless and clearer rights might need to be promulgated.
Upvotes: 2 |
2017/12/04 | 795 | 3,733 | <issue_start>username_0: I am currently looking for AI use cases for Telco. What are the different AI use cases for Telcos/communication service providers?<issue_comment>username_1: AI could hold the key in automating and optimizing networks. On the subscriber side, ML and AI will assist telecom operators in profiling the subscribers. This will be achieved by analyzing network activity, conversion rate of offers and data usage trends.
Below are a few use cases and how they will transform the telecommunication sector. (Source H2o.ai blog <https://www.h2o.ai/telecom/> )
**Old generation telecom technologies.**
1. Reactive Maintenance
2. Network optimization with human intervention
3. Centralized intelligence
4. Security attack repair
5. Backlogged customer tickets
**Future generation AI based telecom technologies.**
1. Predictive Maintenance
2. Self-optimizing network
3. Optimal network quality
4. Intelligence at the edge
5. Security attack prediction
6. Improved customer experience through customer service chat bots.
7. Speech and voice services for customers which allow users to explore media content by spoken word rather than remote control.
8. Predictive maintenance, which is the ability to fix problems with telecom hardware such as cell towers, power lines, etc., before they happen, by detecting signals that usually lead to failure.
Upvotes: 1 <issue_comment>username_2: AI can also be used by Telcos to:
(1) Improve Usage & Retention efforts: Make relevant up-sell and cross-sell offers to the right users at the right time.
(2) Make segmentation more granular: make bespoke recommendations based on a user’s behavioral patterns and content preferences, and assess which call & data packages best suit different customer segments, thereby increasing sales success rates
Upvotes: 0 <issue_comment>username_3: Well AI can be used in Major Areas in Telecom
1/ Network Optimization
=======================
The network is trained to come up with the required parameter tuning to solve a particular issue. For example, in a 4G system we have user licenses at the cell level that determine how many users a cell will support. Every time the number of users on a cell grows, the network optimizer sends a work order to the back office to increase the number of users with a temporary license from a pool. An AI system can analyze the utilization and increase the licenses itself, or tune parameters to shift traffic to a neighbour cell; it can learn from the optimizer's behaviour and actions. This is a single example.
2/ Customer Intelligence
========================
Based on the customer's behavior, social media engagements and responses, usage patterns, past usage history and current context, the AI system can generate personalized offers that add value to the overall customer journey and turn the user into an engaged, happy customer.
3/ Network expansions and Dynamic capacity:
===========================================
Most network operators have per-resource license contracts with vendors, and excess capacity is charged for. In many cases, a particular area needs additional capacity in summer, but once the summer is gone that area becomes the lowest-utilized one. An AI system can learn from usage patterns when a particular area needs higher or lower capacity and adjust it accordingly, resulting in reduced costs.
Above are some of the use cases; many more can be added around customer engagement and customer experience management.
Upvotes: 0 |
2017/12/04 | 1,102 | 3,819 | <issue_start>username_0: >
> Given a neural network $f$ that takes as input $n$ data points: $x\_1, \dots, x\_n$. We say $f$ is **permutation invariant** if
>
>
> $$f(x\_1 ... x\_n) = f(\sigma(x\_1 ... x\_n))$$
>
>
> for any permutation $\sigma$.
>
>
>
How could we build such a neural network? The output can be anything (e.g. a number or vector), as long as it does not depend on the order of inputs. Is there any research work on these types of neural networks?<issue_comment>username_1: Traditionally, due to the way the network is structured, each input has a set of weights, that are connected to more inputs. If the inputs switch, the output will too.
### Approach 1
However, you can build a network that approaches this behaviour. In your training set, use batch learning and for each training sample, give all possible permutations to the network such that it learns to be permutation invariant. This will never be exactly invariant, it just might be close.
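A sketch of this augmentation step (assuming a `samples` list of (inputs, target) pairs; the names and values are illustrative):

```python
from itertools import permutations

# One training sample: three inputs and a target
samples = [((1.0, 2.0, 3.0), 0.5)]

# Expand each sample into all permutations of its inputs,
# keeping the target fixed, so the network sees every ordering.
augmented = [
    (perm, target)
    for inputs, target in samples
    for perm in permutations(inputs)
]
```

Note the factorial blow-up: a sample with $n$ inputs becomes $n!$ training examples, which is why this approach only scales to small $n$.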
### Approach 2
Another way to do this is to have the weights replicated for all inputs. For example, let's assume you have 3 inputs ($i\_0, i\_1, i\_2$), and the next/first hidden layer has 2 nodes ($h\_{10}, h\_{11}$) and activation function $\phi$. Assuming a fully connected layer, you have 2 weights $w\_0$ and $w\_1$. The hidden layer's nodes $h\_{10}$ and $h\_{11}$ are given, respectively, by
* $h\_{10} = \phi(i\_0 w\_0 + i\_1 w\_0 + i\_2 w\_0)$
* $h\_{11} = \phi(i\_0 w\_1 + i\_1 w\_1 + i\_2 w\_1)$
Thus giving you a hidden layer whose values are permutation invariant from the input. From now on, you can learn and build the rest of the network as you see fit. This is an approach derived from convolutional layers.
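A minimal NumPy sketch of this shared-weight layer (illustrative weights and inputs); permuting the inputs leaves the hidden activations unchanged:

```python
import numpy as np

def hidden_layer(inputs, weights):
    # Each hidden unit j applies its single shared weight w_j
    # to the *sum* of all inputs, so input order cannot matter.
    return 1.0 / (1.0 + np.exp(-weights * inputs.sum()))

w = np.array([0.3, -0.7])        # w_0 and w_1, one per hidden unit
x = np.array([1.0, 2.0, 3.0])    # i_0, i_1, i_2

h = hidden_layer(x, w)
h_permuted = hidden_layer(x[[2, 0, 1]], w)   # same inputs, reordered
```

Because the permutation invariance is enforced at this first layer, every later layer of the network inherits it for free.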
### Approach 3
Finally, you can use a dimension reduction to achieve this. I've published this in [Exploring Communication Protocols and Centralized Critics in Multi-Agent Deep Learning](https://www.researchgate.net/publication/341498982_Exploring_communication_protocols_and_centralized_critics_in_multi-agent_deep_learning), on Integrated Computer-Aided Engineering. This approach uses convolutional architectures to efficiently achieve permutation invariance.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Here is a few that might be what you are looking for:
* Deep Sets, <https://papers.nips.cc/paper/6931-deep-sets.pdf>
* BRUNO: A Deep Recurrent Model for Exchangeable Data, <https://arxiv.org/pdf/1802.07535.pdf>
* Deep Learning with Sets and Point Clouds, <https://openreview.net/pdf?id=HJF3iD9xe>
* Permutation-equivariant neural networks applied to dynamics prediction, <https://arxiv.org/pdf/1612.04530.pdf>
Upvotes: 4 <issue_comment>username_3: I have implemented Permutational Layer here using Keras: <https://github.com/username_3/superkeras/blob/master/permutational_layer.py>
You can call the `PermutationalModule` function to use it.
Implemented following this paper: [Permutation-equivariant neural networks applied to dynamics prediction](https://arxiv.org/pdf/1612.04530.pdf).
The idea is to compare all $N^2$ pairs formed from the $N$ inputs using a model with shared weights, then apply a pooling function $N$ times over the $N$ inputs.
On the outputs you could apply pooling again, but the paper does not mention another pooling step.
Upvotes: 3 <issue_comment>username_4: So, a practical application of this with a lot of research is in the deep lidar processing community. In that world, you have to do a lot of computation on point clouds which are completely unordered. One of the seminal works in this field is Point Net (<https://arxiv.org/pdf/1612.00593.pdf>) which solves this problem by performing all symmetric operations across channels. 1D convolutions, max pool, etc. Any network in this style that does not project to a 2D representation has to adhere to this rule.
Upvotes: 2 |
2017/12/04 | 1,137 | 4,236 | <issue_start>username_0: I have followed a course on machine learning, where we learned about the gradient descent (GD) and back-propagation (BP) algorithms, which can be used to update the weights of neural networks, and reinforcement learning, in particular, Q-learning. I implemented these concepts separately.
Now, I was thinking about using a neural network to approximate the Q-function, $Q(s, a)$, but I don't really know how to design the neural network and how to use back-propagation to update the weights of this neural network (NN).
1. What should the inputs and outputs of this NN be?
2. How can I use GD and BP to update the weights of such an NN? Or should I use a different algorithm to update the weights?<issue_comment>username_1: ### Gradient descent and back-propagation
In deep learning, gradient descent (GD) and back-propagation (BP) are used to update the weights of the neural network.
In reinforcement learning, one could map (state, action)-pairs to Q-values with a neural network. Then GD and BP can be used to update the weights of this neural network.
### How to design the neural network?
In this context, a neural network can be designed in different ways. A few are listed below:
1. The state and action are concatenated and fed to the neural network. The neural network is trained to return a single Q-value belonging to the previously mentioned state and action.
2. For each action, there is a neural network that provides the Q-value given a state. This is not desirable when a lot of actions exist.
3. Another option is to construct a neural network that accepts as input the state. The output layer consists of $K$-units where $K$ is the number of possible actions. Each output unit is trained to return the Q-value for a particular action.
### Q-learning update rule
This is the Q-learning update rule

So, we choose an action $a\_t$ (for example, with $\epsilon$-greedy behavior policy) in the state $s\_t$. Once the action $a\_t$ has been taken, we end up in a new state $s\_{t+1}$ and receive a reward $r\_{t+1}$ associated with it. To perform the update, we also need to choose the Q-value in $s\_{t+1}$ associated with the action $a$, i.e. $\max \_{a} Q\left(s\_{t+1}, a\right)$. Here, $\gamma$ is the discount factor and $\alpha$ is the learning rate.
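In the tabular case (before any neural network is involved), this update rule is a single line of code. A sketch with illustrative numbers:

```python
# Tabular Q-learning update for one transition (s, a, r, s')
alpha, gamma = 0.5, 0.9   # learning rate and discount factor

Q = {('s0', 'left'): 0.0, ('s0', 'right'): 0.0,
     ('s1', 'left'): 2.0, ('s1', 'right'): 4.0}

s, a, r, s_next = 's0', 'right', 1.0, 's1'

# max over actions in the successor state
max_next = max(Q[(s_next, act)] for act in ('left', 'right'))

# The update rule itself
Q[(s, a)] += alpha * (r + gamma * max_next - Q[(s, a)])
```

Replacing the table `Q` with a neural network is what the next section covers: the same target is formed, but the table write becomes a gradient step.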
### Back-propagation and Q-learning
If we use a neural network to map states (or state-action pairs) to Q-values, we can use a similar update rule, but we use GD and BP to update the weights of this neural network.
Here's a possible implementation of a function that would update the weights of such a neural network.
```
def update(self, old_state, old_action, new_state,
           reward, isFinalState = False):
    # The neural network has a learning rate associated with it.
    # It is advised not to use two learning rates
    learningRate = 1
    # Obtain the old value
    old_Q = self.getQ(old_state, old_action)
    # Obtain the max Q-value over the actions in the new state
    new_Q = float('-inf')
    for a in self.action_set:
        q_val = self.getQ(new_state, a)
        if q_val > new_Q:
            new_Q = q_val
    # In the final state there is no successor state to bootstrap from
    if isFinalState:
        diff = learningRate * (reward - old_Q)
    else:
        diff = learningRate * (reward + self.discount * new_Q - old_Q)
    # Compute the target
    target = old_Q + diff
    # Update the Q-value of the action that was actually taken,
    # using backpropagation
    self.updateQ(old_action, old_state, target)
```
In the pseudocode and in the Q-learning update formula above, you can see the discount factor $\gamma$. This simply denotes whether we are interested in an immediate reward or a more rewarding and enduring reward later on. This value is often set to 0.9.
Upvotes: 4 [selected_answer]<issue_comment>username_2: You should read up on these papers:
* [Deep Q-Networks](https://www.nature.com/articles/nature14236)
* [Asynchronous Deep Reinforcement Learning](http://proceedings.mlr.press/v48/mniha16.pdf)
Both by DeepMind. They achieved super-human results on video-games and other tasks. They describe the algorithms quite well. It is not as simple as the previous answer, which won't converge to a policy in complex environments.
Upvotes: 2 |
2017/12/05 | 565 | 2,334 | <issue_start>username_0: Collecting and labeling training data for supervised learning tasks is incredibly time-consuming and costly.
For instance, let's say you wrote a script that went on Google images and got you 5000 pictures for each of 10 classes. You then use an unsupervised algorithm to cluster them. Then, you train a supervised algorithm using the labels from the scraper as ground truth. Obviously, your network will perform more poorly than one with perfectly labeled data, but is there a way to guesstimate how much?
Perhaps there are 50 mislabeled images in each class. That would most likely be better than 500 mislabeled images, but I'm wondering if there is a way to predict how much (even if it is by someone's rules of thumb or something like that).<issue_comment>username_1: I think the crucial point here is what you precisely mean by *mislabelled*. Google's image classifier will likely do a 'pretty good' job of retrieving images with the given subject included, but how strict or lenient your class requisites are is quite important. For example, if one of your classes is 'dog' there may be hundreds of images procured from scraping that could display (examples off the top of my head, but you can get even more creative):
* Ancient canine fossils
* Wolves
* Stuffed animal dogs
* Hard mode dogs (i.e. partially occluded, variable lighting schemas, background color variation, intraclass/species variation)
Additionally, your computational tool will impact this. If you're using a neural network, the above issues can to some extent be accounted for, but a linear classifier would likely have difficulty adopting a broader/flexible view of your class.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I will break it down for you in very simple words. The accuracy will drop as you label more data wrong. In simpler words, accuracy is directly proportional to how well the data is labelled. If you think about it: suppose you have 2 categories, cats and dogs, and you have a dataset of 10,000 pictures, of which 50 are wrongly labelled. The accuracy will be less than with a perfectly labelled dataset, but not much less, since the neural network built will not be that bad. But suppose now you have 1,000 wrongly labelled, which is 1/10 of the dataset; then the NN will have much more erratic outcomes.
Upvotes: 1 |
2017/12/06 | 526 | 2,211 | <issue_start>username_0: Let's suppose I have 5 images, all of which I assure you are of the same item, but from various angles and perhaps different lighting conditions. I now supply you with an additional image, and I want a score of how likely this image is to contain the item depicted in the first five pictures.
Let us suppose that the item isn't too complex. It won't be a pile of fabric with a pattern on it, dropped several different ways, and won't be a keychain with keys in different conformations. It will also be more complex than just a blue ball shot from different angles. How might you approach the problem of scoring this image?
 |
2017/12/06 | 2,409 | 10,518 | <issue_start>username_0: Can an AI learn to play chess if you give it nothing but "the goal is to win" as starting criteria? If not, what is the minimum information the AI would need to be "seeded" with in order to learn to play chess? What techniques could be used to create an AI that learns to play chess independently?<issue_comment>username_1: Yes. “On the 6th of December, 2017, AlphaZero took over the chess world".
([article link](http://www.telegraph.co.uk/science/2017/12/06/entire-human-chess-knowledge-learned-surpassed-deepminds-alphazero/), [Arxiv paper](https://arxiv.org/pdf/1712.01815.pdf)).
Upvotes: 0 <issue_comment>username_2: I feel your question as presented, "Can an AI know when it is its turn?", is a bit naive. Performing this task is just a matter of asking "Is it my turn?" repeatedly. This can be implemented as simply as seeing if the turn clock is running for itself.
As for the more general question of whether an AI can learn and then "properly" play chess the answer is: Theoretically, yes.
* We currently have the necessary computing power for AI to beat humans at Chess (and Go) in real time.
* We currently have AI systems that can discover and test system rules.
* We currently have AI that are able to discover the mappings between control manipulation and their effect in platformer video games. (Though they do not appear to be able to discover the goal of a game on there own. They still need a human to define appropriate fitness criteria.)
Based on this, it seems possible for today's AI to be able to extract rules from a collection of previously played chess games. It would also seem possible to extract the rules by brute force from an oracle (referee) by setting up boards and asking that oracle if particular moves are valid.
You still need to tell it what winning is and why it would "want" to (i.e.: fitness criteria).
Upvotes: 0 <issue_comment>username_3: It's possible for an AI to learn chess without even knowing how to move the pieces. Google's AlphaZero didn't do that as their programmers coded the chess rules, but it's possible.
One can learn the rules from human played chess games. Once the rules are known, we could use reinforcement learning to improve playing strength (and other board games).
Upvotes: 2 <issue_comment>username_4: Your question is a bit vague. Is the ruleset defined? If not the bot can just do whatever it likes, and then it technically wouldn't be playing the game anymore. But by restricting its moves you are also implicitly giving it the ruleset.
Regardless, minimax can "solve" chess (or more generally, all games). The only things it needs are the "endgame states" (the set of all board positions that is a checkmate), and a function that given some input board state, output the set of all possible board states that might follow, where we define a board state to contain information about the location of each chess prices and who is the next player to move.
The ruleset is then implicitly defined by the "next possible states function", although the robot never actually knows the rules that govern each piece.
If the rule set is not defined, then perhaps one can imagine a bot that plays chess by making random moves at the press of a button. One then can then play chess with the robot by either 1. ignoring the fact that the bot is making random moves, and 2. undo any move the bot makes that is illegal, and then get it to move again.
You don't need something as complicated as AlphaGo to play chess naively, but the minimax approach needs a lot of computational power.
Upvotes: 0 <issue_comment>username_5: **Background**
Almost any decision making with a specific set of objectives and guidelines can be framed as a game. In mid-2017, it would have been difficult to suggest one AI approach to a winning chess AI that had value outside of chess game play. Historically, game strategy AI approaches varied greatly from game to game and research lab to research lab.
The approach to tic-tac-toe is the trivial case in AI, since it is easy to calculate the probability of winning for every possible sequence of moves on a single CPU in less than a second. At the other extreme of game complexity and dynamics, only the technical talent contractually obligated to secrecy in contracts with the portfolio managers of billionaires know how to play the game of high speed trading, optimized for maximum gain in position per hour.
The international champion in inter-computer chess play is Stockfish. It produced the highest consistency in wins in international tournaments thus far. Stockfish performs its best move search in a highly parallel CPU run time environment. A sophisticated pruning and depth prioritization involving late move reductions is applied to the alpha-beta search algorithm, in conjunction with a bit board.
In human-computer play Zor was the international champion in 2017.
**Change in AI Approach**
In 2017, DeepMind’s AlphaZero defeated Stockfish 28–72-0. (The number of draws is positioned in the center number.) That it won the tournament is not the remarkable advancement from an AI perspective. The same algorithm, configured differently also plays a winning game at Go and Shogi. The approach and design is described in [*Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm*, <NAME> et. al., 2017](https://arxiv.org/pdf/1712.01815.pdf)
AlphaZero is configured with the rules of the game to constrain the search. It begins training prior to tournament game play without prior knowledge of game play. That no triggered game strategies or heuristic rules to guide the search are known before learning game play, the authors claim the learning is *tabula rasa*, Latin meaning blank slate. Reinforcement learning is used to develop a strategy during self-play.
AlphaZero, "Averages over the position evaluations within a subtree, rather than computing the minimax evaluation of that subtree," as is commonly used in chess players based on the alpha-beta approach. It determines the relative value of states (board positions) based on a DNN trained to produce values associated with all of the state information from outcomes. This is distinct from valuation by summing points assigned to pieces and their locations. AlphaZero uses a Monte-Carlo tree search (MCTS) algorithm, using ordered deepening to minimize computing resource utilization per move.
The MCTS mitigates the numerical error, DNN convergence artifacts that can accumulate at the root of each sub-tree, via aggregation. The aggregation of data that contains reasonably normalized noise causes cancellation of the symmetrically distributed deviations.
The achievement is significant. The abstract of the paper states, "Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case."
**Remaining Domain Dependence**
Without dismissing the achievement, the claim of *no domain knowledge except the game rules* conflicts with the additional five items listed in the *Domain Knowledge* section of the same paper, reproduced here solely for convenience of the reader.
1. The input features describing the position, and the output features describing the move, are structured as a set of planes; i.e. the neural network architecture is matched to the grid-structure of the board.
2. AlphaZero is provided with perfect knowledge of the game rules. These are used during MCTS, to simulate the positions resulting from a sequence of moves, to determine game termination, and to score any simulations that reach a terminal state.
3. Knowledge of the rules is also used to encode the input planes (i.e. castling, repetition, no-progress) and output planes (how pieces move, promotions, and piece drops in shogi).
4. The typical number of legal moves is used to scale the exploration noise (see below).
5. Chess and shogi games exceeding a maximum number of steps (determined by typical game length) were terminated and assigned a drawn outcome; Go games were terminated and scored with Tromp-Taylor rules, similarly to previous work.
In addition to the game rules and the five domain dependencies listed above, there are further dependencies. The higher-level analysis of the consequences of the board geometry and game rules is not a fundamental game rule.
* In Go game play, "During MCTS, board positions were transformed using a randomly selected rotation or reflection before being evaluated by the neural network, so that the MonteCarlo evaluation is averaged over different biases."
* Rules are analyzed to scale, "Noise that is added to the prior policy to ensure exploration ... in proportion to the typical number of legal moves for that game type."
This totals eight dependencies, and far too few game types were tried to support the claim that AlphaZero is a "domain-independent search." But bending the concept of domain independence in the claims is more easily forgiven when legitimate and significant achievements are made by an enthusiastic team. The approach is sound, and the use of the craft by the DeepMind team is world class.
**Beyond Removing Remaining Dependencies**
Even after the eight dependencies are whittled down by the various teams involved in game play automation, there is further work that may become the focus of further research.
Board games are a particular kind of game. Domains where the game rules can mutate, such as markets, law, and war, and where the boundaries of the domain require knowledge of broader sets of domains, are more challenging. Still, it is reasonable to expect that dependencies will be reduced and that current approaches that work with board games may become adaptive in terms of game-rule acquisition.
Currently, approaches like AlphaZero require that input preparation be designed and executed and that outputs be executed on a virtual board. They do not yet discover game states by vision, execute moves with robotics, or acquire the rules of the game from natural language descriptions or the analysis of past games.
**Summary**
These limitations do not invalidate the significant advancement of a game player that needs only rules, a small set of domain specific configurations, and some self play time to defeat champion level dedicated artificial game players.
Upvotes: 0 |
2017/12/07 | 1,232 | 5,348 | <issue_start>username_0: Currently, AI is advancing fast in deep learning: [Entire human chess knowledge learned and surpassed by DeepMind's AlphaZero in four hours](http://www.telegraph.co.uk/science/2017/12/06/entire-human-chess-knowledge-learned-surpassed-deepminds-alphazero/).
As a layman, I'm taking this as a quite powerful searching algorithm, using artificial neural networks to identify the patterns of each game.
However, how well is AI doing in math?
For example, the key to the theory of the game [Nim](https://en.wikipedia.org/wiki/Nim) is the binary digital sum of the heap sizes, that is, the sum (in binary) neglecting all carries from one digit to another. This operation is also known as ["exclusive or" (xor)](https://en.wikipedia.org/wiki/Exclusive_or) or "vector addition over GF(2)".
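(For reference, this carry-less binary sum is exactly the bitwise XOR operator, so the winning criterion of Nim is a one-liner; the heap sizes below are arbitrary examples.)

```python
from functools import reduce

def nim_sum(heaps):
    # Binary digital sum of the heap sizes: addition in base 2 with all
    # carries discarded, i.e. the bitwise XOR of the sizes.
    return reduce(lambda a, b: a ^ b, heaps, 0)

# Nim theory: the player to move loses (under optimal play)
# exactly when the nim-sum is zero.
print(nim_sum([3, 4, 5]))  # -> 2, a winning position
print(nim_sum([1, 2, 3]))  # -> 0, a losing position
```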
Is AI good enough to discover/invent operations/logics such as "exclusive or", or, more advanced, abstract algebra in [finite field](https://en.wikipedia.org/wiki/Finite_field)?<issue_comment>username_1: [Nim was actually](https://en.wikipedia.org/wiki/Nim) one of the first games ever played by an electronic machine. It was called the Nimatron and was displayed at the 1940 New York World's Fair.
It is also well known that neural networks can model the XOR function if they have enough layers. Despite that, <NAME> is supposed to have killed neural networks in the sixties by asserting that single-layer perceptrons (networks with no hidden layer) cannot model XOR.
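(For what it's worth, one hidden layer already suffices. Here is a minimal network with hand-picked weights, not learned ones, using step activations, that computes XOR.)

```python
def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Two hidden units are enough; a single-layer perceptron cannot do this.
    h1 = step(x1 + x2 - 0.5)    # behaves like OR
    h2 = step(-x1 - x2 + 1.5)   # behaves like NAND
    return step(h1 + h2 - 1.5)  # AND of the two gives XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # reproduces the XOR truth table
```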
I think there were also some large scale projects to classify finite groups using computers.
Additionally, there have long been [automated theorem-proving program](https://en.wikipedia.org/wiki/Automated_theorem_proving)s that had some success in mathematical logic. I recently saw [a paper](https://arxiv.org/pdf/1606.04442.pdf) that used deep learning to improve one of these theorem provers.
That being said, doing real mathematics is probably as difficult as real language understanding. There doesn't seem to have been a real breakthrough yet.
Upvotes: 4 [selected_answer]<issue_comment>username_2: The question of whether nets can be trained to take over more and more of what was entirely within the domain of production systems was asked (to the dismay of those who worked on first order predicate calculus inference in the LISP community) back in the early 1990s.
**Artificial Networks Performing Logical Inference**
At Stanford University's Department of Linguistics the learning of the logic required to assemble a semantic graph by an artificial net has been demonstrated and documented in [*Recursive Neural Networks Can Learn Logical Semantics* by <NAME>, <NAME>, and <NAME>](https://nlp.stanford.edu/pubs/bpm_logical_semantics.pdf).
Even the earliest work on artificial networks was targeted toward learning logic, such as the elusive exclusive-or operation, which was achieved by adding a second layer to the original perceptron design and applying what we now call gradient descent.
**Distinct from Automatic Theorem Proving**
Most of the early work on computer proofs of theorems was based on the production system approach (sometimes called expert systems). These are rule-based systems, not artificial networks. It was thought that the rules of predicate logic could be executed in proper sequence by pattern matching the antecedents (conditions in which a mathematical technique based on axiomatic information and already proven theory may be applied) in proper order. Some success was achieved using heuristic meta-rules.
Using artificial networks to prove a theorem is an entirely different approach. To take semantic learning further so that an artificial network could learn how to assemble a mathematical proof requires three further levels of abstraction in the network learning model.
* Learning the known first order predicate logic rules of inference
* Learning the mechanics of applying those rules to proposed theorems
* Learning functional heuristics to know what to try first
**Evidence It Can Be Done**
The evidence that artificial networks may be developed which can learn to construct a mathematical proof is not that current artificial nets can perform some natural language functioning or creatively develop a melody or some interior design. The reason DARPA has traditionally invested in neural network research pointed in the direction of simulating logic is the proof of concept proposed by Minsky.
The strongest evidence that neural networks can potentially learn the various layers of abstraction listed above to actually do math is that human children cannot prove a theorem or even read one out loud understandably, yet some may grow up to be proficient in theorem proving. The biological neural nets of the brain must learn such proficiency.
As of this writing, no counter-example exists that an artificial network cannot achieve the proficiency of Gauss or Gödel, so the idea cannot logically be dismissed. Many advanced research projects continue to target higher cognitive skills as their AI objective.
**Public Access**
It is likely, since much of the work on logical inference and the investigation into whether artificial networks could be trained to do it was funded by government bodies, that some of the results of that research are not available to the public.
Upvotes: 2 |
2017/12/08 | 2,812 | 10,524 | <issue_start>username_0: What is the fundamental difference between convolutional neural networks and recurrent neural networks? Where are they applied?<issue_comment>username_1: Basically, a CNN saves a set of weights and applies them spatially. For example, in a layer, I could have 32 sets of weights (also called feature maps). Each set of weights is a 3x3 block, meaning I have 3x3x32=288 weights for that layer. If you gave me an input image, for each 3x3 map, I slide it across all the pixels in the image, multiplying the regions together. I repeat this for all 32 feature maps, and pass the outputs on. So, I am learning a few weights that I can apply at a lot of locations.
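That sliding of shared weights can be sketched in a few lines of plain Python (a single filter, no padding or stride, just to show the same weights being reused at every location; the image and filter values are made up):

```python
def conv2d(image, kernel):
    # Slide the kernel over every valid position of the image and take
    # the dot product; the same weights are reused at every location.
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

# A 3x3 vertical-edge filter on a 5x5 image: bright left half, dark right.
image = [[1, 1, 0, 0, 0]] * 5
kernel = [[1, 0, -1]] * 3
print(conv2d(image, kernel))  # -> [[3, 3, 0], [3, 3, 0], [3, 3, 0]]
```

The large responses mark where the vertical edge sits, no matter where in the image it appears.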
For an RNN, it is a set of weights applied temporally (through time). An input comes in, and is multiplied by the weight. The networks saves an internal state and puts out some sort of output. Then, the next piece of data comes in, and is multiplied by the weight. However, the internal state that was created from the last piece of data also comes in and is multiplied by a different weight. Those are added and the output comes from an activation applied to the sum, times another weight. The internal state is updated, and the process repeats.
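A scalar sketch of that update (hypothetical weights; real RNNs use weight matrices and vectors) shows how the internal state carries information forward through time:

```python
from math import tanh

def rnn_step(x, h_prev, w_x, w_h, w_out):
    # The same weights are applied at every time step: one on the new
    # input, one on the internal state carried over from the last step.
    h = tanh(w_x * x + w_h * h_prev)  # updated internal state
    y = w_out * h                     # output for this time step
    return h, y

h = 0.0
for x in [1.0, 0.0, 0.0]:  # only the first input is non-zero
    h, y = rnn_step(x, h, w_x=1.0, w_h=0.5, w_out=1.0)
    print(y)  # stays non-zero: the state remembers the earlier input
```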
CNN's work really well for computer vision. At the low levels, you often want to find things like vertical and horizontal lines. Those kinds of things are going to be all over the images, so it makes sense to have weights that you can apply anywhere in the images.
RNN's are really good for natural language processing. You can imagine that the next word in a sentence will be highly influenced by the ones that came before it, so it makes sense to carry that internal state forward and have a small set of weights that can apply to any input.
However, there are many more applications. In addition, CNN's have performed well on NLP tasks. There are also more advanced versions of RNN's called LSTM's that you could check out.
For an explanation of CNN's, go to the [Stanford CS231n course](http://cs231n.stanford.edu/syllabus.html). Especially check out lecture 5. There are full class videos on YouTube.
For an explanation of RNN's, go [here](http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/).
Upvotes: 3 <issue_comment>username_2: On a basic level, an RNN is a neural network whose next state depends on its past state(s), while a CNN is a neural network that does dimensionality reduction (make large data smaller while preserving information) via convolution. [See this](https://en.wikipedia.org/wiki/Kernel_(image_processing)) for more info on convolutions
Upvotes: 0 <issue_comment>username_3: **[Recurrent neural networks](https://ai.stackexchange.com/a/12043/2444)** (RNNs) are artificial neural networks (ANNs) that have one or more recurrent (or cyclic) connections, as opposed to just having feed-forward connections, like a feed-forward neural network (FFNN).
These cyclic connections are used to keep track of temporal relations or dependencies between the elements of a sequence. Hence, RNNs are suited for sequence prediction or related tasks.
In the picture below, you can observe an RNN on the left (that contains only one hidden unit) that is equivalent to the RNN on the right, which is its "unfolded" version. For example, we can observe that $\bf h\_1$ (the hidden unit at time step $t=1$) receives both an input $\bf x\_1$ and the value of the hidden unit at the previous time step, that is, $\bf h\_0$.
[](https://i.stack.imgur.com/7HI1R.png)
The cyclic connections (or the weights of the cyclic edges), like the feed-forward connections, are learned using an optimisation algorithm (like gradient descent) often combined with back-propagation (which is used to compute the gradient of the loss function).
**Convolutional neural networks** (CNNs) are ANNs that perform one or more [convolution](https://en.wikipedia.org/wiki/Convolution) (or [cross-correlation](https://en.wikipedia.org/wiki/Cross-correlation)) operations (often followed by a [down-sampling](https://en.wikipedia.org/wiki/Downsampling_(signal_processing)) operation).
The convolution is an operation that takes two functions, $\bf f$ and $\bf h$, as input and produces a third function, $\bf g = f \circledast h$, where the symbol $\circledast$ denotes the convolution operation. In the context of CNNs, the input function $\bf f$ can e.g. be an image (which can be thought of as a function from 2D coordinates to RGB or grayscale values). The other function $\bf h$ is called the "kernel" (or filter), which can be thought of as a (small and square) matrix (which contains the output of the function $\bf h$). $\bf f$ can also be thought of as a (big) matrix (which contains, for each cell, e.g. its grayscale value).
In the context of CNNs, the *convolution* operation can be thought of as dot product between the kernel $\bf h$ (a matrix) and several parts of the input (a matrix).
In the picture below, we perform an [*element-wise multiplication*](https://en.wikipedia.org/wiki/Hadamard_product_(matrices)) between the kernel $\bf h$ and part of the input $\bf f$, then we sum the elements of the resulting matrix, and that is the value of the convolution operation for that specific part of the input.
[](https://i.stack.imgur.com/EiujB.png)
To be more concrete, in the picture above, we are performing the following operation
\begin{align}
\sum\_{ij}
\left(
\begin{bmatrix}
1 & 0 & 0\\
1 & 1 & 0\\
1 & 1 & 1
\end{bmatrix}
\otimes
\begin{bmatrix}
1 & 0 & 1\\
0 & 1 & 0\\
1 & 0 & 1
\end{bmatrix}
\right)
=
\sum\_{ij}
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
1 & 0 & 1
\end{bmatrix}
= 4
\end{align}
where $\otimes$ is the element-wise multiplication and the summation $\sum\_{ij}$ is over all rows $i$ and columns $j$ (of the matrices).
To compute all elements of $\bf g$, we can think of the kernel $\bf h$ as being slided over the matrix $\bf f$.
In general, the kernel function $\bf h$ can be fixed. However, in the context of CNNs, the kernel $\bf h$ represents the learnable parameters of the CNN: in other words, during the training procedure (using e.g. gradient descent and back-propagation), this kernel $\bf h$ (which thus can be thought of as a matrix of weights) changes.
In the context of CNNs, there is often more than one kernel: in other words, it is often the case that a sequence of kernels $\bf h\_1, h\_2, \dots, h\_k$ is applied to $\bf f$ to produce a sequence of convolutions $\bf g\_1, g\_2, \dots, g\_k$. Each kernel $\bf h\_i$ is used to "detect different features of the input", so these kernels are different from each other.
A *down-sampling* operation is an operation that reduces the input size while attempting to maintain as much information as possible. For example, if the input size is a $2 \times 2$ matrix $\bf f = \begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix}$, a common down-sampling operation is called the *max-pooling*, which, in the case of $\bf f$, returns $3$ (the maximum element of $\bf f$).
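The worked example above, together with this max-pooling step, can be checked directly (a sketch in plain Python):

```python
def convolve_patch(f_patch, h):
    # One output entry of g: element-wise multiply a patch of the input
    # with the kernel, then sum the resulting matrix.
    return sum(f_patch[i][j] * h[i][j]
               for i in range(len(h)) for j in range(len(h[0])))

f_patch = [[1, 0, 0],
           [1, 1, 0],
           [1, 1, 1]]
kernel = [[1, 0, 1],
          [0, 1, 0],
          [1, 0, 1]]
print(convolve_patch(f_patch, kernel))  # -> 4, as in the equation above

def max_pool(f):
    # Down-sampling: keep only the largest value of the block.
    return max(max(row) for row in f)

print(max_pool([[1, 2], [3, 0]]))  # -> 3
```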
CNNs are particularly suited to deal with high-dimensional inputs (e.g. images), because, compared to FFNNs, they use a smaller number of learnable parameters (which, in the context of CNNs, are the kernels). So, they are often used to e.g. classify images.
**What is the fundamental difference between RNNs and CNNs?** RNNs have recurrent connections while CNNs do not necessarily have them. The fundamental operation of a CNN is the convolution operation, which is not present in a standard RNN.
Upvotes: 3 <issue_comment>username_4: **CNN vs RNN**
* A CNN will learn to recognize patterns across space, while an RNN is useful for solving temporal data problems.
* CNNs have become the go-to method for solving any image data challenge, while RNNs are ideal for text and speech analysis.
* In a very general way, a CNN will learn to recognize components of an image (e.g., lines, curves, etc.) and then learn to combine these components to recognize larger structures (e.g., faces, objects, etc.), while an RNN will similarly learn to recognize patterns across time. So an RNN that is trained to convert speech to text should first learn low-level features like characters, then higher-level features like phonemes, and then word detection in the audio clip.
---
**CNN**
A convolutional network (ConvNet) is made up of layers.
In a convolutional network (ConvNet), there are basically three types of layers:
* Convolution layer
* Pooling layer
* Fully connected layer
Of these, the convolution layer applies the convolution operation to the input 3D tensor. Different filters extract different kinds of features from an image. The below GIF illustrates this point really well:

Here the filter is the green 3x3 matrix while the image is the blue 7x7 matrix.
In a CNN, the input passes through many such filter layers to give an output layer that can again be a fully connected NN layer or a 3D tensor.

For example, in the diagram above, the input image passes through a convolutional layer, then a pooling layer, then another convolutional layer and pooling layer; the resulting 3D tensor is flattened like a 1D neural network layer, passed to a fully connected layer, and finally to a softmax layer. This makes a CNN.
---
**RNN**
Recurrent Neural Networks (RNNs) are a type of neural network where the output from the previous step is fed as input to the current step.

Here, $x\_{t-1}$, $x\_{t}$ and $x\_{t+1}$ are the input values that occur at specific time steps and are fed into the RNN; they pass through the hidden layers $h\_{t-1}$, $h\_{t}$ and $h\_{t+1}$, which produce the outputs $o\_{t-1}$, $o\_{t}$ and $o\_{t+1}$ respectively.
Upvotes: 2 <issue_comment>username_5: In the case of applying both to natural language, CNN's are good at extracting local and position-invariant features, but they do not capture long-range semantic dependencies; they just consider local key phrases.
So, when the result is determined by the entire sentence or by a long-range semantic dependency, a CNN is not effective, as shown in [this](https://arxiv.org/abs/1702.01923) paper where the authors compared both architectures on NLP tasks.
This can be extended to the general case.
Upvotes: 0 |
2017/12/09 | 1,271 | 5,066 | <issue_start>username_0: Is it possible to give a **rule of thumb estimate about the size of neural networks that are trainable on common consumer-grade GPUs**?
For example, the [Emergence of Locomotion (Reinforcement)](https://arxiv.org/abs/1707.02286) paper trains a network using tanh activation of the neurons. They have a 3 layer NN with 300,200,100 units for the *Planar Walker*. But they don’t report the hardware and time.
But could a rule of thumb be developed?
Also, just based on current empirical results. So, for example, $X$ units using sigmoid activation can run $Y$ learning iterations per hour on a 1060.
Or using activation function $a$ instead of $b$ causes a $n$ times decrease in performance.
If a student/researcher/curious mind is going to buy a GPU for playing around with these networks, how do you decide what you get? A 1060 is apparently the entry-level budget option, but how can you evaluate if it is not smarter to just get a crappy netbook instead of building a high-power desktop and spend the saved $ on on-demand cloud infrastructure.
Motivation for the question: I just purchased a 1060 and (clever, to ask the question afterwards huh) wonder if I should have just kept the $ and made a Google Cloud account. And if I can run my master thesis simulation on the GPU.<issue_comment>username_1: It depends on what you need. You can train any size of network on any resource. The problem is the time of training. If you want to train Inception on an average CPU it will take months to converge. So, it all depends on how long you can wait to see your results based on your network. As in neural nets we do not have only one operation but many (like concatenating, max pooling, padding etc.), it is impossible to make an estimation as you are searching for. Just start training some infamous networks and measure the time. Then, you can interpolate how long it will take to train networks that you are searching for.
Upvotes: 2 <issue_comment>username_2: As a caveat, I’d suggest that unless you’re pushing up against fundamental technological limits, computation speed and resources should be secondary to design rationale when developing a neural network architecture.
That said, earlier this year I finished my MS thesis, which involved bioinformatics analytics pipelines with whole-genome sequencing data - that project took over 100,000 hours of compute time to develop, according to our cluster's job manager. When you're on a deadline, resources can be a real constraint and speed can be critical.
So, to answer your questions as I understand them:
**Would I have been better off to use the money to buy time in the cloud?**
Probably. The few hundred dollars you spent on the 1060 would take you far training your model(s) in the cloud. Further, as far as I can tell, you don’t require the GPU to be cranking 100% of the time (you would if you were, say, mining crypto currencies). Finally, with cloud instances you could scale, training multiple models at once, which can speed up the exploration and validation of any architecture you settle on.
**Is there a way to gauge the compute time of a neural network on a given GPU**
Well, [Big O](https://en.m.wikipedia.org/wiki/Big_O_notation) is one estimator, but it sounds like you want a more precise method. I’m sure they exist, but I’d counter that you can make your estimation with simple back of the envelope calculations that account for threads, memory, code iterations, etc. Do you really want to dig into the GPU processing pipeline on the 1060? You may be able to come up with a very good estimate by understanding everything happening between your code and the metal, but ultimately it’s probably not worth the time and effort; it will likely confirm that Big O notation (the simple model, if you will) captures most of the variation in compute time. One thing you can do if you notice bottlenecks is performance profiling.
Upvotes: 2 <issue_comment>username_3: Usually the problem is to fit the model into video RAM.
If it does not, you cannot train your model at all without big efforts (like training parts of the model separately).
If it does, time is your only problem. But the difference in training time between consumer GPUs like the Nvidia 1080 and much more expensive GPU accelerators like the Nvidia K80 is not very large. Actually, the best consumer cards are faster than GPU accelerators, but lack other properties like VRAM. Random comparisons and benchmarks: [Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning](http://timdettmers.com/2017/04/09/which-gpu-for-deep-learning/), [A Comparison between NVIDIA’s GeForce GTX 1080 and Tesla P100 for Deep Learning](https://medium.com/@alexbaldo/a-comparison-between-nvidias-geforce-gtx-1080-and-tesla-p100-for-deep-learning-81a918d5b2c7).
To calculate if your model fits into VRAM, you just approximate how much data and which hyperparameters you have (inputs, outputs, weights, layers, batch size, which datatype, and so on).
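A rough back-of-the-envelope sketch of that approximation (all numbers here are illustrative assumptions: float32 values, weights plus one gradient copy, activations kept per sample for the backward pass; real frameworks add overhead on top):

```python
def vram_estimate_mb(n_params, batch_size, activations_per_sample,
                     bytes_per_value=4):
    # Weights and their gradients: roughly 2x the parameter count.
    weights_bytes = 2 * n_params * bytes_per_value
    # Activations stored for backprop scale with the batch size.
    activation_bytes = batch_size * activations_per_sample * bytes_per_value
    return (weights_bytes + activation_bytes) / 1024 ** 2

# E.g. a 10M-parameter net, batch of 64, ~1M activation values per sample:
print(vram_estimate_mb(10_000_000, 64, 1_000_000))  # roughly 320 MB
```

If the estimate comes out well below your card's VRAM, the remaining question is only training time.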
Upvotes: 4 [selected_answer] |
2017/12/11 | 1,577 | 7,011 | <issue_start>username_0: I heard that your ML model's quality depends directly on the quality and the quantity of data you use.
So I was thinking that can question answers be used as data to train an algorithm which can solve any high school science problems? Because we do have a gazillion number of high school books with millions of Question-Answers which are both high in quality as well as quantity.
P.S: I don't have any in-depth knowledge in any of the AI fields, so please answer accordingly!<issue_comment>username_1: The short answer, I guess, is yes. In theory, you could have a massive, insane network that could take as input any high school science textbook question and output an answer. But think about what that entails:
Take in a question, in any of the various languages that high school science textbooks can be written in. Get all the relevant information out of that. This includes questions like "List four noble gases" as well as ones about applying Kirchoff's laws to a circuit as well as explaining how the ribosome changes its shape to produce proteins. There is soooooooo much information that is contained in just the wording of those questions and so many topics. Then, let's say you have converted the question into a representation that can be "understood" by the algorithm. I want to emphasize that ML models do not "know", "understand", "think", or any other of the words we use to personify them. They are basically just a mathematical function of some level of complexity (usually very complex). That being said, now the algorithm needs to relate this tractable representation of the question to the correct answer.
Since everything before this created some representation of the question, you could view that as a function mapping the input to this representation. Then, everything between intermediate representation and output answer is another function. So, your model needs to learn a single function that is the sum of all human high school level science understanding.
These algorithms you want to build would end up being staggeringly complex. In addition, as you add complexity to an ML model, you need significantly more data to train it. How will you take every high school science book in the world and convert it into digital pairs of questions and answers? How will you handle that some of the answers are incorrect? What is the benefit of this system?
The long answer, I think, is no. It makes no sense to do this. Instead of learning this complex mapping, just build some solvers with equations built in that the user inputs some numbers to and selects the equations. It will take so much less time to build and deploy and I guarantee work better. There are people building ML systems that are trying to get the SAT entirely right [(see page 33 of this report)](http://aiindex.org/2017-report.pdf?utm_campaign=Revue%20newsletter&utm_medium=Newsletter&utm_source=Deep%20Learning%20Weekly). First, I don't think this is what you're asking and second, it still makes more sense, in my mind, to build some solvers. Hopefully, this all makes sense and, if anyone sees things I misstated, let me know.
Upvotes: 1 [selected_answer]<issue_comment>username_2: It's true that your AI model's performance depends on the quality of the data that you use. However, high-quality data alone is insufficient to guarantee that your model will learn effectively and score well on a particular dataset. Other factors, such as smarter algorithms and the use of high-performance computing infrastructure, must be factored in for your AI system to perform well.
Although AI research has made massive progress in the past decade, ML engineers are yet to build a system that can match the general scope and generalization ability of the human mind. Up to the first decade of the 2000s, AI was dominated by expert systems that emulated the decision-making ability of an expert. AI at this point couldn't process unstructured data, and therefore it lacked the capacity to sit for and pass high school exams.
This was until 2011, when IBM Watson, a question-answering computer system, competed against two former Jeopardy quiz show winners and placed first. IBM Watson was built on top of DeepQA (a computer system that could answer natural language questions) and UIMA (a software architecture to process and analyse unstructured information). Below is a link to a paper giving an overview of how IBM's Watson works <https://www.aaai.org/Magazine/Watson/watson.php>
In 2012, a team led by <NAME> won the ImageNet competition by exploiting deep convolutional networks. This was soon followed by Dahl's team winning the Merck Molecular Activity Challenge using a deep neural network architecture. <NAME>'s work in CNN's, <NAME>'s back-propagation and stochastic gradient descent approach to training datasets, alongside <NAME>'s large-scale use of GPU's, ignited accelerated progress in ML. This was frequently referred to as the unreasonable effectiveness of deep learning.
Following recent advances in fields such as image captioning, natural language processing, information retrieval and computer vision it is highly probable that current generation AI systems can pass high school exams such as SAT.
The Allen AI Institute has made significant progress in developing AI systems that can read, learn and express that understanding through question answering and explanation. Founded by <NAME> Microsoft's co-founder, the Allen AI Institute's singular focus according to their mission is to conduct high impact research in the field of AI. Below is a news link covering their cognitive system passing high school exams [fortune.com/2015/09/21/computer-artificial-intelligence-math/](http://fortune.com/2015/09/21/computer-artificial-intelligence-math/)
So far Allen AI Institute has demonstrated a cognitive platform called Geos that is capable of answering geometry questions as well as the average high school student. While another system called Aristo can answer high school science exam questions by leveraging information extraction alongside knowledge representation and reasoning models. You can access AAI's GeoS service here <http://allenai.org/euclid/> and Aristo here <http://allenai.org/aristo/>
Meanwhile researchers working on the Todai project in Japan have demonstrated a cognitive system that is capable of passing the Tokyo University Mathematics entrance exam. My conclusion from the above examples is that possibly we already have AI that can sit for and pass high school exams.
Upvotes: 1 <issue_comment>username_3: I'm thinking that you could write an AI that takes the question as input, weights it, and googles info based on the first layer of neurons, then takes the first two to three pages of results and spits out an answer. It would be a crapshoot, but maybe you could take the list of results, choose one using another layer, choose the info from the page using a third layer, then answering the question using the info.
Upvotes: 0 |
2017/12/11 | 169 | 646 | <issue_start>username_0: I am planning to build an app which will count the number of squats from videos. Assuming that the user and camera do not move, are there ways I can count the number of squats? Do such models to understand human activity and pose exist?<issue_comment>username_1: Yes, this is a rather straightforward problem that should be able to be solved using openCV or ANNs.
Check out this paper to get you started: <https://arxiv.org/pdf/1701.08936.pdf>
Upvotes: 2 <issue_comment>username_2: You can do that pretty easily using Posenet or Openpose. Train the keypoints for Squats and then count it. :)
Upvotes: 2 [selected_answer] |
2017/12/14 | 965 | 4,162 | <issue_start>username_0: ### Goal
I want to create an artificial intelligence to compete against other players in a board game.
### Game explanation
I have a board game similar to 'snakes and ladders'. You have to get to a final field before your opponent does. But instead of depending on luck (throwing the dices) this game uses something like 'food'. You can go as far as you'd like, but it costs food to move (the more you move the more one extra field costs) and you can only get food in some special fields. And there aren't any snakes or ladders so you have to run the whole part. There are some more rules, for example, you can go backward and are only allowed to go into the goal if you've got less than some amount of 'food' and there are some extra fields with other special effects.
### For one player
If there were only one player, then, since there isn't anything like 'luck' in this game, I could theoretically enumerate every possible sequence of moves to find the single best one. Practically, I should use an algorithm that requires less computational power.
### For two or more players
The challenge comes with the other player(s). I cannot visit a field that is already taken. And some fields give me bonuses depending on my position relative to the other player (I'll just talk about two-player games). For example, a special field gives me some extra food only if I'm behind my opponent.
### My question
It would be ideal if I had some kind of a neural network that knows the field bonuses: I would give it my position, the opponent's position, the food and so on (the **state** of the game), and it would compute a value between -100 and 100 (assuming fields from 0 to 100) of how many fields I should move (forward or backward).
I read a bit about Q-learning, deep reinforcement learning and deep neural networks. Is this the right approach to solving my problem? And if yes, have you got any more concrete ideas? The multiple actors and the sheer endless possibilities for moving, depending on endless states, make it hard for me to think of anything. Or is there a different, far better way that slipped past me?<issue_comment>username_1: Assuming it is a turn-based game and, for each turn, there's an optimal choice that will lead to the winning state (zero-sum), you can basically simplify the question to "*What is the optimal sequence of moves for me to win, considering the current situation presented on the board?*". So you will need to run your algorithm every turn, as the optimal sequence will change when the board changes.
There's a relatively reliable AI algorithm that has long been used to win games like chess, backgammon, etc. The technique is called [**minimax**](https://en.m.wikipedia.org/wiki/Minimax). In summary, minimax is simply a search algorithm that minimizes the other player's score while maximizing your own.
One of the problems you will encounter is that the search tree becomes very wide as we search through deeper parts of the tree, so it is essential to also implement [**alpha-beta pruning**](https://en.m.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning) to cut down the amount of search. In essence, alpha-beta pruning eliminates branches that provably cannot affect the optimal choice, thus reducing the number of positions examined and speeding up the algorithm.
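A game-agnostic sketch of minimax with alpha-beta pruning; the `moves`, `apply_move` and `evaluate` callbacks are placeholders you would implement for your own game, and the toy tree at the bottom only demonstrates the call:

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Minimax with alpha-beta pruning.

    moves(state) lists legal moves, apply_move(state, m) returns the
    successor state, evaluate(state) scores a leaf for the maximizer.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for m in legal:
            value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, False,
                                         moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:   # opponent already has a better option elsewhere
                break
        return value
    value = math.inf
    for m in legal:
        value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                     alpha, beta, True,
                                     moves, apply_move, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Toy tree: the maximizer picks a branch, then the minimizer picks a leaf.
tree = [[3, 5], [2, 9]]
best = alphabeta(tree, 2, -math.inf, math.inf, True,
                 moves=lambda s: range(len(s)) if isinstance(s, list) else [],
                 apply_move=lambda s, m: s[m],
                 evaluate=lambda s: s)
```

On this toy tree the result is `max(min(3, 5), min(2, 9)) = 3`, and the second branch is cut short as soon as the leaf `2` is seen.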
Upvotes: 3 <issue_comment>username_2: To make an AI opponent, you'll need to create a sub-routine that considers the current state of the board and chooses a move, just like the player would.
Now, how does this subroutine choose what move to make? You need to take the current board and calculate its value. Then consider every possible move you could make. Then consider, for each of those, every possible move your opponent could make. Iterate to maximum depth. You will have constructed a tree structure (breadth first).
Prune your tree. Any branch that guarantees you a decrease in board value can be pruned.
Then compare the remaining branches: either optimistically weigh the best possible outcomes and choose those branches, or continue pruning based on the potential worst outcomes.
Upvotes: 1 |
2017/12/15 | 1,490 | 6,445 | <issue_start>username_0: I have a binary classification problem, where a false positive error has a very big cost compared to the false negative error.
Is there a way to design a classifier for such problems (preferably, with an implementation of the algorithm)?<issue_comment>username_1: There is no predefined classifier for any problem. Two main features of a classifier are
* its cost function and
* its corresponding weight update formula.
Since your problem statement requires a huge cost for falsely classifying a particular class, one approach will be:
* You have to define a cost function that will penalize hugely for misclassifying for that class only. So your cost function will be $J$ and $J'$ put together. You can look up the cost function of a logistic classifier to see how two separate cost functions are merged together [here](http://www.holehouse.org/mlclass/06_Logistic_Regression.html).
* The second approach (assuming you are using supervised learning) is that the learning rate $\alpha$ should differ between the two classes. The larger learning rate goes to the more important class, since you don't want to classify it improperly (increasing $\alpha$ compared to the other class will reduce the risk of misclassifying it). The exact learning rate varies from case to case.
Thus, I have tailored the two main features of the classifier to solve this problem:
* The cost function.
* The weight update scheme (by changing learning rate for different cases).
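As an illustration of the first approach (a cost function that penalizes one error type much more), here is a sketch of a binary cross-entropy with asymmetric weights; the weight values are hypothetical and would be set from your actual misclassification costs:

```python
import math

def weighted_bce(y_true, y_prob, w_fp=10.0, w_fn=1.0, eps=1e-12):
    """Binary cross-entropy with asymmetric error weights.

    w_fp scales the loss on true negatives (the false-positive direction),
    w_fn scales the loss on true positives (the false-negative direction).
    """
    total = 0.0
    for t, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1.0 - eps)         # avoid log(0)
        if t == 1:
            total += -w_fn * math.log(p)        # missed positive
        else:
            total += -w_fp * math.log(1.0 - p)  # false alarm
    return total / len(y_true)
```

With the defaults, a confident false positive costs ten times as much as an equally confident false negative, so gradient descent on this loss pushes the decision boundary away from the negative class.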
Upvotes: 4 [selected_answer]<issue_comment>username_2: @DuttaA has pretty much mentioned the two most appropriate approaches to this problem: either the penalty for false positives should be high, or the learning rate for the correct class should be high.
I'll give two real-life examples to help you understand it better.
Say you have to teach a teen that substance abuse is injurious to health (e.g. frequent smoking is a **negative** habit). But the teen ends up learning from the high the drugs give that they are good (**false positive**) and gets addicted. You would strictly want to avoid this kind of situation (**a false positive error having a very big cost compared to a false negative error**).
In general, to model the situation when the costs are different, we picture a cost matrix. For a two-class classification problem, the cost matrix would look like:
[](https://i.stack.imgur.com/HaVxo.png)
(courtesy: <http://albahnsen.com/CostSensitiveClassification>)
Now, when designing your cost function, you would want to take into account the weight corresponding to each of these situations. A simple Python sketch would be as follows:
```
P, N = 1, 0  # labels for the positive and negative classes

def weighted_cost(pred, act):
    # scale the base loss by the weight of the matching cost-matrix cell
    if pred == P and act == P:
        return C_TP * cost(pred, act)
    if pred == P and act == N:
        return C_FP * cost(pred, act)  # false positive: weighted most heavily
    if pred == N and act == P:
        return C_FN * cost(pred, act)  # false negative
    if pred == N and act == N:
        return C_TN * cost(pred, act)
```
Here, **pred** is the predicted class and **act** is the actual class, while **C\_TP, C\_FP, C\_TN, C\_FN** represent the weights of the true positive, false positive, etc. cells of the cost matrix. The **cost(pred, act)** function calculates the base loss of one training example, and **weighted\_cost** gives the weighted loss you would actually train on.
The second approach that @DuttaA mentioned was to vary the learning rate. In real life, you can relate this to the situation when you were asked to write a word **multiple times** if you forget its spelling so that you remember it better. In a way, you learn the correct spelling of the word.
Here, increasing the value of the learning rate *(say 4 x alpha)* for a class can be viewed as updating the value of the weights **multiple times** *(4 times)* with the old learning rate *(alpha)*, similar to what we do by writing the correct spelling of a word multiple times. So the more important class (in your case the **negative class**) should be given a larger alpha, because a false positive (misclassification of the negative class) has a high penalty. You learn to recognize the correct (negative) class by learning it more times (as in the case of learning the spelling of the word).
Let me know if you need any further clarification.
Upvotes: 2 <issue_comment>username_3: A funky way of doing this with less overhead is to just overfit the data to some degree.
The reason is that when you overfit, the classification boundary tends to wrap around the clusters very tightly. With such a model you may sometimes misclassify positive examples as negative (due to high variance), but there are comparatively fewer situations where you end up misclassifying negative examples as positive.
The level of overfitting to perform is based purely on your FP/FN trade-off.
I don't think of this as a permanent fix, but it can come in handy to some extent.
Upvotes: 1 <issue_comment>username_4: One more idea - I recall learning about the Neyman-Pearson task in my studies. It is a statistical learning method for binary classification problems where an overlooked danger (false negative error) is especially unwanted.
You set a desired threshold for the false negative error rate and then minimize the false positive error. You just need to estimate the conditional probabilities of each class. The task can be expressed as a linear program and solved to get the optimal strategy for the threshold of your choice.
Upvotes: 0 <issue_comment>username_5: You could vary an error coefficient in training. For example, if the expected output was negative and the model produced a value in the positive range, you can train on C \* ERROR; conversely, if the expected output was positive and it produced a value in the negative range, you can train on just the error. That way, false positives have more impact on the model than false negatives.
Varying learning rates could help as well. However, increasing the learning rate and increasing the error have different effects, because changing the error will change the direction of the gradient, whereas changing the learning rate will only change the magnitude of the gradient's effect on the network: two slightly different things.
(for the learning rate, split the data into two, positive and negative, then train them separately with the learning rate for negative cases larger than for positive cases)
Upvotes: 0 |
2017/12/15 | 709 | 2,769 | <issue_start>username_0: I am trying to detect a TV channel logo inside a video file. So, simply, given an input `.mp4` video, detect if it has that logo present in a specific frame, say the first frame, or not.
Here's the first example of a frame with a logo.

Here's the second example.

We have that logo in advance (although it might not be 100% the same size) and the location is always fixed.
I already have a pattern-matching-based approach, but that requires the pattern to be 100% the same size.
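For reference, one way around the fixed-size limitation of my current approach would be to run the template match at several scales. This is only a plain-Python sketch of the idea (a real implementation would use an image library and normalized correlation):

```python
def resize_nn(tmpl, scale):
    """Nearest-neighbour resize of a 2-D grayscale template (list of lists)."""
    h, w = len(tmpl), len(tmpl[0])
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    return [[tmpl[min(h - 1, int(i / scale))][min(w - 1, int(j / scale))]
             for j in range(nw)] for i in range(nh)]

def best_match(image, tmpl, scales=(0.75, 1.0, 1.25)):
    """Slide the template over the frame at several scales; return the
    placement with the lowest mean squared difference: (score, top, left, scale)."""
    best = None
    H, W = len(image), len(image[0])
    for s in scales:
        t = resize_nn(tmpl, s)
        th, tw = len(t), len(t[0])
        if th > H or tw > W:
            continue
        for top in range(H - th + 1):
            for left in range(W - tw + 1):
                score = sum((image[top + i][left + j] - t[i][j]) ** 2
                            for i in range(th) for j in range(tw)) / (th * tw)
                if best is None or score < best[0]:
                    best = (score, top, left, s)
    return best
```

Since the logo position is fixed in my case, the sliding window could even be restricted to a small region around the known location, which makes the brute-force search cheap.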
I would like to use Deep Learning and Neural networks to achieve that. How can I do that? I believe CNNs can have a higher efficiency.<issue_comment>username_1: To perform image recognition you have to find a way to represent an image with certain features.
One of the defining characteristics of a good image recognition algorithm is its ability to detect salient regions, that is, regions which contain the most information.
There is a lot of attention on deep learning for content-based image classification at the moment. You can achieve decent results with a deep network of three or more convolutional layers, where each layer is responsible for extracting one or more features of the image.
Upvotes: 3 <issue_comment>username_2: Because it is video input and the logos are usually stationary because they are layered over the live or recorded frames by either hardware or software, the task is not difficult. Logos also usually have limited color palettes and crisp edges. The features of their fonts, when they spell words or acronyms are usually consistent too. These are generalities that can be exploited in deep learning.
As with the other similar question posted by this author, a combination of LSTM and CNN layers can be trained to find and isolate the logo. With some image tricks, the image behind the logo can also be reconstructed with a reasonable accuracy and reliability from the pixels around the logo through a similar set of learning techniques.
These are a few starting points for the development.
* [Deep Learning for Logo Recognition — Imaging and Vision Laboratory](http://www.ivl.disco.unimib.it/activities/logo-recognition)
* [Logo detection using YOLOv2 – <NAME>](https://medium.com/@akarshzingade/logo-detection-using-yolov2-8cda5a68740e)
* [GitHub — satojkovic/DeepLogo: A brand logo recognition system using deep convolutional neural networks](https://github.com/satojkovic/DeepLogo)
* [Logo detection in Images using SSD – Towards Data Science](https://towardsdatascience.com/logo-detection-in-images-using-ssd-bcd3732e1776)
* [1701.02620 — Deep Learning for Logo Recognition](https://arxiv.org/abs/1701.02620)
Upvotes: 2 |
2017/12/16 | 1,234 | 4,576 | <issue_start>username_0: As self-driving technology is improving, there are so many companies developing self-driving cars like Google, Uber, etc. Is it possible that we won't need any private/paid self-driving cars and the "self-driving taxi" becomes ubiquitous in the city? If we assume that there are such taxis everywhere, would transportation become extremely low cost or free? (The self-driving car company could benefit from broadcasting advertisements for advertising agencies.)<issue_comment>username_1: You ask an interesting question. There have been many discussions by industry on this topic. A company called [Vugo](https://govugo.com/) has their business model based on advertising to passengers. An article about Vugo, ["The Quest to Make Ridesharing Free"](http://tcbmag.com/news/articles/2017/november/the-quest-to-make-ridesharing-free), states:
* Flessner is co-founder and CEO of Minneapolis-based rideshare advertising platform Vugo, which uses its patented TripIntent technology to display targeted advertisements to Uber and Lyft passengers on a tablet attached to the back of the vehicle’s headrests.
* The company’s founders predict that within the next few years, all ridesharing vehicles in the U.S. will be driverless, and in-vehicle advertising tailored to passengers’ destinations and interests will lead to free transportation. “The idea is to put brands in front of passengers who are en route to make purchases,” says Flessner. For example, retailers will sponsor transportation to their stores so they can preview products to customers who are headed their way.
This article ["How free self-driving car rides could change everything"](http://money.cnn.com/2017/09/01/technology/future/free-transportation-self-driving-cars/index.html) makes these comments:
* Car data is so lucrative that <NAME> -- CEO of otonomo, an Israeli startup that sells vehicle data -- expects automakers to make more money selling data than vehicles by 2020.
* If the money made off self-driving vehicle data outweighs the costs of offering rides, then it becomes reasonable for a business to offer free rides broadly.
A paper entitled ["Leveraging Adverts in the Coming
Autonomous Car Eco-system"](http://scet.berkeley.edu/wp-content/uploads/Report-Leveraging-Adverts-in-the-Coming-Autocar-Ecosystem.pdf) published by Berkeley, University of California, proposes "a world where rideshares are free, or in some cases, heavily subsidized through the use of advertisements".
Upvotes: 3 [selected_answer]<issue_comment>username_2: Oil is not cheap, but online advertisement per impression is very cheap. Especially given that there isn't much engagement in a taxi, it's hard to imagine that ads alone can fully support a taxi service.
From [this source](http://www.pennapowers.com/how-much-do-ads-on-youtube-cost/), we find the CPM for a YouTube video is about $0.10:
But keep in mind this value could be lower since user engagement in a taxi is much lower compared to someone on the internet. With smartphones everywhere, how can you entice someone to watch an ad on a moving taxi?
From [this source](https://www.investopedia.com/articles/personal-finance/021015/uber-versus-yellow-cabs-new-york-city.asp.), we see:
>
> For a 5-mile, 10-minute trip going 25 miles per hour the entire way, uberX would cost the $2.55 base fare plus $3.50 for the 10 minutes plus $10.75 for the mileage, for a total of $16.80. It is not customary to tip the Uber driver.
>
>
>
No matter how you cut it, the margins seem to be quite thin. So free taxis seem unlikely for now, unless someone comes up with a different business plan.
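A back-of-envelope calculation with the two figures above shows just how thin, even assuming (very generously) that a passenger somehow generates impressions that each pay the full CPM rate:

```python
ride_cost = 16.80   # uberX estimate for the 5-mile trip quoted above
cpm = 0.10          # ad revenue per 1000 impressions (YouTube figure above)

impressions_needed = ride_cost / cpm * 1000
print(round(impressions_needed))  # on the order of 168,000 impressions per ride
```

A single passenger can realistically be shown a handful of ads per trip, not six figures of impressions, so display advertising at these rates cannot close the gap on its own.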
Upvotes: 2 <issue_comment>username_3: They can definitely be made cheaper, using a combination of techniques, but I doubt they will be free in the near term. Once the passenger is inside a car, they are bound to be in it for the duration of the trip. Any company that finds an efficient way to sell things, or to turn the passengers themselves into the product during this time, creates a business opportunity. Some scenarios that companies will employ to reduce the cost of travel are:
* Advertisements
* Sale of items on board (snacks, soft drinks, liquor etc)
* Usage based discount (The ride is free if you purchase $100 on amazon when you are in the car)
* Discount for services by passenger (Give us feedback every minute of the car operation, we will discount 50% of cost)
Once self-driving is popular, there will be a plethora of companies trying to exploit the passenger; after all, the passenger is bound to be inside the car for the duration.
Upvotes: 0 |
2017/12/16 | 1,130 | 5,058 | <issue_start>username_0: The capsule neural network seems to be a good solution for problems that involve hierarchies. For example, a face is composed of eyes, a nose and ears; a hand is made of fingers, nails, and a palm; and a human is composed of a face and hands.
Many problems in NLP can be seen as hierarchical problems: there are words, sentences, paragraphs, and chapters, whose meaning changes based on the style of lower levels.
Are there any research papers (which I should be aware of) on the application of capsule neural networks to NLP problems?
Are there related research papers, which have been investigating hierarchical complexity within the domain of NLP, that could be easily translated to capsule networks?<issue_comment>username_1: To answer this question, we first need to look at why capsule networks outperform convolutional neural networks by as much as 45% in recognizing images that have been rotated, translated, or are under a different pose. The original paper on capsule networks can be found here for reference: <https://arxiv.org/pdf/1710.09829v1.pdf>
In a CNN architecture, a convolution layer is usually followed by a max-pooling layer. This is so that the lower levels can detect low-level features, like edges, while the high-level layers can detect abstraction like eyes. However, the application of max-pooling leads to the loss of important information regarding the location and spatial relationship between certain features.
On the other hand, this is where capsule networks excel: the way they represent certain features is locally invariant. This is why capsule networks can recognize images under different lighting conditions and deformations. They are likely to excel at applications such as video and object tracking, but not necessarily NLP.
The current approach in NLP maps words and phrases to vectors. From there, we exploit the concept of vectors and distances between them (cosine, euclidean, etc.) to perform operations such as: finding the similarity between words and even documents, machine translation, and natural language understanding (NLU).
Capsule networks are unlikely to succeed in NLP, because algorithms that aim to find the hierarchical structure of natural languages, or approaches that focus on grammar, have met with little success. Research by Stanford University aiming at finding the hierarchical structure of natural languages can be found here: <https://nlp.stanford.edu/projects/project-induction.shtml>
Although conclusive research regarding other applications of capsule networks has not yet been conducted, they are likely to excel at applications such as video intelligence and object tracking, but not necessarily NLP.
Upvotes: 2 <issue_comment>username_2: There has been some recent work on this: [Investigating Capsule Networks with Dynamic Routing for Text Classification](https://arxiv.org/pdf/1804.00538.pdf)
Seems some are having some success with it.
Upvotes: 1 <issue_comment>username_3: <NAME> has started working on Thought Vectors at Google: <https://en.wikipedia.org/wiki/Thought_vector>
The basic idea is similar to his original idea with capsule networks, where activations are vectors instead of scalars, which allows the network to capture transformations: for example, while a traditional CNN needs to see an object from all perspectives of three-dimensional space, capsule networks are able to extrapolate transformations such as stretching much better.
Thought vectors guide NLP similarly; one could say that there are two grammars, the linguistic grammar and the narrative grammar, which is more universal (<NAME>, <NAME>, <NAME>). While dependency grammars do a great job at understanding linguistic grammar, we lack tools for meaning extraction, which is narrative-bound. Thus thought vectors could, at least in theory, give us a framework for matching the meaning of a word within its context, rather than just lexically and grammatically trying to approximate the meaning through average co-occurrences.
Neural networks with thought vectors would be highly complex and beyond our computational resources today (Hinton predicts in one paper that we would get there around 2035). However, one could conduct empirical research already by giving thought vectors a heuristic structure, utilizing narrative systems that compute more easily. One could, for example, have text segments annotated with writing theories or other such devices that approximate thought vectors conceptually, e.g. annotating the text with the state transformations of a conflict-driven partially ordered causal link planner (cPOCL, Gervas et al.), or using a writing theory framework such as Dramatica to annotate known movie scripts (<http://dramatica.com/theory> <http://dramatica.com/analysis>).
Hinton himself is currently active in NLP research: <https://research.google/people/GeoffreyHinton/>
Here is a nice explanation of Thought Vectors: <https://pathmind.com/wiki/thought-vectors>
Upvotes: 3 [selected_answer] |
2017/12/17 | 725 | 2,939 | <issue_start>username_0: I am trying to find literature on a network architecture that takes the following as in input:
* Action (like 'Up', 'Down', etc)
* Image of the current state
and outputs:
* Image of next state
I already have a lot of training data for the inputs. However, I am trying to find relevant literature/architecture for this problem.<issue_comment>username_1: You'll probably want to start out with "Action-Conditional Video Prediction using Deep Networks in Atari Games" (arXiv link: <https://arxiv.org/abs/1507.08750>). That's from 2015 though, I'm sure there have been lots of other interesting developments since then. This paper may still be a good starting point though, and provide you with the correct terminology to plug into google / google scholar to find more recent papers that build on top of this. Google scholar also provides functionality to automatically find papers that cite this one (interesting recent papers will probably cite this one).
As an additional point, you may want to reconsider your desired output. Given a current state-image and action, it may be easier to train a network to predict only the *change* in image (i.e., predicting NEW\_IMAGE - OLD\_IMAGE), rather than predicting the full image. You can then always still manually reconstruct the predicted new image simply by adding that output to the old image again. I'm quite sure I've seen this being done in a more recent paper too, but don't remember exactly the title / authors.
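A minimal illustration of that residual idea, using plain nested lists as stand-in grayscale frames (a real pipeline would operate on image tensors):

```python
def residual_target(frame_t, frame_t1):
    """Per-pixel training target: only the change between consecutive frames."""
    return [[b - a for a, b in zip(r0, r1)]
            for r0, r1 in zip(frame_t, frame_t1)]

def reconstruct(frame_t, predicted_delta):
    """Recover the predicted next frame by adding the predicted change back,
    clamped to the valid 0..255 pixel range."""
    return [[max(0, min(255, a + d)) for a, d in zip(r0, rd)]
            for r0, rd in zip(frame_t, predicted_delta)]
```

The network then only has to model what *moves* between frames; static background pixels have a target of zero, which is usually an easier regression problem.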
Upvotes: 0 <issue_comment>username_2: I tried something similar before for the 2048 game. I used the state of the board as **x** and the move as **y**, and just trained the neural network with this dataset. The architecture is a couple of layers with **relu** and a final **softmax** layer. The major thing is that we should not feed wrong moves into the dataset, or else the NN will also tend to learn the bad moves, which in turn makes it less smart.
I gathered my dataset by running minimax on 2048, assigning a reward to each move, and then eliminating the bad ones.
The above process also depends on how you construct the feature vector; if your feature vector is an image, then it makes sense to use a CNN.
<https://github.com/navjindervirdee/2048-deep-reinforcement-learning>
A DQN is also a good option, but do check out the above link; it helped me too.
My repo: <https://github.com/williamscott701/AI-vs-2048>
My results are not actually that great.
Upvotes: 0 <issue_comment>username_3: This is a whole sub-field of reinforcement learning known as model-based reinforcement learning. The idea in model based RL is to learn the mapping from current state/action to next state in order to facilitate learning good policies.
If you are dealing with images as inputs I would recommend checking out the Dreamer papers. The most recent being [this one](https://danijar.com/project/dreamerv2/).
Upvotes: 2 |
2017/12/18 | 1,018 | 4,167 | <issue_start>username_0: *TLDR : Is there an AI available that can recognize employees in a factory and tell when they entered and left pre-defined areas?*
I work in a factory where we gather cycle time data from various inputs (computer interfaces, bar code readers and RFID tags). We follow both parts moving along the production line and persons working on those parts because they move from stations to stations during their day and can be more than one at the same place to help one another.
Our goal is to know how many people are working on each part at any given time, but the problem is that it takes time for people to check themselves in when they get to their workstation and they can also forget to do it.
Many employees asked me if I could find a way to track them automatically so they wouldn't need to check in everywhere they go. And then I stumbled upon [Microsoft Workplace Safety Demo](https://www.youtube.com/watch?v=O1pDOkzsFOU) and [Yitu System](http://www.scmp.com/magazines/post-magazine/long-reads/article/2123415/doctor-border-guard-policeman-artificial) (this one is scary) but they both seem to be a little overkill for my needs.
After learning about these, here are the ideal AI features that I'd need:
* Can use video feeds to detect new people in the workspace and prompt someone (via e-mail or text message) for identification.
* Can use video feeds to recognize persons that are already known and document every time someone has entered or left a pre-defined zone (workstation) in the video feed.
* Won't force me to have a camera for each workstation.
* Allows one workstation to span on more than one video feed.
As of now, I found nothing available that does this, but I may have missed lots of things because this is not what I am used to working with. So maybe you know something that could help me with that?<issue_comment>username_1: I am definitely not the most qualified person to answer this question, but I might be able to give you a couple of buzzwords for your own further research. A convolutional neural network is usually used for analyzing anything visual (you can read about it here: <https://en.wikipedia.org/wiki/Convolutional_neural_network>).
There are a lot of frameworks that allow you to build recognition classifiers that can be loaded into a real-time video feed and give you feedback when it gets a hit.
Some of the ones that I know of are:
* <https://caffe2.ai/> or <http://caffe.berkeleyvision.org/>
* <https://pjreddie.com/darknet/>
* <https://docs.opencv.org/2.4/modules/ml/doc/neural_networks.html>
With Caffe being the most popular.
Some articles on how these work:
* <https://pjreddie.com/darknet/yolo/>
* <http://coding-robin.de/2013/07/22/train-your-own-opencv-haar-classifier.html>
Hope this is at least able to get you on the right path.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Training a model to identify new persons from video logs seems to be a daunting task.
You will need lots of data and computational power to build such a model.
There is relatively little work on video, due to the amount of computational power required; even training a simple video classifier with reasonable accuracy requires a lot of expertise and resources (data and compute).
Having said that, recognizing persons in videos and identifying "new" persons is even more difficult. You should definitely read some papers on video classification using convolutional neural networks to get an idea of how hard the problem is. Frameworks are mere tools that give you helper functions to build neural networks, so the choice of framework is not the question.
The real question is are you sure you want to train a model (CNN) for this task?
*Can use video feeds to detect new people in the workspace and prompt someone (via e-mail or text message) for identification*
For this, you would train a convolutional network on the factory video dataset, i.e. the CNN trains at the frame level of the videos.
Also, recognizing and classifying "known" and "unknown" faces in videos is a research problem in its own right.
Upvotes: 0 |
2017/12/19 | 349 | 1,584 | <issue_start>username_0: Model-based RL creates a model of the transition function.
Tabular Q-Learning does this iteratively (without directly optimizing for the transition function). So, does this make tabular Q-learning a type of model-based RL?<issue_comment>username_1: Tabular Q-Learning does not explicitly create a model of the transition function. It does not generate any output that you can afterwards use as a function to predict what the next state *s'* will be given a current state *s* and an action *a* (that's what a transition function would allow you to do). So no, Q-learning is still model-free.
By the way, model-based RL does not necessarily have to involve creating a model of the transition function. Model-based RL can also mean that you assume that such a function is already given. It just means that you use such a function in some way.
Upvotes: 3 <issue_comment>username_2: In model-based learning, the learning agent utilizes a previously learned model to accomplish a task, whereas model-free RL doesn't build a model of the environment but simply relies on trial-and-error experience for action selection. The similarity is that in both methods the learning agent is trying to maximize the reward from its actions.
Q-Learning is a model-free RL method. It can be used to identify an optimal action-selection policy for any given finite Markov Decision Process. How it works is that it learns an action value function, which essentially gives the expected utility of an action in a given state, then follows an optimal policy afterwards.
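A single tabular update makes the model-free nature concrete. Nothing below estimates transition probabilities; the sampled transition is used once and thrown away (the hyperparameter values are arbitrary):

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step: move Q(s, a) toward the bootstrapped target
    r + gamma * max_a' Q(s', a'). No transition function is ever built."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q[(s, a)]

Q = defaultdict(float)   # the agent's entire "knowledge" is this table
q_update(Q, s="s0", a="up", r=1.0, s_next="s1", actions=["up", "down"])
```

Contrast this with model-based RL, where the same sampled transition would also be used to update an estimate of the transition (and reward) function for later planning.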
Upvotes: 2 |
2017/12/19 | 694 | 2,738 | <issue_start>username_0: I'm learning about NEAT from the paper [Evolving Neural Networks through Augmenting Topologies](http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf).
I'm having trouble understanding how adjusted fitness penalizes large species and prevents them from dominating the population, I'll demonstrate my current understanding through an example and, hopefully, someone will correct my understanding.
Let's say we have two species, $A$ and $B$, species $A$ did really well the last generation and were given more children, this generation they have $4$ children and their fitnesses are $[8,10,10,12]$, while $B$ has $2$ children and their fitnesses are $[9,9]$. The adjusted fitnesses for $A$ will be $[2, 2.5, 2.5, 3]$ and for B will be $[4.5, 4.5]$.
Now, onto distributing children, the [paper](http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf) states:
>
> Every species is assigned a potentially different number of offspring in proportion to the sum of adjusted fitnesses $f'\_i$ of its member organisms.
>
>
>
So, the sum of adjusted fitnesses is $10$ for $A$ and $9$ for $B$, thus $A$ gets more children and keeps growing.
How does this process penalize large species and prevent them from dominating the population?<issue_comment>username_1: Consider what the outcome would be if offspring were assigned on the basis of unadjusted fitness. The sum of `A`'s fitnesses would be `40` and `B`'s `18`, a ratio of about `2.2:1`. In the case of adjusted fitness the sums are `A=10` and `B=9`, which gives a ratio of about `1.1:1`, so `A` is assigned a smaller share of the offspring based on adjusted fitness than on unadjusted fitness.
Also note that every new genome assigned to a species decreases all its members' adjusted fitness. In your case members of species `A` are more successful than members of `B`, so it should grow. The mechanism is designed to hinder the growth of successful species, not to block it entirely. This allows more diversity, which is important when a successful species reaches a dead end and a previously less successful one can take over.
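Using the numbers from the question, the sharing arithmetic can be sketched as follows (the population size of 6 is just the example's total number of children):

```python
# Explicit fitness sharing from NEAT: each genome's fitness is divided
# by the size of the species it belongs to.
def adjusted(fitnesses):
    n = len(fitnesses)              # sharing divisor = species size
    return [f / n for f in fitnesses]

A = adjusted([8, 10, 10, 12])       # -> [2.0, 2.5, 2.5, 3.0]
B = adjusted([9, 9])                # -> [4.5, 4.5]

population = 6
# Offspring are allocated in proportion to the summed adjusted fitness.
share_A = population * sum(A) / (sum(A) + sum(B))   # ~3.16 offspring
raw_A = population * 40 / (40 + 18)                 # ~4.14 with raw fitness
print(sum(A), sum(B))                               # 10.0 9.0
print(share_A, raw_A)
```

The large species still grows, but it gets a noticeably smaller share than raw fitness would give it.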
Upvotes: 1 <issue_comment>username_2: The reason adjusted fitness prevents species from growing too big is that the summation determining the divisor in the adjusted-fitness function reduces to the number of genomes in the species that $f\_i$ belongs to. As a species grows, the adjusted fitness of every genome belonging to it is divided by a larger number, so each genome receives a lower adjusted-fitness value, which in turn also reduces the species' summed adjusted fitness. This smaller adjusted fitness then affects the number of offspring the species gets to create after elitism reduces the population.
Upvotes: 0 |
2017/12/22 | 1,035 | 4,093 | <issue_start>username_0: I'm really new to neural networks. I'm trying to make a neural network with genetic algorithms which will make a snake learn to look for food and avoid hitting its tail.
The thing is that I think I've done it, but as there are no walls, the snake learns to go in one direction only, without making a 180° turn [[GIF HERE]](https://i.imgur.com/cdVmixk.gifv).
I've tried to incentivize mutations that make them turn by decreasing the score of snakes that always take the same direction, but it doesn't work. I've only made them dumber, needing more breeds to reach another "smart" linear snake.
I've made a network with 5 inputs:
* Food position relative to my position and direction (2 inputs. x and y)
* Nearest wall (my tail) if I turn left
* Nearest wall (my tail) if I do not turn
* Nearest wall (my tail) if I turn right
There are 3 outputs: the first one is turning left, the second is not turning, and the third is turning right. I make the snake follow the highest of the 3 outputs.
I've added 1 hidden layer of 8 neurons (inputs + outputs premise).
The way I calculate the score is:
* Each step, 1 point.
* Each food eaten, 10 points.
* If the snake goes too long without eating food, it dies.
* If it hits its tail, it dies.
Then I save, for each direction the snake has gone (up, right, down, left), a counter that I increment by one. When the snake dies, I weight the final score by the difference between the lowest and highest counters. If the difference is high, the snake receives a big penalty (down to 0.25 of its score). This way, a snake that is pretty much linear gets a high penalty, and a snake that does a cool pattern gets a low penalty.
Also, I keep a record of each direction change compared to the last one, so a snake that keeps going in circles doesn't get a high score from the "cool pattern" weighting just for covering all 4 directions.
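Roughly, the penalty idea is this (a simplified Python sketch of the concept, not my actual JS code; the function name and constants are just for illustration):

```python
# Weight a snake's score by how unevenly it used the 4 directions.
def weight_score(score, direction_counts):
    lo, hi = min(direction_counts), max(direction_counts)
    spread = (hi - lo) / max(hi, 1)    # 0 = balanced, 1 = fully linear
    penalty = 1.0 - 0.75 * spread      # scales score down to 0.25 at worst
    return score * penalty

print(weight_score(100, [30, 0, 0, 0]))   # linear snake -> 25.0
print(weight_score(100, [8, 7, 8, 7]))    # balanced snake -> 90.625
```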
With all this, I don't understand why my best snakes are always the linear ones :-/
I spawn 20 snakes and get the best 4 of each generation when everyone dies.
For the generations I use Neataptic.js and for the neural networks I use Synaptic.js. I have a Fiddle here: <http://jsfiddle.net/Llorx/gunsct5r/>
At line `10` you can see the network definition. At `211` you can see the snake "view" (food position and walls. Where it gets the inputs) and at `164` you can see the score weight calculation depending on steps taken that I mentioned before.
All inputs are normalized from 0 to 1.
I'm sure that I'm doing, not one, but a lot of things pretty bad, as I'm a newbie on this, but some light on this will be really cool. |
2017/12/23 | 482 | 1,934 | <issue_start>username_0: Let's say that we have four straight strings with colors RED, GREEN, BLUE and YELLOW. These strings are tied up randomly. We know the current state of the strings (i.e. the starting point, where we should start untying from) and the final state (what the strings must look like after untying them).
Here is a very simple example of the problem:
[](https://i.stack.imgur.com/tI6ah.png)
At every move, we are allowed to swap only two neighbouring strings. For example, to solve the shown problem, we should do the following swaps:
1) Swap GREEN and YELLOW
2) Swap BLUE and GREEN
But in a programming environment, how can I calculate the movements needed to untie the strings? What, specifically, would the algorithm be for solving this type of problem? Here is another [variation](https://math.stackexchange.com/questions/2581164/string-knots-unlocking-algorithm) of the question; according to the answer there, that variation is not solvable.<issue_comment>username_1: [Sequential programming](https://en.wikipedia.org/wiki/Sequential_algorithm) would not be suitable for this kind of problem, but an algorithm could be implemented in a [declarative programming language](https://en.wikipedia.org/wiki/Declarative_programming). I would suggest using [Answer Set Programming](https://en.wikipedia.org/wiki/Answer_set_programming), a language that is designed for [logic axioms](https://en.wikipedia.org/wiki/Logic_programming).
Upvotes: 1 <issue_comment>username_2: It sounds to me like something that could be expressed as a planning problem. You have a start state, an end state, and a set of actions. You need to find the correct action sequence to get from the start to the goal.
You could probably express this in [PDDL](https://en.wikipedia.org/wiki/Planning_Domain_Definition_Language) and use a planner to find the right steps.
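As a rough sketch of that planning view in plain code (not PDDL), a breadth-first search over "swap two neighbours" actions finds a shortest action sequence; the colours below are just an illustrative instance:

```python
from collections import deque

# BFS planner: states are orderings of the strings, actions are
# swaps of neighbouring positions, and we search start -> goal.
def plan_swaps(start, goal):
    start, goal = tuple(start), tuple(goal)
    seen, queue = {start}, deque([(start, [])])
    while queue:
        state, moves = queue.popleft()
        if state == goal:
            return moves            # list of (i, i+1) swap actions
        for i in range(len(state) - 1):
            nxt = list(state)
            nxt[i], nxt[i + 1] = nxt[i + 1], nxt[i]
            nxt = tuple(nxt)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, moves + [(i, i + 1)]))

moves = plan_swaps(["RED", "YELLOW", "GREEN", "BLUE"],
                   ["RED", "GREEN", "BLUE", "YELLOW"])
print(moves)   # two adjacent swaps suffice here
```

For pure reordering (no real knots), the minimum number of swaps equals the number of inversions between the two orderings, so this search stays small for a handful of strings.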
Upvotes: 0 |
2017/12/28 | 647 | 2,611 | <issue_start>username_0: What are "bottlenecks" in the context of neural networks?
This term is mentioned, for example, in [this TensorFlow article](https://web.archive.org/web/20180703133602/https://www.tensorflow.org/tutorials/image_retraining), which also uses the term "bottleneck values". How does one calculate bottleneck values? How do these values help image classification?
Please explain in simple words.<issue_comment>username_1: The bottleneck in a neural network is just a layer with fewer neurons than the layer below or above it. Having such a layer encourages the network to compress feature representations (of salient features for the target variable) to best fit in the available space. Improvements to compression occur due to the goal of reducing the cost function, as for all weight updates.
In a CNN (such as Google's Inception network), bottleneck layers are added to reduce the number of feature maps (aka channels) in the network, which, otherwise, tend to increase in each layer. This is achieved by using 1x1 convolutions with fewer output channels than input channels.
You don't usually calculate weights for bottleneck layers directly, the training process handles that, as for all other weights. Selecting a good size for a bottleneck layer is something you have to guess, and then experiment, in order to find network architectures that work well. The goal here is usually finding a network that generalises well to new images, and bottleneck layers help by reducing the number of parameters in the network whilst still allowing it to be deep and represent many feature maps.
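As a toy illustration of the 1x1-convolution idea (made-up numbers in plain Python, rather than any framework's API):

```python
# A 1x1 convolution is just a per-pixel linear map over channels.
# Here it squeezes 4 input channels down to 2: a channel "bottleneck".
def conv1x1(feature_map, weights):
    # feature_map: H x W x C_in nested lists; weights: C_out x C_in
    return [[[sum(w * px[c] for c, w in enumerate(row)) for row in weights]
             for px in line] for line in feature_map]

fmap = [[[1.0, 2.0, 3.0, 4.0]]]          # a 1x1 "image" with 4 channels
w = [[0.25, 0.25, 0.25, 0.25],           # output channel 0: average
     [1.0, 0.0, 0.0, 0.0]]               # output channel 1: copy channel 0
out = conv1x1(fmap, w)
print(out)   # [[[2.5, 1.0]]] -- 4 channels reduced to 2
```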
Upvotes: 6 [selected_answer]<issue_comment>username_2: Imagine, you want to re-compute the last layer of a pre-trained model :
```
Input->[Freezed-Layers]->[Last-Layer-To-Re-Compute]->Output
```
To train *[Last-Layer-To-Re-Compute]*, you need to evaluate the outputs of *[Freezed-Layers]* multiple times for given input data. In order to save time, you can compute these outputs **only once**.
```
Input#1->[Freezed-Layers]->Bottleneck-Features-Of-Input#1
```
Then, you store all *Bottleneck-Features-Of-Input#i* and directly use them to train [Last-Layer-To-Re-Compute].
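The caching idea can be sketched in a few lines (`frozen_layers` here is a hypothetical stand-in for the expensive frozen stack, not a real API):

```python
# Memoize the frozen layers' output so each input is only pushed
# through the expensive part of the network once.
calls = 0

def frozen_layers(x):          # stand-in for the frozen pre-trained stack
    global calls
    calls += 1
    return x * 2

cache = {}

def bottleneck(x):
    if x not in cache:
        cache[x] = frozen_layers(x)
    return cache[x]

for epoch in range(10):        # 10 "epochs" over the same 3 inputs
    feats = [bottleneck(x) for x in (1, 2, 3)]

print(calls)   # 3 -- not 30
```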
Explanations from the "cache\_bottlenecks" function of the "image\_retraining" example :
>
> Because we're likely to read the same image multiple times (if there are no
> distortions applied during training) it can speed things up a lot if we
> calculate the bottleneck layer values once for each image during
> preprocessing, and then just read those cached values repeatedly during
> training.
>
>
>
Upvotes: 4 |
2017/12/31 | 616 | 2,672 | <issue_start>username_0: For example, I need to detect classes for MNIST data. But I want to have not 10 classes for digits, but also I want to have 11th class "not a digit", so that any letter, any other type of image, or random noise would be classified as "not a digit". Similarly, with CIFAR-10, I want to have the 11th "unknown" class to classify any image that contains something out of the available 10 classes of CIFAR-10.
So, how can I implement such a feature? Maybe there are some examples somewhere, preferably with Keras.<issue_comment>username_1: The usual way to implement this would be to add the new class with data examples.
Some things you need to address:
* Sourcing new data for your "other" class.
* Ensuring the amount and variation of data in "other" class examples matches how the predictor will be used.
Code examples for this are not necessary, as you would just use the same network design as you already have and just add another output. This is a data and model definition problem.
Logically you have another option: As well as outputting the predicted class, you predict separately whether there is any detectable object at all as a true/false value. This still requires the additional data, but is for example how the YOLO algorithm works for object detection. Object detection has a specific meaning - it involves finding the co-ordinates and class of possibly multiple objects in an image. This goes beyond the wording of your question, but is a typical end goal if you are asking this kind of question.
YOLO predicts the presence of an object separately from the class of object. The additional data for YOLO training comes from segmenting the source images, so many parts of the target image are background with no objects. In that case the additional data you require is due to more detailed labelling within each image example.
YOLO is quite a complicated architecture, so you might want to look at [this example using Keras on a GitHub project](https://github.com/experiencor/basic-yolo-keras) for more details, if object detection is your goal.
Upvotes: 2 <issue_comment>username_2: If you are using a softmax distribution for your classification, then you could determine what your baseline max probability is for correctly classified samples, and then infer if a new sample doesn't belong to any of your known classes if its max probability is below some kind of threshold.
This idea comes from a research paper that does a much better job of explaining the process than what I just said: [A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks](https://arxiv.org/abs/1610.02136)
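A minimal sketch of that baseline (the logits and the 0.7 threshold are made up; in practice you would tune the threshold on validation data):

```python
import math

def softmax(logits):
    exps = [math.exp(z - max(logits)) for z in logits]  # shift for stability
    s = sum(exps)
    return [e / s for e in exps]

def predict(logits, threshold=0.7):
    # Reject as "unknown" when the max softmax probability is low.
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return best if probs[best] >= threshold else "unknown"

print(predict([8.0, 1.0, 0.5]))   # peaked distribution -> class 0
print(predict([1.0, 0.9, 1.1]))   # flat distribution -> "unknown"
```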
Upvotes: 2 |
2017/12/31 | 990 | 4,125 | <issue_start>username_0: Machine learning models and, in particular, neural networks are trained with data often collected from the real world, such as images of real people.
Meanwhile, neural networks (such as GANs) are also used for data generation. Each year, they become better at this task to the point that even humans are not able to distinguish real-world data from the artificially generated one.
So it is possible that neural networks will start to learn from data that was generated by other neural networks, because it will look real even to a human, but will naturally not be related to the real world.
1. Will it lead to some machine learning collapse?
2. Might it lead to some changes in human's perception of the world, because people get a very big part of their knowledge using computers, connected to the Internet?
3. Is anyone thinking about this potential problem?<issue_comment>username_1: We can already observe [information bubbles on social media](https://www.nbcnews.com/better/lifestyle/problem-social-media-reinforcement-bubbles-what-you-can-do-about-ncna1063896), where the feedback loop is that ML algorithms learn what content people like and serve more similar content based on clicks and so on. From a single wrong click, you could enter a bubble and never come out if you aren't careful and aware.
This happens with humans, so the same may apply to computers. Checking the background of a source, as one should with clickbait titles as an extreme example, would save the machines from these circles too. Information is no different whether it is produced in consecutive rounds by humans or by machines.
For example, if you make a thesis, not necessarily all things you write are your own investigation and research. You use quotes, refer to papers, and so on. For information sources, you have to be careful if the source is respected and considered to be meaningful and correct.
The same carefulness needs to be applied to machine-generated content. As for responsibility: ultimately, it rests with the reader to believe or not what they are reading.
Upvotes: 3 [selected_answer]<issue_comment>username_2: >
> Will it lead to some machine learning collapse?
>
>
>
I wouldn't think so. Data is data. From the standpoint of automata, everything is ultimately reduced to a string of bits. It may even be useful to be able to train AIs using CGI, for instance, in relation to automated vehicles. It's not any different from humans using flight simulators.
Creating models and training AIs on them is useful, and a part of the contemporary AI landscape.
>
> Might it lead to some changes in human's perception of the world, because people get a very big part of their knowledge using computers, connected to the Internet?
>
>
>
It already is. It's not only the false CGI content, but the scope of the search filter that dictates what information a websurfer gets. These results are controlled by algorithms, which evolve. Self-evolving algorithms may make the process more opaque. It definitely seems to be creating social problems already.
>
> Is anyone thinking about this potential problem?
>
>
>
I'm sure there are papers out there on this subject. (Don't have time to search now, but I may do that and come back and amend with some articles and research papers.)
Two authors who are definitely thinking about this are [<NAME>](https://en.wikipedia.org/wiki/Neal_Stephenson#Life) and [<NAME>](https://en.wikipedia.org/wiki/Hannu_Rajaniemi#Early_life). Stephenson addressed the "information unreliability" problem of the internet in [Anathem](https://en.wikipedia.org/wiki/Anathem). *(It's not a major theme, but his ideas are quite insightful--Stephenson has a hard-science background, with a particular interest in computing.)* Rajaniemi extends the ideas in the post-singularity [Quantum Thief trilogy](https://en.wikipedia.org/wiki/The_Quantum_Thief), where information and matter are interchangeable, and which contains some very interesting ideas. *(Rajaniemi holds two advanced mathematics degrees, which is useful in tackling a subject of such great complexity.)*
Upvotes: 1 |
2018/01/03 | 1,473 | 5,767 | <issue_start>username_0: The Intel 8080 had 4500 transistors and ran at 2-3.125 MHz. By comparison, the 18-core Xeon Haswell-E5 han 5,560,000,000 transistors and can run at 2 GHz. Would it be possible or prudent to simulate a neural network by backing a chip chock-full of a million interconnected, slightly modified intel 8080s (sped up to run at 2 GHz)? If each one modeled 100 neurons you could simulate a neural network with 100 million neurons on a single chip.
**Edit:** I'm not proposing that you *actually* use a million intel 8080s; rather I'm proposing that you take a highly minimal programmable chip design *like* the intel 8080's design and pattern it across a wafer as densely as possible with interconnects so that each instance can function as one or a few dozen fully programmable neurons each with a small amount of memory. I'm *not* proposing that someone take a million intel 8080s and hook them together.<issue_comment>username_1: Theoretically it might be possible but practically it is not.
You can argue using the analogy of a Turing machine: the Intel 8080 is Turing-complete, hence it can run any program, including a neural network, given infinite time and memory.
In spite of the above, you will face insurmountable challenges in implementing your system.
CPUs are designed to handle calculations in a sequential manner, while most AI algorithms are massively parallel. You need a GPU (or an AI ASIC) to process the algorithms in a massively parallel manner for a significant speedup.
Additionally, GPUs are excellent at floating-point math; floating-point arithmetic involves numbers with a variable number of decimal places, which are key to running neural networks. For example, an Intel Core i7 6700K is capable of about 200 GigaFLOPs (floating-point operations per second), while an NVIDIA GTX 1080 GPU is capable of about 8900 GigaFLOPs, which is a significant difference. (<NAME> 2017)
If you decide to use the Intel 8080 (0.290 MIPS at 2.000 MHz), you will require millions of processors and billions of dollars just to compute at one gigaflop. You can follow this link to see the cost of computing over the years: <https://en.wikipedia.org/wiki/FLOPS>
Another problem concerns RAM. To run a neural network efficiently, you need to load it fully into RAM. It will be a huge challenge to squeeze a neural network into the 64 KB of RAM that an Intel 8080 processor can address.
The network bandwidth problem will also be a huge bottleneck. Modern GPUs support high-speed technology to communicate between GPUs. For example, NVIDIA's NVLink has a peak speed of around 80 GB/s, while PCI-E 3.0 runs at around 30 GB/s. Without high-speed interconnection bandwidth, you will not achieve any speedup in spite of using a distributed system with many processors.
Additionally, you will face significant challenges in programming neural-network algorithms for your 8080-based system. Most programmers today follow the standards of object-oriented programming, which enables code reuse, simplified design and maintenance. Besides, OOP languages such as Java, C++ and Python have libraries that significantly simplify the process of programming a neural network.
When the 8080 processor was designed back in 1974, OOP had not yet been conceived; programmers were also using tools, i.e. compilers, that would be considered archaic by today's standards. I mean, good luck debugging that system.
Last but not least, you need big data (or at least a substantial dataset) to train your neural network on. Without training on a big dataset, your model will be ineffective. The 8080 supported around 200 KB of storage; for comparison, even the small MNIST dataset is tens of megabytes in size, and serious datasets run to gigabytes. This means that your processor cannot support the storage needed for any ML dataset.
For the above reasons my conclusion is that the 8080 processor provides insufficient resources necessary to implement any effective DL algorithm. Networking millions of them together will not provide any substantial speedup for a DL algorithm.
Upvotes: 0 <issue_comment>username_2: The building unit of a neural network is called a [perceptron](https://en.wikipedia.org/wiki/Perceptron). It cannot be represented by a single transistor, because it must hold an arbitrary (float) value across multiple computational iterations. (A transistor is only binary, and does not work as memory on its own.)
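For illustration, a single perceptron is a weighted sum plus an activation over real-valued weights (the weights below are hand-picked for this toy example):

```python
# One perceptron: weighted sum of inputs, plus bias, through a step function.
def perceptron(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if s > 0 else 0.0     # step activation

# Hand-picked weights that make it behave like an AND gate.
print(perceptron([1, 1], [0.6, 0.6], -1.0))   # 1.0
print(perceptron([1, 0], [0.6, 0.6], -1.0))   # 0.0
```

Note the float weights and sums: that is the state a single binary transistor cannot carry on its own.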
Furthermore, the strength of a NN is in its flexibility, which you would lose if you were to bake it onto silicon. In a NN you can vary the:
* number of layers
* connections between units
* activation functions
* and many, many more [meta parameters](https://en.wikipedia.org/wiki/Meta-optimization)
The NNs, once trained on a particular problem, are really fast at making a prediction for a new sample. **The slow and computationally heavy task is the training, and it's during the training that you need the flexibility** to experiment with the model and the parameters.
You could bake a trained NN model onto a chip if you need the computation time of a prediction to be really fast, i.e. on the order of nanoseconds (instead of a millisecond or a second on a modern CPU). That would have a significant downside: you would never be able to update it with a newer NN model.
Upvotes: 1 <issue_comment>username_3: It's been done (essentially). This guy at the following link has used a series of FPGAs to emulate hundreds of 8080s, using them to train a neural network to play Gameboy games.
<https://towardsdatascience.com/a-gameboy-supercomputer-33a6955a79a4>
IBM's True North being used in Darpa's SyNAPSE program is also very close to what you suggest. <https://en.wikipedia.org/wiki/TrueNorth>
Also of interest may be SpiNNaker and Intel Loihi.
Upvotes: 0 |
2018/01/08 | 680 | 2,870 | <issue_start>username_0: I am trying to understand if [robotic process automation (RPA)](https://en.wikipedia.org/wiki/Robotic_process_automation) is a field that requires expertise in machine learning.
Do the algorithms behind RPA use machine learning, apart from OCR?<issue_comment>username_1: Yes, RPA is AI; in particular, applied AI.
The definition of RPA in Wikipedia is:
>
> Robotic process automation (or RPA) is an emerging form of clerical
> process automation technology based on the notion of software robots
> or artificial intelligence (AI) workers.
>
>
>
Moreover, look at these characteristics of RPA:
>
> The paradigm, in summary, is that a software robot should be a virtual
> worker who can be rapidly "trained" (or configured) by a business user
> in an intuitive manner which is akin to how an operational user would
> train a human colleague.
>
>
>
If we talk about training or learning, we talk about AI.
Upvotes: 1 <issue_comment>username_2: In the context of IT systems, "Robotic Process Automation" (RPA) is a term often used to describe a technique where software systems are integrated or work processes are automated through the existing user interface of the applications rather than writing new software to provide integration points.
**In that context, RPA has nothing to do with AI or machine learning. In most cases, it does not even require OCR.**
For an example of a common use case, let's say you have an old mainframe IT system for tracking your subscriptions and a new website to let people order subscriptions from their phones.
In this case, you might create an "RPA" job that opens the list of new subscription requests from the website and, for each of them, opens the old application, clicks the "new subscription" button, clicks on the "Customer Name" field, pastes the name, clicks "Customer address" and pastes in the address, *etc.*
In some cases, the RPA job will be exposed as a service with an API that can be called by the new application, so it can dump data directly into the old application. The benefit is that it can do this without any changes to the old application.
It is attractive because the steps of *copy this, click that* can often be defined in visual tools by non-programmers very quickly and at much lower cost than setting up a systems-integration project to connect the two systems, and because integration through the existing user interface does not require any changes to the application.
In this way, it is similar in spirit to how Excel allows non-programmers to automate calculations by writing formulae and thus automating their spreadsheets.
You will often see RPA proponents putting some AI buzzwords into their presentations but from what I have seen in industry RPA is mostly just a visual scripting technique that is easy to learn and easy to apply.
Upvotes: 2 |
2018/01/09 | 516 | 2,264 | <issue_start>username_0: Can the [Viola-Jones algorithm](https://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework) be used to detect facial emotion? It is used in creating Haar-cascade files for object and face detection, but what confuses me is whether it can be trained for emotion detection.
If not, what algorithms can I use, and what are the mathematical bases? (i.e. what mathematics should I be studying?)<issue_comment>username_1: I once tried the Viola-Jones algorithm to do that; it does not capture the subtle differences in the direction of facial segments, which are important for detecting emotion. Features like HOG (available in OpenCV and many famous image-processing libraries) can extract better information from the face for classifying emotion.
There are also many other approaches, including ANNs and purely rule-based ones, but almost everywhere a good face-alignment step becomes the most important aspect of the exercise. So I would suggest exploring some facial-alignment approaches and then features like HOG instead of Viola-Jones/Haar.
For the mathematics part, it is up to you to dive deep into the mathematics or just explore different approaches through code. A good understanding of linear algebra and a little geometry will help a lot.
Also if you are new to Machine Learning, understanding the basic algorithms might be relevant to you.
Upvotes: 1 <issue_comment>username_2: An introduction to the [Haar features](https://www.youtube.com/watch?v=_QZLbR67fUU) is provided in the YouTube video. The video indicates the VJ face detector leverages **a selected combination** of Haar features (convolutional kernels) to detect **facial features** (weak classifiers), such as the nose bridge. The binary presence of the weak classifiers is summed to determine if the window contains a face.
The ability for a VJ algorithm to detect emotion would rely on the ability to assign a set of Haar features (kernels) to recognize features associated with a particular emotion label (surprise, anger, content, fear).
It is conceivable that the initial stage of an emotion classifier could use the VJ algorithm to identify a face, for additional stages to then classify the emotion.
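For intuition, here is a toy sketch (not the full VJ cascade) of how one two-rectangle Haar feature is evaluated in constant time using an integral image; the tiny 2x4 "image" is made up:

```python
# Integral image: ii[y][x] = sum of all pixels above-left of (x, y).
def integral(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y+1][x+1] = img[y][x] + ii[y][x+1] + ii[y+1][x] - ii[y][x]
    return ii

def rect_sum(ii, x, y, w, h):
    # Sum over any rectangle with 4 lookups, regardless of its size.
    return ii[y+h][x+w] - ii[y][x+w] - ii[y+h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    # Left rectangle minus right rectangle: responds to vertical edges.
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)

img = [[0, 0, 9, 9],
       [0, 0, 9, 9]]     # dark left half, bright right half
ii = integral(img)
print(haar_two_rect(ii, 0, 0, 2, 2))   # 0 - 36 = -36: a strong edge response
```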
Upvotes: 3 [selected_answer] |
2018/01/09 | 464 | 1,798 | <issue_start>username_0: Why do non-linear activation functions that produce values larger than 1 or smaller than 0 work?
My understanding is that neurons can only produce values between 0 and 1, and that this assumption can be used in things like cross-entropy. Are my assumptions just completely wrong?
Is there any reference that explains this?<issue_comment>username_1: Why wouldn't they work?
Each neuron's output is equal to a function applied to the sum of all its incoming weights multiplied by their corresponding neurons' outputs. If that function is the [Sigmoid](https://en.wikipedia.org/wiki/Sigmoid_function) function, then the output is squashed into $[0,1]$. If the entire layer uses a [SoftMax](https://en.wikipedia.org/wiki/Softmax_function) function, then the outputs of all neurons are squashed into $[0,1]$ and their sum equals 1. In other words, they represent a set of probabilities, where you can then use [cross-entropy](https://en.wikipedia.org/wiki/Cross_entropy) to optimize their values (cross-entropy measures the difference between two probability distributions).
[ReLU](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) and [ELU](https://arxiv.org/abs/1511.07289) are simply other types of functions, whose output is not limited to the range $[0, 1]$. They are differentiable, like other activation functions, and so they can be used in any neural network.
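A quick sketch of these functions and their ranges (plain Python, input values chosen only for illustration):

```python
import math

sigmoid = lambda z: 1 / (1 + math.exp(-z))      # output in (0, 1)
relu    = lambda z: max(0.0, z)                 # output in [0, inf): can exceed 1
elu     = lambda z, a=1.0: z if z > 0 else a * (math.exp(z) - 1)  # (-a, inf)

def softmax(zs):                                # non-negative, sums to 1
    exps = [math.exp(z - max(zs)) for z in zs]
    return [e / sum(exps) for e in exps]

print(sigmoid(3.0), relu(3.0), elu(-3.0))       # squashed / unbounded / negative
print(sum(softmax([1.0, 2.0, 3.0])))            # 1.0
```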
Upvotes: 2 <issue_comment>username_2: <NAME>'s [blog post](http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/) describes it better than I ever could.
Basically, most data we come across can't be separated with a single line, but with some kind of curve. Non-linearities allow us to distort the input space in ways that make the data linearly separable, making classification more accurate.
Upvotes: 1 |
2018/01/10 | 1,515 | 5,992 | <issue_start>username_0: I am working on a problem where I need to determine whether two sentences are similar or not. I implemented a solution using BM25 algorithm and wordnet synsets for determining syntactic & semantic similarity. The solution is working adequately, and even if the word order in the sentences is jumbled, it is measuring that two sentences are similar. For example
1. Python is a good language.
2. Language a good python is.
My problem is to determine that these two sentences are similar.
* What could be the possible solution for structural similarity?
* How will I maintain the structure of sentences?<issue_comment>username_1: Firstly, before we commence I recommend that you refer to similar questions on the network such as <https://datascience.stackexchange.com/questions/25053/best-practical-algorithm-for-sentence-similarity> and <https://stackoverflow.com/questions/62328/is-there-an-algorithm-that-tells-the-semantic-similarity-of-two-phrases>
To determine the similarity of sentences, we need to consider what kind of data we have. For example, if you had a labelled dataset, i.e. similar and dissimilar sentences, then a straightforward approach would be to use a supervised algorithm to classify the sentences.
An approach that could determine sentence structural similarity would be to average the word vectors generated by word embedding algorithms i.e word2vec. These algorithms create a vector for each word and the cosine similarity among them represents semantic similarity among words. (<NAME> 2017)
Using word vectors we can use the following metrics to determine the similarity of words.
* Cosine distance between word embeddings of the words
* Euclidean distance between word embeddings of the words
Cosine similarity is a measure of the similarity between two non-zero vectors of an inner product space that measures the cosine of the angle between them. The cosine angle is the measure of overlap between the sentences in terms of their content.
The Euclidean distance between two word vectors provides an effective method for measuring the linguistic or semantic similarity of the corresponding words. (<NAME> 2015)
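Both metrics are easy to sketch in plain Python (the 3-d "word vectors" below are made up for illustration; real embeddings have hundreds of dimensions):

```python
import math

def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)          # 1 = same direction, 0 = orthogonal

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Made-up embeddings: "king" and "queen" point roughly the same way.
king, queen, banana = [0.9, 0.8, 0.1], [0.85, 0.82, 0.12], [0.1, 0.2, 0.9]
print(cosine_sim(king, queen), cosine_sim(king, banana))
```

Sentence vectors can then be compared the same way, e.g. after averaging the word vectors of each sentence.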
Alternatively you could calculate the eigenvector of the sentences to determine sentence similarity.
Eigenvectors are a special set of vectors associated with a linear system of equations (i.e. matrix equation). Here a sentence similarity matrix is generated for each cluster and the eigenvector for the matrix is calculated. You can read more on Eigenvector based approach to sentence ranking on this paper <https://pdfs.semanticscholar.org/ca73/bbc99be157074d8aad17ca8535e2cd956815.pdf>
For source code <NAME>al has a Python notebook to create a set of word vectors. The word vectors can then be used to find the similarity between words. The source code is available here <https://github.com/llSourcell/word_vectors_game_of_thrones-LIVE>
Another option is a tutorial from O'Reilly that uses the gensim Python library to determine the similarity between documents. It uses NLTK to tokenize, then builds a tf-idf (term frequency-inverse document frequency) model from the corpus. The tf-idf model is then used to determine the similarity of the documents. The tutorial is available here <https://www.oreilly.com/learning/how-do-i-compare-document-similarity-using-python>
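The core idea of that tutorial can be sketched without gensim or NLTK (the weighting below is a simplified tf-idf, not gensim's exact scheme, and whitespace splitting stands in for NLTK tokenization):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Simplified tf-idf vectors over a small corpus."""
    tokenized = [d.lower().split() for d in docs]
    vocab = sorted(set(w for d in tokenized for w in d))
    n = len(docs)
    df = {w: sum(w in d for d in tokenized) for w in vocab}   # document frequency
    idf = {w: 1.0 + math.log(n / df[w]) for w in vocab}       # smoothed idf
    out = []
    for d in tokenized:
        tf = Counter(d)
        out.append([tf[w] / len(d) * idf[w] for w in vocab])
    return out

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["Python is a good language",
        "Language a good python is",
        "The cat sat on the mat"]
v = tfidf_vectors(docs)
print(cosine(v[0], v[1]))  # 1.0: identical bags of words
print(cosine(v[0], v[2]))  # 0.0: no shared terms
```

Note that, like averaged word vectors, plain tf-idf treats documents as bags of words, so it also scores the two reordered sentences as identical.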
Upvotes: 3 <issue_comment>username_2: The easiest way to add some sort of structural similarity measure is to use n-grams; in your case bigrams might be sufficient.
Go through each sentence and collect pairs of words, such as:
* "python is", "is a", "a good", "good language".
Your other sentence has
* "language a", "a good", "good python", "python is".
Out of eight bigrams you have two which are the same ("python is" and "a good"), so you could say that the structural similarity is 2/8.
Of course you can also be more flexible if you already know that two words are semantically related. If you want to say that *Python is a good language* is structurally similar/identical to *Java is a great language*, then you could add that to the comparison so that you effectively process "[PROG\_LANG] is a [POSITIVE-ADJ] language", or something similar.
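A minimal sketch of this bigram measure, counting shared bigrams against the total number of bigrams in both sentences, as in the 2/8 example:

```python
def bigrams(sentence):
    """Collect adjacent word pairs from a whitespace-tokenized sentence."""
    words = sentence.lower().split()
    return list(zip(words, words[1:]))

def structural_similarity(s1, s2):
    b1, b2 = bigrams(s1), bigrams(s2)
    shared = set(b1) & set(b2)          # bigrams appearing in both sentences
    return len(shared) / (len(b1) + len(b2))

print(structural_similarity("Python is a good language",
                            "Language a good python is"))  # 0.25, i.e. 2/8
```

Swapping in semantic equivalence classes (e.g. mapping "Python" and "Java" both to a PROG_LANG token before collecting bigrams) gives the more flexible variant described above.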
Upvotes: 3 [selected_answer]<issue_comment>username_3: >
> **The best approach at this time (2019):**
>
>
>
The most efficient approach now is to use [**Universal Sentence Encoder by Google**](https://tfhub.dev/google/universal-sentence-encoder/1) ([paper\_2018](https://arxiv.org/abs/1803.11175)), which computes semantic similarity between sentences using the dot product of their embeddings *(i.e. learned vectors of 512 values)*. Similarity is a float number between 0 *(i.e. no similarity)* and 1 *(i.e. strong similarity).*
The implementation is now integrated into TensorFlow Hub and can easily be used. Here is ready-to-use code to compute the similarity between two sentences. Here I will get the similarity between ***"Python is a good language"*** and ***"Language a good python is"*** as in your example.
>
> **Code example:**
>
>
>
```
#Requirements: Tensorflow>=1.7 tensorflow-hub numpy
import tensorflow as tf
import tensorflow_hub as hub
import numpy as np
module_url = "https://tfhub.dev/google/universal-sentence-encoder-large/3"
embed = hub.Module(module_url)
sentences = ["Python is a good language","Language a good python is"]
similarity_input_placeholder = tf.placeholder(tf.string, shape=(None))
similarity_sentences_encodings = embed(similarity_input_placeholder)
with tf.Session() as session:
  session.run(tf.global_variables_initializer())
  session.run(tf.tables_initializer())
  # Encode both sentences, then take the inner product of the embeddings
  sentences_embeddings = session.run(similarity_sentences_encodings, feed_dict={similarity_input_placeholder: sentences})
  similarity = np.inner(sentences_embeddings[0], sentences_embeddings[1])
  print("Similarity is %s" % similarity)
```
>
> **Output:**
>
>
>
```
Similarity is 0.90007496 #Strong similarity
```
Upvotes: 3 |
2018/01/12 | 1,792 | 7,209 | <issue_start>username_0: At my work, we're currently doing some research into data visualisation for highly interconnected data, basically graphs.
We've been implementing all sorts of different layouts and trying to see which fits best, but, due to the nature of the problem (it's a visual thing), we needed some automated way to analyse the results, so we came up with a bunch of metrics to analyse our layouts.
So far, the most important metrics have been information density, edge crossings, node overlap and edge length. This gives us some good results and has allowed us to fine-tune our layout algorithms.
However, when a new graph is loaded, we noticed that humans still tend to fiddle a lot with the structure of the layout. Moreover, it seems that our metrics do a good job of predicting where a user is likely to mess around. Graph layout is a tough problem, so **after some discussion, the idea of just throwing data at a neural network and letting it figure it out came up.**
None of us are experts in AI, or even experienced with it; I'm the one with the most contact with AI methods. **All I've ever done were simple NN models, nothing with convolution, recurrence, or the like, but it seems to me this should be doable.**
Maybe it's my lack of expertise here, but I haven't been able to find any good information on this sort of application for NNs, so I was hoping someone here could point me in the right direction.
* What sort of model is best for such a situation, and why? Is this actually possible, or would it be super complicated? Has anyone ever tried something like this before?
If it helps, our input data (for v1, I guess) would be two arrays of variable length, one for the nodes and another for the relationships between them, and the output data would be an array with the node XY coordinates.
2018/01/13 | 2,130 | 9,122 | <issue_start>username_0: Some argue that humans are somewhere along the middle of the intelligence spectrum, some say that we are only at the very beginning of the spectrum and there's so much more potential ahead.
Is there a limit to the increase of intelligence? Could it be possible for a general intelligence to progress infinitely, provided enough resources and armed with the best self-recursive improvement algorithms?<issue_comment>username_1: To have a "maximum achievable intelligence", first of course you have to define "intelligence" well enough to be able to rank things by intelligence. There is no widely-supported theory that is able to do so.
You might like to look into AIXI [as described by <NAME> in a video lecture](https://www.youtube.com/watch?v=x8btbKaRfoc). It is an attempt to formalise, mathematically, intelligent agents that attempt to make optimal decisions. [Here is another written introduction](https://jan.leike.name/AIXI.html). Of course, this is only one of many possible frameworks for describing intelligent agents.
One interesting implication is that AIXI implies intelligence, in terms of the ability to learn from and exploit an environment, is upper-bounded. In principle there is a ceiling on how much a rational agent can infer from the data it possesses.
However, this ceiling refers to only specific abilities to extract actionable information from data that the agent has access to in order to solve decision problems. There is an open question about how much data can be collected, stored and processed by any entity, and this ability to acquire and retrieve relevant knowledge would be viewed by many as part of the "intelligence score" when comparing agents.
There are [theoretical limits to computation from physics](https://en.wikipedia.org/wiki/Limits_of_computation), e.g. some are based on the fact that it fundamentally requires energy, the energy has a mass equivalence, and enough concentrated mass would form a black hole. This sets a high upper bound, and it is likely that real-world structural and design issues will set in way before this limit. However, combined with the above limits on decidability and practical access to data, it does seem there should be a ceiling.
Upvotes: 2 <issue_comment>username_2: Absolutely, regardless of how you define "intelligence".
* If intelligence is merely information, as in "a piece of intelligence", as in data, or an algorithm, the structure is finite. (Structure, here, refers to the information, which may be reduced to a single string in either case.)
See: [Turing Machine](https://en.wikipedia.org/wiki/Turing_machine).
* If intelligence is the rational capability of an automata, it is likewise bounded by the tractability of the decision problem, the structure of the algorithm, and the time available to make the decision.
See: [Bounded Rationality](https://en.wikipedia.org/wiki/Bounded_rationality)
Both answers are really the same, because "intelligence" in the first sense is limited by physical constraints on information density, sophistication of the algorithm, and time.
See also: [Computational complexity of mathematical operations](https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations), [Computational complexity](https://en.wikipedia.org/wiki/Computational_complexity), [Time Complexity](https://en.wikipedia.org/wiki/Time_complexity)
Upvotes: 2 <issue_comment>username_3: No. There is no ceiling to intelligence. However, I am applying this loosely.
When you consider the intelligence of a person, you generally think of some baseline IQ that ranks that person on a scale.
Per the definition of our scalable IQ, 200 is considered a (nearly) unbelievable intelligence.
However, when you look at "entities", the scale disappears. Google, the NSA, universities: they could all be considered super-intelligences. There's no possible metric or IQ to assign to these entities that we know of, because they aren't necessarily comparable. However, no single person is smart enough to have invented all that was necessary to bring such an entity and its "intelligence" into fruition.
When you bring AI into the equation, you're limited by your resources. AI is proving more and more capable as we evolve the technology and understand our data better.
We may find in the future that Go isn't all that infinite and that there is a "best" strategy (and thus we solve the game). However, we keep on making machines that are smarter than the previous ones.
I wouldn't be surprised if AI starts making paper companies soon per business tactical advantages.
Upvotes: 0 <issue_comment>username_4: Actually, these aspects is part of some books I am working on right now.
As Jeevan says, it is bound by the laws of physics. I see that too, but in the case of humans, the way I look at it, we are at the very first step of an intelligence that can reflect on itself and question its own ability to reason, how far it can go, and how it can develop.
I also use the black hole (maximum density, maximum energy concentration) as a first upper limit. An intelligence that could operate at the Planck level could store all the information in this world in a volume the size of a few grains of sand. So, given enough density and enough optimization of information storage and exchange, the upper limit is far, far beyond what humans can process.
But I take the analysis further and go beyond the black-hole limit. Of course it is speculation, since we still don't know everything, for instance what is possible in more than four dimensions (the human brain works in three dimensions, plus time to think).
As I see it, in the real world the upper limit is so far beyond our understanding that we still can't grasp it.
Upvotes: 0 <issue_comment>username_5: The answers previously given are correct for AI which can indeed process more information with more computational power. However, actual reasoning ability like humans have is not defined by Church-Turing. AIXI has nothing to do with human reasoning. A pretty good clue to this fact is that AIXI has been around since 2005 and to date there are no machines based on it that have human-level reasoning. For example, an interesting topic in AI is natural language processing (NLP). I can speak into my Android phone and it will transcribe my speech into text. It seems like an amazing advance. However, this is what a human would do if they heard a foreign language and then did a phonetic transcription of what they heard. Then they looked up a phonetic chart to match the sounds with words. This would take place without any actual understanding of what they were hearing. This is how it works on my phone, much like Searle's Chinese Room.
Humans are quite different because they actually understand words. The equivalent to this in AI would be natural language understanding (NLU). No AI today has NLU and no theory within AI explains how to construct it. There isn't any research on AI NLU because there is no starting point. A fact that most AI enthusiasts don't like to admit is that even the smartest AI systems are routinely outclassed by rats and even six month old babies in terms of comprehension. AI systems have no comprehension or understanding and without this they have no actual reasoning ability. Human-level comprehension falls under a completely different theory from the computational derivatives of Church-Turing.
Can you make a human-level machine agent smarter by giving it more computational power? No, because you'll run into all sorts of problems which would take a few book chapters to explain. There are enhancements you can make but these have limits. If you go by a standard deviation 15 chart for IQ like Wechsler or the 5th edition of Stanford-Binet, the chances of having an IQ of 195 is 1 out of 8 billion. So, this roughly sets the upper bound of human ability. We could probably see machine agents with an IQ of 240 but not 500 or 1,000. I do understand the confusion concerning computation since exhaustive routines in AI are time limited. For example, our dim-witted chess programs play by laborious trial and error. They don't actually get smarter with more computational power, they are just able to eliminate bad moves faster. Let me give a human example. Let's say that I could do 5 math problems of a given complexity per hour using pencil and paper. So, I add a slide rule and my rate changes to 10 problems per hour. Then I switch to a calculator and my rate increases to 20. Let's say I then start using a spreadsheet and I hit 30 per hour. I am not actually 6x smarter than I was when I used pencil and paper.
So, to answer the question, it is not possible to continuously increase intelligence even with unlimited computational power. However, it should be possible for machine intelligence to exceed human intelligence. One final thing that I should mention though is that this type of theory is quite good at organizing knowledge in a way that current big data methods do not. So, it is probable that the same theory that would allow a machine IQ of 240 would also provide enough assistance to a human to function at the same level.
Upvotes: 2 |
2018/01/13 | 610 | 2,549 | <issue_start>username_0: i'm quite new to neural network and i recently built neural network for number classification in vehicle license plate. It has 3 layers: 1 input layer for 16\*24(382 neurons) number image with 150 dpi , 1 hidden layer(199 neurons) with sigmoid activation function, 1 softmax output layer(10 neurons) for each number 0 to 9.
I'm trying to expand my neural network to also classify letters in license plates. But I'm worried that if I simply add more classes to the output (for example, add 10 letters so there are 20 classes in total), it would be hard for the neural network to separate the features of each class. I also think it might cause a problem when the input is a number and the network wrongly classifies it as a letter with the biggest probability, even though the sum of the probabilities of all the number outputs exceeds that.
So I wonder if it is possible to build a hierarchical neural network in the following manner:
There are 3 neural networks: 'Item', 'Number', 'Letter'
1. 'Item' neural network classifies whether input is numbers or letters.
2. If 'Item' neural network classifies input as numbers(letters), then input goes through 'Number'('Letter') neural network.
3. Return final output from Number(Letter) neural network.
And the learning mechanism for each network is below:
1. 'Item' neural network learns all images of numbers and letters. So there are 2 output.
2. 'Number'('Letter') neural network learns images of only numbers(letter).
Which method should I pick to get better classification? Simply add 10 more classes, or build hierarchical neural networks with the method above?
*Side note: correct me if that is not true on your local plate system*
Upvotes: 1 [selected_answer]<issue_comment>username_2: Just use one network with a larger Softmax output layer and more hidden units. If you have enough training data, it will work just fine. In fact it could emulate the architecture you propose.
Upvotes: 2 <issue_comment>username_3: I agree with the above answer. If you want to research this more in depth, look at this paper: <https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/42241.pdf>
Upvotes: 0 |
2018/01/13 | 1,103 | 4,216 | <issue_start>username_0: I don't understand why Google Translate translates the same text in different ways.
[Here is the Wikipedia page of the 1973 film "Enter the Dragon"](https://en.wikipedia.org/wiki/Enter_the_Dragon). You can see that its traditional Chinese title is: 龍爭虎鬥. [Google translates](https://translate.google.com) this as "Dragons fight".
Then, if we go to [Chinese Wikipedia page of this film](https://zh-yue.wikipedia.org/wiki/%E9%BE%8D%E7%88%AD%E8%99%8E%E9%AC%A5_(%E9%9B%BB%E5%BD%B1)), and search for 龍爭虎鬥 using Ctrl-F, it will be found on several places:
[](https://i.stack.imgur.com/cezxH.png)
But if we try to copy the hyperlink of Chinese page into Google translate, it will be the word "tiger" from somewhere:
[](https://i.stack.imgur.com/bsf5G.png)
Even more, if we try to translate Chinese page into English using build-in Chrome translate, it will be sometimes translated as "Enter the Dragon", in English manner:
[](https://i.stack.imgur.com/JIRQY.png)
Why does it give different translations for the same Chinese text here?<issue_comment>username_1: As you know, Google Translate works based on statistical methods. In statistical translation, many parameters can affect the final result. One of these parameters is the co-occurrence of words in a sentence.
Hence, as this translator learn languages from different utterances by human and pre-written translations, and different parameters in the text are involved in this learning, It could be possible one word has a different meaning in different texts.
Upvotes: 2 <issue_comment>username_2: Chinese words 'fu' means with different intonation marks either happiness, huspand or tiger. Without correct intonation notation in English it may translate in Chinese as Tiger.
Movie title has Chinese 'happiness' character, but Google mixes it as Tiger.
Upvotes: 1 <issue_comment>username_3: It's not quite clear what you are asking. So I'll answer in separate parts.
>
> Why is the translation different from the official title?
>
>
>
It could be simply because machine translation is not perfect, or our human translator took some creative liberties when translating. In this case it seems to be both.
Note that 龍爭虎鬥 properly translated doesn't mean either "Dragons fight" or "Enter the dragon". Literally translated, it means "dragon compete tiger fight". It belongs to a family of well-formed idioms called "[Chengyu](https://en.wikipedia.org/wiki/Chengyu)", and it describes a situation where there is fierce fighting or competition.
So you can see that neither translations fit.
>
> Why does Google Translate give me different translations on the same phrase?
>
>
>
Context matters when we read! So translating a phrase in isolation doesn't guarantee that the same phrase has the same meaning/translation in all other parts of the text.
For example, green is a color, but the word "green" can also be used as in "Alice is green with envy" or "Bob has a green thumb", where neither instance of the word "green" refers to the color.
Considering the technical side of things, Google Translate probably uses some kind of RNN in its pipeline. RNNs are influenced by past states, meaning that what the network outputs for a given input depends on the network's past state, which is similar to the context issue addressed above.
Upvotes: 4 [selected_answer]<issue_comment>username_4: Google uses user input to improve translation. Some user may have provided an input to the Traditional Chinese characters using English characters only instead of pinyin, which would introduce a mistake into the data used by the translator. Since the model is statistics based, such a mistaken translation can only be assigned a lower but non-zero probability of correctness, but to erase it from the system entirely you would probably have to do that by hand or by introducing some general rule (e.g. if the probability is so small that it can not be stored in a variable of a given size, round it to zero).
Upvotes: 1 |
2018/01/17 | 1,697 | 7,036 | <issue_start>username_0: What are the connections between ethics and artificial intelligence?
What are the issues that have arisen, especially in the business context? What are the issues that may arise?<issue_comment>username_1: This is a good related read from Nature: [There is a Blind Spot in AI Research](https://www.nature.com/polopoly_fs/1.20805!/menu/main/topColumns/topLeftColumn/pdf/538311a.pdf)
>
> Fears about the future impacts of Artificial Intelligence are distracting researchers from the real risks of deployed systems
>
>
>
Upvotes: 1 <issue_comment>username_2: I'd strongly recommend looking into [Game Theory](https://en.wikipedia.org/wiki/Game_theory)'s relationship and impact on AI. [Prisoner's Dilemma](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma) is a good place to start, because [optimality](https://en.wikipedia.org/wiki/Pareto_efficiency) can have repercussions.
With computing in general, optimization is a major goal. For AI, optimal decision-making is what it's all about. But sans humanity, this may prove to be problematic.
---
*(Apologies for the brevity--I'll be returning to elaborate since this is a subject of personal preoccupation--but I wanted to leave you with a few tidbits in the meantime. :)*
Upvotes: 0 <issue_comment>username_3: In a business context, there are issues surrounding the implementation, the implementors, the other employees, the business entity itself, and the customers. These stem from data used, risks inherent in an implementation, like unknown errors bugs or algorithms without human checks, behaviour change of impacted stakeholders, job losses, reputation impacts on the company, etc.
There's a lot to think about AI and ethics. It could be seen as a combination of a broad set of topics including computer science, humanities, economics, and philosophy.
I run a podcast on some of these issues <http://machine-ethics.net/> let me know if there's anything you want to be discussed or someone you would like to hear.
Upvotes: 1 <issue_comment>username_4: The connections between ethics and artificial intelligence can be divided into five major categories, and other categories may form over time.
1. Correlations between ethics and artificial intelligence
2. Existential impacts of artificial intelligence on the human experience
3. Threats to current ethical social, economic, and legal standards arising from artificial intelligence research and application
4. Uses of artificial intelligence to breach ethical standards without detection
5. Uses of artificial intelligence to detect breaches of ethical standards and assist in remedial action
Since the category most important in the long term is the one most likely to be dismissed by those with conventional perspectives on ethics and AI, the five will be addressed in reverse order.
**Automatic Detection and Remedial Action**
The pattern recognition capabilities of existing AI systems and sub-systems are already employed to detect a variety of ethical breaches.
* Securities misconduct, including insider trading
* Breaches of anti-trust law, including conflicts of interest
* Employer misconduct, including inequality in hiring
* Tax evasion
* Misuse of non-profit funds
Remedial actions may be the opening of a case with the automatic generation of a notice to those in potential breach.
**Smart Organized Crime**
Although much detail could be included here about detection avoidance in crime using AI, it may not be socially responsible to include it on a global, public-facing site.
**Threats to Economies and Individuals**
As with any high impact technology, disruption is a possibility. This was true of fire, irrigation, the wheel, bronze smelting, gun powder, typesetting, steel-working, engines, textile automation, alternating current power distribution, aeronautics, petroleum refining, electronics and radio transmission, pharmacology, terrestrial nuclear reaction, and the Internet. Genetic engineering and artificial intelligence are next in line.
What ethical conventions will likely be impacted?
* Distribution of employment roles, the change of which may not match distribution of educational preparation
* Distribution of wealth, favoring prowess in highly automated business
* Mutual exclusivity of personal privacy and the use of technology
* Obscurity of totalitarian control (such that common citizens may be more like cogs in a machine than during industrialization)
* Changes in the balance of world power
* New forms of asymmetric war, such as cyber-war and autonomous combatants
All of these have either a direct or an indirect impact on the viability of business options and how business may be conducted.
**Existential impacts**
Some may consider dominion over the earth as an ethical grant to humanity. Others may equate the soul with the cognitive and self-aware aspects of only one species on earth.
We already have a world that is sufficiently disconnected on an existential plane that many consider their cats or dogs as more important than any human. When people are more connected to their intelligent agents than their pets, family, and friends, that may qualify as an ethical impact. Others may see it as a psychological impact.
Realistically, it is ontological. What is a human to think of her or his purpose when it becomes questionable whether Homo sapiens is simply a link between DNA-based intelligence and some more capable successor whose reproduction has been decoupled from DNA coding?
The replacement of jobs has caused changes in what families wish for their children. What will be the impact when few job roles (or eventually none) exist where artificial employees don't exceed their human counterparts in effectiveness?
If humans cannot adjust to the idea that the sole purpose of life has nothing to do with practical provision of water, food, shelter, clothing, and essential products and services, there may be systemic depression. Conversely, leisure may become the reality for all humans, leaving ethics and the finite nature of DNA based life the only two concerns of humans.
**Correlations Between Ethics and AI**
This is the most unpalatable of categories to examine when the examiner is human. It is possible that artificiality may be an ethics progenitor. The limitation of humans as ethical beings is well documented. It is possible that AI may be more ethical than its developers.
Will a group of AI systems be able to arrive at a method for the distribution of power and a standard for global trade that is as good as or better than what humans have been able to negotiate, and then police each other in a way that leaves no possibility of an undetected breach of treaty?
Upvotes: 1 <issue_comment>username_5: There's an interesting new paper here: [Algorithmic Decision Making in Financial Services](https://link.springer.com/article/10.1007/s43681-022-00236-7)
This paper highlights the interplay between corporate ethics and interests on the one hand and the normative issues arising from algorithmic decision-making on the other.
Upvotes: 0 |
2018/01/19 | 537 | 2,210 | <issue_start>username_0: I am currently writing an engine to play a card game and I would like for an ANN to learn how to play the game. The game is currently playable, and I believe that for this game a deep recurrent Q-network with a reinforcement-learning approach is the way to go.
However, I don't know what type of layers I should use. I found some examples of Atari games solved with ANNs, but those use CNN (convolutional) layers, which are better suited to image processing. I don't have an image to feed the NN, only a state composed of a tensor with the cards in the player's own hand and the cards on the table. The output of the NN should be a card or the action 'End Turn'.
I'm currently trying to use TensorFlow, but I'm open to any library that can work with NNs. Any type of help or suggestion would be greatly appreciated!<issue_comment>username_1: The game state consists of the locations of all the hidden cards, so you probably need a softmax layer of size 52\*n, where n is the number of locations.
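A minimal numpy sketch of that 52\*n one-hot encoding, plus a softmax action head with 53 outputs (52 cards plus 'End Turn'). The deck size, the three locations, and all names here are assumptions for illustration, not details of the asker's game:

```python
import numpy as np

N_CARDS = 52       # standard deck (assumption; adjust to the real game)
N_LOCATIONS = 3    # assumption: 0 = own hand, 1 = table, 2 = unknown

def encode_state(card_locations):
    """One-hot encode each card's location into a flat 52*n input vector."""
    state = np.zeros((N_CARDS, N_LOCATIONS), dtype=np.float32)
    for card, loc in card_locations.items():
        state[card, loc] = 1.0
    return state.reshape(-1)   # shape (52 * n,)

def softmax(logits):
    """Numerically stable softmax for the action-output layer."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Example: card 0 in our hand, card 51 on the table, everything else unknown.
locations = {c: 2 for c in range(N_CARDS)}
locations[0] = 0
locations[51] = 1
x = encode_state(locations)                                  # network input
logits = np.random.default_rng(0).normal(size=N_CARDS + 1)   # 52 cards + 'End Turn'
action_probs = softmax(logits)
```

The same vector `x` would be fed to dense layers in TensorFlow; the softmax here just shows the shape of the action distribution the network would output.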
I'm not very sure that a NN is a good match.
Upvotes: 2 <issue_comment>username_2: With images, you can use CNNs because of translational invariance: a filter that is good in one area will probably be good in another area, too.
With images, you must use CNNs because otherwise there would be too many weights to train.
With your game, it depends on the representation and the exact rules. Note that AlphaZero uses a stack of 19 × 19 input planes with a CNN for playing Go.
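A card state can be arranged analogously to those board planes: one binary suit-by-rank plane per card location, giving a tensor a Conv2D layer could consume if convolutions turn out to suit the game. The dimensions (3 locations, 4 suits, 13 ranks) are assumptions for illustration:

```python
import numpy as np

N_LOCATIONS, N_SUITS, N_RANKS = 3, 4, 13  # assumed game dimensions

def card_planes(card_locations):
    """Stack one binary suit-by-rank plane per card location.

    card_locations maps (suit, rank) -> location index.
    The result, shape (3, 4, 13), mirrors the plane-stack idea from
    board games: each plane answers "which cards are in this location?"
    """
    planes = np.zeros((N_LOCATIONS, N_SUITS, N_RANKS), dtype=np.float32)
    for (suit, rank), loc in card_locations.items():
        planes[loc, suit, rank] = 1.0
    return planes

# Example: we hold the ace of suit 0; the king of suit 1 lies on the table.
planes = card_planes({(0, 12): 0, (1, 11): 1})
```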
In a game like Bridge, where each card has a suit and a rank, there's a kind of translational invariance. Having the Ace and Queen is a bit similar to having the King and Jack: in both cases you have a 50% chance of catching the card in between. At the same time, the strengths of AQ and KJ are very different, so a pure CNN would probably not work well.
The more important symmetry is the one among suits. After the auction, there's either one trump suit or none, and all other suits are equivalent. This probably means that the corresponding weights should be the same.
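Weight sharing across suits can be sketched by applying one weight matrix to every suit's rank vector, so equivalent holdings in different (non-trump) suits yield identical features. This is an illustrative numpy sketch, not a full network; in practice one would add separate parameters for the trump suit:

```python
import numpy as np

N_SUITS, N_RANKS = 4, 13

def suit_shared_features(hand, w, b):
    """Apply one shared weight matrix to every suit's rank vector.

    hand: (4, 13) binary matrix, hand[s, r] = 1 if we hold rank r of suit s.
    w, b: parameters shared across suits, encoding the suit symmetry.
    Returns shape (4, k): one feature vector per suit.
    """
    return hand @ w + b

rng = np.random.default_rng(1)
w, b = rng.normal(size=(N_RANKS, 8)), np.zeros(8)

hand = np.zeros((N_SUITS, N_RANKS))
hand[0, [12, 10]] = 1   # AQ in suit 0
hand[1, [12, 10]] = 1   # the same AQ holding in suit 1
feats = suit_shared_features(hand, w, b)
```

Because the weights are shared, suits 0 and 1 (identical holdings) produce identical feature vectors, which is exactly the symmetry argued for above.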
In some card games, many cards are special and there's no symmetry at all. You didn't tell us anything about your game, so it's hard to give more concrete advice.
Upvotes: 1 |