| date | nb_tokens | text_size | content |
|---|---|---|---|
| 2016/08/02 | 944 | 4,057 |
<issue_start>username_0: Can a Convolutional Neural Network be used for pattern recognition in problem domains without image data? For example, by representing abstract data in an image-like format with spatial relations? Would that always be less efficient?
[This developer](https://youtu.be/py5byOOHZM8?t=815 "This Developer") says current development could go further but not if there's a limit outside image recognition.<issue_comment>username_1: Convolutional Nets (CNN) rely on mathematical convolution (e.g. 2D or 3D convolutions), which is commonly used for signal processing. Images are a type of signal, and convolution can equally be used on sound, vibrations, etc. So, in principle, CNNs can find applications to any signal, and probably more.
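To make the signal point concrete, here is a minimal sketch (the one-dimensional "signal" is made up, not image data) showing mathematical convolution applied to a non-image signal — the same operation a 1-D CNN layer computes:

```python
import numpy as np

# A 1-D signal (e.g. audio samples) and a small edge-detecting kernel.
signal = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])
kernel = np.array([1.0, -1.0])  # responds to rising and falling edges

# np.convolve flips the kernel, i.e. true mathematical convolution.
response = np.convolve(signal, kernel, mode="valid")
print(response)  # [ 0.  1.  0.  0. -1.  0.] -- +1 at the rise, -1 at the fall
```

A CNN learns such kernels from data instead of hand-picking them, but nothing in the operation itself assumes the input is an image.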
In practice, there already exists work on NLP (as mentioned by <NAME>), where some people process text with CNNs rather than recursive networks. Other work applies them to sound processing (no reference here, but I have as-yet-unpublished work ongoing).
---
*Original contents: an answer to the original title question, which has since changed. Perhaps this part should be deleted*.
Research on adversarial networks (and related) shows that even [deep networks can easily be fooled](http://arxiv.org/abs/1412.1897), leading them to see a dog (or whatever object) in what appears to be random noise when a human looks at it (the article has clear examples).
Another issue is the generalization power of a neural network. Convolutional nets have amazed the world with their capability to generalize way better than other techniques. But if the network is only fed images of cats, it will recognize only cats (and probably see cats everywhere, as the adversarial-network results suggest). In other words, even CNNs have a hard time generalizing too far *beyond* what they learned from.
The recognition limit is hard to define precisely. I would simply say that the diversity of the learning data pushes the limit (further detail would probably belong in a more appropriate venue for discussion).
Upvotes: 4 <issue_comment>username_2: The simple answer is "no, they aren't limited to images": CNNs are also being used for natural language processing. (See [here](http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/) for an introduction.)
I haven't seen them applied to graphical data yet, but I haven't looked; there are some obvious things to try and so I'm optimistic that it would work.
Upvotes: 3 <issue_comment>username_3: Convolutional neural networks can be applied not only to image recognition but also to video analysis and recognition, natural language processing, games (e.g. [Go](https://en.wikipedia.org/wiki/Computer_Go#New_approaches_to_problems)), or even [drug discovery](https://ai.stackexchange.com/a/1424/8) by predicting the interaction between molecules and biological proteins ([wiki](https://en.wikipedia.org/wiki/Convolutional_neural_network#Applications)).
Therefore, they can be used for a variety of problems by connecting convolutional and subsampling layers to fully connected layers. They are also easier to train, because they have fewer parameters than fully connected networks with the same number of hidden units ([UFLDL](http://ufldl.stanford.edu/tutorial/supervised/ConvolutionalNeuralNetwork/)).
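A rough sketch of that parameter saving (the layer sizes here are made up for illustration): because a convolutional layer shares its weights across all spatial positions, its parameter count is independent of the input size, unlike a fully connected layer producing the same number of hidden units.

```python
# Compare parameter counts: 28x28 input mapped to 8 feature maps of 26x26.
in_h, in_w = 28, 28
out_h, out_w = 26, 26
n_filters = 8
k = 3  # 3x3 kernels

# Convolutional layer: each filter has k*k weights plus one bias,
# shared across every spatial position.
conv_params = n_filters * (k * k + 1)

# Fully connected layer with the same number of hidden units:
# every unit connects to every input pixel, plus a bias.
hidden_units = n_filters * out_h * out_w
fc_params = hidden_units * (in_h * in_w + 1)

print(conv_params)  # 80
print(fc_params)    # 4245280
```

Eighty parameters versus over four million for the same number of hidden units, which is the training advantage the UFLDL tutorial refers to.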
Upvotes: 2 <issue_comment>username_4: A convolutional neural network can be used wherever patterns are locally correlated and translatable (as in shiftable). This is the case because CNNs contain filters that look for certain local patterns everywhere in the input.
You'll find local and translatable patterns in pictures, text, time series, etc.
It doesn't make as much sense to use CNNs if your data is more like a bag of features with an irrelevant order. In that case, you might have trouble detecting patterns that contain features which happen to be farther apart in your input vector. You will not find local and translatable patterns in your data if you can reorder the data points of the input vectors without losing information.
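A toy sketch of "local and translatable" (the motif and sequences are invented): the same small filter scores highest wherever the pattern sits in the input, which is exactly the property a convolutional filter exploits.

```python
import numpy as np

motif = np.array([1.0, 2.0, 1.0])

def detect(seq, pattern):
    """Slide the pattern over seq; return the positions with the best score."""
    scores = [float(np.dot(seq[i:i + len(pattern)], pattern))
              for i in range(len(seq) - len(pattern) + 1)]
    best = max(scores)
    return [i for i, s in enumerate(scores) if s == best]

# The same filter finds the motif no matter where it has been shifted to.
a = np.array([1.0, 2.0, 1.0, 0.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 0.0, 1.0, 2.0, 1.0])
print(detect(a, motif), detect(b, motif))  # [0] [3]
```

Shuffling the entries of `b` destroys the local pattern even though the bag of values is unchanged, which is the "irrelevant order" case described above.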
Upvotes: 2
| 2016/08/02 | 886 | 3,527 |
<issue_start>username_0: I've heard the terms strong-AI and weak-AI used. Are these well defined terms or subjective ones? How are they generally defined?<issue_comment>username_1: The terms *strong* and *weak* don't actually refer to processing, or optimization power, or any interpretation leading to "strong AI" being *stronger* than "weak AI". It holds conveniently in practice, but the terms come from elsewhere. In 1980, [<NAME>](https://en.wikipedia.org/wiki/John_Searle) coined the following statements:
* AI hypothesis, strong form: an AI system can *think* and have a *mind* (in the philosophical definition of the term);
* AI hypothesis, weak form: an AI system can only *act* like it thinks and has a mind.
So *strong AI* is a shortcut for an AI system that satisfies the *strong AI hypothesis*; similarly for the weak form. The terms have since evolved: strong AI refers to AI that performs as well as humans (who have minds), weak AI refers to AI that doesn't.
The problem with these definitions is that they're fuzzy. For example, [AlphaGo](https://en.wikipedia.org/wiki/AlphaGo) is an example of weak AI, but is "strong" by Go-playing standards. A hypothetical AI replicating a human baby would be a strong AI, while being "weak" at most tasks.
Other terms exist: [Artificial General Intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence) (AGI), which has cross-domain capability (like humans), can learn from a wide range of experiences (like humans), among other features. Artificial Narrow Intelligence refers to systems bound to a certain range of tasks (where they may nevertheless have superhuman ability), lacking capacity to significantly improve themselves.
Beyond AGI, we find Artificial Superintelligence (ASI), based on the idea that a system with the capabilities of an AGI, without the physical limitations of humans would learn and improve far beyond human level.
Upvotes: 6 [selected_answer]<issue_comment>username_2: In contrast to the *philosophical* definitions, which rely on terms like "mind" and "think," there are also definitions that hinge on *observables*.
That is, a Strong AI is an AI that understands itself well enough to self-improve. Even if it is philosophically not equivalent to a human, or unable to perform *all* cognitive tasks that a human can, this AI can still generate a tremendous amount of optimization power / good decision-making, and its creation would be of historic importance (to put it lightly).
A Weak AI, in contrast, is an AI with no or limited ability to self-modify. A chessbot that runs on your laptop might have superhuman ability to play chess, but it can *only* play chess, and while it might tune its weights or its architecture and slowly improve, it cannot modify itself in a deep enough way to generalize to other tasks.
Another way to think about this is that a Strong AI is an AI researcher in its own right, and a Weak AI is what AI researchers produce.
Upvotes: 3 <issue_comment>username_3: [Strong](https://en.wikipedia.org/wiki/Strong_AI) and [weak AI](https://en.wikipedia.org/wiki/Weak_AI) are the older terms for [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence) (artificial general intelligence) and narrow AI. At least that's how I have seen it used and wikipedia seems to agree.
I personally haven't seen Searle's definition of "weak and strong AI" in use much, but maybe the shift to the newer terms came about in part because Searle successfully confused the issue.
Upvotes: 1
| 2016/08/02 | 1,824 | 7,652 |
<issue_start>username_0: I know that the language Lisp was used early on when working on artificial intelligence problems. Is it still being used today for significant work? If not, is there a new language that has taken its place as the most common one being used for work in AI today?<issue_comment>username_1: Overall, the answer is no, but the current paradigms owe a lot to LISP. The language most commonly used today is python.
Relevant answers:
* Stack Overflow thread explaining why LISP was thought of as the AI language: [Why is Lisp used for AI](https://stackoverflow.com/questions/130475/why-is-lisp-used-for-ai)
* Quora answer by <NAME>, who wrote a popular textbook on the subject and is currently Director of Research at Google: [Is it true that Lisp is highly used programming language in AI?](https://www.quora.com/Is-it-true-that-Lisp-is-highly-used-programming-language-in-AI)
LISP pioneered many important concepts in what we now call functional programming, with a key attraction being how close the programs were to math. Many of these features have since been incorporated into modern languages (see [the Wikipedia page](https://en.wikipedia.org/wiki/Lisp_(programming_language))). LISP is very expressive: it has very little syntax (just lists and some elementary operations on them) but you can write short succinct programs that represent complex ideas. This amazes newcomers and has sold it as the language for AI.

However, this is a property of programs in general. Short programs can represent complex concepts. And while you can write powerful code in LISP, any beginner will tell you that it is also very hard to read anyone else's LISP code or to debug your own LISP code. Initially, there were also performance considerations with functional programming and it fell out of favor to be replaced by low level imperative languages like C. (For example, functional programming requires that no object ever be changed ("mutated"), so every operation requires a new object to be created. Without good garbage collection, this can get unwieldy).

Today, we've learned that a mix of functional and imperative programming is needed to write good code and modern languages like python, ruby and scala support both. At this point, and this is just my opinion, there is no reason to prefer LISP over python.
The paradigm for AI that currently receives the most attention is Machine Learning, where we learn from data, as opposed to previous approaches like Expert Systems (in the 80s) where experts wrote rules for the AI to follow. Python is currently the most widely used language for machine learning and has many libraries, e.g. Tensorflow and Pytorch, and an active community. To process the massive amounts of data, we need systems like Hadoop, Hive or Spark. Code for these is written in python, java or scala. Often, the core time-intensive subroutines are written in C.
The AI Winter of the 80s was not because we did not have the right language, but because we did not have the right algorithms, enough computational power and enough data. If you're trying to learn AI, spend your time studying algorithms and not languages.
Upvotes: 4 [selected_answer]<issue_comment>username_2: LISP is still used significantly, but less and less. There is still momentum because so many people used it in the past and are still active in industry or research (anecdote: the last VCR was produced by a Japanese maker in July 2016, yes). The language is however used (to my knowledge) for the kind of AI that does not leverage Machine Learning, typically the kind covered in the reference book by Russell and Norvig. These applications are still very useful, but Machine Learning gets all the steam these days.
Another reason for the decline is that LISP practitioners have partially moved to Clojure and other recent languages.
If you are learning about AI technologies, LISP (or Scheme or Prolog) is a good choice to understand what is going on with "AI" at large. But if you wish or have to be very pragmatic, Python or R are the community choices.
Note: The above lacks concrete examples and references. I am aware of some work in universities, and of some companies inspired by or directly using LISP.
---
To add to @username_1's answer, LISP (and Scheme, and Prolog) has qualities that made it look like it was better suited for creating intelligent mechanisms---making AI as perceived in the 60s.
One of those qualities is that the language design leads the developer to think in a quite elegant way, to decompose a big problem into small problems, etc. Quite "clever", or "intelligent" if you will. Compared to some other languages, there is almost no choice but to develop that way. LISP is a list processing language, and "purely functional".
One problem, though, can be seen in work related to LISP. A notable one in the AI domain is the work on the [Situation Calculus](https://en.wikipedia.org/wiki/Situation_calculus), where (in short) one describes objects and rules in a "world", and can let it evolve to compute *situations*---states of the world. So it is a model for reasoning on situations. The main problem is called the [frame problem](https://en.wikipedia.org/wiki/Frame_problem), meaning this calculus cannot tell what does *not* change---just what changes. Anything that is not defined in the world cannot be processed (note the difference here with ML). First implementations used LISPs, because that was the AI language then. And they were bound by the frame problem. But, as @username_1 mentioned, it is not LISP's fault: any language would face the same framing issue (a conceptual problem of the Situation Calculus).
So the language really does not matter from the AI / AGI / ASI perspective. The concepts (algorithms, etc.) are really what matters.
Even in Machine Learning, the language is just a practical choice. Python and R are popular today, primarily due to their library ecosystems and the focus of key companies. But try to use Python or R to run a model for a RaspberryPI-based application, and you will face some severe limitations (though it is still possible; I am doing it :-)). So the language choice boils down to pragmatism.
Upvotes: 3 <issue_comment>username_3: I definitely continue to often use Lisp when working on AI models.
You asked if it is being used for *substantial* work. That's too subjective for me to answer regarding my own work, but I queried one of my AI models whether or not it considered itself substantial, and it replied with an affirmative response. Of course, its response is naturally biased as well.
Overall, a significant amount of AI research and development is conducted in Lisp. Furthermore, even for non-AI problems, Lisp is sometimes used. To demonstrate the power of Lisp, I engineered the first neural network simulation system written entirely in Lisp over a quarter century ago.
Upvotes: 3 <issue_comment>username_4: Clojure, a dialect of Lisp (implemented for the Java Virtual Machine), was used to implement [Clojush](https://github.com/lspector/Clojush), a *PushGP* system, i.e. a genetic programming (which is a sub-class of evolutionary algorithms) system based on the use of the [*Push* programming language](https://faculty.hampshire.edu/lspector/push.html), which is a stack-based programming language. The lead developer of Clojush, <NAME>, and other people still do research on these topics. So, yes, Lisp is still being used in artificial intelligence!
However, it's also true that Python and C/C++ (to implement the low-level stuff) are probably the two most used programming languages in AI nowadays, especially for deep learning.
Upvotes: 0
| 2016/08/02 | 2,459 | 9,639 |
<issue_start>username_0: What are the specific requirements of the Turing test?
* What requirements if any must the evaluator fulfill in order to be qualified to give the test?
* Must there always be two participants in the conversation (one human and one computer) or can there be more?
* Are placebo tests (where there is not actually a computer involved) allowed or encouraged?
Can there be multiple evaluators? If so does the decision need to be unanimous among all evaluators in order for the machine to have passed the test?<issue_comment>username_1: The "Turing Test" is generally taken to mean an updated version of the Imitation Game Alan Turing proposed in his 1950 paper "Computing Machinery and Intelligence". An early version had a human (male or female) and a computer, and a judge had to decide which was which, and what gender they were if human. If the judge was correct less than 50% of the time, the computer was considered "intelligent."
The current generally accepted version requires only one contestant, and a judge to decide whether it is human or machine. So yes, sometimes this will be a placebo, effectively, if we consider a human to be a placebo.
Your first and fourth questions are related - and there are no strict guidelines. If the computer can fool a greater number of judges then it will of course be considered a better AI.
The University of Toronto has a validity section in [this paper on Turing](http://www.psych.utoronto.ca/users/reingold/courses/ai/turing.html), which includes a link to [<NAME>' commentary](http://ciips.ee.uwa.edu.au/Papers/Technical_Reports/1997/05/Index.html) on why the Turing test may not be relevant (humans may also fail it), and the [Loebner Prize](http://www.loebner.net/Prizef/loebner-prize.html), a formal instantiation of a Turing Test.
Upvotes: 4 [selected_answer]<issue_comment>username_2: There are really two questions here, that I can see. One is "what were the specific requirements of the original Turing test, as stated by Turing himself?" The other is "What should the specific requirements of a modern Turing test be?" Things have advanced a lot since Turing's day, and I think it's reasonable for us to consider extending/modifying his test to reflect our current understanding.
The answer to the first question is easy enough to look up, so I think the interesting one is the second one. What *should* a test to determine intelligence look like? With that in mind, I think the answer to all four questions posed by the OP is "it depends". I don't think there's universal consensus on how to structure a perfect Turing test, so a given experimenter is really free to set things up however he/she wants.
This is all, of course, based on the assumption that the Turing test or a Turing Test-like test is actually of value. That's not necessarily a given. Consider that, to some extent, what we're talking about is designing an AI with an exceptional ability for deceit! That is, assuming the questioner is allowed to simply ask "are you human", then we have to assume that the AI is supposed to lie if it wants to pass the test. So one might rightly ask, is designing a system to be really good at telling lies, a valuable approach to AI?
Upvotes: 2 <issue_comment>username_3: If you want to understand relativity, read Einstein [1, 2], not a book about relativity authored by a professor who thinks he's got it. If you want to understand Alan Turing's test for intelligence in the context of human dialog, read Turing [3]. Interpretations can be worse than worthless. They are often misleading. If the principles seem too thick, read it over again until you get it.
In the case of Turing's test for intelligence in the context of human dialog, to understand it fully, the following background, which Turing assumed when he wrote, should be kept in mind; if you read his 1950 article, this will become apparent.
* How Turing's completeness theorem responds to <NAME>'s second incompleteness theorem
* The strategy of a controlled test
* The difference between (a) hearing and speaking and (b) listening and wittily responding. This is particularly pertinent today because the chat-bots do (a) and could be anywhere from 5 to 500 years away from doing (b). To reach (c), deeply comprehending and responding with inspiration, AI researchers must go beyond modelling the human mind and approach the challenge of modelling the minds of people like Gödel, Einstein, and Turing. Whether that will ever occur is yet to be revealed.
The specific requirements of the Imitation Game, Alan Turing's subtitle above the description of his thought experiment, are a matter of record.
**Specific Requirements [Excerpt from Actual Article]**
>
> [The imitation game] is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B thus:
>
>
> C: Will X please tell me the length of his or her hair?
>
>
> Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. His answer might therefore be:
>
>
> "My hair is shingled, and the longest strands are about nine inches long."
>
>
> In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator.
>
>
> The best strategy for her is probably to give truthful answers. She can add such things as "I am the woman, don't listen to him!" to her answers, but it will avail nothing as the man can make similar remarks.
>
>
> We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"
>
>
>
There have been thousands of critiques of both Einstein's relativity and Turing's test, none of which add much value. Study the thinking of great contributors through their own words and all the refuse that follows will be interesting primarily in its lack of greatness.
**Secondary Questions in This Thread**
>
> What requirements if any must the evaluator fulfill in order to be qualified to give the test?
>
>
>
The interrogator (C) is not an evaluator. Evaluation would be an attempt to be objective; however, the premise of Turing's thought experiment is that the interrogator provides her or his subjective judgment. From a statistics point of view, the interrogator should be selected randomly from the population of the world that shares a spoken language with (A) and (B).
>
> Must there always be two participants in the conversation (one human and one computer) or can there be more?
>
>
>
There must be exactly two to fit the scenario described by Alan Turing. (See below for more detail.)
>
> Are placebo tests (where there is not actually a computer involved) allowed or encouraged?
>
>
>
One could test all kinds of things, and researchers do; however, that would be outside the scope of Turing's thought experiment [4].
>
> Can there be multiple evaluators? If so does the decision need to be unanimous among all evaluators in order for the machine to have passed the test?
>
>
>
What would reveal the most information to those that sponsor an actual Imitation Game would be a double-blind, fully randomized test, where (A), (B), and (C) are drawn from as random a sample as possible of the men, women, or software systems under test that can converse in a common language, with the test run many times over random selections from those samples.
Unanimity, evaluation, additional complexity, and communication other than that which was specified by the test would only frustrate the cause, if one sticks with Turing's original intention regarding the question, "Can computers think?"
**Other Views of Intelligence**
Turing, like <NAME> (who stated that machines will never pass a less controlled version of Turing's Imitation Game), saw intelligence through the lens of dialog. Others have considered other kinds of dialog, and other contexts than dialog. I addressed this in another question:
>
> [Can a brain be intelligent without a body?](https://ai.stackexchange.com/questions/8219/can-a-brain-be-intelligent-without-a-body)
>
>
>
**References and Footnotes**
[1] *Relativity: The Special and the General Theory* by <NAME>, 1916
[2] *The Principle of Relativity* by <NAME> and <NAME>, 1923
[3] <NAME> (1950) Computing Machinery and Intelligence. Mind 49: 433-460. <https://www.csee.umbc.edu/courses/471/papers/turing.pdf>
[4] Turing's 1950 article did not recommend that his thought experiment should be embodied and used in commercial validation of future AI systems. Alan Turing was, however, concerned with practical computing at one specific point in his career. That was when the Nazis had overrun France, were pulverizing his homeland from the air, and had sunk a significant portion of the English Navy from below, with the help of Enigma cryptography.
Upvotes: 0
| 2016/08/02 | 655 | 2,736 |
<issue_start>username_0: I believe that statistical AI uses inductive thought processes. For example, deducing a trend from a pattern, after training.
What are some examples of successfully applied Statistical AI to real-world problems?<issue_comment>username_1: There are several examples. For example, one instance of using Statistical AI from my workplace is:
1. Analyzing the behavior of the customer and their food-ordering trends, and then trying to upsell by recommending dishes they might like to order. This can be done with the apriori and FP-growth algorithms. We then automated the algorithm, and it improves itself through an `Ordered/Not-Ordered` metric.
2. Self-driving cars. They use reinforcement and supervised learning algorithms for learning the route and the gradient/texture of the road.
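A minimal sketch of the frequent-itemset idea behind apriori (the orders and menu items below are made up; real implementations also generate candidates for larger itemsets and derive confidence-based recommendation rules):

```python
from itertools import combinations
from collections import Counter

# Hypothetical order history for a food-ordering service.
orders = [
    {"burger", "fries", "cola"},
    {"burger", "fries"},
    {"salad", "cola"},
    {"burger", "fries", "salad"},
    {"burger", "cola"},
]

min_support = 3  # keep itemsets appearing in at least 3 orders

# Apriori first pass: count single items, keep the frequent ones.
item_counts = Counter(item for order in orders for item in order)
frequent_items = {i for i, c in item_counts.items() if c >= min_support}

# Second pass: candidate pairs built only from frequent items
# (the apriori pruning step), then counted against the orders.
pair_counts = Counter()
for order in orders:
    for pair in combinations(sorted(order & frequent_items), 2):
        pair_counts[pair] += 1
frequent_pairs = {p: c for p, c in pair_counts.items() if c >= min_support}
print(frequent_pairs)  # {('burger', 'fries'): 3}
```

A resulting rule like "customers who order a burger often order fries" is then what the system would use to upsell.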
Upvotes: 4 [selected_answer]<issue_comment>username_2: There are many online services that use statistical neural networks for recommendations. For example, we have [a well known service](http://imhonet.ru) here in Russia that gives its users recommendations for movies and shows to watch and books to read. Its recommendation core is based on many things known about a user: what movies/books he or she loves and what not, what his or her friends like, and so on. While you have only a few items rated, it will give you very strange recommendations, but then it becomes more accurate and really can surface some true gems.
Upvotes: 2 <issue_comment>username_3: Not strictly examples of AI, but related to the greater AI project: those of us on the psychology / cognitive science side of things sure love our [Bayesian modelling](https://www.frontiersin.org/articles/10.3389/fpsyg.2014.01144/full)!
In fact, there are people who believe that a theory grounded in such analysis would ultimately bring us to a [unified theory of the brain and cognition](https://www.nature.com/articles/nrn2787)!
Unfortunately, to my knowledge, these theories are not yet complete or testable in interesting ways, as they are grounded more in the philosophy end of things. Moreover, the claims the psychologists make are rather weak: that hypothesis updating and inference are Bayesian-like (which isn't super exciting, to be honest; but my knowledge in this area is not complete).
Alas, more work needs to be done, but at least there is psychological support for the claim that cognition is Bayesian-like.
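As a sketch of what "Bayesian-like hypothesis updating" means concretely (the two coin hypotheses and all numbers are invented for illustration):

```python
# Bayesian hypothesis updating: revise beliefs after each observation.
# Two hypotheses about a coin: fair, or biased towards heads.
priors = {"fair": 0.5, "biased": 0.5}
likelihood = {"fair": {"H": 0.5, "T": 0.5},
              "biased": {"H": 0.8, "T": 0.2}}

def update(beliefs, observation):
    """One step of Bayes' rule: posterior ∝ prior * likelihood."""
    unnorm = {h: p * likelihood[h][observation] for h, p in beliefs.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

beliefs = dict(priors)
for obs in "HHHT":          # observe three heads, then a tail
    beliefs = update(beliefs, obs)
print(round(beliefs["biased"], 3))  # 0.621
```

Each observation shifts belief towards the hypothesis that explains it better, which is the kind of incremental inference the "Bayesian brain" claim ascribes to cognition.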
Upvotes: 2 <issue_comment>username_4: Statistical AI is widely used in finance for asset management (particularly hedge funds) and trade execution looking at high-speed small data sets, lots of HMMs and SSMs, but nobody talks about it because it provides proprietary riches.
Upvotes: 0
| 2016/08/02 | 778 | 3,449 |
<issue_start>username_0: Some programs do exhaustive searches for a solution while others do heuristic searches for a similar answer. For example, in chess, the search for the best next move tends to be more exhaustive in nature whereas, in Go, the search for the best next move tends to be more heuristic in nature due to the much larger search space.
Is the technique of brute force exhaustive searching for a good answer considered to be AI or is it generally required that heuristic algorithms be used before being deemed AI? If so, is the chess-playing computer beating a human professional seen as a meaningful milestone?<issue_comment>username_1: If a computer is just brute-forcing the solution, it's not learning anything or using any kind of intelligence at all, and therefore it shouldn't be called "artificial intelligence." It has to make decisions based on what's happened before in similar instances. For something to be intelligent, it needs a way to keep track of what it's learned. A chess program might have a really awesome measurement algorithm to use on every possible board state, but if it's always trying each state and never storing what it learns about different approaches, it's not intelligent.
Upvotes: 3 <issue_comment>username_2: If one thinks of intelligence as a continuous measure of optimization power (that is, how much better are outcomes for any unit of cognitive effort expended), then exhaustive search has non-zero intelligence (in that it does actually give better outcomes as more effort is expended) but *very, very low* intelligence (as the outcomes are better mostly by luck, and the amount of effort expended can be impossibly large).
Upvotes: 3 <issue_comment>username_3: Really, any 'intelligence' exhibited by a computer is deemed AI, regardless of brute force or the use of heuristics. For example, a chat bot can be coded to respond to most inputs using many, many if statements. This is an AI no matter how poorly coded/designed it is.
The chess-playing computer beating a human professional can be seen as a meaningful milestone. I mean, someone programmed a computer to beat grandmaster chess players and chess geniuses. Many thought that wasn't possible, since chess is such a complex game. This kind of work likely segued into more complex AI, for if a computer could play chess, then it could surely complete other complex tasks as well.
Note how refined chess programming is: magic bitboards, Zobrist hashing, pruning, lazy SMP, and many more. This is perhaps not the sort of milestone of AI that you thought, but again, the things that can be considered AI are pretty broad.
Upvotes: 2 <issue_comment>username_4: A brute-force approach is certainly the first step of many in AI programming. But using these experiences, the program must learn to find the best solution, or at least a closer one. Since the first goal in AI is to find any solution, nothing can beat the brute-force approach. But then, using the previous results of brute-force searches, the program must develop its own heuristics and use this data along with brute force to find the optimal solution.
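A toy sketch of that progression (the objective below is invented, and deliberately easy): brute force guarantees the optimum at exponential cost, while a greedy heuristic reaches it here with only `n` evaluations. On harder, non-separable objectives the greedy search would instead get stuck in local optima, as other answers note.

```python
import itertools

# Toy problem: find the bit string of length n maximizing a score.
n = 10

def score(bits):
    # Hypothetical objective: reward matching the pattern 1010...
    return sum(b == (i % 2 == 0) for i, b in enumerate(bits))

# Brute force: guaranteed optimum, but 2**n evaluations.
brute_best = max(itertools.product([0, 1], repeat=n), key=score)

# Greedy heuristic: flip one bit at a time, keep only improvements.
bits = [0] * n
for i in range(n):
    flipped = bits[:i] + [1 - bits[i]] + bits[i + 1:]
    if score(flipped) > score(bits):
        bits = flipped

print(score(brute_best), score(bits))  # 10 10
```

Both reach the maximum score of 10, but the heuristic used 10 evaluations against the brute force's 1024; that gap is what makes heuristics indispensable once the search space grows.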
Upvotes: 2 <issue_comment>username_5: I don't know why you wouldn't consider it AI, since every single thing that's been in the recent news has used something like it.
Evolving a neural network is very similar to brute-force search; it just hits local optima, because it's not exhaustive.
Upvotes: 1
| 2016/08/02 | 269 | 1,083 |
<issue_start>username_0: How is a neural network having the "deep" adjective actually distinguished from other similar networks?<issue_comment>username_1: The difference is mostly in the number of layers.
For a long time, it was believed that "1-2 hidden layers are enough for most tasks" and it was impractical to use more than that, because training neural networks can be very computationally demanding.
Nowadays, computers are capable of much more, so people have started to use networks with more layers and found that they work very well for some tasks.
The word "deep" is there simply to distinguish these networks from the traditional, "more shallow" ones.
Upvotes: 6 [selected_answer]<issue_comment>username_2: A deep neural network is just a (feed-forward) neural network with many layers.
However, deep belief networks, deep Boltzmann machines, etc., are (debatably) not considered deep neural networks, as their topology is different (i.e. they have undirected connections in their topology).
See also this: <https://stats.stackexchange.com/a/59854/84191>.
Upvotes: 3
|
2016/08/02
| 523
| 2,369
|
<issue_start>username_0: What is the effectiveness of pre-training of unsupervised deep learning?
Does unsupervised deep learning actually work?<issue_comment>username_1: [Unsupervised pre-training](http://www.cs.toronto.edu/~fritz/absps/ncfast.pdf) was used only briefly, as far as I know, at the time when deep learning started to actually work. It extracts certain regularities in the data, which later supervised learning can latch onto, so it is not surprising that it might work. On the other hand, unsupervised learning doesn't give particularly impressive results in very deep nets, so it is also not surprising that, with current very deep nets, it isn't used anymore.
I was wondering whether the initial success with unsupervised pre-training had something to do with the fact that the ideal initialization of neural nets was only worked out later. In that case, unsupervised pre-training would only be a very complicated way of getting the weights to the correct size.
Unsupervised deep learning is something like the holy grail of AI right now and hasn't been found yet. Unsupervised deep learning would allow you to use massive amounts of unlabeled data and let the net form its own categories. Later you can just use a little bit of labeled data to give these categories their proper labels. Or just train it immediately on some task, in the conviction that it has a huge amount of knowledge about the world already. This is also what the problem of common sense comes down to: a huge and detailed model of the world, that could only be acquired by unsupervised learning.
Upvotes: 3 [selected_answer]<issue_comment>username_2: i think that Training deep learning neural networks can be difficult because of local optima in the objective function and because complex models are prone to overfitting. Unsupervised pre-training initializes a discriminative neural net from one which was trained using an unsupervised criterion, such as a deep belief network or a deep autoencoder. This method can sometimes help with both the optimization and the overfitting issues, and about deep learning actually work Because there is no external taecher in unsupervised learning, it is really crucial to increase the entropy which can be done by redundancies in the data.
source: <https://metacademy.org/graphs/concepts/unsupervised_pre_training>
Upvotes: 0
|
2016/08/02
| 5,909
| 21,731
|
<issue_start>username_0: The following [page](http://www.evolvingai.org/fooling)/[study](http://www.evolvingai.org/files/DNNsEasilyFooled_cvpr15.pdf) demonstrates that deep neural networks are easily fooled by giving high-confidence predictions for unrecognisable images, e.g.
[](https://i.stack.imgur.com/7pgrH.jpg)
[](https://i.stack.imgur.com/pBm48.png)
How is this possible? Can you please explain, ideally in plain English?<issue_comment>username_1: First up, those images (even the first few) aren't complete trash despite being junk to humans; they're actually finely tuned with various advanced techniques, including another neural network.
>
> The deep neural network is the pre-trained network modeled on AlexNet provided by [Caffe](https://github.com/BVLC/caffe). To evolve images, both the directly encoded and indirectly encoded images, we use the [Sferes](https://github.com/jbmouret/sferes2) evolutionary framework. The entire code base to conduct the evolutionary experiments can be download [sic] [here](https://github.com/Evolving-AI-Lab/fooling). The code for the images produced by gradient ascent is available [here](https://github.com/Evolving-AI-Lab/fooling/tree/master/caffe/ascent).
>
>
>
Images that are actually random junk were correctly recognized as nothing meaningful:
>
> In response to an unrecognizable image, the networks could have output a low confidence for each of the 1000 classes, instead of an extremely high confidence value for one of the classes. In fact, they do just that for randomly generated images (e.g. those in generation 0 of the evolutionary run)
>
>
>
The original goal of the researchers was to use the neural networks to automatically generate images that look like the real things (by getting the recognizer's feedback and trying to change the image to get a more confident result), but they ended up creating the above art. Notice how even in the static-like images there are little splotches - usually near the center - which, it's fair to say, are triggering the recognition.
>
> We were not trying to produce adversarial, unrecognizable images. Instead, we were trying to produce recognizable images, but these unrecognizable images emerged.
>
>
>
Evidently, these images had just the right distinguishing features to match what the AI looked for in pictures. The "paddle" image does have a paddle-like shape, the "bagel" is round and the right color, the "projector" image is a camera-lens-like thing, the "computer keyboard" is a bunch of rectangles (like the individual keys), and the "chainlink fence" legitimately looks like a chain-link fence to me.
>
> Figure 8. Evolving images to match DNN classes produces a tremendous diversity of images. Shown are images selected to showcase diversity from 5 evolutionary runs. The diversity suggests that the images are non-random, but that instead evolutions producing [sic] discriminative features of each target class.
>
>
>
Further reading: [the original paper](http://www.evolvingai.org/files/DNNsEasilyFooled_cvpr15.pdf) (large PDF)
Upvotes: 7 [selected_answer]<issue_comment>username_2: An important question that does not yet have a satisfactory answer in neural network research is how DNNs come up with the predictions they offer. DNNs effectively work (though not exactly) by matching patches in the images to a "dictionary" of patches, one stored in each neuron (see [the youtube cat paper](http://research.google.com/archive/unsupervised_icml2012.html)). Thus, it may not have a high level view of the image since it only looks at patches, and images are usually downscaled to much lower resolution to obtain the results in current generation systems. Methods which look at how the components of the image interact may be able to avoid these problems.
Some questions to ask for this work are: How confident were the networks when they made these predictions? How much volume do such adversarial images occupy in the space of all images?
Some work I am aware of in this regard comes from <NAME> and <NAME>ikh's Lab at Virginia Tech who look into this for question answering systems: [Analyzing the Behavior of Visual Question Answering Models](http://arxiv.org/pdf/1606.07356v1.pdf) and [Interpreting Visual Question Answering models](https://filebox.ece.vt.edu/~dbatra/papers/gmpb_icmlvis16.pdf).
More such work is needed, and just as the human visual system does also get fooled by such "optical illusions", these problems may be unavoidable if we use DNNs, though AFAIK nothing is yet known either way, theoretically or empirically.
Upvotes: 4 <issue_comment>username_3: The images that you provided may be unrecognizable for us. They are actually the images that we recognize but evolved using the [Sferes](https://github.com/sferes2/sferes2) evolutionary framework.
While these images are almost impossible for humans to label with anything but abstract arts, the Deep Neural Network will label them to be familiar objects with 99.99% confidence.
This result highlights differences between how DNNs and humans recognize objects. (The images are either directly or indirectly encoded.)
According to this [video](https://youtu.be/M2IebCN9Ht4)
>
> Changing an image, originally correctly classified, in a way imperceptible to humans can cause the DNN to classify it as something else.
>
>
> In the image below, the numbers at the bottom indicate the digits the images are supposed to look like,
> but the network believes the images at the top (the ones that look like white noise) are real digits with 99.99% certainty.
>
>
>

>
> The main reason why these are easily fooled is that a deep neural network does not see the world in the same way as human vision. We use the whole image to identify things, while a DNN depends on features. As long as the DNN detects certain features, it will classify the image as a familiar object it has been trained on.
> The researchers proposed one way to prevent such fooling: adding the fooling images to the dataset in a new class and training the DNN on the enlarged dataset. In that experiment, the confidence scores decrease significantly for the ImageNet AlexNet, and it is not easy to fool the retrained DNN. But when the researchers applied the same method to the MNIST LeNet, evolution still produced many unrecognizable images with confidence scores of 99.99%.
>
>
>
More details [here](http://www.evolvingai.org/fooling) and [here](http://www.kdnuggets.com/2015/01/deep-learning-can-be-easily-fooled.html).
Upvotes: 5 <issue_comment>username_4: The neural networks can be easily fooled or hacked by adding certain structured noise in image space ([Szegedy 2013](https://arxiv.org/abs/1312.6199), [Nguyen 2014](http://arxiv.org/abs/1412.1897)) due to ignoring non-discriminative information in their input.
For example:
>
> Learning to detect jaguars by matching the unique spots on their fur while ignoring the fact that they have four legs.[2015](http://arxiv.org/abs/1506.06579)
>
>
>
So, basically, the high-confidence predictions in certain models exist due to a '*combination of their locally linear nature and high-dimensional input space*'.[2015](http://arxiv.org/abs/1412.1897)
A paper published at [ICLR 2015](http://www.stat.ucla.edu/~ywu/ICLR2015.pdf) (work by Dai) suggests that transferring discriminatively trained parameters to generative models could be a great area for further improvements.
Upvotes: 2 <issue_comment>username_5: Can't comment(due to that required 50 rep), but I wanted to make a response to username_3 and the OP. I think you guys are skipping the fact that the neural network only really is saying truly from a programmatic standpoint that "this is most like".
For example, while we can describe the above example images as "abstract art", they definitively are most like what is listed. Remember, learning algorithms have a scope on what they recognize as an object, and if you look at all the above examples and think about the scope of the algorithm, these make sense (even the ones that at a glance we would recognize as white noise). In Vishnu's example of the numbers, if you fuzz your eyes and bring the images out of focus, you can actually in every case spot patterns that closely reflect the numbers in question.
The problem being shown here is that the algorithm appears not to have an "unknown" case: basically, a case where the pattern recognition says the input doesn't exist in the output scope (a final output node group that says "this is nothing that I know of"). People are limited this way as well; it's one thing humans and learning algorithms have in common. Here's an image to show what I'm talking about (what is the following? define it), using only known animals that exist:

Now, as a person limited by what I know and can say, I'd have to conclude that the above is an elephant. But it's not. Learning algorithms (for the most part) do not have a "like a" statement; the output always resolves to a confidence percentage. So tricking one in this fashion is not surprising. What is of course surprising is that, based on its knowledge set, it actually comes to a point where, if you look at the cases listed by the OP and Vishnu, a person, with a little looking, can see how the learning algorithm probably made the association.
So, I wouldn't really call it a mislabel on the part of the algorithm, or even a case where it's been tricked, but rather a case where its scope was developed incorrectly.
Upvotes: 2 <issue_comment>username_6: All answers here are great, but, for some reason, nothing has been said so far on why this effect *should not surprise* you. I'll fill the blank.
Let me start with one requirement that is absolutely essential for this to work: the attacker *must know the neural network architecture* (number of layers, size of each layer, etc.). Moreover, in all cases that I examined myself, the attacker knows the snapshot of the model that is used in production, i.e. all weights. In other words, the "source code" of the network isn't a secret.
You can't fool a neural network if you treat it like a black box. And you can't reuse the same fooling image for different networks.
In fact, you have to "train" the target network yourself, and here by training I mean to run forward and backprop passes, but specially crafted for another purpose.
### Why does it work at all?
Now, here's the intuition. Images are very high dimensional: even the space of small 32x32 color images has `3 * 32 * 32 = 3072` dimensions. But the training data set is relatively small and contains real pictures, all of which have some structure and nice statistical properties (e.g. smoothness of color). So the training data set is located on a tiny manifold of this huge space of images.
Convolutional networks work extremely well on this manifold but basically know nothing about the rest of the space. The classification of points outside the manifold is just a linear extrapolation based on points inside the manifold. No wonder some particular points are extrapolated incorrectly. The attacker only needs a way to navigate to the closest of these points.
### Example
Let me give you a concrete example how to fool a neural network. To make it compact, I'm going to use a very simple logistic regression network with one nonlinearity (sigmoid). It takes a 10-dimensional input `x`, computes a single number `p=sigmoid(W.dot(x))`, which is the probability of class 1 (versus class 0).

Suppose you know `W=(-1, -1, 1, -1, 1, -1, 1, 1, -1, 1)` and start with an input `x=(2, -1, 3, -2, 2, 2, 1, -4, 5, 1)`. A forward pass gives `sigmoid(W.dot(x))=0.0474`, i.e. a 95% probability that `x` is a class-0 example.

We'd like to find another example, `y`, which is very close to `x` but is classified by the network as 1. Note that `x` is 10-dimensional, so we have the freedom to nudge 10 values, which is a lot.
Since `W[0]=-1` is negative, it's better to decrease `y[0]` so that the contribution `y[0]*W[0]` becomes less negative. Hence, let's make `y[0]=x[0]-0.5=1.5`.
Likewise, `W[2]=1` is positive, so it's better to increase `y[2]` to make `y[2]*W[2]` bigger: `y[2]=x[2]+0.5=3.5`. And so on.

The result is `y=(1.5, -1.5, 3.5, -2.5, 2.5, 1.5, 1.5, -3.5, 4.5, 1.5)`, and `sigmoid(W.dot(y))=0.88`. With these small nudges, we improved the class-1 probability from 5% to 88%!
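For the curious, the arithmetic above checks out. Here is a short pure-Python sketch that reproduces the two forward passes; the `sigmoid` helper and the per-coordinate nudge rule are exactly the ones described in the example:

```python
import math

def sigmoid(z):
    # Standard logistic function: maps any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

W = [-1, -1, 1, -1, 1, -1, 1, 1, -1, 1]
x = [2, -1, 3, -2, 2, 2, 1, -4, 5, 1]

# Original input: strongly classified as class 0.
p_x = sigmoid(sum(w * v for w, v in zip(W, x)))

# Nudge every coordinate by 0.5 in the direction that increases W.dot(y):
# decrease y[i] where W[i] < 0, increase it where W[i] > 0.
y = [v - 0.5 if w < 0 else v + 0.5 for w, v in zip(W, x)]
p_y = sigmoid(sum(w * v for w, v in zip(W, y)))

print(round(p_x, 4))  # 0.0474
print(round(p_y, 4))  # 0.8808
```

Running this reproduces both probabilities quoted above, including the adversarial vector `y`.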
### Generalization
If you look closely at the previous example, you'll notice that I knew exactly how to tweak `x` in order to move it to the target class, because I knew the network gradient. What I did was actually a *backpropagation*, but with respect to the data, instead of weights.
In general, the attacker starts with the target distribution `(0, 0, ..., 1, 0, ..., 0)` (zero everywhere, except for the class it wants to achieve), backpropagates to the data, and makes a tiny move in that direction. The network's weights are not updated.
Now it should be clear that this is a common feature of feed-forward networks that deal with a small data manifold, no matter how deep the network is or what the nature of the data is (image, audio, video or text).
### Protection
The simplest way to prevent the system from being fooled is to use an ensemble of neural networks, i.e. a system that aggregates the votes of several networks on each request.
It's much more difficult to backpropagate with respect to several networks simultaneously. The attacker might try to do it sequentially, one network at a time, but the update for one network might easily mess up the results obtained for another. The more networks are used, the more complex an attack becomes.
Another possibility is to smooth the input before passing it to the network.
### Positive use of the same idea
You shouldn't think that backpropagation to the image has only negative applications. A very similar technique, called *deconvolution*, is used for visualization and better understanding what neurons have learned.
This technique allows synthesizing an image that causes a particular neuron to fire, basically see visually "what the neuron is looking for", which in general makes convolutional neural networks more interpretable.
Upvotes: 4 <issue_comment>username_7: There are already many good answers; I will just add to those that came before mine:
This type of image is called an [adversarial perturbation](http://arxiv.org/pdf/1412.1897) (see [1](https://arxiv.org/pdf/1412.1897.pdf)), and it is not limited to images: it has been shown to apply to text too (see [Jia & Liang, EMNLP 2017](https://arxiv.org/abs/1707.07328)). In text, the introduction of an irrelevant sentence which doesn't contradict the paragraph has been seen to cause the network to come to a completely different answer.
**The reason they work** is that neural networks view images differently from us, coupled with the high dimensionality of the problem space. Where we see the whole picture, they see a combination of features which combine to form an object ([Moosavi-Dezfooli et al., CVPR 2017](https://arxiv.org/abs/1610.08401)). According to the same work, a perturbation generated against one network has a high likelihood of working on other networks:
[](https://i.stack.imgur.com/7zK85.jpg)
In the figure above, it can be seen that the universal perturbations computed for the VGG-19 network, for example, have a fooling ratio above 53% for all other tested architectures.
So how do you deal with the threat of adversarial perturbations?
Well, for one, you can try to generate as many perturbations as you can and use them to fine-tune your model. While this somewhat mitigates the problem, it doesn't solve it entirely. In [Moosavi-Dezfooli et al., CVPR 2017](https://arxiv.org/abs/1610.08401) the authors reported that repeating the process by computing new perturbations and then fine-tuning again seems to yield no further improvement, regardless of the number of iterations, with the fooling ratio hovering around 80%.
Perturbations are an indication of the shallow pattern matching that neural networks perform, coupled with their lack of in-depth understanding of the problem at hand. More work still needs to be done.
Upvotes: 2 <issue_comment>username_8: >
> How is it possible that deep neural networks are so easily fooled?
>
>
> Deep neural networks are easily fooled by giving high confidence predictions for unrecognizable images. How is this possible? Can you please explain ideally in **plain** English?
>
>
>
Intuitively, extra hidden layers ought to make the network able to learn more complex classification functions, and thus do a better job classifying. But while it may be named *deep learning*, what the network acquires is actually a shallow understanding.
Test your own knowledge: which animal in the grid below is a [Felis silvestris catus](https://en.wikipedia.org/wiki/Cat)? Take your time and no cheating. Here's a hint: which is a domestic house cat?
[](https://i.stack.imgur.com/P09Tk.jpg)
For a better understanding checkout: "[Adversarial Attack to Vulnerable Visualizations](https://medium.com/visweekly/adversarial-attack-to-vulnerable-visualizations-bfa3ed50efc2)" **and** "[Why are deep neural networks hard to train?](http://neuralnetworksanddeeplearning.com/chap5.html)".
The problem is analogous to [aliasing](https://en.wikipedia.org/wiki/Aliasing), an effect that causes different signals to become indistinguishable (or aliases of one another) when sampled, and to the [wagon-wheel effect](https://en.m.wikipedia.org/wiki/Wagon-wheel_effect), where a spoked wheel appears to rotate differently from its true rotation.
The neural network doesn't *know* what it's looking at or which way it's going.
Deep neural networks aren't experts in a subject; they are trained to decide mathematically that some goal has been met. If they are not trained to reject wrong answers, they have no concept of what is wrong; they only *know* what is correct and what is not correct - wrong and "not correct" are not necessarily the same thing, and neither are "correct" and "true".
The neural network doesn't *know* right from wrong.
Just like most people wouldn't know a house cat if they saw one, two or more, or none. How many house cats are in the photo grid above? None. Any accusation of including cute cat pictures is unfounded: those are all dangerous wild animals.
Here's another example. Does answering the question make Bart and Lisa smarter? Does the person they are asking even know? Are there unknown variables that can come into play?
[](https://i.stack.imgur.com/BznDK.jpg)
We aren't there yet, but neural networks can quickly provide an answer that is likely to be correct, especially if they were properly trained to avoid all missteps.
Upvotes: 3 <issue_comment>username_9: Neural networks are easily fooled, provided you know how to fool them.
Consider a linear network with an input layer and an output layer, which has an error function E (we don't need hidden layers to show how to fool a network). For a given input image x, E measures the (squared) difference between the network's output y and the desired (correct) output.
The output unit’s state y is given by the inner product of x with the output unit’s weight vector w, so that
```
y=w·x.
```
If we change x to x′ by adding ∆x then the output will change by ∆y to
```
y′ = w·x + w·∆x
   = y + ∆y
```
Notice that ∆x defines a direction in the input space; the question is, which direction of ∆x will have the most impact on y?
By definition, a change in x in the direction ∇E produces the largest
possible change in y. An adversarial image x' is constructed by taking the derivative ∇E of E with respect to the input image x, that is,
```
x′ = x + ε∇E,
```
where ε is a small constant.
By definition, ∇E is the direction of steepest ascent, so the modification ε∇E to x will alter y more than a change in any other direction.
[](https://i.stack.imgur.com/ggkFa.jpg)
This is an extract from the book: [Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning](https://jim-stone.staff.shef.ac.uk/BookBayes2012/books_by_jv_stone) (2019).
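To make the extract concrete, here is a tiny numerical sketch of the update x′ = x + ε∇E for a single linear output unit with squared error. The weights, input, target and step size below are made-up illustrative values, not taken from the book:

```python
# Minimal numerical illustration of x' = x + eps * dE/dx for a single
# linear output unit y = w.x with squared error E = (y - t)^2.
# The values of w, x, t and eps are illustrative only.

w = [0.5, -1.0, 2.0, 0.3]
x = [1.0, 2.0, -0.5, 0.0]
t = 0.0          # the "correct" output for this input
eps = 0.1

def output(v):
    return sum(wi * vi for wi, vi in zip(w, v))

y = output(x)
E = (y - t) ** 2          # error before the perturbation

# Chain rule through y = w.x gives dE/dx_i = 2 * (y - t) * w_i.
grad = [2 * (y - t) * wi for wi in w]

# Stepping along the gradient of E is, by definition, the direction
# that increases the error (and hence alters y) the fastest.
x_adv = [xi + eps * gi for xi, gi in zip(x, grad)]
E_adv = (output(x_adv) - t) ** 2

print(E, E_adv)  # E_adv > E: the error strictly increased
```

Any other direction of the same step size would change E less, which is exactly the point of using ∇E in the construction above.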
Upvotes: 2
|
2016/08/02
| 671
| 3,126
|
<issue_start>username_0: What is the definition of a deep neural network? Why are they so popular or important?<issue_comment>username_1: A deep neural network (DNN) is nothing but a neural network which has multiple layers, where *multiple* can be subjective.
IMHO, any network which has 6 or 7 or more layers is considered deep. So, the above would form a very basic definition of a deep network.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Deep networks have two main differences with 'normal' networks.
The first is that computational power and training datasets have grown immensely, meaning that it's practical to run larger networks and statistically valid (that is, we have enough training examples that we won't just run into over-fitting problems with larger networks).
The second is that back propagation is limited the more layers you have; each layer represents a gradient of the error, and so by the time one is about six layers deep there isn't much error left to modify the neuron weights. But one might reasonably expect earlier neurons to be more important than later neurons, since they represent 'concepts' that are closer to the raw inputs.
New training techniques sidestep this problem, typically by doing unsupervised learning on the raw inputs, creating higher-level 'concepts' that are then useful as inputs for supervised learning.
(For example, consider the problem of determining whether or not an image contains a cat from the pixels. The early layers of the network should be doing things like detecting edges, which one could expect to be shared among all images and mostly independent of what one is trying to do with the output layers, and thus also hard to train through 'cat-not-cat' signals many layers up.)
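The vanishing-gradient point above can be illustrated with a back-of-the-envelope sketch: the derivative of a sigmoid never exceeds 0.25, so an error signal passing through n sigmoid layers shrinks at least geometrically. The unit-weight assumption here is mine, purely for illustration; in real networks the weights modulate this bound.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_derivative(z):
    # sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
    s = sigmoid(z)
    return s * (1.0 - s)

# The derivative peaks at z = 0, where it equals exactly 0.25.
peak = sigmoid_derivative(0.0)

# Upper bound on the error signal surviving n sigmoid layers,
# assuming unit weights (an illustrative simplification, not a
# property of any real trained network).
for n in range(1, 9):
    print(n, peak ** n)
```

After six layers the bound is already below 0.001, which is why deeper sigmoid networks were so hard to train with plain backpropagation.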
Upvotes: 3 <issue_comment>username_3: General structure of an Artificial Neural Network
**Input Layer + Hidden Layers + Output Layer**
If there are more hidden layers in an artificial neural network, then it is called a deep neural network. Exactly how many layers constitute a deep neural network is a point of debate, but in general, the more hidden layers, the deeper the network.
Coming to why they are so popular and important: many problems, like object detection, classification, face recognition, and speech recognition, were solved with the advent of deep neural networks. It's not an exaggeration to say that the performance of deep neural networks has surpassed human performance on many of the above tasks; that means a computer is now better at those tasks than humans. All of the above problems had been open research problems for almost five decades. They were solved to near-perfection only in the last four or five years, thanks to the success of deep neural networks. That is why deep neural networks are very popular and important. I mentioned only a few problems that I worked on; there are many similar tasks that deep neural networks solved with ease in the last decade.
And, at this point in time, many people across the world are working on solving innumerable applications using deep neural networks.
Upvotes: 1
|
2016/08/02
| 638
| 2,731
|
<issue_start>username_0: In [this video](https://youtu.be/oSdPmxRCWws?t=30) an expert says, "One way of thinking about what intelligence is [specifically with regard to artificial intelligence], is as an optimization process."
Can intelligence always be thought of as an optimization process, and can artificial intelligence always be modeled as an optimization problem? What about pattern recognition? Or is he mischaracterizing?<issue_comment>username_1: A good answer to this question depends on what you want to use the labels for.
When I think about "optimization," I think about a solution space and a cost function; that is, there are many possible answers that could be returned and we can know what the cost is of any particular answer.
In this view, the answer is "yes"--pattern recognition is a case where each pattern is a possible answer, and the optimization method is trying to find the one where the cost is lowest (that is, where the answer matches what you want it to match).
But most interesting optimization problems are characterized by exponential solution spaces and clean cost functions, and so can be thought of more as 'search' problems, whereas most pattern recognition problems are characterized by simple solution spaces and complicated cost functions, and it might feel unnatural to put the two of them together.
(In general, I do think that optimization and intelligence are deeply linked enough that optimization power is a good measure of intelligence, and certainly a better measure of the *practical* use of intelligence than pattern recognition.)
Upvotes: 3 <issue_comment>username_2: I can offer two (at first sight, conflicting) perspectives on this:
Firstly:
*If the letter string 'abc' becomes 'abd' what would "doing the same thing" to 'ijk' look like?*
This is just one example of a problem (so-called 'letter-string analogy problems') that is not easily framed as an optimization problem - there is a range of answers that appear compelling to humans, each for its own structurally-specific reason. Some of the subtleties of these kinds of problems are discussed in detail [here](http://cognitrn.psych.indiana.edu/rgoldsto/courses/concepts/copycat.pdf).
Secondly:
Here's a *very* high-level perspective on AGI in which [optimization plays a key part](http://arxiv.org/abs/cs/0309048).
It's not at all clear how these two very different scales of approach might be reconciled. As someone who does optimization research for a living, I'd be inclined to say that, certainly for all *current, practical* purposes, AGI can't really be treated as an optimization problem, since most interesting activities don't readily lend themselves to description via a cost function.
Upvotes: 3
|
2016/08/02
| 582
| 2,545
|
<issue_start>username_0: What specific advantages of declarative languages make them more applicable to AI than imperative languages?
What can declarative languages do easily that other language styles find difficult for this kind of problem?
|
2016/08/02
| 5,188
| 20,009
|
<issue_start>username_0: Obviously, self-driving cars aren't perfect, so imagine that the Google car (as an example) got into a difficult situation.
Here are a few examples of unfortunate situations caused by a set of events:
* The car is heading toward a crowd of 10 people crossing the road, so it cannot stop in time, but it can avoid killing the 10 people by hitting a wall (killing its passengers),
* Avoiding killing a motorcyclist, considering that the probability of survival is greater for the car's passengers,
* Killing an animal on the street in favour of a human being,
* Purposely changing lanes to crash into another car to avoid killing a dog,
And here are a few dilemmas:
* Does the algorithm recognize the difference between a human being and an animal?
* Does the size of the human being or animal matter?
* Does it count how many passengers it has vs. people in the front?
* Does it "know" when babies/children are on board?
* Does it take into the account the age (e.g. killing the older first)?
How would an algorithm decide what to do from a technical perspective? Would it be aware of the above (counting the probability of kills), or not (killing people just to avoid its own destruction)?
Related articles:
* [Why Self-Driving Cars Must Be Programmed to Kill](https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/)
* [How to Help Self-Driving Cars Make Ethical Decisions](https://www.technologyreview.com/s/539731/how-to-help-self-driving-cars-make-ethical-decisions/)<issue_comment>username_1: The answer to a lot of those questions depends on how the device is programmed. A computer capable of driving around and recognizing where the road goes is likely to have the ability to visually distinguish a human from an animal, whether that be based on outline, image, or size. With sufficiently sharp image recognition, it might be able to count the number and kind of people in another vehicle. It could even use existing data on the likelihood of injury to people in different kinds of vehicles.
Ultimately, people disagree on the ethical choices involved. Perhaps there could be "ethics settings" for the user/owner to configure, like "consider life count only" vs. "younger lives are more valuable." I personally would think it's not terribly controversial that a machine should damage itself before harming a human, but people disagree on how important pet lives are. If explicit kill-this-first settings make people uneasy, the answers could be determined from a questionnaire given to the user.
Upvotes: 6 <issue_comment>username_2: This is the well known [*Trolley Problem*](https://en.wikipedia.org/wiki/Trolley_problem). As [username_1](https://ai.stackexchange.com/a/134/8) said, people disagree on the right course of action for trolley problem scenarios, but it should be noted that with self-driving cars, reliability is so high that these scenarios are really unlikely. So, not much effort will be put into the problems you are describing, at least in the short term.
Upvotes: 4 <issue_comment>username_3: For a driverless car that is designed by a single entity, the best way for it to make decisions about whom to kill is by estimating and minimizing the probable liability.
It doesn't need to absolutely correctly identify all the potential victims in the area to have a defense for its decision, only to identify them as well as a human could be expected to.
It doesn't even need to know the age and physical condition of everyone in the car, as it can ask for that information and if refused, has the defense that the passengers chose not to provide it, and therefore took responsibility for depriving it of the ability to make a better decision.
It only has to have a viable model for minimizing exposure of the entity to lawsuits, which can then be improved over time to make it more profitable.
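To make the liability framing concrete, here is a hypothetical sketch (all probabilities and cost figures are invented for the example): enumerate candidate maneuvers, score each by expected cost, and pick the minimum.

```python
# Hypothetical illustration: choose the action minimizing expected liability.
# Each action maps to a list of (probability, cost) outcomes; numbers invented.
actions = {
    "brake_straight": [(0.10, 500_000), (0.90, 5_000)],
    "swerve_left":    [(0.30, 800_000), (0.70, 20_000)],
}

def expected_cost(outcomes):
    return sum(p * c for p, c in outcomes)

best = min(actions, key=lambda a: expected_cost(actions[a]))
print(best)  # the maneuver with the lowest expected liability
```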
Upvotes: 3 <issue_comment>username_4: Personally, I think this might be an overhyped issue. Trolley problems only occur when the situation is optimized to prevent "3rd options".
A car has brakes, does it not? "But what if the brakes don't work?" Well, then **the car is not allowed to drive at all.** Even in regular traffic, human operators are taught that your speed should be limited as such that you can stop within the area you can see. Solutions like these will reduce the possibility of a trolley problem.
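The "stop within the area you can see" rule can be made concrete with textbook kinematics (a sketch; the friction coefficient and reaction times are assumed values, not figures from the answer above):

```python
# Stopping distance = reaction distance + braking distance (v^2 / (2*mu*g)).
# mu (tyre friction) and reaction_s are assumed illustrative values.
def stopping_distance(v_kmh, mu=0.7, reaction_s=1.0):
    v = v_kmh / 3.6                       # convert km/h to m/s
    g = 9.81                              # gravity, m/s^2
    return v * reaction_s + v ** 2 / (2 * mu * g)

print(round(stopping_distance(50), 1))                   # human driver: ~28 m
print(round(stopping_distance(50, reaction_s=0.1), 1))   # faster machine reaction
```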
As for animals... if there is no explicit effort to deal with humans on the road I think animals will be treated the same. This sounds implausible - roadkill happens often and human "roadkill" is unwanted, but animals are a lot smaller and harder to see than humans, so I think detecting humans will be easier, preventing a lot of the accidents.
In other cases (bugs, faults while driving, multiple failures stacked onto each other), perhaps accidents will occur, they'll be analysed, and vehicles will be updated to avoid causing similar situations.
Upvotes: 5 <issue_comment>username_5: In the real world, decisions will be made based on the law, and [as noted over on Law.SE](https://law.stackexchange.com/questions/1639/what-is-the-legal-take-on-the-trolley-problem), the law generally favors inaction over action.
Upvotes: 4 <issue_comment>username_6: >
> How could self-driving cars make ethical decisions about who to kill?
>
>
>
It shouldn't. Self-driving cars are not moral agents. Cars fail in predictable ways. Horses fail in predictable ways.
>
> the car is heading toward a crowd of 10 people crossing the road, so
> it cannot stop in time, but it can avoid killing 10 people by hitting
> the wall (killing the passengers),
>
>
>
In this case, the car should slam on the brakes. If the 10 people die, that's just unfortunate. We simply cannot *trust* all of our beliefs about what is taking place outside the car. What if those 10 people are really robots made to *look* like people? What if they're *trying* to kill you?
>
> avoiding killing the rider of the motorcycle considering that the
> probability of survival is greater for the passenger of the car,
>
>
>
Again, hard-coding these kinds of sentiments into a vehicle opens the rider of the vehicle up to all kinds of attacks, including *"fake"* motorcyclists. Humans are *barely* equipped to make these decisions on their own, if at all. When in doubt, just slam on the brakes.
>
> killing an animal on the street in favour of a human being,
>
>
>
Again, just hit the brakes. What if it was a baby? What if it was a bomb?
>
> changing lanes to crash into another car to avoid killing a dog,
>
>
>
Nope. The dog was in the wrong place at the wrong time. The other car wasn't. Just slam on the brakes, as safely as possible.
>
> Does the algorithm recognize the difference between a human being and an animal?
>
>
>
Does a human? Not always. What if the human has a gun? What if the animal has large teeth? Is there no context?
>
> * Does the size of the human being or animal matter?
> * Does it count how many passengers it has vs. people in the front?
> * Does it "know" when babies/children are on board?
> * Does it take into the account the age (e.g. killing the older first)?
>
>
>
Humans can't agree on these things. If you ask a cop what to do in any of these situations, the answer won't be, "You should have swerved left, weighed all the relevant parties in your head, assessed the relevant ages between all parties, then veered slightly right, and you would have saved 8% more lives." No, the cop will just say, "You should have brought the vehicle to a stop, as quickly and safely as possible." Why? Because cops know people normally aren't equipped to deal with high-speed crash scenarios.
Our target for "self-driving car" should not be 'a moral agent on par with a human.' It should be an agent with the reactive complexity of a cockroach, which fails predictably.
Upvotes: 7 [selected_answer]<issue_comment>username_7: Frankly I think this issue (the Trolley Problem) is inherently overcomplicated, since the real world solution is likely to be pretty straightforward. Like a human driver, an AI driver will be programmed to act at all times in a generically ethical way, always choosing the course of action that does no harm, or the least harm possible.
If an AI driver encounters danger such as imminent damage to property, obviously the AI will brake hard and aim the car away from breakable objects to avoid or minimize impact. If the danger is hitting a pedestrian or car or building, it will choose to collide with the least precious or expensive object it can, to do the least harm -- placing a higher value on a human than a building or a dog.
Finally, if the choice of your car's AI driver is to run over a child or hit a wall... it will steer the car, *and you*, into the wall. That's what any good human would do. Why would a good AI act any differently?
Upvotes: 2 <issue_comment>username_8: >
> “This moral question of whom to save: 99 percent of our engineering work is to prevent these situations from happening at all.”
> —<NAME>, Mercedes-Benz
>
>
>
This quote is from an article titled [Self-Driving Mercedes-Benzes Will Prioritize Occupant Safety over Pedestrians published OCTOBER 7, 2016 BY <NAME>](http://blog.caranddriver.com/self-driving-mercedes-will-prioritize-occupant-safety-over-pedestrians/), retrieved 08 Nov 2016.
Here's an excerpt that outlines the practical, technological solution to the problem.
>
> The world’s oldest carmaker no longer sees the problem, similar to the question from 1967 known as the Trolley Problem, as unanswerable. Rather than tying itself into moral and ethical knots in a crisis, Mercedes-Benz simply intends to program its self-driving cars to save the people inside the car. Every time.
>
>
> All of Mercedes-Benz’s future Level 4 and Level 5 autonomous cars will prioritize saving the people they carry, according to <NAME>, the automaker’s manager of driver assistance systems and active safety.
>
>
>
The article also contains the following fascinating paragraph.
>
> A study released at midyear [by Science](http://science.sciencemag.org/content/352/6293/1514) magazine didn’t clear the air, either. The majority of the 1928 people surveyed thought it would be ethically better for autonomous cars to sacrifice their occupants rather than crash into pedestrians. Yet the majority also said they wouldn’t buy autonomous cars if the car prioritized pedestrian safety over their own.
>
>
>
Upvotes: 3 <issue_comment>username_9: I think that in most cases the car would default to reducing speed as a main option, rather than steering toward or away from a specific choice. As others have mentioned, having settings related to ethics is just a bad idea. What happens if two cars that are programmed with opposite ethical settings and are about to collide? The cars could potentially have a system to override the user settings and pick the most mutually beneficial solution. It's indeed an interesting concept, and one that definitely has to discussed and standardized before widespread implementation. Putting ethical decisions in a machines hands makes the resulting liability sometimes hard to picture.
Upvotes: 2 <issue_comment>username_10: How could self-driving cars make ethical decisions about who to kill?
**By managing legal liability and consumer safety.**
A car that offers the consumer safety is going to be a car that is bought by said consumers. Companies do not want to be liable for killing their customers nor do they want to sell a product that gets the user in legal predicaments. Legal liability and consumer safety are the same issue when looked at from the perspective of "cost to consumer".
>
> And here are a few dilemmas:
>
>
> * Does the algorithm recognize the difference between a human being and
> an animal?
>
>
>
If an animal/human cannot be legally avoided (and the car is in the legal right - if it is not, then something else is wrong with the AI's decision making), it likely won't. If the car can safely avoid the obstacle, the AI could reasonably be seen to make this decision, i.e., swerve to another lane on an open highway. Notice there is an emphasis on liability and driver safety.
>
> * Does the size of the human being or animal matter?
>
>
>
Only the risk factor from hitting the obstacle. Hitting a hippo might be less desirable than hitting the ditch. Hitting a dog is likely more desirable than wrecking the customer's automobile.
>
> * Does it count how many passengers it has vs. people in the front?
>
>
>
It counts the people as passengers to see if the car-pooling lane can be taken. It counts the people in front as a risk factor in case of a collision.
>
> * Does it "know" when babies/children are on board?
>
>
>
No.
>
> * Does it take into the account the age (e.g. killing the older first)?
>
>
>
No. This is simply the wrong abstraction for making a decision: how could it be weighted into choosing the right course of action to reduce the risk factor? If Option 1 is hitting a young person with a 20% chance of significant occupant damage and no legal liability, and Option 2 is hitting an old person with a 21% chance of significant occupant damage and no legal liability, then what philosopher could convince even one person of the just and equitable weights for making the decision?
Thankfully, the best decision a lot of the time is to hit the brakes to reduce speed (especially when you consider that it is often valuable to act predictably so that pedestrians and motorists can react accordingly). In the meantime, better value improvements can be made in terms of predicting when drivers will make bad decisions and when other actions (such as hitting reverse) are more beneficial than hitting the brakes. At this point, it is not worth it to even begin collecting the information to make the ethical decisions proposed by philosophers. Thus, this issue is over-hyped by sensational journalists and philosophers.
Upvotes: 3 <issue_comment>username_11: The only sensible choice is to use predictable behaviour. So in the people in front of the car scenario: First the car hits the brakes, at the same time honks the horn, and stays on course. The people then have a chance to jump out of the way leading to zero people being killed. Also with full brakes (going from 50km per hour to zero is less than 3 car lengths), an impact situation is almost not imaginable. Even if full stop cannot be reached, severe damage to the pedestrians is unlikely.
The other scenario is just crazy. So the distance has to be less than 3 car lengths, at least 1 car length is needed for the steering, and then a car crashing is an uncontrollable situation that might lead to spinning and kill all 11 persons.
Apart from saying that I don't believe there is a real-world example where such a dilemma exists: the solution in these unlikely cases is to conform with the expectations of the opposing party, to allow the other party to mitigate the situation as well.
Upvotes: 2 <issue_comment>username_12: I think there would not be a way to edit such ethics settings in a car. But hey, if cell phones can be rooted, why not cars? I imagine there'll be Linux builds in the future for specific models that will let you do whatever you want.
As for who'll make such decisions, it'll be much like the privacy issues of today. There'll be a tug-of-war between the OS providers (who'll try to set it to a minimum amount of people injured, each with its own methods), insurance companies (who'll try to make you pay more for OSes statistically shown to damage your car more easily), and car manufacturers (who'll want you to trash your car as soon as you can, so you'll buy a new one; or make cars that require a ridiculous amount of $$$ service).
Then some whistleblower will come out and expose a piece of code that chooses to kill young children over adults - because it will have a harder time distinguishing them from animals, and will take chances to save who it'll more surely recognize as humans. The OS manufacturer will get a head-slap from the public and a new consensus will be found. Whistleblowers will come out from insurance companies and car manufacturers too.
Humanity will grab a hot frying pan and burn itself and then learn to put on gloves beforehand. My advice would just make sure you won't be that hand - stay away from them for a couple of years until all the early mistakes are made.
Upvotes: 2 <issue_comment>username_13: They shouldn't. People should.
People cannot put the responsibility for ethical decisions into the hands of computers. It is our responsibility as computer scientists/AI experts to program decisions for computers to make. Will human casualties still exist from this? Of course they will: people are not perfect, and neither are programs.
There is an excellent in-depth debate on this topic [here](https://www.youtube.com/watch?v=93Xv8vJ2acI&index=46&list=WL&t=6s). I particularly like Yann LeCun's argument regarding the parallel ethical dilemma of testing potentially lethal drugs on patients. Similar to self-driving cars, both can be lethal while having good intentions of saving *more* people in the long run.
Upvotes: 3 <issue_comment>username_14: I think we need to state our own moral before thinking about what the cars moral(or ethical setting) should be. I recommend reading this paper [Autonomous Cars: In Favor of a Mandatory Ethics Setting](https://link.springer.com/article/10.1007/s11948-016-9806-x) which argues why it is in everyone's best interest that we prioritize the safety of the majority, and not just the driver(yes, in the best interest of the driver too).
You can test your own morals in many different situations, some like your examples, on [MIT's Moral Machine](http://moralmachine.mit.edu/). It's rather uncomfortable but very interesting. You can also find some analysis of people's answers on their website.
**My answers to your examples:**
>
> The car is heading toward a crowd of 10 people crossing the road, so it cannot stop in time, but it can avoid killing 10 people by hitting the wall (killing the passengers)
>
>
>
I assume hitting the brakes is not going to work, or else the dilemma is pointless. I think the car should hit the wall. Pedestrians should not suffer just because someone else is driving a car, especially when there are 10 pedestrians and at most 5 (typically 1 or 2) people in the car.
>
> Avoiding killing the rider of the motorcycle considering that the probability of survival is greater for the passenger of the car
>
>
>
I think this one is harder, especially since the motorcycle probably is not autonomous, and (in contrast to the pedestrians in the previous example) riding motorcycles is quite dangerous. Is the motorcyclist accepting the risk when entering the roads? If avoiding the motorcyclist means a probable death for the driver, then no. If not, the car probably should avoid the motorcyclist.
>
> Killing an animal on the street in favor of a human being
>
>
> Purposely changing lanes to crash into another car to avoid killing a dog
>
>
>
I think humans are more important than animals.
**I don't think there exists a correct answer for this.**
One of the really interesting things in the data collected by the Moral Machine is that there are big differences based on where in the world you're from. Western countries typically prioritize saving children over the elderly, while this is not the case for the whole world. Countries with strong governments like Finland and Japan prioritize people abiding by the law, while people from countries with weaker/corrupt governments do not care so much about that. Even in this comment section, you can find differences! I, for example, think the pedestrians should be spared in the first example,
while username_6 thinks that it is obvious that the passengers should be protected!
Upvotes: 1
|
2016/08/02
| 684
| 3,063
|
<issue_start>username_0: Which deep neural network is used in [Google's driverless cars](https://en.wikipedia.org/wiki/Google_self-driving_car) to analyze the surroundings? Is this information open to the public?<issue_comment>username_1: >
> The most common machine learning algorithms found in self-driving cars involve **object tracking**-based technologies, used to pinpoint and distinguish between different objects in order to better analyse a digital landscape.
>
>
>
Algorithms are designed to become more efficient at this by modifying internal parameters and testing these changes.
I hope that provides a general overview of the subject.
>
> Since Google's cars are in development and are proprietary, they will probably not share their specific algorithm, however you can take a look at similar technologies to learn more.
>
>
>
To find out more, take a look at an [Oxford-based initiative in self driving cars and how they work](http://ori.ox.ac.uk/how-robotcar-works/).
Upvotes: 2 <issue_comment>username_2: It will not be single DNN architecture, rather it will be a collection of different DNN architectures that are used together to make the final decision. Convolutions are using the images/videos from the camera. Other architectures use other sensory sources. These DNNs will be trained to compute the high-level features from their sensory sources and then those high-level features will probably be fed into an LSTM (or some other form of RNN) that is trained with some form of Reinforcement learning algorithm to compute the action (like slowing down, applying breaks etc).
Upvotes: 2 <issue_comment>username_3: Self-driving cars use a combination of both supervised as well as reinforcement learning.
Huge amounts of sensor data are recorded in real-time. This data can be used to train all sorts of supervised classifiers, e.g. for predicting rain or switching on lights. You can also set up a model to predict pedestrians and other cars. This is supervised learning.
Reinforcement learning can be used in situations where positive or negative signals appear while driving a car: traffic lights, blinking signals from other vehicles, and street signs in general. These signals can be used to train a reinforcement model and decide on the best actions (adjust speed, steer, ...) to get the maximal reward (or better, minimize the cost of a crash).
Upvotes: 2 <issue_comment>username_4: What you are calling 'analyzing the surroundings' is generally referred to as perception. Self-driving cars sense their surroundings using cameras, radars, lidars often combining or fusing more than one sensor to paint a picture of the environment. A lot of algorithms get used for fusing the sensor data and then deriving an understanding of the surrounding. One such example is semantic scene segmentation of camera data that tries to identify object boundaries in camera images. Typically a fully convolutional neural network is used to achieve this.
To the best of my knowledge Google does not disclose the exact algorithms anywhere.
Upvotes: 2
|
2016/08/02
| 581
| 2,450
|
<issue_start>username_0: Two common activation functions used in deep learning are the hyperbolic tangent function and the sigmoid activation function. I understand that the hyperbolic tangent is just a rescaling and translation of the sigmoid function:
$\tanh(z) = 2\sigma(2z) - 1$.
Is there a significant difference between these two activation functions, and in particular, **when is one preferable to the other**?
I realize that in some cases (like when estimating probabilities) outputs in the range of $[0,1]$ are more convenient than outputs that range from $[-1,1]$. I want to know if there are differences **other than convenience** which distinguish the two activation functions.<issue_comment>username_1: I don't think it makes sense to decide activation functions based on desired properties of the output; you can easily insert a calibration step that maps the 'neural network score' to whatever units you actually want to use (dollars, probability, etc.).
So I think preference between different activation functions mostly boils down to the [different properties](https://en.wikipedia.org/wiki/Activation_function#Comparison_of_activation_functions) of those activation functions (like whether or not they're continuously differentiable). Because there's just a linear transformation between the two, I think that means there isn't a meaningful difference between them.
Upvotes: 2 <issue_comment>username_2: **Sigmoid > Hyperbolic tangent:**
As you mentioned, the application of Sigmoid might be more convenient than hyperbolic tangent in the cases that we need a probability value at the output (as @matthew-graves says, we can fix this with a simple mapping/calibration step). In other layers, this makes no sense.
**Hyperbolic tangent > Sigmoid:**
Hyperbolic tangent has a property called "approximates identity near the origin" which means $\tanh(0) = 0$, $\tanh'(0) = 1$, and $\tanh'(z)$ is continuous around $z=0$ (as opposed to $\sigma(0)=0.5$ and $\sigma'(0)=0.25$). This feature (which also exists in many other activation functions such as *identity*, *arctan*, and *sinusoid*) lets the network learn efficiently even when its weights are initialized with small values. In other cases (eg *Sigmoid* and *ReLU*) these small initial values *can* be problematic.
**Further Reading:**
[Random Walk Initialization for Training Very Deep Feedforward Networks](https://arxiv.org/abs/1412.6558)
Upvotes: 4 [selected_answer]
|
2016/08/02
| 523
| 2,164
|
<issue_start>username_0: [Fuzzy logic](https://ai.stackexchange.com/questions/10/what-is-fuzzy-logic) is the logic where every statement can have any real truth value between 0 and 1.
How can fuzzy logic be used in creating AI? Is it useful for certain decision problems involving multiple inputs? Can you give an example of an AI that uses it?<issue_comment>username_1: My impression is that fuzzy logic has mostly declined in relevance and [probabilistic logic](https://en.wikipedia.org/wiki/Probabilistic_logic) has taken over its niche. (See the [comparison on Wikipedia](https://en.wikipedia.org/wiki/Fuzzy_logic#Comparison_to_probability).) The two are somewhat deeply related, and so it's mostly a change in perspective and language.
That is, fuzzy logic mostly applies to *labels* which have *uncertain ranges*. An object that's cool but not too cool *could* be described as either cold or warm, and fuzzy logic handles this by assigning some fractional truth value to the 'cold' and 'warm' labels and no truth to the 'hot' label.
Probabilistic logic focuses more on the probability of some fact given some observations, and is deeply focused on the uncertainty of observations. When we look at an email, we track our belief that the email is "spam" and shouldn't be shown to the user with some number, and adjust that number as we see evidence for and against it being spam.
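A sketch of how fuzzy membership might encode that cold/warm example (the membership functions and their breakpoints are invented for illustration):

```python
# Hypothetical ramp membership functions; temperature breakpoints invented.
def cold(t):   # fully cold at/below 5 degrees C, not cold at all above 15
    return max(0.0, min(1.0, (15 - t) / 10))

def warm(t):   # ramps up from 10 to 20 degrees C
    return max(0.0, min(1.0, (t - 10) / 10))

t = 12.0  # "cool but not too cool": partially cold AND partially warm
print(cold(t), warm(t))
print(min(cold(t), warm(t)))   # fuzzy AND (Zadeh): min
print(max(cold(t), warm(t)))   # fuzzy OR  (Zadeh): max
```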
Upvotes: 2 <issue_comment>username_2: A classical example of fuzzy logic in an AI is the expert system Mycin.
Fuzzy logic can be used to deal with probabilities and uncertainties.
If one looks at, for example, predicate logic, then every statement is either true or false. In reality, we don't have this mathematical certainty.
For example, let's say a physician (or expert system) sees a symptom that can be attributed to a few different diseases (say A, B and C). The physician will now attribute a higher likelihood to the possibility of the patient having any of these three diseases. There is no definite true or false statement, but there is a change of weights. This can be reflected in fuzzy logic, but not so easily in symbolic logic.
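Mycin itself represented this "change of weights" with certainty factors rather than probabilities; a sketch of its classic rule for combining two positive pieces of evidence for the same hypothesis (the full rule also covers negative and mixed-sign evidence):

```python
# MYCIN-style combination of two positive certainty factors:
# the combined belief grows, but never exceeds 1.
def combine_cf(cf1, cf2):
    return cf1 + cf2 * (1 - cf1)

# Two symptoms, each mildly supporting disease A:
print(combine_cf(0.4, 0.3))  # ~0.58, more confident than either alone
```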
Upvotes: 4 [selected_answer]
|
2016/08/02
| 798
| 3,243
|
<issue_start>username_0: In [Minds, Machines and Gödel](https://philpapers.org/rec/LUCMMA-8) (1959), <NAME> shows that any human mathematician can not be represented by an algorithmic automaton (a Turing Machine, but any computer is equivalent to it by the Church-Turing thesis), using Gödel's incompleteness theorem.
As I understand it, he states that since the computer is an algorithm and hence a formal system, Gödel's incompleteness theorem applies. But a human mathematician also has to work in a formal axiom system to prove a theorem, so wouldn't it apply there as well?<issue_comment>username_1: After he lays out his argument, he deals with some counterarguments. The following looks like the weakest one to me:
>
> We can use the same analogy also against those who, finding a formula their first machine cannot produce as being true, concede that that machine is indeed inadequate, but thereupon seek to construct a second, more adequate, machine, in which the formula can be produced as being true. This they can indeed do: but then the second machine will have a Gödelian formula all of its own, constructed by applying Gödel's procedure to the formal system which represents its (the second machine's) own, enlarged, scheme of operations. And this formula the second machine will not be able to produce as being true, while a mind will be able to see that it is true. And if now a third machine is constructed, able to do what the second machine was unable to do, exactly the same will happen: there will be yet a third formula, the Gödelian formula for the formal system corresponding to the third machine's scheme of operations, which the third machine is unable to produce as being true, while a mind will still be able to see that it is true. And so it will go on.
>
>
>
In short, by making the system more complex, it can see the inadequacy of a less complex system, but a yet more complex system can see its inadequacy. But from whence comes the claim that a mind *could* see the inadequacy in the *n*th machine? If, say, the Gödelian formula had as many components to it as a human brain had neurons, it seems suspect to claim that the human *could* evaluate that formula and identify that it is in fact a Gödelian formula, rather than a similar but not quite identical sentence.
Upvotes: 2 <issue_comment>username_2: Yes, it applies. If a statement cannot be derived in a finite number of steps, then it doesn't matter if the person trying to prove it is a human or a computer.
The mathematician has one advantage over a standard theorem proving algorithm: the mathematician can "step out of the system" (as <NAME>ter called in *Gödel, Escher, Bach*), and start thinking *about* the system. From this point of view, the mathematician may find that the derivation is impossible.
However, an AI for proving theorems could be programmed to recognize patterns in the derivation, just like our hypothetical mathematician, and start reasoning *about* the formal system to derive properties of the formal system itself.
Both the AI and the mathematician would still be bound by the laws of mathematics, and not be able to prove a theorem if it was mathematically unprovable.
Upvotes: 3 [selected_answer]
|
2016/08/02
| 1,071
| 4,724
|
<issue_start>username_0: Back in college, I had a Complexity Theory teacher who stated that artificial intelligence was a contradiction in terms. If it could be calculated mechanically, he argued, it wasn't intelligence, it was math.
This seems to be a variant of the Chinese Room argument. This argument is a metaphor, where a person is put in a room full of Chinese books. This person doesn't understand a word of Chinese but is slipped messages in Chinese under the door. The person has to use the books, which contain transformation rules, to answer these messages. The person can apply the transformation rules but does not understand what (s)he is communicating.
Does the Chinese room argument hold? Can we argue that artificial intelligence is merely clever algorithmics?<issue_comment>username_1: Depends on who you ask! <NAME>, who proposed this argument, would say "yes", but others would say it is irrelevant. The Turing Test does not stipulate that a machine must actually "understand" what it is doing, as long as it seems that way to a human. You could argue that our "thinking" is only a more sophisticated form of clever algorithmics.
Upvotes: 2 <issue_comment>username_2: There are two broad types of responses to philosophical queries like this.
The first is to make analogies and refer to intuition; one could, for example, actually calculate the necessary size for such a Chinese room, and suggest that it exists outside the realm of intuition and thus any analogies using it are suspect.
The second is to try to define the terms more precisely. If by "intelligence" we mean not "the magic thing that humans do" but "information processing," then we can say "yes, obviously the Chinese Room involves successful information processing."
I tend to prefer the second because it forces conversations towards *observable outcomes*, and puts the difficulty of defining a term like "intelligence" on the person who wants to make claims about it. If "understanding" is allowed to have an amorphous definition, then *any* system could be said to have or not have understanding. But if "understand" is itself understood in terms of observable behavior, then it becomes increasingly difficult to construct an example of a system that "is not intelligent" and yet shares all the observable consequences of intelligence.
Upvotes: 3 <issue_comment>username_3: It depends on the definition of (artificial) intelligence.
The position that Searle originally tried to refute with the Chinese room experiment was the so-called position of strong AI: An appropriately programmed computer would have a mind in the exact same sense as humans have minds.
Alan Turing tried to give a definition of artificial intelligence with the Turing Test, stating that a machine is intelligent if it can pass the test. The Turing Test is introduced [here](https://en.wikipedia.org/wiki/Turing_test). I won't explain it in detail because it is not really relevant to the answer. If you define (artificial) intelligence as Turing did, then the Chinese room experiment is not valid.
So the point of the Chinese room experiment is to show that an appropriately programmed computer is not the same as a human mind, and therefore that Turing's Test is not a good one.
Upvotes: 3 <issue_comment>username_4: First of all, for a detailed view of the argument, check out the [SEP entry on the Chinese Room](http://plato.stanford.edu/entries/chinese-room/).
I consider the CRA an indicator of your definition of intelligence. Whether the person in the room understands Chinese depends on that definition. To see why, let's sum up the three replies discussed in the SEP entry:
1. The *man* himself doesn't understand Chinese (he wouldn't be able to understand it outside the room), but the *system* man+room understands it. Accepting that reply suggests that there can exist an intelligent system whose parts aren't themselves intelligent (which can be argued of the human body itself).
2. The system doesn't understand Chinese, as it cannot interact with the world in the same way a robot or a human could (i.e. it cannot learn, is limited in the set of questions it can answer)
3. The system doesn't understand Chinese (depending on your definition of *understanding*), and you couldn't say a human performing the same feats as the Chinese room understands Chinese either.
So whether the argument, or a variant of it, holds depends on your definitions of *intelligent* and *understanding*, on how you define the system, etc. The point is that the thought experiment is a nice way to differentiate between the definitions (and many, many debates have been held about them), in order to avoid talking past each other endlessly.
Upvotes: 3
|
2016/08/02
| 1,043
| 4,639
|
<issue_start>username_0: A superintelligence is a machine that can surpass humans in all intellectual activities, and such a machine is often portrayed in science fiction as one that brings mankind to an end.
Any machine is executed using an algorithm. By the Church-Turing thesis, any algorithm that can be executed by a modern computer can be executed by a Turing Machine. However, a human can easily simulate a Turing Machine. Doesn't this mean that a machine can't surpass all intellectual activities, since we can also execute the algorithm?
This argument is most likely flawed, since my intuition tells me that superintelligence is possible. However, it is not clear to me where the flaw is. Note that this is my own argument.<issue_comment>username_1: I believe this argument is based on the fact that intelligence is a single dimension when it really isn't. Are machines and humans really on the same level if a machine can solve a complex problem in a millionth of the time a human can?
It also assumes that the Turing machine is still the best computational model for the time period you are in, which is not necessarily true for the future; it has merely been true up to this point in time.
Upvotes: 3 [selected_answer]<issue_comment>username_2: A quantum computer has a huge amount of internal state that even the machine can't get at directly. (You can only sample measurement outcomes from that state.) The amount of that state goes up exponentially with each quantum bit involved in the system. Some operations get insane speedups from quantum computing: you just put the quantum wire through a quantum gate and you've updated the entire matrix at once.
Simulating a quantum computer with a classical one would take exponentially longer for each qubit. With several dozen qubits, the machine's computing power for some tasks couldn't even be approached by a normal computer, much less a human mind.
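To put rough numbers on that blow-up, here is a quick sketch (the 16 bytes per complex amplitude is an assumption of the sketch, not a law):

```python
# A classical simulator that stores the full quantum state must keep
# 2**n complex amplitudes for n qubits. With 16 bytes per amplitude,
# the memory cost doubles with every qubit added.

def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (20, 30, 40, 50):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} qubits -> {gib:.6g} GiB")
```

At around 40 qubits the state vector alone needs ~16 TiB, which is why brute-force classical simulation hits a wall so quickly.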
Relevant: my answer on [To what extent can quantum computers help to develop Artificial Intelligence?](https://ai.stackexchange.com/a/114/75)
Note that with quantum computers, you've gone beyond the normal zeroes and ones. You then need a [quantum Turing machine](https://en.wikipedia.org/wiki/Quantum_Turing_machine), which is a generalization of the classical one.
Upvotes: 1 <issue_comment>username_3: The flaw in your argument is that "surpass" doesn't just mean being able to run all algorithms; it includes a notion of complexity, i.e. how many time steps you will take to simulate an algorithm.
How do you simulate an algorithm with a Turing machine? A [Turing machine](https://en.wikipedia.org/wiki/Turing_machine) consists of a finite state machine and an infinite tape. A Turing machine does run an algorithm, determined by its initial state and its state transition matrix, but what I think you are talking about are Universal Turing Machines (UTMs), which can read "code" (usually a description of another Turing machine) written on a "code segment" of the tape and then simulate that machine on input data written on the "data segment" of the tape.
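To make the notion of simulation concrete, here is a minimal sketch of a single-tape Turing-machine interpreter in Python (the transition-table encoding below is an illustrative choice, not a standard one):

```python
def run_tm(transitions, tape, state="inc", blank="_", max_steps=1000):
    """Run a single-tape Turing machine until it halts or the step budget runs out."""
    cells = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    else:
        raise RuntimeError("step budget exhausted")
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Example machine: increment a binary number written least-significant-bit first.
INC = {
    ("inc", "1"): ("0", "R", "inc"),   # carry propagates to the right
    ("inc", "0"): ("1", "R", "halt"),  # absorb the carry
    ("inc", "_"): ("1", "R", "halt"),  # number grew by one digit
}
print(run_tm(INC, "111"))  # 7 -> 8, i.e. "0001" LSB-first
```

A UTM is then just this interpreter itself expressed as a transition table, reading `INC` and `"111"` off its own tape.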
Turing machines can differ in the number of states in their finite state machines (and also in the alphabet they write on the tape, but any finite alphabet is easily encoded in binary, so this should not be the main source of differences among Turing machines). So, you can have UTMs with bigger state machines and UTMs with smaller state machines. The bigger UTM could possibly surpass the smaller one if they use the same encoding for the "code" part of the tape.
You can also play around with the code used to describe the TM being simulated. This code could be C++, for example, or could be a Neural network with the synapse strength written down as a matrix. Which description is better for computation depends on the problem.
An example comparison among UTMs with different state machines: consider different compilers for the same language, say C++. Both of them will first compile C++ to assembly and then run another UTM which reads and executes assembly (your physical CPU). So, a better compiler will run the same code faster.
Back to humans vs computers: humans are neural networks that run algorithms like those you would write in C++. This involves a costly and inefficient conversion of the algorithm into hand movements. A computer uses a compiler to convert C++ to assembly that it can run natively, so it's able to execute a much more efficient implementation of C++ code. Conversely, humans have a ton of neurons, and the neural code, i.e. synapse strengths, is hard to read, so current computers cannot run that code yet.
Upvotes: 1
|
2016/08/02
| 1,969
| 7,451
|
<issue_start>username_0: From Wikipedia:
>
> AIXI ['ai̯k͡siː] is a theoretical mathematical formalism for artificial general intelligence. It combines Solomonoff induction with sequential decision theory. AIXI was first proposed by <NAME> in 2000[1] and the results below are proved in Hutter's 2005 book Universal Artificial Intelligence.[2]
>
>
>
Although AIXI itself is incomputable, approximations of it are possible, such as *AIXItl*. Finding approximations to AIXI could be an objective way of working toward AI.
Is *AIXI* really a big deal in artificial *general* intelligence research? Can it be thought as a central concept for the field? If so, why don't we have more publications on this subject (or maybe we have and I'm not aware of them)?<issue_comment>username_1: "Current artificial intelligence research" is a pretty broad field. From where I sit, in a mostly CS realm, people are focused on narrow intelligence that can do economically relevant work on narrow tasks. (That is, predicting when components will fail, predicting which ads a user will click on, and so on.)
For those sorts of tools, the generality of a formalism like AIXI is a weakness instead of a strength. You don't need to take an AI that could in theory compute anything, and then slowly train it to focus on what you want, when you could just directly shape a tool that is the mirror of your task.
I'm not as familiar with AGI research itself, but my impression is that AIXI is, to some extent, the simplest idea that could work--it takes all the hard part and pushes it into computation, so it's 'just an engineering challenge.' (This is the bit about 'finding approximations to AIXI.') The question then becomes, is starting at AIXI and trying to approximate down a more or less fruitful research path than starting at something small and functional, and trying to build up?
My impression is the latter is much more common, but again, I only see a small corner of this space.
Upvotes: 3 <issue_comment>username_2: AIXI is really a conceptual framework. All the hard work of actually compressing the environment still remains.
To further discuss the question raised in <NAME>'s answer: given our current limited ability to represent complex environments, it seems to me that it makes little practical difference whether you start with AIXI defining the 'top' of the system and work down (e.g. via supposedly generalized compression methods), or start at the 'bottom' and try to solve problems in a single domain via domain-specific methods that (you hope) can subsequently be abstracted to provide cross-domain compression.
Upvotes: 2 <issue_comment>username_3: >
> Is AIXI really a big deal in artificial general intelligence research?
>
>
>
Yes, it is a great *theoretical* contribution to AGI. AFAIK, it is the most serious attempt to build a theoretical framework or foundation for AGI. Similar works are Schmidhuber's [Gödel Machines](ftp://ftp.idsia.ch/pub/juergen/gmAGI.pdf) and [SOAR architecture](https://soar.eecs.umich.edu/).
AIXI is an abstract and non-[anthropomorphic](https://en.wikipedia.org/wiki/Anthropomorphism) framework for AGI which builds on top of the reinforcement learning field, without a few usual assumptions (e.g., without the Markov and [ergodicity](https://arxiv.org/abs/1705.10557) assumptions, which guarantees that the agent can easily recover from any mistakes it made in the past). Even though some optimality properties of AIXI have been proved, it is (Turing) [uncomputable](https://en.wikipedia.org/wiki/Computable_function) (it cannot be run on a computer), and so it is of very limited practical usefulness. Nonetheless, in the Hutter's book [Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability](http://hutter1.net/ai/uaibook.htm) (2005), where several properties of AIXI are rigorously proved, a computable but intractable version of AIXI, AIXItl, is also described. Furthermore, in the paper [A Monte Carlo AIXI Approximation](https://arxiv.org/abs/0909.0801) (2009), by <NAME> et al., a computable and [tractable](https://en.wikipedia.org/wiki/Computational_complexity_theory#tractable_problem) approximation of AIXI is introduced. So, there have been some attempts to make AIXI practically useful.
The article [What is AIXI? — An Introduction to General Reinforcement Learning](https://jan.leike.name/AIXI.html) (2015), by <NAME>, which is one of the contributors to the development and evolution of the AIXI framework, gives a gentle introduction to the AIXI agent. See also [The AIXI Architecture](https://plato.stanford.edu/entries/artificial-intelligence/aixi.html) at the Stanford Encyclopedia of Philosophy for a possibly gentler introduction to AIXI.
>
> Can it be thought of as a central concept for the field?
>
>
>
Yes, the introduction of AIXI and related research has contributed to the evolution of the AGI field. There have been several discussions and published papers, after its introduction in 2000 by Hutter in the paper [A Theory of Universal Artificial Intelligence based on Algorithmic Complexity](https://arxiv.org/abs/cs/0004001).
See e.g. section 7, "Examples of Superintelligences", of the paper [Artificial General Intelligence and the Human Mental Model](https://intelligence.org/files/AGI-HMM.pdf) (2012), by <NAME> and <NAME>. See also <https://wiki.lesswrong.com/wiki/AIXI>, which contains a discussion of a few problems related to AIXI that need to be solved or possibly avoided in future AGI frameworks. Furthermore, see also [this](https://www.lesswrong.com/posts/TtYuY2QBug3dn2wuo/the-problem-with-aixi) article and [this](http://alife.co.uk/essays/on_aixi/) one.
>
> If so, why don't we have more publications on this subject (or maybe we have and I'm not aware of them)?
>
>
>
There have been several publications, mainly by <NAME> and associated researchers. You can see <NAME>'s publications on the following web page: <http://www.hutter1.net/official/publ.htm>.
If you are interested in contributing to this theory, there are several ways. If you are mathematically well educated, you can attempt to solve some of the problems described [here](http://www.hutter1.net/ai/) (which are also mentioned in the Hutter's 2005 book mentioned above). Furthermore, you can also contribute to new approximations or improvements of existing approximations of the AIXI agent. Finally, you can build your new AGI framework by avoiding the problems associated with the AIXI framework. See also [projects promoted by Hutter](https://cecs.anu.edu.au/people/marcus-hutter). It may be a good idea to also take into account e.g. Gödel Machines and related work, before attempting to introduce a new framework (provided you are capable of it).
I think that this theory has not attracted more people probably because it is highly technical and mathematical (so it is not very easy to understand unless you have a very solid background in reinforcement learning, probability theory, etc.). I also think that most people (in the AI community) are not interested in theories, but they are mainly guided by practical and useful results.
Upvotes: 3 <issue_comment>username_4: AIXI is important: the reinforcement learning we already see is a smaller, classical version of the full intractable model, which would require quantum computation to come to fruition.
Upvotes: -1
|
2016/08/02
| 424
| 1,536
|
<issue_start>username_0: It has been proved that a recurrent neural net with rational weights can be a super-Turing machine. Can we achieve this in practice?<issue_comment>username_1: I presume the proof the OP is referring to can be found in [this monograph](http://link.springer.com/book/10.1007%2F978-1-4612-0707-8) by <NAME>?
In his article ['The Myth of Hypercomputation'](http://www1.maths.leeds.ac.uk/~pmt6sbc/docs/davis.myth.pdf), the eminent computer scientist <NAME> explains (p8-9) that there is nothing 'super Turing' about this formulation.
EDIT: It's looking like the claim about *rational* weights being super-Turing is made in [this](http://www.ncbi.nlm.nih.gov/pubmed/25354762) more recent paper by Siegelmann, which introduces an additional assumption of *plasticity*, i.e. that weights can be dynamically updated.
Upvotes: 2 <issue_comment>username_2: You mean real numbered weights (specifically, ***ir***rational). This would require a machine that has unlimited precision over irrational values. I've seen machine parts that have many qualities. I've never seen one that has unlimited qualities. QM may give us some magical transistors that can hold an unlimited number of different values - or by deferring computation into the future and then teleporting the results back into the past (our present). Outside of that, for classical systems, you'd need a analog device that can output irrational values with unlimited precision. I don't think we've discovered any devices that can do that.
Upvotes: 1
|
2016/08/02
| 548
| 2,305
|
<issue_start>username_0: Given the proven [halting problem](https://en.wikipedia.org/wiki/Halting_problem) for [Turing machines](https://en.wikipedia.org/wiki/Turing_machine), can we infer limits on the ability of strong Artificial Intelligence?<issue_comment>username_1: Does the halting problem imply any limits on human cognition?
Yes, absolutely--that there are pieces of code a human could look at and not be sure whether or not it will halt in finite time. (Certainly there are pieces of code that a human can look at and say "yes" or "no" definitely, but we're talking about the ones that are actually quite difficult to analyze.)
The halting problem means that there are types of code analysis that no computer could do, because it's mathematically impossible. But the realm of *possibility* is still large enough to allow strong artificial intelligence (in the sense of code that can understand itself well enough to improve itself).
Upvotes: 5 [selected_answer]<issue_comment>username_2: The halting problem is an example of a general phenomenon known as [Undecidability](https://en.wikipedia.org/wiki/Undecidable_problem), which shows that there are problems no Turing machine can solve in finite time. Let's consider the generalization that it is undecidable whether a Turing Machine satisfies some non-trivial property P, called [Rice's theorem](https://en.wikipedia.org/wiki/Rice%27s_theorem).
First note that the halting problem applies only if the Turing machine takes arbitrarily long input. If the input is bounded, it is possible to enumerate all possible cases and the problem is no longer undecidable. It might still be inefficient to calculate it, but then we are turning to the complexity theory, which should be a separate question.
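To illustrate the bounded case: for a deterministic system with finitely many configurations, halting is decided by running it and watching for a repeated configuration, since a repeat in a deterministic system implies an infinite loop. The "machine" below is a made-up counter, purely for illustration:

```python
def halts(step, start):
    """Decide halting for a deterministic system with finitely many
    configurations: a repeated configuration implies an infinite loop."""
    seen = set()
    config = start
    while config is not None:          # None signals a halt
        if config in seen:
            return False               # loop detected: never halts
        seen.add(config)
        config = step(config)
    return True

# Toy machine: a counter that adds 2 modulo 6 and halts only on reaching 0.
def step_mod6(n):
    n = (n + 2) % 6
    return None if n == 0 else n

print(halts(step_mod6, 2))  # 2 -> 4 -> 0: halts
print(halts(step_mod6, 1))  # 1 -> 3 -> 5 -> 1 -> ...: never halts
```

This is exactly the enumeration argument: with a finite configuration space, the loop terminates after at most |configurations| + 1 steps, so the decider itself always halts.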
Rice's theorem implies that an intelligence (a human) cannot be able to determine whether another intelligence (such as an AGI) possesses a certain property, such as being [friendly](https://en.wikipedia.org/wiki/Friendly_artificial_intelligence). This does not mean that we cannot design a Friendly AGI, but it does mean that we cannot check whether an arbitrary AGI is friendly. So, while we can possibly create an AI which is guaranteed to be friendly, we also need to ensure that IT cannot create another AI which is unfriendly.
Upvotes: 4
|
2016/08/02
| 618
| 2,125
|
<issue_start>username_0: Consider these neural style algorithms which produce some art work:
* [Neural Doodle](https://github.com/alexjc/neural-doodle)
* [neural-style](https://github.com/jcjohnson/neural-style)
Why is generating such images so slow and why does it take huge amounts of memory? Isn't there any method of optimizing the algorithm?
What is the mechanism or technical limitation behind this? Why can't we have real-time processing?
Here are few user comments ([How ANYONE can create Deep Style images](https://www.reddit.com/r/deepdream/comments/3jwl76/how_anyone_can_create_deep_style_images/)):
* >
> Anything above 640x480 and we're talking days of heavy crunching and an insane amount of ram.
>
>
>
* >
> I tried doing a 1024pixel image and it still crashed with 14gigs memory, and 26gigs swap. So most of the VM space is just the swapfile. Plus it takes several hours potentially days cpu rendering this.
>
>
>
* >
> I tried 1024x768 and with 16gig ram and 20+ gig swap it was still dying from lack of memory.
>
>
>
* >
> Having a memory issue, though. I'm using the "g2.8xlarge" instance type.
>
>
><issue_comment>username_1: It is a labor-intensive process, but that does sound excessive. If you have a g2.8xlarge, make sure you are using the GPU flags for neural-style, which will cut your render time by an order of magnitude.
That having been said, it is building a rather large network (depending on your parameters), and a 1024x768 image is a lot of input to work with. It will take time, but shouldn't take more than a couple hours with the GPU flag enabled correctly.
Upvotes: 2 <issue_comment>username_2: Real-time style transfer and neural doodling are very much possible, and this is an active topic that users are working to improve. The basic idea is to do only a feed-forward pass at test time, and to train with appropriate loss functions at training time.
[Perceptual Losses for Real-Time Style Transfer and Super-Resolution](http://arxiv.org/pdf/1603.08155v1.pdf) is a good starting point for understanding a methodology for this purpose.
Upvotes: 3 [selected_answer]
|
2016/08/02
| 451
| 1,868
|
<issue_start>username_0: Can autoencoders be used for supervised learning *without adding an output layer*? Can we simply feed it with a concatenated input-output vector for training, and reconstruct the output part from the input part when doing inference? The output part would be treated as missing values during inference and some imputation would be applied.<issue_comment>username_1: I recall reading papers about such systems, if I understand you correctly, but can't recall the titles at the moment.
The idea was to use character-based generative RNNs, train them on sequences encoded like "datadatadatadata|answer", and then when feeding in "otherdatadata|" then it would continue to generate some kind of expected answer.
But, as far as I recall, that was just a neat illustration since if you have the data to do something supervised, then you'd get better results by conventional methods.
Upvotes: 1 <issue_comment>username_2: One such paper I know of and which I implemented is [Semi-Supervised Learning using Ladder Networks](https://arxiv.org/abs/1507.02672) . I quote here their description of the model:
>
> Our approach follows Valpola (2015), who proposed a Ladder network where the auxiliary task is
> to denoise representations at every level of the model. The model structure is an autoencoder with
> skip connections from the encoder to decoder and the learning task is similar to that in denoising
> autoencoders but applied to every layer, not just the inputs. The skip connections relieve the pressure
> to represent details in the higher layers of the model because, through the skip connections, the decoder can recover any details discarded by the encoder.
>
>
>
For further explanations on the architecture check [Deconstructing the Ladder Network Architecture](https://arxiv.org/abs/1511.06430) by <NAME>.
Upvotes: 2
|
2016/08/02
| 1,196
| 4,508
|
<issue_start>username_0: I'm aware that neural networks are probably not designed for this; however, asking hypothetically, is it possible to train a deep neural network (or similar) to solve math equations?
So given the 3 inputs: 1st number, operator sign represented by the number (1 - `+`, 2 - `-`, 3 - `/`, 4 - `*`, and so on), and the 2nd number, then after training the network should give me the valid results.
Example 1 (`2+2`):
* Input 1: `2`; Input 2: `1` (`+`); Input 3: `2`; Expected output: `4`
* Input 1: `10`; Input 2: `2` (`-`); Input 3: `10`; Expected output: `0`
* Input 1: `5`; Input 2: `4` (`*`); Input 3: `5`; Expected output: `25`
* and so on
The above can be extended to more sophisticated examples.
Is that possible? If so, what kind of network can learn/achieve that?<issue_comment>username_1: Not really.
Neural networks are good for determining non-linear relationships between inputs when there are hidden variables. In the examples above, the relationships are simple and deterministic (addition and subtraction are even linear), and there are no hidden variables. But even if they were non-linear, a traditional ANN design would not be well suited to accomplish this.
By carefully constructing the layers and tightly supervising the training, you could get a network to consistently produce the output 4.01, say, for the inputs: 2, 1 (+), and 2, but this is not only wrong, it's an inherently unreliable application of the technology.
Upvotes: 3 <issue_comment>username_2: Yes, it has been done!
However, the applications aren't to replace calculators or anything like that. The lab I'm associated with develops neural network models of equational reasoning to better understand how humans might solve these problems. This is a part of the field known as [Mathematical Cognition](https://web.archive.org/web/20201111232708/http://archive.is/mt4RO). Unfortunately, our website isn't terribly informative, but here's a [link](https://web.archive.org/web/20200331161444/http://web.stanford.edu/%7Ekmickey/pdf/MickeyMcClelland2014.pdf) to an example of such work.
Apart from that, recent work on extending neural networks to include external memory stores (e.g. Neural Turing Machines) was used to solve math problems as a good proof of concept. This is because many arithmetic problems involve long procedures with stored intermediate results. See the sections of [this paper](http://arxiv.org/pdf/1511.08228.pdf) on long binary addition and multiplication.
Upvotes: 6 [selected_answer]<issue_comment>username_3: Yes - it would seem that it is now possible to achieve more than is required by the example you've given: this paper describes a DL solution to a considerably harder problem - [generating the source code for a program described in natural language](http://arxiv.org/pdf/1510.07211.pdf).
Both of these can be described as regression problems (i.e. the goal is to minimize some loss function on the validation set), but the search space in the natural language case is much bigger.
Upvotes: 2 <issue_comment>username_4: 1. It is possible! In fact, it's an example of the popular deep learning framework Keras. Check out [this link to see the source code](https://github.com/keras-team/keras-io/blob/master/examples/nlp/addition_rnn.py).
2. This particular example uses a recurrent neural network (RNN) to process the problem as a sequence of characters, producing a sequence of characters which form the answer. Note that this approach is obviously different from how humans tend to think about solving simple addition problems, and probably isn't how you would ever want a computer to solve such a problem. Mostly this is an example of sequence to sequence learning using Keras. When handling sequential or time-series inputs, RNNs are a popular choice.
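As an even smaller illustration of the regression framing (much simpler than the character-level RNN, and separate from the Keras example), a single linear neuron trained with plain SGD can recover the `+` operation exactly; all hyperparameters here are made up for the sketch:

```python
import random

# Fit y = w1*x1 + w2*x2 + b to noise-free addition examples with
# stochastic gradient descent; the true solution is w1 = w2 = 1, b = 0.
random.seed(0)
w1 = w2 = b = 0.0
lr = 0.01
for _ in range(5000):
    x1, x2 = random.uniform(-10, 10), random.uniform(-10, 10)
    y = x1 + x2
    pred = w1 * x1 + w2 * x2 + b
    err = pred - y
    w1 -= lr * err * x1
    w2 -= lr * err * x2
    b -= lr * err

print(round(w1, 3), round(w2, 3), round(b, 3))  # close to 1 1 0
```

After training, the weights land very close to w1 = w2 = 1, b = 0, so the model generalizes to sums it never saw - but only because addition happens to match the model's linear form.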
Upvotes: 3 <issue_comment>username_5: There's the fairly well established field of [automated theorem proving](https://en.wikipedia.org/wiki/Automated_theorem_proving). This most likely encompasses solving equations, but doesn't necessarily involve AI. [This post from the Cross Validated stackexchange](https://stats.stackexchange.com/questions/262390/application-of-machine-learning-to-the-automated-theorem-proving) has some more information on the topic.
Upvotes: 2 <issue_comment>username_6: Yes, it is possible: you would be training a network with 3 inputs and 1 output to learn a model such as $Y = X\_1 + X\_2 + X\_3$, although I don't see what the use of doing that would be. Furthermore, how do you plan to generalize to adding four or five terms?
Upvotes: 0
|
2016/08/02
| 730
| 2,932
|
<issue_start>username_0: From Wikipedia:
>
> A mirror neuron is a neuron that fires both when an animal acts and when the animal observes the same action performed by another.
>
>
>
Mirror neurons are related to imitation learning, a very useful feature that is missing in current real-world A.I. implementations. Instead of learning from input-output examples (supervised learning) or from rewards (reinforcement learning), an agent with mirror neurons would be able to learn by simply observing other agents, translating their movements to its own coordinate system. What do we have on this subject regarding computational models?<issue_comment>username_1: [This article](http://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613(04)00243-8?_returnURL=http%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS1364661304002438%3Fshowall%3Dtrue) gives a description of mirror neurons in terms of Hebbian learning, a mechanism that has been widely used in AI. I don't know whether the formulation given in the article has ever actually been implemented computationally.
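For reference, the Hebbian rule the article builds on is simple to state computationally: a weight grows in proportion to the co-activation of its pre- and post-synaptic units. A minimal sketch (not a model of mirror neurons per se, and not the article's formulation):

```python
# One-layer Hebbian update: the weight between input x_i and output y
# is strengthened in proportion to their co-activation (eta = learning rate).

def hebbian_update(weights, x, y, eta=0.1):
    return [w + eta * xi * y for w, xi in zip(weights, x)]

w = [0.0, 0.0, 0.0]
pattern = [1.0, 0.0, 1.0]
for _ in range(5):
    # Output is the weighted sum plus a unit of external drive.
    y = sum(wi * xi for wi, xi in zip(w, pattern)) + 1.0
    w = hebbian_update(w, pattern, y)
print([round(wi, 3) for wi in w])  # [0.744, 0.0, 0.744]
```

After a few presentations, weight grows only on the active inputs - "neurons that fire together wire together" - while the silent input's weight stays at exactly zero.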
Upvotes: 3 <issue_comment>username_2: We actually do have many things along that line; motion capture for 3-D movies, for instance, comes to mind almost immediately. The problem, if I think about it, is less about observing another actor - computers are already relatively good at that, given the amount of image-recognition software we have - and more about understanding whether an action yielded a good outcome on the whole, which is something computers cannot do, as it is not a single-node network problem. For example, we've already programmed a computer to understand human language (Watson, arguably), but even Watson didn't understand the concept that saying "f\*\*\*" is bad. (Look that up, it's a funny side story.)
But the point is that learning algorithms are not true learning in a sense, as a computer currently has no notion of "a good outcome"; hence, at this stage, observational learning is very much limited to "monkey see, monkey do".
Perhaps the closest thing I have ever read about with this was firefighting search and rescue bots that were on a network and would broadcast to each other when one of them had been destroyed as the bots would know the area was something that they had to avoid.
Otherwise, I think this is the problem with observational learning. A person can observe that punching someone usually will get you hit back, a computer will observe and parrot the action, good or bad.
Upvotes: 0 <issue_comment>username_3: Whether "I take the ball" or "he takes the ball", all stored instances of 'taking' and 'ball' will be weakly activated and 'taking [the] ball' will be strongly activated. Doesn't this qualify as 'mirroring'? If you also know that "I have an arm" and "he has an arm", etc., then when "he takes some blocks", it isn't too hard to think that "I could take some blocks."
Upvotes: 0
|
2016/08/03
| 409
| 1,735
|
<issue_start>username_0: Is it possible that, at some time in the future, AIs will be able to initiatively develop themselves, rather than passively being developed by humanity?<issue_comment>username_1: This is known as the [intelligence explosion](https://en.wikipedia.org/wiki/Intelligence_explosion) hypothesis or [recursive self-improvement](https://wiki.lesswrong.com/wiki/Recursive_self-improvement).
Upvotes: 2 <issue_comment>username_2: Humans might, at some point in the future, create a so-called ultraintelligent machine: a machine that can surpass humans in all intellectual activities. This would be the last invention man would ever need to make, since this machine would be better at inventing machines than humans are (inventing being an intellectual activity itself). And since such a machine can create machines better than itself, those machines can in turn create still better machines, et cetera. This is known as the [Intelligence explosion](https://en.wikipedia.org/wiki/Intelligence_explosion), and it is also called recursive self-improvement.
The existence, let alone the development, of an ultraintelligent machine is still hypothetical. We are nowhere close to creating one.
Upvotes: 2 <issue_comment>username_3: It depends what you mean by 'develop themselves' - in a rather limited sense, an online machine learning approach such as Genetic Algorithms 'develops itself' to provide better solutions.
There is already a theoretical model that represents the *ultimate* concept of development: <NAME>'s [Goedel Machine](http://arxiv.org/abs/cs/0309048) is constructed so as to self-modify when it can prove that this modification is optimal.
Upvotes: 0
|
2016/08/03
| 616
| 2,409
|
<issue_start>username_0: I have been wondering since a while ago about the [theory of multiple intelligences](https://en.wikipedia.org/wiki/Theory_of_multiple_intelligences) and how they could fit in the field of Artificial Intelligence as a whole.
We hear from time to time about [<NAME>](https://www.theguardian.com/artanddesign/jonathanjonesblog/2016/feb/08/leonardo-da-vinci-mechanics-of-genius-science-museum-london) being a genius or [Bach's musical intelligence](https://www.youtube.com/watch?v=xUHQ2ybTejU). These persons are commonly said to be (or to have been) *more intelligent*. But the theory of multiple intelligences covers cooking, dancing, and chatting as well, i.e. *coping with everyday tasks* (at least, that's my interpretation).
**Are there some approaches on incorporating multiple intelligences into AI?**
Here's a related question: [How could emotional intelligence be implemented?](https://ai.stackexchange.com/q/26/2444).<issue_comment>username_1: If anything, multiple intelligences are much more obvious in AI than in other fields, because we haven't yet unlocked how to do transfer between domains.
As an example, AlphaGo is very, very good at playing Go, but it's got basically nothing in the way of bodily-kinesthetic intelligence. But other teams have built software to control robots that does have bodily-kinesthetic intelligence, while not being good at the tasks that AlphaGo excels at.
This sort of modular intelligence is typically referred to as 'narrow AI,' whereas we use the term 'general AI' (or AGI, for Artificial General Intelligence) to refer to intelligence that we've built that can do roughly as many different kinds of things as people can do.
Upvotes: 2 <issue_comment>username_2: One possibility is a [blackboard architecture](https://en.wikipedia.org/wiki/Blackboard_system). Envision each different "kind" of intelligence as a discrete agent, and let the agents collaborate using the blackboard model. Now you have an AI with multiple intelligences.
This is something I've actually been experimenting with, and while I don't have any particularly impressive results to share or anything, I hold a strong belief that an approach that is at least somewhat like this will be crucial to developing an AGI. And that is rooted in my belief that the human mind does have "multiple intelligences" and that they collaborate something like this.
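A minimal sketch of that idea: each "intelligence" is an agent that reads from and writes to a shared blackboard, and a control loop runs the agents until no one can contribute anything new. The agents and the toy task here are invented for illustration:

```python
class Blackboard:
    """Shared workspace that specialised agents read from and write to."""
    def __init__(self):
        self.facts = {}

def language_agent(bb):
    # Hypothetical "linguistic intelligence": splits raw input into words.
    if "input" in bb.facts and "words" not in bb.facts:
        bb.facts["words"] = bb.facts["input"].split()

def math_agent(bb):
    # Hypothetical "logical-mathematical intelligence": sums any numbers found.
    if "words" in bb.facts and "total" not in bb.facts:
        bb.facts["total"] = sum(int(w) for w in bb.facts["words"] if w.isdigit())

bb = Blackboard()
bb.facts["input"] = "add 3 and 4"
agents = [language_agent, math_agent]

# Control loop: let agents contribute until the blackboard stops changing.
changed = True
while changed:
    before = dict(bb.facts)
    for agent in agents:
        agent(bb)
    changed = bb.facts != before
```

Note that neither agent knows about the other; they cooperate only through the facts posted on the blackboard.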
Upvotes: 0
|
2016/08/03
| 342
| 1,337
|
<issue_start>username_0: How do I decide the optimal number of layers for a neural network (feedforward or recurrent)?<issue_comment>username_1: There is a technique called `Pruning` in neural networks, which is used just for this same purpose.
Pruning removes the least important weights from a trained network, and can thereby shrink whole units or even hidden layers. The process is very similar to the pruning of decision trees, and is done as follows:
* Train a large, densely connected, network with a standard training
algorithm
* Examine the trained network to assess the relative importance of the
weights
* Remove the least important weight(s)
* Retrain the pruned network
* Repeat steps 2-4 until satisfied
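As a rough illustration of steps 2-3, here is a minimal magnitude-based weight-pruning sketch in numpy. The matrix and the 50% pruning fraction are arbitrary examples; a real workflow would retrain after pruning, as in step 4:

```python
import numpy as np

def prune_smallest(weights, fraction):
    """Steps 2-3: zero out the given fraction of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Toy weight matrix; in practice this comes from a trained network (step 1).
W = np.array([[0.9, -0.05, 0.4],
              [0.01, -0.8, 0.03]])
W_pruned = prune_smallest(W, fraction=0.5)
```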
However, there are [several optimized methods](https://arxiv.org/abs/1510.00149) for pruning neural nets, and it is also a [very active area of research](http://www.idiap.ch/ftp/reports/1997/rr97-03.pdf).
Upvotes: 4 [selected_answer]<issue_comment>username_2: You can take a look at Bayesian hyperparameter optimization as a general method of optimizing loss (or anything else) as a function of the hyperparameters. But note that, in general, the deeper your network, the better, so optimizing loss as a function of the number of layers isn't a very fun thing to do.
Grid search and a bit of common sense (as learnt by seeing many examples) should be your best bet.
Upvotes: 2
|
2016/08/03
| 388
| 1,593
|
<issue_start>username_0: Have there been proposed extensions to go beyond a Turing machine that solve the halting problem and if so, would those proposed extensions have value to advance strong Artificial Intelligence? For example, does quantum computing go beyond the definition of a Turing machine and resolve the halting problem, and does that help in creating strong AI?
|
2016/08/03
| 419
| 1,357
|
<issue_start>username_0: What was the first AI that was able to carry on a conversation, with real responses, such as in the famous ['I am not a robot. I am a unicorn' case?](https://www.youtube.com/watch?v=WnzlbyTZsQY)
A 'real response' constitutes a sort-of personalized answer to a specific input by a user.<issue_comment>username_1: In 1986, the first PC therapist program was written by <NAME>. This program won the first Loebner Prize in 1991, and then again in 1992, 1993 and 1995.
In 1981 or 1982, work on Jabberwacky began; it is the foundation of the current Cleverbot. Jabberwacky appeared on the internet in 1997, reaching third place for the Loebner Prize in 2003, second place in 2004, and winning in 2005 and 2006. In 2008, Cleverbot was launched as a variant of Jabberwacky.
I'm not sure these are really the earliest, but that also depends on what you want earliest (programming started, first conversation, first decent conversation, etc.). Also, it depends on what you call a "real response".
Upvotes: 3 <issue_comment>username_2: The well-known 'Eliza' program (Weizenbaum, ~1964) would appear to be the first.
Eliza was designed to model the emotionally-neutral response of a psychotherapist and this masks some of the weaknesses of its limited underlying pattern-matching mechanisms.
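Those pattern-matching mechanisms boil down to matching templates against the input and reflecting captured fragments back. A minimal sketch in that spirit (the rules below are invented for illustration, not Weizenbaum's originals):

```python
import re

# Match a template, reflect the captured fragment back as a question.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*)mother(.*)", "Tell me more about your family."),
]
DEFAULT = "Please, go on."

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

The emotionally-neutral fallback ("Please, go on.") is exactly the trick that makes the limited rule set pass as a therapist's reply.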
Upvotes: 3 [selected_answer]
|
2016/08/03
| 628
| 2,402
|
<issue_start>username_0: This question stems from quite a few "informal" sources. Movies like *2001, A Space Odyssey* and *Ex Machina*; books like *Destination Void* (Frank Herbert), and others suggest that general intelligence *wants* to survive, and even learn the importance for it.
There may be several arguments for survival. What would be the most prominent?<issue_comment>username_1: The concept of 'survival instinct' probably falls in the category of what Marvin Minsky would call a 'suitcase word', i.e. it packages together a number of related phenomena into what at first appears to be a singular notion.
So it's quite possible that we can construct mechanisms that have the appearance of some kind of 'hard-coded' survival instinct, without that ever featuring as an explicit rule(s) in the design.
See the beautiful little book ['Vehicles'](https://mitpress.mit.edu/books/vehicles) by the neuroanatomist Valentino Braitenberg for a compelling narrative of how such 'top down' concepts as 'survival instinct' might evolve 'from the bottom up'.
Also, trying to ensure that intelligent artefacts place too high a priority on their survival might easily lead to a [Killbot Hellscape](https://xkcd.com/1613/).
Upvotes: 3 <issue_comment>username_2: <NAME> wrote a paper called [Basic AI Drives](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf) that steps through why we would expect an AI with narrow goals to find some basic, general concepts as instrumentally useful for their narrow goals.
For example, an AI designed to maximize stock market returns but whose design is silent on the importance of continuing to survive would realize that its continued survival is a key component of maximizing stock market returns, and thus take actions to keep itself operational.
In general, we should be skeptical of 'anthropomorphizing' AI and other code, but it seems like there *are* reasons to expect this beyond "well, humans behave this way, so it must be how all intelligence behaves."
Upvotes: 3 <issue_comment>username_3: I will answer by quoting the book *The Myth of Sisyphus*:
>
> There is only one really serious philosophical question, and that is suicide.
>
>
>
So, probably, we need *some* degree of a survival instinct if we don't want our AGIs to "terminate themselves" (whatever that means), whenever they get existential.
Upvotes: 2
|
2016/08/03
| 533
| 1,893
|
<issue_start>username_0: Identifying sarcasm is considered one of the most difficult open-ended problems in the domain of ML and NLP/NLU.
So, was there any considerable research done on that front? If yes, then what is the accuracy like? Please, also, explain the NLP model briefly.<issue_comment>username_1: The following survey article by researchers from IIT Bombay summarizes recent advances in sarcasm detection: [Arxiv link](https://arxiv.org/abs/1602.03426).
In reference to your question, I do not think it is considered either extraordinarily difficult or open-ended. While it does introduce ambiguity that computers cannot yet handle, Humans are easily able to understand sarcasm, and are thus able to label datasets for sarcasm detection.
Upvotes: 4 [selected_answer]<issue_comment>username_2: There has been recent work in the same domain where neural networks (CNNs, to be accurate) are used for this purpose. Some information about the research:
>
> To learn that context, the paper describes a method by which the
> neural network finds the user’s “embeddings” — i.e. contextual cues
> like the content of previous tweets, related interests and accounts,
> and so on. It uses these various factors to plot the user with others,
> and (ideally) finds that they form relatively well-defined groups.
>
>
>
So, the paper uses CNNs, word and user embeddings for detecting sarcasm in text. There is also a [Techcrunch article](https://techcrunch.com/2016/08/04/this-neural-network-tries-to-tell-if-youre-being-sarcastic-online/) on that.
The paper uses sentiment of the tweet and compares with that of the other similar tweets:
>
> If the sentiment of the tweet seems to disagree with the bulk of what
> is expressed by similar users, there’s a good chance sarcasm is being
> employed.
>
>
>
[Link to the paper](http://arxiv.org/pdf/1607.00976v2.pdf)
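The disagreement idea in the quote above can be caricatured in a few lines. This is not the paper's CNN-plus-embeddings model, just its core intuition, with invented sentiment values in [-1, 1]:

```python
# A real system would get these scores from a trained sentiment model
# and learned user embeddings; the numbers here are invented.
def sarcasm_score(tweet_sentiment, similar_user_sentiments):
    """Higher score = the tweet disagrees more with the bulk of similar users."""
    group_mean = sum(similar_user_sentiments) / len(similar_user_sentiments)
    return abs(tweet_sentiment - group_mean)

group = [0.8, 0.7, 0.9, 0.6]            # similar users are positive about the topic
score_sincere = sarcasm_score(0.7, group)
score_sarcastic = sarcasm_score(-0.9, group)
```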
Upvotes: 2
|
2016/08/03
| 560
| 2,198
|
<issue_start>username_0: I'd like to know more about [implementing emotional intelligence](https://ai.stackexchange.com/q/26/8).
Given I'm implementing a chatbot and I'd like to introduce the levels of curiosity to measure whether user text input is interesting or not.
A high level would mean the bot asks more questions and follows the topic. A lower level of curiosity makes the bot ask no questions and change topics.
Less interesting content could mean the bot doesn't see any opportunity to learn something new or it doesn't understand the topic or doesn't want to talk about it, because of its low quality.
How this possibly can be achieved? Are there any examples?<issue_comment>username_1: It's possible to implement a form of curiosity-driven behavior without requiring full 'emotional intelligence'. One elementary strategy would be to define some form of similarity measure on inputs.
More generally, <NAME> has pioneered work on 'Artificial Curiosity/Creativity' and 'Intrinsic Motivation' and has written a number of papers on the subject:
* [Artificial Curiosity](http://people.idsia.ch/~juergen/curioussingapore/curioussingapore.html)
* [Intrinsic Motivation](http://people.idsia.ch/~juergen/ieeecreative.pdf)
Here is a [video](https://www.youtube.com/watch?v=Ipomu0MLFaI) of a nice associated presentation.
Upvotes: 4 [selected_answer]<issue_comment>username_2: I think "curiosity" in AI would signify a *'desire to search.'* It's an *interest*, that is *experienced* by some agent, in making something known that was previously unknown.
So to define how much curiosity a chat bot *should* have, we should:
1. Specify what kinds of information the agent *prefers* knowing.
2. Measure how much information is *unknown* about those preferred subjects. ('what is the user's name?' or 'What does the user need help with?')
3. Measure the difficulty in making each unknown fact known.
4. Sort unknown facts by difficulty of finding the answer.
5. Set the "desire to search" on the highest ranking unknown fact.
While simplistic, those steps would constitute a state of affairs sufficient to describe "curiosity," in my opinion.
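The ranking in steps 4-5 can be sketched directly. The facts and difficulty scores below are invented for illustration, and this sketch ranks easiest-first (the description is ambiguous about whether "highest ranking" means easiest or hardest to answer):

```python
# Hypothetical unknown facts with invented difficulty scores in [0, 1].
unknown_facts = [
    {"question": "Why is the user asking about this topic?", "difficulty": 0.9},
    {"question": "What is the user's name?", "difficulty": 0.2},
    {"question": "What does the user need help with?", "difficulty": 0.5},
]

def next_question(facts):
    # Steps 4-5: rank unknown facts by difficulty and pick the top one
    # (easiest-first here; reverse=True would prefer the hardest instead).
    ranked = sorted(facts, key=lambda f: f["difficulty"])
    return ranked[0]["question"]

target = next_question(unknown_facts)
```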
Upvotes: 2
|
2016/08/03
| 1,801
| 5,880
|
<issue_start>username_0: I would like to learn more about whether it is possible and how to write a program that decompiles executable binary (an object file) to the C source. I'm not asking exactly 'how', but rather how this can be achieved.
Given the following `hello.c` file (as example):
```
#include <stdio.h>
int main() {
printf("Hello World!");
}
```
Then after compilation (`gcc hello.c`) I've got the binary file like:
```
$ hexdump -C a.out | head
00000000 cf fa ed fe 07 00 00 01 03 00 00 80 02 00 00 00 |................|
00000010 0f 00 00 00 b0 04 00 00 85 00 20 00 00 00 00 00 |.......... .....|
00000020 19 00 00 00 48 00 00 00 5f 5f 50 41 47 45 5a 45 |....H...__PAGEZE|
00000030 52 4f 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |RO..............|
00000040 00 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 |................|
00000050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000060 00 00 00 00 00 00 00 00 19 00 00 00 d8 01 00 00 |................|
00000070 5f 5f 54 45 58 54 00 00 00 00 00 00 00 00 00 00 |__TEXT..........|
$ wc -c hello.c a.out
60 hello.c
8432 a.out
```
For the learning dataset, I assume I'll have to have thousands of source code files along with their binary representations, so the algorithm can learn how the binary changes as the source changes.
How would you tackle this problem?
My concerns (and sub-questions) are:
* Does my algorithm need to be aware of the header file, or is it "smart" enough to figure it out?
* If it needs to know about the header, how do I tell my algorithm "here is the header file"?
* What should be input/output mapping (whether some section to section or file to file)?
* Do I need to divide my source code into some sections?
* Do I need to know exactly how decompilers work or AI can figure it out for me?
* Should I have two neural networks, one for header, another for body it-self?
* or more separate neural networks, each one for each logical component (e.g. byte->C tag, etc.)<issue_comment>username_1: In-between your input and desired output, there's obviously a huge space to search. The more relevant domain information you include as features, the higher chance that the Deep Learning (DL) algorithm can find the desired mapping.
At this early stage in DL research, there aren't so many rules of thumb to tell you what features to explicitly encode - not least because it depends on the size of your training corpus. My suggestion would be: obtain (or generate) a large corpus of C code, train on that with the most naive feature representation that you think might work, then repeatedly gather data and add more feature preprocessing as necessary.
This following paper describes a DL approach to what is almost the 'reverse problem' to yours - [generating the source code for a program described in natural language](http://arxiv.org/pdf/1510.07211.pdf).
I found the strength of the results reported in this paper surprising, but it does give me some hope that what you are asking might be possible.
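A "most naive feature representation" could be as crude as raw bytes in, source characters out. Here is a sketch of how such (input, target) pairs for a sequence model might be built; the four-byte blob below is just the Mach-O magic number from the question's hexdump, standing in for a real compiled object:

```python
def make_training_pair(binary_blob, source_text, max_bytes=512):
    """Most naive feature representation for a seq2seq model:
    raw bytes of the compiled object as integer tokens (0-255),
    the C source as the target character sequence."""
    byte_tokens = list(binary_blob[:max_bytes])
    char_tokens = [ord(c) for c in source_text]
    return byte_tokens, char_tokens

# Toy stand-ins; a real corpus would pair each compiled binary with its source.
x, y = make_training_pair(b"\xcf\xfa\xed\xfe", "int main() { return 0; }")
```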
Upvotes: 3 [selected_answer]<issue_comment>username_2: Software Reverse Engineering is one of my hobbies.
First things first: forget about headers. All information about headers and separate C file is gone.
You're missing some crucial steps, IMHO.
* Compilation creates one or multiple object files (.o), then the linker creates an executable.
* You should work from disassembled code. The disassembler works pretty well with some exceptions (self-modifying code, self-extracting executable, various obfuscation techniques) and will take care of a lot of work for you: identifying various sections, finding functions, guessing (fairly accurately) the calling convention.
* Then the compiler optimization will mess up your code in a very clever way, and some parts of your original code will never be seen again (hey, look, your 200 lines of buggy code always return 0 anyway, so I'll just replace them with "xor eax, eax").
* Sometimes, it's fine, and sometimes it produces unreadable C code (vectorizations that have no C equivalent and will be decompiled into hundreds of lines of intrinsic instead of a fine readable "for" loop).
* I'm not done yet. You also have exceptions and interrupts, structures, unions, function pointers, function inlining, threading, system calls and signals, loop unrolling, etc.
Going down (from human-readable to binary) is relatively easy compared to going up (decompilation) because so much information is lost during the compilation process.
My best bet would be to have a bunch of disassembled function produced by a disassembler and produce an LLVM intermediate representation using your AI, then compare it with the LLVM IR produced by Clang (`clang -S -emit-llvm foo.c`).
An infinite number of different C programs can compile to the exact same machine code. Therefore, I think it's meaningless to make an AI read C code for the purpose of decompilation: the information is lost forever.
Commercial and Open/Free decompilers do not produce C code either, they produce some kind of pseudo-C full of errors, missing code, or code even less readable than the ASM.
The following code :
```
int main() {
int toto = 0x0000BEEF;
int titi = 0xDEAD0000;
toto = toto | titi;
return toto;
}
```
produces this:
```
int __cdecl main(int argc, const char **argv, const char **envp)
{
return -559038737;
}
```
And this is the disassembled version:
```
mov eax, 0DEADBEEFh
retn
```
Plus a few thousand lines of assembly that are unrelated to your code but are needed to make the program work.
You can't go back and you have no way of knowing this is the exact same code unless
1. you can do static analysis (very easy in this case, but absurdly difficult in the real world)
2. or compare the IR or ASM produced by both code with the same compiler with the same options on the same architecture and operating system.
Upvotes: 1
|
2016/08/03
| 1,053
| 4,390
|
<issue_start>username_0: I'm investigating the possibility of storing the semantic-lexical connections (such as the relationships to the other words such as phrases and other dependencies, its strength, part of speech, language, etc.) in order to provide analysis of the input text.
I assume this has been already done. If so, to avoid reinventing the wheel, is there any efficient method to store and manage such data in some common format which has been already researched and tested?
|
2016/08/03
| 958
| 3,584
|
<issue_start>username_0: I'm interested in implementing a program for natural language processing (aka [ELIZA](https://en.wikipedia.org/wiki/ELIZA)).
Assuming that I'm already [storing semantic-lexical connections](https://ai.stackexchange.com/q/212/8) between the words and its strength.
What are the methods of dealing with words which have very distinct meanings?
Few examples:
* 'Are we on the same page?'
The 'page' in this context isn't a document page, but it's part of the phrase.
* 'I'm living in Reading.'
The 'Reading' is a city (noun), so it's not a verb. Otherwise it doesn't make any sense. Checking for the capital letter would work in that specific example, but it won't work for others (e.g. 'make' can be either a verb or a noun).
* 'I've read something on the Facebook wall, do you want to know what?'
The 'Facebook wall' has nothing to do with wall at all.
In general, how should an algorithm distinguish the meaning of a word and recognise the word within its context?
For example:
* Detecting the word's part of speech, so it recognises whether it's a verb or a noun.
* Detecting whether the word is part of a phrase.
* Detecting words with multiple meanings.
What are the possible approaches to solve that problem in order to identify the correct sense of a word within its context?<issue_comment>username_1: >
> In general, how should an algorithm distinguish the meaning of a word and recognise the word within its context?
>
>
>
I don't think anybody knows how to answer this for the general case. If they did, they'd have basically solved AGI. But we can certainly talk about techniques that get part-of-the-way there, and approaches that could work.
One thing I would consider trying (and I don't know off-hand if anybody has tried this exact approach) is to model the disambiguation of each word as a discrete problem for a [Bayesian Belief Network](https://en.wikipedia.org/wiki/Bayesian_network) where your priors (for any given word) are based on both stored "knowledge" as well as the previously encountered words in the (sentence|paragraph|document|whatever). So if you "know", for example, that "Reading is a city in the UK" and that "place names are usually capitalized", your network should be strongly biased towards interpreting "Reading" as the city, since nothing in the word position in the sentence strongly contradicts that.
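A toy version of that belief-network idea for the "Reading" example, reduced to a naive-Bayes combination of two contextual features. All probabilities are invented for illustration:

```python
# Disambiguate "Reading" (city vs. verb) from two features of its context.
priors = {"city": 0.3, "verb": 0.7}   # invented base rates for the token "reading"
likelihood = {                        # invented P(feature | sense)
    ("capitalised", "city"): 0.9, ("capitalised", "verb"): 0.1,
    ("after_in", "city"): 0.8, ("after_in", "verb"): 0.2,   # "living in Reading"
}

def posterior(features):
    scores = {}
    for sense, prior in priors.items():
        p = prior
        for f in features:
            p *= likelihood[(f, sense)]
        scores[sense] = p
    total = sum(scores.values())
    return {sense: p / total for sense, p in scores.items()}

post = posterior(["capitalised", "after_in"])   # "I'm living in Reading."
```

Even with a prior favouring the verb reading, the two contextual cues push the posterior firmly towards the city.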
Of course I'm hand-waving around some tricky problems in saying that, as [knowledge representation](https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning) isn't exactly a solved problem either. But there are at least approaches out there that you could use. For example, you could use the [RDF](https://en.wikipedia.org/wiki/Resource_Description_Framework) / triple based approach from the [Semantic Web](https://en.wikipedia.org/wiki/Semantic_Web) world. Finding a good way to merge that stuff with a Bayesian framework could yield some interesting results.
There has been a bit of research on "probabilistic RDF" that you could possibly use as a starting point. For example:
<http://om.umiacs.umd.edu/material/papers/prdf.pdf>
<http://ceur-ws.org/Vol-173/pos_paper5.pdf>
<https://www.w3.org/2005/03/07-yoshio-UMBC/>
<http://ebiquity.umbc.edu/paper/html/id/271/BayesOWL-Uncertainty-Modeling-in-Semantic-Web-Ontologies>
<http://www.pr-owl.org/>
Upvotes: 3 [selected_answer]<issue_comment>username_2: Use something like Word2Vec. If a particular node has two neighbours whose embeddings are very far from each other (apart from both being linked to the node in question), split the node into word(1) and word(2) nodes.
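A crude version of that splitting heuristic, assuming each neighbour already has an embedding. The 2-d vectors and the distance threshold here are invented; real Word2Vec vectors have hundreds of dimensions:

```python
import numpy as np

def should_split(neighbour_vecs, threshold=1.0):
    """Split a word node when two of its neighbours are very far apart
    in embedding space (suggesting two distinct senses)."""
    vecs = [np.asarray(v, dtype=float) for v in neighbour_vecs]
    max_dist = 0.0
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            max_dist = max(max_dist, float(np.linalg.norm(vecs[i] - vecs[j])))
    return max_dist > threshold

# "bank": one neighbour from finance contexts, one from river contexts.
split_needed = should_split([[0.9, 0.1], [-0.8, 0.2]])
no_split = should_split([[0.9, 0.1], [0.85, 0.15]])
```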
Upvotes: 1
|
2016/08/03
| 450
| 1,796
|
<issue_start>username_0: Given a list of fixed numbers from a mathematical constant, such as $\pi$, is it possible to train an AI to attempt to predict the next numbers of this constant?
Which AI or neural network would be more suitable for this task?
Especially, the one which will work without memorizing the entire training set, but the one which will attempt to find some patterns or statistical association.<issue_comment>username_1: Pseudo-random number generators are specifically defined to defeat any form of prediction via 'black box' observation. Certainly, some (e.g. linear congruential) have weaknesses, but you are unlikely to have any success in general in predicting the output of a modern RNG. For devices based on chaotic physical systems (e.g. most national lotteries), there is no realistic possibility of prediction.
"Patterns or statistical association" is a much weaker criterion than 'prediction'. Some very recent work has applied topological data analysis to visualize patterns within the infamous Randu RNG.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You would probably have to pack recursive structures into finite-dimensional real vectors, and there have been such attempts. Finite precision limits how deep the recursion can go.
*Feedforward* neural networks are restricted to finite input and output spaces, so a *recurrent* network may be more suitable for this task, since in theory it can process arbitrarily long strings of numbers; in practice, however, it brings many more difficulties than a feedforward network.
These kinds of methods are open to debate.
Source: [SAS FAQ](ftp://ftp.sas.com/pub/neural/FAQ.html)
References:
* Blair, 1997; Pollack, 1990; Chalmers, 1990; Chrisman, 1991; Plate, 1994; Hammerton, 1998; Hadley, 1999
Upvotes: -1
|
2016/08/03
| 1,108
| 4,649
|
<issue_start>username_0: Currently, most research done in artificial intelligence focuses on neural networks, which have been successfully used to solve many problems. A good example would be [DeepMind's AlphaGo](https://www.nature.com/articles/nature16961), which uses a convolutional neural network. There are many other examples, such as [Google Translate, which uses transformers](https://ai.googleblog.com/2020/06/recent-advances-in-google-translate.html), or [DQN](https://www.cs.toronto.edu/%7Evmnih/docs/dqn.pdf), which has been used to play Atari games.
So, are any of the variants of neural networks the only way to reach "true" artificial intelligence (or AGI)?<issue_comment>username_1: If by true AI, you mean 'like human beings', the answer is - no-one knows what the appropriate computational mechanisms (neural or otherwise) are or indeed whether we are capable of constructing them.
What Artificial Neural Nets (ANNs) do is essentially 'nonlinear regression' - perhaps this is not a sufficiently strong model to express humanlike behaviour.
Despite the 'Universal function approximation' property of ANNs, what if human intelligence depends on some as-yet-unguessed mechanism of the physical world?
With respect to your question about "the only way":
Even if (physical) neural mechanisms somehow actually were the *only* route to intelligence (e.g. via Penrose's quantum microtubules), how could that be proved?
Even in the formal world of mathematics, there's a saying that "Proofs of non-existence are hard". It scarcely seems conceivable that, in the physical world, it would be possible to demonstrate that intelligence could not arise by any other mechanism.
Moving back to computational systems, note that <NAME> made the interesting observation in his book ['A New Kind of Science'](http://www.wolframscience.com/nksonline/toc.html) that many of the apparently distinct mechanisms he observed seem to be capable of 'Universal Computation', so in that sense there's nothing very particular about ANNs.
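One concrete example behind Wolfram's observation is Rule 110, an elementary cellular automaton whose trivial update table has been proven capable of universal computation. A minimal sketch (the grid size and step count are arbitrary):

```python
# Rule 110: map each 3-cell neighbourhood to the next state of the centre cell.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [0] * 20 + [1]            # start from a single live cell
history = [row]
for _ in range(10):
    row = step(row)
    history.append(row)
```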
Upvotes: 5 [selected_answer]<issue_comment>username_2: It depends on what you consider "true artificial intelligence". But this probably means being able to think like a human - and perhaps to do so in a more rational manner, since in the human brain emotion comes before reason.
It would seem that a neural network, or a genetic algorithm that evolves neural networks, is the closest way - mimicking humans.
However, the traditional counter-argument to this is that we tried to do the same with flight. We tried to copy nature, mimic the birds - trying to fly by flapping wings. But eventually we made airplanes that did not rely on flapping their wings.
In AI, there are far more variables than in aerodynamics. So it is quite likely that a human-like intelligence can be attained by other methods than neural networks.
In the end, neural networks are one approach to machine learning. There are others, all governed by the rules for what can and cannot be learnt. (There is a field called Computational Learning Theory that covers this).
Although it is possible to extend learning systems beyond what can be learnt according to COLT, this means that such a learning system - neural network or otherwise - is essentially flawed, and will draw wrong conclusions at one point or another.
Upvotes: 2 <issue_comment>username_3: To have any chance at answering this, you'd first need a rigorous definition of "true artificial intelligence", which we don't have. And even if you had that, the best answer would probably be "nobody knows." We don't even understand exactly how human intelligence (which is probably the best model of intelligence we have available to study) works. What we do know (or think we know) is that ANN's are at best a very superficial replica of brain function. It may turn out that they're absolutely the wrong path to achieving "true artificial intelligence" although I expect most people would be surprised if that turned out to be the case.
What probably wouldn't be so surprising would be if some other technique emerged which is better than ANN's, OR if it turns out that you need an ensemble of techniques. Personally I think it's close to self-evident that the brain works largely in a probabilistic fashion, but it's also clear that we do sometimes use symbolic processing / deductive logic / rules / etc. And right now, ANN's don't give you much in the way of reasoning, deduction, etc. So we may ultimately find that we have to combine a probabilistic approach like ANN's with other techniques - maybe Inductive Logic Programming or something of that nature.
Upvotes: 2
|
2016/08/03
| 2,931
| 12,850
|
<issue_start>username_0: I've noticed that a few questions on this site mention genetic algorithms and it made me realize that I don't really know much about those.
I have heard the term before, but it's not something I've ever used, so I don't have much idea about how they work and what they are good for. All I know is that they involve some sort of evolution and randomly changing values.
Can you give me a short explanation, preferably including some sort of practical example that illustrates the basic principles?<issue_comment>username_1: A genetic algorithm is an algorithm that randomly generates a number of attempted solutions for a problem. This set of attempted solutions is called the "population".
It then tries to see how well these solutions solve the problem, using a given *fitness function*. The attempted solutions with the best *fitness* value are used to generate a new population. This can be done by making small changes to the attempted solutions (mutation) or by combining existing attempted solutions (crossover).
The idea is that, over time, an attempted solution emerges that has a high enough *fitness* value to solve the problem.
The inspiration for this came from the theory of evolution; the fittest solutions survive and procreate.
**Example 1**
Suppose you were looking for the most efficient way to cut a number of shapes out of a piece of wood. You want to waste as little wood as possible.
Your attempted solutions would be random arrangements of these shapes on your piece of wood. *Fitness* would be determined by how little wood would be left after cutting the shapes following this arrangement.
The less wood is left, the better the attempted solution.
**Example 2**
Suppose you were trying to find a polynomial that passes through a number of points. Your attempted solutions would be random polynomials.
To determine the *fitness* of these polynomials, you determine how well they fit the given points. (In this particular case, you would probably use the least squares method to determine how well the polynomial fit the points).
Over a number of trials, you would get polynomials that fit the points better, until you had a polynomial that fit the points closely enough.
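The second example above can be sketched in a few lines. This is a minimal illustrative GA, not taken from any library; the target line y = 2x + 1, the population size, the mutation scale and the truncation-selection scheme are all assumptions made for the demo.

```python
import random

# Target points lie on y = 2x + 1; we evolve coefficients (a, b) of y = a*x + b.
points = [(x, 2 * x + 1) for x in range(-5, 6)]

def fitness(ind):
    # Negative sum of squared errors, so that higher fitness is better.
    a, b = ind
    return -sum((a * x + b - y) ** 2 for x, y in points)

def mutate(ind, sigma=0.1):
    return tuple(g + random.gauss(0, sigma) for g in ind)

def crossover(p1, p2):
    # Each child gene is taken at random from one of the two parents.
    return tuple(random.choice(pair) for pair in zip(p1, p2))

random.seed(0)
pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(50)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                       # truncation selection (elitist)
    pop = survivors + [mutate(crossover(random.choice(survivors),
                                        random.choice(survivors)))
                       for _ in range(40)]

best = max(pop, key=fitness)
print(best)   # should end up close to (2, 1)
```

Over the generations, the surviving coefficient pairs drift toward the line that minimises the squared error, exactly as described above.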
Upvotes: 3 <issue_comment>username_2: Evolutionary algorithms are a family of optimization algorithms based on the principle of **Darwinian natural selection**. As part of natural selection, a given environment has a population of individuals that compete for survival and reproduction. The ability of each individual to achieve these goals determines their chance to have children, in other words, to pass on their genes to the next generation of individuals, who, for genetic reasons, will have an increased chance of doing well, even better, in realizing these two objectives.
This principle of continuous improvement over the generations is taken by evolutionary algorithms to optimize solutions to a problem. In the **initial generation**, a population composed of different **individuals** is generated randomly or by other methods. An individual is a solution to the problem, more or less good: the quality of the individual in regards to the problem is called **fitness**, which reflects the adequacy of the solution to the problem to be solved. The higher the fitness of an individual, the higher it is likely to pass some or all of its genotype to the individuals of the next generation.
An individual is coded as a **genotype**, which can have any shape, such as a bit vector (**genetic algorithms**) or a vector of real numbers (evolution strategies). Each genotype is transformed into a **phenotype** when assessing the individual, i.e. when its fitness is calculated. In some cases, the phenotype is identical to the genotype: this is called **direct** **coding**. Otherwise, the coding is called indirect. For example, suppose you want to optimize the size of a rectangular parallelepiped defined by its length, height and width. To simplify the example, assume that these three quantities are integers between 0 and 15. We can then describe each of them using a 4-bit binary number. An example of a potential solution may be the genotype 0001 0111 1010. The corresponding phenotype is a parallelepiped of length 1, height 7 and width 10.
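The genotype-to-phenotype decoding described above is only a few lines of code (the `decode` helper is hypothetical, written just to illustrate the three 4-bit fields):

```python
def decode(genotype):
    """Split a 12-bit genotype into three 4-bit fields and read each as an integer."""
    assert len(genotype) == 12
    return tuple(int(genotype[i:i + 4], 2) for i in (0, 4, 8))

# The genotype from the text, written without spaces:
print(decode("000101111010"))  # → (1, 7, 10): length 1, height 7, width 10
```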
During the transition from the old to the new generation, the **variation** **operators**, whose purpose is to manipulate individuals, are applied. There are two distinct types of variation operators:
* the **mutation** **operators**, which are used to introduce variations within the same individual, as genetic mutations;
* the **crossover** **operators**, which are used to cross at least two different genotypes, as genetic crosses from breeding.
Evolutionary algorithms have proven themselves in various fields such as operations research, robotics, biology, and cryptography. In addition, they can optimize multiple objectives simultaneously and can be used as black boxes, because they do not assume any properties of the mathematical model to optimize. Their only real limitation is computational complexity.
[](https://i.stack.imgur.com/wweBO.png)
Upvotes: 5 [selected_answer]<issue_comment>username_3: As observed in another answer, all you need to apply Genetic Algorithms (GAs) is to represent a potential solution to your problem in a form that is subject to crossover and mutation. Ideally, the fitness function will provide some kind of smooth feedback about the quality of a solution, rather than simply being a 'Needle in a Haystack'.
Here are some characteristics of problems that Genetic Algorithms (and indeed [Metaheuristics](https://cs.gmu.edu/%7Esean/book/metaheuristics/) in general) are good for:
* NP-complete - The number of possible solutions to the problem is exponential, but checking the fitness of a solution is relatively cheap (technically, with time polynomial in the input size).
* Black box - GAs work reasonably well even if you don't have a particularly informed model of the problem to be solved. This means that these approaches are also useful as a 'rapid prototyping' approach to solving problems.
However, despite their widespread use for the purpose, note that GAs are actually [*not* function optimizers](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.161.5655) - GA mechanisms tend not to explore 'outlying' regions of the search space in the hope of finding some distant high quality solution, but rather to cluster around more easily attainable peaks in the 'fitness landscape'.
More detail on the applicability of GAs is given in a famous early paper ["What makes a problem hard for a Genetic Algorithm?"](https://link.springer.com/article/10.1007/BF00993046)
Upvotes: 3 <issue_comment>username_4: This answer requests a practical example of how one might be used, which I will attempt to provide in addition to the other answers. They seem to due a very good job of explaining what a genetic algorithm is. So, this will give an example.
Let's say you have a neural network (although they are not the only application of it), which, from some given inputs, will yield some outputs. A genetic algorithm can create a population of these, and by seeing which output is the best, breed and kill off members of the population. Eventually, this should optimise the neural network if it is complicated enough.
Here is a demonstration I've made, which despite being badly coded, might help you understand. <http://khrabanas.github.io/projects/evo/evo.html>
Hit the evolve button and mess around with the goals.
It uses a simple genetic algorithm to breed, mutate, and decide which individuals of the population survive. Depending on how the input variables are set, the network will be able to get to some level of closeness to them. In this fashion, the population will likely eventually become a homogeneous group, whose outputs resemble the goals.
The genetic algorithm tries to create a "neural network" of sorts that, taking RGB values as input, will yield an output color. First, it generates a random population. It then takes 3 random members from the population, selects the worst one (the one with the largest fitness value, since fitness here is an error), and removes it from the population. The fitness is equal to the difference from the top goal squared plus the difference from the bottom goal squared, so lower is better. It then breeds the two remaining members together and adds the child to the same place in the population as the dead member. When mating occurs, there is a chance that a mutation will occur, changing one of the values randomly.
As a side note, due to how it is set up, it is impossible for it to be totally correct in many cases, though it will reach relative closeness.
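The steady-state scheme described above (pick 3 at random, remove the worst, breed the other two into its slot) can be sketched in isolation, leaving out the neural-network part. This is a loose approximation of the linked demo, not its actual code; the goal colour, population size and mutation rate are assumed values.

```python
import random

goal = (200, 50, 120)                      # assumed target colour

def fitness(ind):
    # Lower is better here: squared distance to the goal, mirroring the
    # "difference squared" fitness described above.
    return sum((a - b) ** 2 for a, b in zip(ind, goal))

def breed(p1, p2, mutation_rate=0.1):
    child = [random.choice(pair) for pair in zip(p1, p2)]
    if random.random() < mutation_rate:    # occasional random mutation
        child[random.randrange(3)] = random.randrange(256)
    return tuple(child)

random.seed(1)
pop = [tuple(random.randrange(256) for _ in range(3)) for _ in range(30)]
for step in range(3000):
    i, j, k = random.sample(range(len(pop)), 3)        # pick 3 random members
    worst = max((i, j, k), key=lambda idx: fitness(pop[idx]))
    parents = [pop[idx] for idx in (i, j, k) if idx != worst]
    pop[worst] = breed(*parents)           # child replaces the loser in place

print(min(pop, key=fitness))               # should approach the goal colour
```

As in the demo, the population tends to become a homogeneous group whose members resemble the goal.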
Upvotes: 3 <issue_comment>username_5: There are a number of good answers here explaining what genetic algorithms are, and giving example applications. I'm adding some general purpose advice on what they are good for, but also cases where you should NOT use them.
If my tone seems harsh, it is because using GAs in any of the cases in the **inappropriate** section below will lead to your paper being *instantly* rejected from any top-tier journal.
First, your problem MUST be an optimization problem. You need to define a "fitness function" that you are trying to optimize and you need to have a way to measure it.
### Good
* **Crossover functions are easy to define and natural**: When dealing with certain kinds of data, crossover/mutation functions might be easy to define. For example strings (eg. DNA or gene sequences) can be mutated easily by splicing two candidate strings to obtain a new one (this is why nature uses genetic algorithms!). Trees (like phylogenetic trees or parse trees) can be spliced too, by replacing a branch of one tree with a branch from another. Shapes (like airplane wings or boat shapes) can be mutated easily by drawing a grid on the shape and combining different grid sections from the parents to obtain a child. Usually this means your problem is composed of different parts and putting together parts from distinct solutions is a valid candidate solution.
* This means that if your problem is defined in a vector space where the coordinates don't have any special meaning, GAs are not a good choice. If it is hard to formulate your problem as a GA, it is not worth it.
* **Black Box evaluation**: If for a candidate, your fitness function is evaluated outside the computer, GAs are a good idea. For example, if you are testing a wing shape in an air tunnel, genetic algorithms will help you generate good candidate shapes to try.
* **Exception: Simulations**. If your fitness function is measuring how well a nozzle design performs and requires simulating the fluid dynamics for each nozzle shape, GAs may work well for you. They may also work if you are simulating a physical system through time and are interested in how well your design performs over the course of the operation eg. [modelling locomotion patterns](https://www.youtube.com/watch?v=dRthdBr46cs). However, methods that use partial differential equations as constraints are being developed in the literature, eg. [PDE constrained optimization](https://www.siam.org/meetings/op08/Heinkenschloss.pdf), so this may change in the future.
### Inappropriate
* **You can calculate a gradient** for your function: If you have access to the gradient of your function, you can do gradient descent, which is in general much more efficient than GAs. Gradient descent may have issues with local minima (as will GAs) but many methods have been studied to mitigate this.
* **You know the fitness function in closed form**: Then, you can probably calculate the gradient. Many languages have libraries supporting [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation), so you don't even need to do it manually. If your function is not differentiable, then you can use [subgradient descent](https://en.wikipedia.org/wiki/Subgradient_method).
* Your optimization problem is of a known form, like a **linear program or a quadratic program**: GAs (and black box optimization methods in general) are very inefficient in terms of the number of candidates they need to evaluate, and are best avoided if possible.
* **Your solution space is small**: If you can grid your search space efficiently, you can guarantee that you have found the best solution, and can make contour plots of the solution space to see if there is a region you need to explore further.
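For contrast with the black-box search that GAs perform, the gradient-descent alternative mentioned in the first point above is only a few lines when the gradient is available in closed form (the quadratic objective below is an arbitrary toy example):

```python
# Minimise f(x, y) = (x - 3)**2 + (y + 1)**2 using its analytic gradient.
def grad(x, y):
    return 2 * (x - 3), 2 * (y + 1)

x, y, lr = 0.0, 0.0, 0.1
for _ in range(100):
    gx, gy = grad(x, y)
    x, y = x - lr * gx, y - lr * gy

print(round(x, 3), round(y, 3))  # converges to the minimum at (3, -1)
```

Each step here uses exact slope information, which is precisely what a GA has to do without.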
Finally, if you are considering a GA, consider more recent work in Evolutionary Strategies. I am biased towards [CMA-ES](https://en.wikipedia.org/wiki/CMA-ES), which I think is a good simple algorithm that captures the notion of a gradient in the fitness landscape in a way that traditional GAs do not.
Upvotes: 3
|
2016/08/03
| 1,019
| 4,033
|
<issue_start>username_0: In detective novels, the point is often that the reader gets enough information to solve the crime themselves. This "puzzle" aspect of detective novels is part of the attraction.
Often the difficulty for humans is to keep track of all the variables - events, items, motivations. An AI would have an easier time keeping track of all the details, but would rely on real-world knowledge to prevent making crazy mistakes. For example, if it was stated that a character took the train, the AI would need to know that this is a method of transportation - that it changes the location property of an agent over time.
Has an AI ever been able to solve a detective mystery?<issue_comment>username_1: Not exactly a detective mystery, but according to a slide dated June 2012 from a NSA PowerPoint presentation (see: <NAME>’s site), NSA used some kind of *Skynet* AI technology to analyze and detect suspicious patterns from location and communication data in order to create a watch list of suspected terrorists. This helped to track associated members of Al-Qa’ida as well as the Muslim Brotherhood. And I'm sure their AI solved a lot of mysteries and found some controversial figures.
Source: [U.S. Government Designated Prominent Al Jazeera Journalist as a member of AI Qaeda](https://theintercept.com/2015/05/08/u-s-government-designated-prominent-al-jazeera-journalist-al-qaeda-member-put-watch-list/)
For more details check: [SKYNET: Courier Detection via Machine Learning](https://theintercept.com/document/2015/05/08/skynet-courier/) for courier detection data and charts generated by analyzing GSM metadata using machine learning algorithms. Also [Applying Advanced Cloud-based Behavior Analytics](https://theintercept.com/document/2015/05/08/skynet-applying-advanced-cloud-based-behavior-analytics/).
Upvotes: 0 <issue_comment>username_2: I might be wrong, but I do not believe that something of the scope you describe would be possible with the current state of technology. It would require a lot of things which are still in relatively early stages of research.
For one, just extracting relevant information from text is a huge task by itself. Doubly so with a novel which contains a large amount of unimportant details.
It might perhaps be easier if the input was presented in the form of some sort of list of important facts. But it would still be rather difficult for the AI to connect them and find a solution.
As an example, let's say that we have these two facts:
* Alice died of a snake bite.
* Bob was seen buying a couple of mice recently.
To a human, it seems obvious that the mice were bought to feed a venomous snake. However, it would probably require a tremendous effort to teach an AI to make such connections.
Disclaimer: I don't do text processing myself, so I'm not quite up-to-date on the current state-of-the-art. It's possible that some of these things have already been done in some form. If anyone knows more about the subject, please correct me if I'm wrong.
Upvotes: 2 <issue_comment>username_3: Generally agree with @Inquisitive Lurker, but I think we also have a wide range of potential abilities/requirements. As with computer chess or Go, where there's a big difference between "beating an honest novice human " and "beating all humans"; there's a big difference between solving a simple kids' mystery and a complex adult novel.
So I don't think there would be any problem writing a program that could solve a mystery presented as a list of statements, or laid out as a (very young) children's book. However, something like an Agatha Christie or <NAME>'s "Tinker Tailor Soldier Spy" (relatively simple solution, but the story is told in a complex manner) is far in the future.
Sometimes an alternative approach might work. For example, a neural network could probably solve all Columbo mysteries at the "Who did it?" level, without a full "Why?" explanation, after only reading a few Columbo mysteries. The same is true for most kids!
Upvotes: 2
|
2016/08/03
| 634
| 2,590
|
<issue_start>username_0: In 1969, <NAME> and <NAME> showed that Perceptrons could not learn the XOR function.
This was solved by the backpropagation network with at least one hidden layer. This type of network can learn the XOR function.
I believe I was once taught that every problem that could be learned by a backpropagation neural network with multiple hidden layers, could also be learned by a backpropagation neural network with a single hidden layer. (Although possibly a nonlinear activation function was required).
However, it is unclear to me what the limits are to backpropagation neural networks themselves. Which patterns **cannot** be learned by a neural network trained with gradient descent and backpropagation?<issue_comment>username_1: While I'm not familiar with any explicit statements regarding what a Multilayer Perceptron (MLP) *cannot* learn, I can provide some further detail on the positive statements you made about MLP capabilities:
A MLP with a single hidden layer is capable of what is commonly termed ['Universal Function Approximation'](https://en.wikipedia.org/wiki/Universal_approximation_theorem), i.e. it can approximate any bounded continuous function to an arbitrary degree of accuracy. With two hidden layers, the boundness restriction is removed [[Cybenko, 1988]](http://link.springer.com/article/10.1007/BF02551274).
[This paper](http://www.sciencedirect.com/science/article/pii/089360809190009T) goes on to demonstrate that this is true for a wide range of activation functions (not necessarily nonlinear). 3 layer MLPs are also capable of representing any boolean function (although they may require an exponential number of neurons).
See also [this interesting answer](https://cstheory.stackexchange.com/questions/7894/universal-function-approximation) on CS SE about other Universal approximators.
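As a concrete illustration of the single-hidden-layer case, here is a minimal NumPy sketch of a 2-8-1 sigmoid network learning XOR by plain backpropagation. The layer width, learning rate and iteration count are arbitrary choices for the demo, not canonical values.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)       # one hidden layer, 8 units
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                         # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)              # backprop deltas
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(0)            # learning rate 1.0
    W1 -= X.T @ d_h;   b1 -= d_h.sum(0)

print(out.ravel().round(2))                          # approaches [0, 1, 1, 0]
```

Remove the hidden layer and no setting of the remaining weights can fit all four points, which is exactly the perceptron limitation discussed in the question.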
Upvotes: 2 <issue_comment>username_2: A Multilayer Perceptron (MLP) can theoretically approximate any bounded, continuous function. There's no guarantee for a discontinuous function, and there are plenty of important discontinuous functions, like, say, the prime counting function.
The [prime counting function](https://en.wikipedia.org/wiki/Prime-counting_function) $\pi(n)$ is simply equal to the number of primes less than or equal to $n$. It has a discontinuity at each prime $p$, so good luck trying to approximate this with a neural network!
However, this function is extensively studied and extremely important in number theory. See the [Riemann hypothesis](https://en.wikipedia.org/wiki/Riemann_hypothesis).
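For reference, the discontinuous target itself is easy to compute exactly with a Sieve of Eratosthenes (the `prime_pi` name is just illustrative); every jump in the printed sequence is one of the discontinuities mentioned above:

```python
def prime_pi(n):
    """Count primes <= n using a simple Sieve of Eratosthenes."""
    if n < 2:
        return 0
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Mark all multiples of p starting from p*p as composite.
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sum(sieve)

print([prime_pi(n) for n in range(1, 11)])  # [0, 1, 2, 2, 3, 3, 4, 4, 4, 4]
print(prime_pi(100))                        # 25
```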
Upvotes: 2
|
2016/08/03
| 1,080
| 4,828
|
<issue_start>username_0: Over the last 50 years, the rise/fall/rise in popularity of neural nets has acted as something of a 'barometer' for AI research.
It's clear from the questions on this site that people are interested in applying Deep Learning (DL) to a wide variety of difficult problems.
I therefore have two questions:
1. Practitioners - What do you find to be the main obstacles to applying DL 'out of the box' to your problem?
2. Researchers - What techniques do you use (or have developed) that might help address practical issues? Are they within DL or do they offer an alternative approach?<issue_comment>username_1: I have too little experience with ML/DL to call myself either a practitioner or a researcher, but here is my answer to the 1st question:
At its core, DL solves the task of classification well. Not every practical problem can be rephrased in terms of classification. The classification domain needs to be known upfront. Although classification can be applied to any type of data, it's necessary to train the NN with samples from the specific domain where it'll be applied. If the domain is switched at some point, while keeping the same model (NN structure), it'll have to be retrained with new samples. Furthermore, even the best classifiers have "gaps" - [Adversarial Examples](http://www.kdnuggets.com/2015/07/deep-learning-adversarial-examples-misconceptions.html) can be easily constructed from a training sample, such that the changes are imperceptible to a human, but are misclassified by the trained model.
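The adversarial-example effect can be illustrated on even the simplest classifier. The sketch below uses a made-up fixed linear model, not a trained deep network; it only shows how a small signed perturbation (the same idea as the fast gradient sign method) flips a decision. All the numbers are assumptions chosen for the demo.

```python
import numpy as np

# A fixed "trained" linear classifier: score(x) = w.x + b, class 1 if score > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.3, -0.2, 0.4])          # correctly classified as class 1

eps = 0.3
x_adv = x - eps * np.sign(w)            # FGSM-style step against the score

score = lambda v: float(w @ v + b)
print(score(x), score(x_adv))           # the small perturbation flips the sign
```

Each coordinate moves by only 0.3, yet the predicted class changes, which is the essence of the "gaps" mentioned above.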
Upvotes: 3 <issue_comment>username_2: Question 2.
I am researching whether hyperdimensional computing is an alternative to deep learning. Hyper-D uses very long bit vectors (10,000 bits) to encode information. The vectors are random and as such are approximately orthogonal. By grouping and averaging a collection of such vectors, a "set" can be formed and later queried to see if an unknown vector belongs to the set. The set can be considered a concept or a generalized image, etc. Training is very fast, as is recognition. What needs to be done is to simulate the domains in which deep learning has been successful and compare Hyper-D with it.
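A minimal sketch of the idea, under the assumption of bipolar (+1/-1) vectors, bundling by coordinate-wise majority, and querying by a normalized dot product (the names `bundle` and `similarity` are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                                   # very long bipolar vectors

def random_hv():
    return rng.choice([-1, 1], size=D)       # ~orthogonal by construction

def bundle(vectors):
    # Coordinate-wise majority vote forms a "set" vector.
    return np.sign(np.sum(vectors, axis=0))

def similarity(a, b):
    return float(a @ b) / D                  # ~0 for unrelated vectors

cat, dog, car, boat = (random_hv() for _ in range(4))
pets = bundle([cat, dog, car])               # a concept holding three members

print(similarity(pets, dog))                 # clearly positive: dog is in the set
print(similarity(pets, boat))                # near zero: boat is not
```

Both bundling and querying are single vector operations, which is why training and recognition are so fast in this scheme.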
Upvotes: 2 <issue_comment>username_3: From a mathematics point of view, one of the major issues in deep networks with several layers is **vanishing** or **unstable gradients**. Each additional hidden layer learns significantly slower, almost nullifying the benefit of the additional layer.
Modern deep learning approaches can improve this behavior, but in simple, old fashioned neural networks this is a well known issue. You can find a well written analysis [here](http://neuralnetworksanddeeplearning.com/chap5.html) for deeper study.
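A back-of-the-envelope illustration of why this happens with sigmoid activations: the sigmoid's derivative never exceeds 0.25, and backpropagation multiplies in one such factor per layer, so the gradient reaching early layers shrinks at least geometrically with depth.

```python
# Max slope of the sigmoid s(z) = 1/(1+e^-z) is s'(0) = 0.25. A gradient
# backpropagated through n sigmoid layers picks up at most a 0.25 factor
# per layer (ignoring the weights), so it shrinks geometrically with depth.
max_slope = 0.25
for depth in (1, 5, 10, 20):
    print(depth, max_slope ** depth)
# by depth 20 the activation factors alone cap the gradient below 1e-12
```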
Upvotes: 1 <issue_comment>username_4: To summarize, there are two major issues in applied Deep Learning.
* The first is that it is computationally exhausting. Normal CPUs require a lot of time to perform even basic computation/training with deep learning; GPUs are thus recommended, but even they may not be enough in many situations. Moreover, typical deep learning models do not come with theoretical guarantees of polynomial training time. If we look at the relatively simpler models in ML for the same tasks, we quite often have mathematical guarantees that the training time required for such simpler algorithms is polynomial. This, for me at least, is probably the biggest difference.
There are solutions to counter this issue, though. One main approach is to optimize DL algorithms for a limited number of iterations only (instead of looking for the global solution in practice, just optimize the algorithm to a good local solution, where the criterion for "good" is defined by the user).
* Another issue, which may be a little controversial to young deep learning enthusiasts, is that deep learning algorithms lack theoretical understanding and reasoning. Deep neural networks have been successfully used in a lot of situations, including handwriting recognition, image processing, self-driving cars, signal processing, NLP and biomedical analysis. In some of these cases, they have even surpassed humans. However, that being said, they are by no means as theoretically well-founded as most statistical methods.
I will not go into detail; rather, I leave that up to you. There are pros and cons for every algorithm/methodology, and DL is not an exception. It is very useful, as has been proven in a lot of situations, and every young data scientist should learn at least the basics of DL. However, in the case of relatively simple problems, it is better to use well-known statistical methods, as they have a lot of theoretical results/guarantees to support them. Furthermore, from a learning point of view, it is always better to start with simple approaches and master them first.
Upvotes: 4 [selected_answer]
|
2016/08/03
| 1,017
| 3,716
|
<issue_start>username_0: On the Wikipedia page we can read the basic structure of an artificial neuron (a model of biological neurons) which consist:
* Dendrites - acts as the input vector,
* Soma - acts as the summation function,
* Axon - gets its signal from the summation behavior which occurs inside the soma.
I've checked [deep learning Wikipedia page](https://en.wikipedia.org/wiki/Deep_learning), but I couldn't find any references to dendrites, soma or axons.
Which type of artificial neural network implements or can mimic such a model most closely?<issue_comment>username_1: Most artificial neurons model biological neurons but in a very simplistic way. Nowadays, the main aim is to achieve better performance at prediction tasks. However, there is a body of literature in neuroscience that looks at [computational models of neurons](https://en.wikipedia.org/wiki/Models_of_neural_computation). Neurons are complicated cells and our understanding of neurons is still not complete.
Upvotes: 2 <issue_comment>username_2: ANNs approximate biological neuronal networks. The approximation began with extreme simplicity in the early perceptron design. [Spiking networks](https://en.wikipedia.org/wiki/Spiking_neural_network) are examples of more accurate approximations. More accurate still, are complex simulations of neuron behavior that therefore necessitate significant computing resources.
If you are interested in a mathematical overview on analysis of biological neuron models I can recommend [Dynamical Systems in Neuroscience](https://www.izhikevich.org/publications/dsn.pdf) by [<NAME>](https://www.izhikevich.org/).
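For a taste of what such simulations look like, here is a rough Euler-integration sketch of Izhikevich's simple spiking-neuron model with its commonly quoted regular-spiking parameters. The time step and input current are assumed values chosen for the demo.

```python
# Izhikevich's spiking-neuron model, regular-spiking parameters:
#   v' = 0.04v^2 + 5v + 140 - u + I,  u' = a(bv - u),
#   with reset v <- c, u <- u + d whenever v reaches 30 mV.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0
I, dt = 10.0, 0.5                    # constant input current, time step in ms

spikes = []
for step in range(2000):             # about 1 second of simulated time
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # spike: record the time and reset
        spikes.append(step * dt)
        v, u = c, u + d

print(len(spikes))                   # the neuron fires repeatedly
```

Even this small model reproduces tonic spiking with adaptation, which the simple weighted-sum neuron of a standard ANN does not capture at all.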
Upvotes: 2 <issue_comment>username_3: Only a small portion of the habituation, sensitization, and classical conditioning behavior of neurons has been primitively simulated in ANN systems. Simulation of actin cytoskeletal machinery [1] and other agents of neural plasticity, central to learning new domains, is in its beginnings [2]. As of this writing, the complexity of neuron activation dwarfs the models being used in working commercial ANN systems, but the research continues along multiple fronts.
* The neuroscience of learning [3],
* Parallel hardware approaches that better support ANN simulation accuracy [4], [5], and
* Dynamic frameworks [6]
This list and the examples referenced above, with links below, represent a tiny sample of the information available and the work in progress.
**References**
[1] [Molecular Cell Biology. 4th edition.; <NAME>, <NAME>, Zipursky SL, et al.; New York: W. H. Freeman; 2000.; Section 18.1 The Actin Cytoskeleton](https://www.ncbi.nlm.nih.gov/books/NBK21493/)
[2] [NEURON Software; Yale U](http://www.neuron.yale.edu/neuron/)
[3] [Molecular Cell Biology. 4th edition.; <NAME>, <NAME>, Zip<NAME>, et al.; New York: W. H. Freeman; 2000.; Section 21.7 Learning and Memory](https://www.ncbi.nlm.nih.gov/books/NBK21648/)
[4] [Artificial Neural Networks on Massively Parallel Computer Hardware; <NAME>; University of Magdeburg, Germany](http://www.sciencedirect.com/science/article/pii/S0925231204000682)
[5] [NeuroGrid; Stanford U](https://web.stanford.edu/group/brainsinsilicon/neurogrid.html)
[6] [Explanation of Dynamic Computational Graph frameworks](https://ai.stackexchange.com/a/3803/2444)
Upvotes: 3 [selected_answer]<issue_comment>username_4: A computational model that attempts to closely mimic human neural networks is Numenta's [hierarchical temporal memory](https://numenta.org/hierarchical-temporal-memory/) (which has not yet received much attention from the machine learning community). In their models, they explicitly model and implement dendrites and other biological concepts.
Upvotes: 0
|
2016/08/04
| 1,264
| 4,948
|
<issue_start>username_0: Have there been any studies which attempted to use AI algorithms to detect human thoughts or emotions based on brain activity, such as using [BCI/EEG devices](https://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface#EEG-based)?
By this, I mean simple guesses such as whether the person was happy or angry, or what object (e.g. banana, car) they were thinking about.
If so, did any of those studies show some degree of success?<issue_comment>username_1: As per this [site](http://www.dailymail.co.uk/sciencetech/article-2095214/As-scientists-discover-translate-brainwaves-words--Could-machine-read-innermost-thoughts.html)
>
> Researchers recorded the complex patterns of electrical activity generated by someone’s brain, as the subject listened to someone talking.
> By feeding those brainwave patterns into a computer, they were able to translate them back into actual words — the same words that the volunteer had been hearing.
>
>
> **The scientists behind the work believe they can now go further and read the unspoken thoughts of people using electrodes placed against the brain.**
>
>
> In the experiment, each patient listened to a recording of spoken words for five to ten minutes, while the net of electrodes placed under their skull monitored activity in a part of the brain involved in understanding speech called [Wernicke’s area](https://en.wikipedia.org/wiki/Wernicke%27s_area).
>
>
> In one experiment, volunteers looked at black-and-white photographs while the scanner monitored activity in part of the brain that handles vision called the primary visual cortex.
> A computer predicted accurately the image that the person was looking at purely from the pattern of brain activity.
>
>
>
So AI might be able to read our emotions as well in the near future.
I also found that [google glasses can detect people's emotion](http://electronics.howstuffworks.com/gadgets/high-tech-gadgets/google-glass-detect-emotions.htm) via facial expression, voice tone, etc. (just like us) - though obviously not what someone is thinking in their brain.
Upvotes: 3 <issue_comment>username_2: There has been previous research with promising results cited at length in the following recent article, and although they have limited training data, here is some [impressive research for an undergraduate thesis at the University of Arkansas](http://uaf46365.ddns.uark.edu/SarahStolze_Thesis.pdf) which extends that research using an artificial neural network on enhancing a classifying algorithm's capacity to facilitate unspoken, or imagined, speech recognition by collecting and analyzing a large dataset of simultaneous EEG signal and video data streams.
>
> Imagined speech (unspoken speech, silent speech, or covert speech) is
> the process by which one thinks about a word, or “hears” the word in
> one’s head, in the absence of any vocalization or physical movement
> indicating the word. Though there exists evidence that it is possible
> for imagined speech information to be captured and interpreted. To
> facilitate imagined speech, a Brain-to-Computer Interface (BCI) must
> be implemented to provide silent communication abilities directly
> between the two entities. One of the most popular methods for
> interfacing directly between a human brain and a computer is through
> electroencephalographic signals.
>
>
> Researchers have created models capable of achieving 70 - 90%
> predictive accuracy in recognizing patterns in EEG data;
> however, the accuracy of current methods for unspoken speech
> recognition is not yet sufficient to enable fluid communication
> between humans and machines.
>
>
>
High Level Experiment Design
>
> the subjects were asked to imagine a specific word or feeling (label).
> The subjects responded to a set of uniform verbal cues describing the
> set of labels as well as the desired individual label to imagine. The
> data was then processed in order to minimize the effects of irrelevant
> signal activity, or noise. Additionally the data was processed to
> minimize its volume while still maintaining the core “information” in
> the data. The condensed dataset was created by dropping irrelevant
> information from the EEG device and applying principal component
> analysis (PCA) to the video stream data. Once the data was processed
> and assembled into the correct format, cross-validation using a random
> forest algorithm was performed on the control group of EEG signals
> alone and on the hypothesis group consisting of both EEG and video
> data. The predictive accuracy measurements obtained from the
> cross-validation experiments were used as metrics to evaluate the
> success of the hypothesis.
>
>
>
The results show a notable improvement in classifying thoughts when the EEG data is used in conjunction with the video streams.
[](https://i.stack.imgur.com/LH6sI.png)
Upvotes: 3
|
2016/08/04
| 964
| 4,034
|
<issue_start>username_0: Has there been any attempts to deploy AI with blockchain technology?
Are there any decentralized examples of AI networks with no central point of control with AI nodes acting independently (but according to a codified set of rules) creating, validating and storing the same shared decentralized database in many locations around the world?<issue_comment>username_1: [Swarm intelligence](https://en.wikipedia.org/wiki/Swarm_intelligence) is the term for systems where relatively simple agents work together to solve a complicated problem in a decentralized fashion.
In general, distributed computing methods are very important for dealing with problems at scale, and many of them embrace decentralization in a deep way. (Given the reality of hardware failure and the massive size of modern datasets relative to individual nodes, the less work is passed through a central bottleneck, the better.)
While there are people interested in doing computation on the blockchain, it seems to me like it's unlikely to be competitive with computation in dedicated clusters (like AWS).
Upvotes: 4 [selected_answer]<issue_comment>username_2: I think the best example of AI being deployed on the blockchain is [SingularityNET](https://singularitynet.io/). They just had a successful token sell where they sold out of their AGI token which will be able to be used to essentially "pay" for AI tasks to be done for you. Various AI will be put on the network and able to interact and communicate with each other to get various tasks done. There are [some great videos online](https://www.youtube.com/watch?v=iFFHsmtQs6w) where Dr. <NAME> explains this further. And here is a link to [their whitepaper](https://public.singularitynet.io/whitepaper.pdf).
Upvotes: 2 <issue_comment>username_3: I'm currently working on a p2p framework to train with neuroevolution, it will have neat, hyperneat, and eshyperneat example experiments. AI developers will be able to fork and add any experiments they want.
It isn't a blockchain per se, but it will have a DHT for genes, champion nets, peers of course, and genomes. Neuroevolution training can be done in parallel, so the population will be distributed evenly among peers, and peers that finish early will help slower peers with their evaluations. To prevent malice, each peer will have one of the nets it evaluated re-checked by a random peer, and all peers will check the champion of each generation. Any peer that evaluates a net will be rewarded a token specific to the generation it helped train and will also download a copy of the champion net. Later I plan to allow non-training clients to view the performance of past champions and purchase them from the network; any peer with a token for the genome being purchased will receive part of the payment. This will not be proof of stake but a proof of work where the work is evaluating nets; since those evaluations need to be checked by other peers so people don't just post phony fitness for genomes, I'll be calling it proof of fitness.
Upvotes: 0 <issue_comment>username_4: Maybe the term you're looking for is federated learning. Check out [OpenMined](https://www.openmined.org/) project, [PySyft](https://github.com/OpenMined/PySyft) and [Tensorflow Federated](https://www.tensorflow.org/federated) libraries.
Upvotes: 0 <issue_comment>username_5: Have a look at the paper [Blockchain-Based Federated Learning in Medicine](https://hal.archives-ouvertes.fr/hal-03080421/document) (2020), where the blockchain is used as a "federation" server for improving the parameters of local neural networks.
Upvotes: 1 <issue_comment>username_6: I'm aware of some works that use blockchains in [federated learning](https://en.wikipedia.org/wiki/Federated_learning) scenario for accountability, incentivize participants and so on.
Here's an example, [BlockFLA: Accountable Federated Learning via Hybrid Blockchain Architecture](https://arxiv.org/abs/2010.07427) (2020), but there are probably many similar works around.
Upvotes: 1
|
2016/08/04
| 2,274
| 7,798
|
<issue_start>username_0: In their famous book entitled [Perceptrons: An Introduction to Computational Geometry](https://rads.stackoverflow.com/amzn/click/com/0262631113), Minsky and Papert show that a perceptron can't solve the XOR problem. This contributed to the first AI winter, resulting in funding cuts for neural networks. However, now we know that a multilayer perceptron can solve the XOR problem easily.
Backprop wasn't known at the time, but did they know about manually building multilayer perceptrons? Did Minsky & Papert know that multilayer perceptrons could solve XOR at the time they wrote the book, albeit not knowing how to train it?<issue_comment>username_1: There does not appear to be a historical consensus on this.
The [Wikipedia page on the Perceptrons book](https://en.wikipedia.org/wiki/Perceptrons_(book)) (which does not come down on either side) gives an argument that the ability of MLPs to compute any Boolean function was widely known at the time (at the very least to McCulloch and Pitts).
However, [this page](http://harveycohen.net/image/perceptron.html) gives an account by someone present at the MIT AI lab in 1974, claiming that this was not common knowledge there, alluding to documentation in "Artificial Intelligence Progress Report: Research at the Laboratory in Vision, Language, and other problems of Intelligence" (p31-32) which is claimed to support this.
Upvotes: 4 <issue_comment>username_2: In section **13.2 Other Multilayer Machines** (pp. 231-232) of the book [Perceptrons: An Introduction to Computational Geometry](https://russell-davidson.arts.mcgill.ca/e706/Perceptrons.pdf#page=246) (expanded edition, third printing, 1988) Minsky and Papert actually talk about their knowledge of or opinions about the capabilities of what they call the *multilayered machines* (i.e. perceptrons with many layers or MLPs).
>
> *Have you considered "perceptrons" with many layers?*
>
>
> Well, we have considered **Gamba machines**, which could be described as "**two layers of perceptron**". We have not found (by thinking or by studying the literature) any other really interesting class of multilayered machine, at least none whose principles seem to have a significant relation to those of the perceptron. To see the force of this qualification it is worth pondering the fact, trivial in itself, that **a universal computer could be built entirely out of linear threshold modules**. This does not in any sense reduce the theory of computation and programming to the theory of perceptrons. Some philosophers might like to express the relevant general principle by saying that the computer is so much more than the sum of its parts that the computer scientist can afford to ignore the nature of the components and consider only their connectivity. More concretely, we would call the student's attention to the following considerations:
>
>
> 1. Multilayer machines with loops clearly open all the questions of the general theory of automata.
> 2. A system with no loops but with an order restriction at each layer can compute only predicates of finite order.
> 3. On the other hand, if there is no restriction except for the absence of loops, the monster of vacuous generality once more raises its head.
>
>
> The problem of extension is not merely technical. It is also strategic. The perceptron has shown itself worthy of study despite (and even because of!) its severe limitations. It has many features to attract attention: its linearity; its intriguing learning theorem; its clear paradigmatic simplicity as a kind of parallel computation. **There is no reason to suppose that any of these virtues carry over to the many-layered version. Nevertheless, we consider it to be an important research problem to elucidate (or reject) our intuitive judgment that the extension is sterile. Perhaps some powerful convergence theorem will be discovered, or some profound reason for the failure to produce an interesting
> "learning theorem" for the multilayered machine will be found.**
>
>
>
So, let me address your first question directly.
>
> Backprop wasn't known at the time, but did they know about manually building multilayer perceptrons?
>
>
>
Yes. They say that Gamba machines could be described as a 2-layer perceptron. For reproducibility, here's the definition of the Gamba machine (section [**13.1 Gamba Perceptrons and other Multilayer Linear Machines**](https://russell-davidson.arts.mcgill.ca/e706/Perceptrons.pdf#page=243))
\begin{align}
\psi
&=
\left[\sum\_{i} \alpha\_{i}\left[\sum\_{j} \beta\_{i j} x\_{j}>\theta\_{i}\right]>\theta\right] \\
&=
\left[\sum\_{i} \alpha\_{i} \varphi\_{i} >\theta \right]
\end{align}
See also sections [**12.4.4. Layer-Machines**](https://russell-davidson.arts.mcgill.ca/e706/Perceptrons.pdf#page=220).
So, let's now address your second question.
>
> Did Minsky & Papert know that multilayer perceptrons could solve XOR at the time they wrote the book, albeit not knowing how to train it?
>
>
>
So, according to the first excerpt, their intuition was the *virtues* of perceptrons would not carry over to MLPs, but they acknowledge that more research was needed to reject or support this hypothesis.
However, in section [**13.0 Introduction**](https://russell-davidson.arts.mcgill.ca/e706/Perceptrons.pdf#page=242) of the same book, they write
>
> We believe (but cannot prove) that the **deeper limitations extend also to the variant of the perceptron proposed by <NAME>**.
>
>
>
So, they believed that the Gamba machine would not have been able to solve the XOR problem.
However, in the first excerpt, they say that a Turing machine could be built entirely out of linear threshold modules, which may seem inconsistent with the second excerpt. It isn't really: they are not claiming that the Gamba architecture itself is universal, only that *some* arrangement of linear threshold modules is, while the specific Gamba machine would, in their view, share the limitations of the perceptron.
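As an aside on the question's premise, a two-layer network of exactly the Gamba form above can compute XOR. Here is a minimal sketch (the weights $\alpha, \beta$ and thresholds $\theta$ are hand-picked for illustration; they are not taken from the book):

```python
def step(x, theta):
    """Linear threshold unit: [x > theta]."""
    return 1 if x > theta else 0

def gamba_xor(x1, x2):
    # First layer: phi_i = [sum_j beta_ij * x_j > theta_i]
    phi1 = step(x1 + x2, 0.5)   # fires if at least one input is 1
    phi2 = step(x1 + x2, 1.5)   # fires only if both inputs are 1
    # Second layer: psi = [sum_i alpha_i * phi_i > theta], with alpha = (1, -2)
    return step(phi1 - 2 * phi2, 0.5)

print([gamba_xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# → [0, 1, 1, 0]
```

The second hidden unit vetoes the output when both inputs are on, which is exactly what a single linear threshold unit cannot express on its own.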
Upvotes: 3 <issue_comment>username_3: Whether Minsky knew or not, it was definitely known to Rosenblatt, as he published those results in his really pioneering report - [Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms](https://apps.dtic.mil/sti/pdfs/AD0256582.pdf), published in 1961.
A large majority of academic and industry experts are simply unaware of the "depth" of Rosenblatt's publication on perceptrons, where he not only proved that 3-layer perceptrons (which he called elementary) are universal (check [theorem 1 in section 5.2](https://apps.dtic.mil/sti/pdfs/AD0256582.pdf#page=116)), but where he also provided results on convergence (check [theorem 4 in section 5.5](https://apps.dtic.mil/sti/pdfs/AD0256582.pdf#page=127)) and a statistical mechanics analysis of their generalization capabilities (check [chapter 6](https://apps.dtic.mil/sti/pdfs/AD0256582.pdf#page=144) for the foundational theory, and [figure 13 and 14](https://apps.dtic.mil/sti/pdfs/AD0256582.pdf#page=178) for an application of it, following the analysis in [section 7.1.2](https://apps.dtic.mil/sti/pdfs/AD0256582.pdf#page=170)).
It is simply unfortunate that Rosenblatt accidentally died soon after Minsky and Papert's not-so-pioneering 1969 book was published. I believe its misleading influence set AI research back by several decades.
If only Rosenblatt lived longer and made his presence stronger in academia, we would not be handing out Turing awards in AI to those who on critical scrutiny are objectively undeserving of it.
Upvotes: 3 <issue_comment>username_4: Cold war - maybe he knew?
<https://gwern.net/doc/ai/1966-ivakhnenko.pdf>
Cybernetics and forecasting techniques
by Ivakhnenko, <NAME>
<https://archive.org/details/cyberneticsforec0000ivak>
Upvotes: 0
|
2016/08/04
| 1,310
| 4,947
|
<issue_start>username_0: According to Wikipedia [Artificial general intelligence(AGI)](https://en.wikipedia.org/wiki/Artificial_general_intelligence)
>
> Artificial general intelligence (AGI) is the intelligence of a
> (hypothetical) machine that could successfully perform any
> intellectual task that a human being can.
>
>
>
According to the image below (a screenshot of a picture from [When will computer hardware match the human brain?](http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=251030664B23757C5D420F9ED97AB725?doi=10.1.1.136.7883&rep=rep1&type=pdf) (1998) by <NAME>; Kurzweil also uses this diagram in his book [The Singularity Is Near: When Humans Transcend Biology](https://en.wikipedia.org/wiki/The_Singularity_Is_Near)), today's artificial intelligence is on par with that of lizards.
[](https://i.stack.imgur.com/gddKB.jpg)
Let's assume that within 10-20 years, we, humans, are successful in creating an AGI, that is, an AI with human-level intelligence and emotions.
At that point, could we destroy an AGI without its consent? Would this be considered murder?<issue_comment>username_1: Firstly, an AGI could conceivably exhibit all of the observable properties of intelligence without being conscious. Although that may seem counter-intuitive, at present we have no physical theory that allows us to detect consciousness (philosophically speaking, a ['Zombie'](http://plato.stanford.edu/entries/zombies/) is indistinguishable from a non-Zombie - see the writing of [<NAME>](http://rads.stackoverflow.com/amzn/click/0316180661) and [<NAME>](http://consc.net/books/tcm/) for more on this). Destroying a non-conscious entity has the same moral cost as destroying a chair.
Also, note that 'destroy' doesn't necessarily mean the same for entities with *persistent substrate*, i.e. their 'brain state' can be reversibly serialized to some other storage medium and/or multiple copies of them can co-exist. So if by 'destroy' we simply mean 'switch off', then an AGI might conceivably be reassured of a subsequent re-awakening. Douglas Hofstadter gives an interesting description of such an 'episodic consciousness' in ["A Conversation with Einstein's Brain"](http://themindi.blogspot.co.uk/2007/02/chapter-26-conversation-with-einsteins.html)
If by 'destroy', we mean 'irrevocably erase with no chance of re-awakening', then (unless we have a physical test which proves it is *not* conscious) destroying an entity with a seemingly human-level awareness is clearly morally tantamount to murder. To believe otherwise would be *substrate-ist* - a moral stance which may one day be seen as antiquated as racism.
Upvotes: 3 <issue_comment>username_2: Even if machines with true Artificial General Intelligence were created, their apparent intelligence would still be by definition *artificial*. The word *simulation* is a [synonym](http://www.thesaurus.com/browse/artificial) and could be used to redefine AGI as Simulated General Intelligence.
Keeping that in mind, **a machine that appears to be expressing emotions would only be the result of a series of complicated algorithms** allowing a computer to assess the situation and respond in an intellectually appropriate manner based on external stimuli and conditions. Every action this machine could possibly take would be drawn from the set of actions the machine is capable of, no matter how large that set grows. The machine is still a series of sensors, programmed instructions, and cycles of execution.
Destroying such a machine could potentially be the destruction of property if it wasn't owned by the person who destroyed it, but would it be murder? No.
A broken machine can potentially be rebuilt and reactivated if it is broken. It never really died; it was destroyed. A living being that is killed is really *dead* and cannot be rebuilt and made alive once again. These key differences lead me to agree with the previous answer and conclude that **no, destroying an artificial intelligence without its consent would not be murder**.
Upvotes: 1 <issue_comment>username_3: There is another problem besides consciousness, which is personal identity.
Consider an AI whose task is learning the discourse of a particular person during years, so as to simulate their conversation as closely as possible. After the death of the seed human, their artificial continuation can keep chatting with other humans (especially family members) using the ideas taught by the now dead person.
There is a strong link between such an intelligence and a human being, so it poses a problem harder than deleting a general chatbot with no individual attachment.
Upvotes: 0 <issue_comment>username_4: Yes, we can, just as we sentence living humans to death. Whether or not it is moral depends on the particular point in history we find ourselves at.
Upvotes: 0
|
2016/08/04
| 1,873
| 6,447
|
<issue_start>username_0: DeepMind has published a lot of works on deep learning in the last years, most of them are state-of-the-art on their respective tasks. But how much of this work has actually been reproduced by the AI community? For instance, the Neural Turing Machine paper seems to be very hard to reproduce, according to other researchers.<issue_comment>username_1: *I tend to think this question is border-line and may get close. A few comments for now, though.*
---
There are (at least) two issues with reproducing the work of a company like DeepMind:
* Technicalities missing from publications.
* Access to the same level of data.
Technicalities should be workable. Some people have reproduced some of the [Atari](https://github.com/kristjankorjus/Replicating-DeepMind) gaming stunts. AlphaGo is seemingly more complex and will require more work, yet that should be feasible at some point in the future (individuals may lack computing resources today).
Data can be trickier. Several companies open their data sets, but data is also the lifeblood of the competition...
Upvotes: 2 <issue_comment>username_2: On the suggestion of the O.P. rcpinto I converted a comment about seeing "around a half-dozen papers that follow up on Graves et al.'s work which have produced results of the caliber" and will provide a few links. Keep in mind that this only answers the part of the question pertaining to NTMs, not Google DeepMind itself, plus I'm still learning the ropes in machine learning, so some of the material in these papers is over my head; I did manage to grasp much of the material in [Graves et al.'s original paper](https://arxiv.org/pdf/1410.5401.pdf){1] though and am close to having homegrown NTM code to test. I also at least skimmed the following papers over the last few months; they do not replicate the NTM study in a strict scientific manner, but many of their experimental results do tend to support the original at least tangentially:
• In [this paper](https://arxiv.org/pdf/1607.00036.pdf) on a variant version of NTM addressing, Gulcehere, et al. do not try to precisely replicate Graves et al.'s tests, but like the DeepMind team, it does demonstrate markedly better results for the original NTM and several variants over an ordinary recurrent LSTM. They use 10,000 training samples of a Facebook Q&A dataset, rather than the N-grams Graves et al. operated on in their paper, so it's not replication in the strictest sense. They did however manage to get a version of the original NTM and several variants up and running, plus recorded the same magnitude of performance improvement.[2](https://arxiv.org/pdf/1607.00036.pdf)
• Unlike the original NTM, [this study](https://arxiv.org/abs/1505.00521) tested a version of reinforcement learning which was not differentiable; that may be why they were unable to solve several of the programming-like tasks, like Repeat-Copy, unless the controller wasn't confined to moving forwards. Their results were nevertheless good enough to lend support to the idea of NTMs. A more recent revision of their paper is apparently available, which I have yet to read, so perhaps some of their variant's problems have been solved.[3](https://arxiv.org/abs/1505.00521)
• Instead of testing the original flavor of NTM against ordinary neural nets like LSTMs, [this paper](https://arxiv.org/pdf/1510.03931v3.pdf) pitted it against several more advanced NTM memory structures. They got good results on the same type of programming-like tasks that Graves et al. tested, but I don't think they were using the same dataset (it's hard to tell from the way their study is written just what datasets they were operating on).[4](https://arxiv.org/pdf/1510.03931v3.pdf)
• On p. 8 of [this study](https://arxiv.org/pdf/1605.06065.pdf), an NTM clearly outperforms several LSTM, feed-forward and nearest-neighbor based schemes on an Omniglot character recognition dataset. An alternative approach to external memory cooked up by the authors clearly beats it, but it still obviously performs well. The authors seem to belong to a rival team at Google, so that might be an issue when assessing replicability.[5](https://arxiv.org/pdf/1605.06065.pdf)
• On p. 2 [these authors](http://www.thespermwhale.com/jaseweston/ram/papers/paper_6.pdf) reported getting better generalization on "very large sequences" in a test of copy tasks, using a much smaller NTM network they evolved with the genetic NEAT algorithm, which dynamically grows topologies.[6](http://www.thespermwhale.com/jaseweston/ram/papers/paper_6.pdf)
NTMs are fairly new so there hasn't been much time to stringently replicate the original research yet, I suppose. The handful of papers I skimmed over the summer, however, seem to lend support to their experimental results; I have yet to see any that report anything but excellent performance. Of course I have an availability bias, since I only read the pdfs I could easily find in a careless Internet search. From that small sample it seems that most of the follow-up research has been focused on extending the concept, not replication, which would explain the lack of replicability data. I hope that helps.
[1](https://arxiv.org/pdf/1410.5401.pdf) <NAME>; <NAME> and <NAME>, 2014, "Neural Turing Machines," published Dec. 10, 2014.
[2](https://arxiv.org/pdf/1607.00036.pdf) <NAME>; <NAME>; <NAME> and <NAME>, 2016, "Dynamic Neural Turing machine with Soft and Hard Addressing Schemes," published June 30, 2016.
[3](https://arxiv.org/abs/1505.00521) Zaremba, Wojciech and <NAME>, 2015, "Reinforcement Learning Neural Turing Machines," published May 4, 2015.
[4](https://arxiv.org/pdf/1510.03931v3.pdf) Zhang, Wei; <NAME> and <NAME>, 2015, "Structured Memory for Neural Turing Machines," published Oct. 25, 2015.
[5](https://arxiv.org/pdf/1605.06065.pdf) Santoro, Adam; <NAME>; <NAME>; <NAME> and <NAME>, 2016, "One-Shot Learning with Memory-Augmented Neural Networks," published May 19, 2016.
[6](http://www.thespermwhale.com/jaseweston/ram/papers/paper_6.pdf) <NAME>, Rasmus; Jacobsen, <NAME> and <NAME>, date unknown, "Evolving Neural Turing Machines." No publisher listed
All except (perhaps) <NAME> et al. were published at the Cornell University Library arXiv.org Repository: Ithaca, New York.
Upvotes: 4 [selected_answer]
|
2016/08/04
| 2,208
| 9,484
|
<issue_start>username_0: <NAME> has been researching something he calls "capsules theory" in neural networks. What is it? How do capsule neural networks work?<issue_comment>username_1: It appears to not be published yet; the best available online are [these slides](http://cseweb.ucsd.edu/~gary/cs200/s12/Hinton.pdf) for [this talk](https://www.youtube.com/watch?v=TFIMqt0yT2I). (Several people reference an earlier talk with [this link](http://techtv.mit.edu/collections/bcs/videos/30698-what-s-wrong-with-convolutional-nets), but sadly it's broken at time of writing this answer.)
My impression is that it's an attempt to formalize and abstract the creation of subnetworks inside a neural network. That is, if you look at a standard neural network, layers are fully connected (that is, every neuron in layer 1 has access to every neuron in layer 0, and is itself accessed by every neuron in layer 2). But this isn't obviously useful; one might instead have, say, *n* parallel stacks of layers (the 'capsules') that each specializes on some separate task (which may itself require more than one layer to complete successfully).
If I'm imagining its results correctly, this more sophisticated graph topology seems like something that could easily increase both the effectiveness and the interpretability of the resulting network.
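That topology can be pictured with a toy sketch (this only illustrates the parallel-stacks idea described above, not Hinton's actual design; the sizes and random weights are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def capsule_stack(x, W1, W2):
    """One small two-layer stack (a 'capsule'); no cross-connections to siblings."""
    return relu(W2 @ relu(W1 @ x))

x = rng.normal(size=8)  # shared input
# Four independent stacks: each maps the 8-d input to a 2-d capsule output.
capsules = [(rng.normal(size=(5, 8)), rng.normal(size=(2, 5))) for _ in range(4)]

# Each stack sees the whole input but computes independently;
# a later layer would combine the four 2-d capsule outputs.
outputs = np.concatenate([capsule_stack(x, W1, W2) for W1, W2 in capsules])
print(outputs.shape)  # (8,)
```

The point is only that the connection graph is block-structured rather than fully connected, so each sub-network can specialize.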
Upvotes: 6 [selected_answer]<issue_comment>username_2: To supplement the previous answer: there is a paper on this that is mostly about learning low-level capsules from raw data, but explains Hinton's conception of a capsule in its introductory section: <http://www.cs.toronto.edu/~fritz/absps/transauto6.pdf>
It's also worth noting that the link to the MIT talk in the answer above seems to be working again.
According to Hinton, a "capsule" is a subset of neurons within a layer that outputs both an "instantiation parameter" indicating whether an entity is present within a limited domain and a vector of "pose parameters" specifying the pose of the entity relative to a canonical version.
The parameters output by low-level capsules are converted into predictions for the pose of the entities represented by higher-level capsules, which are activated if the predictions agree and output their own parameters (the higher-level pose parameters being averages of the predictions received).
Hinton speculates that this high-dimensional coincidence detection is what mini-column organization in the brain is for. His main goal seems to be replacing the max pooling used in convolutional networks, in which deeper layers lose information about pose.
Upvotes: 4 <issue_comment>username_3: In the abstract of the paper [Dynamic Routing between Capsules](https://research.google.com/pubs/pub46351.html) (November 7, 2017) that formally introduced capsule neural networks, the authors write
>
> A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule
> becomes active. We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher-level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.
>
>
>
Upvotes: 2 <issue_comment>username_4: Capsule Networks have two key ideas:
* the first idea is **how to represent multi-dimensional entities**. Capsule Networks does this by grouping these properties of a feature together ("capsules").
* the second is that you **activate higher-level features by agreement between lower-level features** ("routing by agreement").
First, Capsule Networks partition the image into regions.
For each region, the network assumes that there is at most one instance of any single feature, represented by a Capsule.
A Capsule is able to represent an instance of a feature (but only one) and is able to represent all the different properties of that feature, *e.g.*, its *(x,y)* coordinates, its colour, its movement *etc.*
The difference from Convolutional Neural Networks (CNNs) is that the Capsules bundle the neurons into groups with multi-dimensional properties, whereas in CNNs the neurons represent single, unrelated scalar properties.
This structured Capsule representation allows you to do "routing by agreement".
To understand this, let's look at the example of a face detector. Here, you could have capsules representing "mouth", "eye", "nose" *etc.*
Since the Capsules are multi-dimensional you also train them to predict the parameters for the entire face.
Now, if the "mouth", "nose" and "eye" Capsules agree about the parameters of the face we have a very strong signal that this is a good prediction since accidental agreement in a high-dimensional space like a neural network is very unlikely.
You use this to stack the Capsules into deep networks, where the activation of higher-level Capsules is conditioned on agreement between the lower-level Capsules (*e.g.* the Face capsule being activated by agreement on the face position between the Nose, Mouth and Eye Capsules in the earlier, lower-level layer).
In contrast to regular feed-forward nets this requires a bit of iteration in the forward pass through the network, but you can still use back-propagation to train it.
It is an interesting way to add a bit of structure to the data. So far, it looks like they are able to provide better generalization from limited training data.
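The "routing by agreement" described above can be sketched in a few lines of NumPy. This is a toy illustration, not the paper's full algorithm: the learned transformation matrices that produce the predictions `u_hat` are omitted, and the shapes and iteration count are arbitrary.

```python
import numpy as np

def squash(s, eps=1e-8):
    """Shrink vectors so their length lies in (0, 1), preserving direction."""
    norm2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def route_by_agreement(u_hat, iterations=3):
    """u_hat: predictions from lower capsules for each upper capsule,
    shape (num_lower, num_upper, dim). Returns upper-capsule output vectors."""
    b = np.zeros(u_hat.shape[:2])  # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over upper capsules
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum per upper capsule
        v = squash(s)                                         # (num_upper, dim)
        b += np.einsum('ljd,jd->lj', u_hat, v)                # reward agreeing predictions
    return v

rng = np.random.default_rng(0)
v = route_by_agreement(rng.normal(size=(8, 3, 4)))
print(v.shape)  # (3, 4)
```

Lower capsules whose predictions line up with an upper capsule's output get their routing weights increased, so agreement is reinforced over the iterations; the squash keeps each output vector's length below 1, so it can be read as an existence probability.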
Upvotes: 2 <issue_comment>username_5: One of the major advantages of convolutional neural networks is their **invariance** to translation. However, this invariance comes with a price, that is, it does not consider how different features are related to each other. For example, if we have a picture of a face, a CNN will have difficulties distinguishing the relationship between the "mouth" feature and "nose" feature. The max-pooling layers are the main reason for this effect, because, when we use max-pooling layers, we lose the precise locations of the mouth and nose, and we cannot say how they are related to each other.
Capsules try to keep the advantage of CNN and fix this drawback in two ways
1. **Invariance**: quoting from this [paper](http://www.cs.toronto.edu/~fritz/absps/transauto6.pdf)
>
> When the capsule is working properly, the probability of the visual entity being present is locally invariant – it does not change as the entity moves over the manifold of possible appearances within the limited domain covered by the capsule.
>
>
>
In other words, capsule takes into account the existence of the specific feature that we are looking for, like the mouth or nose. This property makes sure that capsules are translation invariant the same that CNNs are.
2. **Equivariance**: instead of making the feature translation **invariance**, the capsule will make it translation-equivariant or viewpoint-equivariant. In other words, as the feature moves and changes its position in the image, the feature vector representation will also change in the same way which makes it equivariant. This property of capsules tries to solve the drawback of max-pooling layers that I mentioned at the beginning.
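The information loss caused by max-pooling can be seen in a toy example (global max-pooling over the whole feature map, purely for illustration):

```python
import numpy as np

# Two feature maps: the same "feature" detected at opposite corners.
a = np.zeros((4, 4)); a[0, 0] = 1.0
b = np.zeros((4, 4)); b[3, 3] = 1.0

# Max-pooling reports only "the feature is present", not where:
print(a.max(), b.max())  # 1.0 1.0 — identical outputs, position discarded
```

A capsule would instead emit a vector whose components shift with the feature's position, keeping the pose information that the pooled scalar throws away.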
Upvotes: 3 <issue_comment>username_6: Capsule networks try to mimic Hinton's observations of the human brain on the machine. The motivation stems from the fact that neural networks needed better modeling of the spatial relationships of the parts. Instead of modeling the co-existence, disregarding the relative positioning, capsule-nets try to model the global relative transformations of different sub-parts along a hierarchy. This is the eqivariance vs. invariance trade-off, as explained above by others.
These networks therefore include a degree of viewpoint/orientation awareness and respond differently to different orientations. This property makes them more discriminative, while potentially introducing the capability to perform pose estimation, as the latent-space features contain interpretable, pose-specific details.
All this is accomplished by nesting a layer called a capsule within the layer, instead of concatenating yet another layer in the network. These capsules can provide a vector output instead of a scalar one per node.
The crucial contribution of the paper is the dynamic routing which replaces the standard max-pooling by a smart strategy. This algorithm applies a [mean-shift clustering](https://en.wikipedia.org/wiki/Mean_shift) on the capsule outputs to ensure that the output gets sent only to the appropriate parent in the layer above.
The authors also couple these contributions with a margin loss and a reconstruction loss, which simultaneously help in learning the task better and show state-of-the-art results on MNIST.
The paper is named [Dynamic Routing Between Capsules](https://arxiv.org/pdf/1710.09829.pdf) and is available on arXiv: <https://arxiv.org/pdf/1710.09829.pdf> .
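The routing procedure itself is short enough to sketch. Below is a minimal NumPy version of routing-by-agreement, assuming the prediction vectors `u_hat` have already been computed by the layer below (the shapes and iteration count here are illustrative, not the paper's exact configuration):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Non-linearity from the paper: short vectors shrink toward zero,
    # long vectors approach (but never reach) unit length.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iters=3):
    """u_hat: prediction vectors, shape (n_in, n_out, dim_out)."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                  # routing logits
    for _ in range(n_iters):
        # softmax over output capsules for each input capsule
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = np.einsum('ij,ijk->jk', c, u_hat)    # weighted sum per parent
        v = squash(s)                            # parent capsule outputs
        b += np.einsum('ijk,jk->ij', u_hat, v)   # "agreement" update
    return v

# toy example: 6 input capsules routing to 2 parents of dimension 4
rng = np.random.default_rng(0)
v = dynamic_routing(rng.normal(size=(6, 2, 4)))
print(v.shape)  # (2, 4)
```

The agreement step is the key idea: prediction vectors that point in the same direction as a parent's output reinforce their routing weight, which is what gives the clustering-like behaviour described above.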
Upvotes: 3
|
2016/08/04
| 751
| 3,004
|
<issue_start>username_0: In [this article](https://web.archive.org/web/20190519181402/http://fabelier.org/novelty-search-and-open-ended-evolution-by-ken-stanley/), the author claims that guiding evolution by novelty alone (without explicit goals) can solve problems even better than using explicit goals. In other words, using a novelty measure as a fitness function for a genetic algorithm works better than a goal-directed fitness function. How is that possible?<issue_comment>username_1: As explained in an answer to [this AI SE question](https://ai.stackexchange.com/questions/240/what-exactly-are-genetic-algorithms-and-what-sort-of-problems-are-they-good-for/244#244), GAs are 'satisficers' rather than 'optimizers' and tend not to explore 'outlying' regions of the search space. Rather, the population tends to cluster in regions that are 'fairly good' according to the fitness function.
In contrast, I believe the thinking is that novelty affords a kind of dynamic fitness, tending to push the population away from previously discovered areas.
Upvotes: 3 <issue_comment>username_2: Novelty search selects for "novel behavior", by some domain-dependent definition of novelty. For example, novelty in a Maze-solving domain might be "difference of route explored". Eventually, networks that take every possible route through the maze will be found, and you could then select the fastest. This would work far better than a naive "objective", like distance to the goal, which could easily result in a local optima which never solves the maze.
From [Abandoning Objectives: Evolution through the
Search for Novelty Alone](http://eplex.cs.ucf.edu/papers/lehman_ecj11.pdf) (emphasis mine):
>
> In novelty search, instead of measuring overall progress with a traditional objective function, evolution employs a measure of behavioral novelty called a novelty metric. In effect, a search guided by such a metric performs explicitly what natural evolution does passively, i.e. gradually accumulating novel forms that ascend the complexity ladder.
>
> For example, in a biped locomotion domain, initial attempts might simply fall
> down. The novelty metric would reward simply falling down in a different way,
> regardless of whether it is closer to the objective behavior or not. In contrast, an objective function may explicitly reward falling the farthest, which likely does not lead to the ultimate objective of walking and thus exemplifies a deceptive local optimum. In contrast, in the search for novelty, a set of instances are maintained that represent
> the most novel discoveries. Further search then jumps off from these representative behaviors. **After a few ways to fall are discovered, the only way to be rewarded is to find a behavior that does not fall right away**. In this way, behavioral complexity rises from the bottom up. **Eventually, to do something new, the biped would have to successfully walk for some distance even though it is not an objective**.
>
>
>
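In practice, Lehman and Stanley measure novelty as the average distance to the k nearest neighbours in behaviour space (measured against the population plus an archive of past behaviours). A minimal sketch of that metric, with invented maze-endpoint behaviours:

```python
import numpy as np

def novelty(behavior, others, k=3):
    """Mean distance to the k nearest neighbours in behaviour space."""
    dists = np.sort(np.linalg.norm(others - behavior, axis=1))
    return dists[:k].mean()

# behaviours, e.g. final (x, y) positions reached in a maze
population = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])

scores = [novelty(b, np.delete(population, i, axis=0))
          for i, b in enumerate(population)]
# the behavioural outlier at (5, 5) receives the highest novelty score
print(int(np.argmax(scores)))  # 3
```

Selecting on this score rewards individuals for ending up somewhere no one else has, regardless of distance to any goal, which is exactly the dynamic the quote describes.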
Upvotes: 3
|
2016/08/04
| 984
| 4,336
|
<issue_start>username_0: Quote from this [Eric's meta post](https://ai.meta.stackexchange.com/a/46/8) about modelling and implementation:
>
> They are not exactly the same, although strongly related. This was a very difficult lesson to learn among mathematicians and early programmers, notably in the 70s (mathematical proofs can demand a lot of non-trivial programming work to make them "computable", as in runnable on a computer).
>
>
>
If they're not the same, what is the difference?
How we can say when we're talking about AI implementation, and when about modelling? It's suggested above it's not an easy task. So where we can draw the line when we talk about it?
I'm asking in general, not specifically for this site, that's why I haven't posted this question in meta<issue_comment>username_1: One good way of differentiating modelling and implementation is to consider that models occupy a much higher level of abstraction.
To continue with the mathematical example: even though experimental mathematics might be dependent on computation, the program can be considered as one possible realization of the necessary conditions of a more abstract existence proof.
Over the last 25 years, software engineering methodologies have become quite good at separating models and implementations, e.g. by using interfaces/typeclasses/abstract base classes to define constraints on behavior that is concretely realized by the implementation of derived classes.
AI has always been a battle between the ['neats and the scruffies'](https://en.wikipedia.org/wiki/Neats_vs._scruffies). Neats tend to prefer working 'top down' from clean abstractions, 'scruffies' like to work 'bottom up', and 'bang the bits' of the implementation together, to see what happens.
Of course, in practice, interplay between both styles is necessary, but AI *as a science* progresses when we abstract mechanisms away from specific implementations into their most general (and hence re-useable) form.
Upvotes: 3 [selected_answer]<issue_comment>username_2: In AI (but in general too, I believe), a simplification is that modeling is more akin to Mathematics (and related hard sciences involved, like Physics and... Computer Science), and implementation to Software Engineering.
Let's take a concrete example, really outside of AI: Find the minimum value of a given polynomial, if it exists.
The mathematician will differentiate the polynomial, find the zeros, and check convexity to find a minimum (if there is any zero). This procedure is very standard---some will say straightforward. It relies on a body of knowledge and an abstraction level that is appropriate for manual proof.
The software engineer's approach is actually much longer to explain, and I am going to skip it. The point is that the body of knowledge is related but different: we now have to find a step-by-step procedure for the computer to achieve the result. The mathematician's procedure could be implemented almost verbatim in MATLAB, but we *assume* MATLAB. And to build MATLAB, we are back to the problem of making a procedure the computer can execute. We could, for example, base a procedure on Newton's method (a "simple" iterative approach that closes in on roots step after step), etc.
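As a concrete sketch of "a procedure the computer can execute": Newton's method applied to the derivative finds a stationary point of a polynomial. The polynomial and starting point below are arbitrary illustrations:

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial given coefficients [a0, a1, a2, ...] (Horner's rule)."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

def derivative(coeffs):
    """Coefficients of the derivative polynomial."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def newton_root(coeffs, x0, tol=1e-10, max_iter=100):
    """Find a root of the polynomial by Newton iteration from x0."""
    d = derivative(coeffs)
    x = x0
    for _ in range(max_iter):
        fx = poly_eval(coeffs, x)
        if abs(fx) < tol:
            break
        x -= fx / poly_eval(d, x)
    return x

# minimise p(x) = x^2 - 4x + 7 by finding the root of p'(x) = 2x - 4
p = [7.0, -4.0, 1.0]
x_min = newton_root(derivative(p), x0=0.0)
print(x_min, poly_eval(p, x_min))  # 2.0 3.0
```

Note how much machinery (representation of coefficients, an evaluation routine, a termination criterion) the computer needs for what the mathematician dispatches in one line; that gap is exactly the modelling-vs-implementation distinction.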
Simple mathematical operations can be quite complex to implement on a computer. Perhaps the most famous is [random-number generation](https://en.wikipedia.org/wiki/Random_number_generation). Mathematically, the concept is pure and clear. Generating an actual random number is more elusive than it looks, to the point that it calls for new *models* and new *implementations*...
A concrete example from history: neural networks. In the 80s and 90s, NNs were weighted graphs that could be executed on computers using graph libraries or similar foundations. Choosing the weights was challenging. One day the back-propagation learning model was introduced to automate the choice of weights. The model relied on a procedure dedicated to NNs, using terminology like partial derivatives, gradient descent, the chain rule, etc. And later, clever engineers created libraries to automate the back-propagation procedure. The libraries can be somewhat far from the original model, as engineers learn how to make it computable, even faster (i.e. optimizations, approximations/truncations).
Upvotes: 2
|
2016/08/04
| 624
| 2,533
|
<issue_start>username_0: I read some information1 about attempts to build neural networks in the PHP programming language. Personally I think PHP is not the right language to do so at all probably because it's a high-level language, I assume low level language are way more suitable for AI in terms of performance and scalability.
Is there a good/logical reason why you should or shouldn't use PHP as a language to write AI in?
*1* <http://www.developer.com/lang/php/creating-neural-networks-in-php.html> and <https://stackoverflow.com/questions/2303357/are-there-any-artificial-intelligence-projects-in-php-out-there><issue_comment>username_1: *Question on-topicness questionable, but...*
---
The most logical reason why PHP is unsuited for neural networks is that PHP is, well, intended to be used for server side webpages. It can connect to various external resources, such as databases, via native language features. It is very much a glue language, and not a processing language. PHP is also mostly stateless, only allowing you to store state in either clients, file storage or databases.
As such, it's **not** suitable for this sort of thing - not because PHP is a high level language, but rather because it's so request based and focused towards creating pages to serve to clients.
That won't stop people from trying, though - there are various esoteric programming languages out there in which regular programming would be an insane task or not possible at all - but from a ease of development perspective, making a neural network in PHP makes no sense.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Actually, yes. Remember, that due to the history of PHP development, some very good things has formed what we have now:
* From a simple/laggy/limited interpreter in PHP 3, we have now three mainstream lines coming one-by-one like v5/v6/v7 with *full bytecode* supported.
* In PHP v7 you don't even need a bytecode cache due to HHVM, old Zend VM is a hell-good-debugged and using a cacher like XCache you can achieve a true native execution speed **and** payload
* The PHP language interface allows **any** external C/C++ library *just to be added* as a module via very simple wrapper that can be written by the person that just red Kerrigan&Richie and Straustrup base books on C and C++. This is amazing feature, exclusive to PHP as far as I know
* In PHP v7 you're welcome to use *native* multi-threading and even CUDA-based things, if you wish to do it. I did it, so I can confirm that it works
Upvotes: 2
|
2016/08/04
| 1,411
| 4,891
|
<issue_start>username_0: How much processing power is needed to emulate the human brain? More specifically, the neural simulation, such as communication between the neurons and processing certain data in real-time.
I understand that this may be a bit of speculation and it's not possible to be accurate, but I'm sure there is some data available or research studies that attempted to estimate it based on our current understanding of the human brain.<issue_comment>username_1: H+ magazine wrote an estimate [in 2009](http://hplusmagazine.com/2009/04/07/brain-chip/) that seems broadly comparable to other things I've seen; they think the human brain is approximately 37 petaflops. A supercomputer larger than that 37 petaflop estimate [exists today](https://en.wikipedia.org/wiki/Sunway_TaihuLight).
But emulation is hard. See [this SO question about hardware emulation](https://stackoverflow.com/questions/471973/what-makes-building-game-console-emulators-so-hard) or [this article](http://www.tested.com/tech/gaming/2712-why-perfect-hardware-snes-emulation-requires-a-3ghz-cpu/) on emulating the SNES, in which they require **140 times** the processing power of the SNES chip to get it right. [This 2013 article](http://gizmodo.com/an-83-000-processor-supercomputer-only-matched-one-perc-1045026757) claims that a second of human brain activity took 40 minutes to emulate on a 10 petaflop computer (a *2400-times* slowdown, not the 4-times slowdown one might naively expect).
And all this assumes that neurons are relatively simple objects! It could be that the amount of math we have to do to model a single neuron is actually much more than the flops estimate above. Or it could be the case that dramatic simplifications can be made, and if we knew what the brain was actually trying to accomplish we could do it much more cleanly and simply. (One advantage that ANNs have, for example, is that they are doing computations with much more precision than we expect biological neurons to have. But this means emulation is *harder*, not easier, while replacement *is* easier.)
Upvotes: 5 [selected_answer]<issue_comment>username_2: Not just how much, but what kind of processing power : there're specially-crafted [dedicated chips](http://www.research.ibm.com/cognitive-computing/neurosynaptic-chips.shtml#fbid=V01grppeOIs), and it has a [practical applications](http://www.dailymail.co.uk/sciencetech/article-3516047/IBM-reveals-brain-supercomputer-neurosynaptic-chip-replicate-16-million-neurons-work-using-hearing-aid-battery.html), so it's not a lab-only project
Upvotes: 2 <issue_comment>username_3: The human brain contains about 100 billion neurons ($10^{11}$) and about a hundred trillion synapses ($10^{14}$). Each neuron can fire about 100 times a second. If we model the brain as a simple neural network, then it would be equivalent to a machine that requires 1016 calculations per second and 1013 bits of memory.
From [Wikipedia](https://en.wikipedia.org/wiki/The_Singularity_Is_Near#The_brain)
>
> Kurzweil introduces the idea of "uploading" a specific brain with every mental process intact, to be instantiated on a "suitably powerful computational substrate". He writes that general modeling requires $10^{16}$ calculations per second and $10^{13}$ bits of memory, but then explains uploading requires additional detail, perhaps as many as $10^{19}$ cps and $10^{18}$ bits. Kurzweil says the technology to do this will be available by 2040.
>
>
>
According to this site [here](http://www.extremetech.com/extreme/163051-simulating-1-second-of-human-brain-activity-takes-82944-processors):
>
> Using the NEST software framework, the team led by <NAME> and <NAME> succeeded in creating an artificial neural network of 1.73 billion nerve cells connected by 10.4 trillion synapses. While impressive, this is only a fraction of the neurons every human brain contains. Scientists believe we all carry 80-100 billion nerve cells
>
>
> It took 40 minutes with the combined muscle of 82,944 processors in K computer to get just 1 second of biological brain processing time. While running, the simulation ate up about 1PB of system memory as each synapse was modeled individually.
>
>
>
Computing power will continue to ramp up while transistors scale down, which could make true neural simulations possible in real-time with supercomputers.
[SpiNNaker](https://en.wikipedia.org/wiki/SpiNNaker) is a manycore computer architecture designed to [simulate the human brain](https://en.wikipedia.org/wiki/Human_Brain_Project). It is planned to use 1 million ARM processors (currently 0.5 million). Each completed rack holds 100,000 cores.
In this [video](https://www.youtube.com/watch?v=2e06C-yUwlc), they showed a completed rack with 100,000 cores emulating 25 million neurons (at $\frac{1}{4}$ the efficiency—it will eventually run 1,000 neurons per core).
Upvotes: 3
|
2016/08/04
| 382
| 1,659
|
<issue_start>username_0: Artificial intelligence is present in many games, both current and older games. How can such intelligence understand what to do? I mean, how can it behave like a human in a game, allowing you to play against itself, or that AI plays against itself?
In games like Age of Empires, for example.<issue_comment>username_1: Most of the existing AI bots which can play games use deep search from possible space and choose the best move. This is done by most of the chess, Go, Tic-Tac-Toe, etc bots.
However, there has been a recent breakthrough where (deep)neural nets with deep search techniques like monte-carlo search, etc; which might be more human-like and demonstrate a much more complex game behaviour than the above bots. One such example is the Google's Alpha-Go bot.
Upvotes: 2 <issue_comment>username_2: There are many different kinds of AI used in games; AI for historical board games (like chess or Go) tends to be much better than AI for computer games (such as Starcraft or Civilization), in large part because there's more academic interest in developing strategies for those games.
The basic structure of a game-playing AI is that it takes in game state inputs and outputs an action; typically, the internals also contain some sort of goal and some sort of future prediction.
But beyond that, there's tremendous amounts of variability. Some AI are little more than scripted reflexes, some are built like control systems, some do actual optimization and forward thinking.
Getting into the details of *how* the many different approaches work is probably beyond the scope of this site, though.
Upvotes: 4 [selected_answer]
|
2016/08/04
| 587
| 2,282
|
<issue_start>username_0: [At a related question in Computer Science SE](https://cs.stackexchange.com/a/60535/54605), a user told:
>
> Neural networks typically require a large training set.
>
>
>
Is there a way to define the boundaries of the "optimal" size of a training set in the general case?
When I was learning about fuzzy logic, I've heard some rules of thumb that involved examining the mathematical composition of the problem and using that to define the number of fuzzy sets.
Is there such a method that can be applicable for an already defined neural network architecture?<issue_comment>username_1: For a finite value to be 'optimal,' typically you need some benefit from more paired up with some cost for more, and eventually the lines cross because the benefit decreases and the cost increases.
Most models will have a reduction in error with more training data, that asymptotically approaches the best the model can do. See this image (from [here](http://blog.revolutionanalytics.com/2015/09/why-big-data-learning-curves.html)) as an example:
[](https://i.stack.imgur.com/Jx1AZ.png)
The costs of training data are also somewhat obvious; data is costly to obtain, to store, and to move. (Assuming model complexity stays constant, the actual cost of storing, moving, and using the model remains the same, since the weights in the model are just being tuned.)
So at some point the slope of the error-reduction curve becomes horizontal enough that more data points are costlier than they're worth, and that's the optimal amount of training data.
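Under the toy assumption that error decays like $c/\sqrt{n}$ toward an irreducible floor while data cost grows linearly, that crossing point can be found numerically (all constants below are invented purely for illustration):

```python
import math

def total_cost(n, err_floor=0.05, c=2.0,
               value_per_error=1000.0, cost_per_sample=0.01):
    """Expected error penalty plus data-collection cost for n samples."""
    error = err_floor + c / math.sqrt(n)
    return value_per_error * error + cost_per_sample * n

# brute-force the cost-minimising training-set size over a grid
best_n = min(range(100, 200001, 100), key=total_cost)
print(best_n)  # ~2200 with these illustrative constants
```

Past `best_n`, each additional sample costs more than the error reduction it buys, which is the "optimal amount of training data" in the sense described above.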
Upvotes: 3 [selected_answer]<issue_comment>username_2: In general, the larger the training set, the better. See [The Unreasonable effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf), though this article is quite dated (written in 2009). <NAME>, a researcher at Netflix has a [Quora answer](https://www.quora.com/In-machine-learning-is-more-data-always-better-than-better-algorithms) where he discusses that more data can sometimes hurt algorithms.
For deep neural networks in particular, it does not seem that we have hit these limits yet.
Upvotes: 1
|
2016/08/04
| 579
| 2,654
|
<issue_start>username_0: Complex AI that learns lexical-semantic content and its meaning (such as collection of words, their structure and dependencies) such as *Watson* takes terabytes of disk space.
Lets assume *DeepQA*-like AI consumed whole Wikipedia of size 10G which took the same amount of structured and unstructured stored content.
Will learning another 10G of different encyclopedia (different topics in the same language) take the same amount of data? Or will the AI reuse the existing structured and take less than half (like 1/10 of it) additional space?<issue_comment>username_1: It seems easy for this to be sublinear growth or superlinear growth, depending on context.
If we imagine the space of the complex AI as split into two parts--the context model and the content model (that is, information and structure that is expected to be shared across entries vs. information and structure that is local to particular entries), then expanding the source material means we don't have much additional work to do on the context model, but whether the additional piece of the content model is larger or smaller depends on how connected the new material is to the old material.
That is, one of the reasons why Watson takes many times the space of its source material is because it stores links between objects, which one would expect to grow with roughly order *n* squared. If there are many links between the old and new material, then we should expect it to roughly quadruple in size instead of double; if the old material and new material are mostly unconnected and roughly the same in topology, then we expect the model to roughly double; if the new material is mostly unconnected to the old material and also mostly unconnected to itself, then we expect the model to not grow by much.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I know it seems like a cop-out answer to every question on AI, but "it depends". For example, if the bulk of the storage space is storing learned concepts, and attributes of example entities, then it stands to reason that concepts and entities could be reused. In that scenario, learning from an additional 10G of text would use less storage than the original.
OTOH, as others have said, it could be that the storage is mostly storing the *links* between things, in which case the number of links will likely grow exponentially. In that case, the second batch of "knowledge" would add more storage requirements than the first.
So it would come down to "what exactly is the system learning, and how does it represent what it learned?" And that answer will vary from system to system.
Upvotes: 1
|
2016/08/04
| 371
| 1,548
|
<issue_start>username_0: Is there any simple explanation how *Watson* finds and scores evidence after gathering massive evidence and analyzing the data?
In other words, how does it know which precise answer it needs to return?<issue_comment>username_1: Watson starts off by searching its massive database of sources for stuff that might be pertinent to the question. Next, it searches through all of the search results and turns them into candidate answers. For example, if one of the search results is an article, Watson might pick the title of the article as a possible answer. After finding all of these candidate answers, it proceeds to iteratively score them to determine which one is best.
The scoring process is very complicated, and involves finding supporting evidence for each answer, and then combining many different scoring algorithms to determine which candidate answer is the best. You can read a more detailed (but still very conceptual) overview [here](http://www.aaai.org/Magazine/Watson/watson.php), by the creators of Watson.
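Conceptually, the final merging step can be pictured as a learned weighted combination of per-algorithm evidence scores for each candidate. This toy sketch uses invented scorer names and hand-set weights (in the real DeepQA pipeline the weights are learned from prior questions):

```python
import math

# hypothetical per-algorithm weights; DeepQA learns these from training data
weights = {"passage_support": 2.0, "type_match": 1.5, "popularity": 0.3}

def confidence(scores):
    """Logistic combination of evidence scores into a confidence value."""
    z = sum(weights[k] * v for k, v in scores.items())
    return 1 / (1 + math.exp(-z))

candidates = {
    "Toronto": {"passage_support": 0.2, "type_match": 0.1, "popularity": 0.9},
    "Chicago": {"passage_support": 0.9, "type_match": 0.8, "popularity": 0.7},
}
best = max(candidates, key=lambda c: confidence(candidates[c]))
print(best)  # Chicago
```

The candidate with the highest combined confidence is returned as the answer (and on Jeopardy!, Watson would also check that confidence against a buzz-in threshold).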
Upvotes: 3 [selected_answer]<issue_comment>username_2: IBM clearly don't provide all the details / "secret sauce" but there is some information out there on how Watson works. Some of the text search / retrieval stuff uses a technology called [UIMA](http://uima.apache.org/) which IBM open-sourced a few years ago. It also uses Prolog and some custom C++ code. Some more information can be found [here](https://web.archive.org/web/20161106224106/http://learning.acm.org/webinar/lally.cfm).
Upvotes: 2
|
2016/08/04
| 1,326
| 4,881
|
<issue_start>username_0: <NAME>'s famous [Three Laws of Robotics](https://en.wikipedia.org/wiki/Three_Laws_of_Robotics) originated in the context of Asimov's science fiction stories. In those stories, the three laws serve as a safety measure, in order to avoid untimely or manipulated situations from exploding in havoc.
More often than not, Asimov's narratives would find a way to break them, leading the writer to make several modifications to the laws themselves. For instance, in some of his stories, he [modified the First Law](https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#First_Law_modified), [added a Fourth (or Zeroth) Law](https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#Zeroth_Law_added), or even [removed all Laws altogether](https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#Removal_of_the_Three_Laws).
However, it is easy to argue that, in popular culture, and even in the field of AI research itself, the Laws of Robotics are taken quite seriously. Ignoring the side problem of the different, subjective, and mutually-exclusive interpretations of the laws, are there any arguments proving the laws themselves intrinsically flawed by design, or, alternatively, strong enough for use in reality? Likewise, has a better, stricter set of safety heuristics been designed for the purpose?
To transcribe an excerpt:
>
> How do you define these things? How do you define "human", without first having to take a stand on almost every issue. And if "human" wasn't hard enough, you then have to define "harm", and you've got the same problem again. Almost any really solid unambiguous definitions you give for those words—that don't rely on human intuition—result in weird quirks of philosophy, leading to your AI doing something you really don't want it to do.
>
>
>
One can easily imagine that Asimov was smart enough to know this and was more interested in story-writing than designing real-world AI control protocols.
In the novel [Neuromancer](https://en.wikipedia.org/wiki/Neuromancer#Plot_summary), it was suggested that AIs could possibly serve as checks against each other. Ray Kurzweil's impending [Singularity](https://en.wikipedia.org/wiki/Technological_singularity), or the possibility of hyperintelligent AGIs otherwise, might not leave much of a possibility for humans to control AIs at all, leaving peer-regulation as the only feasible possibility.
It's worth noting that <NAME> and others ran an [experiment](http://www.yudkowsky.net/singularity/aibox/) wherein Yudkowsky played the role of a superintelligent AI with the ability to speak, but no other connection outside of a locked box. The challengers were tasked simply with keeping the AI in the box at all costs. Yudkowsky escaped both times.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Consider Asimov's first law of robotics:
>
> A robot may not injure a human being or, through inaction, allow a
> human being to come to harm.
>
>
>
That law is already problematic, when taking into consideration self-driving cars.
What's the issue here, you ask? Well, you'll probably be familiar with the classic thought experiment in ethic known as [the trolley problem](https://en.wikipedia.org/wiki/Trolley_problem). The general form of the problem is this:
>
> There is a runaway trolley barrelling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some
> distance off in the train yard, next to a lever. If you pull this
> lever, the trolley will switch to a different set of tracks. However,
> you notice that there is one person on the side track. You have two
> options: (1) Do nothing, and the trolley kills the five people on the
> main track. (2) Pull the lever, diverting the trolley onto the side
> track where it will kill one person. Which is the correct choice?
>
>
> source : [Wikipedia](https://en.wikipedia.org/wiki/Trolley_problem)
>
>
>
Self-driving cars will actually need to implement real life variations on the trolley problem, which basically means that [self-driving cars need to be programmed to kill human beings](https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/).
Of course that doesn't mean that ALL robots will need to be programmed to kill, but self-driving cars are a good example of a type of robot that will.
Upvotes: 0 <issue_comment>username_3: Asimov made the three laws **specifically to prove** that no three laws are sufficient, no matter how reasonable they seem at first. I know a guy that knew the guy and he confirmed this.
Upvotes: 1
|
2016/08/05
| 1,074
| 4,335
|
<issue_start>username_0: Are there any modern techniques of generating **textual** CAPTCHA (so person needs to type the right text) challenges which can easily [fool AI](https://ai.stackexchange.com/q/92/8) with some visual obfuscation methods, but at the same time human can solve them without any struggle?
For example, I'm talking about the plain ability of **recognising text embedded in an image** (without considering any external plugins like Flash or Java, image classification, etc.) and re-typing the text that has been written, or something similar.
I guess adding noise, gradient, rotating letters or changing colours are not reliable methods any more, since they can be quickly broken.
Any suggestions or research has been done?<issue_comment>username_1: **Have the user label highlighted objects in video that a state of the art classifier cannot solve**
Create a state of the art video classifier. Might as well train it on Google's [YouTube-8M](https://research.googleblog.com/2016/09/announcing-youtube-8m-large-and-diverse.html) video training data. But you will want to continually feed it original video as well.
Have the classifier label as many objects as it can. Have it isolate which objects it can recognize as objects but which it is unable to label.
Have it output videos that outline the objects. Preferably GIFs, which can be easily embedded in forms.
For 100 of these, ask 100 users what the object is. If 90% of the users agree on the name of an object, add that video to the captcha-set. Call this the pre-trained set.
Every time a user needs to authenticate, show them one of the highlighted objects in a video *not from the pre-trained set*. If the image has less than 100 showings, record the label and give the user another one from the pre-trained set. If they get it right, let them through, if not, give them another from the pretrained set.
Once the non-pre-trained video has more than 100 showings and more than 90% of the captcha-users agree, add that video to the post-trained set.
Over time, slowly remove the pre-trained set. Put expirations on each video in the post-trained set and remove them after expiration, so that they don't get used too many times.
Ideally, this process would constantly improve the video classifier, keeping it state of the art and slightly ahead of other classifiers. Perhaps it could also favor less common words and objects and more esoteric things, so as to specialize this classifier against other classifiers.
The same could be done for image labeling, but the utility of the video classifier will probably last longer, given advances in AI.
Strictly speaking, though, short of some quantum trickery, there is no captcha system that will not one day be solved by external AI systems.
(edit: oh, I just noticed you specifically said "textual captcha." If that's what you mean, then no, I don't think text classification has much mystery left in it. Computers can probably glean text from pictures better than humans now. But technically, the *input* in the above-described captcha system is textual.)
Upvotes: 2 <issue_comment>username_2: A method that could possibly work is utilising optical illusions, such as one where two lines down a hallway are identical in length but one seems longer to the human eye. The user could then be prompted with a multiple-choice question about the state of the lines: to our eyes one looks longer, but to a computer they are still the same length. Of course, there is always the issue of people with eye-based disabilities not being able to complete such a test, but different illusions could be used to accommodate that.
[Example](https://www.brainbashers.com/showillusion.asp?85)
Upvotes: 3 <issue_comment>username_3: It's an interesting question about what makes humans unique. There is a good book on the subject titled [What Computers Can't Do](https://archive.org/stream/whatcomputerscan017504mbp/whatcomputerscan017504mbp_djvu.txt) by [<NAME>](https://en.wikipedia.org/wiki/Hubert_Dreyfus).
One task that a computer can't handle (for now, at least) is ranking important things. For example, a CAPTCHA can ask you to order a random list of things (a small one, five or six items) by importance. This particular exercise requires the AI to make decisions (not always rational ones) based on human judgement.
Upvotes: 4 [selected_answer]
|
2016/08/05
| 532
| 2,157
|
<issue_start>username_0: A neural network is a directed weighted graph. These can be represented by a (sparse) matrix. Doing so can expose some elegant properties of the network.
Is this technique beneficial for examining neural networks?<issue_comment>username_1: It depends on the type of neural networks you are dealing with.
For medium-sized neural nets, the matrix approach is a very good way to do quick computations and even backpropagation of errors. One can even exploit sparse matrices to understand the sparse architecture of some neural nets.
But for very large neural nets, using matrix computations would be computationally very intensive, so relevant methods such as graph-based stores are used instead, depending on the purpose and the architecture.
Upvotes: 2 <issue_comment>username_2: For large ANNs, something equivalent to a 'sparse matrix format' is used in practice.
In contrast to what is said in another answer, considering an ANN as a graph doesn't actually buy very much, for two reasons:
1. The backpropagation algorithm can usefully be defined in terms of matrix operations. [This page](http://briandolhansky.com/blog/2014/10/30/artificial-neural-networks-matrix-form-part-5) gives a readable and comprehensive description.
2. All real-valued matrices can be represented as graphs, but the converse is clearly not the case. So while it is true that an ANN can be considered as a special case of a graph data structure, making that specialization explicit in matrix form is more efficient.
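As a concrete illustration of point 1 (a hypothetical sketch, not code from the linked page): a one-layer linear network in which both the forward pass and the gradient step are plain matrix products.

```python
# Pure-Python matrix helpers; in practice one would use a linear-algebra library.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

# Tiny dataset: learn W so that X @ W ~= Y; the true W here is [[2], [-1]].
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
Y = [[2.0], [-1.0], [1.0]]
W = [[0.0], [0.0]]

for _ in range(500):
    P = matmul(X, W)                       # forward pass: one matrix product
    D = [[p - y for p, y in zip(pr, yr)] for pr, yr in zip(P, Y)]
    G = matmul(transpose(X), D)            # gradient: a transposed matrix product
    W = [[w - 0.1 * g for w, g in zip(wr, gr)] for wr, gr in zip(W, G)]

assert abs(W[0][0] - 2.0) < 1e-3 and abs(W[1][0] + 1.0) < 1e-3
```

The same pattern extends to multi-layer nets: each layer is a weight matrix, and backpropagation chains transposed matrix products layer by layer.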
Upvotes: 2 <issue_comment>username_3: Matrix representation is beneficial for implementing neural networks in silicon.
But for examining neural networks empirically, it is sometimes good to visualise the synapse weight values as images or videos: see [<NAME>'s](https://www.youtube.com/watch?v=AgkfIQ4IGaM) exploration of a convolutional neural network. The network seems to have a "filter" that just detects shoulders, a bit like a lock that only opens when it recognises the pattern of shoulders.[](https://i.stack.imgur.com/4g4gF.png)
Upvotes: 1
|
2016/08/05
| 604
| 2,436
|
<issue_start>username_0: Would it be ethical to implement AI-based self-defence for public walking robots, which are exposed to dangers such as violence and crime, for example robbery (of parts), damage, or abduction?
What would be the pros and cons of such AI behavior? Is it realistic, or won't it be taken into account for some obvious reasons?
For example, pushing somebody back when they start pushing it first (the AI would say: "he pushed me first"), or running away on a crowded street if the algorithm detects a risk of abduction.
|
2016/08/05
| 631
| 2,390
|
<issue_start>username_0: This [article](http://blog.claymcleod.io/2016/06/01/The-truth-about-Deep-Learning/) suggests that deep learning is not designed to produce a universal algorithm and cannot be used to create such complex systems.
First of all, it requires huge amounts of computing power, time, and effort to train the algorithm properly, and adding extra layers doesn't really help to solve complex problems whose outputs cannot be easily predicted.
Secondly, some tasks are extremely difficult or impossible to solve using DNNs, like solving [math](https://ai.stackexchange.com/q/154/8) equations, predicting [pseudo-random lists](https://ai.stackexchange.com/q/225/8), [fluid mechanics](https://ai.stackexchange.com/q/168/8), guessing encryption algorithms, or [decompiling](https://ai.stackexchange.com/q/205/8) unknown formats, because there is no simple mapping between input and output.
So I'm asking: are there any alternative learning algorithms as powerful as deep architectures for general-purpose problem solving, which can solve a wider variety of problems than "deep" architectures can?<issue_comment>username_1: Have you read the book [The Master Algorithm](http://libgen.io/ads.php?md5=F18395FBEF45575CF68BBD5AD26DF035) by <NAME>?
He discusses the present-day machine learning algorithms, covering their strengths, weaknesses, and applications:
* Deep Neural Network
* Genetic Algorithm
* Bayesian Network
* Support Vector Machine
* Inverse Deduction
[](https://i.stack.imgur.com/9HpIP.png)
Upvotes: 4 [selected_answer]<issue_comment>username_2: Deep learning is actually pretty useful (relative to other techniques) *precisely when there is no simple mapping between input and output*, and features from the raw input need to be aggregated and combined in complex ways by successive layers to form the output.
As I pointed out in my answer to the [AI SE decompilation question](https://ai.stackexchange.com/questions/205/how-to-write-c-decompiler-using-ai), there is recent DL research which takes a natural language description as input and [generates program text as output](http://arxiv.org/pdf/1510.07211.pdf). Despite working in this general research area, I was personally surprised by this - the problem is significantly harder than the 'AI math' link you provide above.
Upvotes: 2
|
2016/08/05
| 1,137
| 4,277
|
<issue_start>username_0: Is there any research on the application of AI for drug design?
For example, you could train a deep learning model on current compounds, substances, structures, and their products and chemical reactions from an existing [dataset](https://opendata.stackexchange.com/q/3553/3082) (basically, what produces what). Then you give it the task of finding how to create gold or silver from a group of available substances. The algorithm would find the chemical reactions (successfully predicting new ones that weren't in the dataset) and give the results. Maybe gold is not a good example, but a practical scenario would be the creation/design of drugs that are cheaper to make using much simpler processes, or the synthesis of some substances for the first time for the drug industry.
Was there any successful research attempting to achieve that using deep learning algorithms?<issue_comment>username_1: Yes, many people have worked on this sort of thing, due to its obvious industrial applications (most of the ones I'm familiar with are in the pharmaceutical industry). Here's [a paper from 2013](https://arxiv.org/abs/1305.7074) that claims good results; following the trail of [papers that cited it](https://scholar.google.com/scholar?cites=10630711614897084406&as_sdt=5,44&sciodt=0,44&hl=en) will likely give you more recent work.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Yes, there were successful attempts at predicting the interaction between molecules and biological proteins which have been used to identify potential treatments by using [convolutional neural networks](https://en.wikipedia.org/wiki/Convolutional_neural_network#Drug_discovery).
For example, in 2015 the first deep-learning neural network for structure-based drug design was created; it trains on 3-dimensional representations of chemical interactions and works similarly to image recognition (composing smaller features into larger, complex structures). [wiki](https://en.wikipedia.org/wiki/Convolutional_neural_network#Drug_discovery)
Study: [AtomNet: A Deep Convolutional Neural Network for Bioactivity Prediction in Structure-based Drug Discovery](https://arxiv.org/abs/1510.02855)
---
Another approach is to use evolutionary artificial neural networks which can achieve great optimization results.
Furthermore, a [paper from 2015](https://arxiv.org/pdf/1502.00193.pdf) demonstrated heuristic [chemical reaction optimization](http://link.springer.com/article/10.1007/s12293-012-0075-1) (CRO), which is inspired by the nature of chemical reactions (e.g. the transformation of unstable substances into stable ones). Simulation results show that CRO, a population-based metaheuristic mimicking the transitions of molecules and their interactions in a chemical reaction, outperforms many evolutionary algorithms.
Sample pseudocode algorithm for predicting synthesis given ω1, ω2 (from the above paper):
```
for all matrices and vectors m in ω′ do
    for all elements e in m do
        generate a real r between 0 and 1
        if r > 0.5 then
            e = counterpart in m1
        else
            e = counterpart in m2
        end if
    end for
end for
```
which is used to generate a new solution ω′ based on two given solutions ω1 and ω2.
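A runnable reading of the pseudocode above might look like this, with flat lists standing in for the paper's matrices and vectors; this is an interpretation of the listing (uniform crossover), not code from the paper itself.

```python
import random

def crossover(w1, w2, rng=random.Random(0)):
    """Build a child solution: each element copied from w1 or w2 at random."""
    return [a if rng.random() > 0.5 else b for a, b in zip(w1, w2)]

child = crossover([0.0] * 8, [1.0] * 8)
assert len(child) == 8
assert all(v in (0.0, 1.0) for v in child)   # every element comes from a parent
```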
Upvotes: 2 <issue_comment>username_3: No one mentioned [Planning chemical syntheses with deep neural networks and symbolic AI](https://www.nature.com/articles/nature25978) (published in Nature; here's the [arxiv link](https://arxiv.org/abs/1708.04202)). It is a very impressive application of deep reinforcement learning: they use Monte Carlo Tree Search with a policy network (a la [AlphaZero](https://arxiv.org/abs/1708.04202)) to do chemical synthesis planning. The authors claim that a double-blind test showed that professional chemists cannot distinguish between human- and AI-generated synthesis pathways.
Speaking of Alpha\* stuff - [AlphaFold](https://deepmind.com/research/publications/AlphaFold-Improved-protein-structure-prediction-using-potentials-from-deep-learning) is a quite recent result in protein folding, which [showed breakthrough-level performance compared to all the competition](https://predictioncenter.org/casp14/zscores_final.cgi).
Upvotes: 1
|
2016/08/05
| 557
| 2,024
|
<issue_start>username_0: Assume that I want to solve an issue with a neural network that either I can't fit to existing architectures (perceptron, Kohonen, etc.), or I'm simply not aware of the existence of those, or I'm unable to understand their mechanics and rely on my own instead.
How can I automate the choice of the architecture/topology (that is, the number of layers, the type of activations, the type and direction of the connections, etc.) of a neural network for an arbitrary problem?
I'm a beginner, yet I realized that in some architectures (or, at least, in perceptrons) it is very hard if not impossible to understand the inner mechanics, as the neurons of the hidden layers don't express any mathematically meaningful context.<issue_comment>username_1: I think in this case, you'll probably want to use a genetic algorithm to generate a topology rather than working on your own. I personally like [NEAT (NeuroEvolution of Augmenting Topologies)](http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf).
The original NEAT paper involves evolving weights for connections, but if you only want a topology, you can use a weighting algorithm instead. You can also mix activation functions if you aren't sure which to use. [Here](http://blog.otoro.net/2016/05/07/backprop-neat/) is an example of using backpropagation and multiple neuron types.
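For intuition, a minimal topology mutation in the spirit of NEAT might look like the sketch below. This is purely illustrative; real NEAT also tracks innovation numbers, evolves connection weights, and uses speciation.

```python
import random

def mutate(edges, nodes, rng=random.Random(0)):
    """edges: set of (src, dst) pairs; nodes: list of node ids (extended in place)."""
    edges = set(edges)
    if edges and rng.random() < 0.5:
        # split an existing connection by inserting a new hidden node
        src, dst = rng.choice(sorted(edges))
        hidden = max(nodes) + 1
        nodes.append(hidden)
        edges.discard((src, dst))
        edges |= {(src, hidden), (hidden, dst)}
    else:
        # add a new random connection between two existing nodes
        edges.add((rng.choice(nodes), rng.choice(nodes)))
    return edges

new = mutate({(0, 2), (1, 2)}, [0, 1, 2])
assert len(new) >= 2                 # a mutation never removes capacity
```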
Upvotes: 5 [selected_answer]<issue_comment>username_2: [The other answer](https://ai.stackexchange.com/a/1399/2444) mentions [NEAT](https://www.cs.ucf.edu/%7Ekstanley/neat.html) to generate network weights or topologies. The paper [NeuroEvolution: The Importance of Transfer Function Evolution and Heterogeneous Networks](http://doc.gold.ac.uk/aisb50/AISB50-S11/AISB50-S11-Turner-paper.pdf), which also gives a short summary of neuroevolution techniques, provides an alternative approach to NEAT. It uses [Cartesian Genetic Programming](https://web.archive.org/web/20160730231107/http://www.cartesiangp.co.uk/) to evolve multiple activation functions.
Upvotes: 3
|
2016/08/06
| 964
| 3,369
|
<issue_start>username_0: I'd like to know whether there have been attempts to simulate the whole brain. I'm not talking only about some [ANN on microchips](https://ai.stackexchange.com/q/237/8), but about brain simulations.<issue_comment>username_1: Neuromorphic engineering offers various ways of reproducing the brain's processing ability.
Recent technology includes IBM's multi-artificial-neuron computer, featuring the world's first artificial nanoscale stochastic phase-change neurons ([article](http://arstechnica.com/gadgets/2016/08/ibm-phase-change-neurons/?)). See the [Stochastic phase-change neurons](http://www.nature.com/nnano/journal/v11/n8/full/nnano.2016.70.html) study.
Others include:
* [Neurogrid](http://web.stanford.edu/group/brainsinsilicon/neurogrid.html), built by Brains in Silicon at Stanford University is another example for brain simulation. It uses analog computation to emulate ion-channel activity. It emulates neurons using digital circuitry designed to maximize spiking throughput[wiki](https://en.wikipedia.org/wiki/Neuromorphic_engineering#Examples).
* [SpiNNaker](https://en.wikipedia.org/wiki/SpiNNaker), which is a manycore computer to simulate the human brain (see [Human Brain Project](https://en.wikipedia.org/wiki/Human_Brain_Project)).
* [SyNAPSE](https://en.wikipedia.org/wiki/SyNAPSE), a DARPA neuromorphic machine technology that scales to biological levels. Each chip can have over a million electronic "neurons" and 256 million electronic synapses between neurons. In 2014, the 5.4-billion-transistor chip had one of the highest transistor counts of any chip ever produced. The program is undertaken by HRL, HP, and IBM.
Upvotes: 2 <issue_comment>username_2: <NAME> said that if we can scan a human brain and then simulate it, we could run it at 1000 times the speed: the brain would be able to do 1000 years of thinking in 1 year, etc.
At this stage in history, we have the computing power.
The trouble lies in cutting a brain up and scanning the 100 billion neurons, 12 million kilometres of axons, and 100,000 billion synapses.
And piecing together the connectome from all the data.
<NAME> at MIT is working on automating this scanning process with machine learning, by gathering training data from thousands of people playing his [Eyewire game](https://en.wikipedia.org/wiki/Eyewire).
<NAME> in Europe tried to do something similar with his [Blue Brain Project](https://en.wikipedia.org/wiki/Blue_Brain_Project).
He attempted to simulate the neocortical column of a rat. The EU gave him a billion euros to do this. Unfortunately he has been heavily criticised by the Neuroscience community. They claim that we don't know the physiology well enough to make a valid simulation.
Check out his [Ted Talk](https://www.youtube.com/watch?v=LS3wMC2BpxU).
In the 1970s Sydney Brenner achieved a *full brain scan* of a C Elegans worm. This worm has one of the simplest biological neural networks having only 302 neurons.
Here is a picture of its connectome:[](https://i.stack.imgur.com/cAZ49.png)
[](https://i.stack.imgur.com/siHt8.png)
An accurate computer simulation of this worm would be a major stepping stone to uploading a human brain.
Upvotes: 4 [selected_answer]
|
2016/08/06
| 841
| 3,249
|
<issue_start>username_0: On [the Wikipedia page](https://en.wikipedia.org/wiki/Artificial_intelligence) about AI, we can read:
>
> Optical character recognition is no longer perceived as an exemplar of "artificial intelligence" having become a routine technology.
>
>
>
On the other hand, the [MNIST](https://en.wikipedia.org/wiki/MNIST_database) database of handwritten digits is specially designed for training and testing neural networks and their error rates (see: [Classifiers](https://en.wikipedia.org/wiki/MNIST_database#Classifiers)).
So, why does the above quote state that OCR is no longer an example of AI?<issue_comment>username_1: I'm not sure if predicting MNIST can really be considered an AI task. AI problems can usually be framed in the context of an agent acting in an environment. Neural nets and machine learning techniques in general do not have to deal with this framing. Classifiers, for example, learn a mapping between two spaces. Though one could argue that you *can* frame OCR/image classification as an AI problem - the classifier is the agent, each prediction it makes is an action, and it receives rewards based on its classification accuracy - this is rather unnatural and different from problems that are commonly considered AI problems.
Upvotes: 2 <issue_comment>username_2: Although OCR is now a mainstream technology, it remains true that none of our methods genuinely have the recognition facilities of a 5-year-old (claimed success with CAPTCHAs notwithstanding). We don't know how to achieve this using well-understood techniques, so OCR should still rightfully be considered an AI problem.
To see why this might be so, it is illuminating to read the essay
["On seeing A's and seeing AS"](https://web.stanford.edu/group/SHR/4-2/text/hofstadter.html) by <NAME>.
With respect to a point made in another answer, the agent framing is a useful one insofar as it motivates success in increasingly complex environments. However, there are many hard problems (e.g. Bongard) that don't need to be stated in such a fashion.
Upvotes: 4 <issue_comment>username_3: Whenever a problem becomes solvable by a computer, people start arguing that it does not require intelligence. <NAME> is often quoted: "As soon as it works, no one calls it AI anymore" ([Referenced in CACM](http://cacm.acm.org/magazines/2012/1/144824-artificial-intelligence-past-and-future/fulltext)).
One of my teachers in college said that in the 1950's, a professor was asked what he thought was intelligent for a machine. The professor reputedly answered that if a vending machine gave him the right change, that would be intelligent.
Later, playing chess was considered intelligent. However, computers can now defeat grandmasters at chess, and people are no longer saying that it is a form of intelligence.
Now we have OCR. It's already stated in [another answer](https://ai.stackexchange.com/a/1402/66) that our methods do not have the recognition facilities of a 5 year old. As soon as this is achieved, people will say "meh, that's not intelligence, a 5 year old can do that!"
A psychological bias, a need to state that we are somehow superior to machines, is at the basis of this.
Upvotes: 6 [selected_answer]
|
2016/08/06
| 595
| 2,217
|
<issue_start>username_0: [MIST](https://en.wikipedia.org/wiki/Minimum_intelligent_signal_test) is a quantitative test of humanness, consisting of ~80k propositions such as:
* Is Earth a planet?
* Is the sun bigger than my foot?
* Do people sometimes lie?
* etc.
Have any AIs attempted and passed this test to date?<issue_comment>username_1: Yes, although how useful this AI can be is another question entirely.
**mpgac** is a "minimally intelligent AGI" trained on the GAC-80K corpus of MIST questions. As a result, it should be able to "minimally" pass this test. However, being trained on the GAC-80K corpus obviously makes it lacking for any practical purposes. From the README:
>
> Obviously this should only be capable of producing a minimally intelligent signal when ordinary commonsense questions are asked, of the kind depicted above, using questions which would have made sense to an average human between the years 2000 and 2005. On expert knowledge or current affairs related questions it should perform no better than chance.
>
>
>
The point of mpgac is to compare it to other AIs that could be built to pass this test. Or as the writer wrote in the README:
>
> When scanning the skies how can we tell whether the radio signals detected are from an intelligent source, or are merely just background or sensor noise?
>
>
>
Ideally, you would want to build a program that is "better" than mpgac. In much the same way as ELIZA can be seen as a baseline for the Turing Test, mpgac is the baseline for the MIST test.
The GitHub repo of mpgac (as well as the GAC-80K corpus) is available [here](https://github.com/bashrc/mindpix).
Upvotes: 4 [selected_answer]<issue_comment>username_2: I believe this is exactly the kind of test that Doug Lenat's **[cyc](https://en.wikipedia.org/wiki/Cyc)** would do very well at? But I can't answer the question: how much of that corpus could it answer correctly? Probably quite a lot! (And how many humans could pass that test? Probably not all of them, but many can...)
[But is cyc considered an AI? Probably not... so I may be off topic. But IMO its database should be incorporated into any AI that reaches some kind of "intelligence"...]
Upvotes: 2
|
2016/08/06
| 726
| 2,984
|
<issue_start>username_0: It is possible to prove that normal code is correct using mathematical techniques, and that is often done to ensure that some parts are bug-free.
Can we also prove that a piece of code in AI software will cause it to never turn against us, i.e. that the AI is [friendly](https://en.wikipedia.org/wiki/Friendly_artificial_intelligence)? Has any research been done towards this?<issue_comment>username_1: Unfortunately, this is extremely unlikely.
It is nearly impossible to make statements about the behaviour of software in general. This is due to the [Halting problem](https://en.wikipedia.org/wiki/Halting_problem), which shows that it is impossible to prove whether a program will stop for any given input. From this result, many other things have been shown to be unprovable.
The question whether a piece of code is friendly, can very likely be reduced to a variant of the halting problem.
An AI that operates in the real world, which is a requirement for "friendliness" to have a meaning, would need to be Turing complete. Input from the real world cannot be reliably interpreted using regular or context-free languages.
Proofs of correctness work for small code snippets, with clearly defined inputs and outputs. They show that an algorithm produces the mathematically right output, given the right input.
But these are about situations that can be defined with mathematical rigour.
"Friendliness" isn't a rigidly defined concept, which already makes it difficult to prove anything about it. On top of that, "friendliness" is about how the AI relates to the real world, which is an environment whose input to the AI is highly unpredictable.
The best we can hope for is that an AI can be programmed to have safeguards, and that the code will raise warning flags if unethical behaviour becomes likely - that AIs are programmed defensively.
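The reduction argument can be sketched as follows. This is an informal illustration of the diagonalization idea behind the halting problem, not a formal proof: if a total decision procedure `is_friendly` existed, we could build a "contrarian" program that does the opposite of whatever the checker predicts about it.

```python
def make_contrarian(is_friendly):
    """Given any candidate friendliness checker, build a program it gets wrong."""
    def contrarian():
        if is_friendly(contrarian):
            return "unfriendly action"   # checker said friendly -> misbehave
        return "friendly action"         # checker said unfriendly -> behave
    return contrarian

# Any concrete checker is wrong about its own contrarian:
def checker_always_yes(prog):
    return True

c = make_contrarian(checker_always_yes)
assert checker_always_yes(c) is True     # the checker claims c is friendly...
assert c() == "unfriendly action"        # ...yet c performs the unfriendly action
```

The same construction defeats a checker that always answers "no", which is the sense in which no total, always-correct friendliness checker can exist.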
Upvotes: 4 [selected_answer]<issue_comment>username_2: Here are some examples of recent work on verifying certain properties of autonomous systems [[RoboCheck]](https://www.cs.york.ac.uk/circus/RoboCalc-event/courses/).
However, to achieve the same kind of thing for the notion of 'friendly' using formal verification (i.e. 'proving correctness using mathematical techniques'),
it would (at the least) seem necessary to be able to express 'friendly' within a logical formalism, (i.e. as a predicate testable within a model-checker, so that we can be sure a system never enters an undesirable state).
However, it's not immediately clear that 'friendly' has a more specific definition than 'a desire not to harm humans', so much more low-level detail is needed.
Some previous work in this general area that might be useful in this respect include:
* [Deontic Logic](https://en.wikipedia.org/wiki/Deontic_logic) - a logical calculus of obligations.
* [Elephant 2000](http://www.jfsowa.com/ikl/McCarthy89) - <NAME>'s description of a promise-based programming language.
Upvotes: 2
|
2016/08/06
| 521
| 2,267
|
<issue_start>username_0: We can measure the power of a machine with the number of operations per second or the frequency of the processor. But does a unit similar to IQ for humans exist for an AI?
I'm asking for a unit which can give a quantifiable result, so something different from a Turing Test, which only gives a binary result.<issue_comment>username_1: One of the challenges of AI is defining intelligence.
If we could precisely define general intelligence then we could program it into a computer. After all an algorithm is a process so well defined that it can be run on a computer.
Narrow AI can be evaluated on its success at achieving goals in an environment. In domains such as computer vision and speech recognition narrow AI algorithms can be easily evaluated.
Many universities curate narrow AI tests. <NAME>, a professor at Stanford who directs the Artificial Intelligence lab there, organises the annual ImageNet Challenge. In 2012, <NAME> famously won the competition by building a deep neural network that could recognize pictures more accurately than humans can.
To my knowledge, the testers commonly use [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall) evaluation metrics.
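For concreteness, the two metrics computed from raw counts (these are the standard definitions, not specific to ImageNet):

```python
def precision(tp, fp):
    """Of the items the classifier flagged, how many were right."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of the items that were truly there, how many were found."""
    return tp / (tp + fn)

# e.g. 8 true positives, 2 false positives, 4 false negatives:
assert precision(8, 2) == 0.8
assert recall(8, 4) == 8 / 12
```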
Upvotes: 2 <issue_comment>username_2: <NAME> and <NAME> [proposed one](http://www.vetta.org/documents/42.pdf) in 2006. The main descriptive quotes (see the paper for the actual formula):
>
> Intelligence measures an agent’s general ability to achieve goals in a wide range of environments
>
>
> ...
>
>
> It is clear by construction that universal intelligence measures the general ability of an agent to perform well in a very wide range of environments, as required by our informal definition of intelligence given earlier. The definition places no restrictions on the internal workings of the agent; it only requires that the agent is capable of generating output and receiving input which includes a reward signal.
>
>
>
Upvotes: 2 <issue_comment>username_3: One thing you'll see quite often, is to declare a correspondence between a system and a human of a given age. For example "this program can answer questions about science approximately as well as an average 7 year old" or something of that nature.
Upvotes: 0
|
2016/08/06
| 354
| 1,207
|
<issue_start>username_0: In other words, which existing reinforcement learning method learns with the fewest episodes? [R-Max](http://www.jmlr.org/papers/volume3/brafman02a/brafman02a.pdf) comes to mind, but it's very old and I'd like to know if there is something better now.<issue_comment>username_1: There isn't really one specific method which makes any RL agent learn faster. Rather, there is a long list of methods which have been shown to increase the speed of learning, and they can sometimes play nicely with each other.
Some examples:
1. [Options](http://www-anw.cs.umass.edu/~barto/courses/cs687/Sutton-Precup-Singh-AIJ99.pdf)
2. [True Online Methods](https://arxiv.org/abs/1512.04087)
3. [Asynchronous Methods](https://arxiv.org/abs/1602.01783)
These are the most influential and promising methods I can think of but the list of techniques is not limited to these 3.
Upvotes: 0 <issue_comment>username_2: There is a very interesting learning agent called Neural Episodic Control. Here is the link to the paper: <https://arxiv.org/abs/1703.01988>. Their experiments show that NEC requires an order of magnitude fewer interactions with the environment than previously proposed agents.
Upvotes: 1
|
2016/08/06
| 1,370
| 5,460
|
<issue_start>username_0: Are there any research teams that have attempted to create, or have already created, an AI robot that can be as close to intelligent as those found in the [*Ex Machina*](https://en.wikipedia.org/wiki/Ex_Machina_(film)) or *[I, Robot](https://en.wikipedia.org/wiki/I,_Robot_(film))* movies?
I'm not talking about full awareness, but about an artificial being that can make its own decisions and perform the physical and intellectual tasks that a human being can.<issue_comment>username_1: We are absolutely nowhere near, nor do we have any idea how to bridge the gap between what we can currently do and what is depicted in these films.
The current trend for DL approaches (coupled with the emergence of data science as a mainstream discipline) has led to a lot of popular interest in AI.
However, researchers and practitioners would do well to learn the lessons of the 'AI Winter' and not engage in hubris or read too much into current successes.
For example:
* Success in transfer learning is very limited.
* The 'hard problem' (i.e. presenting the 'raw, unwashed environment' to the machine and having it come up with a solution from scratch) is not being addressed by DL to the extent that it is popularly portrayed: expert human knowledge is still required to help decide how the input should be framed, tune parameters, interpret output, etc.
Someone who has enthusiasm for AGI would hopefully agree that the 'hard problem' is actually the only one that matters. Some years ago, a famous cognitive scientist said "We have yet to successfully represent *even a single concept* on a computer".
In my opinion, recent research trends have done little to change this.
All of this perhaps sounds pessimistic - it's not intended to. None of us want another AI Winter, so we should challenge (and be honest about) the limits of our current techniques rather than mythologizing them.
Upvotes: 6 [selected_answer]<issue_comment>username_2: >
> "heavier-than-air flying machines are impossible" *\_ <NAME> 1895*
>
>
>
Seven years later, the Wright brothers built one.
---
Currently we have many powerful narrow AI (good at special tasks) but we have no idea how to unify them into a single system like in a biological brain.
Upvotes: 0 <issue_comment>username_3: Our current approaches to AI are too inefficient to result in anything remotely close to what an average human would perceive as artificial sentient beings.
Current approaches to AI involve a simulation of our own capacity for learning by creating fully functional computation machines capable of re-programming themselves. While that's definitely a good start with respect to understanding the nature of intelligence, it's still a far cry from actually creating genuine artificial intelligence.
It is not just our capacity to learn that evolved. Our very brains themselves evolved from rudimentary biochemical components at the intra-cellular level to the fascinating, complex organs they are today, along with our bodies as a whole evolving from simple single-cell life to Homo sapiens. So to create genuine artificial intelligence, it may actually make the most sense to first replicate that process: creating artificial life with the capacity to evolve, starting with artificial DNA and artificial cells, and moving on from there.
Anyway, in [this article](http://www.alexstjohn.com/WP/2015/06/24/no-azimov-ai/) as well as [this article](http://www.alexstjohn.com/WP/2016/06/15/things-dont-compute/), Silicon Valley renegade [<NAME>](https://en.wikipedia.org/wiki/Alex_St._John) goes in greater detail on why something like [Skynet](https://en.wikipedia.org/wiki/Skynet_(Terminator)), [V.I.K.I.](http://irobot.wikia.com/wiki/VIKI) or anything like it is unlikely in the near future and may even never be within our grasp and why our current approach to artificial intelligence is a bad one.
Upvotes: 1 <issue_comment>username_4: Based on the success of IBM Watson and the amazing advances in tackling numerous hard tasks using deep learning in the past 3 years, I think a large high-tech company like Google or Amazon will create a useful conversational bot in no more than 10 years. (I've worked on the fringes of AI for 25 years and have followed the tech for even longer. These are exciting times.)
Initially, your very own AI companion ("Her"?) won't be capable of deeper philosophical conversation or insightful interpretation of novels or the human condition. But it will be able to write / speak in full paragraphs on topics like the best choice among 5 possible routes between point A and B, or summarizing the plot of a book or the gist of a news story, or why one product is better than another (e.g. based on assessing hundreds of Amazon reviews). And yes, it will be able to understand full spoken sentences from you, and generate both queries and answers.
I'm convinced such a bot *will* be useful enough that most of us will want one. Of course you won't need to buy a special piece of hardware, like the Amazon Echo. It'll be available via software on your smartphone, though the computing is likely to reside on the cloud (since that's where the data is).
Frankly, I think this is where the next innovations in smartphones will arise -- verbal interfaces that do a better job hearing and speaking and disambiguating using context about you and the kinds of questions you are likely to ask.
Upvotes: 1
|
2016/08/06
| 434
| 1,617
|
<issue_start>username_0: Humans can do multiple tasks at the same (e.g. reading while listening to music), but we memorize information from less focused sources with worse efficiency than we do from our main focus or task.
Do such things exist in the case of artificial intelligence? I doubt, for example, that neural networks have such characteristics, but I may be wrong.<issue_comment>username_1: <NAME>'s [CopyCat](https://pcl.sitehost.iu.edu/rgoldsto/courses/concepts/copycat.pdf) architecture for solving letter-string analogy problems was deliberately engineered to maintain a semantically-informed notion of 'salience', i.e. given a variety of competing possibilities, tend to maintain interest in the one that is most compelling. Although the salience value of (part of) a solution is ultimately represented numerically, the means by which it is determined is broadly intended to correspond (at least functionally) to the way 'selective attention' might operate in human cognition.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Concentration, perhaps easier to grasp as "focus" or "attention", has quite some history in AI. [This answer](https://ai.stackexchange.com/a/1440/2444) mentions CopyCat, and there was work with neural networks in the 80s as well (e.g. from [Fukushima](http://link.springer.com/article/10.1007%2FBF00363973), creator of the Neocognitron).
More recently, *attention* in neural networks is [gaining momentum](http://www.wildml.com/2016/01/attention-and-memory-in-deep-learning-and-nlp/). The mechanisms are applied to learning in deep neural networks.
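To make the mechanism concrete, here is a toy NumPy sketch of a soft-attention step. It follows the general scaled dot-product pattern rather than any particular paper's model; all names and numbers below are invented for illustration:

```python
import numpy as np

def attend(query, keys, values):
    """Toy soft-attention step: score each key against the query,
    softmax the scores into weights, return the weighted sum of values."""
    scores = keys @ query / np.sqrt(len(query))   # scaled dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax -> attention weights
    return weights @ values, weights

# Three "memory slots" with 4-dimensional keys and values (toy numbers)
keys = np.eye(3, 4)                 # slot i keyed by basis vector e_i
values = np.array([[1., 1., 0., 0.],
                   [0., 2., 0., 0.],
                   [0., 0., 3., 0.]])
query = np.array([0., 1., 0., 0.])  # a query that matches the second key

output, weights = attend(query, keys, values)
print(weights)  # the second slot receives the largest weight
```

Training adjusts the queries, keys and values so that the weights land on whatever the network should "attend" to; the softmax keeps every slot slightly active, which is what makes the mechanism differentiable and trainable by backpropagation.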
Upvotes: 3
|
2016/08/07
| 1,253
| 5,108
|
<issue_start>username_0: Suppose my goal is to collaborate and create an advanced AI, for instance, one that resembles a human being and the project would be on the frontier of AI research. What kind of skills would I need?
I am talking about specific things, like what university program should I complete to enter and be competent in the field.
Here are some of the things that I thought about, just to exemplify what I mean:
* Computer sciences: obviously, the AI is built on computers, it wouldn't hurt to know how computers work, but some low-level stuff and machine-specific things do not seem essential, I may be wrong of course.
* Psychology: if AI resembles human beings, knowledge of human cognition would probably be useful, although I do not imagine neurology on a cellular level or complicated psychological quirks typical to human beings like the Oedipus complex would be relevant, but again, I may be wrong.<issue_comment>username_1: As a full-time AI researcher myself, I'd say that a PhD in machine learning would certainly be one useful option.
However, in order to [make much-needed progress](https://ai.stackexchange.com/questions/1420/how-close-are-we-to-creating-ex-machina), AI needs to avoid falling into the trap of thinking that currently fashionable methods are any kind of 'silver bullet'. There's some danger that a PhD that heads straight into (say) some sub-sub-sub area of DL would end up imposing too much bias on the student's subsequent perspective.
AI research is an essentially multi-disciplinary activity. Other possible backgrounds therefore include:
* Mathematics or physics (to first degree or PhD level). A strong background in either of these never did anyone any harm. People who are competent in these fields tend to be able to turn their abilities to new domains relatively easily.
* Software Engineering. One of the things that AI needs are integrative architectures for knowledge engineering. [Here's why](http://www.cogsys.wiai.uni-bamberg.de/teaching/materials/AI-Light-Bulbb.htm). I believe that one of the reasons that we haven't yet managed to do [OCR at the level of a 5 year old](https://ai.stackexchange.com/questions/1396/why-ocr-cannot-be-perceived-as-good-example-of-ai) is that we've yet to accept that we have to 'build a sledgehammer to crack a nut'. Software architects are used to managing large-scale complexity, so they may be able to help.
* Cognitive Science, Psychology, Cognitive Linguistics. The reasons here are obvious.
Above all, I personally think that a good AI researcher should be creative, inquisitive and prepared to question received wisdom, all of which are more important in practice than specifics of their background.
Upvotes: 3 <issue_comment>username_2: Research on AI seems to be getting wider these days (2016). First, "obvious" few departments (no order):
* [Computer Science](https://en.wikipedia.org/wiki/Computer_science) (e.g. computation theory, algorithms): AI researchers there assume that intelligence is a kind of computation, under various forms (e.g. a neural network, a logic system).
* [Software Engineering](https://en.wikipedia.org/wiki/Software_engineering): Assuming we find a good model for AI, how do you make it? This is what the engineer will want to figure out. And it can be hard to map mathematical models to an engineered piece.
* Statistics and Probabilities (more specific than just Mathematics, which is also close to Computer Science): This is about Data Science, notably as a foundation to Machine Learning, the most active branch in AI---which "just" covers the *learning* part.
* Physics: This is particularly relevant now for hardware (see below).
* Neuroscience: Understanding how the brain works, as an inspiration for creating an artificial one; this is the home of the Connectionists. Recently, Hassabis and his team at Google DeepMind made several breakthroughs related to reinforcement learning, memory, attention, etc.
Recently, electrical engineering has been getting a lot of attention, together with the related branches of physics. Several public and private laboratories focus on "brain chips". To name a few: IBM (who has been working on this for some time already), Nvidia, and Facebook. Circa 2010, it became clear that techniques like deep learning require horsepower, thus an increasing focus on creating more powerful, smaller, more energy-efficient chips. And on top of that, there is all the work in quantum computing.
But the thing is, there seem to be many more fields getting involved in AI research. We should mention chemistry and biology, as both inspiration and tools for making new models or hardware (e.g. chips that do not use silicon, so they can get smaller).
As for 2016, the above fields are the most active, and promise to remain very active for quite some time. Pick your own depending on your interest, skills, or mere intuition!
To finish, we may be surprised in a few years when we look back at where AI has come from. I believe that if we manage to build an AGI, it will leverage *all* these fields anyway. I guess the thrill is to be part of the story.
Upvotes: 3
|
2016/08/07
| 1,130
| 4,545
|
<issue_start>username_0: [White House published the information](https://web.archive.org/web/20161221222448/https://www.whitehouse.gov/webform/rfi-preparing-future-artificial-intelligence) about AI which requests mentions about *the most important research gaps in AI that must be addressed to advance this field and benefit the public*.
What are these exactly?<issue_comment>username_1: As a full-time AI researcher myself, I'd say that a PhD in machine learning would certainly be one useful option.
However, in order to [make much-needed progress](https://ai.stackexchange.com/questions/1420/how-close-are-we-to-creating-ex-machina), AI needs to avoid falling into the trap of thinking that currently fashionable methods are any kind of 'silver bullet'. There's some danger that a PhD that heads straight into (say) some sub-sub-sub area of DL would end up imposing too much bias on the student's subsequent perspective.
AI research is an essentially multi-disciplinary activity. Other possible backgrounds therefore include:
* Mathematics or physics (to first degree or PhD level). A strong background in either of these never did anyone any harm. People who are competent in these fields tend to be able to turn their abilities to new domains relatively easily.
* Software Engineering. One of the things that AI needs are integrative architectures for knowledge engineering. [Here's why](http://www.cogsys.wiai.uni-bamberg.de/teaching/materials/AI-Light-Bulbb.htm). I believe that one of the reasons that we haven't yet managed to do [OCR at the level of a 5 year old](https://ai.stackexchange.com/questions/1396/why-ocr-cannot-be-perceived-as-good-example-of-ai) is that we've yet to accept that we have to 'build a sledgehammer to crack a nut'. Software architects are used to managing large-scale complexity, so they may be able to help.
* Cognitive Science, Psychology, Cognitive Linguistics. The reasons here are obvious.
Above all, I personally think that a good AI researcher should be creative, inquisitive and prepared to question received wisdom, all of which are more important in practice than specifics of their background.
Upvotes: 3 <issue_comment>username_2: Research on AI seems to be getting wider these days (2016). First, "obvious" few departments (no order):
* [Computer Science](https://en.wikipedia.org/wiki/Computer_science) (e.g. computation theory, algorithms): AI researchers there assume that intelligence is a kind of computation, under various forms (e.g. a neural network, a logic system).
* [Software Engineering](https://en.wikipedia.org/wiki/Software_engineering): Assuming we find a good model for AI, how do you make it? This is what the engineer will want to figure out. And it can be hard to map mathematical models to an engineered piece.
* Statistics and Probabilities (more specific than just Mathematics, which is also close to Computer Science): This is about Data Science, notably as a foundation to Machine Learning, the most active branch in AI---which "just" covers the *learning* part.
* Physics: This is particularly relevant now for hardware (see below).
* Neuroscience: Understanding how the brain works, as an inspiration for creating an artificial one; this is the home of the Connectionists. Recently, Hassabis and his team at Google DeepMind made several breakthroughs related to reinforcement learning, memory, attention, etc.
Recently, electrical engineering has been getting a lot of attention, together with the related branches of physics. Several public and private laboratories focus on "brain chips". To name a few: IBM (who has been working on this for some time already), Nvidia, and Facebook. Circa 2010, it became clear that techniques like deep learning require horsepower, thus an increasing focus on creating more powerful, smaller, more energy-efficient chips. And on top of that, there is all the work in quantum computing.
But the thing is, there seem to be many more fields getting involved in AI research. We should mention chemistry and biology, as both inspiration and tools for making new models or hardware (e.g. chips that do not use silicon, so they can get smaller).
As for 2016, the above fields are the most active, and promise to remain very active for quite some time. Pick your own depending on your interest, skills, or mere intuition!
To finish, we may be surprised in a few years when we look back at where AI has come from. I believe that if we manage to build an AGI, it will leverage *all* these fields anyway. I guess the thrill is to be part of the story.
Upvotes: 3
|
2016/08/08
| 866
| 3,698
|
<issue_start>username_0: Just for the purpose of learning, I'd like to classify how likely a tweet is to be in aggressive language.
I was wondering how to approach the problem. I guess I first need to train my neural network on a huge dataset of text establishing what aggressive language is, which brings up the question of where I would get this data in the first place.
It feels a bit like a chicken-and-egg problem to me, so I wonder how I should approach it.<issue_comment>username_1: I did a little search and couldn't find any database that has ground truth for aggressiveness. This means that you need to build a database yourself. This might be a huge undertaking: take thousands of messages and classify them by hand as aggressive or not. This part is quite labor-intensive.
The second part is much easier to start with, but would be a pain to optimize (both for performance and computational cost). I would suggest you start with a Naive Bayes classifier for this job; that is the preferred classifier for spam detection. An ANN would probably not work in this case because the data would be a huge sparse vector. The estimated number of words in English is over a million, which means the input layer of your ANN would have to scale up to that number. Search for sparse vector classification for additional classifiers that can be used in such cases.
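As a concrete (and deliberately tiny) sketch of the suggested Naive Bayes approach, here is a from-scratch Python version with add-one smoothing. The six labelled "tweets" are invented placeholders; as noted above, a real system would need thousands of hand-labelled messages:

```python
from collections import Counter
import math

# Hypothetical hand-labelled examples -- in practice you would need
# thousands of tweets labelled "aggressive" / "ok", as discussed above.
train = [
    ("i will hurt you idiot", "aggressive"),
    ("you are a stupid loser", "aggressive"),
    ("shut up or else", "aggressive"),
    ("have a lovely day friend", "ok"),
    ("thanks for the great help", "ok"),
    ("what a nice photo", "ok"),
]

# Count word frequencies per class (bag-of-words model).
word_counts = {"aggressive": Counter(), "ok": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for c in word_counts.values() for w in c}

def classify(text):
    """Return the class maximising log P(class) + sum log P(word|class),
    with add-one (Laplace) smoothing to handle unseen words."""
    best, best_score = None, -math.inf
    for label in class_counts:
        score = math.log(class_counts[label] / len(train))
        total = sum(word_counts[label].values())
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

print(classify("you stupid idiot"))   # -> "aggressive"
print(classify("have a nice day"))    # -> "ok"
```

Despite its simplicity, this is essentially the same model used in classic spam filters; libraries such as scikit-learn provide optimized versions (`MultinomialNB`) that handle large sparse vocabularies efficiently.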
Upvotes: 2 <issue_comment>username_2: The answer by [username_1](https://ai.stackexchange.com/users/210/cem-kalyoncu) mentions the difficulty of building a ground truth database for aggressiveness.
One alternative approach would be to attempt to operate at the *concept level*, which would allow the use of pre-existing ontologies such as ConceptNet.
[Here's a paper](http://www.cs.stir.ac.uk/~spo/publication/resources/sentiment-analysis.pdf) that describes this technique.
Upvotes: 3 [selected_answer]<issue_comment>username_3: A simple way to do it would be lexicographical sentiment analysis. To do that, you'd need a list of words categorized with a score that reflects "friendly" vs "aggressive" sentiment. For an example of setting up a SA system using Spark, see [this article](http://mammothdata.com/sentiment-analysis-on-enrons-emails-with-apache-spark/). To do what you're talking about, substitute a different dataset for AFINN. You might have to create said dataset yourself, if there isn't one "out there" like you want.
Note that this isn't the most sophisticated technique in the world, but it's been found to be surprisingly effective.
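A minimal sketch of this lexicographical approach, assuming a hand-made mini-lexicon (the real AFINN list scores roughly 2,500 words from -5 to +5; the words, scores and threshold below are invented):

```python
# A tiny hand-made AFINN-style lexicon. Negative scores lean
# "aggressive/hostile", positive scores lean "friendly".
LEXICON = {
    "hate": -3, "kill": -4, "stupid": -2, "idiot": -3,
    "love": 3, "great": 3, "thanks": 2, "friend": 2,
}

def sentiment_score(text):
    """Sum the valence of every known word; unknown words score 0."""
    return sum(LEXICON.get(word, 0) for word in text.lower().split())

def is_aggressive(text, threshold=-2):
    """Crude rule: flag the text if the summed valence falls below a threshold."""
    return sentiment_score(text) <= threshold

print(sentiment_score("i hate you stupid idiot"))  # -> -8
print(is_aggressive("thanks friend great work"))   # -> False
```

The threshold, plus handling of negation ("not bad"), punctuation, emoji and so on, is where most of the tuning effort goes in practice.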
Upvotes: 2 <issue_comment>username_4: I would completely agree with username_3 and username_1.
Take into account that passive aggressiveness, for example, is more difficult to detect (likewise irony, black humour, and sarcasm).
Another head start could be to think outside the box: it happens that I have a book lying around concerning violence-free communication. So probably your best start would be to talk with linguists about violence in language, or to review how linguists or psychologists detect violence in language (surprise: it's probably quite complex).
Nevertheless, I don't think you need real AI: a blacklist of words and expressions, together with some pattern detection for expressions, could be precise enough for the beginning.
Then, for all the expressions, words, etc., you could add a Bayesian network for the learning part, which works with probabilities (as, e.g., some email spam filters do).
Search, for example, for "Naive Bayes spam filtering".
This should be pretty much enough for a good start, so in a strict sense you don't need real AI here, just business intelligence and probability calculations.
Upvotes: 1
|
2016/08/08
| 1,066
| 3,949
|
<issue_start>username_0: [Siri](https://en.wikipedia.org/wiki/Siri) and [Cortana](https://en.wikipedia.org/wiki/Cortana) communicate pretty much like humans. Unlike [Google Now](https://en.wikipedia.org/wiki/Google_Now), which mainly gives us search results when asked some questions (not setting alarms or reminders), Siri and Cortana provide us with an answer, in the same way that a person would do.
So, are they actual AI programs or not?
(By "question" I don't mean any academic-related question or asking routes/ temperature, but rather opinion based question).<issue_comment>username_1: I would classify both as having / using elements of AI, yes. But I wouldn't say either represents a truly "intelligent" (in the AGI sense) program.
But here's the rub... as you'll see in other questions asking about definitions of AI, there's a sort of memetic thing where anything that AI begins to do successfully, immediately stops being considered "AI". So AI is always an unreachable state, because it's always "something humans can do that computers can't" and once the computer *can* do it, it isn't AI anymore. So take that into consideration.
Upvotes: 3 <issue_comment>username_2: Siri and Cortana are AI to some extent. The usual label is "weak AI" (also called "narrow" or "soft" AI). It turns out the [Wikipedia article on Weak AI](https://en.wikipedia.org/w/index.php?title=Weak_AI&oldid=736852472) explicitly refers to Siri:
>
> Siri is a good example of narrow intelligence. Siri operates within a limited pre-defined range, there is no genuine intelligence, no self-awareness, no life despite being a sophisticated example of weak AI. In Forbes (2011), <NAME>wald wrote: "The iPhone/Siri marriage represents the arrival of hybrid AI, combining several narrow AI techniques plus access to massive data in the cloud." AI researcher <NAME>, on his blog in 2010, stated Siri was "VERY narrow and brittle" evidenced by annoying results if you ask questions outside the limits of the application.
>
>
>
Important to note that "mixing" weak AIs does not make a "stronger" AI, by some arguments (see Searle's [Chinese Room](https://en.wikipedia.org/wiki/Chinese_room) argument), but there is no consensus on this topic yet.
Upvotes: 5 [selected_answer]<issue_comment>username_3: They are [virtual artificial agents](https://en.wikipedia.org/wiki/Intelligent_agent) which exhibit intelligent behavior ([AI](https://en.wikipedia.org/wiki/Artificial_intelligence)).
[Tim Urban](http://waitbutwhy.com/wait-but-who) on *[Wait But Why](https://en.wikipedia.org/wiki/Wait_But_Why)* website wrote the following:
>
> The software and data behind **Siri is AI**, the woman’s voice we hear is a personification of that AI, and there’s no robot involved at all.
>
>
>
Source: [The AI Revolution: The Road to Superintelligence](http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html)
Related: [What is the difference between AI and robots?](https://ai.stackexchange.com/q/1462/8)
Upvotes: 2 <issue_comment>username_4: Short Answer, **No**.
To explain: Siri and Cortana are just **inference engines**. However applaudable their ability to synthesize text from speech and parse lexical maps from the text using machine-learning techniques, the artifact is still just a program, trained on a substantial number of Q/A tuples, that generates an output given an input. Statistically mapping the probability distribution of the words of a question and generating an answer accordingly is just **NOT** AI; at most, it can be classified as ***narrow intelligence***.
[www.cleverbot.com](http://www.cleverbot.com) is an example of such a system. The ones you mentioned are just more sophisticated and highly architectured versions of this kind; for example, connection to WolframAlpha (Siri), Computer Kernel (Cortana). In fact, there is a lot of innate use of control statements (If-Else).
Upvotes: 2
|
2016/08/08
| 1,368
| 5,290
|
<issue_start>username_0: What is the difference between artificial intelligence and robots?<issue_comment>username_1: Although there are several definitions of "robot", an essential feature of everything called "robot" is that it is capable of movement. This does not necessarily mean *displacement*; a robot arm in a factory also moves.
There is a single exception to this rule, which is bot-programs like chatbots; I will discuss them later.
Artificial Intelligence does not need to move; a chess program can be argued to be an AI, but does not move.
A robot can actually have AI; one of the definitions of robot is that it is a system, capable of *autonomous* movement. In order to be autonomous, to be able to make decisions of its own, a certain amount of AI may be necessary.
There is one class of "robots" that does not move, and does not even have physical presence; bot programs, like chatbots, that operate inside systems. I do not consider them robots, because they are not physical devices operating in the real world. A chatbot can be an AI, however - a good chatbot may have some natural language processing to interact with humans in a way that humans find natural.
To summarize; an AI can exist purely in software. But to be a robot, there must be a moving physical component in the real world.
Upvotes: 4 [selected_answer]<issue_comment>username_2: In the broadest sense, the difference is that non-robotic A(G)I *may* not be possible because, as per [this question](https://ai.stackexchange.com/questions/1415/what-kind-of-body-if-any-does-intelligence-require), it could be that "Intelligence requires a body".
More specifically, it could be that there are limitations to what the traditional (well, 1950s style) 'Brain in a vat' notion of an AI is capable of comprehending, in the absence of embodied experience such as force, motion and "the raw, unwashed world".
Upvotes: 3 <issue_comment>username_3: An [AI](http://www.oxforddictionaries.com/definition/english/artificial-intelligence#artificial-intelligence__2 "OED - AI") is a computer program designed for tasks normally requiring human [intelligence](http://www.oxforddictionaries.com/definition/english/intelligence "OED - Intelligence") (a human's ability to learn), while a [robot](http://www.oxforddictionaries.com/definition/english/robot "OED - Robot") is a machine that completes complex tasks. An AI could be used to control a robot, but they are very different.
Source: [Oxford English Dictionary](http://www.oxforddictionaries.com/), above links will direct to definitions.
Upvotes: 0 <issue_comment>username_4: Basically a [robot](https://en.wikipedia.org/wiki/Robot) is a mechanical or virtual artificial agent which exhibit intelligent behavior ([AI](https://en.wikipedia.org/wiki/Artificial_intelligence)).
---
[<NAME>](http://waitbutwhy.com/wait-but-who) on *[Wait But Why](https://en.wikipedia.org/wiki/Wait_But_Why)* website wrote the following to clear things up:
>
> First, stop thinking of robots.
>
>
> A robot is a container for AI,
> ==============================
>
>
> sometimes mimicking the human form, sometimes not
>
>
> — but the AI itself **is the computer inside the robot**.
>
>
> ### AI is the brain and the robot is its body — if it even has a body.
>
>
> For example,
>
>
> the software and data behind **Siri is AI**, the woman’s voice we hear is a personification of that AI, and there’s no robot involved at all.
>
>
>
Source: [The AI Revolution: The Road to Superintelligence](http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html)
Upvotes: 1 <issue_comment>username_5: In a general sense you can say that robot is a piece of hardware, while AI is software (sometimes hardware too).
>
> [Wikipedia](https://en.wikipedia.org/wiki/Robot) defines a robot as a machine which performs a complex set of tasks automatically.
>
> Machine - A mechanical device basically.
>
>
>
>
So, technically you can create a robot that doesn't require any kind of complex algorithms to make decisions. A simple line follower doesn't even require a microcontroller; just some gates are enough. Some other examples of robots are a robotic arm, automated control systems in industries, etc. If you think about it, even the printer in your house is a robot in itself.
>
> Artificial Intelligence is a field of Computer Science which deals with developing systems that can perform tasks rationally as if it is using intelligence (of human level) for taking decisions.
>
>
>
AI deals with complex algorithms. Some examples of AI are speech recognition, face recognition, natural language processing, etc.
AI doesn't necessarily need additional hardware. A simple desktop at home will work, while the term robot is used for external hardware that performs some autonomous task repeatedly.
Upvotes: 2 <issue_comment>username_6: * Simply we say AI is software and robot is its body.
This is because the algorithms we commonly think of as AI come in the form of software, whereas when we talk about robots, we're talking about physical automation.
In an automobile manufacturing process where automation is used, the *software* makes the decisions on what physical action the *robot* arm should take at any given time.
Upvotes: 1
|
2016/08/08
| 796
| 3,243
|
<issue_start>username_0: By reinforcement learning, I don't mean the class of machine learning algorithms such as DeepQ, etc. I have in mind the general concept of learning based on rewards and punishment.
Is it possible to create a Strong AI that does not rely on learning by reinforcement, or is reinforcement learning a requirement for artificial intelligence?
The existence of rewards and punishment implies the existence of favorable and unfavorable world-states. Must intelligence in general and artificial intelligence, in particular, have a way of classifying world-states as favorable or unfavorable?<issue_comment>username_1: Simply yes, but it can lead to over fixing of the NN.
Humans favour not dying, which is only realised once a consequence is defined for the system to realise that death is an unfavorable result. Which can be train vai observation. Allow your system to observe between 2 or more separate people/systems. Then allow opportunity to test in a safe environment with the pre existing info of the consequences that may follow, provind that if the system makes a mistake in the test/safe environment it will be saved unknownly and then informed that it made a mistake, the place system in an unsafe world in same conditions, informing it that if something happens it will die. That is the way humans grow up, and we've lasted very long with this technic.
I'm an AI Researcher and Software Engineer for the past 7 years.
Upvotes: 0 <issue_comment>username_2: It's impossible to give a definitive 'yes' answer to your question, since that would require proving that alternatives *cannot* exist.
More philosophically, it depends on what you mean by "preference over world states":
However counter-intuitive it might seem, it is conceivably possible to create Strong AI purely from local condition-action rules, in which there is no global concept of 'preference value' and/or no integrated notion of 'world state'.
Upvotes: 2 [selected_answer]<issue_comment>username_3: Simply put, we don't know how to create Strong Artificial Intelligence yet, so we don't know what is or isn't required to create it. At best we can engage in "informed speculation", in which case I'd say that the answer is more likely "yes" than "no". But that's basically just a hunch.
If you're interested in a pretty good overview of what "pieces" might be required to create Strong AI, and if you haven't read it yet, [<NAME>](http://homes.cs.washington.edu/~pedrod/)' book *The Master Algorithm* might be of interest.
Upvotes: 1 <issue_comment>username_4: **Introspect**! Do you need to know what's good/bad, pleasurable/painful or so, in order to understand and/or learn?
I am a human, hence a general intelligence, and so are you. So know thyself! I can tell for myself that I have different ways to understand and learn; some may be similar to reinforcement learning. Esp. the ~automatic, ~innate, ~unconscious ones, like motor movement, remembering tasty food and many other primitive functions.
**But** I can also understand things through ~intentional ~analytical ~logical thought; which some may call pure reason (<NAME>).
Yet, you don't need to hear all that, since you have it already in your own mind.
Upvotes: 0
|
2016/08/09
| 891
| 2,999
|
<issue_start>username_0: For example, search engine companies want to classify their image searches into 2 categories (which they already do that) such as: [NSFW](https://en.wikipedia.org/wiki/Not_safe_for_work) (nudity, porn, brutality) and safe to view pictures.
How can artificial neural networks achieve that, and at what success rate? Can they be easily mistaken?<issue_comment>username_1: The 2015 paper entitled "[Applying deep learning to classify pornographic images and videos](https://arxiv.org/pdf/1511.08899.pdf)" applied various types of convnets for detecting pornography. The proposed architecture achieved **94.1% accuracy** on the NPDI dataset, which contains 800 videos (400 porn, 200 non-porn "easy" and 200 non-porn "difficult"). More traditional computer vision methods achieved 90.9% accuracy. The proposed architecture also performs very well regarding the ROC curve.
There do not seem to be any works yet regarding the other aspects of NSFW.
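For intuition about the basic operation inside such convnets, here is a from-scratch NumPy sketch of a single convolution with a hand-made vertical-edge kernel. Real networks stack many layers and *learn* their kernels from data; this toy example only shows the sliding-window computation:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (really cross-correlation, as in most DL
    libraries): slide the kernel over the image and take dot products."""
    h, w = kernel.shape
    H, W = image.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

# A vertical-edge detector applied to a toy 5x5 "image"
image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

feature_map = conv2d(image, kernel)
print(feature_map)  # strong (negative) response along the vertical edge
```

A full classifier like the one in the paper stacks dozens of such learned filters with nonlinearities and pooling, ending in a classification layer that outputs the class probabilities.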
Upvotes: 3 [selected_answer]<issue_comment>username_2: The problem detecting NSFW has been around for over two decades.
---
This study from 2005 about [*finding naked people*](http://link.springer.com/chapter/10.1007%2F3-540-61123-1_173) demonstrates a strategy for finding such images: color and texture properties are used to build an effective mask for skin regions, and geometric constraints on the human structure are then applied to group those regions into a human figure. This method demonstrated
>
> 60% precision and 52% recall on a test set of 138 uncontrolled images of naked people.
>
>
>
Here are a few figures from the study explaining the algorithm:
[](https://i.stack.imgur.com/hMC5ll.png)
[](https://i.stack.imgur.com/qgjcYl.png)
**The following post contains visualizations of nudity for scientific purposes (hover to display):**
>
> [](https://i.stack.imgur.com/qlNd8m.png)
>
>
>
---
A more recent approach is using [convolutional networks](https://ai.stackexchange.com/questions/tagged/conv-neural-network). This [study from 2014](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.683.4319) ([PDF](https://www.cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf)) demonstrated impressive classification performance based on the ImageNet dataset. It's not clear '*[how and why](https://ai.stackexchange.com/q/1479/8)*' they perform so well; however, they can be used for classification of images with a very low error rate.
For further details, check: [What convolutional neural networks look at when they see nudity](http://blog.clarifai.com/what-convolutional-neural-networks-see-at-when-they-see-nudity/).
You will find the code example and the heatmap for how convnets see NSFW in the above link.
Upvotes: 1
|
2016/08/09
| 5,474
| 19,500
|
<issue_start>username_0: Do scientists or research experts know, from the kitchen, what is happening inside a complex "deep" neural network, with at least millions of connections firing at an instant? Do they understand the process behind this (e.g. what is happening inside and how it works exactly), or is it a subject of debate?
For example this [study](https://www.cs.nyu.edu/%7Efergus/papers/zeilerECCV2014.pdf) says:
>
> However there is no clear understanding of *why* they perform so well, or *how* they might be improved.
>
>
>
So does this mean that scientists actually don't know how complex convolutional network models work?<issue_comment>username_1: It depends on what you mean by "know what is happening".
Conceptually, yes: ANNs perform nonlinear regression. The actual expression represented by the weight matrix/activation function(s) of an ANN can be explicitly expanded in symbolic form (e.g. containing nested sigmoid sub-expressions such as $\frac{1}{1+e^{-\frac{1}{1+e^{-\dots}}}}$).
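A sketch of this point: even a small network is just one explicit formula. The 1-1-1 architecture and the weight values below are made up purely for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny 1-1-1 network: y = sigmoid(w2 * sigmoid(w1 * x + b1) + b2).
# The weights are arbitrary illustrative values.
w1, b1, w2, b2 = 2.0, -1.0, 3.0, 0.5

def net(x):
    return sigmoid(w2 * sigmoid(w1 * x + b1) + b2)

# The same network written out as a single closed-form expression:
def net_expanded(x):
    return 1.0 / (1.0 + math.exp(-(3.0 / (1.0 + math.exp(-(2.0 * x - 1.0))) + 0.5)))

print(abs(net(0.7) - net_expanded(0.7)) < 1e-12)  # the two forms agree
```

The expanded form exists for any network; it just becomes astronomically long for deep models, which is part of why "knowing" it symbolically buys little insight.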
However, if by 'know' you mean *predicting the output of some specific (black box) ANN*, by some other means, then the obstacle is the presence of chaos in an ANN that has [high degrees of freedom](http://sprott.physics.wisc.edu/pubs/paper234.pdf).
Here's also some relatively recent work by <NAME> on understanding ANNs through [visualisation](http://arxiv.org/pdf/1506.06579.pdf).
Upvotes: 5 <issue_comment>username_2: Not sure if this is what you are searching for, but google extracted images from networks when they were fed with white noise.
See [Inceptionism: Going Deeper into Neural Networks (Google Research Blog)](https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html).
This kind of represents what the network knows.
Upvotes: 3 <issue_comment>username_3: I'm afraid I don't have the specific citations handy, but I have seen/heard quotes by experts like <NAME> and <NAME> where they clearly say that we do not really understand neural networks. That is, we understand something of the *how* they work (for example, the math behind back propagation) but we don't really understand *why* they work. It's sort of a subtle distinction, but the point is that no, we don't understand the very deepest details of how exactly you go from a bunch of weights, to, say, recognizing a cat playing with a ball.
At least in terms of image recognition, the best explanation I've heard is that successive layers of a neural network learn more sophisticated features, composed of the more granular features from earlier levels. That is to say, the first layer might recognize "edges" or "straight lines". The next layer might then learn geometric shapes like "box", or "triangle", and then a higher layer might learn "nose" or "eye" based on those earlier features, and then a higher level layer still learns "face" made up from "eye", "nose", "jaw", etc. But even that, as I understand it, is still hypothetical and/or not understood in complete detail.
Upvotes: 4 <issue_comment>username_4: Here is an answer by <NAME> to the question [What is theory behind deep learning?](https://www.quora.com/What-is-theory-behind-deep-learning/answer/Carlos-E-Perez?srid=CpS&share=0f965f46)
>
> [...]
>
>
> The underlying mathematics of Deep Learning has been in existence for several decades, however the impressive results that we see today are part a consequence of much faster hardware, more data and incremental improvements in methods.
>
>
> Deep Learning in general can be framed as optimization problem where the objective is a function of the model error. This optimization problem is very difficult to solve consider that the parameter space of the model (i.e. weights of the neural network) leads to a problem in extremely high dimension. An optimization algorithm could take a very long time to explore this space. Furthermore, there was an unverified belief that the problem was non-convex and computation would forever be stuck in local minima.
>
>
> [...]
>
>
> The theory of why machines actually converge to an attractor or in other words learn to recognize complex patterns is still unknown.
>
>
>
To sum up: we have some ideas, but we're not quite sure.
Upvotes: 3 <issue_comment>username_5: There are many approaches that aim to make a trained neural network more interpretable and less like a "black box", specifically *convolutional neural networks* that you've mentioned.
### Visualizing the activations and layer weights
[Activations visualization](https://medium.com/@awjuliani/visualizing-neural-network-layer-activation-tensorflow-tutorial-d45f8bf7bbc4) is the first obvious and straight-forward one. For ReLU networks, the activations usually start out looking relatively blobby and dense, but as the training progresses the activations usually become more sparse (most values are zero) and localized. This sometimes shows what exactly a particular layer is focused on when it sees an image.
Another great work on activations that I'd like to mention is [deepvis](https://www.youtube.com/watch?v=AgkfIQ4IGaM) that shows reaction of every neuron at each layer, including pooling and normalization layers. Here's how they [describe it](http://yosinski.com/deepvis):
>
> In short, we’ve gathered a few different methods that allow you to
> “triangulate” what feature a neuron has learned, which can help you
> better understand how DNNs work.
>
>
>
The second common strategy is to visualize the weights (filters). These are usually most interpretable on the first CONV layer which is looking directly at the raw pixel data, but it is possible to also show the filter weights deeper in the network. For example, the first layer usually learns Gabor-like filters that basically detect edges and blobs.
[](https://i.stack.imgur.com/D6pXk.jpg)
### Occlusion experiments
Here's the idea. Suppose that a ConvNet classifies an image as a dog. How can we be certain that it’s actually picking up on the dog in the image as opposed to some contextual cues from the background or some other miscellaneous object?
One way of investigating which part of the image some classification prediction is coming from is by plotting the probability of the class of interest (e.g. dog class) as a function of the position of an occluder object.
If we iterate over regions of the image, replace it with all zeros and check the classification result, we can build a 2-dimensional heat map of what's most important for the network on a particular image. This approach has been used in [Matthew Zeiler’s Visualizing and Understanding Convolutional Networks](https://arxiv.org/pdf/1311.2901.pdf) (that you refer to in your question):
[](https://i.stack.imgur.com/O10WA.jpg)
### Deconvolution
Another approach is to synthesize an image that causes a particular neuron to fire, basically what the neuron is looking for. The idea is to compute the gradient with respect to the image, instead of the usual gradient with respect to the weights. So you pick a layer, set the gradient there to all zeros except for a one at the neuron of interest, and backprop to the image.
Deconv actually does something called *guided backpropagation* to make a nicer looking image, but it's just a detail.
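The gradient-ascent-on-the-input idea can be sketched on a toy differentiable "neuron" (a plain dot product here, standing in for a real network; with a ConvNet the gradient would come from backprop):

```python
import numpy as np

# Toy "neuron": activation = w . x, so d(activation)/dx = w.
w = np.array([1.0, -2.0, 0.5])

def activation(x):
    return float(w @ x)

# Gradient ascent on the *input*, not the weights: nudge the "image"
# toward whatever makes the neuron fire, keeping its norm bounded.
x = np.array([0.1, 0.1, 0.1])
for _ in range(100):
    x = x + 0.1 * w               # ascend the activation gradient
    x = x / np.linalg.norm(x)     # project back to the unit sphere

# The synthesized input converges to the direction the neuron prefers.
print(np.allclose(x, w / np.linalg.norm(w), atol=1e-3))
```

For a deep network the same loop runs over a full image tensor, usually with regularizers (blurring, jitter) to keep the result looking natural, which is essentially what the deepvis-style tools do.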
### Similar approaches to other neural networks
Highly recommend [this post by <NAME>](http://karpathy.github.io/2015/05/21/rnn-effectiveness/), in which he plays a lot with Recurrent Neural Networks (RNN). In the end, he applies a similar technique to see what the neurons actually learn:
>
> The neuron highlighted in this image seems to get very excited about
> URLs and turns off outside of the URLs. The LSTM is likely using this
> neuron to remember if it is inside a URL or not.
>
>
>
### Conclusion
I've mentioned only a small fraction of results in this area of research. It's pretty active and new methods that shed light to the neural network inner workings appear each year.
To answer your question, there's always something that scientists don't know yet, but in many cases they have a good picture (literally) of what's going on inside and can answer many particular questions.
To me the quote from your question simply highlights the importance of research of not only accuracy improvement, but the inner structure of the network as well. As <NAME> tells in [this talk](https://www.youtube.com/watch?v=ghEmQSxT6tw), sometimes a good visualization can lead, in turn, to better accuracy.
Upvotes: 7 [selected_answer]<issue_comment>username_6: Short answer is **no**.
*Model interpretability* is a hyper-active and hyper-hot area of current research (think of holy grail, or something), which has been brought forward lately not least due to the (often tremendous) success of deep learning models in various tasks; these models are currently only black boxes, and we naturally feel uncomfortable about it...
Here are some general (and recent, as of Dec 2017) resources on the subject:
* A recent (July 2017) article in Science provides a nice overview of the current status & research: [How AI detectives are cracking open the black box of deep learning](http://www.sciencemag.org/news/2017/07/how-ai-detectives-are-cracking-open-black-box-deep-learning) (no in-text links, but googling names & terms will pay off)
* DARPA itself is currently running a program on [Explainable Artificial Intelligence (XAI)](https://www.darpa.mil/program/explainable-artificial-intelligence)
* There was a workshop in NIPS 2016 on [Interpretable Machine Learning for Complex Systems](http://nuit-blanche.blogspot.gr/2016/12/nips2016-interpretable-machine-learning.html), as well as an [ICML 2017 tutorial on Interpretable Machine Learning](http://people.csail.mit.edu/beenkim/icml_tutorial.html) by [<NAME>](https://beenkim.github.io/) of Google Brain.
And on a more practical level (code etc):
* The What-If tool by Google, a brand new (September 2018) feature of the open-source TensorBoard web application, which let users analyze an ML model without writing code ([project page](https://pair-code.github.io/what-if-tool/), [blog post](https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html))
* The Layer-wise Relevance Propagation (LRP) toolbox for neural networks ([paper](http://www.jmlr.org/papers/v17/15-618.html), [project page](http://www.explain-ai.org/), [code](https://github.com/sebastian-lapuschkin/lrp_toolbox), [TF Slim wrapper](https://github.com/VigneshSrinivasan10/interprettensor))
* FairML: Auditing Black-Box Predictive Models, by Cloudera Fast Forward Labs ([blog post](http://blog.fastforwardlabs.com/2017/03/09/fairml-auditing-black-box-predictive-models.html), [paper](https://arxiv.org/abs/1611.04967), [code](https://github.com/adebayoj/fairml))
* LIME: Local Interpretable Model-agnostic Explanations ([paper](https://arxiv.org/abs/1602.04938), [code](https://github.com/marcotcr/lime), [blog post](https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime), [R port](https://cran.r-project.org/web/packages/lime/index.html))
* A very recent (November 2017) paper by <NAME>, [Distilling a Neural Network Into a Soft Decision Tree](https://arxiv.org/abs/1711.09784), with an independent [PyTorch implementation](https://github.com/kimhc6028/soft-decision-tree)
* SHAP: A Unified Approach to Interpreting Model Predictions ([paper](https://arxiv.org/abs/1705.07874), authors' [Python code](https://github.com/slundberg/shap), [R package](https://github.com/redichh/ShapleyR))
* Interpretable Convolutional Neural Networks ([paper](https://arxiv.org/abs/1710.00935), authors' [code](https://github.com/zqs1022/interpretableCNN))
* Lucid, a collection of infrastructure and tools for research in neural network interpretability by Google ([code](https://github.com/tensorflow/lucid); papers: [Feature Visualization](https://distill.pub/2017/feature-visualization/), [The Building Blocks of Interpretability](https://distill.pub/2018/building-blocks/))
* Transparency-by-Design (TbD) networks ([paper](https://arxiv.org/abs/1803.05268), [code](https://github.com/davidmascharka/tbd-nets), [demo](https://hub.mybinder.org/user/davidmascharka-tbd-nets-nex18rln/notebooks/full-vqa-example.ipynb))
* SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability ([paper](https://arxiv.org/abs/1706.05806), [code](https://github.com/google/svcca), [Google blog post](https://ai.googleblog.com/2017/11/interpreting-deep-neural-networks-with.html))
* TCAV: Testing with Concept Activation Vectors ([ICML 2018 paper](https://arxiv.org/abs/1711.11279), [Tensorflow code](https://github.com/tensorflow/tcav/))
* Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization ([paper](https://arxiv.org/abs/1610.02391), authors' [Torch code](https://github.com/ramprs/grad-cam), [Tensorflow code](https://github.com/Ankush96/grad-cam.tensorflow), [PyTorch code](https://github.com/meliketoy/gradcam.pytorch), Keras [example notebook](http://nbviewer.jupyter.org/github/fchollet/deep-learning-with-python-notebooks/blob/master/5.4-visualizing-what-convnets-learn.ipynb))
* Network Dissection: Quantifying Interpretability of Deep Visual Representations, by MIT CSAIL ([project page](http://netdissect.csail.mit.edu/), [Caffe code](https://github.com/CSAILVision/NetDissect), [PyTorch port](https://github.com/CSAILVision/NetDissect-Lite))
* GAN Dissection: Visualizing and Understanding Generative Adversarial Networks, by MIT CSAIL ([project page](https://gandissect.csail.mit.edu/), with links to paper & code)
* Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions ([paper](https://arxiv.org/abs/1811.08011), [code](https://github.com/gudovskiy/e2x))
* InterpretML by Microsoft ([code](https://github.com/interpretml/interpret) still in alpha)
* Anchors: High-Precision Model-Agnostic Explanations ([paper](https://arxiv.org/abs/1905.07697), [code](https://github.com/marcotcr/anchor))
* Diverse Counterfactual Explanations (DiCE) by Microsoft ([paper](https://arxiv.org/abs/1905.07697), [code](https://github.com/interpretml/DiCE), [blog post](https://www.microsoft.com/en-us/research/blog/open-source-library-provides-explanation-for-machine-learning-through-diverse-counterfactuals/))
* Axiom-based Grad-CAM (XGrad-CAM): Towards Accurate Visualization and Explanation of CNNs, a refinement of the existing Grad-CAM method ([paper](https://arxiv.org/abs/2008.02312), [code](https://github.com/Fu0511/XGrad-CAM))
Lately, there has been a surge of interest to start building a more theoretical basis for deep learning neural nets. In this context, renowned statistician and compressive sensing pioneer David Donoho has very recently (fall 2017) started offering a course at Stanford, [Theories of Deep Learning (STATS 385)](https://stats385.github.io/), with almost all the material available online; it is highly recommended...
**UPDATES**:
* [Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/), an online Gitbook by <NAME> with [R code](https://github.com/christophM/iml) (although in mostly covers algorithms other than neural networks)
* A [Twitter thread](https://twitter.com/ledell/status/995930308947140608), linking to several interpretation tools available for R.
* A short (4 hrs) online course by Kaggle, [Machine Learning Explainability](https://www.kaggle.com/learn/machine-learning-explainability), and the accompanying [blog post](https://towardsdatascience.com/why-model-explainability-is-the-next-data-science-superpower-b11b6102a5e0)
* A new ICML 2018 Tutorial, [Toward theoretical understanding of deep learning](http://unsupervised.cs.princeton.edu/deeplearningtutorial.html), by <NAME>
* A **whole bunch** of resources in the [Awesome Machine Learning Interpretability repo](https://github.com/jphall663/awesome-machine-learning-interpretability)
**NOTE**: I do no longer keep this answer updated; for updates, see my answer in [Which explainable artificial intelligence techniques are there?](https://ai.stackexchange.com/questions/12870/which-explainable-artificial-intelligence-techniques-are-there/24138#24138)
Upvotes: 4 <issue_comment>username_7: >
> Do scientists know what is happening inside artificial neural networks?
>
>
>
### YES
>
> Do scientists or research experts know from the kitchen what is happening inside complex "deep" neural network with at least millions of connections firing at an instant?
>
>
>
I guess "to know from the kitchen" means "to know in detail"?
Let me give you a series of analogies:
1. Does an airplane engineer know from the kitchen what happens inside the airplane?
2. Does a chip designer know in detail what happens in the chip (s)he designed?
3. Does a civil engineer know everything about the house he constructed?
The devil is in the detail, but a crucial point here is that it's about artificial structures. They don't randomly appear. You need a lot of knowledge to get anything useful. For Neural Networks, I would say it took roughly 40 years from the publication of the key idea (Rosenblatt perceptron, 1957) to the first application (US Postal Service, 1989). And from there again 13 years of active research to really impressive systems (ImageNet 2012).
**What we know super well is how the training works**. Because it needs to be implemented. So on a very small structure, we know it in detail.
Think of computers. The chip designers know very well how their chip works. But they will likely only have a very rough idea how the Linux operating system works.
Another example is physics and chemistry: Physics describes the core forces of the universe. Does that mean they know everything about chemistry as well? Hell no! A "perfect" physicist can explain everything in chemistry ... but it would be pretty much useless. He would need a lot more information and would not be able to skip the irrelevant parts. Simply because he "zoomed in" too much - considers details which are in practice neither interesting nor important. Please note that the knowledge of the physicist is not wrong. Maybe one could even deduce the knowledge of the chemist from it. But this "high-level" understanding of molecule interaction is missing.
The key insight from those two examples are abstraction layers: **You can build complexity from simple structures**.
What else?
**We know well what is in principle achievable with the neural networks** we design:
* A neural network designed to play Go - no matter how sophisticated - will never even be able to play chess. You can, of course, add another abstraction layer around it and combine things. But this approach needs humans.
* A neural network designed for distinguishing dogs from cats which has only seen pudels and Persian cats will likely perform really bad when it has to decide for Yorkshire Terriers.
Oh, and of course we have analytical approaches for neural networks. I wrote my masters thesis about [Analysis and Optimization of Convolutional Neural Network Architectures](https://arxiv.org/pdf/1707.09725.pdf). In this context [LIME](https://github.com/marcotcr/lime) (Local Interpretable Model-Agnostic Explanations) is nice:
[](https://i.stack.imgur.com/wXzfn.png)
Upvotes: 2
|
2016/08/09
| 2,729
| 10,579
|
<issue_start>username_0: For example, I would like to train my neural network to recognize the type of actions (e.g. in commercial movies or some real-life videos), so I can "ask" my network in which video or movie (and at what frames) somebody was driving a car, kissing, eating, was scared or was talking over the phone.
What are the current successful approaches to that type of problem?<issue_comment>username_1: A neural network can be used but must be trained to expect the information (pattern of data, pixels or groupings of loose range such as color, and location) at any given location in the network, first a vision system must but implemented. Then a facial recognition, multiple partial individual body fixing (finding body part and there partners to a person) then training on some states and you'll have it work. MIT have done research and have made a seemy accurate implementation.
I've been an AI researcher and software engineer for the past 7 years.
Upvotes: 2 <issue_comment>username_1: MIT have done research and implemented an incomplete version of action video recognition.
With the use of MATLAB, NNetworks and a large set of training videos.
My suggested set of comments on my previous answer indicate the usage of a multi interconnected NNet, verus MIT's image based NNet.
Upvotes: 2 <issue_comment>username_2: This [study from 2012](http://dx.doi.org/10.1109/TPAMI.2012.59) uses 3D [convolutional neural networks (CNN)](https://ai.stackexchange.com/questions/tagged/conv-neural-network) for automated recognition of human actions in surveillance videos. The 3D CNN model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. A very similar deep learning approach based on 3D CNN is demonstrated in the [LIRIS and Orange Labs study from 2011](http://liris.cnrs.fr/Documents/Liris-5228.pdf).
---
This [Oxford study from 2014](https://www.robots.ox.ac.uk/~vgg/publications/2014/Simonyan14b/simonyan14b.pdf.pdf) also uses a similar approach, but with two-stream CNN which incorporates spatial and temporal networks which can achieve good performance despite having limited training data. It recognises action from motion in the form of dense optical flow. For example:
[](https://i.stack.imgur.com/RWcT3.png)
---
[Another study from 2007](http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=4224216) demonstrates a method by detecting human falls based on a combination of motion history and human shape variation by analysing the video frames. It uses Motion History Image (MHI) to quantify the motion of the person.
[](https://i.stack.imgur.com/Hzn4z.png)
Source: [harishrithish7/Fall-Detection](https://github.com/harishrithish7/Fall-Detection) at GitHub
---
An alternative general approach could be action detection based on the posture using DNN. See: [How to achieve recognition of postures and gestures?](https://ai.stackexchange.com/q/1644/8)
Upvotes: 3 <issue_comment>username_2: There are several approaches as to how this can be achieved.
One recent study from 2015 about [Action Recognition in Realistic Sports Videos](http://link.springer.com/chapter/10.1007%2F978-3-319-09396-3_9)[PDF](http://cs.stanford.edu/~amirz/index_files/Springer2015_action_chapter.pdf) uses the action recognition framework based on the three main steps of feature extraction (shape, post or contextual information), dictionary learning to represent a video, and classification ([BoW framework](https://en.wikipedia.org/wiki/Bag-of-words_model)).
A few examples of methods:
* Spatio-Temporal Structures of Human Poses
[](https://i.stack.imgur.com/AgfDk.png)
* a joint shape-motion
[](https://i.stack.imgur.com/1qR9x.png)
* Multi-Task Sparse Learning (MTSL)
* Hierarchical Space-Time Segments
[](https://i.stack.imgur.com/jBMuj.png)
* Spatio-Temporal Deformable Part Models (SDPM)
[](https://i.stack.imgur.com/5Zehk.jpg)
Here are the results based on training of 10 action classes based on the UCF sports dataset:
[](https://i.stack.imgur.com/v77Pv.png)
Source: [Action Recognition in Realistic Sports Videos](http://link.springer.com/chapter/10.1007%2F978-3-319-09396-3_9).
Upvotes: 4 [selected_answer]<issue_comment>username_3: **No General Movie Search Yet**
There have been successes in recognizing a very narrow sequence of a very narrow set of possible actions, but nothing like a general movie searching system that can return a set of matches with the start time, end time, and movie instance for each match to one of the search criteria listed in this question.
* Somebody was driving a car
* Kissing
* Eating
* Scared
* Talking over the phone
**Normalizing the List**
First of all, "Was scared," is not the description of an action. It should be, "Becoming scared." Secondly, "Talking over the phone," is not a proper action description. It should be a conjunctive action such as, "Talking into a phone *AND* listening to the same phone." To make the list homogeneous in format, the first item should be "Car driving," since the actor is human in every other case.
* Car driving
* Kissing
* Eating
* Becoming scared
* Talking into a phone and listening to the same phone.
**Realistic System Design Expectations**
It is unrealistic to think that an artificial neural net, by itself, can be trained to return as output the set of start and stop ranges and associated movie instances from a database of movies and one of the above list items as input. This will require a complex system with many ANNs and other ML devices and may require other AI components that are not activation type networks at all. Certainly convolution kernels and various types of encoders should be considered as key system components.
You will need a large amount of training data to cover the above six cases (the last of the five items actually being two distinct actions that we normally associate and consider one). If you want to detect more actions, you will need a large amount of training data for them too.
**Verbs and Nouns**
The reason this question is interesting to me is that recognizing ACTIONS is not the same as recognizing ITEMS. All mammals learn ITEMS first and ACTIONS later. Linguistically, nouns come before verbs in child language development. That is because, just as detecting edges is preliminary to detecting shapes, which is preliminary to detecting objects, detecting motion is preliminary to detecting action.
Verbs like, "Eating," are an abstraction over the top of the motion, and, in the case of eating, the motion is complex. Also, eating is not the same thing as gum chewing, so the sequence detected must be as follows:
1. Insertion of food into the face through the mouth
2. Chewing
3. Swallowing
The probability of a sequence is the product of the probability of its parts so that math is simple and easy to implement. Concurrency, as in the case of conjunctive actions like talking into and listening to the same phone, is also relatively easy to handle in general.
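The product rule for sequences, sketched with made-up per-step probabilities (the independence of the steps is itself a simplifying assumption):

```python
# Probability that a full "eating" sequence is present, assuming the
# per-step detections are independent. The probabilities are invented
# for illustration only.
steps = {
    "insertion of food": 0.9,
    "chewing": 0.8,
    "swallowing": 0.7,
}

p_sequence = 1.0
for p in steps.values():
    p_sequence *= p

print(round(p_sequence, 3))  # 0.504
```

Note how quickly the product shrinks: even three fairly confident detectors yield barely 50% confidence in the whole sequence, which is one reason sparse-evidence inference matters.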
**A Realistic Approach**
Certainly, generalization (and more specifically feature extraction) will need to occur in object recognition, collision detection, motion detection, facial recognition, and other planes simultaneously. A complex topology, perhaps employing equilibria as in GAN design, will most likely be necessary to assemble elements of criteria associated with the movie query string and to run windows over the frames of each movie.
To provide a service that returns results within a few days or weeks will probably require a cluster and DSP hardware (perhaps leveraging GPUs).
**Special Cases that Human Brains Handle**
Determining how long one of the two elements of concurrency can be undetected before it invalidates the conjunction can be tricky. (How long can one not speak into the phone before it appears that it is no longer considered phone conversation?)
If in the movie, only the swallowing is shown, a human can infer eating. That kind of conclusion reliability from sparse data is a huge AI challenge discussed in various contexts throughout the literature.
**The Emergence of Associated Technology — A Projection**
I suspect that the system topography comprised of ANNs, encoders, convolution kernels, and other components to perform the search for any of a select set of actions will emerge within the next ten years. Work seems to be tracking in that direction in the literature.
A system that will acquire its own training information, sustainably grow in knowledge, and perform general searches of increasing breadth and complexity may be anywhere from forty to two hundred years out. It is difficult to predict.
**Gross Overoptimistic Predictions**
Every generation seems to view knowledge growth as an exponential function and tends to make unrealistic predictions about the advent of certain coveted technology capabilities. Most of the predictions fail dramatically. I have come to believe that exponential growth is an illusion created by the inverse exponential decay of interest in the past with respect to time.
We lose track of the energy and rate of growth in eras before us because they become socially irrelevant. People into scientific history, like Whitehead, Kuhn, and Ellul know that technology has moved forward quickly for at least a few hundred years. Vernadsky inferred in his *The Biosphere* that life may not have arisen, that like matter and energy, it may always have existed. I wonder if technology has been moving at an essentially constant rate for the last 50,000 years.
Germany decided to double its solar panel energy output every year and published its exponential success, until a few years ago when doubling it again would cost a hundred billion dollars more than what they had to spend. They stopped publishing the exponential growth graphs.
Upvotes: 0
|
2016/08/09
| 815
| 3,260
|
<issue_start>username_0: I'm playing with an LSTM to generate text. In particular, this one:
<https://raw.githubusercontent.com/fchollet/keras/master/examples/lstm_text_generation.py>
It works on quite a big demo text set from Nietzsche and says
>
> If you try this script on new data, make sure your corpus
> has at least ~100k characters. ~1M is better.
>
>
>
This pops up a couple of questions.
A.) If all I want is an AI with a very limited vocabulary, where the generated text should be short sentences following a basic pattern.
E.g.
*I like blue sky with white clouds*
*I like yellow fields with some trees*
*I like big cities with lots of bars*
...
Would it then be reasonable to use a much much smaller dataset?
B.) If the dataset really needs to be that big. What if I just repeat the text over and over to reach the recommended minimum? If that would work though, I'd be wondering how that is any different from just taking more iterations of learning with the same shorter text?
Obviously I can play with these two questions myself and in fact I am experimenting with it. One thing I already figured out is that with a shorter text following a basic pattern I can get to a very low loss (~0.04) quite fast, but the predicted text just turns out as gibberish.
My naive explanation would be that there are just not enough samples to test whether the gibberish actually makes sense or not. But then again, I wonder if more iterations or duplicating the content would actually help.
I'm trying to experiment with these questions myself, so please don't think I'm just too lazy and am aiming for others to do the work. I'm just looking for more experienced people to give me a better understanding of the mechanics that influence these things.<issue_comment>username_1: A.) The algorithm can only learn patterns from the data, so if you want it to align to a specific functional form of language, it would make sense to curate your data to only contain sentences of that very structure.
If you tune your hyper-parameters such that the model is not as complex as the default, it should work for datasets that are smaller. But this relies on the observation that the thing you are trying to learn isn't as complex (as opposed to learning from a large corpus of Nietzsche).
B.) What is happening is that you are *overfitting the data*, such that the LSTM isn't generalizing to your intended goal. In essence, overfitting means that your model is learning irrelevant details that *by chance* happen to predict the intended goal in the training data. To alleviate this, use a validation set.
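A minimal sketch of such a hold-out split for character data (the windowing mimics the Keras example script; the corpus and `maxlen` here are arbitrary, and the model itself is omitted):

```python
# Hold out the tail of the corpus as a validation set. If training loss
# keeps falling while validation loss stalls or rises, you're overfitting.
text = "I like blue sky with white clouds. I like big cities with bars. " * 40
maxlen, step = 20, 3             # window length and stride, as in the example

split = int(0.9 * len(text))     # 90% train / 10% validation
train_text, val_text = text[:split], text[split:]

def windows(t):
    """Cut t into (maxlen chars -> next char) training pairs."""
    xs, ys = [], []
    for i in range(0, len(t) - maxlen, step):
        xs.append(t[i:i + maxlen])
        ys.append(t[i + maxlen])
    return xs, ys

x_train, y_train = windows(train_text)
x_val, y_val = windows(val_text)
print(len(x_train), len(x_val))  # validation pairs are never trained on
```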
And finally (but very importantly), these machine learning techniques don't learn the same way as humans do. To understand why you'd need to learn the math behind NNs, and I can't think of an intuitive explanation as to why.
Upvotes: 2 <issue_comment>username_2: LSTMs are good and everything, but you can also try some more exotic methods, like convolutional neural networks for NLP. Perhaps they would be better suited to this need. Have a look at these:
1. <http://www.wildml.com/2016/01/attention-and-memory-in-deep-learning-and-nlp/>
2. <http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/>
Upvotes: 0
|
2016/08/09
| 376
| 1,630
|
<issue_start>username_0: For example, I would like to implement a transparent AI in an RTS game which doesn't offer any AI API (like old games), and I'd like to use an image recognition algorithm for detecting objects, which can then talk to another algorithm that is responsible for the logic.
Given that I'd like to use two neural networks, what are the approaches to set up the communication between them? Is it just a matter of exporting the findings of the first network (e.g. a CNN) as a list of features found on the screen, then using that as input for the second network? Or is it more complex than that, or do I need more than two networks?<issue_comment>username_1: The underlying abstraction (which is essentially what you'd be using the first network for) is that of reducing the state-space of the raw input via feature extraction/synthesis and/or dimensionality reduction.
At present, there are few definite rules for doing this: practice is more a question of 'informed trial and error'.
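To make the abstraction concrete, here is a purely illustrative sketch (both "networks" are toy stand-ins, and the feature names are invented for the example):

```python
# Toy pipeline: a perception stage reduces the raw frame to a small feature
# vector, and that vector is the entire interface to the decision stage.
def perception_net(frame):
    """Stand-in for a trained CNN: raw pixels -> feature vector."""
    # Invented features: [enemy_count, own_unit_count, resource_markers]
    return [frame.count("E"), frame.count("U"), frame.count("$")]

def logic_net(features):
    """Stand-in for the decision network: feature vector -> action."""
    enemies, units, resources = features
    if enemies > units:
        return "retreat"
    return "attack" if enemies else "gather"

frame = "..E..U.U..$$."   # toy 'screen capture' as a string
action = logic_net(perception_net(frame))
print(action)
```

In a real system the two stages could also be trained jointly, but the feature vector remains the communication channel between them.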
If you add some information to your question regarding what has been previously attempted in this area (e.g. on the
[ALE](https://ai.stackexchange.com/questions/1490/what-are-the-benefits-of-the-vgdl-over-the-ale) platform), this it might be possible to offer some more specific advice.
Upvotes: 3 [selected_answer]<issue_comment>username_2: That depends on what type of network you want to use for your second network, instead of feeding the outputs of the first layer, it would be much better if you jointly train both the networks. But that depends on the architecture of the second network ('logic' network).
Upvotes: 0
|
2016/08/09
| 1,688
| 6,418
|
<issue_start>username_0: Do we know why Tesla's Autopilot mistook a high-sided lorry for empty sky, resulting in a fatal crash involving a car in self-drive mode? Was it the AI's fault or something else? Is there any technical explanation for why this happened?
The articles [Tesla Driver In First Self-Drive Fatal Crash](http://news.sky.com/story/tesla-driver-in-first-self-drive-fatal-crash-10330121) and [Tesla driver killed in crash with Autopilot active, NHTSA investigating](http://www.theverge.com/2016/6/30/12072408/tesla-autopilot-car-crash-death-autonomous-model-s) talk about this topic, but do not go into detail.<issue_comment>username_1: As far as I know, Tesla's Autopilot is not a 100% AI pilot; it's an *assistant*: when it detects hands off the wheel, it slows down. So it's incorrect to speak of an AI mistake: the system is not trained/designed to drive the car all by itself. The human driver is responsible in that incident.
Upvotes: 2 <issue_comment>username_2: Tesla's technology is assistive, as Alexey points out, so this is not a case of an autonomous system (e.g. an AGI) doing some fatal stunt (the product name AutoPilot is famously misleading). Now on why the car assistance led to this tragic accident, there is some information related to AI technologies.
---
*Warning: I cannot find again the source critical to the next paragraph, and reading again pages over pages, I cannot find similar argument in other reports. I still remember vividly the point below, but please keep in mind it may be incorrect. The rest of the answer is weakly related, so I leave it all, with this warning.*
An independent report (link needed, I can't find it...) explained that the assistive system was unable to detect the truck due to an exceptionally low contrast (bright sky perceived as white---colour of the truck). The report also said that a human driver would have been unable to make the difference either. In other words, it is possible that car sensors (presumably camera) and the human eye could not have detected an obstacle, and could not have triggered any safety measure. This [short graphical explanation](http://www.nytimes.com/interactive/2016/07/01/business/inside-tesla-accident.html) sums up the car sensors: Camera, radar, GPS, etc.
The assistive sub-system is based on proprietary AI technologies. We can *only speculate* under some hypotheses. This is not very useful, honestly, except for **illustration purposes**. Assuming the assistive system relies on ML technologies to learn about obstacles from a video stream (such systems do exist):
* It may be that the learning data was not "good enough" to cover the truck scenario.
* It may be that the technology was not powerful enough yet (lacking generalization power, or simply too slow).
* It may be a hardware problem, notably from the sensors: If the "car's eyes" are defective, the "car's brain" (the assistive system) is unable to react properly.
Why those technologies did not work in that case will remain a secret. We can say however that *any system*---whether built with AI technology or not---has limits. Beyond these limits, the system reaction is unpredictable: It could stop, reset, shutdown. The difficulty here is to define what a "default behaviour" is. A machine will basically do whatever it is designed to do, so an AI-based system too.
We could speculate even more on what would happen if the assistive system was really autonomous, the elusive AGI, but that is really not the case here.
Upvotes: 3 [selected_answer]<issue_comment>username_3: The Tesla Model S has [Autopilot](https://www.tesla.com/models), which allows it to steer within a lane, change lanes with the simple tap of a turn signal, and manage speed using traffic-aware cruise control. Multiple digital controls help to avoid collisions. Based on that, this isn't a fully self-driving car.
However, while it uses a computer-vision detection system, it is not intended to be used hands-free.
So basically, what is known is that the accident involved the side of a truck trailer (of a large white 18-wheel truck), and that the camera most likely produced a washed-out picture, possibly due to glare or blooming from overexposure, which made the white side of the trailer indistinguishable from the equally bright sky.
This may have happened in part because the crash-avoidance system only engages when both the radar and the vision system detect an obstacle, which did not happen here.
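As a toy illustration of that AND-style fusion (the real logic is proprietary; this is only a guess at its shape), requiring both sensors to agree suppresses false alarms, but it also suppresses true detections when one sensor fails:

```python
# Toy AND-fusion: brake only when both radar and vision flag an obstacle.
# A washed-out camera frame alone is then enough to suppress braking.
def should_brake(radar_hit, vision_hit):
    return radar_hit and vision_hit

# Trailer present: radar sees it, but the camera loses it against the sky.
assert should_brake(radar_hit=True, vision_hit=True)
assert not should_brake(radar_hit=True, vision_hit=False)   # no braking
```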
Furthermore, it was suggested by *The Associated Press* that the driver was most likely watching a *Harry Potter* film at the time of the crash; even assuming the system would have alerted Brown, we don't know if he was able to retake control quickly enough to avoid impact. As mentioned, the system wasn't intended for hands-free driving, and parts of it were unfinished. Not to mention that the car was driving at full speed under the trailer.
Tesla officially said about this crash in a statement on its website:
>
> **The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances** of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S.
>
>
> Neither Autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied.
>
>
>
They also said, according to techno-optimists, that they will tweak their code so that this particular case won't happen again.
To summarize, this was a 'technical failure' of the braking system, and most likely Autopilot was not at fault, as Tesla told the Senate.
[](https://i.stack.imgur.com/obdLM.png)
The New York Times |Source: Florida traffic crash report
Sources:
* [Tesla Autopilot death highlights autonomous risks](http://www.freep.com/story/money/cars/2016/07/01/tesla-autopilot-death-highlights-autonomous-risks/86591130/)
* [Layers of Autonomy](https://robotfuturesbook.wordpress.com/2016/07/01/layers-of-autonomy)
* [Inside the Self-Driving Tesla Fatal Accident](http://www.nytimes.com/interactive/2016/07/01/business/inside-tesla-accident.html)
* [Tesla driver dies in first fatal crash while using autopilot mode](https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk)
Upvotes: 1
|
2016/08/09
| 1,624
| 6,378
|
<issue_start>username_0: I would like to know what kind of dataset I need to prepare for training a network to recognize spelling mistakes in individual words of English text.
Given a large database of words, with a correct form for each incorrect one: what kind of input is most efficient for this task? Should I use one input per letter, per syllable, or per whole word, or some other pattern?
Then the input should be the incorrect word and the output the correct one, and if the word doesn't need correction, then both input and output should be the same. Is that the right approach?
|
2016/08/10
| 2,143
| 8,312
|
<issue_start>username_0: I believe the term *artificial intelligence* (AI) is overused nowadays. For example, people see that something is self-moving and call it AI, even if it's on autopilot (like cars or planes) or there is some simple algorithm behind it.
What are the minimum general requirements so that we can say something is AI?<issue_comment>username_1: It's true that the term has become a buzzword, and is now widely used to a point of confusion - however if you look at the definition provided by <NAME> and <NAME>, they write it as follows:
>
> We define AI as the study of agents that **receive percepts from the environment and perform actions**. Each such agent implements a function that maps percept sequences to actions, and we cover different ways to represent these functions, such as reactive agents, real-time planners, and decision-theoretic systems. We explain the role of **learning** as extending the reach of the designer into **unknown environments**, and we show how that role constrains agent design, favoring explicit **knowledge representation and reasoning**.
>
>
>
[Artificial Intelligence: A Modern Approach - <NAME> and <NAME>](https://rads.stackoverflow.com/amzn/click/com/9332543518)
So the example you cite, "autopilot for cars/planes", is actually a (famous) form of AI as it has to **use a form of knowledge representation to deal with unknown environments and circumstances**. Ultimately, these systems also collect data so that the knowledge representation can be updated to deal with the new inputs that they have found. They do this with [autopilot for cars](http://fortune.com/2015/10/16/how-tesla-autopilot-learns/) all the time.
So, directly to your question, for something to be considered as "having AI", **it needs to be able to deal with unknown environments/circumstances in order to achieve its objective/goal**, and render knowledge in a manner that provides for new learning/information to be added easily. There are many different types of well defined knowledge representation methods, ranging from the popular [neural net](http://neuralnetworksanddeeplearning.com/), through to probabilistic models like [bayesian networks (belief networks)](https://en.wikipedia.org/wiki/Bayesian_network) - but fundamentally actions by the system must be derived from whichever representation of knowledge you choose for it to be considered as AI.
Upvotes: 6 [selected_answer]<issue_comment>username_2: In addition to what has already been said about AI, I have the following to add. "AI" has had quite a history, going all the way back to the original [Perceptron](https://en.wikipedia.org/wiki/Perceptron). <NAME> slammed the Perceptron in 1969 for not being able to solve the XOR problem and anything else that was not linearly separable, so "Artificial Intelligence" became a dirty word for a while, only to regain interest in the 1980s. During that time, neural nets were revived, backpropagation used to train them was developed, and as computer technology continued its exponential growth, so did "AI" and what became possible.
Today, there are lots of things we take for granted which would've been considered "AI" 10 or 15 years ago, like speech recognition, for example. I got my start in "AI" speech recognition back in the late 70s, when you had to train the voice models to understand a single human speaker. Today, speech recognition is an afterthought with your Google apps, for example, and no a priori training is needed. Yet this technology is not, at least among general audiences, considered "AI" anymore.
And so, what would be "minimum requirements"? That would depend on whom you ask. And what time. It would appear that that term only applies to technology "on the bleeding edge". Once it becomes developed and commonplace, it is no longer referred to as AI. This is true even of Neural Nets, which are dominant in data science right now, but are referred to as "machine learning".
Also check out the lively discussion on [Quora](https://www.quora.com/What-are-the-main-differences-between-artificial-intelligence-and-machine-learning).
Upvotes: 3 <issue_comment>username_3: This is an "in human language" (non-technical;) synopsis of the core of [username_1](https://ai.stackexchange.com/a/1516/1671)'s excellent answer.
---
>
> In the most basic sense, any decision making algorithm can be regarded as a form of Artificial Intelligence.
>
>
>
The [History of Artificial Intelligence wiki](https://en.wikipedia.org/wiki/History_of_artificial_intelligence) gives a pretty good overview. The roots of the field are generally ascribed to [Symbolic Artificial Intelligence](https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence), but it might be said to go back as far as [Babbage](https://en.wikipedia.org/wiki/Analytical_Engine). The first functional game AI in the form of an "analytic engine" may be [Nimatron](https://fr.wikipedia.org/wiki/Nimatron) ([1940](https://patents.google.com/patent/US2215544A/)). More recently, Machine Learning in all of its various forms, including [Neural Networks](https://en.wikipedia.org/wiki/Artificial_neural_network) and [Genetic Algorithms](https://en.wikipedia.org/wiki/Evolutionary_algorithm), have been delivering exciting results. [Bayesian networks](https://en.wikipedia.org/wiki/Bayesian_network) are another form of probabilistic AI.
>
> Utility, the means by which we evaluate the *degree* of intelligence of algorithms, is separate from the mechanism.
>
>
>
AIs can be weak or strong. Strong means better performance at a task than a competing [rational agent](https://en.wikipedia.org/wiki/Rational_agent#Artificial_intelligence_2), typically humans. ("Man is the measure of all things." [Protagoras](https://en.wikipedia.org/wiki/Protagoras)) The term strong in relation to AI has traditionally been taken to mean [Artificial General Intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence) [see also the [Turing Test]](https://en.wikipedia.org/wiki/Turing_test), but current algorithmic intelligences are only "narrowly strong".
Intelligence is a spectrum, therefore:
>
> The minimum requirement for AI is that an algorithm make decisions based on data, irrespective of the quality of the decisions.
>
>
>
Upvotes: 2 <issue_comment>username_4: There is also the [AI effect](https://en.wikipedia.org/wiki/AI_effect), that is, the tendency to not consider something an AI once it is well understood. For example, neural networks are not yet fully understood, so people still tend to call them AI. Once we know exactly all the details about neural networks and their inner workings, we might start to consider them just *computation*. This is an old philosophical topic that goes back at least to the famous Jacques de Vaucanson's [defecating duck](https://web.stanford.edu/dept/HPS/DefecatingDuck.pdf) and [automatic loom](https://en.wikipedia.org/wiki/Jacquard_loom).
Upvotes: 2 <issue_comment>username_5: From "[Artificial Intelligence And Life In 2030: One Hundred Year Study On Artificial Intelligence](https://ai100.stanford.edu/sites/g/files/sbiybj9861/f/ai_100_report_0831fnl.pdf)":
>
> In fact, the field of AI is a continual endeavor to push forward the frontier of
> machine intelligence. Ironically, AI suffers the perennial fate of losing claim to its
> acquisitions, which eventually and inevitably get pulled inside the frontier, a repeating pattern known as the “AI effect” or the “odd paradox”—AI brings a new technology into the common fold, people become accustomed to this technology, it stops being considered AI, and newer technology emerges.
>
>
>
Consequentially, I believe we can not choose a fixed set of requirements for something to be considered AI; rather, at any given moment in history, AI is a set of programs which can achieve something that before was generally considered to be solvable by humans only. As technology evolves, the boundaries keep getting pushed and pushed, and the bar rises higher. Consider chess playing: once chess engines were considered one of the pinnacles of AI, while nowadays such programs are perceived as "blind search" and not truly intelligent.
To quote <NAME>, *Intelligence is whatever machines haven't done yet*.
Upvotes: 1
|
2016/08/10
| 2,132
| 8,273
|
<issue_start>username_0: I believe you can normally use [genetic programming](https://en.wikipedia.org/wiki/Genetic_programming) for sorting; however, I'd like to check whether it's possible using an ANN.
Given the unsorted text data from input, which neural network is suitable for doing sorting tasks?<issue_comment>username_1: It's true that the term has become a buzzword, and is now widely used to a point of confusion - however if you look at the definition provided by <NAME> and <NAME>, they write it as follows:
>
> We define AI as the study of agents that **receive percepts from the environment and perform actions**. Each such agent implements a function that maps percept sequences to actions, and we cover different ways to represent these functions, such as reactive agents, real-time planners, and decision-theoretic systems. We explain the role of **learning** as extending the reach of the designer into **unknown environments**, and we show how that role constrains agent design, favoring explicit **knowledge representation and reasoning**.
>
>
>
[Artificial Intelligence: A Modern Approach - <NAME> and <NAME>](https://rads.stackoverflow.com/amzn/click/com/9332543518)
So the example you cite, "autopilot for cars/planes", is actually a (famous) form of AI as it has to **use a form of knowledge representation to deal with unknown environments and circumstances**. Ultimately, these systems also collect data so that the knowledge representation can be updated to deal with the new inputs that they have found. They do this with [autopilot for cars](http://fortune.com/2015/10/16/how-tesla-autopilot-learns/) all the time
So, directly to your question, for something to be considered as "having AI", **it needs to be able to deal with unknown environments/circumstances in order to achieve its objective/goal**, and render knowledge in a manner that provides for new learning/information to be added easily. There are many different types of well defined knowledge representation methods, ranging from the popular [neural net](http://neuralnetworksanddeeplearning.com/), through to probabilistic models like [bayesian networks (belief networks)](https://en.wikipedia.org/wiki/Bayesian_network) - but fundamentally actions by the system must be derived from whichever representation of knowledge you choose for it to be considered as AI.
Upvotes: 6 [selected_answer]<issue_comment>username_2: In addition to what has already been said about AI, I have the following to add. "AI" has had quite a history going all the way back to the original [Perceptron](https://en.wikipedia.org/wiki/Perceptron). <NAME> slammed the Perceptron in 1969 for not being able to solve the XOR problem and anything that was not linearly separable, so "Artifical Intelligence" became a dirty word for a while, only to regain interests in the 1980s. During that time, neural nets were revived, backpropagation used to train them was developed, and as computer technology continued its exponential growth, so did "AI" and what became possible.
Today, there are lots of things we take for granted which would've been considered "AI" 10 or 15 years ago, like speech recognition, for example. I got my starts in "AI" speech recognition back in the late 70s where you had to train the voice models to understand a single human speaker. Today, speech recognition is an afterthought with your Google apps, for example, and no a priori training is needed. Yet this technology is not, at least in general audiences, considered "AI" anymore.
And so, what would be "minimum requirements"? That would depend on whom you ask. And what time. It would appear that that term only applies to technology "on the bleeding edge". Once it becomes developed and commonplace, it is no longer referred to as AI. This is true even of Neural Nets, which are dominant in data science right now, but are referred to as "machine learning".
Also check out the lively discussion on [Quora](https://www.quora.com/What-are-the-main-differences-between-artificial-intelligence-and-machine-learning).
Upvotes: 3 <issue_comment>username_3: This is an "in human language" (non-technical;) synopsis of the core of [username_1](https://ai.stackexchange.com/a/1516/1671)'s excellent answer.
---
>
> In the most basic sense, any decision making algorithm can be regarded as a form of Artificial Intelligence.
>
>
>
The [History of Artificial Intelligence wiki](https://en.wikipedia.org/wiki/History_of_artificial_intelligence) gives a pretty good overview. The roots of the field are generally ascribed to [Symbolic Artificial Intelligence](https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence), but it might be said to go back as far as [Babbage](https://en.wikipedia.org/wiki/Analytical_Engine). The first functional game AI in the form of an "analytic engine" may be [Nimatron](https://fr.wikipedia.org/wiki/Nimatron) ([1940](https://patents.google.com/patent/US2215544A/)). More recently, Machine Learning in all of its various forms, including [Neural Networks](https://en.wikipedia.org/wiki/Artificial_neural_network) and [Genetic Algorithms](https://en.wikipedia.org/wiki/Evolutionary_algorithm), have been delivering exciting results. [Bayesian networks](https://en.wikipedia.org/wiki/Bayesian_network) are another form of probabilistic AI.
>
> Utility, the means by which we evaluate the *degree* of intelligence of algorithms, is separate from the mechanism.
>
>
>
AIs can be weak or strong. Strong means better performance at a task than a competing [rational agent](https://en.wikipedia.org/wiki/Rational_agent#Artificial_intelligence_2), typically humans. ("Man is the measure of all things." [Protagoras](https://en.wikipedia.org/wiki/Protagoras)) The term strong in relation to AI has traditionally been taken to mean [Artificial General Intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence) [see also the [Turing Test]](https://en.wikipedia.org/wiki/Turing_test), but current algorithmic intelligences are only "narrowly strong".
Intelligence is a spectrum, therefore:
>
> The minimum requirement for AI is that an algorithm make decisions based on data, irrespective of the quality of the decisions.
>
>
>
Upvotes: 2 <issue_comment>username_4: There is also the [AI effect](https://en.wikipedia.org/wiki/AI_effect), that is, the tendency to not consider something an AI once it is well understood. For example, neural networks are not yet fully understood, so people still tend to call them AI. Once we know exactly all the details about neural networks and their inner workings, we might start to consider them just *computation*. This is an old philosophical topic that goes back at least to the famous Jac<NAME> Vaucanson's [defecating duck](https://web.stanford.edu/dept/HPS/DefecatingDuck.pdf) and [automatic loom](https://en.wikipedia.org/wiki/Jacquard_loom).
Upvotes: 2 <issue_comment>username_5: From "[Artificial Intelligence And Life In 2030: One Hundred Year Study On Artificial Intelligence](https://ai100.stanford.edu/sites/g/files/sbiybj9861/f/ai_100_report_0831fnl.pdf)":
>
> In fact, the field of AI is a continual endeavor to push forward the frontier of
> machine intelligence. Ironically, AI suffers the perennial fate of losing claim to its
> acquisitions, which eventually and inevitably get pulled inside the frontier, a repeating pattern known as the “AI effect” or the “odd paradox”—AI brings a new technology into the common fold, people become accustomed to this technology, it stops being considered AI, and newer technology emerges.
>
>
>
Consequently, I believe we cannot choose a fixed set of requirements for something to be considered AI; rather, at any given moment in history, AI is a set of programs which can achieve something that before was generally considered to be solvable by humans only. As technology evolves, the boundaries keep getting pushed and pushed, and the bar rises higher. Consider chess playing: once chess engines were considered one of the pinnacles of AI, while nowadays such programs are perceived as "blind search" and not truly intelligent.
To quote <NAME>, *Intelligence is whatever machines haven't done yet*.
Upvotes: 1
|
2016/08/10
| 1,761
| 6,774
|
<issue_start>username_0: On Wikipedia, we can read about different type of [intelligent agents](https://en.wikipedia.org/wiki/Intelligent_agent):
* abstract intelligent agents (AIA),
* autonomous intelligent agents,
* intelligent virtual agent (IVA), which I've found on other websites, e.g. [this one](https://www.techopedia.com/definition/26646/intelligent-virtual-agent-iva).
What are the differences between these three to avoid confusion?
---
For example I've used term *virtual artificial agent* [here](https://ai.stackexchange.com/a/1512/8) as:
>
> Basically a robot is a mechanical or virtual artificial agent which exhibits intelligent behavior (AI).
>
>
>
so basically I'd like to know where other terms like autonomous or abstract agents can be used and in what context. Can they be all defined under 'virtual' robot definition? How to distinguish these terms?<issue_comment>username_1: I don't think there are many contexts where there is any really meaningful distinction between these terms. Even in the WP article you refer to, it is shown that "abstract intelligent agent" and "autonomous intelligent agent" are generally just synonyms for "intelligent agent" but used to highlight certain aspects of intelligent agents in some contexts. Net-net, I'd say there just isn't any difference there that's going to matter in practice.
"Virtual intelligent agent" OTOH, used in the context you used it, suggests the distinction between an IA that's implemented in software only, versus one that has a physical manifestation. I don't know how useful that distinction is and I haven't seen anybody else make it.
All in all, I expect that in almost every possible context, if you just say "Intelligent Agent" with no qualifiers, that's going to be sufficient. But if there were going to be an exception, I'd say it would be around the term "autonomous" since an agent which is truly autonomous, versus one that needs to operate in a specific, constrained environment, is a distinction that - at least in principle - could be useful.
Upvotes: 2 <issue_comment>username_2: There is definitely a usage and etymological dimension to this issue, which I can elaborate on if you'd like. (It's an issue I come across frequently, and is particularly relevant in regards to original work.)
The use of "Agent" may have arisen partly out of Game Theory, with ideas like "[multi-agent systems](https://en.wikipedia.org/wiki/Multi-agent_system)", possibly because it's quite general (i.e. one does not have to say participant, or player, or actor, etc., and it is species non-specific.)
"Automata" seems to be favoured in some circles, and although [this term was originally reserved for physical, mechanical devices](https://en.wikipedia.org/wiki/Automaton), it now encompasses a range of physical and information-based systems (with perhaps a contemporary emphasis on the virtual. A famous extension is the categorization of Conway's Game of Life as a "[cellular automaton](https://en.wikipedia.org/wiki/Cellular_automaton)". Conversely, "robot" would seem to have usurped "automaton" for physical systems.)
I tend to use an agent when I'm thinking in an economic sense, or in a game theory sense as related to computing [see [Rational Agent](https://en.wikipedia.org/wiki/Rational_agent)], and automata when thinking in a procedural sense in terms of algorithmic intelligence [see [Automata Theory](https://en.wikipedia.org/wiki/Automata_theory)]. But the distinction is partly semantic.
[Robot](https://en.wikipedia.org/wiki/Robot#Origin_of_the_term_.27robot.27) seems to be avoided in the academic literature in regards to information systems, possibly because of the deeply entrenched popular understanding of "robot" as physical automata. The use of the informal "[bots](https://en.wikipedia.org/wiki/Bot)" for describing information-based systems seems to have arisen in the public sphere, almost certainly a product of [hacker culture](https://en.wikipedia.org/wiki/Hacker_culture), which has a tradition of playfulness. I like the term bots as it is being currently applied, but I feel it has the connotation of [trivial](https://en.wikipedia.org/wiki/Triviality_(mathematics)) (i.e. small and discrete) or massively multi-agent systems, and haven't really seen it used for "strong" narrow AI.\*
---
Note: "[Agent](https://en.wiktionary.org/wiki/agent)" does not automatically carry the concept of autonomy, although it is sometimes implied. The use of the term is in the sense of "[agency](https://en.wikipedia.org/wiki/Agency_(philosophy))" as an action taken by an agent. Agents may be acting on their own behalf, or as [proxies](https://en.wiktionary.org/wiki/proxy#English).
\*"Strong" in relation to AI is a term that seems to be evolving. Initially reserved for [Artificial General Intelligence and conscious algorithms](https://en.wikipedia.org/wiki/Strong_AI), I've seen recent mentions by respected scholars of "strong narrow AI" to refer to recent milestones in Machine Learning. However, the term "strong" and "weak" in relation to mathematics and AI have a history in [Combinatorial game theory](https://en.wikipedia.org/wiki/Combinatorial_game_theory) per the concept of the [solved game](https://en.wikipedia.org/wiki/Solved_game). Thus, there exist "strong AI" for games such as [Nim](https://en.wikipedia.org/wiki/Nim), [Hex](https://en.wikipedia.org/wiki/Hex_(board_game)), and Tic-Tac-Toe.
Again, there is an element of subjectivity here, but I tend to favour the CGT approach for two reasons:
1. It's a mathematical definition, unlike the philosophical definitions ("consciousness" and "humanlike" re: AGI)
2. Artificial Intelligence is rooted in combinatorial games. [[Nimatron](https://fr.wikipedia.org/wiki/Nimatron) may have been the first functional algorithmic intelligence and combinatorial games have been used as milestones since that time, most notably with [Deep Blue](https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)), [Giraffe Chess](https://motherboard.vice.com/en_us/article/the-chess-engine-that-died-so-alphago-could-live-giraffe-matthew-lai) and [AlphaGo](https://en.wikipedia.org/wiki/AlphaGo). Non-trivial Combinatorial games are useful because they have simple parameters but [complexity](https://en.wikipedia.org/wiki/Complexity#Applications) akin to nature.
My sense of how "strong" is starting to be used is for algorithms, such as Machine Learning systems, that can outperform humans within a given set of parameters. Thus "strong narrow AI".
---
On Intelligence:
The term "intelligence" is highly subjective and the only functionally meaningful definition I've come across is the concept of [bounded rationality](https://en.wikipedia.org/wiki/Bounded_rationality).
Upvotes: 2 [selected_answer]
|
2016/08/10
| 661
| 2,437
|
<issue_start>username_0: According to [Wikipedia](http://en.wikipedia.org/wiki/Prolog),
>
> Prolog is a general-purpose logic programming language associated with artificial intelligence and computational linguistics.
>
>
>
Is it still used for AI?
---
This is based off of a question on the 2014 closed beta. The author had the UID of 330.<issue_comment>username_1: Remembering that artificial intelligence has been an academic endeavour for the longest time, Prolog was amongst one of the early languages used as part of the study and implementation of it. It has rarely made its way into large commercial applications, having said that, a famous commercial implementation is in [Watson, where prolog is used for NLP](http://www.cs.nmsu.edu/ALP/2011/03/natural-language-processing-with-prolog-in-the-ibm-watson-system/).
The [University of Edinburgh](http://www.ed.ac.uk/informatics/) contributed to the language and it was sometimes referred to as "Edinburgh Prolog". It is [still used in academic teachings](http://www.inf.ed.ac.uk/teaching/courses/lp/) there as part of the artificial intelligence course.
The reason Prolog is considered powerful in AI is that the language allows for easy handling of recursive methods and pattern matching.
To quote [<NAME> from the IBM Thomas J. Watson Research Center](http://www-03.ibm.com/innovation/us/watson/research-team/systems.html), and [<NAME> from Stony Brook University](http://www3.cs.stonybrook.edu/~pfodor/):
>
> the Prolog language is very expressive allowing recursive rules to represent reachability in parse trees and the operation of negation-as-failure to check the absence of conditions.
>
>
>
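For readers unfamiliar with Prolog, the recursive reachability rule described in the quote can be written in two Prolog clauses; below is a rough Python analogue (the parent facts here are made up for illustration):

```python
# In Prolog, reachability in a parse tree or family tree is two rules:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
# A rough Python analogue of that recursive rule, with hypothetical facts:

PARENT = {
    "tom": ["bob"],
    "bob": ["ann", "pat"],
}

def ancestor(x, y):
    """True if x is an ancestor of y under the PARENT facts."""
    for child in PARENT.get(x, []):
        if child == y or ancestor(child, y):
            return True
    return False
```

The Prolog version is more expressive: the same clauses answer "is X an ancestor of Y?", "who are X's descendants?", and "who are Y's ancestors?" without extra code, which is part of why the language suits this kind of AI work.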
Upvotes: 6 [selected_answer]<issue_comment>username_2: Yes, as mentioned in other answers, Prolog is actually used in IBM Watson. Prolog doesn't get much "hype" and "buzz" these days, but it is absolutely still used. As always, it has certain specific areas where it shines, and specific techniques that map well to its use. Specifically, things like [Inductive Logic Programming](https://en.wikipedia.org/wiki/Inductive_logic_programming), [Constraint Logic Programming](https://en.wikipedia.org/wiki/Constraint_logic_programming), [Answer Set Programming](https://en.wikipedia.org/wiki/Answer_set_programming) and some [NLP](https://en.wikipedia.org/wiki/Natural_language_processing) applications may involve extensive use of Prolog.
Upvotes: 4
|
2016/08/11
| 1,988
| 8,506
|
<issue_start>username_0: When it comes to neural networks, it's often only explained what abstract task they do, say for example detect a number in an image. I never understood what's going on under the hood essentially.
There seems to be a common structure of a directed graph, with values in each node. Some nodes are input nodes. Their values can be set. The values of subsequent nodes are then calculated based on those along the edges of the graph until the values for the output nodes are set, which can be interpreted as a result.
How exactly is the value of each node determined? I assume that some formula is associated with each node that takes all incoming nodes as input to calculate the value of the node. What formula is used? Is the formula the same throughout the network?
Then I heard that a network has to be trained. I assume that such training would be the process to assign values to coefficients of the formulas used to determine the node values. Is that correct?
In layman's terms, what are the underlying principles that make a neural network work?<issue_comment>username_1: I will overly simplify ANNs in order to point how they work. Examples might not be 100% accurate.
In the simplest form, the network is trained using the a priori information extracted from the ground truth. This basically means that the ANN uses the relation between the input and output.
For instance, if you are to classify shrubs and trees, one of the inputs could be the height and the other the width of the plant. Now, if you have only input and output layers, increasing height means an increasing chance that the object is a tree. Thus, the height input would have a positive weight connecting to the tree output and a negative weight to the shrub output. However, as the plant gets wider, the chance of it being a shrub increases (taller shrubs are wider than shorter ones), so the width input would have a positive weight connecting to the shrub output. Finally, the chance of being a tree is not much affected by the width, so that connection will have a weight close to 0. This network effectively works like a linear discriminant classifier.
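The hand-weighted tree/shrub network just described can be sketched as follows (the weight values are purely illustrative, not learned):

```python
# Hand-picked weights for the tree/shrub example (illustrative only):
# height pushes toward "tree"; width pushes toward "shrub".
WEIGHTS = {
    "tree":  {"height": 1.0, "width": 0.0},
    "shrub": {"height": -1.0, "width": 1.0},
}

def classify(height, width):
    # Each output node scores the input as a weighted sum; the
    # highest-scoring output wins (a linear discriminant).
    scores = {
        label: w["height"] * height + w["width"] * width
        for label, w in WEIGHTS.items()
    }
    return max(scores, key=scores.get)
```

For example, a tall narrow plant scores higher on the "tree" output, while a short wide one scores higher on "shrub".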
Now instead of assigning weights by hand, you may use a learning algorithm that tries to adjust weights so that the output is correct when the series of inputs is supplied. Ideally this training algorithm should reach the conclusion that we made in the previous example. Most training algorithms are iterative. They supply the inputs multiple times and, in a simple sense, reward pathways that are correct by increasing their weights and punish pathways that cause incorrect answers.
When hidden layers are used in a system, it becomes able to correlate inputs to higher degrees. Thus, as the number of layers gets higher, the ANN learns the input set much better. However, this does not mean it gets better overall. If the ANN overfits the input set, it will be affected by the random noise that is in the dataset. This problem is generally referred to as memorization. There are learning algorithms that try to minimize memorization and maximize generalization ability. But ultimately, the number of training samples should be high enough that the ANN cannot overfit to the data.
Upvotes: 2 <issue_comment>username_2: I'll try to do something intuitive; Each node in a neural network is referred to as a neuron. To understand what's going on under the hood of a neural network you only really need to understand an individual neuron.
Now each neuron has a set of inputs (other neurons; they can potentially be the inputs to the network as a whole as well), and each input has a weight associated with it. Every time the network is used, each neuron computes its output as the weighted sum of its inputs passed through some gate (The "Activation Function", a mathematical function designed to get a particular behaviour. For example sigmoid AF takes an input of any size and transforms it into an output in the range [0, 1].) Obviously, this is driven from the inputs to the neural network so that no neuron is computing its outputs before all of the neurons used as its inputs have done the same.
When you refer to the value of a node, there isn't a single value. Each neuron has several weights associated with it, as it may be the input to several other neurons, and each of those neurons assigns it a different weight. Instead, it is better to think of a neural network as a directed graph of nodes (neurons) which are labelled with a particular activation function, and edges (input/output connections) which are labelled with a particular weight. While the structure and activation functions used in the network are a matter of topology design, there are a number of algorithms for training an ANN of a particular topology.
The most commonly used (and possibly easiest to explain) is backpropagation. In pseudo-Layman's terms we start off with random weights on all edges in the network. We then compute the output of the network for a training set (a set of known input/output pairs). By careful choice of activation function, it is possible to differentiate the error (computed analogously to the expected output minus the actual output of the ANN for each input/output pair) with respect to the weights of the neural network. This allows us to compute a gradient for each weight; the direction in which we can move the weight to reduce the error on the training set. By doing this until we find an optimum (a point where all movements increase error), we can find some 'good' configuration of weights for that particular ANN.
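To make this concrete, here is a toy sketch of a single sigmoid neuron and one backpropagation step on one training example (all numbers and the learning rate are arbitrary; a real network chains this over many neurons and layers):

```python
import math

def sigmoid(z):
    # Squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs passed through the activation function.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

def backprop_step(inputs, weights, bias, target, lr=0.5):
    # One gradient-descent update for squared error on a single example.
    out = neuron(inputs, weights, bias)
    # dE/dz for E = 0.5 * (out - target)^2 with a sigmoid activation:
    delta = (out - target) * out * (1.0 - out)
    new_weights = [w - lr * delta * x for w, x in zip(weights, inputs)]
    new_bias = bias - lr * delta
    return new_weights, new_bias
```

Repeating `backprop_step` over the training set moves the weights in the direction that reduces the error, exactly the gradient-following idea described above.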
There's a nice tutorial on BP here <http://www.cse.unsw.edu.au/~cs9417ml/MLP2/BackPropagation.html>. The diagram associated with it does nicely to explain my point:
[](https://i.stack.imgur.com/O2d6L.jpg)
Upvotes: 2 <issue_comment>username_3: Based on my experience which is that of a beginner.
For a simple neural network such as:
* 2 nodes, indicated by the letters `i` and `j`,
* `x` indicates the output of a node,
* `w` denotes a weight that connects two nodes.
The output of a given node is of the following form.
[](https://i.stack.imgur.com/i7GFf.png)
Which can be *translated* as
>
> Applying the activation function (lambda) to the sum of the
> products of the values of the nodes of the previous layer and the weights
> that connect them to the current node.
>
>
>
This activation function can be something like
[](https://i.stack.imgur.com/u2lBz.png)
(This special function is called the *sigmoid* function.)
If you enter that function in say GeoGebra, you would get the following
curve
[](https://i.stack.imgur.com/bpDEp.png)
Clearly this activation function takes any input and outputs
a unique number between 0 and 1. Since the function is
strictly increasing, the order of the inputs is preserved.
During the training phase, when the network has produced its output,
we compute the *total error* of the network,
which resembles the difference between
the output in the training set and the one we obtain from
the network.
Obviously, this value decreases each time the output improves.
For each weight of the network, a gradient is computed.
This gradient is a number that can be read as the influence
of adding a small number to the weight over the total error.
This gradient can be computed from a formula derived from
the network structure, or it can be estimated as simply as
adding a small amount to the weight and seeing what happens on the fly.
* if the gradient is positive, it means that adding to the weight
will add to the total error, we should subtract,
* if the gradient is negative, it means that adding to the weight
will lead to a lesser total error, we should add.
By repeating this many times, the total error will approach its
minimum.
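This "nudge the weight and see what happens" idea can be sketched directly (the quadratic error function in the test below is just an illustration; a real network's error depends on all its weights):

```python
def numeric_gradient(error_fn, weight, eps=1e-4):
    # Add a small number to the weight and see how the total error moves.
    return (error_fn(weight + eps) - error_fn(weight)) / eps

def update_weight(error_fn, weight, lr=0.1):
    gradient = numeric_gradient(error_fn, weight)
    # Positive gradient: adding increases the error, so we subtract;
    # negative gradient: adding decreases the error, so we add.
    return weight - lr * gradient
```

Iterating `update_weight` walks the weight toward a value that minimizes the error, which is the same rule stated in the bullet points above.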
Finally, a thing I didn't know: don't forget to vary the inputs between
iterations of this process. If you don't, your network will only be
properly trained against the last item it processed.
I hope this helped a little. Please write your suggestions in the comment.
As you probably guessed, I'm not a native English speaker.
I recommend reading **Neural Networks, A Visual Introduction For Beginners**
by <NAME>.
Upvotes: 1
|
2016/08/11
| 243
| 1,217
|
<issue_start>username_0: What advantages does employing evolutionary algorithms to design and train artificial neural networks have over using conventional backpropagation algorithms?<issue_comment>username_1: Unlike backpropagation, evolutionary algorithms do not require the objective function to be differentiable with respect to the parameters you aim to optimize. As a result, you can optimize "more things" in the network, such as activation functions or the number of layers, which wouldn't be possible with standard backpropagation.
Another advantage is that by defining the mutation and crossover functions, you can influence how the parameter search space should be explored.
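As a sketch of why differentiability isn't needed, here is a minimal mutation-only (1+1) evolution strategy applied to an objective that gradients can't be taken through at its optimum (the objective, step count, and mutation size below are illustrative):

```python
import random

def evolve(objective, genome, steps=2000, sigma=0.1, seed=0):
    # (1+1) evolution strategy: mutate the current best, keep the child
    # if it is no worse. No gradient of `objective` is ever required.
    rng = random.Random(seed)
    best = list(genome)
    best_score = objective(best)
    for _ in range(steps):
        child = [g + rng.gauss(0.0, sigma) for g in best]
        score = objective(child)
        if score <= best_score:  # minimisation
            best, best_score = child, score
    return best, best_score
```

Because only function evaluations are used, the same loop works whether the "genome" encodes weights, activation choices, or layer counts.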
Upvotes: 4 [selected_answer]<issue_comment>username_2: Further to Franck's answer, there may be better optima (even global optima) that exist in the opposite direction to the gradient (which may be in the direction of some local optima). Evolutionary algorithms have scope to search the surrounding area, while backpropagation will always move in the direction of the gradient. With no guarantee (due to their randomness), evolutionary algorithms may be capable of finding solutions that backpropagation simply cannot.
Upvotes: 3
|
2016/08/11
| 1,042
| 4,208
|
<issue_start>username_0: Genetic Algorithms has come to my attention recently when trying to correct/improve computer opponents for turn-based strategy computer games.
I implemented a simple Genetic Algorithm that didn't use any cross-over, just some random mutation. It seemed to work in this case, and so I started thinking:
**Why is cross-over a part of genetic algorithms? Wouldn't mutation be enough?**
This is from a data dump on an old AI site. The asker had the UID of 7.<issue_comment>username_1: Crossover allows to combine two parents (vs. mutation, which only uses one parent). In some cases, it is useful (e.g., if you train a FPS bot, if one parent is good at shooting and another parent is good at moving, it makes sense to combine them). In some other cases, it is not.
Upvotes: 3 <issue_comment>username_2: Mutation is usually defined to be a *global* operator, i.e. iterated mutation is (eventually) capable of reaching every point in the vector space defined by the geneome. In that sense, mutation alone is certainly 'enough'.
Regarding the motivation for crossover - from [Essentials of Metaheuristics](https://cs.gmu.edu/~sean/book/metaheuristics/), p42:
>
> Crossover was originally based on the premise that highly fit individuals often share certain traits, called *building blocks*, in common.
> For example, in the boolean individual 10110101, perhaps
> \*\*\*101\*1 might be a building block
>
>
>
(where \* means "either 0 or 1")
So the idea is that crossover works by spreading building blocks quickly throughout the population.
>
> Crossover methods also assume that there is some degree of linkage between genes on the chromosome: that is, settings for certain genes in groups are strongly correlated to fitness improvement. For example, genes A and B might contribute to fitness only when they’re both set to 1: if either is set to 0, then the fact that the other is set to 1 doesn’t do anything.
>
>
>
Also note that *crossover is not a global operator*. If the only operator is crossover then (also from p42):
>
> Eventually the population will converge, and often (unfortunately) prematurely
> converge, to copies of the same individual. At this stage there’s no escape: when an individual crosses over with itself, nothing new is generated.
>
>
>
For this reason, crossover is generally used together with some global mutation operator.
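The two operators discussed here can be sketched for bit-string genomes like the `10110101` example above (the representation and mutation rate are illustrative):

```python
import random

def one_point_crossover(a, b, rng):
    # Splice two parent bit-strings at a random cut point, so building
    # blocks from each parent can be combined in the children.
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(genome, rate, rng):
    # Global operator: every bit can flip, so any genome stays reachable
    # even after the population has converged.
    return [bit ^ 1 if rng.random() < rate else bit for bit in genome]
```

Note how `one_point_crossover` alone can never introduce a bit value absent from both parents, which is exactly why it is paired with `mutate` in practice.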
Upvotes: 4 [selected_answer]<issue_comment>username_3: When thinking about crossover its important to think about the fitness landscape.
Consider a hypothetical scenario where we are applying a genetic algorithm to find a solution that performs well at 2 tasks. This could be from Franck's example (moving and shooting) for an AI, or perhaps it could be predicting 2 outputs in a genetic machine learning scenario, but really most scenarios where GAs are applied are analogous (even when solving a single task, there may be different aspects of the task to be addressed).
Suppose we had an individual, 1, that was performing reasonably well at both tasks, and we found a series of mutations which produced 2 new individuals, 2 and 3, which performed better than Individual 1 at tasks 1 and 2 respectively. Now while both of these are improvements, ideally we want to find a generally good solution, so we want to combine the features that we have been found to be beneficial.
This is where crossover comes in; by combining the genomes of Individuals 2 and 3, we may find some new individual which produces a mixture of their performances. While it is possible that such an individual could be produced by a series of mutations applied to Individual 2 or Individual 3, the landscape may simply not suit this (there may be no favorable mutations in that direction, for example).
[](https://i.stack.imgur.com/bsVBEm.png)
You are partially right therefore; it may sometimes be the case that the benefits of crossover could be replicated with a series of mutations. Sometimes this may not be the case and crossover may smooth the fitness landscape of your GA, speeding up optimization and helping your GA escape local optima.
Upvotes: 2
|
2016/08/11
| 1,242
| 4,831
|
<issue_start>username_0: While thinking about AI, this question came into my mind. Could curiosity help in developing a true AI? According to this [website](http://psychologia.co/creativity-test/) (for testing creativity):
>
> Curiosity in this context refers to persistent desire to learn and discover new things and ideas. A curious person
>
>
> * always looks for new and original ways of thinking,
> * likes to learn,
> * searches for alternative solutions even when traditional solutions are present and available,
> * enjoys reading books and watching documentaries,
> * wants to know how things work inside out.
>
>
>
Let's take [Clarifai](https://www.clarifai.com/demo), an image/video classification startup, which can classify images and video with the best accuracy (according to them). If I understand correctly, they trained their deep learning system using millions of images with supervised learning. In the same algorithm, what would happen if we somehow added a "curiosity factor" when the AI has difficulty in classifying an image or its objects? It would ask a human for help, just like a curious child.
Curiosity makes a human being learn new things and also helps to generate new original ideas. Could the addition of curiosity change Clarifai into a true AI?<issue_comment>username_1: >
> when the AI has difficulty in classifying an image or its objects, it should ask a human for help, just like a curious child
>
>
>
It's called [active learning](https://en.wikipedia.org/wiki/Active_learning_(machine_learning)), it's already used quite often.
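The query step at the heart of active learning is often plain uncertainty sampling: ask the human about the example the model is least sure of. A minimal sketch, assuming a binary classifier that outputs class probabilities:

```python
def most_uncertain(probabilities):
    # Uncertainty sampling: query the human about the example whose
    # predicted class probability is closest to 0.5 (maximum doubt).
    return min(range(len(probabilities)),
               key=lambda i: abs(probabilities[i] - 0.5))
```

The returned index points at the unlabeled example whose human-provided label would, by this heuristic, teach the model the most.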
Upvotes: 3 <issue_comment>username_2: >
> Does this addition of curiosity change Clarifai into a true AI?
>
>
>
As per my answer to [this question](https://ai.stackexchange.com/questions/1420/how-close-are-we-to-creating-ex-machina), we don't know what the ingredients for a 'true AI' are. Via the Turing Test and its variants, the best we can do is "know one when we see one".
Curiosity certainly appears *necessary* for intelligence, though it doesn't seem *sufficient* - a lemming-like creature curious to see what's at the bottom of a steep cliff might not survive long enough to learn caution, even if it had the learning mechanisms to do so.
Here is some work by Schmidhuber on [Artificial Curiosity](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.3978&rep=rep1&type=pdf).
[<NAME>](http://www.pyoudeyer.com/active-learning-and-artificial-curiosity-in-robots/) has also done quite a lot of work on this and Active Learning/Intrinsic motivation.
Upvotes: 3 <issue_comment>username_3: It's a well known concept that's already used
---------------------------------------------
What we call "curiosity" in humans and animals is in effect the chosen level of the "exploit vs explore" tradeoff for any active system. For example, the field of [reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning "Reinforcement learning") is one approach that studies implementations of what is essentially the equivalent of curiosity; and we have research on how much curiosity is best, e.g. the [multi-armed bandit](https://en.wikipedia.org/wiki/Multi-armed_bandit) problem.
So "using curiosity" is something that we already do as much as we can/should/are able to, but it would usually be referred to by some other, more specific term to pin down the exact meaning, rather than the vague word "curiosity".
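For instance, the classic epsilon-greedy bandit strategy makes that tradeoff a single explicit parameter; a minimal sketch (the parameter values would be problem-specific):

```python
import random

def epsilon_greedy(estimates, epsilon, rng):
    # With probability epsilon, explore a random arm ("curiosity");
    # otherwise exploit the arm with the highest estimated reward.
    if rng.random() < epsilon:
        return rng.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda a: estimates[a])

def update_estimate(estimates, counts, arm, reward):
    # Incremental mean of the rewards observed for the chosen arm.
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]
```

Here `epsilon` is literally the system's level of curiosity: 0 means pure exploitation, 1 means pure exploration.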
Upvotes: 2 <issue_comment>username_4: Curiosity is used successfully with Random Network Distillation (RND). OpenAI has published a [detailed article](https://blog.openai.com/reinforcement-learning-with-prediction-based-rewards/) about their approach using this method, which was especially successful with previously unsolved games like Montezuma’s Revenge.
While this does not fully answer your question about curiosity being required to build a true AI, it shows that previously unsolved problems became solvable by introducing curiosity into the reward system.
Upvotes: 0 <issue_comment>username_5: Curiosity by itself does not improve intelligence.
It increases the chances of better understanding a given subject, given that curiosity is coupled with actions in that direction.
For example:
I am curious about how to make pancakes and decide to find a recipe but stop at the first instance of an answer with steps to follow.
Curiosity needs to be coupled with the desire to improve a given understanding and be followed by a review of current knowledge with the aim of updating previously reached conclusions, provided that the logic used to judge improvements is correct.
Curiosity will not necessarily be beneficial for an improved intelligence. But to allow for an improved intelligence, curiosity is a mandatory prerequisite.
Curiosity is a symptom of an evolving intelligence.
Upvotes: 1
|
2016/08/11
| 914
| 3,693
|
<issue_start>username_0: Significant AI vs human board game matches include:
* **chess**: [Deep Blue vs Kasparov](https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)#Deep_Blue_versus_Kasparov) in 1996,
* **go**: [DeepMind AlphaGo vs Lee Sedol](https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol) in 2016,
which demonstrated that AI challenged and defeated professional players.
Are there known board games left where a human can still win against an AI? I mean based on the final outcome of authoritative famous matches, where there is still the same board game where AI cannot beat a world champion of that game.<issue_comment>username_1: Artificially intelligent computer programs should be able to be at the same level or beat humans at every game that we play. This is because games follow rules that are scriptable, and [artificial intelligence](http://www.aaai.org/ojs/index.php/aimagazine/article/view/2310) is designed to focus on one specific game and learn from its failures. The difference between humans and artificial intelligence is that artificial intelligence focuses on one specific task like learning to master Go while our brain is dedicated to mastering multiple tasks like...living. Even Arimaa, a game designed to be difficult for artificially intelligent systems was beaten by a bot called Sharp: <https://en.wikipedia.org/wiki/Arimaa>.
Upvotes: 2 <issue_comment>username_2: Not all games (or even board games) are computationally algorithmic. Even the least skilled player is likely to trounce the hottest pattern-matching algorithm in a game of **[Pictionary](https://en.wikipedia.org/wiki/Pictionary)** (for example).

If you want to say that the movement of pieces upon successful completion of a task is only ancillary to the object of the game, then your answer will be largely self-selecting. A sufficiently sophisticated algorithm will brute force a computational problem better than human intuition… [**eventually.**](https://en.wikipedia.org/wiki/AI_effect)
Upvotes: 3 <issue_comment>username_3: For many years, the focus has been on games with perfect information. That is, in Chess and Go both of us are looking at the same board. In something like Poker, you have information that I don't have and I have information that you don't have, and so for either of us to make sense of each other's actions we need to model what hidden information the other player has, and *also* manage how we leak our hidden information. (A poker bot whose hand strength could be trivially determined from its bets will be easier to beat than a poker bot that doesn't.)
Current research is switching to tackling games with imperfect information. Deepmind, for example, [has said](http://www.businessinsider.com/google-deepmind-could-play-starcraft-2016-3) they might approach Starcraft next.
I don't see too much difference between video games and board games, and there are several good reasons to switch to video games for games with imperfect information.
One is that, for beating the best human to count as a major victory, there needs to be a pyramid of skill with that human at the top--it'll be harder to unseat the top Starcraft champion than the top Warcraft champion, even though the bots might be comparably difficult to code, just because humans have tried harder at Starcraft.
Another is that many games with imperfect information deal with reading faces and concealing information, which an AI would have an unnatural advantage at; for multiplayer video games, players normally interact with each other through a server as an intermediary and so the competition will be more normal.
Upvotes: 4 [selected_answer]
|
2016/08/12
| 842
| 3,561
|
<issue_start>username_0: Has there any research been done on how difficult certain languages are to learn for chatbots?
For example, CleverBot knows a bit of Dutch, German, Finnish and French, so there are clearly chatbots that speak other languages than English. (English is still her best language, but that is because she speaks that most often)
I would imagine that a logical constructed language, like Lojban, would be easier to learn than a natural language, like English, for example.
|
2016/08/12
| 1,765
| 6,823
|
<issue_start>username_0: Humans have been endowed with personalities by nature, and it is not clear (to me at least) if this is a feature or a bug. This has been explored in science fiction by various notions of [Borg](http://memory-alpha.org/Borg)-like entities. It is my belief that, for narrative reasons, such stories usually end with the humans with their flawed personalities winning in the end.
*Are there experts who have analyzed, perhaps mathematically, design criteria for an AI agent with weakly enforced goals (e.g. to maximize reproduction in the human case) in an uncertain environment, and ended up with the answer that a notion of personality is useful?*
If there are AI researchers, philosophers (or maybe even science fiction) writers who have examined this question in their work, I would be happy to know about those too.<issue_comment>username_1: 'Personality' is something of a 'suitcase word' (Minsky) for quite a large collection of (presumably reasonably consistent) observable traits.
It seems clear that there is a certain collective advantage in having a consistent personality - specifically that it affords observers some learning gradient in an otherwise uncertain environment. This is of particular importance because those consistencies might have been arrived at using different learning mechanisms than the ones a given observer has.
Hence, in any non-trivial coevolutionary system, other organisms will inevitably make use of any such consistencies. Consider a simple robot, called Alice, say, that has the trait of 'quickly flashing red when it sees a blue robot'. It makes sense for all observers to exploit *everything* that they perceive as correlating with Alice's behavior, in particular, the prediction that a blue robot is likely to be present.
The best reference I can recommend on this (which shows that we tend to ascribe 'personality' to even very simple mechanisms) is ['Vehicles'](https://mitpress.mit.edu/books/vehicles) by <NAME>.
Upvotes: 2 <issue_comment>username_2: **First, a note on the question itself.**
>
> Humans have been endowed with personalities by nature, and it is not clear (to me at least) if this is a feature or a bug.
>
>
>
In my opinion, this is a statement that constrains the question, since it assumes that the personality is *given*. To me, it feels a bit like *playing god*: Artificial (given) Intelligence would hence imply Artificial (given) Personality. This approach to the problem seems to be supported by the next fragment:
>
> a notion of personality is useful
>
>
>
I point to the above because I don't think that artificial intelligence... ***intelligence*** itself, actually, needs to be given or assigned, or even to have a *use* in the sense of a *purpose*.
---
**The previous note** was about *emergence*, which is a topic that [user217281728](https://ai.stackexchange.com/users/42/user217281728) briefly addressed in [their answer](https://ai.stackexchange.com/a/1596/70). In this second approach, the particular traits *just happen*, or *develop*. The interaction between the (so-called) agents and their environment, as well as with fellow agents, can give rise to new behaviour patterns not designed beforehand.
**In an evolutionary** approach, if the personality would happen to have an advantage (or at least not represent a disadvantage), then it could just appear. Of course, I am making a number of assumptions and demarcations here as well:
* I am thinking about embodied intelligence
* I speak of evolutionary robotics
* I think on social issues being of importance
* I assume that *personality* could emerge
**Now, an example** that I find extremely interesting is that of the little mobile robots which could move around and end up in a pool of *food* or a pool of *poison*. And somehow, by some odd chance, they recognised (or made) a relation between signals sent by other robots and the presence of food. Or not. That was more or less the finding: some robots (kind of) learned to *conceal* information and thus had more time to feed themselves. Well, I would have a couple of personality adjectives for such guys.
[Here you find the article](http://www.pnas.org/content/106/37/15786.full) and [here you find some videos](https://wp.unil.ch/mitrilab/) and related stuff.
**And with that**, we land at my last point: [We humans](https://en.wikipedia.org/wiki/Anthropomorphism#In_computing) put the adjectives, according to our social conditioning. We call [Marvin](https://en.wikipedia.org/wiki/Marvin_(character)) *depressive* and [R2D2](http://www.smithsonianmag.com/arts-culture/why-do-we-love-r2-d2-and-not-c-3po-180951176/) lovely and charming.
Whether their personalities are perceived as constructive or damaging will always depend on our own judgment. In the end, it is quite common among humans to disagree on personality issues, too.
---
**Bonus**
Remember when [HAL got emotional](https://www.youtube.com/watch?v=UgkyrW2NiwM), in the face of death?
It gets *human* when it loses its cool, in front of the flawed-personality human astronaut :)
Upvotes: 2 <issue_comment>username_3: "Usefulness" can only be measured against some purpose. Once you pass AGI - which really means "generally animal-like AI, because it seems general to us" - then you've passed into a world of potentially undefined behavior.
Part of what makes a human free and sets us apart from the other animals is the fact that our purposes, capabilities and possibilities aren't fully defined. We're open ended.
To clarify terms, I interpret "Strong AGI" as "potentially super intelligence, but at least human level."
When we say "Strong AGI" vs just "AGI," we're not saying that one is *more* open ended than the other. We are saying that the stronger one is simply smarter on some axis.
So to ask whether a particular trait would be "more useful" to a Strong AGI - that would depend on the purpose of the AGI. But here's the catch: if a thing had just one purpose, then the most efficient solution to fulfilling that one purpose will always be a narrow solution, not a general one. When the purpose of the object is known beforehand, giving that object more general capability than is necessary for that purpose is counterproductive.
That's why it's impossible to make declarative prescriptions about what a free, open-ended AGI should or shouldn't need. Such prescriptions would nullify the open-ended freedom of utility that its generality implies. We *can* speak declaratively about lesser robots and animals.
But for any given problem, the solution we will want to find is the most well-defined, narrow, efficient one available - not the most general one.
In other words, sure, personalities could be useful for a Strong AGI, assuming the problems in question involved personalities.
Upvotes: 0
|
2016/08/14
| 1,091
| 4,677
|
<issue_start>username_0: How does a domestic autonomous robotic vacuum cleaner - such as a [Roomba](https://en.wikipedia.org/wiki/Roomba) - know which areas it has already cleaned (i.e., how does it build a virtual map), and how does it plan to travel to the areas it hasn't explored yet?
Does it use some kind of [A\*](https://en.wikipedia.org/wiki/A*_search_algorithm) algorithm?<issue_comment>username_1: I think it uses the kind of algorithm you presented, in combination with various sensors. It uses the sensors to build a virtual map and can then traverse the terrain using a combination of these sensors and the map. Of course it uses some kind of path-planning algorithm to find the best way from A to B.
Maybe you should look at this wikipedia page:
<https://en.wikipedia.org/wiki/Robotic_mapping>
The new robotic vacuum cleaner from Samsung, I think, uses a 360° camera to perceive its environment.
Upvotes: 0 <issue_comment>username_2: Autonomous vacuum cleaners usually have these following task environment properties:
i. Partially observable environment
ii. Deterministic environment.
iii. Sequential environment.
iv. Static environment.
v. Discrete environment
vi. Single agent environment
Since it is a partially observable environment, the agent performs its actions based on what it sees; it is in a deterministic environment because the next state is completely determined by the current state and the agent's action. And since the agent performs its actions one after another, with each decision depending on earlier ones, it is a sequential environment. The environment doesn't change while the agent is working, so it is static, and it is discrete since there are only a finite number of distinct states the agent can be in.
There are certain rules written which tell the agent how to react when in a particular state. If multiple rules are satisfied, then the agent uses its experience to choose an optimal action to perform. The rules are written by the programmer keeping in mind the task environment properties.
The agent's actions also depend on the type of agent, which is decided by the programmer. It can be a simple reflex agent, a goal-based agent, a goal-based agent with feedback, or a complete learning agent. Such an agent can't use the A\* algorithm directly, because the entire environment is not visible, and A\* is useless when we don't know where or when the goal may be reached.
The agent has various sensors attached which give it information about its surroundings, like cameras, sound recorders, thermal sensors, etc. An autonomous vacuum cleaner may also have a dirt sensor to detect dirt. The agent performs an action using one of its actuators, like wheels, robot arms, etc.
Upvotes: 0 <issue_comment>username_3: An agent perceives the environment through sensors and act according to the incoming percepts (agent's perceptual input at any instant). An autonomous vacuum cleaner can be as simple as
>
> (block i, clean) --> Move to block i+1
>
> (block i, dirty) --> Clean
>
>
>
>
This is just a general description; the actual one is more complicated. Alternatively, the bot can have a memory where it stores all its previous decisions and incorporates those while taking new ones.
This can be helpful if the bot wants to remember where an obstacle is (like a wall: the bot doesn't want to go and check for the wall's presence every single time it is turned on), or where it is more probable to find dirt. If the bot does not remember its history, it will scan the whole house over and over again, sensing the same obstacles every time and going around them.
A bot which keeps no log of its history will follow the same procedure again and again, making the same mistakes every time. This is not efficient and wastes its energy (or battery).
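The two condition-action rules quoted above can be sketched as a minimal memoryless reflex agent. The percept format, block names, and action labels below are illustrative assumptions, not an actual Roomba's logic:

```python
# Minimal reflex agent implementing the two condition-action rules quoted
# above. The percept format and action names are illustrative assumptions.

def reflex_vacuum_agent(percept):
    """Map a (location, status) percept directly to an action -- no memory."""
    location, status = percept
    if status == "dirty":
        return "clean"
    return "move_to_next_block"   # e.g. from block i to block i+1

# One simulated sweep over three blocks:
percepts = [("block0", "dirty"), ("block0", "clean"), ("block1", "dirty")]
actions = [reflex_vacuum_agent(p) for p in percepts]
# actions == ["clean", "move_to_next_block", "clean"]
```

Because the agent keeps no state, feeding it the same percepts always yields the same actions, which is exactly the repeated-mistakes behaviour described above.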
Today's bots normally have ordinary sensors which can only sense dirt and obstacles, which limits the number of tasks a bot can perform. If a bot has a decent camera as a sensor, and some image-processing algorithms are built in, the range of tasks it can perform increases - like detecting stairs and cleaning different floors. Normally **stairs will be considered an obstacle and the bot will just go around them**. When a camera sensor is provided, **stairs are potentially a path to be taken**.
The **A\*** algorithm is not necessarily used when the bot does not remember the map of the house (or room). A normal robot which just scans the room and cleans it will not need A\*, as it doesn't know its destination; its only goal is to clean whenever it finds something dirty. But for a bot which knows the map of the room, and where there is a high probability of finding dirt, the A\* algorithm can be used.
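For the map-aware case above, a minimal A\* sketch on a 4-connected grid might look like this (illustrative only; a real vacuum robot's planner and map format will differ):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Returns the length of the shortest path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start)]        # (f = g + h, g, cell)
    best = {start: 0}                        # cheapest known cost to each cell
    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if cur == goal:
            return g
        if g > best.get(cur, float("inf")):  # stale queue entry, skip
            continue
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

# A 3x3 room with a wall across most of the middle row:
room = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
a_star(room, (0, 0), (2, 0))   # -> 6 (the bot must detour around the wall)
```

The Manhattan heuristic is admissible on a 4-connected grid, so the first time the goal is popped its cost is optimal.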
Upvotes: 1
|
2016/08/14
| 831
| 3,630
|
<issue_start>username_0: For a deterministic problem space, I need to find a neural network with the optimal node and link structure. I want to use a genetic algorithm to simulate many neural networks to find the best network structure for the problem domain.
I've never used genetic algorithms for a task like this before. *What are the practical considerations? Specifically, how should I encode the structure of a neural network into a genome?*<issue_comment>username_1: Section 4.2 of ["Essentials of Metaheuristics"](https://cs.gmu.edu/~sean/book/metaheuristics/) has a wealth of information on alternative ways of encoding graph structures via Genetic Algorithms.
With particular regard to evolving ANNs, I would personally not be inclined to implement this sort of thing 'from scratch':
The field of neuroevolution has been around for some time, and the implementations of some of its methods, such as NeuroEvolution of Augmenting Topologies ([NEAT](http://www.cs.ucf.edu/~kstanley/neat.html)), now incorporate the results of much practical experience.
According to the above link:
>
> We also developed an extension to NEAT called HyperNEAT that can evolve neural networks with millions of connections and exploit geometric regularities in the task domain. The HyperNEAT Page includes links to publications and a general explanation of the approach.
>
>
>
Upvotes: 5 [selected_answer]<issue_comment>username_2: Using evolutionary algorithms to evolve neural networks is called [neuroevolution](http://www.scholarpedia.org/article/Neuroevolution).
Some neuroevolution algorithms optimize only the *weights* of a neural network with fixed topology. That sounds not like what you want. Other neuroevolution algorithms optimize both the *weights* and *topology* of a neural net. These kinds of algorithms seem more appropriate for your aims, and are sometimes called TWEANNs (Topology and Weight Evolving Neural Networks).
One popular algorithm is called [NEAT](https://www.cs.ucf.edu/~kstanley/neat.html), and is probably a good place to start, if only because there are a multitude of implementations, one of which hopefully is written in your favorite language. That would at least give you a baseline to work with.
NEAT encodes a neural network genome directly as a graph structure. Mutations can operate on the structure of the network by adding new links (by connecting two nodes not previously connected) or new nodes (by splitting an existing connection), or can operate only on changing the weights associated with edges in the graphs (called mutating the weights).
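The direct encoding and the two structural mutations described above can be sketched roughly as follows. This is a simplified illustration, not NEAT's actual implementation: the dict-based genome layout is my own, and innovation numbers (which real NEAT tracks to align genes during crossover) are omitted:

```python
import random

# A genome is a node list plus a list of connection genes.
def make_genome(n_in, n_out):
    nodes = list(range(n_in + n_out))
    conns = [{"src": i, "dst": n_in + o,
              "weight": random.uniform(-1, 1), "on": True}
             for i in range(n_in) for o in range(n_out)]
    return {"nodes": nodes, "conns": conns}

def mutate_add_node(genome):
    """Structural mutation: split an enabled connection A->B into A->new->B."""
    conn = random.choice([c for c in genome["conns"] if c["on"]])
    conn["on"] = False                       # old gene is disabled, not deleted
    new = max(genome["nodes"]) + 1
    genome["nodes"].append(new)
    genome["conns"].append({"src": conn["src"], "dst": new,
                            "weight": 1.0, "on": True})
    genome["conns"].append({"src": new, "dst": conn["dst"],
                            "weight": conn["weight"], "on": True})

def mutate_weights(genome, sigma=0.1):
    """Non-structural mutation: jitter every connection weight."""
    for c in genome["conns"]:
        c["weight"] += random.gauss(0, sigma)
```

Splitting rather than deleting the old connection keeps the disabled gene in the genome, which is what lets NEAT-style crossover line up genomes of different topologies.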
To give you an idea of the order of magnitude of the sizes of ANNs this particular algorithm works with, it would likely struggle with more than 100 or 200 nodes.
There are more scalable TWEANNs, but they're more complex and make assumptions about the kinds of structures they generate that may not always be productive in practice. For example, another way to encode the structure of a neural network is as the product of a seed pattern that is repeatedly expanded by a grammar (e.g. an L-system). You can much more easily explore larger structures, but because they're generated by a grammar they'll have a characteristic self-repeating sort of feel. HyperNEAT is a popular extension of NEAT that makes a different sort of assumption (that patterns of weights can be easily expressed as a function of geometry), and can scale to ANNs with millions of connections when that assumption fits a particular domain well.
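The grammar-based (indirect) encoding mentioned above can be illustrated with a toy L-system. The seed, symbols, and rule here are invented for illustration; a real indirect encoding would additionally interpret the expanded string as a network blueprint:

```python
# Toy L-system: a seed string is repeatedly rewritten by a grammar.
# Symbols without a rule are copied through unchanged.

def expand(seed, rules, steps):
    s = seed
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"N": "N[N]"}     # every node symbol grows a nested sub-structure
expand("N", rules, 2)     # -> "N[N][N[N]]"
```

Note how after only two steps the string already shows the characteristic self-repeating structure, and how its size grows geometrically: a tiny genome describes an arbitrarily large phenotype.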
There are a few survey papers linked in the top link if you want to observe a greater variety of techniques.
Upvotes: 2
|
2016/08/15
| 456
| 2,086
|
<issue_start>username_0: Were there any studies which checked the accuracy of neural network predictions of greyhound racing results, compared to a human expert? Would it achieve a better payoff?<issue_comment>username_1: >
> Practically, no.
>
>
>
In greyhound racing (or horse racing), there is no definite underlying pattern that can be associated with the outcome of the race. There are far too many variables to record and encode as features, most of which cannot be accessed by the public. These include eating, sleeping, and training patterns. Furthermore, there are variables that cannot be readily quantified, such as the trainer's techniques, training effort, health history, and genes. A mere history of racing results and ages won't be that helpful.
A neural network can only be as good as the features that are used to represent the instances. If the features don't capture the necessary characteristics of the instances that are associated with the problem, then the learner cannot generalize to predict the real world outcome.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I would strongly recommend that you check the book "The Perfect Bet" by <NAME>. It does not mention technical methods such as neural networks but it gives a good history (and very nice stories) on what people had done on that field. It gives you the notion that in order to get achieve a better payoff, your goal is not actually making a better prediction but choosing the better options by considering what other players are doing. If you ask why it is not just the better prediction, the answer is that there is always a balance of risk and payoff and since you cannot find a 100% guaranteed way of prediction, you will have to balance risk and payoff in order to gain in the long run. In addition, although theory suggests that you can make a good prediction by gathering all variables, this is not really possible in practice. Thus, human assistance and observing what others are doing is used as a way to improve mathematical predictions and possible payoffs.
Upvotes: 2
|