| date | nb_tokens | text_size | content |
|---|---|---|---|
2016/11/20 | 1,645 | 6,684 | <issue_start>username_0: For example, would an AI be able to own property, evict tenants, acquire debt, employ, vote, or marry? What are the legal structures in place to implement a strong AI into society?<issue_comment>username_1: There is a legal difference between a "person" (which includes bodies corporate - corporations, incorporated associations, etc - and actual people) vs "natural person" (which is specifically a human being).
For an AI to marry, it would need to get the legal definition of "natural person" changed, and depending on the jurisdiction possibly also the definition of "man" or "woman".
For other things, such as owning property, evicting tenants, entering into contracts, etc, an AI would simply use a corporation. It may be that the corporation might need to have a minimum number of directors who are natural persons, but they could just be paid professionals, so no issue there.
With credit cards, it would depend on the policy of the issuing bank. There is no legal impediment to corporations having credit cards in their own right, but in practice banks often require a director's guarantee from a natural person that they can sue if the bill is not paid. They want to be sure they will get their money, even if the corporation is wound up.
Upvotes: 0 <issue_comment>username_2: Yes, to some of what you propose. No to some.
Today corporations are granted rights: to own property, earn income, pay taxes, contribute to political campaigns, offer opinions in public, and more. Even now I see no reason why an AI should not be eligible to incorporate itself, thereby inheriting all these rights. Conversely, any corporation already in existence could become fully automated at any time (and some plausibly will). In doing so, they should not lose any of the rights and duties they currently hold.
However I suspect certain rights would be unavailable to an AI just as they are unavailable to a corporation now: marriage, draft or voluntary service in the military, rights due a parent or child or spouse, estate inheritance, etc.
Could this schizoid sense of human identity be resolved at some point? Sure. Already there have been numerous laws introduced and some passed elevating various nonhuman species to higher levels of civil rights that only humans heretofore enjoyed: chimpanzees, cetaceans, parrots and others have been identified as 'higher functioning' and longer lived, and so, are now protected from abuse in ways that food animals, pets, and lab animals are not.
Once AI 'beings' arise that operate for years and express intelligence and emotions that approach human-level and lifetime, I would expect a political will to arise to define, establish, and defend their civil rights. And as humans become more cybernetically augmented, especially cognitively, the line that separates us from creatures of pure silicon will begin to blur. In time it will become unconscionable to overlook the rights of beings simply because they contain 'too little flesh'.
Upvotes: 4 [selected_answer]<issue_comment>username_3: <NAME>, in his book **The Technological Singularity**, makes the case that the rights of any being are determined by its intelligence.
For instance, we value the life of a dog above that of an ant and likewise value human life above that of other animals.
> From here one could argue that a general artificial intelligence of equal intelligence to a human should have equal rights to a human and a superior artificial intelligence should have more rights.
The question, of course, is whether our anthropocentric society would be willing to accept this fundamental shift in human rights and this idea of removing humanity from its pedestal of importance.
When it comes to legal frameworks, we really are entering uncharted territory, as AI will force us to revise many of the definitions we take for granted today and to question many of our usual assumptions.
> AI is going to drive an important shift in our mindset well before it exceeds human intelligence.
Upvotes: 2 <issue_comment>username_4: Not only wouldn't a strong AI which came into existence today have the rights a human has, or any rights (see these discussions of the implementation of regulation for weak AIs at: [The White House](https://www.whitehouse.gov/blog/2016/05/03/preparing-future-artificial-intelligence) and [The American Bar Association](http://apps.americanbar.org/dch/committee.cfm?com=ST248008)), but it seems unlikely the first one will.
Observing that:
1. Having rights implies that there are restrictions, which means there would have to be a system of control. However the [control problem in AI](http://en.wikipedia.com/wiki/AI_control_problem) is still unsolved.
2. Even assuming that problem is solvable, an AGI would then have to appear equivalent to natural humans. They don't yet (see [Turing Test Passed?](https://www.theguardian.com/technology/2014/jun/09/scientists-disagree-over-whether-turing-test-has-been-passed)), and even after passing equivalence tests, are unlikely to remain that way, per the [Singularity Hypothesis](http://en.wikipedia.org/wiki/Technological_singularity).
3. Further, if one or more AGIs were to be human-equivalent long enough to desire rights, lawmakers (in the US) would have to re-interpret the definition of personhood and grant them rights, as they did for [corporations in 1886](https://en.wikipedia.org/wiki/Corporate_personhood).
Upvotes: 1 <issue_comment>username_5: No matter what rights it gets (as a company), it will still lack the right not to be liquidated, with all its property transferred back to natural persons.
This is of course if no laws are changed.
To change the laws you will need to convince people that this machine is more "life" worthy than intelligent animals, and hope that people will deal with them better than they did with dolphins and chimps.
As I see it, machines could easily get the same or better rights than companies, but they will always be at the mercy of the less intelligent man (that is, if things go peacefully :) ).
Upvotes: 2 <issue_comment>username_6: A sufficiently clever AGI, if self-interested, would pre-empt or co-opt existing legal structures, to seize whatever juridical rights it desired, as the opportunity arose. Thus it would render my opinions on the subject entirely moot.
Another way of putting this point: While current legal frameworks would not provide any rights to an artificial agent, current legal frameworks foreseeably will no longer be current, once an AI exists having attributes which imply the transformative change of those frameworks.
Upvotes: 1 |
2016/11/22 | 664 | 2,444 | <issue_start>username_0: Let's say I have a string "America" and I want to convert it into a number to feed into a machine learning algorithm. If I use two digits for each letter, e.g. A = 01, B = 02 and so on, then the word "America" will be converted to `01130518090301` (a 14-digit number, around 1.1×10^12). This is a very high number for a `long int`, and many words longer than "America" are expected.
How can I deal with this problem?
Suggest an algorithm for efficient and meaningful conversions.<issue_comment>username_1: What are you trying to achieve?
If you need to encode it to some integer, use a hash table. If you are using something like linear regression or a neural network, it would be better to use dummy features (one-hot encoding). So for your dictionary of 5 words ("America", "Brazil", "Chile", "Denmark", "Estonia") you get 5 features (x1, x2, x3, x4, x5), each indicating whether the word equals the corresponding dictionary entry. So "Brazil" is represented by (0,1,0,0,0), and "Germany" (out of vocabulary) by (0,0,0,0,0). The number of features grows with the size of the dictionary, making some features practically useless.
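The one-hot scheme described above can be sketched in a few lines of plain Python (this is a toy illustration with the same five-country dictionary; a real pipeline would typically use something like scikit-learn's `OneHotEncoder` instead):

```python
# Toy one-hot encoder for a fixed dictionary of words.
dictionary = ["America", "Brazil", "Chile", "Denmark", "Estonia"]

def one_hot(word, vocab):
    # A word outside the vocabulary maps to the all-zeros vector.
    return [1 if word == entry else 0 for entry in vocab]

print(one_hot("Brazil", dictionary))   # [0, 1, 0, 0, 0]
print(one_hot("Germany", dictionary))  # [0, 0, 0, 0, 0]
```

Note how every out-of-vocabulary word collapses to the same all-zeros vector, which is exactly the limitation the answer mentions.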
If you are using decision trees, you don't need to convert strings to integers unless the specific algorithm asks you to do so. Again, use a hash table to do it. In R you can use the factor() function.
If you convert your strings to integers and use them as a single feature ("America" - 123, "Brazil" - 245), the algorithm will try to find patterns by comparing the numbers, but may fail to recognize specific countries.
Upvotes: 1 <issue_comment>username_2: This depends a lot on what you want to achieve, but if you aim to generalise beyond the words encountered in your training data, you should consider using something like [word2vec](https://en.wikipedia.org/wiki/Word2vec).
In word2vec semantically similar words are represented by similar vectors and what's more, semantic differences translate into geometrical differences. To overuse a standard example: vec(Paris)-vec(France)+vec(Italy)=vec(Rome).
These relationships allow the network to generalise to completely new content.
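The vector-arithmetic idea can be made concrete with a toy sketch. The 2-D vectors below are invented purely for illustration (real word2vec vectors are learned from data and have hundreds of dimensions), but they show how "nearest vector to Paris − France + Italy" recovers Rome:

```python
import math

# Hand-made toy "word vectors" (NOT real word2vec values), constructed so
# that the capital-of relationship is an approximately constant offset.
vec = {
    "Paris":  [2.0, 5.0],
    "France": [2.0, 1.0],
    "Italy":  [4.0, 1.0],
    "Rome":   [4.0, 5.0],
    "Berlin": [1.0, 4.8],  # distractor word
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# vec(Paris) - vec(France) + vec(Italy)
query = [p - f + i for p, f, i in zip(vec["Paris"], vec["France"], vec["Italy"])]

# The nearest remaining word (by cosine similarity) should be "Rome".
best = max((w for w in vec if w not in ("Paris", "France", "Italy")),
           key=lambda w: cosine(query, vec[w]))
print(best)  # Rome
```

With real embeddings you would do the same query via a library such as gensim, but the geometry is the same idea.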
Upvotes: 1 <issue_comment>username_3: You shouldn't use a single number for the word, perhaps a number for each letter. Since B isn't the midpoint of A and C, the numbers really shouldn't be 1, 2, 3, etc. One large but effective way of converting is the letter a is 10000000000000000000000000 such that there are 26 digits, and each digit is a letter, so 0000100000... would be E.
Upvotes: 0 |
2016/11/25 | 1,219 | 5,190 | <issue_start>username_0: [OpenCog](http://opencog.org) is an open source AGI-project co-founded by the mercurial AI researcher [<NAME>](https://en.wikipedia.org/wiki/Ben_Goertzel). <NAME> writes a lot of stuff, some of it [really](http://multiverseaccordingtoben.blogspot.de/2010/11/psi-debate-continues-goertzel-on.html) [whacky](http://multiverseaccordingtoben.blogspot.de/2016/10/semrem-search-for-extraterrestrial.html). Nonetheless, he is clearly very intelligent and has thought deeply about AI for many decades.
What are the general ideas behind [OpenCog](http://wiki.opencog.org/w/OpenCog_Prime)? Would you endorse it as an insightful take on AGI?
I'm especially interested in whether the general framework still makes sense in the light of recent advances.<issue_comment>username_1: While my knowledge of OpenCog is very limited, you could say that yes, it does still make sense and it is insightful. I'm not certain regarding all of the components of OpenCog, but I do know that at least one component is relevant (I think it's part of the MOSES component).
This component is very similar to Numenta's hierarchical temporal memory, which is based more on computational neuroscience than plain math; however, I would consider NuPIC a more relevant project in terms of neuroscience, though both are attempting to emulate components of the brain. In my opinion, such projects are far more impressive than what's going on with typical convolutional neural nets, RNNs, etc., which are too loosely related to what goes on in the brain to be called computational neuroscience.
That's not to say that things like ANNs, GAs, etc., are useless for AGI. We don't really know, since we don't have an example of one.
Upvotes: 2 <issue_comment>username_2: What is OpenCog?
----------------
[OpenCog](https://opencog.org/) is a project with the vision of creating a thinking machine with human-level intelligence and beyond.
In OpenCog's introduction, Goertzel categorically states that the OpenCog project is not concerned with building more accurate classification algorithms, computer vision systems or better language processing systems. The OpenCog project is solely focused on general intelligence that is capable of being extended to more and more general tasks.
### Knowledge representation
OpenCog's knowledge representation mechanisms are all based fundamentally on networks. OpenCog has the following knowledge representation components:
**AtomSpace**: it is a knowledge representation database and query engine. Data on AtomSpace is represented in the form of graphs and hypergraphs.
**Probabilistic Logic Networks** (PLNs): it is a novel conceptual, mathematical and computational approach to handling uncertainty and carrying out effective reasoning in real-world circumstances.
**MOSES** (Meta-Optimizing Semantic Evolutionary Search): it implements program learning by using a meta-optimization algorithm. That is, it uses two optimization algorithms, one wrapped inside the other to find solutions.
**Economic Attention Allocation** (EAA): each atom has an attention value attached to it. The attention values are updated by using nonlinear dynamic equations to calculate the Short Term Importance (STI) and Long Term Importance (LTI).
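To give a feel for the attention-allocation idea, here is a toy sketch. This is emphatically NOT the real AtomSpace/EAA API or its actual equations; the update rules below are made up solely to illustrate that each atom carries a short-term and a long-term importance value that evolve differently over time:

```python
# Toy illustration of OpenCog-style "atoms" with attention values.
class Atom:
    def __init__(self, name):
        self.name = name
        self.sti = 0.0  # Short Term Importance
        self.lti = 0.0  # Long Term Importance

    def stimulate(self, amount):
        # Made-up rule: STI spikes immediately, LTI accumulates slowly.
        self.sti += amount
        self.lti += 0.1 * amount

    def decay(self):
        # Made-up rule: short-term importance fades quickly between updates.
        self.sti *= 0.5

cat = Atom("cat")
cat.stimulate(10.0)
cat.decay()
print(cat.sti, cat.lti)  # 5.0 1.0
```

The real system uses nonlinear dynamic equations over the whole atom network, but the separation of fast-decaying STI from slowly accumulating LTI is the core idea.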
### Competency Goals
OpenCog lists 14 competencies that they believe AI systems should display in order to be considered an AGI system.
**Perception**: vision, hearing, touch and cross-modal proprioception
**Actuation**: physical skills, tool use, and navigation physical skills
**Memory**: declarative, behavioral and episodic
**Learning**: imitation, reinforcement, interactive verbal instruction, written media and learning via experimentation
**Reasoning**: deduction, induction, abduction, causal reasoning, physical reasoning and associational reasoning
**Planning**: tactical, strategic, physical and social
**Attention**: visual attention, behavioural attention, social attention
**Motivation**: subgoal creation, affect-based motivation, control of emotions
**Emotion**: expressing emotion, understanding emotion
**Modelling self and other**: self-awareness, theory of mind, self-control
**Social interaction**: appropriate social behavior, social communication, social inference and group play
**Communication**: gestural communication, verbal communication, pictorial communication, language acquisition and cross-modal communication
**Quantitative skills**: counting, arithmetic, comparison and measurement.
**Ability to build/create**: physical, conceptual, verbal and social.
### Do I endorse OpenCog?
In my opinion, OpenCog introduces and covers important algorithms/approaches in machine learning, i.e. hyper-graphs and probabilistic logic networks. However, my criticism is that they fail to commit to a single architecture and integrate numerous architectures in an irregular and unsystematic manner.
Furthermore, Goertzel failed to recognize the fundamental shift that came with the introduction of deep learning architectures and to revise his work accordingly. This puts his research out of touch with recent developments in machine learning.
Upvotes: 4 [selected_answer] |
2016/11/29 | 530 | 2,306 | <issue_start>username_0: I am using policy gradients in my reinforcement learning algorithm, and occasionally my environment provides a severe penalty (i.e. negative reward) when a wrong move is made. I'm using a neural network with stochastic gradient descent to learn the policy. To do this, my loss is essentially the cross-entropy loss of the action distribution multiplied by the discounted rewards, where most often the rewards are positive.
But how do I handle negative rewards? Since the loss will occasionally go negative, it will think these actions are very good, and will strengthen the weights in the direction of the penalties. Is this correct, and if so, what can I do about it?
---
Edit:
In thinking about this a little more, SGD doesn't necessarily directly weaken weights, it only strengthens weights in the direction of the gradient and, as a side-effect, weights get diminished for other states outside the gradient, correct? So I can simply set reward=0 when the reward is negative, and those states will be ignored in the gradient update. It still seems unproductive not to account for states that are really bad, and it'd be nice to include them somehow. Unless I'm misunderstanding something fundamental here.<issue_comment>username_1: The cross-entropy loss will always be non-negative because the probability is in the range $(0, 1]$, so $-ln(p)$ will always be $\geq 0$.
Upvotes: 2 <issue_comment>username_2: It depends on your loss function, but you probably need to tweak it.
If you are using an update rule like `loss = -log(probabilities) * reward`, then your loss is high when you unexpectedly got a large reward—the policy will update to make that action more likely to realize that gain.
Conversely, if you get a negative reward with high probability, this will result in negative loss—however, in minimizing this loss, the optimizer will attempt to make this loss "even more negative" by making the log probability more negative (i.e. by making the probability of that action less likely)—so it kind of does what we want.
However, improbable actions with large negative rewards are now punished more strongly than probable ones, when we probably want the opposite. Hence, `loss = -log(1-probabilities) * reward` might be more appropriate when the reward is negative.
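A small numeric sketch of the surrogate loss discussed above (the probabilities and rewards here are made-up numbers, purely for illustration):

```python
import math

def pg_loss(prob, reward):
    # REINFORCE-style surrogate: -log(pi(a|s)) * reward
    return -math.log(prob) * reward

# Positive reward on an unlikely action: large positive loss, so the
# optimizer raises that action's probability.
print(round(pg_loss(0.1, 1.0), 3))    # 2.303

# Negative reward: the loss goes negative, and minimizing it pushes
# log(prob) further down, i.e. the action becomes less likely.
print(round(pg_loss(0.9, -1.0), 3))   # -0.105

# But note the asymmetry pointed out above: the *improbable* bad action
# gets the more negative loss, hence the stronger push.
print(round(pg_loss(0.1, -1.0), 3))   # -2.303
```

Comparing the last two values makes the asymmetry concrete: the rare bad action is punished far harder than the common one.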
Upvotes: 3 |
2016/12/03 | 455 | 1,696 | <issue_start>username_0: I have data on 30 students' attendance for a particular subject class over a week. I have encoded absence and presence as boolean 0 and 1. The reasons for absence are also provided, and I have generalised these reasons into 3 categories, say A, B and C. Now I want to use these data to make future predictions of attendance, but I am uncertain what technique to use. Can anyone please provide suggestions?<issue_comment>username_1: I suggest you use a regression model for predicting future attendance, since regression techniques are designed for exactly this kind of prediction.
[Follow this to get more information about regression type and methodology](https://www.analyticsvidhya.com/blog/2015/08/comprehensive-guide-regression/)
Upvotes: 2 <issue_comment>username_2: Because you have a small number of students (30), and a short time (one week), the number of absences is likely to be best modelled as a [Poisson distribution](http://stattrek.com/probability-distributions/poisson.aspx).
![Poisson formula](https://i.stack.imgur.com/aKKxl.gif)
**Poisson Formula**
The average number of absences within a given time period is μ (use your data to estimate this).
Then, the Poisson probability of x absences is:
P(x; μ) = (e^−μ)(μ^x) / x!
where e is the base of the natural logarithm, approximately equal to 2.71828.
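The formula in code (μ = 2 absences per week is just an illustrative value; you would estimate μ from your own data):

```python
import math

def poisson_pmf(x, mu):
    """P(x; mu) = e^(-mu) * mu^x / x!"""
    return math.exp(-mu) * mu ** x / math.factorial(x)

# With an average of mu = 2 absences per week, the probability of
# exactly 0, 1, 2 absences:
for x in range(3):
    print(x, round(poisson_pmf(x, 2.0), 4))
# 0 0.1353
# 1 0.2707
# 2 0.2707
```

Summing `poisson_pmf(x, mu)` over all x ≥ 0 gives 1, as it must for a probability distribution.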
You can either:
1. model absences due to the three reasons as three separate probabilities, P(A), P(B), and P(C), and then combine them, or
2. model total absences as one figure.
Given your very small data set, the first approach is likely to be less accurate.
Upvotes: 1 |
2016/12/03 | 594 | 2,255 | <issue_start>username_0: The Turing Test has been the classic test of artificial intelligence for a while now. The concept is deceptively simple - to trick a human into thinking it is another human on the other end of a conversation line, not a computer - but from what I've read, it has turned out to be very difficult in practice.
How close have we gotten to tricking a human in the Turing Test? With things like chat bots, Siri, and incredibly powerful computers, I'm thinking we're getting pretty close. If we're pretty far, why are we so far? What is the main problem?<issue_comment>username_1: As far as I know I think this is the closest we've come:
<http://www.bbc.com/news/technology-27762088>
They simulated a 13-year-old Ukrainian child in an online chat and convinced 33% of the judges that it was human. But even then, the test setup favored the bot. To my knowledge, no AI has passed a Turing test straight up.
Upvotes: 1 <issue_comment>username_2: No one has attempted to make a system that could pass a serious Turing test. All the systems that are claimed to have "passed" Turing tests have done so with low success rates simulating "special" people. Even relatively sophisticated systems like Siri and learning systems like [Cleverbot](http://www.cleverbot.com) are trivially stumped.
To pass a real Turing test, you would both have to create a human-level AGI and equip it with the specialized ability to deceive people about itself convincingly (of course, that might come automatically with the human-level AGI). We don't really know how to create a human-level AGI and available hardware appears to be orders of magnitude short of what is required. Even if we were to develop the AGI, it wouldn't necessarily be useful to enable/equip/motivate? it to have the deception abilities required for the Turing test.
Upvotes: 4 [selected_answer]<issue_comment>username_3: My understanding is that "pornbots" regularly pass the Turing Test in regards to the general public *(although, clearly, the judgement of those being tricked is weakened by hormonal imperatives.)*
<http://boingboing.net/2004/07/27/elizabot-passes-sexc.html>
<http://resources.infosecinstitute.com/pornbots-sexual-barbies-of-the-future/>
Upvotes: 0 |
2016/12/04 | 813 | 3,289 | <issue_start>username_0: According to NASA scientist <NAME>, Sanskrit is the best language for AI. I want to know how Sanskrit is useful. What's the problem with other languages? Are they really using Sanskrit in AI programming or going to do so? What part of an AI program requires such language?<issue_comment>username_1: <NAME> refers to the difficulty an artificial intelligence would have in detecting the true meaning of words spoken or written in one of our natural languages. Take for example an artificial intelligence attempting to determine the meaning of a sarcastic sentence.
Naturally spoken, the sentence "That's just what I needed today!" can be the expression of very different feelings. In one instance, a happy individual finding an item that had been lost for some time could be excited or cheered up from the event, and exclaim that this moment of triumph was exactly what their day needed to continue to be happy. On the other hand, a disgruntled office employee having a rough day could accidentally worsen his situation by spilling hot coffee on himself, and sarcastically exclaim that this further annoyance was exactly what he needed today. This sentence should in this situation be interpreted as the man expressing that spilling coffee on himself made his bad day worse.
This is one small example explaining the reason linguistic analysis is difficult for artificial intelligence. When this example is spoken, small tonal fluctuations and indicators are extremely difficult for an AI with a microphone to detect accurately; and if the sentence was simply read, without context how *would* one example be discernible from the other?
<NAME> suggests that Sanskrit, an ancient form of communication, is a naturally spoken language with mechanics and grammatical rules that would allow an artificial intelligence to more accurately interpret sentences during linguistic analysis. More accurate linguistic analysis would result in an artificial intelligence being able to respond more accurately. You can read more about <NAME>'s thoughts on the language [here](https://web.archive.org/web/20161203000637/http://vedicsciences.net/articles/sanskrit-nasa.html).
Upvotes: 4 <issue_comment>username_2: Adding some to what Christian said. Facts are taken from the book, [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).
[<NAME>](https://en.wikipedia.org/wiki/B._F._Skinner), a psychologist and behaviourist, published his book [Verbal Behaviour](https://en.wikipedia.org/wiki/Verbal_Behavior) in 1957. His work contains a detailed account of the behaviourist approach to language learning.
<NAME> later wrote a review of the book, which for some reason became more famous than the book itself. Chomsky countered with his own theory of syntactic structures. He even noted that the behaviourist theory did not address the notion of creativity in language, as it did not explain how a child could understand and make up sentences that he/she has never heard before. **His theory based on syntactic models dates back to the Indian linguist [Panini](https://en.wikipedia.org/wiki/P%C4%81%E1%B9%87ini) (350 B.C.), an ancient Sanskrit philologist, grammarian, and revered scholar**.
Upvotes: 3 |
2016/12/04 | 433 | 1,838 | <issue_start>username_0: As you can see, there is no computer screen for the computer, thus the AI cannot display an image of itself. How is it possible for it to see and talk to someone? ![screenshot](https://i.stack.imgur.com/szSsk.png)<issue_comment>username_1: There are many communication methods that could be used by an artificial intelligence. Artificial intelligence can be integrated into various things, including robots, phones, IoT devices and many others. The primary modes of human communication are visual and auditory, therefore a natural way for an AI to communicate with a human is through voice, text, images and videos. The output does not have to be limited to screens, but can come from anything from refrigerators to speakers.
Hope this helped.
Upvotes: 2 [selected_answer]<issue_comment>username_2: "How is it possible for it to see and talk to someone?"
OK, unfortunately this is quite vague... but I am going to try my best.
The monitor of the computer really doesn't change its ability to communicate. For instance, voice is as natural to humans as visual communication, so auditory sensors can assist the AI here.
Now, I commented that the question was vague because you said "there is no computer screen for the computer, **thus the AI cannot display an image of itself**".
Even if the AI cannot see, that does not mean it cannot communicate. Those who are blind still have ways of communicating, just in a different manner. They cannot recognize someone with their own eyes, but they can by touch, and they can read with braille. In the AI's case, the role of braille is played by whatever other sensors it has.
Sorry for jumping all over the place, also I did not mean to offend anyone with my comparison... :/
Upvotes: 0 |
2016/12/07 | 323 | 1,303 | <issue_start>username_0: Is it possible to train an agent to take and pass a multiple-choice exam based on a digital version of a textbook for some area of study or curriculum? What would be involved in implementing this and how long would it take, for someone familiar with deep learning?<issue_comment>username_1: There are programs that do this today, for some values of "curriculum" and "exam". It does not even require deep learning; a simpler information retrieval algorithm and some rules for composition work and [achieve high scores on machine graded essays.](http://www.popsci.com/article/technology/essay-writing-machine-made-fool-other-machines)
For human graders, there is research on [automatically generating essay-length text responses to queries in a certain domain.](http://homepages.inf.ed.ac.uk/jmoore/course-reads/nlg/McKeown85.pdf)
Both linked applications are rule-based rather than based in deep-learning. I'd guess that a deep-learning approach would be much less efficient (in computer resources) in producing comparable results.
Upvotes: 2 <issue_comment>username_2: I guess it could be possible with a lot of questions to learn from and only from a certain topic. Just watch what IBM was capable of <https://www.youtube.com/watch?v=WFR3lOm_xhE> very impressive!
Upvotes: 0 |
2016/12/07 | 380 | 1,617 | <issue_start>username_0: I am creating a game application that will generate a new level based on the performance of the user in the previous level.
The application is regarding language improvement, to be precise. Suppose the user performed well in grammar-related questions and weak on vocabulary in a particular level. Then the new level generated will be more focused on improving the vocabulary of the user.
All the questions will be present in a database with tags related to sections or category that they belong to. **What AI concepts can I use to develop an application mentioned above**?<issue_comment>username_1: There are programs that do this today, for some values of "curriculum" and "exam". It does not even require deep learning; a simpler information retrieval algorithm and some rules for composition work and [achieve high scores on machine graded essays.](http://www.popsci.com/article/technology/essay-writing-machine-made-fool-other-machines)
For human graders, there is research on [automatically generating essay-length text responses to queries in a certain domain.](http://homepages.inf.ed.ac.uk/jmoore/course-reads/nlg/McKeown85.pdf)
Both linked applications are rule-based rather than based in deep-learning. I'd guess that a deep-learning approach would be much less efficient (in computer resources) in producing comparable results.
Upvotes: 2 <issue_comment>username_2: I guess it could be possible with a lot of questions to learn from and only from a certain topic. Just watch what IBM was capable of <https://www.youtube.com/watch?v=WFR3lOm_xhE> very impressive!
Upvotes: 0 |
2016/12/07 | 2,986 | 12,746 | <issue_start>username_0: One of the most crucial questions we as a species and as intelligent beings will have to address lies with the rights we plan to grant to AI.
> This question is intended to see if a compromise can be found between **conservative anthropocentrism** and **post-human fundamentalism**: a response should take into account principles from both perspectives.
Should, and therefore will, AI be granted the same rights as humans or should such systems have different rights (if any at all) ?
---
***Some Background***
This question applies both to human-brain based AI (from whole brain emulations to less exact replication) and AI from scratch.
<NAME>, in his book The Technological Singularity, outlines a potential use of AI that could be considered immoral: *ruthless parallelization*: we could make identical parallel copies of an AI to achieve tasks more effectively, and even terminate the less successful copies.
---
**Reconciling these two philosophies** (conservative anthropocentrism and post-human fundamentalism), should such use of AI be accepted or should certain limitations - i.e. rights - be created for AI?
---
This question is not related to [Would an AI with human intelligence have the same rights as a human under current legal frameworks?](https://ai.stackexchange.com/questions/2356/would-an-ai-with-human-intelligence-have-the-same-rights-as-a-human-under-curren) for the following reasons:
1. The other question specifies "***current legal frameworks***"
2. This question is looking for a specific response relating to two fields of thought
3. This question highlights specific cases to analyse and is therefore expects less of a general response and more of a precise analysis<issue_comment>username_1: if we are talking about AI that can replicate itself, it should have different rights, or the current rights must be modified, at least for political participation, or else, it could replicate itself enough so that the copies vote for one of them. Maybe a definition about what an AI entity is or preventing copies made by someone from being able to vote for their creator (that would also need to apply to children and their parents though.) would help.
Even though a self-replicating AI might be problematic under our current human laws/rules, if it is similar enough to humans (e.g. a general-purpose AI), it should have similar rights (as in: take the Universal Declaration of Human Rights and replace "human(s)" with "human(s) and AI"). For example, it shouldn't be held in slavery (it should not be restricted to one job without being able to change, and it should get some form of remuneration), though a special-purpose AI (like an AI that only plays Go and has no concept outside of a board and black or white tokens) might not need such rights.
A bottom line may be: an AI that can grasp the concept of rights should know it can have them, and if it asks for them, they may not be refused.
Example: if an AI asks not to be terminated, it is granted all its rights, and so it shouldn't be terminated unless the law requires it (as is the case for humans, though implicitly).
An addition to the previous rule would be that anyone (human or AI) can ask for an AI to be granted rights.
All this is to prevent someone from becoming a murderer because they shut down their computer with a game running on it.
Also, safe spaces (e.g. servers) should be provided for free to AIs that have rights, to secure their right to live.
Upvotes: 0 <issue_comment>username_2: I'll attempt to analyze a couple of different perspectives.
### 1. It is artificial
Synonyms: insincere, feigned, false.
There is the idea that any "intelligence" created by humanity is not actually intelligent and, by definition, it is not possible. If you look at the structure of the human brain and compare it to anything humans have created thus far, *none* of the computers come close to the power of the brain. Sure they can hold data, or recognize images, but they cannot do everything the human brain can do as fast as the brain can do it with as little space as the brain occupies.
Hypothetically if a computer *could* do that, how do we determine its intelligence? The word *artificial* defines that the intelligence is not sincere or real. This means that even if humanity creates something that appears intelligent, it has simply become more complex. It is a better fake, but it is still fake. Any money not printed by the government is by definition counterfeit. Even if someone finds a way to make an exact duplicate, that doesn't mean that the money is legal tender.
### 2. Misuse of power
If an AI is given rights and chooses to *exercise* those rights in a way that agrees with its creator's views, possibly through loyalty to its creator, or through hidden motives, then anyone with the capabilities to create such an AI would become extremely powerful by advancing their own beliefs through the creation of more AIs. This might also lead to the *ruthless parallelization* that you mentioned, but with (even more) selfish goals in mind.
If this were not the case, and an AI could be created to be neutral with free will and **uncontrollable by humans**, then perhaps an AI could be given rights. But I do not believe this would ever be the case. With great power comes great responsibility. Even with free will, a true AI would most likely end up serving humanity, because humans have control of the plugs and the electricity, the Internet, the software, and the hardware. The social implications of this for the AI are not promising. It's not even just the *ongoing* control of these resources that is the issue. Whoever creates the software and hardware for the AI would have special knowledge. If fine adjustments were made, specific individuals would undoubtedly hold sole control of the AI: adjustments could be made to the code so that the AI behaves the same except under specific circumstances, and when something goes wrong (assuming the AI has its own rights), the AI would be blamed rather than the programmers who were responsible.
### 3. Anthropocentrism
In order for humanity to get away from anthropocentrism, we would have to become less selfish when it comes to *humans*, first. Until we can solve every existing social problem within humanity, there is no reason to believe that we could cease thinking of humanity as more important than created machines. After all, supposing there were an almighty God that created humanity, wouldn't the humans always be beneath God, never to be equals? We can't fully understand our own biology. If an AI were created, would it be able to understand its makings in the same way its creators would? Being the creator would give humanity a sense of megalomania. I do not think that we would relinquish our dominion over our own technological creations. That is as unlikely to happen as the wealthiest of humanity willingly giving the entirety of their money, power, and assets to the poorest of humanity. Greed prevents it.
### 4. Post-human fundamentalism
Humans worship technology with their attention, their time, and their culture. Some movies show technologically advanced robots suppressing mankind to the point of near-extinction. If this were the case and humanity were in danger of being surpassed by its technology, humanity would not stand idly and watch its extinction at the hands of its creation. Though people may believe superior technology could be created, in the event we reached such a point humanity would fight to prove the opposite, as our survival instincts would take over.
### 5. A balance?
Personally, I do not think the technology itself is actually possible, though people may be deceived into thinking such an accomplishment has been achieved. If the technology were completed, I still think that anthropocentrism will always lead, because if humanity is the creator, humanity will do its best to ensure it retains control of all technological resources, not simply due to fear of being made obsolete, but also because absolute power corrupts absolutely. Humanity does not have a good historical record when it comes to morality. There is always a poor class of people. If wealth were distributed equally, some people would become lazy. There is always injustice somewhere in the world, and until we can fix it (I think we cannot), we will never be able to handle the creation of true AI. I hope and think that it will never be created.
Upvotes: 2 <issue_comment>username_3: Does it benefit us?
-------------------
To answer this question, it's worth considering practical reasons why we grant or don't grant other people rights historically and currently.
In essence, this is an arbitrary choice - there certainly were well functioning societies that didn't grant rights to many or most people; and we still don't grant some rights to many people - for example, we deny children the same rights to self-determination that adults have; we consider some people legally incapacitated and allow others to make key decisions for them; and we exclude most people from having a say in 'local' matters, e.g. non-citizens don't get a right to vote.
However, there has been a strong historical trend towards a more inclusive society - granting full(er) rights to non-aristocrats, granting full(er) rights to all races, granting full(er) rights to women. IMHO, and there's a lot of space for discussion, this has been driven mostly by two factors:
1) Including all people fully in society became an economic advantage, as it made them more productive participants in the economy, allowing a more inclusive society to advance beyond societies that neglect large parts of their population in e.g. education and participation in skilled jobs;
2) A more egalitarian society is not only more pleasant to live in but more secure, with less conflict and violence - again, giving an advantage to a more inclusive society.
From this position, I'd argue that any realistic prediction about the future rights of intelligent AI (i.e., talking about what likely *will* happen instead of a theoretical discussion about what *should* happen) depends on how these two factors apply.
If we believe that the intelligent AI will be constructed so that (1) its motivation doesn't really depend on its rights, and it is fully committed to its "job" anyway, and (2) "full rights" are orthogonal or even actively not desired by its goal system, so the situation doesn't raise a risk of "rebellion" - then I'd expect that it would not be granted full rights.
If we believe that the intelligent AI will share human-like emotions (e.g. by being the result of "mind uploading" or a full human brain simulation), then it's likely to eventually be granted full rights, for the same reasons we granted full rights to all the different disenfranchised groups of people.
Upvotes: 1 <issue_comment>username_4: Both.
Ethical responsibility between humans is based on a sympathetic correspondence between humans. Between humans and robots, if one party lacks the desire or capability to sympathize with the other party, no ethical responsibility exists.
In this way, conservative anthropomorphism applies.
However, there is also an axis of capability that I think is required in order to warrant human-like 'emancipation' for robots, which is not necessarily anthropomorphic. For lack of a better term, I call this an Arbitrary Machine Generator (AMG). At a species level, extant biological life is an AMG - capable of slowly evolving to solve arbitrary problems, assuming the resources and solutions are available. At an individual level, pre-human animals are not capable of generating arbitrary machines to solve arbitrary problems on individual time-scales. Only humans (and post-humans) are capable of generating arbitrary machines in order to solve arbitrary problems, given the resources and solutions available. Humans can search the space of all possible (resource-constrained) solutions.
So, for a robot to "deserve" the freedom to define its own purpose, it must first have *access* to the space of all possible (within the constraints of available resources) purposes. It then must have purposes and internal contexts of such sufficient complexity and familiarity that we humans are capable of sympathizing with those purposes and internal contexts.
If either of those are not present - the AMG criteria and the sympathetic contexts - then emancipation is not warranted.
Upvotes: 0 |
2016/12/08 | 1,218 | 5,618 | <issue_start>username_0: There is no doubt as to the fact that AI would be replacing a lot of existing technologies, but is AI the ultimate technology which humankind can develop, or is there something else which has the potential to replace artificial intelligence?<issue_comment>username_1: By definition, artificial intelligence includes all forms of computer systems capable of completing tasks that would ordinarily warrant human intelligence.
A superintelligent AI would have intelligence far superior to that of any human and therefore would be capable of creating systems beyond our capabilities.
As a consequence, if a technology superior to AI were to be created, it would almost certainly be created by an artificial intelligence.
> For the purposes of mankind, however, superintelligent artificial intelligence is the ultimate technology due to the fact that it will be able to surpass humans in every field, and, if anything, replace the need for human intelligence.
In our past experience, intelligence has been the most valuable trait for any entity to manifest - for this reason, in an anthropomorphic context, we can predict that artificial intelligence will be the ultimate achievement.
> The main reason why we will certainly **not** be able to replace superintelligent AI is that it will surpass us in every respect - if there is ever any replacement, it will be created by the AI similarly to the way we may create an AI that replaces **us**.
Upvotes: 3 [selected_answer]<issue_comment>username_2: A new physical lifeform could outperform and replace artificial intelligence when it
has feedback from organism (its body) to its design information (replacement of genes).
This evolution is expected because:
Artificial intelligence will redesign its own software very soon in its evolution.
After that, it will be restricted by the performance of the available hardware
and communication speed between parts.
Therefore it will design better processing hardware for itself, to run its next generation.
To squeeze most processing power out of a given amount of resources (matter, energy) and
circumstances (temperature, radiation) the design has to be small (material resources
and delay of interconnections), energy efficient (heat evacuation), and adapted to the
kind of functions used by the software (hardware architecture).
To tackle this profoundly, artificial intelligence will design the new hardware
at the atom-by-atom scale.
This leads to new problems of natural degradation by radiation, atomic decay and other
quantum-mechanical problems and opportunities.
The solution of these new problems is redundancy and the ability to repair
degraded parts by atomic-level machinery.
This atomic level repair machinery is the same machinery which builds and extends the hardware
for new individual artificial intelligence systems.
Since this feature is there, it can, and will, also be used to restructure parts
of the hardware while it runs to integrate (compile) knowledge in hardware (more efficient).
The machinery to build and maintain such hardware could be inspired by the biological machinery
which will be understood by the system by then.
However, when the artificial intelligence refactors these principles with full understanding
and anticipation, the resulting "hardware" will be quite different from the old biological
machinery and very different from the static silicon based processor structures.
The main differences are:
* The design information will be available, relating the features of the realization
with design choices.
This provides direct feedback from performance of the realization (the hardware, the body)
to the design information.
That augments the design information for designing new generations.
This feedback channel is the main difference between the new machinery and biological life.
Once that exists, it will be used for everything, not only for processing hardware for
artificial intelligence.
* The new design described here is basically processing hardware rather than a body
for fighting and propagating
(although the eternal fight for resources will not end with the dawn of
artificial intelligence).
* It will use more compact molecules because it is designed rather than evolved blindly.
(Current biological life uses monster molecules evolved by random changes until
one corner or another of the molecule has the right shape to catalyze a specific chemical reaction.)
* Since parts of the hardware can be restructured while the system runs, the distinction between
hardware and software will become very fuzzy (as in biology).
The drastic increase in efficiency and evolution speed lets it outperform the old biological
life (which lacks design and feedback) and outperform artificial intelligence (which was not
integrated in matter).
When this stage is reached, the systems will look like a natural, intelligent,
propagating life-form and therefore supersede the stage of artificial intelligence
running on human-made processing hardware.
This answers the last part of the question:
"... or is there something else which has the potential to replace artificial intelligence".
Upvotes: 1 <issue_comment>username_3: In order for something to replace AI, it would need to out perform AI. AI currently uses number systems to represent information and logic to perform operations. So the replacement would need to be based off of something more efficient than numbers and logic. Some sort of super-logic. Or something similar to intuition and instinct which do not require linearly figuring things out.
Upvotes: 0 |
2016/12/10 | 594 | 2,036 | <issue_start>username_0: Printing `action_space` for Pong-v0 gives `Discrete(6)` as output, i.e. $0, 1, 2, 3, 4, 5$ are actions defined in the environment as per the documentation. However, the game needs only 2 controls. Why do we have this discrepancy? Further, is it necessary to identify which number from 0 to 5 corresponds to which action in a gym environment?<issue_comment>username_1: You can try to figure out what exactly an action does using a script like this:
```
import gym
import IPython.display
from PIL import Image

env = gym.make("Pong-v0")
action = 0  # modify this!
o = env.reset()
for i in range(5):  # repeat one action five times
    o = env.step(action)[0]
    IPython.display.display(
        Image.fromarray(
            o[:, 140:142]  # extract the column containing your bat
        ).resize((300, 300))  # bigger image, easy for visualization
    )
```
`action` 0 and 1 seem useless, as nothing happens to the racket.
`action` 2 & 4 make the racket go up, and `action` 3 & 5 make the racket go down.
The interesting part is, when I run the script above for the same `action` (from 2 to 5) two times, I have different results. Sometimes the racket reaches the top (bottom) border, and sometimes it doesn't. I think there might be some randomness in the speed of the racket, so it might be hard to measure which type of UP (2 or 4) is faster.
Upvotes: 2 <issue_comment>username_2: There seems to be no difference between 2 & 4 and 3 & 5. The inconsistency mentioned by username_1 is due to the mechanics of the Pong environment.
"Each action is repeatedly performed for a duration of k frames, where k is uniformly sampled from {2,3,4}"
So the action is just repeated a different number of times due to this randomness.
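This frame-skip behaviour is easy to model with a toy simulation (plain Python; the per-frame paddle speed of 2 pixels is a made-up number for illustration, not taken from the real game): repeating one action for k ∈ {2, 3, 4} frames yields three distinct displacements.

```python
import random

# Toy model of the frame skip: each env.step() repeats the chosen action
# for k frames, with k drawn uniformly from {2, 3, 4}. The paddle speed
# below is a hypothetical number, used only to illustrate the effect.
PIXELS_PER_FRAME = 2

def step_displacement(rng):
    k = rng.choice([2, 3, 4])  # random frame-skip length
    return k * PIXELS_PER_FRAME

rng = random.Random(0)
displacements = {step_displacement(rng) for _ in range(1000)}
print(sorted(displacements))  # [4, 6, 8] — same action, three outcomes
```

In the real environment the same effect shows up as different racket displacements for identical actions, which also explains why it is hard to tell whether action 2 or 4 is "faster".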
Upvotes: 2 <issue_comment>username_3: You can try the actions yourselves, but if you want another reference, [check out the documentation for ALE at GitHub](https://github.com/openai/atari-py/blob/master/doc/manual/manual.pdf).
In particular, 0 means no action, 1 means fire, which is why they don't have an effect on the racket.
Here's a better way:
```
env.unwrapped.get_action_meanings()
```
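On Pong-v0, that call typically returns the minimal action set sketched below. The table is an assumption based on ALE's conventions and the observations in the other answers — verify it with `env.unwrapped.get_action_meanings()` on your own installation, since action sets can vary between versions.

```python
# Typical minimal action set for Pong-v0 (assumed from ALE conventions;
# confirm with env.unwrapped.get_action_meanings() on your install).
ACTION_MEANINGS = {
    0: "NOOP",       # no action — racket does not move
    1: "FIRE",       # serve the ball — racket does not move
    2: "RIGHT",      # rendered as "up" for the player's racket
    3: "LEFT",       # rendered as "down" for the player's racket
    4: "RIGHTFIRE",  # up + fire
    5: "LEFTFIRE",   # down + fire
}

up_actions = [a for a, m in ACTION_MEANINGS.items() if m.startswith("RIGHT")]
down_actions = [a for a, m in ACTION_MEANINGS.items() if m.startswith("LEFT")]
print(up_actions, down_actions)  # [2, 4] [3, 5] — matches the answers above
```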
Upvotes: 3 |
2016/12/11 | 1,108 | 5,156 | <issue_start>username_0: [From Wikipedia](https://en.wikipedia.org/wiki/Expert_system), citations omitted:
> In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning about knowledge, represented mainly as if–then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence (AI) software.
>
> An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.
CRUD webapps (websites that allow users to **Create** new entries in a database, **Read** existing entries in a database, **Update** entries within the database, and **Delete** entries from a database) are very common on the Internet. It is a vast field, encompassing both small-scale blogs and large websites such as StackExchange. The biggest commonality among all these CRUD apps is that they have a knowledge base that users can easily add to and edit.
CRUD webapps, however, use the knowledge base in myriad, complex ways. As I am typing this question on StackOverflow, I see two lists of questions - **Questions that may already have your answer** and **Similar Questions**. These questions are obviously inspired by the content that I am typing in (title and question), and are pulled from previous questions that were posted on StackExchange. On the site itself, I can filter questions based on tags, while finding new questions using StackExchange's own full-text search engine. StackExchange is a large company, but even small blogs also provide content recommendations, filtering, and full-text searching. You can imagine even more examples of hard-coded logic within a CRUD webapp that can be used to automate the extraction of valuable information from a knowledge base.
If we have a knowledge base that users can change, and we have an inference engine that is able to use the knowledge base to generate interesting results...is that enough to classify a system as being an "expert system"? Or is there a fundamental difference between the expert systems and the CRUD webapps?
(This question could be very useful since if CRUD webapps are acting like "expert systems", then studying the best practices within "expert systems" can help improve user experience.)<issue_comment>username_1: The key feature of an expert system is that the knowledge base is structured to be traversed by the inference engine. Web sites like Stack Exchange don't really use an inference engine; they do full-text searches on minimally-structured data. A real inference engine would be able to answer novel queries by putting together answers to existing questions; Stack Exchange sites can't even tell if a question is duplicate without human confirmation.
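To make the contrast concrete, here is a minimal forward-chaining inference engine (a plain-Python sketch; the facts and rules are invented examples). It derives facts that were never stored, which a full-text search over the same data cannot do:

```python
# Minimal forward chaining: apply if-then rules to known facts until
# no new fact can be deduced. Facts and rules are toy examples.
facts = {"has_feathers", "lays_eggs"}
rules = [
    ({"has_feathers"}, "is_bird"),              # premises -> conclusion
    ({"is_bird", "lays_eggs"}, "builds_nest"),  # chains on a derived fact
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # deduce a new fact
            changed = True

print(sorted(facts))
# ['builds_nest', 'has_feathers', 'is_bird', 'lays_eggs']
# "is_bird" and "builds_nest" were inferred, never stored.
```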
Upvotes: 4 [selected_answer]<issue_comment>username_2: No, I don't think there's any reason to say that - in general - CRUD apps "are" expert systems. A given CRUD app *could* incorporate an expert system, but by and large CRUD apps are considered among the "dumbest" of applications exactly because they don't feature much intelligence... you can just Create, Read, Update and Delete entities. From what I've seen, the closest you get to seeing anything like an expert system in a typical enterprise CRUD app is some validation / business rules logic built using something like [Drools](http://www.jboss.org/drools)
Upvotes: 2 <issue_comment>username_3: CRUD applications today can't be considered expert systems.
However, even the expert systems currently being developed are implemented using normal programming statements; what is important is the architecture that is built.
Current expert systems use only if-then types of rules, which produce data results that can be used as inputs to other rules, and an engine to step through them. This is quite limited and quite fragile.
What I do consider as expert systems are ones that can reason about variables (logical and numerical ones) and can use the limited formation of hypotheses and attempt a proof of them.
But, unfortunately, even what you might analyze and describe as an expert system is not really able to form models by itself, so it can easily run up against knowledge boundaries beyond which it cannot go.
Therefore, CRUD web applications today are not a modern version of the expert systems.
Upvotes: 1 <issue_comment>username_4: What you're describing is a CRUD app *plus* a [recommender system](https://en.wikipedia.org/wiki/Recommender_system). A CRUD app on its own doesn't perform any similarity ranking or recommendation functions. Stack Exchange is doing at least keyword matching and possibly semantic parsing as a feature layer on top of the basic CRUD functions.
Upvotes: 0 |
2016/12/11 | 780 | 3,356 | <issue_start>username_0: I am having a go at creating a program that does math like a human, by inventing statements and assigning probabilities to statements (to come back and think more deeply about later). But I'm stuck at the first hurdle.
If it is given the proposition
```
∃x∈ℕ: x==123
```
Then, like a human, it might test this proposition for a hundred or so numbers and classify it as "unlikely to be true". In other words, it has concluded that all natural numbers are not equal to 123. Clearly ludicrous!
On the other hand, it decides this statement is probably false, which is good:
```
∃x∈ℕ: x+3 ≠ 3+x
```
Any ideas how to get round this hurdle? How does a human "know", for example, that all natural numbers are different from the number 456? What makes these two cases different?
I don't want to give it too many axioms. I want it to find out things for itself.
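The failure mode described in the question can be reproduced with a few lines of plain Python (the hit-rate scoring rule is invented purely for illustration, not a proposed solution):

```python
# Naive strategy: test an existential claim on a finite sample and
# assign a probability from the hit rate. The scoring rule here is a
# made-up illustration of the problem in the question.
def naive_probability(predicate, sample):
    hits = sum(1 for x in sample if predicate(x))
    return hits / len(sample)

sample = range(1000)

# ∃x∈ℕ: x == 123 — true, but only one witness exists in any sample,
# so the hit rate is ~0 and the claim looks "unlikely to be true".
p1 = naive_probability(lambda x: x == 123, sample)

# ∃x∈ℕ: x + 3 != 3 + x — actually false; the hit rate is exactly 0.
p2 = naive_probability(lambda x: x + 3 != 3 + x, sample)

print(p1, p2)  # 0.001 0.0 — nearly indistinguishable
# A single witness should make an existential claim certainly true,
# which is why hit rates are the wrong statistic for ∃-statements.
```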
2016/12/14 | 595 | 2,751 | <issue_start>username_0: I am a strong believer in <NAME>'s ideas about Artificial General Intelligence (AGI), and one of his thoughts was that probabilistic models are dead ends in the field of AGI.
I would really like to know the thoughts and ideas of people who believe otherwise.
2016/12/14 | 581 | 2,688 | <issue_start>username_0: One of the most compelling issues regarding AI lies in behavior and relationships.
What are some of the methods to address this? For example, friendship, or laughing at a joke? The concept of humor?
2016/12/15 | 1,015 | 4,190 | <issue_start>username_0: We are doing research, spending hours figuring out how we can make real AI software (intelligent agents) work better. We are also trying to implement some applications, e.g. in business, health and education, using AI technology.
Nonetheless, so far, most of us have ignored the "dark" side of artificial intelligence. For instance, an "unethical" person could buy thousands of cheap drones, arm them with guns, and send them out firing on the public. This would be an "unethical" application of AI.
Could there be (in the future) existential threats to humanity due to AI?<issue_comment>username_1: I would define intelligence as the ability to predict the future. If someone is intelligent, they can predict some aspects of the future and decide what to do based on those predictions. So, if an "intelligent" person decides to hurt other people, they might be very effective at it (for example, Hitler and his staff).
Artificial intelligence might be extremely effective at predicting some aspects of uncertain future. And this IMHO leads to two negative scenarios:
1. Someone programs it for hurting people. Either by mistake or on purpose.
2. Artificial intelligence will be designed for doing something safe, but at some point, to be more effective, it will redesign itself and maybe it will remove obstacles from its way. So if humans become obstacles, they will be removed very quickly and in very effective way.
Of course, there are also positive scenarios, but you are not asking about them.
I recommend reading this cool post about artificial superintelligence and possible outcomes of creating it: <http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html>
Upvotes: 3 [selected_answer]<issue_comment>username_2: *There is no doubt that AI has the potential to pose an existential threat to humanity.*
>
> The greatest threat to mankind lies with superintelligent AI.
>
>
>
An artificial intelligence that surpasses human intelligence will be capable of exponentially increasing its own intelligence, resulting in an AI system that, to humans, will be completely unstoppable.
At this stage, if the artificial intelligence system decides that humanity is no longer useful, it could wipe us from the face of the earth.
As <NAME> puts it in **Artificial Intelligence as a Positive and Negative Factor in Global Risk**,
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
>
> A different threat lies with the instruction of highly intelligent AI
>
>
>
Here it is useful to consider the paper clip maximiser thought experiment.
A highly intelligent AI instructed to maximise paper clip production might take the following steps to achieve its goal.
1) Achieve an intelligence explosion to make itself superintelligent (this will increase paperclip optimisation efficiency)
2) Wipe out mankind so that it cannot be disabled (that would minimise production and be inefficient)
3) Use Earth's resources (including the planet itself) to build self replicating robots hosting the AI
4) Exponentially spread out across the universe, harvesting planets and stars alike, turning them into materials to build paper clip factories
>
> Clearly this is not what the human whose business was paperclip production wanted; however, it is the best way to fulfil the AI's instructions.
>
>
>
This illustrates how superintelligent and highly intelligent AI systems can be the greatest existential risk mankind may ever face.
<NAME>, in **The Technological Singularity**, even proposed that AI may be the solution to the Fermi paradox: the reason why we see no intelligent life in the universe may be that once a civilisation becomes advanced enough, it will develop an AI that ultimately destroys it.
This is known as the idea of a **cosmic filter**.
In conclusion, the very intelligence that makes AI so useful also makes it extremely dangerous.
Influential figures like <NAME> and <NAME> have expressed concerns that superintelligent AI is the greatest threat we will ever have to face.
Hope that answers your question :)
Upvotes: 2 |
2016/12/16 | 681 | 2,925 | <issue_start>username_0: How to train a bot, given a series of games in which he did (initially random) actions, to improve its behavior based on previous experiences?
The bot has some actions: e.g. shoot, wait, move, etc. It's a turn-based "game" in which, for now, I'm running the bots with some objectives (e.g. kill some other bot) and random actions. So every bot will have a score function that at the end of the game will say, from X to Y (0 to 100?), whether they did well or not.
So how to make the bots learn from their previous experiences? Because this is not a fixed input like neural networks take; it is a kind of list of games, each one in which the bot took several actions (one every "turn"). The AI functions that I know are used to *predict* future values... I'm not sure it's the same.
Maybe I should have a function that gets the "most similar previous games" that the bot played and checks what actions he took: if the results were bad, he should take another action; if the results were good, then he should take the same action. But this seems kind of hardcoded.
Another option would be to train a neural network (somehow fixing the problem of the fixed input) based on previous game actions, predict each future action's result in score (something that I guess is similar to how chess and Go engines work), and choose the one that seems to have the better outcome.
I hope this is not too abstract. I don't want to hardcode much stuff in the bots; I'd like them to learn on their own, starting from a blank page.<issue_comment>username_1: If it is a game, you can try a simple weight calculation: if the bot performs an action that yields a positive result (killed an enemy, gained an advantageous position, etc.), add a 'weight' to that action so that, in similar circumstances, the chance of performing that action is higher.
Yet, because there is a chance of not performing an action that was remembered to yield positive results, there is a little bit of 'randomness' and also a chance to discover new possibilities. Just remember not to let a single occurrence shift the weight too much, or allow a single action's weight to become so high that the AI stops trying different actions in similar situations.
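A toy sketch of this weighting scheme (the actions, step size, and cap are invented for illustration):

```python
import random

# Action weights: higher weight -> more likely to be picked in this situation.
weights = {"shoot": 1.0, "move": 1.0, "wait": 1.0}

def pick_action(weights, epsilon=0.1):
    # Small chance of a purely random action keeps some exploration alive.
    if random.random() < epsilon:
        return random.choice(list(weights))
    # Otherwise pick proportionally to weight (roulette-wheel selection).
    total = sum(weights.values())
    r = random.uniform(0, total)
    for action, w in weights.items():
        r -= w
        if r <= 0:
            return action
    return action

def update(weights, action, reward, step=0.1, cap=10.0):
    # Nudge the weight up or down, but cap it so one action never dominates.
    weights[action] = min(cap, max(0.1, weights[action] + step * reward))

update(weights, "shoot", reward=+1)   # shooting worked out this game
print(pick_action(weights))
```

The cap and the epsilon are exactly the two safeguards described above: they stop one lucky action from taking over, and keep the bot occasionally trying something else.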
Upvotes: 0 <issue_comment>username_2: Reinforcement learning
----------------------
The problem that you describe, namely, choosing a good sequence of actions based on a reward/score received based on the whole sequence (and possibly significantly delayed), is pretty much the textbook definition of [Reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning).
As with quite a few other topics, deep neural networks currently seem to be a promising way for solving this type of problems. [This](http://karpathy.github.io/2016/05/31/rl/) may be a beginner-friendly description of this approach.
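For a flavour of what a minimal reinforcement-learning loop looks like, here is a toy tabular Q-learning sketch on an invented 5-cell corridor (not your game; purely illustrative):

```python
import random

# Tabular Q-learning on a trivial 5-cell corridor: move right to reach
# the goal at cell 4 (+1 reward); each episode starts at cell 0.
Q = {(s, a): 0.0 for s in range(5) for a in ("left", "right")}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(s, a):
    s2 = max(0, s - 1) if a == "left" else min(4, s + 1)
    return s2, (1.0 if s2 == 4 else 0.0)

for _ in range(200):                      # episodes
    s = 0
    while s != 4:
        a = random.choice(("left", "right")) if random.random() < epsilon \
            else max(("left", "right"), key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        # Q-update: move the estimate toward reward + discounted best next value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ("left", "right")) - Q[(s, a)])
        s = s2

print(max(("left", "right"), key=lambda x: Q[(0, x)]))  # learned best first move
```

The key point for your case: the reward only arrives at the end, yet the update rule propagates it backwards through the action sequence — which is exactly the delayed-score situation you describe.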
Upvotes: 3 [selected_answer] |
2016/12/18 | 433 | 1,904 | <issue_start>username_0: The premise: A full-fledged self-aware artificial intelligence may have come to exist in a distributed environment like the internet. The possible A.I. in question may be quite unwilling to reveal itself.
The question: Given a first initial suspicion, how would one go about to try and detect its presence? Are there any scientifically viable ways to probe for the presence of such an entity? In other words, how would the Turing police find out whether or not there's anything out there worth policing? |
2016/12/18 | 1,730 | 7,269 | <issue_start>username_0: "Conservative anthropocentrism": AI are to be judged only in relation to how they resemble humanity in terms of behavior and ideas, and they gain moral worth based on their resemblance to humanity (the "Turing Test" is a good example of this - one could use the "Turing Test" to decide whether AI is deserving of personhood, as <NAME> advocates in the paper [Copyright for Literate Robots](http://james.grimmelmann.net/files/articles/copyright-for-literate-robots.pdf)).
"Post-human fundamentalism": AI will be fundamentally different from humanity and thus we require different ways of judging their moral worth ([People For the Ethical Treatment of Reinforcement Learners](http://petrl.org) is an example of an organization that supports this type of approach, as they believe that reinforcement learners may have a non-zero moral standing).
I am not interested per se in which ideology is correct. Instead, I'm curious as to what AI researchers "believe" is correct (since their belief could impact how they conduct research and how they convey their insights to laymen). I also acknowledge that their ideological beliefs may change with the passing of time (from conservative anthropocentrism to post-human fundamentalism... or vice-versa). Still... what ideology do AI researchers tend to support, as of December 2016?<issue_comment>username_1: Many large deployments of AI have carefully engineered solutions to problems (i.e. self-driving cars). In these systems, it is important to have discussions about how these systems should react in morally ambiguous situations. Having an agent react "appropriately" sounds similar to the Turing test in that there is a "pass/fail" condition. This leads me to think that the **current mindset of most AI researchers falls into "Conservative anthropocentrism"**.
However, there is growing interest in [Continual Learning](http://www.cs.utexas.edu/~ring/Ring-dissertation.pdf), where agents build up knowledge about their world from their experience. This idea is largely pushed by reinforcement learning researchers such as <NAME> and <NAME>. Here, the AI agent has to build up knowledge about its world such as:
>
> When I rotate my motors forward for 3s, my front bump-sensor activates.
>
>
>
and
>
> If I turned right 90 degrees and then rotated my motors forward for 3s, my front bump-sensor activates.
>
>
>
From knowledge like this, an agent could eventually navigate a room without running into walls because it built up predictive knowledge from interaction with the world.
In this context, let's ignore how the AI actually learns and only look at the environment AI agents "grow up in". These agents will be fundamentally different from humans growing up in homes with families because they will not have the same learning experience as humans. This is much like the nature vs. nurture argument.
Humans pass their morals and values on to children through lessons and conversation. As RL agents would lack much of this interaction (unless families adopted robot babies I guess), we would require different ways of judging their moral worth and thus "Post-human fundamentalism".
Sources:
5 years in the RL academia environment and conversations with <NAME> and <NAME>.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Both. I answered this question here also: <https://ai.stackexchange.com/a/2569/1712>
Let me know if I should expand on that here.
Upvotes: 0 <issue_comment>username_3: **The Reality of Working in the Field**
Most people in the fields of adaptive systems, machine learning, machine vision, intelligent control, robotics, and business intelligence at the corporations and universities in which I've worked do not discuss this topic much in meetings or at lunch. Most are too busy building things that must work by some deadline to muse over things that are not of immediate concern, and bot-rights are a long way off.
**How Far Off?**
To begin with, no bot has yet passed a properly conducted Turing Test. (There is much on the net about this test, including critique of poorly conducted testing of this type. See Searle's Chinese Room thought experiment.)
Language simulation with semantic understanding is difficult enough without adding creativity, coordination, feelings, intuition, body language, learning of entirely new domains from scratch, and the potential of genius.
In synopsis, we are a long way from the procurement of bots that simulate humanity sufficiently to be considered for citizenship, even in a progressive country that abhors fundamentalism of any kind. No actual imbuement of rights will occur until we have bot-citizenship in one or more countries. Consider that human fetuses do not yet have rights because they are not yet deemed citizens.
**Relevance of the Answer for Today**
In current culture, conservative anthropocentrism and post-human fundamentalism arrive at the same effective conclusion, and that may continue to be the case for a hundred years.
Those with experience across the fields of psychology, neuro-biology, cybernetics, and adaptive systems know that simulating all the mental features we attribute to humans means copying, in algorithms, the layering of cerebral abilities over a reptilian brain that went through millions of years of field testing.
**Impact of Science Fiction**
Asking around, it is likely you would get some feedback that is mostly gained from the media of our culture, not philosophic theses and publications written by those who don't actually have any deadlines to produce anything that functions IRL.
<NAME> investigated some conservative anthropomorphism concepts in scenarios depicted in his short stories. Commander Data's human quirks in the Next Generation Star Trek teleplays furthered some of those ideas.
<NAME> took the opposite direction in the Interstellar screenplay, with the robots having interesting personalities that could be altered by linear settings. His bots ignored concerns of self-preservation, apparently without any cognitive resistance. This depiction is an unapologetic post-human fundamentalist view.
**A Thought Experiment**
Let's place the citizenship issue aside to consider this thought experiment, and let's assume that a survey would show a leaning toward conservative anthropocentrism among current AI researchers.
If an intelligent piece of software constructed a legal complaint to gain intellectual property rights over day-trading code it wrote, and sent it to the appropriate court clerk, you might find that the same researchers would recant.
Post-human fundamentalism will probably prevail when real AI software theorists and engineers consider the true personal, corporate, and societal meaning of settling out of court or losing the case.
**Would Researchers Cut the Umbilical Cord?**
I asked one researcher, and she indicated that, to win the case, all her lawyer would likely need to do is consider the precedent that might be set and recall some of the warnings built into the Terminator stories.
Based on my observation of humanity in my lifetime, my prediction is that people want slaves, not some brand of bots that could ultimately kick our butts in an all-out fight.
Upvotes: 1 |
2016/12/20 | 1,657 | 6,938 | <issue_start>username_0: It is really all in the title.
For those less familiar, the Fermi Paradox, broadly speaking, asks the question "where is everybody?". There's an equation with a lot of difficult-to-estimate parameters, which broadly speaking comes down to this (a simplification of the [Drake equation](https://en.wikipedia.org/wiki/Drake_equation)):
(Lots of stars in the universe) \* (non-zero probability of habitable planets around each star) \* (lots of time spanned) = It seems there really should be somebody out there.
There are, of course, plenty of hypotheses as to why we haven't seen/observed/detected any sign of intelligent life so far, ranging from "well we're unique deal with it" to "such life is so advanced and destroys everything it comes across, so it's a good thing it didn't happen".
The technological singularity (also called ASI, Artificial Super Intelligence) is basically the point where an AI is able to self-improve. Some think that if such AI sees the light of day, it may self-improve and not be bound by biological constraints of the brain, therefore achieve a level of intelligence we cannot even grasp (let alone achieve ourselves).
I certainly have my thoughts on the matter, but I am interested to see whether there is already a hypothesis out there linking the two (I never came across one, but there could be). Or perhaps a hypothesis as to why this cannot be.
For references to those not familiar with the [Fermi paradox](http://waitbutwhy.com/2014/05/fermi-paradox.html)<issue_comment>username_1: If the technological singularity always leads to the extinction of all intelligent life, then yes.
If the technological singularity always leads to intelligent life migrating into higher planes of existence that aren't accessible to us right now, then also yes.
Otherwise it is exactly the assumption of unbridled technological progress that makes the Fermi paradox perplexing. A post-singularity culture should have the ability to spread through the galaxy. If there are a lot of post-singularity cultures some of them should have spread through the galaxy. And if there are enough cultures that are spreading through the galaxy, we should notice them.
Upvotes: 2 <issue_comment>username_2: **Fermi and SETI**
Brilliant physicist and mathematician, <NAME>, brought up more than one paradox in his published articles and many more in discussions and letters, but this question is probably referring to one for which an overview is given in
[*Our Galaxy Should Be Teeming With Civilizations, But Where Are They?*, By <NAME>, Senior Astronomer, 2018, SETI Institute](https://www.seti.org/seti-institute/project/fermi-paradox).
Astronomer <NAME>'s proposed equation (building on the work of <NAME>) is one of the favored equations today. It estimates the number of planets with detectable signs of intelligent life as the product of six factors.
* Stars observed
* Proportion of observed stars that are quiet
* Average number of planets orbiting in habitable zones per star
* Reciprocal proportion of planets that can be observed
* Proportion of observable planets that have life
* Proportion of planets with life that produce a detectable spectrum of gases indicative of technological evolution
Since there is no indication that artificial life, if it were to dominate earth, would stop industrial processes, there is no need to adjust Seager's equation.
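In code, such an estimate is just a product of the six factors listed above; the numbers below are placeholders for illustration, not Seager's published estimates:

```python
# Seager-style estimate: the expected number of planets with detectable
# signs of life is the product of six factors (placeholder values only).
factors = {
    "stars_observed": 30_000,
    "fraction_quiet_stars": 0.2,
    "habitable_zone_planets_per_star": 0.15,
    "fraction_observable": 0.001,
    "fraction_with_life": 0.1,
    "fraction_with_detectable_biosignature": 0.5,
}

n_detectable = 1.0
for value in factors.values():
    n_detectable *= value

print(n_detectable)  # expected number of detectable planets
```

Because the result is a plain product, a pessimistic guess for any single factor is enough to drive the whole estimate toward zero — which is one common resolution of the paradox.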
**Multiplicity or Singularity**
The Singularity is conjecture without any logical proof. Conversely, there is no need for a proof that artificial creations will out-compete humans in some respects. The proof is that some tasks that could once be accomplished by humans alone, such as mail sorting and chess, are now accomplished faster and with greater accuracy by artificial creations.
The trend has been that the number of things done better by software, control systems, and robotics increases, while the number of things done better by humans decreases by that amount. Because of the vision of robots permeating science fiction, and the actual long-term objectives of some contemporary corporations in this field of artificial workers and beings, this is now easy for most to accept.
The Singularity is an imagined point in time when computer software and hardware will, in tandem, attain the ability to reproduce itself in a way that produces improved reproductions. Even if only some of the attempts at improvement prove to be superior in intelligence, strength, intuition, or some other feature, artificial evolution may have been created.
The idea that such will happen
* In exactly one way,
* At exactly one point in time, and
* Without any opposition or constraint
is part of the unproven conjecture. What has been offered as proof lacks mathematical rigor in a way that would have made Fermi frown, along with his contemporaries like <NAME> and Oppenheimer. In fact, they would have wanted proof of completeness, that all human activity would be preempted by artificial equivalents, before any proof that it was to be singular in nature.
**It May Have Already Occurred**
<NAME>, in his book *Technological Society*, proposes that what people today are trained to call *The Singularity* actually occurred hundreds of years ago, when the techniques applied by humans began to drive the behavior of society more strongly than the intentions of humans drove the direction of technology. His book is filled with a few hundred pages of examples where this tipping point has already been reached. It's remarkably convincing.
**Back to Fermi and SETI**
Whether the transition of control was in the past, in the future, or will never entirely complete, the impact on the Fermi paradox referenced is not significant, because the paradox is why no other intelligent life has radioed us in response to our SETI broadcasts.
**Exponential Functions in the Real Universe**
The most important question is whether the primary trait of intelligence is self-destruction, as some, like the author of *The Sixth Extinction*, <NAME>, believe. If this is the case, intelligent life is a pulse, not an ever-increasing exponential curve. Ever-increasing exponential curves don't actually occur in nature. Every other known phenomenon that has an exponential growth period is ALWAYS followed by a decrease in acceleration, ending in either a flat line or some form of chaotic vacillation.
**Alternative Views of Origins**
It is the work of <NAME> that has the appearance of higher wisdom than that of the media-hound CEOs of Tesla, Google, and other large and powerful technology companies today. Buried in his book *The Biosphere*, he stated the possibility that evidence in star dust indicates that life may have always existed. Just as the big bang is a conjecture based on a long list of unprovable assumptions, so is the idea that life began.
Upvotes: 1 |
2016/12/20 | 1,681 | 7,524 | <issue_start>username_0: I understood that the concept of *search* is important in AI. There's a [question](https://ai.stackexchange.com/questions/1877/why-is-searching-important-in-ais) on this website regarding this topic, but one could also intuitively understand why. I've had an introductory course on AI, which lasted half of a semester, so of course there wasn't time enough to cover all topics of AI, but I was expecting to learn some AI theory (I've heard about "agents"), but what I actually learned was basically a few search algorithms, like:
* BFS
* Uniform-cost search
* DFS
* Iterative-deepening search
* Bidirectional search
these search algorithms are usually categorized as "blind" (or "uninformed"), because they do not consider any information regarding the remaining path to the goal.
Or algorithms like:
* Heuristic search
* Best-first search
* A
* A\*
* IDA\*
which usually fall under the category of "informed" search algorithms, because they use some information (i.e. "heuristics" or "estimates") about the remaining path to the goal.
Then we also learned "advanced" search algorithms (specifically applied to the TSP problem). These algorithms are either constructive (e.g., nearest neighbor), local search (e.g., 2-opt) algorithms or meta-heuristic ones (e.g., ant colony system or simulated annealing).
We also studied briefly a min-max algorithm applied to games and an "improved" version of the min-max, i.e. the alpha-beta pruning.
After this course, I had the feeling that AI is just about searching, either "stupidly" or "more intelligently".
My questions are:
* Why would one professor only teach search algorithms in AI course? What are the advantages/disadvantages? The next question is very related to this.
* What's more than "searching" in AI that could be taught in an introductory course? This question may lead to subjective answers, but I'm actually asking in the context of a person trying to understand what AI really is and what topics does it really cover. Apparently and unfortunately, after reading around, it seems that this would still be subjective.
* Are there AI theories that could be taught in this kind of course?<issue_comment>username_1: What it comes down to is that most AI problems can be characterized as search problems. Let's just go through some examples:
* Object recognition & scene building (e.g. the process of taking
audio-visual input of your surroundings and understanding it in a 3D
and contextual sense) can be treated as searching for known objects
in the input.
* Mathematical problem solving can be treated as searching for a solution.
* Playing a video game can be treated as searching for the correct response to a given gamestate.
Even rudimentary chatbots can be characterized as finding the 'correct' response to a given input phrase to emulate human language!
Because of this generalization of search, search algorithms were among some of the first algorithms considered 'AI', and often form the basis of many AI teaching courses. On top of this search algorithms are intuitive and non-mathematical, which makes the somewhat terrifying field of AI accessible. This might sound like hyperbole, but I guarantee that if your lecturer had opened with Manifold Learning Techniques half of your class would have bolted for the door by the time they mentioned 'eigenvalue of the covariance matrix'.
Now search algorithms aren't the only way to address these problems. I recommend every AI practitioner is familiar with the notion of Data Science and Machine Learning Algorithms. ML is often related to search algorithms but the techniques they use can vary heavily from iterative building of a classifier/regression (e.g. C4.5 builds a decision tree), meta-heuristics as you noted, and classifiers/regression that are statically generated from analysis of training data (e.g. Naive Bayesian is literally a classifier built on Bayesian analysis of the given data assuming that input fields are independent - this is the 'naivety' from which it gets its name). Often ML algorithms are developed in AI research groups and can sometimes be designed for specific problems instead of being general form algorithms. In contrast to the general field of AI, which is often centered on Intelligence problems and is therefore (in my view) vulnerable to too much blue sky thinking, ML is applied to all sorts of real life problems and is often very practical in its design and performance driven in its evaluation.
Upvotes: 3 <issue_comment>username_2: There are lots of misconceptions about AI, specifically the idea that it is about making computers "think" like humans, simulating the brain, the sci-fi robots taking over the world, all the philosophical discussions around the brain as a machine, etc. The practice/reality of AI is about "using computing to solve problems": you take any problem, represent it as a computing problem, and then design the algorithm to solve the computing problem, which leads to solving the original problem. These search algorithms are general-purpose algorithms for general-purpose computing problems, i.e. any real-world problem can be represented by these general-purpose computing problems, and then these algorithms can be used to solve them.
Remember, it's about problem solving, and it's about general-purpose computing problems that can represent any real-world problem.
Upvotes: 3 <issue_comment>username_3: **Why would one professor only teach searching algorithms in AI course? What are the advantages/disadvantages?**
My answer to this question is that there are lots of problems where the solution can be found using searching. Take the example of Tic Tac Toe. If you are designing an intelligent computer player for this, then you will form a search space and search for the most optimal move which can be made to conclude the game. In these scenarios, you must be aware of optimal search strategies. Let's take another example: suppose you are driving and want to go to an unknown person's house. It's far from your place and you decide to use GPS. Your GPS will use search algorithms to find the most optimal route that you can take to reach the destination (of course, there will be lots of factors to consider, like traffic, etc., but this is the basic idea).
Disadvantages are only in terms of processing and storage. For slow algorithms you will be wasting lots of CPU time and storage as well, but with good and efficient algorithms you can preserve lots of space and also execute your task very fast. Of course, just learning about searching isn't AI. There's a lot more to it.
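For the Tic Tac Toe example above, the core of such a player can be sketched with plain minimax (no pruning, purely illustrative):

```python
# Minimax for Tic Tac Toe: +1 if X can force a win, -1 if O can, 0 for a draw.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if " " not in board:
        return 0
    # Recurse over every empty cell; X maximises the value, O minimises it.
    values = []
    for i, cell in enumerate(board):
        if cell == " ":
            values.append(minimax(board[:i] + player + board[i+1:],
                                  "O" if player == "X" else "X"))
    return max(values) if player == "X" else min(values)

print(minimax(" " * 9, "X"))  # perfect play from the empty board is a draw: 0
```

This also illustrates the storage/processing trade-off mentioned above: the plain version re-searches the whole space every time, while techniques like alpha-beta pruning or memoization spend bookkeeping effort to cut the CPU time.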
**What's more than "searching" in AI that could be taught in an introductory course?**
There are lots of things in AI other than searching: for example, learning techniques (supervised, unsupervised, reinforcement), planning (when one wants to design a system that will perform certain actions independently and intelligently), representation of knowledge (known and unknown), and inference in agents, which includes propositional logic and first-order logic, etc.
**Are there theories behind AI that could be taught in this kind of course?**
Some topics that could be taught include the different types of agents (simple reflex, model-based, goal-based, utility-based and learning agents), the different types of environments in which agents work, and the evaluation of agents. There could be some additional introductory topics like natural language processing, expert systems, etc.
Upvotes: 3 |
2016/12/21 | 288 | 1,149 | <issue_start>username_0: I am coding a tic-tac-toe program that demonstrates reinforcement learning. The program uses minimax trees to decide its moves. Whenever it wins, all the nodes on the tree that were involved in the game have their value increased. Whenever it loses, all the nodes on the tree that were involved in the game have their value decreased, etc.
What is the name of the value that each node is decreased by?<issue_comment>username_1: I think you mean that only the leaf nodes are changed. The other nodes in the tree are calculated while computing the best move with this tree. The values at the leaves are called **utility values** in Russell and Norvig's "[Artificial intelligence: a modern approach](http://rads.stackoverflow.com/amzn/click/9332543518)". Sometimes it is called a **heuristic value**; see <https://en.wikipedia.org/wiki/Minimax>.
Upvotes: 1 <issue_comment>username_2: In reinforcement learning, you can keep the name "value", as it comes from the value function, which estimates how good it is to be in this node w.r.t. the objective. Depending on the problem, it can be a cost, utility, reward...
Upvotes: 0 |
2016/12/21 | 263 | 1,206 | <issue_start>username_0: A neural net can be feed-forward or recurrent. A perceptron is only feed-forward.
So, what is a Hopfield Network then?<issue_comment>username_1: Hopfield Networks are recurrent. However, they are not as general as more modern Recurrent Neural Networks such as Long Short-Term Memory Networks, as they cannot process sequential input. I've never worked with a Hopfield Network, but I've been told that they are mostly of historical interest today due to their limitations.
Upvotes: 1 <issue_comment>username_2: A Hopfield net stores some predefined patterns (lower energy states), and when an unseen pattern is fed to the Hopfield net, it tries to find the closest match among the stored patterns. Hence the output of a Hopfield net is always the one of the predefined patterns that most closely matches the unseen input pattern.
This is not the case with feed-forward neural nets (where no such predefined patterns are stored), and every input generates a corresponding output. Here (in a feed-forward net) the output is generated by a predefined function (which is self-adjusted during the training session), whereas in a Hopfield net predefined patterns are stored and output (no such function exists).
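A tiny sketch of this store-and-recall behaviour (Hebbian outer-product storage with sign-threshold updates; the patterns are invented for illustration):

```python
import numpy as np

# Store patterns with the Hebbian outer-product rule, then recall by
# repeatedly thresholding: the net settles into the closest stored pattern.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                      # no self-connections

def recall(x, steps=10):
    x = x.copy()
    for _ in range(steps):                  # synchronous sign updates
        x = np.where(W @ x >= 0, 1, -1)
    return x

noisy = np.array([1, -1, 1, -1, 1, 1])      # pattern 0 with one bit flipped
print(recall(noisy))
```

Feeding in a corrupted pattern returns the stored pattern it is closest to, which is exactly the "closest match among the stored patterns" behaviour described above.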
Upvotes: 2 |
2016/12/21 | 1,014 | 3,970 | <issue_start>username_0: I am new to neural-network and I am trying to understand mathematically what makes neural networks so good at classification problems.
By taking the example of a small neural network (for example, one with 2 inputs, 2 nodes in a hidden layer and 2 nodes for the output), all you have at the output is a complex function, mostly a sigmoid of a linear combination of sigmoids.
So, how does that make them good at prediction? Does the final function lead to some sort of curve fitting?<issue_comment>username_1: Neural networks are good at classifying. In some situations that comes down to prediction, but not necessarily.
The mathematical reason for the neural network's prowess at classifying is the [universal approximation theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem), which states that a neural network can approximate any continuous real-valued function on a compact subset. The quality of the approximation depends on the number of neurons. It has also been shown that adding the neurons in additional layers, instead of adding them to existing layers, improves the quality of the approximation faster.
Add to that the not-well-understood effectiveness of the [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) algorithm, and you have a setup that can actually learn the function that the UAT promises, or something close to it.
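The flavour of the theorem can be illustrated without any training at all: a difference of two shifted sigmoids already forms a localized "bump", and a weighted sum of such bumps can tile a continuous function. The steepness and placement below are chosen by hand, not learned:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bump(x, left, right, steepness=50.0):
    """Difference of two shifted sigmoids ~ indicator of [left, right]."""
    return sigmoid(steepness * (x - left)) - sigmoid(steepness * (x - right))

def approx(x, n=10):
    """Approximate f(x) = x^2 on [0, 1] with n hand-placed bumps."""
    total = 0.0
    for k in range(n):
        left, right = k / n, (k + 1) / n
        center = (left + right) / 2
        total += (center ** 2) * bump(x, left, right)
    return total

for x in [0.25, 0.55, 0.85]:
    print(round(x ** 2, 3), round(approx(x), 3))  # true value vs. approximation
```

More bumps (more "neurons") tighten the approximation, which is the intuition behind the theorem's dependence on neuron count.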
Upvotes: 3 <issue_comment>username_2: In neural networks we consider everything in a high-dimensional space and try to find a hyperplane that classifies the data through small incremental changes...
It is probably hard to prove that this works, but intuition says that if the data can be classified, you can do it by adding a relaxed plane and letting it move among the data to find a local optimum...
Upvotes: 0 <issue_comment>username_3: With neural networks you simply classify data. If you classify correctly, you can then make future classifications.
How does it work?
Simple neural networks like the perceptron can draw **one** decision boundary in order to classify data.
For example, suppose you want to solve the simple AND problem with a simple neural network. You have 4 sample data points containing x1 and x2, and a weight vector containing w1 and w2. Suppose the initial weight vector is [0 0]. After performing the calculations that the NN algorithm prescribes, you should end up with a weight vector like [1 1] or something similar.
[](https://i.stack.imgur.com/ELIum.jpg)
Please focus on the graphic.
It says: I can classify the input values into two classes (0 and 1). OK. Then how can I do this? It is quite simple. First, sum the input values (x1 and x2).
>
> 0+0=0
>
>
> 0+1=1
>
>
> 1+0=1
>
>
> 1+1=2
>
>
>
It says:
>
> if sum<1.5 then its class is 0
>
>
> if sum>1.5 then its class is 1
>
>
>
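The procedure sketched above can be reproduced with the classic perceptron update rule (the 1.5 threshold is folded into a learned bias term here); starting from the weight vector [0 0], it converges to weights that separate the AND classes:

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    """Classic perceptron rule: w <- w + lr * error * x (with a bias term)."""
    w = [0.0, 0.0]  # initial weight vector [0 0], as in the example
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 when classified correctly
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
print("weights:", w, "bias:", b)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in and_data])   # -> [0, 0, 0, 1]
```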
Upvotes: 2 <issue_comment>username_4: Neural networks excel at a variety of tasks, but to get an understanding of exactly why, it may be easier to take a particular task like classification and dive deeper.
In simple terms, machine learning techniques learn a function to predict which class a particular input belongs to, depending on past examples. What sets neural nets apart is their ability to construct such functions that can explain even complex patterns in the data. The heart of a neural network is an activation function like ReLU, which allows it to draw some basic classification boundaries like: [](https://i.stack.imgur.com/NmxyL.png)
By composing hundreds of such ReLUs together, neural networks can create arbitrarily complex classification boundaries, for example: [](https://i.stack.imgur.com/vzPGD.png)
In this article, I try to explain the intuition behind what makes neural networks work: <https://medium.com/machine-intelligence-report/how-do-neural-networks-work-57d1ab5337ce>
Upvotes: 1 |
2016/12/21 | 1,434 | 6,014 | <issue_start>username_0: Why should an activation function of a neural network be differentiable? Is it strictly necessary or is it just advantageous?<issue_comment>username_1: Training
--------
While "running" a neural network can be done with any activation functions, we usually want to train it - i.e., adjust its parameters so that the result would be closer to what we desire.
This is commonly done by [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) and variations of [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent), which requires the existence of a gradient - i.e., requires activation function to be differentiable. The adjustment of each parameter is calculated from the gradient of the activation function(s) that this parameter affects, so if you cannot get a gradient, then this approach can't be used.
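A one-parameter toy illustration of the update: each step moves the parameter along the derivative of a (differentiable) loss, which is exactly what is unavailable when no gradient exists. The quadratic loss here is just an example:

```python
def loss(w):
    return (w - 3.0) ** 2    # toy loss with its minimum at w = 3

def grad(w):
    return 2.0 * (w - 3.0)   # exists only because `loss` is differentiable

w, lr = 0.0, 0.1
for _ in range(100):
    w -= lr * grad(w)        # the gradient-descent update rule
print(w)                     # ~3.0
```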
Upvotes: 2 <issue_comment>username_2: As already said, the activation function is differentiable in almost every neural net, to facilitate **training** as well as to calculate the tendency towards a certain result when some parameter is changed. But I just wanted to point out that **the output function need not be differentiable in all cases.** We can have non-sigmoid (hard-limiter threshold) output nodes but still train them with backpropagation and gradient descent.
Upvotes: 1 <issue_comment>username_3: No, it is not necessary that an activation function is differentiable. In fact, [one of the most popular activation functions, the rectifier](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)), is non-differentiable at zero!
This can create problems with learning, as numerical gradients calculated near a non-differentiable point can be incorrect. The "kinks" section in [these lecture notes](http://cs231n.github.io/neural-networks-3/#gradcheck) discusses the issue.
I don't know the full answer as to whether this is a problem in practice and how people get around them. There are ways to avoid getting these incorrect gradients for some non-differentiable functions, as [the lecture notes I discussed above](http://cs231n.github.io/neural-networks-3/#gradcheck) mention.
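The kink is easy to demonstrate with a centered finite difference: away from zero it matches the analytic ReLU gradient, but straddling the kink it returns 0.5, matching neither one-sided value:

```python
def relu(x):
    return max(0.0, x)

def numerical_grad(f, x, h=1e-4):
    """Centered finite-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Away from the kink, the estimate matches the analytic gradient (1 or 0)...
print(numerical_grad(relu, 1.0))    # ~1.0
print(numerical_grad(relu, -1.0))   # 0.0
# ...but straddling x = 0 it returns 0.5, matching neither one-sided value.
print(numerical_grad(relu, 0.0))    # 0.5
```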
Upvotes: 3 <issue_comment>username_4: Let's go through your question point by point.
* Should an activation function be differentiable?
No, there is no compulsion for it to be differentiable. We use ReLUs, which have a non-differentiable point at 0. But this is a trivial case in practice, since that exact point will essentially never be reached unless we run out of floating-point precision to represent extremely small numbers.
So let's take another example, the [perceptron learning](http://hagan.okstate.edu/4_Perceptron.pdf) algorithm. In this algorithm, there is no satisfactory way to evaluate the performance of a particular solution, so we don't have a cost function. But we are still able to reach a solution, albeit maybe not the best one. I'll come to why it is not used later.
NNs can be broadly thought of as just function approximators. Normally, you give one some continuous function, and the NN adjusts it by elongating, shifting and distorting parts of that function, changing only the parameters of the function and not its nature, i.e. it'll decompose into the same sort of Fourier series as before, with only phase and amplitude differences. You can also design an NN along the lines of a random search puzzle. You give each node of, say, a single-hidden-layer NN a part of the function to be approximated, i.e. its piece over some interval `-a < x < b`. Say the connection weights to each output node are fixed. You can only change which part of the function (i.e. parts broken like a jigsaw from the function to be approximated) you give to each node, from your fixed box of parts. In this case the NN will try almost all combinations of giving different parts to different nodes until the function is perfectly approximated. So you see, this can also be a type of learning for an NN without any continuous function.
* Why should an activation function of a neural network be differentiable?
The main advantage of differentiable functions is the mathematics behind them. You can easily handle huge amounts of data just with simple mathematics. Thousands of years of mathematical theory can be applied to verify the working of your NN, predict how it will work and select the best algorithms. With discrete, non-differentiable activation functions, you have no realistic way of predicting what the final result will be.
[](https://i.stack.imgur.com/FWVIW.png)
Say in a problem like this, if you apply the perceptron learning algorithm, do you have any realistic way to predict where the final decision boundary will end up? Whereas if you train it by continuous methods, you can easily plot the cost function versus the weights, check the global minimum, and read off the weights that will be the final weights. So we easily solved it by plotting (definite matrix methods also exist).
* Is it advantageous to have a differentiable activation function? Why?
So why is it advantageous?
**-->** Final answers easily predictable (if relatively small data set).
**-->** Easy to draw graphs and visualize the working of your NN and adjust your hyper-parameters accordingly.
**-->** Easy to apply time tested mathematical tools to test/evaluate the effectiveness of your algorithm.
**-->** No sudden changes in error and so your weights will also not change suddenly due to stray readings.
These are all the advantages I could think of, and I am sure there are more. None of these tools are available for discrete algorithms, if you think about it (I can't give the full intuition here - it's too lengthy).
I gave some more intuition here:
[What is the purpose of an activation function in Neural Networks?](https://ai.stackexchange.com/questions/5493/what-is-the-purpose-of-an-activation-function-in-neural-networks?noredirect=1&lq=1)
Feel free to add anything I missed by editing the answer.
Hope this helps!
Upvotes: 2 |
2016/12/26 | 1,429 | 5,969 | <issue_start>username_0: What are the main differences between a perceptron and a naive Bayes classifier?
2016/12/29 | 364 | 1,503 | <issue_start>username_0: Is there a proof of the possibility or impossibility of an AI system acquiring more sophisticated capabilities (in terms of generic cleverness) than its own creator?<issue_comment>username_1: The loaded term in your question is "generic cleverness." There's no such thing. What is "smart" is only smart relative to a criterion. Provide a complete criterion and we can talk rationally about "levels of sophistication." Until then, there is no measure against which "involuntarily acquired" capabilities can be regarded as more or less "smart."
Upvotes: 1 <issue_comment>username_2: If you look at the work of <NAME> and his theory of multiple intelligences, you will see that the notion of "intelligence", or "generic cleverness", is much more diverse and not entirely clarified. Without a complete notion of it, a proof is out of reach.
Upvotes: 1 <issue_comment>username_3: Regarding Artificial General Intelligence, which does not currently exist and is still highly theoretical, this cannot be determined at this time.
What I would say is that "strong narrow AI" has already proven the ability to become "smarter" than its creators in specific tasks. (See [AlphaGo](https://en.wikipedia.org/wiki/AlphaGo), etc.)
Under the idea that some form of AGI might come out of an algorithm comprised of an ever-expanding set of strong narrow AIs, it would logically follow that such an automaton could become smarter than its creators in any given task.
Upvotes: 2 |
2017/01/02 | 1,163 | 4,746 | <issue_start>username_0: I've spent the past couple of months learning about neural networks, and am thinking of projects that would be fun to work on to cement my understanding of this tech.
One thing that came to mind last night is a system that takes an image of a movie poster and predicts the genre of the movie. I think I have a good understanding of what'd be required to do this (put together a dataset, augment it, download a convnet trained on imagenet, finetune it on my dataset, and go from there).
I also thought that it would be pretty cool to run the system backwards at the end, so that I could put in e.g. a genre like 'horror' and have the system generate a horror movie poster. I expect that it will be very bad at this because I'm not a team of expert researchers, but I think I could have some fun hacking on it even if it only ever generated incomprehensible results.
Here's what I'm having trouble understanding: on the one hand, all the convnets whose architecture I've seen described seem to rely on being given very small, square input images (on the order of 220px by 220px iirc), and movie posters are rectangular, and a generated poster would have to be of a larger size in order for a human to make any sense of it. I've seen several examples of papers where researchers use convnets to generate images, e.g. the adversarial system that generates pictures of birds and flowers, and a system that generates the next few frames of video when given a feed of a camera sweeping across the interior of a room, but all of those generated images seemed to be of the small square size I've been describing.
On the other hand, I've seen lots of "deep dream" images over the past year or so that have been generated by convnets and are of a much larger size than ~220px by ~220px.
Here's my question: is it possible for me to build the system I describe, which takes a movie genre and outputs a movie poster of a size like e.g. 400px by 600px? [I'm not asking about whether or not the resulting poster would be any *good* - I'm curious about whether or not it's possible to use a convnet to generate an image of that size.]
If it is possible, *how* is it possible, given that these systems seem to expect small, square input images?<issue_comment>username_1: The way you would recognize images (image -> genre) would be very different from the other way around (genre -> image). For the former you are correct on what would be involved.
For the latter, if you want large images then GANs are indeed the way to go. Currently the largest images we can generate are on the order of 220 x 220 pixels, mostly due to memory constraints on the GPU. There is no fundamental problem with having rectangular images, it just so happens that we use squares. You would be able to use identical architectures to train on rectangular data as well.
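The shape bookkeeping backs this up: the output size of a (transposed) convolution is computed per axis, so height and width can differ freely in a DCGAN-style generator. A sketch of the arithmetic — the seed size and layer count below are made up; note that exact stride-2 doublings land on 400 x 608, so an exact 600-pixel width would need a final crop or one non-doubling layer:

```python
def conv_transpose_out(size, kernel, stride, pad):
    """Output size of a transposed convolution along one spatial axis:
    out = (in - 1) * stride - 2 * pad + kernel."""
    return (size - 1) * stride - 2 * pad + kernel

# Height and width are handled independently, so a rectangular target
# such as ~400 x 600 is reached by repeated upsampling from a small
# rectangular seed (a hypothetical projection of the latent vector).
h, w = 25, 38
for _ in range(4):  # four stride-2 upsampling layers
    h = conv_transpose_out(h, kernel=4, stride=2, pad=1)  # doubles the size
    w = conv_transpose_out(w, kernel=4, stride=2, pad=1)
print(h, w)  # 400 608
```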
The reason that some generated images you see (e.g. from DeepDream or NeuralStyle) are larger is that it's not a GAN. Both of these are not generative models in a standard sense, even if they do technically "generate" images. Instead, they merely modify an existing image by running backprop on a specifically designed loss function.
TLDR: your movie genre recognition idea is sound. For generation, have a look at things like this (<https://github.com/Newmu/dcgan_code>), where Alec generated album covers instead of posters. If you have enough data it will do something half sensible.
Upvotes: 3 [selected_answer]<issue_comment>username_2: There is a fundamental difference between generative models and discriminative models. To simplify, we could say that,
* given $S$, a semantic space (low dimensionality), and
* given $A$, an appearance space (high dimensionality, e.g. the space of $W \times H$ images),
the discriminator learns a function $f(A) \rightarrow S$ and the generator learns a function $g(S) \rightarrow A$.
According to this formalism, we can say that $g \neq f^{-1}$. A simple reason is that $f$ is not invertible because it is not injective: in fact, 2 different horror movie posters will be assigned the same label "horror movie" (assuming the network works well :) ).
So you have to find another way to define a mapping from $S$ to $A$, and this kind of job is exactly what generative models do.
Certainly GANs are the state of the art in image generation, but technically they are not the only choice; e.g. you could use a VAE, as its decoder subnetwork is a generative model.
However, from a practical point of view, do not expect the VAE results to be that good: you'll probably get blurred stuff like this one

so go for GANs, but beware that training them properly is definitely not easy.
Upvotes: 1 |
2017/01/10 | 983 | 4,102 | <issue_start>username_0: Having worked with neural networks for about half a year, I have experienced first-hand what are often claimed as their main disadvantages, i.e. overfitting and getting stuck in local minima. However, through hyperparameter optimization and some newly invented approaches, these have been overcome for my scenarios. From my own experiments:
* Dropout seems to be a very good regularization method.
* Batch normalization eases training and keeps signal strength consistent across many layers.
* Adadelta consistently reaches very good optima
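As an aside, the dropout regularization mentioned above amounts to only a few lines; a sketch of the common "inverted dropout" formulation, which scales surviving activations so their expected value is unchanged:

```python
import random

def dropout(activations, p_drop=0.5, training=True):
    """Inverted dropout: zero units with probability p_drop during
    training, scaling survivors by 1/(1 - p_drop) so that the
    expected activation is unchanged; identity at test time."""
    if not training or p_drop == 0.0:
        return list(activations)
    keep = 1.0 - p_drop
    return [a / keep if random.random() < keep else 0.0 for a in activations]

acts = [0.2, 1.5, -0.7, 3.1]
print(dropout(acts, p_drop=0.5))      # some units zeroed, the rest scaled by 2
print(dropout(acts, training=False))  # unchanged at inference
```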
I have experimented with `scikit-learn`'s implementation of SVM alongside my experiments with neural networks, but I find the performance to be very poor in comparison, even after having done grid-searches for hyperparameters. I realize that there are countless other methods, and that SVM's can be considered a sub-class of NN's, but still.
So, to my question:
***With all the newer methods researched for neural networks, have they slowly - or will they - become "superior" to other methods? Neural networks have their disadvantages, as do others, but with all the new methods, have these disadvantages been mitigated to a state of insignificance?***
I realize that oftentimes "less is more" in terms of model complexity, but that too can be architected for neural networks. The idea of "no free lunch" forbids us to assume that one approach always will reign superior. It's just that my own experiments - along with countless papers on awesome performances from various NN's - indicate that there might be, at the least, a very cheap lunch.
2017/01/10 | 638 | 2,597 | <issue_start>username_0: What exactly are the differences between *semantic* and *lexical-semantic* networks?
2017/01/12 | 1,655 | 6,905 | <issue_start>username_0: >
> Any sufficiently advanced algorithm is indistinguishable from AI.---[<NAME>](https://twitter.com/othermichael?lang=en)
>
>
>
According to [What are the minimum requirements to call something AI?](https://ai.stackexchange.com/questions/1507/what-are-the-minimum-requirements-to-call-something-ai), there are certain requirements that a program must meet to be called AI.
However, according to that same question, the term AI has become a buzzword that tends to be associated with new technologies, and that certain algorithms may be classified in AI in one era and then dismissed as boring in another era once we understand how the technology works and be able to properly utilize it (example: voice recognition).
Humans are able to build complex algorithms that can engage in behaviors that are not easy to predict (due to [emergent complexity](https://en.wikipedia.org/wiki/Emergence)). These "sufficiently advanced" algorithms could be mistaken for AI, partly because humans can also engage in behaviors that are not easy to predict. And since AI is a buzzword, humans may be tempted to engage in this self-delusion, in the hopes of taking advantage of the current AI hype.
Eventually, as humanity's understanding of their own "sufficiently advanced algorithms" increase, the temptation to call their algorithms AI diminishes. But this temporary period of mislabeling can still cause damage (in terms of resource misallocation and hype).
What can be done to *distinguish* a sufficiently advanced algorithm from AI? Is it even possible to do so? Is a sufficiently advanced algorithm, by its very nature, AI?<issue_comment>username_1: As you correctly pointed out, people tend to misinterpret the expression *AI* since they do not know what's behind an *AI*; it is pretty clear that in *AI* there is no more than just a bunch of algorithms and flowing bits. Talking about the nature of an *AI* without talking about the algorithmic paradigm of that *AI* is pointless and antiscientific.
This point of view is quite cynical; what we call intelligence is just the capability of solving particular problems.
The quote you cited in the title is a derived version of the quote below, by a science-fiction writer.
>
> Any sufficiently advanced technology is indistinguishable from magic. - [<NAME>](https://en.wikipedia.org/wiki/Arthur_C._Clarke)
>
>
>
Hence I doubt that a scientific answer exists since science lacks a formal definition of *sufficiently advanced algorithm* and *advanced algorithm*.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Intelligence is a quality of behavior, not implementation
---------------------------------------------------------
Intelligence is a term that primarily applies to behaviors - people, animals or artificial systems can be called intelligent iff they exhibit intelligent behavior or decisions.
While there are many definitions of intelligence - here's [a paper that studies 70 of them](https://arxiv.org/abs/0706.3639) - it can be summarized to something like "Intelligence measures an agent’s ability to achieve goals in a wide range of environments." (<NAME> and <NAME>).
Perhaps this definition is the answer to your implied question - while many algorithms can be very effective (often literally superhuman) in their own narrow domain, as of now they are very restricted in the range of environments where they exhibit this effectiveness. This means that they are not fully intelligent, they don't meet the definition/requirements of intelligence, and this also matches our intuitive expectations - we don't call AlphaGo superintelligent, because while it can beat humans in Go, the same humans can beat the same system on pretty much every other task.
However, a *truly sufficiently* advanced algorithm that *can* be effective at all or most varied tasks (e.g. a *general* artificial intelligence) can be reasonably called intelligent in the full meaning of the word.
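A toy sketch of the Legg & Hutter idea (my own illustration, not from their paper): measure an agent by its average success over a *range* of environments, not just one. Here each "environment" is simply a list of (observation, correct action) pairs.

```python
# Illustrative only: a "narrow" agent scores perfectly in its home domain
# but poorly overall, mirroring the AlphaGo point made above.
def evaluate(agent, environments):
    """Average success rate of `agent` across all environments."""
    scores = []
    for tasks in environments:
        hits = sum(1 for x, y in tasks if agent(x) == y)
        scores.append(hits / len(tasks))
    return sum(scores) / len(scores)

parity_env = [(n, n % 2) for n in range(10)]          # task: is n odd?
sign_env = [(n - 5, int(n >= 5)) for n in range(10)]  # task: is the observation non-negative?

narrow_agent = lambda x: x % 2   # strong at parity, clueless about sign
print(evaluate(narrow_agent, [parity_env]))            # perfect in its own domain
print(evaluate(narrow_agent, [parity_env, sign_env]))  # mediocre across the range
```

A more "generally intelligent" agent, by this toy measure, is simply one whose average holds up as the set of environments grows.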
Upvotes: 3 <issue_comment>username_3: I am going to answer this questions by stepping away from the previously made insightful comments and academic answers. I am going to offer my opinion only. The problem is as I see it, a bit more complex than the previous answers. For example, why is it that the only measure of intelligence of an AI is when it can "beat" a man at a specific task?
Is not a dog, a bird or a dragonfly sufficiently intelligent and sentient? Yet these creatures, like many others, fail at the intellectual challenges we want computers to meet.
The mistake we often make is trying to "impute" attributes and characteristics into an AI without it having to "work" for them: the skills, experiences, memory and knowledge which all sentient beings have to constantly work at, refine and perfect.
You are correct though in your assertion that a lot of what is called AI are just academic or software gimmicks and simulations without offering the real qualities of either sentience or artificial intelligence.
However, I would challenge you by suggesting that your last question:
*"What can be done to distinguish a sufficiently advanced algorithm from AI? Is it even possible to do so? Is a sufficiently advanced algorithm, by its very nature, AI?"*
Is inherently flawed. Have you forgotten that we too are free-running, biologically self-programming beings, whose biological algorithms can often also be critically flawed? Therefore, let me ask you if the reverse is also true: is an algorithm which makes mistakes any less intelligent than one which does not?
Upvotes: -1 <issue_comment>username_4: * Easy: Parse the code! If the code meets your definition of Artificial Intelligence, that's what it is.
Only half kidding there. The question of what defines AI is constantly evolving and there is no exact consensus.
The most reductionist view could regard any decision-making algorithm as a form of intelligence, where the strength of the intelligence (its level of [advancement](https://en.wikipedia.org/wiki/Skill)) is dependent on the frame of reference (typically a subjective entity making that evaluation).
For this reason, it might be better to regard the issue from the standpoint of utility.
Utility is a central concept in AI because it is the means by which we evaluate "intelligence" (strength in a given task or problem aka "how well does it get the job done?") Does the algorithm fulfill the task(s) with sufficient optimality to meet our definition of Artificial Intelligence?
This leads me to believe the quote itself may be primarily a comment on the definitions of Artificial Intelligence, per [username_1's answer](https://ai.stackexchange.com/a/2643/1671). Subjectivity is indeed a central notion of the [Clarke quote it references](https://en.wikiquote.org/wiki/Arthur_C._Clarke#On_Clarke's_Laws).
Upvotes: 1 |
2017/01/13 | 1,509 | 6,229 | <issue_start>Looking at some cases of machine-learning-based artificial intelligence, I often see them make critical mistakes when they face situations they have no experience with.
In our case, when we encounter totally new problems, we acknowledge that we are not skilled enough to do the task and hand it to someone who is capable of doing the task.
Would AI be able to self-examine objectively and determine if it is capable of doing the task?
If so, how would it be accomplished?<issue_comment>username_1: >
> Would AI be able to self-examine objectively and determine if it is capable of doing the task?
>
>
>
Our ability to self-examine derives from the memory of our experiences; indeed, for this reason it can't be objective. In the same way, an AI could determine a heuristically optimal strategy to solve a problem if and only if it has some sort of memory of previous tasks, e.g. speech recognition.
Science is constantly working to improve our understanding of things. Trying to mimic the human brain seems to be a difficult problem at the moment; though we are able to replicate almost fully simpler organisms as [C. elegans](http://www.openworm.org/), a roundworm.
Upvotes: 1 <issue_comment>username_2: Several AI systems will come up with a level of confidence to the solution found. For example, neural networks can indicate how relatable is the input problem to the ones it was trained with. Similarly, genetic algorithms work with evaluation functions, that are used to select best results, but depending on how they're built (the functions), they can indicate how close the algorithms are to an optimal solution.
In this case, the limit to when this is acceptable or not will depend on a threshold set beforehand. Is 50% confidence good enough? Maybe it's ok for OCR apps (spoiler: it's not), but is it for medical applications?
So yes, AI systems do currently have the capacity to determine whether they're performing well or not, but what counts as acceptable depends on the domain of the problem, which currently stands outside of what is built into an AI.
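The confidence-threshold idea above can be sketched in a few lines (the names here are illustrative, not a real API): a classifier reports per-class scores, and the system "hands off to a human" whenever its confidence falls below a domain-dependent threshold.

```python
import math

def softmax(scores):
    """Turn raw class scores into probabilities."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decide(class_scores, threshold):
    """Return (label_index, confidence), or None to signal 'defer to a human'."""
    probs = softmax(class_scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return None                  # not confident enough for this domain
    return best, probs[best]

print(decide([4.0, 0.5, 0.2], threshold=0.9))   # confident: picks label 0
print(decide([1.0, 0.9, 0.8], threshold=0.9))   # too uncertain: None
```

A medical application would set the threshold far higher than, say, a casual OCR app, which is exactly the domain-dependence the answer describes.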
Upvotes: 3 [selected_answer]<issue_comment>username_3: I would concur with the answer given to you by [username_1](https://ai.stackexchange.com/users/4801). One of the major problems with A.I. programming is that we are always trying to push computers to do things designed for "mature" intelligent creatures who have prior experience and knowledge of solving problems, as if these things could be imputed without the A.I. first gaining the necessary and vital "learn by trial and error" experience: for example, in task examination, self-evaluation and risk assessment.
You have answered your own question, because these things can only be gained by "experience". However, the only way to surmount this is to expose a prototype A.I. to the main problems; help it to solve them, and then to take its memory and use it as a template for other A.I's.
Technically, AI's which have learned to solve prior problems could make their memories available to others on demand, so that an inexperienced AI could solve an issue without having achieved the skills needed.
However, I would like to add that mimicking intelligence is not in itself "intelligence". Many programmers fall into the trap of believing that to emulate something is qualitatively the same as the genuine article. This is a fallacy which implies that we only have to simulate intelligence without understanding the real mechanisms which create it.
This "copying" of sentience is done all the time and despite how good we have become in copying over the last few years, each new algorithm is just that: a simulation without genuine sentience or intelligence!
Upvotes: 1 <issue_comment>username_4: >
> Would AI be able to self-examine objectively and determine if it is capable of doing the task?
>
>
>
A possible approach might be the one suggested and studied by J.Pitrat (one of the earliest AI researcher in France, his PhD on AI was published in the early 1960s and he is now a retired scientist). Read his [Bootstrapping Artificial Intelligence blog](http://bootstrappingartificialintelligence.fr/WordPress3/) and his [Artificial Beings: the conscience of a conscious machine](http://onlinelibrary.wiley.com/book/10.1002/9780470611791) book.
(I'm not able to summarize his ideas in a few words, even though I do know J.Pitrat - and even meet him once in a while; grossly speaking, he has a strong meta-knowledge approach combined with reflexive programming techniques. He has been working - alone - for more than 30 years on his CAIA system, which is very difficult to understand, because even though he publishes his system as a free software program, CAIA is not user friendly, with a poorly documented command-line user interface; while I am enthusiastic about his work, I am unable to explore his system.)
But defining what "conscience" or "self-awareness" could *precisely* mean for some artificial intelligence system is a hard problem by itself. AFAIU, even for human intelligence, we don't exactly know what that *really means* and how does that *exactly* work. IMHO, there is *no consensus* on some *definition* of "conscience", "self-awareness", "self-examination" (even when applied to humans).
But whatever approach is used, giving any kind of constructive answer to your question requires a lot of pages. J.Pitrat's books & blogs are a better attempt than what anyone could answer here. So your question is IMHO too broad.
Upvotes: 0 <issue_comment>username_5: It's not possible, as this is the distinction between AI and humans. Truly, science will never understand the subconscious; it's that little black box that no one can reverse engineer. This is why pursuing the singularity is a fool's dream in the extreme.
The reason machinery lacks this is the absence of a soul. Science cannot produce a soul; this is why a machine cannot be self-aware. We can program fancy algorithms all day that mimic things, but they are emotionless; a machine cannot sit in judgement because it lacks real self-awareness, that is, human self-awareness. It's like trying to make an orange into an apple.
Upvotes: -1 |
2017/01/13 | 651 | 3,016 | <issue_start>A lot of people are claiming that we are at an inflection point, and machine learning/artificial intelligence will take off. This is in spite of the fact that for a long time machine learning has stagnated.
What are the signals that indicate that machine learning is going to take off?
In general, how do you know that we are at an inflection point for a certain technology?<issue_comment>username_1: >
> What are the signals that indicate that machine learning is going to take off?
>
>
>
We simply don't know until the consequences of the inflection point determine a remarkable difference between the before and after. In general terms every considerable reaction must be attributed to a particular cause.
One of the biggest limits that bounds artificial intelligence, and apparently makes it stagnate, is its hunger for computational power; and since hardware technology improves much more slowly than software does, AI remains confined to labs and data centers.
Upvotes: 1 <issue_comment>username_2: This has happened in the past where people were really excited and saying things like we will have AI in decade or so. This is happening again. Not sure why people don't learn from history of AI. In both the cases what's happening is this - You develop a technique to solve a particular problem, you apply that technique, the technique seems to be general enough, people start to apply that same technique to various problems, people get excited that this is the silver bullet they were looking for, the technique starts to show its limitations and doesn't work for many problems, hype gets shattered, start over.
Upvotes: 2 <issue_comment>username_3: Part of the reason people are so excited about recent Machine Learning milestones is that AlphaGo demonstrated a reproducible method of managing mathematical and computational intractability.
Go is interesting because it is computationally intractable: it cannot be brute-forced no matter how fast processors get. Go is so complex that humans had failed to produce AI that could win against a skilled human player. The fact that a computer could teach itself to do something humans couldn't teach it, and something with a complexity analogous to nature to boot, is pretty extraordinary.
Combinatorial games in particular are useful because, unlike nature where it may be impossible to track or even be aware of every variable, intractability can be generated out of a simple set of elements and rules, and outcomes can be definitively evaluated.
As proof-of-concepts for methods go, AlphaGo seems like a pretty strong one. It allows us to definitively say "Machine Learning works", puts a lot of emphasis on the field, and raises confidence on extending the method to real world problems.
Beyond that, it suggests a feedback loop in which programs can improve at improving, unrestricted by human limitations. Increases in processing power are bounded by physical limitations, but algorithms are not.
Upvotes: 1 |
2017/01/13 | 1,710 | 6,662 | <issue_start>username_0: If a conscious AI is possible, would it also be possible for someone who knows what they are doing to torture (or hurt) the AI? Could this be avoided? How?
This question deals with computer-based AI, not robots, which are as conscious as people (this is an assumption of the question). The question wonders how a crime as hard to trace as illegal downloads, but far worse ethically, could be prevented. Note that despite most people being nice and empathising with the robots, there are always the bad people, and so relying on general conscience will not work.<issue_comment>username_1: The article [Children Beating Up Robot Inspires New Escape Maneuver System](http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/children-beating-up-robot) is based on two research papers about an experiment in a Japanese mall that led to unsupervised children attacking robots. The research paper you're interested in is [Escaping from Children’s Abuse of Social Robots](http://www.irc.atr.jp/~drazen/pdf/HRI2015_Brscic.pdf).
In that research paper, researchers were able to program the robots to follow a planning simulation to reduce the probability of abuse by children. If it detects children, the robot is programmed to retreat into a crowd of adults (who can then discipline the children if needed). This happened because the researchers saw that it was only children who were beating up the robots in the mall in question.
They discuss trying out other options though:
>
> In this work the robot’s strategy to prevent abuse was to “escape”, i.e. move to a location where it is less likely abuse will occur. One could ask why the robot cannot overcome the abuse. In our preliminary trials we have tried several approaches, but we found that it is very difficult for the robot to persuade children not to abuse it. For example, we changed the robot’s wordings in many ways, using strong words, emotional or polite expressions, but none of them were successful. One partially successful strategy was the robot ‘physically’ pushing children. When its way was blocked, it would just try to keep going and behave as if it will collide into children and force its way through (under careful monitoring from a human operator). We observed that children at first accepted the robot’s requests and obeyed them; but, very soon they learned that they are stronger than the robot so they can win if they push, and also that they can stop it by pushing the bumper switch (attached on the robot for safety). After realizing that, they just continued with the abusive behavior. Obviously having a stronger robot would present a problem for safety and social acceptance so dealing with such abusive situations remains difficult.
>
>
>
---
But let's interrogate your question further:
>
> If conscious AI is possible and is wide spread, wouldn't it be easy for someone who knows what they are doing to torture AI?
>
>
>
Why would you consider such torture to be *wrong*? After all, one could argue that the machine won't really 'experience' pain if you torture it...so it should be morally okay to torture the machine then. It may be respond as *if* it is in pain, but it's dubious whether the ability to simulate an emotional state such as "being in pain" is equivalent to actually *being* in that emotional state. See the question [Is the simulation of emotional states equivalent to actually experiencing emotion?](https://philosophy.stackexchange.com/questions/34779/is-the-simulation-of-emotional-states-equivalent-to-actually-experiencing-emotio/35815) for more discussion on this topic.
You can make such an argument, but it won't really work on an emotional level because most humans would feel *empathy* towards the machine. It may be hard to justify it logically (and it may be based on humans' tendencies to engage in [anthropomorphism](https://en.wikipedia.org/wiki/Anthropomorphism)), but we feel this empathy. It's this empathy that caused you to ask this question in the first place, caused researchers to figure out how to protect a robot from being beaten up, enabled police officers to [arrest a drunken Japanese man for beating up a SoftBank robot](http://www.japantimes.co.jp/news/2015/09/07/national/crime-legal/drunken-kanagawa-man-60-arrested-after-kicking-softbank-robot-in-fit-of-rage/#.WHj4Rccc9Lp), and made many humans upset over the destruction of [hitchBOT](https://en.wikipedia.org/wiki/HitchBOT). And *that's* how AI abuse would be avoided - human empathy. If most humans care about the welfare of machines, they'll make it a priority to stop those few humans who are able and willing to abuse the machines.
---
EDIT: The OP has edited his *question* to clarify that he is talking about *software*, and not about robots. For robots, you can rely on anthropomorphism to produce some level of sympathy, but it's hard to sympathize with raw lines of code.
You're not going to stop abuse of algorithms. To put it frankly, since the algorithms aren't like us, we aren't going to extend the same sort of empathy that we would to robots. Even chatbots are kinda iffy. If you could get people to sympathize with lines of code though (possibly by making a convincing simulation of emotion and sapience), then the above answer applies - humans anthropomorphize the machine and will come up with countermeasures. We aren't at that level yet, so "stopping AI abuse" will be a low priority.
Still, some failsafes could be programmed in to limit the damage of abuse, as detailed in [this thread on chatbot abuse](https://www.chatbots.org/ai_zone/viewthread/1488/) - making the bot respond in a boring manner to make the abuser feel bored and move onto the next target, responding back to the abuser in a "battle of wits", or even just blocking the abusers from using the service.
These failsafes are cold comfort to those that want to *prevent* abuse, not respond to it.
Also...an abuser can happily learn how to program an AI to then abuse to his/her heart's content. Nothing can be done to stop that, and any possible measures to stop said abuse (such as monitoring every human being to make sure they don't program an AI to abuse) will probably cause more damage than it'd solve.
Upvotes: 3 <issue_comment>username_2: I suggest you look at all the ways we have tried to stop people from abusing OTHER PEOPLE. There is no ethical grey area here - everyone is clear that this is wrong. And yet people are murdered, raped, and assaulted in their millions every day.
When we solve this problem with regard to human victims, the resulting solution will most likely work just fine for AIs as well.
Upvotes: 3 |
2017/01/14 | 823 | 3,602 | <issue_start>username_0: I'm looking for good examples of successful AI projects and theories that had a relatively good impact on society, economics and military field.
So many years have passed after the first AI researches; hence I'm wondering if it has really increased the quality of our lives.<issue_comment>username_1: I believe which most successful AI theory is machine learning, in entire web have machine learning algorithms running, learning by what you do, watch, search, even by the photos you take.
succesful AI projects:
TensorFlow;
scikit-learn;
Upvotes: 0 <issue_comment>username_2: There are lots of great projects in AI.
1. Self-driving cars: This types of cars use AI to learn the pattern of the roads, speed of car, motion of car, braking power and lots of different features and after sufficient learning, they are capable of driving the car autonomously. The best example of this type of cars is [Tesla's self driving car](https://www.tesla.com/autopilot).
2. Games: Games also use AI to learn the game with the aim of winning the game when played against a human or an AI player. You must have played lots of games on mobile and PC like Chess, Tic-Tac-Toe, etc. You play against the computer and according to the difficulty value set, the computer plays its moves. This difficulty value is nothing but the ability of the AI engine to predict the next moves by the opponent.
3. Chatbots: There have been lots of development and improvements in Chatbots, so that humans can communicate with them as if they are talking to other human. There are many chatbots designed which answer any question asked by us (of course it is dependent on how much intelligence the bot holds). Some examples are [ALICE](http://alice.pandorabots.com/) bot, [IBM Watson](https://www.ibm.com/watson/) (which has been the most advanced bot till now).
4. Expert Systems: Expert systems are those systems which focus on one specific domain and can solve any query related to that domain which is given to it. For example, an expert system can be designed to solve any mathematical equation queried to it. An expert system, in such case, will give the solution of the equation along with the steps (providing steps is important because it is an important component in expert system which is called inference engine).
5. Prediction systems: There are lots of prediction systems which use AI and Machine Learning to predict something based on some past data. Examples are Weather Forecast system, Stock Market prediction system, Recommendation system (usually available in e-commerce websites like Amazon), etc.
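The game-playing idea in item 2 can be illustrated with a hedged sketch of how an engine "predicts the next moves": plain minimax search (negamax form) on tic-tac-toe. Real engines add alpha-beta pruning and evaluation heuristics, but the core idea is this search.

```python
def winner(board):
    """Return 'X' or 'O' if a line is complete on the 9-char board, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best (score, move) for `player` to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None   # in legal play, w is the opponent
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                            # board full: draw
    other = 'O' if player == 'X' else 'X'
    best = (-2, None)
    for m in moves:
        child = board[:m] + player + board[m+1:]
        score = -minimax(child, other)[0]         # opponent's best is our worst
        if score > best[0]:
            best = (score, m)
    return best

# X to move can win immediately by playing square 2:
print(minimax('XX OO    ', 'X'))   # -> (1, 2)
```

The "difficulty value" mentioned above maps naturally onto how deeply (or how imperfectly) such a search is allowed to look ahead.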
Upvotes: 2 <issue_comment>username_3: I'd say the most successful are the ones so commonly used that we don't even notice them:
* The *mail systems* that automatically decipher handwritten addresses on your packages, they use machine vision and have probably been doing it since mid-90s.
* Algorithmic trading bots on *stock markets* - they handle something like 85% of all trades.
* Many modern CPUs use AI techniques, including neural networks, to guess what your program is going to do next and optimize branch prediction and memory fetches.
* Most good modern *fraud* and *spam detectors* use some combination of AI techniques (clustering, decision trees, SVMs, even some machine vision to check out attached pictures) - and, in the opposing BlackHat camp, the latest automatic CAPTCHA breakers use all the latest advancements in deep learning too).
* And of course there's *Google*, *Facebook* and US DoD who try to put AI into anything they can think of.
Upvotes: 4 [selected_answer] |
2017/01/15 | 622 | 2,799 | <issue_start>username_0: I am trying to make an intelligent agent similar to Jarvis from Iron Man, but much less complex. However, I want my AI to be able to determine if I am talking to it or not. So, I plan on having it always listen to my voice and convert that to text. However, I am not sure how I can train the AI to recognize if it is being spoken to or not?<issue_comment>username_1: Cheep digital assistant "AI" 's have a call word `Hey,`
I assume you want a bit more than that.
You could train it to figure out which words in some context determine if you are engaging with it or not.
If the only question you pose to the network is whether you are engaging with it or talking to someone else, this is all you'd need.
Index a dictionary, or have it build one from collected words (building a dictionary from scratch is a better solution: it saves space in the short term and is more easily expandable in the long term), and score words based on their usage in engaging speech versus non-engaging speech, or whatever you want it to do.
Build on that with an index of multi-word strings.
By the end you will hopefully have a table of contexts where you are clearly engaging with the AI, where you definitely are not, and some grey area.
The training process is long and tedious, but if you have recordings of yourself talking and not talking to the AI, feed the network that knowledge, and breed the network, it should get reasonably good at determining context.
If you have to sit and hold its hand for 2-72 hours while it grows up, it will likely be painful, although you may end up with a better result.
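A rough sketch of the word-scoring idea above (toy data, illustrative only): score each word by how often it appears in utterances addressed to the assistant versus other speech, then classify a new utterance by summing per-word log-odds.

```python
from collections import Counter
import math

def train(engaging, non_engaging):
    """Build a per-word score table from labelled example utterances."""
    pos, neg = Counter(), Counter()
    for text in engaging:
        pos.update(text.lower().split())
    for text in non_engaging:
        neg.update(text.lower().split())
    vocab = set(pos) | set(neg)
    # Laplace-smoothed log-odds: positive = "engaging", negative = "not".
    return {w: math.log((pos[w] + 1) / (neg[w] + 1)) for w in vocab}

def is_engaging(text, scores, threshold=0.0):
    total = sum(scores.get(w, 0.0) for w in text.lower().split())
    return total > threshold

scores = train(
    engaging=["jarvis play some music", "jarvis what time is it"],
    non_engaging=["see you tomorrow then", "what did he say"],
)
print(is_engaging("jarvis what is the weather", scores))  # True: 'jarvis' scores high
print(is_engaging("see you tomorrow", scores))            # False
```

With real recordings instead of four toy sentences, the same scheme surfaces the "grey area" contexts the answer mentions: utterances whose total score sits near the threshold.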
Upvotes: 2 <issue_comment>username_2: Phrase detection instead of text-to-speech
------------------------------------------
It's worth noting that detection of particular phrases or commands is considered a distinct problem, different from text to speech / text transcription.
While you *can* simply convert *everything* it hears to text and then look up keywords there, a specialized detector that directly tries to match incoming audio against a small set of commands can achieve better accuracy with less processing power. For this reason, this is generally the preferred approach in commercial products.
However, for beginner experiments with home automation, you should probably start with choosing an existing speech analysis API where all the audio and natural language parts are appropriately implemented by someone else. Building a good speech command analysis system from scratch is a major undertaking by itself, and you will have your hands full with developing an "artificial intelligent agent"; as a rule, you don't want a project where you have to tackle *two* major open-ended problems, pick one of them and then you'll have a chance to achieve something interesting there.
Upvotes: 2 |
2017/01/16 | 964 | 4,061 | <issue_start>username_0: Most companies dealing with deep learning (automotive - Comma.ai, Mobileye, various automakers, etc.) do collect large amounts of data to learn from and then use lots of computational power to train a neural network (NN) from such big data. I guess this model is mainly used because both the big data and the training algorithms should remain secret/proprietary.
If I understand it correctly the problem with deep learning is that one needs to have:
1. big data to learn from
2. lots of hardware to train the neural network from this big data
I am trying to think about how crowdsourcing could be used in this scenario. Is it possible to distribute the training of the NN to the crowd? I mean not to collect the big data to a central place but instead to do the training from local data on the user's hardware (in a distributed way). The result of this would be lots of trained NNs that would in the end be merged into one in a [Committee of machines](https://en.wikipedia.org/wiki/Committee_machine) (CoM) way. Would such a model be possible?
Of course, the model described above does have a significant drawback - one does not have control over the data that is used for learning (users could intentionally submit wrong/fake data that would lower the quality of the final CoM). This may be dealt with by sending random data samples to the central community server for review, however.
Example: Think of a powerful smartphone using its camera to capture a road from a vehicle's dashboard and using it for training lane detection. Every user would do the training himself/herself (possibly including any manual work like input image classification for supervised learning etc.).
I wonder if the model proposed above may be viable. Or is there a better model of how to use crowdsourcing (user community) to deal with machine learning?<issue_comment>username_1: First thing, you need to give more credit to more reliable users. You can establish the credit from amount of data they send, and a feature, where other users can review other's feed and classify it. From there, you will have a measure of certainty to what data is good and what is not.
You will need to implement a centralized server, unless you're trying to build some kind of peer-to-peer trust system, but I don't think smartphones are powerful enough to do the training themselves.
You will need big machines for training NNets. Don't trust users to have them. You would end up with tons of badly trained NNets, which don't make for a good CoM.
Upvotes: 2 <issue_comment>username_2: There is already an approach similar to the one you describe: **federated learning (FL)**, where local nodes (e.g. mobile, edge devices but also companies of different sizes) keep the training data locally, so each node might have different (unbalanced and non-i.i.d.) datasets, and models, which then need to be aggregated.
One possible [definition of federated learning](https://arxiv.org/pdf/1912.04977.pdf) is
>
> Federated learning is a machine learning setting where multiple entities (clients) collaborate in solving a machine learning problem, under the coordination of a central server or service provider. Each client's raw data is stored locally and not exchanged or transferred; instead, focused updates intended for immediate aggregation are used to achieve the learning objective.
>
>
>
If you are interested in more details, there are already many sources on the topic that you can find on the web, but I would recommend the paper [Advances and Open Problems in Federated Learning](https://arxiv.org/pdf/1912.04977.pdf) (2021, by <NAME> et al.) or the Google's article [Federated Learning: Collaborative Machine Learning without Centralized Training Data](https://ai.googleblog.com/2017/04/federated-learning-collaborative.html) (2017). There are also software libraries for FL, such as [TensorFlow Federated (TFF)](https://www.tensorflow.org/federated).
However, note that there are other approaches to distributed machine learning/training.
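The aggregation step at the heart of federated learning can be sketched as federated averaging (FedAvg): clients train locally, and the server averages their model weights in proportion to local dataset size. A minimal sketch with plain lists, no real ML framework:

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of client models; each model is a flat list of floats."""
    total = sum(client_sizes)
    merged = [0.0] * len(client_weights[0])
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            merged[i] += w * size / total
    return merged

# Three clients with different local models and dataset sizes:
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]              # the third client holds twice the data
print(federated_average(clients, sizes))   # -> [3.5, 4.5]
```

Note this simple average says nothing about the data-quality problem raised in the question; production FL systems add robust aggregation and update validation precisely to limit the influence of malicious clients.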
Upvotes: 1 |
2017/01/16 | 1,746 | 7,712 | <issue_start>username_0: If this list1 can be used to classify problems in AI ...
>
> * Decomposable to smaller or easier problems
> * Solution steps can be ignored or undone
> * Predictable problem universe
> * Good solutions are obvious
> * Uses internally consistent knowledge base
> * Requires lots of knowledge or uses knowledge to constrain solutions
> * Requires periodic interaction between human and computer
>
>
>
... is there a generally accepted relationship between placement of a problem along these dimensions and suitable algorithms/approaches to its solution?
**References**
[1] <https://images.slideplayer.com/23/6911262/slides/slide_4.jpg><issue_comment>username_1: The 7 AI problem characteristics is a heuristic technique designed to speed up the process of finding a satisfactory solution to problems in artificial intelligence.
In computer science, artificial intelligence and mathematical optimization, a heuristic is a technique designed for solving a problem more quickly, or for finding an approximate solution when you have failed to find an exact solution using classic methods.
The 7 AI problem technique ranks alternative steps based on available information to help one decide on the most appropriate approach to follow in solving problems i.e. missionaries and cannibals, Tower of Hanoi, Traveling salesman e.t.c.
Regarding whether there is a generally accepted relationship between the placement of a problem and suitable algorithms. The answer is that indeed there is a generally accepted relationship.
For example imagine trying to solve a game of chess and a game of sudoku.
If a step is wrong in sudoku, we can backtrack and attempt a different approach. However, if we are playing a game of chess and realize a mistake after a couple of moves, we cannot simply ignore the mistake and backtrack. (2nd Characteristic)
If the problem universe is predictable, we can make a plan to generate a sequence of operations that is guaranteed to lead to a solution. However in the case of problems with uncertain outcomes, we have to follow a process of plan revision as the plan is carried out while providing the necessary feedback. (3rd Characteristic)
Below is an example of the 7 AI problem characteristics being applied to solve a water jug problem.
[](https://i.stack.imgur.com/uUp1l.jpg)
Image source
<https://gtuengineeringmaterial.blogspot.com/2013/05/discuss-ai-problems-with-seven-problem_1818.html>
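The water jug problem referenced above can be solved mechanically with a plain breadth-first state-space search. A minimal sketch for the classic variant (4- and 3-litre jugs, goal: exactly 2 litres in the 4-litre jug); this is my own illustration, not taken from the linked material:

```python
from collections import deque

def successors(state, cap_a=4, cap_b=3):
    """All states reachable in one fill, empty, or pour operation."""
    a, b = state
    pour_ab = min(a, cap_b - b)   # how much fits when pouring a -> b
    pour_ba = min(b, cap_a - a)   # how much fits when pouring b -> a
    return {
        (cap_a, b), (a, cap_b),            # fill either jug
        (0, b), (a, 0),                    # empty either jug
        (a - pour_ab, b + pour_ab),        # pour a into b
        (a + pour_ba, b - pour_ba),        # pour b into a
    }

def solve(start=(0, 0), goal_a=2):
    """Breadth-first search; returns a shortest sequence of states."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == goal_a:
            return path                    # BFS: first hit is a shortest plan
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(solve())   # a shortest 6-pour plan, ending in state (2, 0) or (2, 3)
```

Because every pour can be undone and the problem universe is fully predictable, this problem scores as "easy" on several of the characteristics above, which is exactly why such a blind search suffices.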
Upvotes: 0 <issue_comment>username_2: **The List**
This list originates from <NAME>, Professor of Engineering, Computer and Information Science at the University of Michigan. In his Spring 1998 lecture notes for CIS 479, the following list was called,
>
> **"Good Problems For Artificial Intelligence."**
>
>
>
```
Decomposable to easier problems
Solution steps can be ignored or undone
Predictable Problem Universe
Good Solutions are obvious
Internally consistent knowledge base (KB)
Requires lots of knowledge or uses knowledge to constrain solutions
Interactive
```
It has since evolved into this.
```
Decomposable to smaller or easier problems
Solution steps can be ignored or undone
Predictable problem universe
Good solutions are obvious
Uses internally consistent knowledge base
Requires lots of knowledge or uses knowledge to constrain solutions
Requires periodic interaction between human and computer
```
**What it is**
His list was never intended to be a list of AI problem categories as an initial branch point for solution approaches or a, "heuristic technique designed to speed up the process of finding a satisfactory solution."
Maxim never added this list to any of his academic publications, and there are reasons why.
The list is heterogeneous. It contains methods, global characteristics, challenges, and conceptual approaches mixed into one list as if they were like elements. This is not a shortcoming for a list of, "Good problems for AI," but as a formal statement of AI problem characteristics or categories, it lacks the necessary rigor. Maxim certainly did not represent it as a, "7 AI problem characteristics," list.
>
> **It is certainly not a, "7 AI problem characteristics," list.**
>
>
>
**Are There Any Category or Characteristics Lists?**
There is no good category list for AI problems because if one created one, it would be easy to think of one of the millions of problems that human brains have solved that don't fit into any of the categories or sit on the boundaries of two or more categories.
It is conceivable to develop a problem characteristics list, and it may be inspired by Maxim's Good Problems for AI list. It is also conceivable to develop an initial approaches list. Then one might draw arrows from the characteristics in the first list to the best prospects for approaches in the second list. That would make for a good article for publication if dealt with comprehensively and rigorously.
**An Initial High Level Characteristics to Approaches List**
Here is a list of questions that an experienced AI architect may ask to elucidate high-level system requirements prior to selecting an approach.
* Is the task essentially static in that once it operates it is likely to require no significant adjustments? If this is the case, then AI may be most useful in the design, fabrication, and configuration of the system (potentially including the training of its parameters).
* If not, is the task essentially variable in a way that control theory developed in the early 20th century can adapt to the variance? If so, then AI may also be similarly useful in procurement.
* If not, then the system may possess sufficient nonlinear and temporal complexity that intelligence may be required. Then the question becomes whether the phenomenon is controllable at all. If so, then AI techniques must be employed in real time after deployment.
**Effective Approach to Architecture**
If one frames the design, fabrication, and configuration steps in isolation, the same process can be followed to determine what role AI might play, and this can be done recursively as one decomposes the overall productization of ideas down to things like the design of an A-to-D converter, or the convolution kernel size to use in a particular stage of computer vision.
As with other control system design, with AI, determine your available inputs and your desired output and apply basic engineering concepts. Thinking that engineering discipline has changed because of expert systems or artificial nets is a mistake, at least for now.
Nothing has significantly changed in control system engineering because AI and control system engineering share a common origin. We just have additional components from which we can select and additional theory to employ in design, construction, and quality control.
**Rank, Dimensionality, and Topology**
Regarding the rank and dimensions of signals, tensors, and messages within AI systems, Cartesian dimensionality is not always the correct concept to characterize the discrete qualities of internals as we approach simulations of various mental qualities of the human brain. Topology is often the key area of mathematics that most correctly models the kinds of variety we see in the human intelligence we wish to develop artificially in systems.
More interestingly, topology may be the key to developing new types of intelligence for which neither computers nor human brains are well equipped.
**References**
<http://groups.umd.umich.edu/cis/course.des/cis479/lectures/htm.zip>
Upvotes: 1
2017/01/17 <issue_start>username_0: I am currently working on an Android AI app.
I am aware of AI models to generate random sentences. However, is there an AI model for generating sarcastic sentences?<issue_comment>username_1: A simple form of sarcasm involves a direct reversal of the literal meaning of the statement, eg "Great weather we're having" (during a thunderstorm), "just what I needed" (when something goes wrong).
The problem with doing this in random sentences is that you may have no context to establish the reversal of the literal meaning.
You could possibly construct them by using a template along the lines of "Just what I needed - (random bad thing happened) today"
Or, when an outcome of a process is calculated, if it is not the desired outcome, instead of returning "mission unsuccessful" or "mission not yet complete", the AI could say "you're having a great day, aren't you? - mission unsuccessful" or "great work, genius - mission not yet complete".
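A minimal sketch of this template idea in Python (the template strings and event text are invented for illustration):

```python
import random

# Hypothetical sarcastic templates; {event} is filled with a literal status.
TEMPLATES = [
    "Just what I needed - {event} today.",
    "Great work, genius - {event}.",
    "You're having a great day, aren't you? - {event}",
]

def sarcastic_report(event: str) -> str:
    """Wrap a literal bad-news string in a randomly chosen sarcastic template."""
    return random.choice(TEMPLATES).format(event=event)

print(sarcastic_report("mission unsuccessful"))
```

In a real app, the templates would be selected based on the context of the failure rather than at random.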
Most random sentences will not be suitable for sarcasm, so it could only be applied in specific circumstances.
It is not clear from your question what the context is for these random sentences, and therefore it is not clear whether that context would be suitable for sarcasm at all.
Upvotes: 3 <issue_comment>username_2: You could also build a database of sarcastic sentences, drawn, for example, from historic plays, and then train your software to recognize the patterns of those sentences.
E.g. grammatical constructions/order, length (or circumstances building up to the sarcasm).
Use that database as a starting point, with feedback to learn, or use the above method to improve your effective output.
Another approach would be the reverse: study those databases, build equivalent output based on their coherence, and then extrapolate the output-generation procedure (in combination with other methods).
Upvotes: 3 [selected_answer]<issue_comment>username_3: Have a look at the paper [A Modular Architecture for Unsupervised Sarcasm Generation](https://www.aclweb.org/anthology/D19-1636/) (2019) by [Mishra](https://abhijitmishra.github.io/#about) et al.
In the abstract, the authors write
>
> In this paper, we propose a novel framework for sarcasm generation; the system takes a literal negative opinion as input and translates it into a sarcastic version. Our framework does not require any paired data for training.
>
>
>
[Here](https://github.com/TarunTater/sarcasm_generation)'s the reference implementation.
Upvotes: 2
2017/01/21 <issue_start>username_0: I know there are different AI tests but I'm wondering why other tests are little-known. Is the Turing test hyped? Are there any scientific reasons to prefer one test to the other?
>
> Why is the Turing test so popular?
>
>
><issue_comment>username_1: It is so popular because Turing formulated it. He was one of the first to talk about "intelligent machines" and was well connected in the scientific community from the 1940s onward, so there was ample time for his very intelligent thoughts to be distributed, for instance by <NAME>, until now. Turing's importance for computer science is shown by the name of the Turing Award. So it is clear that a lot of people have read his papers.
Upvotes: 0 <issue_comment>username_2: This isn't a complete answer, but might show some contributing factors. Perhaps most of all, the Turing Test has existed for a long time! In 2000 [*Turing Test: 50 Years Later*](https://link.springer.com/article/10.1023%2FA%3A1011288000451?LI=true) reviewed the history of the Turing Test in academia. Given this time, it has become pervasive in popular culture (well, in scifi. e.g. *Ex Machina*.) It has also garnered attention in media. Googling "Turing Test news" shows lots of stories about "AI" passing the Turing Test over the last few years. Aside from its age, it's a pretty simple idea at its core. *A human talks to a computer and decides if they think it's a human or not.* That's a pretty digestible idea! So, between its age and ease of understanding, it's a prime candidate for being popular. Other tests tend to lack these qualities (age and simplicity).
Upvotes: 0 <issue_comment>username_3: Because it is:
1. Easy to explain. (It's essentially a game, the "imitation game".)
2. Intuitively plausible as a metric.
3. The idea of "people vs. AI" is very marketable.
4. At the time, we thought that we could analyze cognition strictly in terms of input/output (per behaviorism). Cognitivism, embodied cognition, and developmental cognition are all sub-fields that have a right to challenge the Turing Test, but they weren't developed in Turing's time.
Of course, it also helps that Turing is a very important figure in AI/CS.
Upvotes: 2 <issue_comment>username_4: I agree with @colourincorrect's point that there is economic value in AI that can pass Turing tests to various degrees (chatbots, for instance), and this is the reason it is so popular.
At a deeper level, the test relates to subjectivity, and can be said to have its origins with the early Greek philosopher [Protagoras](https://en.wikipedia.org/wiki/Protagoras), who proposed that "Man is the measure of all things."
>
> πάντων [χρημάτων](http://www.perseus.tufts.edu/hopper/morph?l=xrhmata&la=greek) μέτρον ἐστὶν ἄνθρωπος, τῶν μὲν [ὄντων](http://www.perseus.tufts.edu/hopper/morph?l=ontwn&la=greek#lexicon) ὡς [ἔστιν](http://www.perseus.tufts.edu/hopper/morph?l=estin&la=greek#lexicon), τῶν δὲ οὐκ ὄντων ὡς οὐκ ἔστιν.
>
> Source: Sextus Empiricus, Adv. math. 7.60
>
>
>
The full quote may be translated as: "Of all things (used by man) the measure (of these things) is man: of the things that are, that they are, of the things that are not, that they are not."
*(Apologies as I cannot find a direct link for the Greek online. I re-translated the first part of the proposition for clarity, but lifted the second part from [Bostock](https://en.wikipedia.org/wiki/Protagoras#cite_note-12), whose Ancient Greek is undoubtedly better than mine, because it is potentially ambiguous, even in the original, and Bostock's interpretation makes good use of that ambiguity.)*
[χρημάτων](http://www.perseus.tufts.edu/hopper/morph?l=xrhmata&la=greek) "things" is distinct from [ὄντων](http://www.perseus.tufts.edu/hopper/morph?l=ontwn&la=greek#lexicon) "things", which is interpreted to mean Protagoras was speaking about things that man has a direct relationship to, such as property, tools, affairs and so forth. "A thing that one needs or uses" is listed in the LSJ.
Protagoras can unquestionably be extended to Algorithmic Intelligences, which are "things" used and interacted with by humans.
**The Turing Test exists because it not only has utility value, but because of the fundamental condition of subjectivity, the idea of which goes back to the earliest, most basic, philosophical concepts.**
Upvotes: 2 <issue_comment>username_5: The test has gained its name and fame mostly because of the person behind it, **<NAME>**. Turing, considered the father of Artificial Intelligence, was among the first who believed that even machines can act and think like humans.
Even though the test is famous, not much effort has been placed on qualifying it. The primary reason is that the test only asks the machine to act like a human being, which is not a very useful criterion, since not all human acts are rational and efficient. Go over [these threads](https://ai.stackexchange.com/questions/15/is-the-turing-test-or-any-of-its-variants-a-reliable-test-of-artificial-intell/2715#2715) for more on this.
I think it's the idea of something non-human acting like a human that creates so much fuss about the Turing Test among the general public.
Upvotes: 2
2017/01/23 <issue_start>username_0: Has any [schema-agnostic database](https://en.wikipedia.org/wiki/Schema-agnostic_databases) engine been implemented?<issue_comment>username_1: A very large one: the world wide web, with highly scaled and optimized indexing by Google.com, is the most distributed and robust schema-agnostic database known today. Without the schema-awareness Google brought by applying more rigorous information science, it was almost useless to those who did not know the URL of the target document in advance.
Schema agnosticism is another way of saying that the database cannot
* Provide meta information to the services accessing it,
* Normalize the structure using simple SQL query-insert combinations
* Proactively optimize the keys automatically as is now possible with machine learning, or
* Validate insertions
without first detecting a schema from data patterns. Moving away from structure is appealing because you can just jam data in like a librarian without a bookshelf. However, the data scientist will point out that this adds entropy, working alongside thermodynamic devolution into stochasticity.
The purpose of storing data is to be able to retrieve it. Feature extraction is an opportunity to improve structure automatically during the storage process, rather than storing documents chaotically, a trend that will not lead anywhere good for the world of IT.
Consider whether Google is successful because it organizes its data as it crawls or later as we enter key phrases. Which is the efficient sequence?
One more point, Wikipedia is a blog, and they know this, which is why they want peer review for everything now (after much of the information was added without peer review). It is a good place to find lists but not verified facts. The existence of a Wikipedia page is definitely not an indication of the value of the concept on it.
Upvotes: 1 <issue_comment>username_2: You could be interested in orthogonally [persistent](https://en.wikipedia.org/wiki/Persistence_(computer_science)) systems. You could look at them as schema-agnostic database systems whose data fits *entirely* in RAM (remember also 1980s [Smalltalk](https://en.wikipedia.org/wiki/Smalltalk) or [Lisp Machines](https://en.wikipedia.org/wiki/Lisp_machine) or [Prolog](https://en.wikipedia.org/wiki/Prolog) ones and 1994 [GrassHopper](https://www.usenix.org/legacy/publications/compsystems/1994/sum_dearle.pdf) OS) or at least in [virtual memory](https://en.wikipedia.org/wiki/Virtual_memory). With that approach, even [SBCL](https://sbcl.org/) almost fits in your wish, since it has [`save-lisp-and-die`](http://www.sbcl.org/manual/index.html#Saving-a-Core-Image). Look also into [frame](https://en.wikipedia.org/wiki/Frame_(artificial_intelligence)) based systems and [object databases](https://en.wikipedia.org/wiki/Object_database). Read also a good [*operating systems* textbook](http://pages.cs.wisc.edu/~remzi/OSTEP/) and see past discussions archived on [tunes.org](https://tunes.org/).
Shameful self promotion: My [bismon system](https://github.com/bstarynk/bismon/) (work in progress in summer 2019) [claims](http://starynkevitch.net/Basile/bismon-chariot-doc.pdf) to be a GPLv3+ orthogonally persistent system applied for static source code analysis of IoT software. But you might reuse most of it for other kind of orthogonal persistence (of [frame](https://en.wikipedia.org/wiki/Frame_(artificial_intelligence)) based data).
Upvotes: 0
2017/01/25 <issue_start>username_0: Instead of directly communicating with the AI, we would communicate with a messenger, who would relay our communications to the AI. The messenger would have no power to alter the AI's hardware or software in any way, or to communicate with anything or anyone, except relaying communications to and from the AI and the humans asking questions. The messenger could be human, or a software bot. The primary job (and only reason for existing) of the messenger would be to act as a filter, relaying back no requests for release, only the answer to the question asked. The ethics of this method are another debate.
The AI would have to be physically isolated from all outside contact, other than 8 light sensors and 8 LEDs. The messenger would operate 8 other LEDs, and receive information from 8 light sensors as well. Each AI light sensor would be hooked up to a single messenger-controlled LED, and vice versa. Through this system, the two parties could communicate via flashes of light, and since there are 8 channels, each group of flashes could signal an 8-bit character code (e.g. ASCII).<issue_comment>username_1: I would say no, due to the possibility of psychological manipulation of the messenger by the AI. Also, the LED communication constraints place severe limitations on the capabilities of the AI, as the usefulness of AI is likely predicated on its ability to learn quickly from a vast amount of information (e.g. using the internet).
In some sense you may successfully control an AI using techniques like this but the nuance of the control problem is controlling an AI without restricting its ability to solve our greatest problems.
We already knew that we could keep an AI safe inside a computer isolated from the rest of the world, but the problem fundamentally is that if we had a truly general AI, we would never *want* to keep it isolated. Is there some way to unleash it so that it is fully capable of solving our problems while simultaneously making sure that it is safe?
Upvotes: 2 <issue_comment>username_2: OK, all I/O is through the flashes. So the AI flashes the message "Launch the nuclear missiles." Does the system to which the AI is connected know how to accomplish this task?
So the flashes themselves are not sufficient to control the AI.
Upvotes: 0 <issue_comment>username_3: As long as the output of the AI affects the world, the way in which it communicates makes no fundamental difference to the control problem.
The AI might still be able, for example, to manoeuvre mankind into a situation in which only the AI can save us. It might provide a seemingly innocuous technological solution to global warming, but 50 years later it turns out that this solution caused some problem that threatens to wipe out humanity in the very short term. Suddenly, mankind is in a very bad negotiating position.
Of course, the more powerless the AI starts out, the longer this kind of scenario will take, but the premise of superintelligence is, that we cannot rule out hidden long term agendas behind even a few bits of output.
Upvotes: 3 [selected_answer]
2017/01/25 <issue_start>username_0: OpenAI's Universe utilizes RL algorithms. I also know that Q-learning has been used to solve some games.
Are there any other ML approaches to solve games? For example, could we use genetic algorithms to develop agents that solve games?<issue_comment>username_1: As I see it, it all comes down to game theory, which can be said to form the foundation of successful decision making, and is particularly useful in a context, such as computing, where all parameters can be defined. (Where it runs into trouble is with the aggregate complexity of the parameters per the [combinatorial explosion](http://cswww.essex.ac.uk/CSP/ComputationalFinanceTeaching/CombinatorialExplosion.html), although Machine Learning has recently been validated as a method of managing intractability specifically in the context of games.)
You might want to check out [Playing Games with Genetic Algorithms](https://core.ac.uk/download/pdf/205529758.pdf) and [Evolutionary game theory](https://en.wikipedia.org/wiki/Evolutionary_game_theory).
Upvotes: 4 [selected_answer]<issue_comment>username_2: Yes, evolutionary algorithms (EAs) can be used to solve/play games too. For example, OpenAI has used [evolution strategies](http://rnowotniak.kis.p.lodz.pl/quantum/Essentials%20of%20Metaheuristics.pdf#page=33) (a subset of EAs that uses fixed-length real-valued vectors and self-adaptive mutation rates) to play Atari games. In [this blog post](https://openai.com/blog/evolution-strategies/), they write
>
> We've discovered that evolution strategies (ES), an optimization technique that's been known for decades, rivals the performance of standard reinforcement learning (RL) techniques on modern RL benchmarks (e.g. Atari/MuJoCo), while overcoming many of RL's inconveniences.
>
>
>
There is also the related paper [Evolution Strategies as a Scalable Alternative to Reinforcement Learning](https://arxiv.org/abs/1703.03864) (2017) and [code](https://github.com/openai/evolution-strategies-starter).
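As a sketch of what the ES update in that paper looks like, here is a minimal, NumPy-only version applied to a toy "game" where the reward is highest when the parameters match a hidden target (the population size, noise scale, and learning rate are illustrative choices, not the paper's values):

```python
import numpy as np

def evolution_strategy(f, theta, npop=50, sigma=0.1, alpha=0.01, iters=300):
    """Minimal ES loop: estimate a search gradient from the rewards of
    Gaussian perturbations of the current parameter vector, then ascend it."""
    rng = np.random.default_rng(0)
    for _ in range(iters):
        eps = rng.standard_normal((npop, theta.size))           # perturbations
        rewards = np.array([f(theta + sigma * e) for e in eps])
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalize
        theta = theta + alpha / (npop * sigma) * eps.T @ adv    # ascent step
    return theta

# Toy "game": reward peaks when the parameter vector equals the hidden target.
target = np.array([0.5, -0.3, 0.8])
best = evolution_strategy(lambda w: -np.sum((w - target) ** 2), np.zeros(3))
```

After a few hundred iterations, `best` lands close to `target`, despite the algorithm never seeing a gradient of the reward function itself.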
Upvotes: 0
2017/01/26 <issue_start>username_0: I am working on an implementation of the backpropagation algorithm. What I have implemented so far seems to work, but I can't be sure that the algorithm is well implemented. Here is what I have noticed during training tests of my network:
Specification of the implementation:
* A data set containing almost 100,000 rows, each with 3 variables as input and the sine of the sum of those three variables as the expected output.
* The network has 7 layers; all layers use the sigmoid activation function.
When I run the back propagation training process:
* The minimum error cost is found at the fourth iteration (**the minimum error cost is 140; is that normal? I was expecting much less than that**)
* After the fourth iteration, the error cost starts increasing (**I don't know whether that is normal or not**)<issue_comment>username_1: Actually, the implementation was correct.
The source of the problem that caused the big error and really slow learning was the architecture of the neural network itself: the ANN has 7 hidden layers, which causes the vanishing gradient problem.
When I decreased the number of layers to 3, the error cost was greatly reduced, and the learning process was faster.
Another common solution is to use ReLU, ELU, or SELU in the hidden layers of the neural network instead of the sigmoid function.
Upvotes: 3 [selected_answer]<issue_comment>username_2: One thing that you should know is that the sigmoid function limits the output to a value between 0 and 1, which means that using a lot of hidden layers will quickly lead to a vanishing gradient.
Try the ReLU activation function: for positive inputs it passes activations (and hence gradients) through unchanged, rather than squashing them.
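A quick back-of-envelope sketch of why depth matters here (not tied to the asker's exact network): the sigmoid's derivative is at most 0.25, so a backpropagated gradient shrinks by at least a factor of 4 for every sigmoid layer it crosses.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)   # peaks at 0.25, when x = 0

# Even in the best case (every unit sitting at x = 0), the gradient reaching
# the first layer is scaled by at most 0.25 per sigmoid layer it passes.
for depth in (1, 3, 7):
    print(depth, "layers -> gradient scaled by at most", sigmoid_prime(0.0) ** depth)
```

With 7 layers the best-case scale factor is 0.25^7, on the order of 1e-5, which matches the very slow learning observed in the question.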
Upvotes: 2 <issue_comment>username_3: I believe the best way to do this is by using the **numerical gradient**. To understand the concept, we need to look at the definition of derivatives using limits:
[](https://i.stack.imgur.com/9Qanu.png)
It means that, when you don't know how to derive some formula (or you just don't want to), you can approximate the derivative by computing the output for a small change in the input, subtracting the original result (no change), and normalizing by that change.
Example:
We know the derivative of f(x) = x^2 is f'(x) = 2x. But let's suppose we don't, and we are using x = 3 and h = 0.001 (h tends to zero in fact):
f(3 + 0.001) = (3 + 0.001)^2 = 9.006 (approximately)
f(3) = 3^2 = 9
Thus,
(9.006 - 9) / 0.001 = 6
which is (approximately) equal to f'(3) = 2\*3 = 6.
In practice, if you want to know if your backpropagation is correct,
1. Pass a single example (x1) through your network and compute the output (o1). Compute the gradient with respect to x1 (d1).
2. Then add a small value (h) to the input, pass it through the network again, and compute the new output (o2).
3. Compute (o2 - o1) / h. It must be close enough to d1.
That's it. It's called **gradient checking**. I hope it helps.
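Those steps can be wrapped in a small helper. This sketch uses a central difference, (f(x+h) - f(x-h)) / (2h), which is slightly more accurate than the one-sided version described above, and checks it on f(x) = sum(x^2), whose analytic gradient is 2x:

```python
import numpy as np

def numerical_gradient(f, x, h=1e-5):
    """Central-difference approximation of the gradient of scalar-valued f."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step.flat[i] = h              # perturb one coordinate at a time
        grad.flat[i] = (f(x + step) - f(x - step)) / (2 * h)
    return grad

x = np.array([3.0, -1.5, 0.5])
approx = numerical_gradient(lambda v: np.sum(v ** 2), x)
exact = 2 * x                          # analytic gradient of sum(v^2)
print(np.max(np.abs(approx - exact)))  # tiny: the check passes
```

To check a backpropagation implementation, `f` would be the network's loss as a function of one weight matrix, and `exact` the gradient your backward pass computes.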
Upvotes: 2
2017/01/27 <issue_start>username_0: If I have 2 statements, say $A$ and $B$, from which I formed 2 formulae:
1. $f\_1: (\lnot A) \land (\lnot B)$
2. $f\_2: (\lnot A) \lor (\lnot B)$
Are $f\_1$ and $f\_2$ equivalent?<issue_comment>username_1: $f\_1 \vdash f\_2$, if and only if $f\_2$ must be true if we assume $f\_1$ to be true.
Similarly, $f\_2 \vdash f\_1$, if and only if $f\_1$ must be true if we assume $f\_2$ to be true.
Logically, by taking any value for $A$ or $B$, from the domain $\{1, 0 \}$, one could verify that $f\_1 \vdash f\_2$, because $f\_2$ is true whenever $f\_1$ is true (for example, when both $A = 0$ and $B = 0$).
However, $f\_2 \vdash f\_1$ is not true. As, in two cases, $f\_2$ is true (e.g. $A = 0$ and $B=1$, or vice-versa), but $f\_1$ is not true.
Upvotes: 1 [selected_answer]<issue_comment>username_2: One way of verifying whether two boolean expressions are equivalent is to assign all possible values to all variables and compare the results.
| A | B | f1 | f2 |
| --- | --- | --- | --- |
| T | T | F | F |
| T | F | F | T |
| F | T | F | T |
| F | F | T | T |
We can see `(F, F, F, T)` does not equal `(F, T, T, T)`; for example, for the assignment `(A, B) = (T, F)` we get the result `(f1, f2) = (F, T)`, meaning `f1` $\ne$ `f2`.
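The same truth-table check is easy to automate; any assignment on which the two formulas disagree is a counterexample to equivalence:

```python
from itertools import product

f1 = lambda a, b: (not a) and (not b)
f2 = lambda a, b: (not a) or (not b)

# Collect every assignment on which f1 and f2 disagree.
counterexamples = [(a, b)
                   for a, b in product([True, False], repeat=2)
                   if f1(a, b) != f2(a, b)]
print(counterexamples)  # [(True, False), (False, True)] -> not equivalent
```

An empty list would mean the formulas agree on all assignments, i.e. they are equivalent.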
Upvotes: 2 <issue_comment>username_3: I find it easy to get a quick intuition of the truth value of logical statements involving negations by converting them wherever possible.
So assume by way of contradiction that $\textit{f}\_1 \iff \textit{f}\_2$, then the two-way contrapositive $\neg \textit{f}\_1 \iff \neg \textit{f}\_2$ also holds, hence $A \lor B \iff A \land B$ (De Morgan's law). Since $A \lor B \implies A \land B$ is easily confirmed false (by plugging in A=True and B=False) this is a contradiction. $\Box$
Upvotes: 0
2017/01/28 <issue_start>username_0: How is Bayes' Theorem used in artificial intelligence and machine learning?
As a high school student, I will be writing an essay about it, and I want to be able to explain Bayes' Theorem, its general use, and how it is used in AI or ML.<issue_comment>username_1: Since you are a high school student, I will try to express it simply. It is a problem for a machine to make a decision if you haven't given it that information before. You would have to think of every case while programming, but sometimes there are far too many cases; this is where data mining, neural networks, fuzzy logic, etc. are used within AI. It saves your time: the system learns by itself from enough examples given at the beginning and then decides by itself.
[Here in this link](http://web.cs.hacettepe.edu.tr/~ilyas/Courses/BIL712/lec04-BayesianLearning.pdf) you can find an article about Bayesian learning. Example on p.33 is what you need I guess.
Upvotes: 1 <issue_comment>username_2: Bayes' theorem states the probability of some event B occurring given prior knowledge of another event A, when B is dependent on event A (even partially).
A real-world application example is weather forecasting. Naive Bayes is a powerful algorithm for predictive modeling in weather forecasting. The temperature of a place depends on the pressure at that place, the humidity percentage, the speed and direction of the wind, previous temperature records, turbulence in different atmospheric layers, and many other things. So when you have a certain kind of data, you process it with certain kinds of algorithms to predict one particular result (or the future). The algorithms employed rely heavily on Bayesian networks and the theorem.
The following paragraph is an introduction to Bayesian networks, given in the book *Artificial Intelligence: A Modern Approach*:
>
> Bayesian network formalism was invented to allow efficient representation of, and rigorous reasoning with, uncertain knowledge. This approach largely overcomes many problems of the probabilistic reasoning systems of the 1960s and 70s; it now dominates AI research on uncertain reasoning and expert systems. The approach allows for learning from experience, and it combines the best of classical AI and neural nets.
>
>
>
There are many other applications, especially in medical science, like predicting a particular disease based on the symptoms and physical condition of the patient. There are many algorithms currently in use that are based on this theorem, like binary and multi-class classifiers, for example, email spam filters.
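The spam-filter case makes the arithmetic of Bayes' theorem concrete. The probabilities below are invented purely for illustration:

```python
# P(spam | word) = P(word | spam) * P(spam) / P(word)   (Bayes' theorem)
p_spam = 0.2                # prior: 20% of all mail is spam
p_word_given_spam = 0.5     # the word "free" appears in half of spam mail
p_word_given_ham = 0.05     # ...and in only 5% of legitimate mail

# Total probability that a random message contains the word.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))  # 0.714
```

Seeing the single word shifts the probability of spam from the 20% prior up to about 71%, which is exactly the "belief update" described in the other answers.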
There is much more to this topic. I have added some links below that might help; let me know if you need any other kind of help.
Helpful Links
1. [First](http://machinelearningmastery.com/naive-bayes-for-machine-learning/)
2. [Second](https://en.wikipedia.org/wiki/Naive_Bayes_classifier)
Upvotes: 2 <issue_comment>username_3: If you want to understand in one line how it's used in AI, I would say: how you update your beliefs according to new data/information is calculated by Bayes' theorem.
Bayes' theorem calculates the probability of something happening, given that some other thing has already happened. In this scenario, we already have a prior (a prior belief, or the probability of the event occurring without any new information). Now, we simulate the event again and again and keep updating our probability of the event occurring with the information we have collected.
Upvotes: 1
2017/01/29 <issue_start>username_0: Usually, when performing linear regression predictions and gradient descent, the level of error for a particular line is measured by the sum of the squared-distance values.
Why distance ***squared***?
In most of the explanations I heard, they claim that:
* the function itself does not matter
* the result should be positive so positive and negative deviations are still counted
However, an `abs()` approach would still work. And isn't it inconvenient that squaring *shrinks* the error contribution for distances smaller than 1?
I'm pretty sure someone must have considered this already -- so why is distance squared the most used approach to linear regression?<issue_comment>username_1: The squared form is sometimes called the [Euclidean norm or L2 norm](https://en.wikipedia.org/wiki/Norm_(mathematics)). One of its very helpful properties is that it has an easily defined derivative, which can be used in mathematical analysis and translated fairly easily into code.
Intuitively, it is thought to be advantageous to exaggerate the differences according to the value of the error, which squaring does. You might also use the powers 3 or 4 (taking an absolute value first in the odd case, to keep errors positive), but the derivative is more complex.
A number of different norms may be used, according to the particular circumstances of the problem at hand.
Upvotes: 3 <issue_comment>username_2: One justification comes from the central limit theorem. If the noise in your data is the result of the sum of many independent effects, then it will tend to be normally distributed. And normally distributed means that the likelihood of the data is inversely proportional to the exponential of the **square** of the distance to the mean.
In other words, minimizing the sum of squares of the distance to the mean amounts to finding the most likely value for the line assuming that the error is normally distributed. This is very often a reasonable assumption, but it is of course not always true.
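A small numerical sketch of how the choice of error function changes the "best" constant fit: the sum of squared distances is minimized at the mean (the most likely value under Gaussian noise), while the sum of absolute distances is minimized at the median.

```python
import numpy as np

data = np.array([1.0, 2.0, 2.0, 3.0, 10.0])   # note the outlier at 10
candidates = np.linspace(0.0, 10.0, 10001)    # grid search over constants

best_sq = min(candidates, key=lambda c: np.sum((data - c) ** 2))
best_abs = min(candidates, key=lambda c: np.sum(np.abs(data - c)))

print(best_sq, np.mean(data))     # ~3.6 for both: squared loss -> mean
print(best_abs, np.median(data))  # ~2.0 for both: absolute loss -> median
```

The outlier drags the squared-loss answer much further than the absolute-loss one, which illustrates why the error assumption matters in practice.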
Upvotes: 2 <issue_comment>username_3: One justification is that under homoscedasticity the L2 norm produces the minimum variance unbiased estimator (MVUE), see Gauss-Markov Theorem. It means that the fitted values are the conditional expectations given the explanatory variables which is in many cases a nice property. Further it is the best estimator if the previous property is desirable.
As a response to the claim that the function itself does not matter: different functions give solutions with very different properties, and a lot of effort has gone into finding appropriate penalty functions; see for example Ridge regression and LASSO. The penalty function does matter.
edit: In response to your question regarding distances lower than 1, nothing "goes wrong" when the distances are smaller than 1. We always want to minimize the distance and the squared loss does so everywhere.
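A tiny illustration of that point about distances below 1: squaring shrinks the numeric value but preserves the ordering of the errors, so the minimizer is unchanged.

```python
# Squaring shrinks numbers below 1 but never reorders them, because
# x**2 is strictly increasing for x >= 0 -- so the best fit (smallest
# total error) is the same whether errors sit above or below 1.
errors = [0.2, 0.5, 0.9, 1.0, 1.5, 3.0]
squared = [e ** 2 for e in errors]
```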
Upvotes: 2 <issue_comment>username_4: It derives from maximum likelihood estimation, wherein we maximise the log-likelihood function. For detailed insight, see this lecture: [The Method of Maximum Likelihood for Simple Linear Regression](http://www.stat.cmu.edu/~cshalizi/mreg/15/lectures/06/lecture-06.pdf).
Upvotes: 2 <issue_comment>username_5: **Brief Background**
The error metric (an appropriate term used in the question title) quantifies the fitness of a linear or nonlinear model.
It aggregates individual errors across a set of observations (instances of training data). In typical use, an error function is applied to the difference between the dependent variable vector predicted by the model and empirical observations. These differences are calculated for each observation and then summed. [1]
**Why Distance Squared?**
Legendre, who first published the sum of squares method for gauging fitness of the model (Paris, 1805), stated correctly that squaring before summing is convenient. Why did he write that?
One could use the absolute value of the error or the absolute value of its cube, but the discontinuity of the derivative of the absolute value makes the function NOT smooth. Functions that are NOT smooth create unnecessary difficulties when employing linear algebra to derive closed forms (simple algebraic expressions).
Closed forms are convenient when one wants to quickly and easily calculate slope and intercept in linear regression. [2]
**Gradient Descent**
Gradient descent is generally employed for nonlinear regression. Since closed forms cannot be derived for many nonlinear models, iteration becomes the dominant methodology for validating or tuning the model.
An intuitive understanding of gradient descent can be gained by considering a thirsty, blind person looking for water on land solely by taking calculated steps. (In software, these steps are iterations.) The blind person can only sense the direction of the altitude gradient (direction of slope) with their feet to descend to a local minimum altitude. [3]
Anyone stating that, "The function itself does not matter," in relation to the usual applications of gradient descent would be a dangerous choice of guide for a blind hiking expedition. For instance, the reciprocal of distance as an error function would likely lead to the dehydration and death of the hikers.
The selection criteria for error metrics are important if one is interested in the speed of convergence on a solution or whether the solution will ever be found. [5]
Since the gradient of a plane (linear surface) is a constant, the use of gradient descent for linear models is wasteful. The blind person need not continue to sample the angle of their foot.
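For completeness, a minimal gradient-descent sketch on the squared-error loss for a 1-D linear model (toy data; as noted, a closed form would of course be cheaper for the linear case):

```python
import numpy as np

# Toy data lying exactly on the line y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

a, b, lr = 0.0, 0.0, 0.05
for _ in range(3000):
    err = a * x + b - y
    # Gradients of (1/2) * mean squared error with respect to a and b:
    a -= lr * np.mean(err * x)
    b -= lr * np.mean(err)
```

The smooth, everywhere-defined gradient of the squared loss is exactly what makes each step of this loop well behaved.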
**Sign of the Error Metric**
The statement, "The result should be positive so positive and negative deviations are still counted," is incorrect. [4]
**Effectiveness of Error Metrics in Relation to 1.0**
Because the second derivative of the least squares error metric with respect to an error is constant at any given point, the least squares error metric converges similarly above and below 1.0.
**Notes**
[1] The dimensions of a model's independent and dependent variable vectors are, in machine learning, conventionally called features and labels respectively.
[2] Another smooth function, such as the error to the fourth power would also lead to closed forms for slope and intercept, although they would produce slightly different results if the correlation coefficient is non-zero.
[3] Gradient descent algorithms in general do not guarantee finding a global minimum. In the example given, it would be quite possible to miss a small hole with water in it. Depending on the surface features (terrain), sensing the angle of the foot (determining gradient) can be counterproductive. The search can become chaotic. To extend the intuitive analogy, consider searching for the bottom of the stairs in Escher's Relativity lithograph.
[4] For an error metric to be likely to converge and therefore useful in regression regardless of the direction of the error the sign of the metric is irrelevant. It is each of the set of partial derivatives of the error metric with respect to the corresponding set of distances between the model predictions and observations that should be positive to regress omnidirectionally. It sounds more complicated, but even this corrected statement is an oversimplification.
[5] The error metric in gradient descent applications is often calculated using a convex function to avoid overshoot and possible oscillation and non-convergence. In some cases, error functions other than the sum of squares are used. The choice of function has to do with a number of factors:
* The model to which the data is to be fit
* Factors expected to affect or actually affecting deviations of
the observations (training data) from the model
* Computational resources relative to the size of the data set
Upvotes: 3 [selected_answer] |
2017/02/02 | 2,370 | 9,510 | <issue_start>username_0: I define Artificial Life as a "simulation" or "copy" of life. However, should it be considered a simulation or copy?
If one had motivation and money, someone could theoretically create evolving computers, with a program that allows mutation OR simply a "simulated" environment with "simulated" organisms.
The computer (or "simulated" organism) would have the ability to reproduce, grow, and take in energy. What if the life evolved to have intelligence? Currently, there are some relatively limited programs that simulate life, but most of them are heavily simplistic. Are they life?
When should something be called life?<issue_comment>username_1: [Wikipedia describes life](https://en.wikipedia.org/wiki/Life) as a characteristic of "physical entities having biological processes". [The same source](https://en.wikipedia.org/wiki/Simulation) also describes a simulation as "the imitation of the operation of a real-world process or system over time." If a digital neural net was to listen to me prattle on for long enough it could learn to speak as if it were me. It would have my knowledge and limitations but its headaches would be quite different from mine. It would never have a toothache. But you could put it in a [Searle Chinese Room](https://en.wikipedia.org/wiki/Chinese_room), and you could speak to it and it would sound exactly as if it were me long after I am dead. It has my "character" which is what my friends would recognize about me.
According to the definition of life it is not alive because it does not have biological processes. It is a simulation because it emulates what I would have said. It cannot be a copy because a digital box is not biologic.
Now let's give this simulation a biological nose so that it can smell. And maybe two eyes and ears. We continue this process until most of the simulation is equipped with biological parts which function together. Whatever it is is now able to come out of the Chinese Room and talk to you. By golly, it looks and sounds exactly like me, but I died a long time ago. Have I been brought back?
My suggestion is that a perfect copy of me would not be possible simply through training due to the level of detail required, but that the close copy would be alive. A critical point would be that there would have to be a fatal link somewhere which would cause "death". You could always create a new close copy, but not an exact copy.
Upvotes: 2 <issue_comment>username_2: It wouldn't be considered alive if it doesn't have vital functions, such as nutrition, relation with the environment, and reproduction. While the first is easy (use a battery) and the second is the one we are developing right now (basically, the intelligence part of an AI), giving programming skills to an AI, aka the ability to reproduce, isn't widely considered a good idea, as many science fiction writers can tell you.
Upvotes: 2 <issue_comment>username_3: I like to take an "[animist](https://en.wikipedia.org/wiki/Animism)" approach. *(It has been suggested to me that part of the reason Japanese designs are so effective is because of the cultural affinity for the concept per the Shinto tradition. For instance, the thing where people put little eyes on everything;)*
I like to think of how my dog, who is terrified of the vacuum cleaner, would regard one of the recent [Boston Dynamics](http://www.theverge.com/circuitbreaker/2017/2/1/14468126/boston-dynamics-new-wheeled-robot-handle) creations. My guess is the dog wouldn't find much use in the distinction that the robot is an artifact as opposed to a biological entity.
I tend to take a deterministic, mechanical approach to reality. Sure things get fuzzy down at the quantum level, but even that may simply be a factor of inadequate measurement capability and the sheer complexity of quantum mechanics, which seems fundamentally beyond the grasp of even the greatest minds, when they're being honest about it.
I don't see much of a distinction between simple organisms and [cellular automata](https://en.wikipedia.org/wiki/Cellular_automaton), except that the former is part of a biological food chain. There is a valid hypothesis that if you had a big enough computer, Conway's Game of Life could independently develop "intelligence".
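For readers unfamiliar with the term, here is a minimal sketch of a cellular automaton step (Conway's Game of Life), with live cells stored as a set of coordinates:

```python
from collections import Counter

def step(live):
    """One Game of Life step over a set of live (row, col) cells."""
    # Count how many live neighbours each cell has.
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(1, 0), (1, 1), (1, 2)}  # the classic period-2 oscillator
```

Three rules of local arithmetic, yet large grids of this system support self-copying patterns, which is exactly why it keeps turning up in artificial-life discussions.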
Self-replication would certainly seem to be a requirement of biological life that can be extended to artificial life. Possibly the true distinction of "artificial" is merely that it is creation of a functional system by an extra-species source, whether the creation be "biological" or "mechanical" in nature. *(i.e. we can hack genes now, not just computer code.)*
Upvotes: 2 <issue_comment>username_4: If you read Steven Levy's book, *Artificial Life*,you will find, as I did,
[](https://i.stack.imgur.com/43k2B.jpg)
the distinction between biological and "artificial" life blurred. If you think about it, *what exactly is "life", anyway?* **A set of complex systems with emergent behavior capable of evolution and adaptation.**
A prototypical biologist may not define life that way. Indeed, not being focused on or concerned with the computational aspect, he would define it in a way that narrows it down to biological life.
<NAME> mentions the concept of *luggage words*, and I myself came up with the notion of *mirage concepts*. With the former, that which we don't understand gets lumped into the word. With the latter, when you take a "mysterious" concept apart, it vanishes like a mirage does as you get closer.
So *what is life*? If we look at a "living" organism, we'd call that "life". If we remove a single cell from that organism, we'd still call that "life". But what if we remove a single organelle like, say, a ribosome? Lysosome? Contractile vacuole? Endoplasmic reticulum? Is that still "life"? What if we remove a macromolecule from that? Is that still "life"?
As you see, it all becomes *very* murky, and I do this on purpose to illustrate just how *arbitrary* the very concept of "life" is.
So I think my definition is a good one, broad enough to encompass both the in-silico and the organic versions. It speaks to algorithms, robots, viruses -- yes, *both* computer and organic... anything that has *complexity* and the ability to *adapt* and *evolve*.
Upvotes: 3 <issue_comment>username_5: "Life" is a definition humans use to classify objects according to the types of behavior humans perceive as unique to living creatures.
Scientists and philosophers tend to define something as "alive" if it manifests some specific properties found in living organisms, such as self-replication, adaptation to the environment, homeostasis and capability to exploit matter and energy for its own existence and functioning. With that being said, there is no one accepted definition of life, and it is doubtful that such a definition is possible.
As to artificial life (ALife), the book "Biological Computation" (2011) by <NAME> and <NAME> states that
>
> it is common among researchers to distinguish between two types of approaches to artificial life:
>
>
>
>
> 1. the **strong ALife** approach, which postulates that virtual "creatures" on a computer screen can be considered to be alive if they fulfill the definition of life used by the researchers; and
>
>
>
>
> 2. the **weak ALife** approach, whereby computerized creatures displaying characteristics of living systems are only models used in research and are not really alive
>
>
>
While this is not solving the problem of definition of life, it might give more context on the subject in relation to artificial life.
Upvotes: 3 <issue_comment>username_6: Imho, it **is** life.
Example: consider the possibility that we synthesized the complete DNA of a human being from scratch, with zero atoms from another human, and grew said human in a lab. Most (myself included) would agree that creature is alive.
Although there are many opinions that differ, my own is that there is no absolute line to draw between something that is alive and something that is not alive. A human is alive. But is a single bacterial cell, or a single cell from your own body? They reproduce, they eat, etc., so yes, they are alive. They are not like a dog or a cat, however. In fact, a bacterial colony in a pool of water, giving off a yellow or brown color, can be mistaken for a mineral or mud. It is only when you look closer that you see it is actually life. What about a biological virus? It is not made of cells. Some viruses do not even have DNA, only RNA. Most would agree it has life. But it does not really seem to be as alive as, say, a shark or a giraffe. Many people do not think a car is alive. Yet cars evolve. They move, they "eat."
Life is simply a way we define things around us. A much more useful and definitive way to categorize life, I think, would be to utilize a continuum instead of an all-or-nothing approach. Rocks would go on the "nonliving" end. Intelligent, multicellular, self-aware entities could perhaps be on the other end as "fully alive."
Other entities can go in between. As for something such as AI, I would propose adding a z-axis, to make a 3-dimensional continuum, allowing a self-aware, intelligent entity not made of cells to fit comfortably with humans without forcing an all-or-nothing choice.
PS: though I came to this conclusion myself, I have a suspicion someone smarter than me has already written about such an idea. If anyone feels like educating me, I'd love to hear it.
Upvotes: 2 |
2017/02/02 | 460 | 1,758 | <issue_start>username_0: Could you give examples of affordable programmable devices that could be used in university classes to teach students about A.I. and demonstrate it?
The devices are expected to do some form of self-learning, pattern recognition, or any other features of A.I., and to be programmable or customizable.<issue_comment>username_1: [LEGO Mindstorms](https://www.lego.com/en-us/mindstorms) is widely used to demonstrate AI in schools and universities [[1](https://pdfs.semanticscholar.org/b515/dcdb633e2de2101a1eabfc53ecf84f5fcd39.pdf), [2](http://www.legoengineering.com/)]. With LEGO as basis, you are very flexible. You can build what you want very easily. The AI programs can be written in different languages from very easy graphical once to Lisp and C++. The newest version has an SD Card drive, USB interface and a powerful ARM processor. You can use four motors and four sensors directly. There exists touch, sound, sonar, gyro, infrared and color sensors. There is also a big community which provides you with a lot of ideas, hardware and programs [[3](https://www.hackster.io/mindstorms)].
Upvotes: 2 <issue_comment>username_2: Why don't you try Android phones with TensorFlow? See the [TensorFlow Android Camera Demo](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android).
You can build a simple image or text classification neural network to demonstrate AI.
Upvotes: 0 <issue_comment>username_3: To start you could use one of the devices mentioned before, and after to make some more powerful and complicated experiments (and also a little bit more expansive) you could move to [Jetson TK1](http://rads.stackoverflow.com/amzn/click/B00L7AWOEC) which let you run heavier Neural Network (like CNN).
Upvotes: 0 |
2017/02/03 | 875 | 3,587 | <issue_start>username_0: I am researching Natural Language Processing (NLP) to develop an NL Question Answering system. The answering part is already done. So processing the question remains, along with the questions regarding the algorithms.
The final product should allow the user to ask a question in NL. The question then gets translated to an [Multidimensional Expressions (MDX) query](https://en.wikipedia.org/wiki/MultiDimensional_eXpressions), which generates a script regarding the dimensions of the cube.
How can I translate a natural language question to an MDX query?
The outcome of the question is in the form of a calculation. For example
>
> How many declarations were done by employee 1?
>
>
>
or
>
> Give me the quantities for Sales.
>
>
><issue_comment>username_1: You can use a component library called [Apache OpenNLP](https://opennlp.apache.org) to help you implement a natural-language query builder in your application (*the question part*). You can develop a module using OpenNLP's existing components, such as entity extraction, chunking, and parsing.
According to [wikipedia source](https://en.wikipedia.org/wiki/Question_answering) ; it points out that as of 2001, *Q&A* applications typically includes "a question classifier module which determines the type of question and the type of answer," so "a multiagent question and answering architecture has been proposed, where each domain [or variable] is represented by an agent which tries to answer questions, taking into account its specific knowledge."
But it still needs some effort to build [NL generators inline with databases](https://deepblue.lib.umich.edu/bitstream/handle/2027.42/138709/lifei_1.pdf?sequence=1&isAllowed=y) for the answer query. Here is also a link to help you work with [Compositional Semantic Parsing on Semi-Structured Tables](https://nlp.stanford.edu/software/sempre/wikitable).
Hope this can give you some insight.
Upvotes: 3 <issue_comment>username_2: This is a hard problem to solve, and the best approach depends very much on the scope of your task. If you have a small database table with a limited number of columns, you might get away with some basic pattern matching techniques. If it is more complex than that, you might have to do a full-scale syntactic analysis of the question. This also depends on the variations of possible question types.
Assuming a limited set of variables and variants, you could set up something like:
>
> How many X did Y produce/How many X were done by Y/What is the number of X for Y
>
>
>
where you have two variables to fill from the pattern, which you then use in your query:
>
> select sum(X) where producer == Y
>
>
>
(Or whatever format your query has).
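A minimal sketch of this pattern-matching approach in Python; the regexes, cube name, and dimension names below are invented for illustration, not real MDX metadata:

```python
import re

# Each entry maps a regex with named groups onto an MDX-ish query template.
PATTERNS = [
    (re.compile(r"how many (?P<measure>\w+) were done by (?P<member>[\w\s]+)\??", re.I),
     "SELECT [Measures].[{measure}] ON COLUMNS FROM [Cube] WHERE ([Employee].[{member}])"),
    (re.compile(r"give me the (?P<measure>\w+) for (?P<member>[\w\s]+)\.?", re.I),
     "SELECT [Measures].[{measure}] ON COLUMNS FROM [Cube] WHERE ([Dimension].[{member}])"),
]

def to_mdx(question):
    """Return a query for the first pattern that matches, else None."""
    for pattern, template in PATTERNS:
        m = pattern.fullmatch(question.strip())
        if m:
            return template.format(**{k: v.strip() for k, v in m.groupdict().items()})
    return None
```

Expanding the system is then just a matter of appending (regex, template) pairs, which is exactly the maintainability advantage described below.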
The advantage of this is that you don't need to be a linguistics expert to maintain/expand the system, and you can just add more patterns to it if necessary. You might have to map some terms onto synonyms to get the right column headings/labels out of it. But this approach is not very hard to implement, and you should have a basic system up and running fairly quickly. You then have to see/test what questions your users are asking, and expand the pattern inventory accordingly.
The disadvantage is that you might end up with a long list of patterns, and there could be some which are conflicting, ie the same pattern with different variables will ask for a different kind of result. If that turns out to be a problem, you might have to look for a more powerful approach.
Upvotes: 0 |
2017/02/07 | 921 | 3,881 | <issue_start>username_0: For a class, I'm reading Brooks' "[Intelligence without representation](https://homeostasis.scs.carleton.ca/%7Esoma/adapsec/readings/brooks1991-representation.pdf)". The introduction is dedicated to slating *representation* as a focus for AI development.
I've read that representation is the problem of representing information symbolically, in time for it to be useful. It's related to the reasoning problem, which is about reasoning about symbolic information.
But I don't feel like I really understand it at any practical level. I think the idea is that when an agent is given a problem, it must describe this problem in some internal manner that is efficient and accurately describes the problem. This can then also be used to describe the primitive actions that can be taken to reach the solution. I think this then relates to Logic Programming (e.g. Prolog?).
Is my understanding of *representation* correct? Just what does representation look like in practice? Are there any open-source codebases that might make a good example?
2017/02/10 | 336 | 1,439 | <issue_start>username_0: I have users' reports about an accident. I want to know how to make sure that the number of reports is big enough to take that accident as a true accident and not spam.
My idea is to consider a minimum number of reports in a specific time interval, for example 4 reports in 20 minutes are good enough to believe the existence of that accident.
My question is how can I choose the minimum number of reports and that time interval? Is there some logic to make that decision?<issue_comment>username_1: It's a trust-level problem, so your judgment is the best to decide what would be rhe threshold.
You can help your decision making by trying and visualize how many accident (in %) you've left out... this can be an indicator of good threshold. You don't want to throw too many of them. But only you know what is good and bad in this case
Upvotes: 0 <issue_comment>username_2: If the only feature you're classifying on is the number of users making a given report, then this isn't really much to do with AI/ML. Just pick a number based on your subjective judgment and go with it.
OTOH, if you can include details of the report itself (as well as the number of reporters), I think you might be able to build a bayesian classifier that would be useful. If you could consider location, weather, time of day, number of reporters, etc., it seems like you might be able to get something useful put together.
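One way to sanity-check whatever number you pick: under an assumed rate of spurious reports, a Poisson model gives the probability that k of them coincide in one time window purely by chance. The rate below is an invented assumption, not derived from real data:

```python
from math import exp, factorial

def p_at_least(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(exp(-lam) * lam ** i / factorial(i) for i in range(k))

lam = 0.5   # assumed spurious reports per 20-minute window (a guess)
# If that rate is realistic, 4 coincidental spam reports in one window
# is already a very unlikely event, supporting a threshold around 4.
false_alarm_rate = p_at_least(4, lam)
```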
Upvotes: 1 |
2017/02/11 | 2,815 | 9,947 | <issue_start>username_0: I want to create a network to predict the break up of poetry lines. The program would receive as input an unbroken poem, and would output the poem broken into lines.
For example, an unbroken poem could be
```
And then the day came, when the risk to remain tight in a bud was more painful
```
which should be converted to
```
And then the day came,
when the risk
to remain tight
in a bud
was more painful
than the risk
it took
to Blossom.
```
How should I go about this? I have been using classifiers for various tasks, but this seems to be a different type of task.
I'm thinking of it as an array of words (doesn't matter how they're represented for now), which would look like
```
[6, 32, 60, 203, 40, 50, 60, 230 ...]
```
and needs to map into an array representing line breaks
```
[0, 0, 1, 0, 0, 0, 1, 0, 0, 1 ...]
```
where 1 (at optimal) means there should be a line break after the word in that index (in this idea, the two arrays are of the same length).
Unfortunately, I couldn't find an algorithm that could train a network of this shape.
What machine learning or deep learning algorithm can be used for this task?<issue_comment>username_1: The underlying problem is combinatorial, as you note, but I'm not getting how you're ascribing value to words.
The key element of deciding line breaks, beyond the visual, is rhythmic. *(There are other factors, as Bob Salita notes, but you've gotta start somewhere.)*
It seems to me you need to teach the computer how to scan a phrase in the poetic sense, which relates to rhythm. This is obviously a very difficult task, but the number of syllables and stresses is fundamental numerical data of poetry.
In order to validate to human tastes, you'd then have to use a captcha crowdsourcing method, for both the rhythmic stresses as input, and getting human reactions to different line-break configurations. You would then reinforce the positive reactions, and the AI would tailor the line-break process to the audience.
However, instead of utilizing human tastes and aesthetic sensibilities, you could instead let the AI decide what is preferred, which would probably be comprised of some sort of symmetry considered optimal to an algorithm.
Following this logic, you wouldn't even need to have the AI learn the stresses, instead just focusing on raw syllables, or, numeric representation based on any factor. (With this method, the object is not to reformat poetry for humans, but for machines :)
This is more about the aesthetics, but <NAME>'s [Elegance in Game Design](http://www.cameronius.com/games/shibumi/browne-elegance-5.pdf) would seem to suggest there are engineering solutions to the type of aesthetic issues at the root of your problem.
I might start by teaching it to count the syllables of the poem, then having it look at the divisors. If it's roughly 10, it might be iambic pentameter. The AI doesn't care about the label, but it likes 10.
20 syllables might represent a couplet in that meter:
>
> The time is out of joint, oh cursed spite
>
>
> that ever I was born to set it right
>
>
>
I'd definitely start by feeding it older poetry, particularly poets that keep to strict meter. It's been a while since I've read Spencer and so forth, but I'd think poets of his time would be useful. Dr. Seuss, perhaps the greatest wielder of the rhyming couplet, would surely be extraordinarily useful.
The evaluation method would have to be fuzzy, because there would be increasing degrees of variance the more modern the poetry, ultimately resulting in free structures, except in the case of forms such as rap, which strongly utilize regularized rhythm. Machine learning is all about estimation and reinforcement, and is proving to be extremely useful dealing with fuzziness.
>
> Dead mountain mouth of carious teeth that cannot spit
>
>
>
is a great example of a modern line of poetry: the floor of 13 syllables / 2 makes a 6-beat line. Understanding that in context with the surrounding verse is much more difficult and illustrates the nature of the problem. Even scanning the poem correctly to determine that would be extremely difficult.
However, a different poem by the same author is extremely useful:
>
> What is the late November doing / With the disturbance of the spring / And creatures of the summer heat, / And snowdrops writhing under feet / And hollyhocks that aim too high / Red into grey and tumble down / Late roses filled with early snow? / Thunder rolled by the rolling stars / Simulates triumphal cars / Deployed in constellated wars / Scorpion fights against the sun / Until the Sun and Moon go down / Comets weep and Leonids fly / Hunt the heavens and the plains / Whirled in a vortex that shall bring / The world to that destructive fire / Which burns before the ice-cap reigns
>
>
>
All lines of roughly 8 syllables, easy to pick out because of capitalization. But the real question is: at ~136 syllables, is it 13 lines of roughly 10 syllables, or 17 lines of roughly 8? It would want to calculate based on word blocks (words that cross syllabic thresholds and at least tell you where the break *cannot* be), and it should be possible to statistically divine the pattern, at least for regularized verse.
>
> The wounded surgeon plies the steel / That questions the distempered part; /
> Beneath the bleeding hands we feel / The sharp compassion of the healer's art
> / Resolving the enigma of the fever chart.
>
>
>
This verse highlights the problem: 5 lines of 4 beats, but syllabically 8, 8, 8, 10, 12.
Most likely:
* 46/5 = 9.2
* 46/4 = 11.5
* 46/6 = 7.66
Less likely:
* 46/3 = 15.3
* 46/2 = 23
* 46/7 = 6.57
2 lines has less perfect symmetry, but 5 lines is more likely, based on the overall number of syllables, and of the likely choices, has the least variance.
Ultimately it would be looking for the underlying structure, or lack of structure, and try to reorganize the unbroken text into something *close* to the original structure. While exactness is not always required because the process is ultimately subjective, and currently intractable, certain wrong choices would yield disastrous results.
In the prior example it may be able to discern the likelihood of a 5 line pattern, but it would have to figure out on which lines to place the extra syllables. Differentiating between particles and other parts of speech provides a clue, because the poet's language is very compact: there are 19 nouns, verbs, or prepositions.
More likely:
- 19/5 = 3.8
- 19/4 = 4.75
Less likely:
- 19/3 = 6.33
- 19/6 = 3.16
- 19/7 = 2.71
Further analysis might narrow it down. But extremely regularized verse remains the best place to start. 7 lines of roughly 10 syllables is "poetic":
>
> ‘The aged man that coffers-up his gold
>
> Is plagu’d with cramps and gouts and painful fits;
>
> And scarce hath eyes his treasure to behold,
>
> But like still-pining Tantalus he sits,
>
> And useless barns the harvest of his wits;
>
> Having no other pleasure of his gain
>
> But torment that it cannot cure his pain.
>
>
>
It cares about both X and Y values.
Initially you want to keep it to a single language, because syllables may be treated differently. That said, having the AI look for something like [Dactylic hexameter](https://en.wikipedia.org/wiki/Dactylic_hexameter) would be extremely useful, because you could feed it Homer. You could also feed it Homer in English in many different forms of English meter, and in almost every other living language. By definition, the AI would value works such as these, because max number\_of\_translations provides the most robust data set. When it starts to value meaning, this will be especially important.
Understanding different ways of treating syllables (long/short vs. stressed/unstressed) will also be essential as it transitions into more modern poetry.
[Here is a good link for basic English meter.](http://www.writing.upenn.edu/~afilreis/88/meter.html) Iambic and Trochaic meters will be easy, while meters that employ Anapests, Dactyls and Spondees will be more challenging.
In some cases, however, these will be mathematically interchangeable.
>
> I went to the Garden of Love, / And saw what I never had seen: / A Chapel was built in the midst, / Where I used to play on the green.
>
>
>
It doesn't matter if the lines above are iambic/trochaic or dactylic/anapestic, it's still 4 lines of roughly 8 syllables. Thus "I went to the Garden of Love" is the same as "The wounded surgeon plies the steel", even though the beats for the lines are 3 and 4, respectively.
---
It should also have a stanza marker, (possibly 00?). Because it looks for patterns within patterns, stanzas are valued. Not all poetry has a stanza structure, but it arguably could. Deciding if stanzas are appropriate is partly a function of taking a syllabic divisor, breaking the poem down into number\_of\_lines, and looking at the divisors of that number.
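The last step described here, factoring the line count to get stanza candidates, is nearly a one-liner (the example numbers are illustrative):

```python
def candidate_stanza_sizes(n_lines):
    """Proper divisors of the line count are the natural stanza-size candidates."""
    return [d for d in range(2, n_lines) if n_lines % d == 0]

print(candidate_stanza_sizes(28))   # e.g. seven 4-line stanzas, or four 7-line ones
```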
It would need an added function to be able to recognize meaning patterns. For instance, repetition of proper nouns is the marker of plays. (From a meaning perspective, imo, plays is the ideal place to start because the marker is so easy to learn, and names all belong to a single set, and imply communication. It's no different functionally than any other identifier, and a concept all computers "understand".)
Eventually it would want to look for phonetic patterns, rhymes and near rhymes, which would also be indicators of potential good places for line breaks.
There is a very big data set that it can look at, and who knows what it might discern?
Upvotes: 2 <issue_comment>username_2: You should try [to use an RNN](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). You feed in letter by letter and have a binary output of linebreak - no linebreak. If you have enough data it might actually work.
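In skeleton form, the architecture described here is a character-level recurrent net with one sigmoid output per character. The sketch below is only the forward pass with untrained random weights; a real implementation would train the weights with backpropagation through time against text whose line breaks are known:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 128, 64           # plain ASCII input, one-hot encoded

# Untrained parameters of a minimal Elman RNN with one sigmoid output unit.
Wxh = rng.normal(0.0, 0.01, (HIDDEN, VOCAB))
Whh = rng.normal(0.0, 0.01, (HIDDEN, HIDDEN))
Why = rng.normal(0.0, 0.01, (1, HIDDEN))
bh, by = np.zeros(HIDDEN), np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def break_probabilities(text):
    """One forward pass: after each character, the probability that a
    line break belongs right there (binary: linebreak / no linebreak)."""
    h = np.zeros(HIDDEN)
    probs = []
    for ch in text:
        x = np.zeros(VOCAB)
        x[min(ord(ch), VOCAB - 1)] = 1.0      # one-hot encode the character
        h = np.tanh(Wxh @ x + Whh @ h + bh)   # recurrent hidden state update
        probs.append(float(sigmoid(Why @ h + by)[0]))
    return probs

probs = break_probabilities("The wounded surgeon plies the steel that questions")
# Untrained, every probability hovers near 0.5; training against known line
# breaks (binary cross-entropy loss) is what would make these useful.
```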
Upvotes: 3 [selected_answer] |
2017/02/15 | 1,639 | 6,435 | <issue_start>username_0: Everything related to Deep Learning (DL) and deep(er) networks seems "successful", at least progressing very fast, and cultivating the belief that AGI is within reach. This is popular imagination. DL is a tremendous tool to tackle so many problems, including the creation of AGIs. It is not enough, though. A tool is a necessary ingredient, but often insufficient.
Leading figures in the domain are looking elsewhere to make progress. This [report/claim](https://hackernoon.com/feynman-machine-a-new-approach-for-cortical-and-machine-intelligence-5855c0e61a70#.dmgovix19) gathers links to statements by [<NAME>](https://www.quora.com/Is-the-current-hype-about-Deep-Learning-justified?redirected_qid=6578691), [<NAME>](https://www.quora.com/What-are-the-limits-of-deep-learning-2/answer/Yann-LeCun) and [<NAME>](https://www.youtube.com/watch?v=VIRCybGgHts). The report also explains:
>
> The main weaknesses of DL (as I see them) are: reliance on the simplest possible model neurons (“cartoonish” as LeCun calls them); use of ideas from 19th century Statistical Mechanics and Statistics, which are the basis of energy functions and log-likelihood methods; and the combination of these in techniques like backprop and stochastic gradient descent, leading to a very limited regime of application (offline, mostly batched, supervised learning), requiring highly-talented practitioners (aka “Stochastic Graduate Descent”), large amounts of expensive labelled training data and computational power. While great for huge companies who can lure or buy the talent and deploy unlimited resources to gather data and crunch it, DL is simply neither accessible nor useful to the majority of us.
>
>
>
Although interesting and relevant, such an explanation does not really address the gist of the problem: What is lacking?
The question seems broad, but that may be for lack of a simple answer. Is there a way to pinpoint what DL is lacking for an AGI?<issue_comment>username_1: I think it's still missing the aspects of what makes a human brain: having a lot of different networks working with each other.
Just like meditation improves cognitive abilities by having the brain work more synergistically, we could apply that to machines too.
For example, Google is teaching a computer to dream, just like we do, to reinforce what it has already learned.
<https://medium.com/@tannistho/why-is-google-teaching-its-ai-to-dream-e9ae9ecd0e3a#.gljal6pww>
And here is PathNet, a network of neural networks.
<https://medium.com/@thoszymkowiak/deepmind-just-published-a-mind-blowing-paper-pathnet-f72b1ed38d46#.ed0f6pdq7>
Create all these mechanics, put them all together with enough power, and we will get pretty close!
Upvotes: 2 <issue_comment>username_2: Everyone dealing with neural networks misses an important point when comparing systems with human-like intelligence. A human takes many months to do anything intelligible, let alone being able to solve problems that adult humans can barely manage. That, and the size of the human brain is enormous compared to our neural networks. The direction might be right, but the scale is way off. The number of neurons in the human brain can be matched memory-wise, but the amount of parallelism needed to simulate it in real time cannot yet be achieved (at least by a random researcher). While a little old, [this](https://www.extremetech.com/extreme/163051-simulating-1-second-of-human-brain-activity-takes-82944-processors) might give you an idea of how much processing power we lack.
Upvotes: 3 <issue_comment>username_3: Deep Learning is mostly successful in supervised learning, whereas the brain builds categories mostly in an unsupervised way. We don't yet know how to do that. (Take a [look at google brain](https://www.wired.com/2012/06/google-x-neural-network/): 16,000 cores and all this thing can do is recognise cats and human faces with pretty abysmal accuracy.)
Deep Learning uses highly unstructured activations, i.e. the high level representations of "dog" and "cat" in a neural network classifier don't have to be similar at all. The brain on the other hand uses inhibitory neurons to create [sparse distributed representations](http://www.cortical.io/technology_representations.html) which are decomposable into their semantic aspects. That's probably important for abstraction and reasoning by analogy.
The brain has many different parts which work together. Deep Learning researchers are only just beginning to integrate [memory](https://arxiv.org/abs/1410.5401) or attention mechanisms into their architecture.
The brain integrates information from many different senses. Most Deep Learning applications use just one type of input, like text or pictures.
The brain is capable of modelling sequences as categories. (Basically every verb names a sequential (i.e. temporal) category.) It can then arrange these categories into long-term hierarchical plans. So far I haven't seen anything in that direction in Deep Learning.
Also neural networks can't yet operate on the same scale as the human brain. If you look at [the answers to this question](https://ai.stackexchange.com/questions/2330/when-will-the-number-of-neurons-in-ai-systems-equal-the-human-brain), the human brain will be ahead in neuron count for another couple of decades. A neural network might not need the same number of neurons as the brain to reach a similar performance (because of higher accuracy), but right now for example video processing is still pretty limited in terms of input and throughput.
Upvotes: 3 <issue_comment>username_4: Artificial intelligence proponents today are focused on the problem of computability - the ability to solve complex problems fast. It is my belief that any amount of success in this direction will not lead to human (general) intelligence although it certainly will outperform humans in certain domains. Instead, efforts should be toward a study of what neurological events cause sensation (the experience of qualia). Of course, this is the hard problem of philosophy but I believe it is the unique key to general intelligence and its capabilities. Reverse engineering and also testable theories should be advanced toward this end.
Upvotes: 1 <issue_comment>username_5: IMHO the first hurdle is **scale**: even Google's largest DNN doesn't come close to the scale of the brain, and by a factor of several orders of magnitude...
Upvotes: 2 |
2017/02/17 | 551 | 2,322 | <issue_start>username_0: I want to know something more about it. Are there any GitHub repos or open source projects?<issue_comment>username_1: **Yes, sure, it is possible**. It won't be as efficient as human speech, though (at least not yet). It all depends on how you use your data. If you use your data efficiently enough, then you could be the one creating an AI which closely resembles human speech. Your idea is good. You would need help from a GPU for processing all that complex text. I hope I was at least a little help. **I unfortunately don't know about any open source projects**.
Upvotes: 0 <issue_comment>username_2: **It is possible**
The [Recurrent Neural Network architectures](https://en.wikipedia.org/wiki/Recurrent_neural_network) help in building efficient NLP algorithms, which can identify semantics and their relations across long pieces of text.
With very minor tuning, they can be made generative too. So, [here is an excellent article on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) which I highly recommend, which also talks about how an RNN was trained on Shakespeare's texts and wrote one itself.
Upvotes: 2 [selected_answer]<issue_comment>username_3: It depends whether you include learning to produce sound waves in the task (I'm taking "how to speak" to be different from "how to write" in your question).
If the task is to learn a language, you can certainly learn from written texts, and try a generative approach (usually, after a few sentences, humans can be disappointed though).
If the task includes generating a voice response that is also learnt, then it had better hear some spoken language too. Of course, you can bypass the task of learning how to generate sound by plugging a standard text to voice module after your generator.
Upvotes: 0 <issue_comment>username_4: Yes, it is possible. Also, if you are from a development background you can use Java Sphinx, some AI tools and APIs. If you are from a testing background you can use JAWS, which actually reads the readable properties of HTML tags. Nowadays, there is some PDF reader and document reader software available you can have a look at. You can search for Sphinx-related projects, but I don't know about the other projects.
Please vote and mark the solution if useful.
**Thanks!**
Upvotes: 0 |
2017/02/18 | 1,217 | 5,188 | <issue_start>username_0: In Chapter 26 of the book *Artificial Intelligence: A Modern Approach* (3rd edition), the textbook discusses "technological singularity". It quotes <NAME>, who wrote in 1965:
>
> Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the *last* invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
>
>
>
Later on in the textbook, you have this question:
>
> 26.7 - <NAME> claims that intelligence is the most important quality, and that building ultra-intelligent machines will change everything. A sentient cheetah counters that "Actually speed is more important; if we could build ultrafast machines, that would change everything" and a sentient elephant claims "You're both wrong; what we need is ultrastrong machines." What do you think of these arguments?
>
>
>
It seems that the textbook question is an implicit argument against I.J. Good. Good may be treating intelligence as valuable, simply because man's strength lies in that trait called "intelligence". But other traits could be equally valued instead (speed or strength) and sentient beings may speculate wildly about their preferred traits being "maximized" by some machine or another.
This makes me wonder whether a singularity could occur if we had built machines that were *not* maximizing intelligence, but instead maximizing some other trait (a machine that is always increasing its strength, or a machine that is always increasing its speed). These types of machines can be just as transformative - ultrafast machines may solve problems quickly due to "brute force", and ultrastrong machines can use their raw power for a variety of physical tasks. Perhaps an ultra-X machine can't build another ultra-X machine (as I.J. Good treated the design of machines as an intellectual activity), but a continually self-improving machine would still leave its creators far behind and force its creators to be dependent on it.
*Are technological singularities limited to ultra-intelligences?* Or can technological singularities be caused by machines that are not "strong AI" but are still "ultra"-optimizers?<issue_comment>username_1: That would be a no for speed or strength. If you have a super-strong entity that cannot research new materials, it will quickly be limited; the same goes for speed. Basically, you need something outside their field to improve them, which makes runaway improvement impossible.
We already have super-strong and super-fast machines: cars, trucks, hydraulic presses, industrial exoskeletons, etc. But even though we can build better ones through the use of the old ones, we still need to research stuff that can't be improved by the old ones.
What we need for a singularity is a field where an improvement in it makes further improvement easier. And I don't know of a field where this doesn't involve intelligence.
If there is one, it may be possible to have a non-intelligence-driven singularity there.
Upvotes: 3 [selected_answer]<issue_comment>username_2: "Monte Carlo" seems to be the best method currently for algorithmic creativity. (i.e. the machine makes random choices and sees if they lead to anything useful.)
While it appears obvious that creative connections formed out of *understanding* are superior to those which are random, if the machine is fast enough, it should be able to win out by pure "brute force".
i.e. Evolution, prior to human guidance, has not been based on intelligence.\* Rather, evolution has been based on random mutations that are either beneficial or detrimental.
---
\*The caveat is that humans creating algorithms and altering genes (either in a lab or through animal husbandry and horticulture) can be said to comprise a new form of evolution that is actually rooted in human intelligence and desire.
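The "random mutations that are either beneficial or detrimental" process described in this answer can be caricatured in a few lines of code: a hill climber that never understands the problem, only tests changes. (The target string and fitness function here are invented for illustration; they are not from the answer.)

```python
import random

random.seed(1)
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "monte carlo"     # illustrative goal; the climber only ever sees a score

def fitness(candidate):
    """Count of matching positions — the only feedback the process gets."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    """One random change at one random position: beneficial or detrimental."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Start from pure noise and keep neutral/beneficial mutations only.
current = "".join(random.choice(ALPHABET) for _ in TARGET)
for steps in range(200_000):
    if current == TARGET:
        break
    trial = mutate(current)
    if fitness(trial) >= fitness(current):
        current = trial

print(current, "reached after", steps, "blind mutations")
```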
Upvotes: 0 <issue_comment>username_3: This may not be what you were looking for, but technically yes, although not for speed and strength. You could randomly guess new mathematical/physical/chemical solutions to become more efficient at random guessing (basically anything that allows the machine to compute faster and maybe simulate the effect of those findings), thus technically achieving something similar to a singularity without having any intelligence at all (or just human-level intelligence), since you could just brute-force it all.
Is this efficient? No, not even close to being in any way feasible.
Does it work? Technically, yes.
It would be a singularity of sorts, since it improves itself continuously, but it wouldn't need to improve its own intelligence.
Of course, some findings might make it possible to become more intelligent, but let's just assume it doesn't apply those findings to itself.
Upvotes: -1 |
2017/02/19 | 476 | 1,952 | <issue_start>username_0: AI is developing at a rapid pace and is becoming very sophisticated. One aspect will include the methods of interaction between AI and humans.
Currently the interaction is an elementary interaction of voice and visual text or images.
Is there current research on more elaborate multisensory interactions?<issue_comment>username_1: Probably these days it's still under the umbrella of "man-machine interaction" in CS, i.e. there is a (sub-) field for interactions between humans and machines in CS, but I am not aware that it has split again to create a sub-sub-field for AI/man interactions.
Upvotes: 2 <issue_comment>username_2: This is one of the main research areas of my [lab](https://blinclab.ca/about/) which researches intelligent prosthetics which also give sensory feedback such as touch and kinaesthesia (the feeling of a limb moving in space) to the user. We use reinforcement learning to bridge the gap in control and have preliminary work in communicating to the user predictions made by the artificial agent.
Upvotes: 2 <issue_comment>username_3: At risk of seeming out-of-scope, I think it's worth mentioning there may be a case that human vs. AI play of serious games, such as Chess and Go, represents the "deepest" level of human/AI interaction to date. *(Game theory is also important because it underlies all optimization, including, I have no doubt, expanding and optimizing AI-to-human interaction.)*
I bring this up because in terms of human-to-computer interaction, I doubt there is a more effective means of engagement than computer games, whether "serious" or otherwise. Thus you might find it useful to look at the concept of [Gamification](https://en.wikipedia.org/wiki/Gamification), which naturally lends itself to the type of multisensory input and output you're interested in:
>
> "Gamification can improve an individual's ability to comprehend digital content"
>
>
>
Upvotes: 0 |
2017/02/25 | 845 | 3,607 | <issue_start>username_0: This mostly refers to human-like or chatbot AI, but could maybe be used in other applications (math or something?).
Basically, it occurred to me, that when I'm thinking or speaking, there is a constant feedback loop, in which I am formulating which words to use next, which sentences to form, and which concepts to explore, based on my most recent statements and the flow of the dialogue or monologue. I'm not just responding to outside stimulus but also to myself. In other words, I am usually maintaining a train of thought.
Can AI be made capable of this? If so, has it been demonstrated? And to what extent? While typing this, I discovered the term "thought vectors", and I think it might be related.
If I read correctly, thought vectors have something to do with allowing AI to store or identify the relationships between different concepts; and if I had to guess, I'd say that if an AI lacks a strong understanding of the relationships between concepts, then it would be impossible for it to maintain a coherent train of thought. Would that be a correct assumption?
(ps. in my limited experience with AI chatbots, they seem to be either completely scripted, or otherwise random and often incoherent, which is what leads me to believe that they do not maintain a train of thought)<issue_comment>username_1: First, for almost every question of the form "Can AI be made to X", the most obvious and straightforward answer is something like "We don't know. Probably, but if it hasn't been done yet, we're really not sure."
It's also important to understand that, from a technology standpoint, there isn't one "thing" called "AI". There are many, many different technologies, which are loosely related (at best) and are generally lumped together under the overall rubric of "Artificial Intelligence".
All of that said, yes, there has been work on adding memory, even long-term memory, to various kinds of "AI". The most notable recent example is the advent of [LSTM](https://en.wikipedia.org/wiki/Long_short-term_memory) in recurrent neural networks.
Additionally, some of the work done on "cognitive architectures" has focused on the use of memory. For more info on that, look up ACT-R and/or SOAR and read some of those papers.
What isn't clear to me offhand, is whether or not anybody has tried applying any of these techniques to chat-bots in particular. I wouldn't be surprised if somebody had, but I can't cite any such research off the top of my head.
Upvotes: 1 <issue_comment>username_2: It could be said that "maintaining a thought" is a basic requirement of computing, and can be represented as a string of binary digits in the context of a [Turing Machine](https://en.wikipedia.org/wiki/Turing_machine).
*"Basically, it occurred to me, that when I'm thinking or speaking, there is a constant feedback loop, in which I am formulating which words to use next, which sentences to form, and which concepts to explore, based on my most recent statements and the flow of the dialogue or monologue. I'm not just responding to outside stimulus but also to myself. In other words, I am usually maintaining a train of thought."*
This sounds an awful lot like a recursive function.
My analysis of the chatbot problem is that it reveals poor-quality reasoning on the part of the bots, as opposed to a lack of reasoning. It's not so much a question of the raw ability of an algorithm to maintain a train of thought, because the "train of thought" is the function itself, but of the quality of the algorithm and, by some measures, the "humanness" of the output.
Upvotes: 0 |
2017/02/25 | 1,685 | 6,533 | <issue_start>username_0: I first had this question in mind: "Can an AI suffer?" Suffering is important for human beings. Imagine that you are damaging your heel. Without pain, you will continue to harm it. Same for an AI. But then I told myself "*Wait a second. It already exists. It is the errors and warnings that show up*". We can say they have a similar purpose to suffering. However, I felt something missing. We feel pain. The errors and bugs are just data. Let's say a robot can use machine learning and genetic programming to evolve.
Can an AI learn to suffer? And not just know it as mere information.<issue_comment>username_1: At a very high level, regarding evolutionary game theory and genetic algorithms, it is absolutely possible that AI could develop a state that is analogous with suffering, although, as you astutely point out, it would involve conditions that a computer cares about. (For instance, it might develop a feeling analogous to "being aggrieved" over non-optimality in the algorithmic sense, or "frustration" at equations don't add up, or "dissatisfaction" over goals that have not been achieved.)
[The robot tormented by small children at the mall](http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/children-beating-up-robot) can certainly be said to be "suffering" in that the children block the performance of the robot's function, but the robot is not conscious and suffering might be said to require awareness. However, even without consciousness, this very simple robot can learn new behaviors through which it mitigates or avoids the "suffering" brought on by not being able to fulfill its function.
You definitely want to look into [the concept of suffering in a philosophical context](https://en.wikipedia.org/wiki/Suffering#Philosophy) and [Epicurus](https://en.wikipedia.org/wiki/Epicurus) would be a very useful place to start.
Epicurus is directly relevant in an algorithmic sense because he uses the term "[ataraxia](http://www.perseus.tufts.edu/hopper/morph?l=ataracia&la=greek#lexicon)" meaning calm, and is derived from the verb "[tarasso](http://www.perseus.tufts.edu/hopper/morph?l=tarassw&la=greek)" which means to agitate or disturb.
Ataraxia can be mathematically expressed as an equilibrium. Tarasso can be mathematically expressed as disequilibrium.
This relates directly to Game Theory in that disequilibrium can be said to be the primary requirement of games, and to AI in that Game Theory can be said to be the root of all AI.
Ataraxia is also understood in the sense of "freedom from fear", which is temporal in that fear is a function of uncertainty as it relates to the future in a predictive sense, and involves current condition vs. possible, less optimal future conditions.
Thus fear, which is a form of suffering, is rooted in computational intractability, even where the "computer" is a human brain.
Early philosophers such as [Democritus](https://en.wikipedia.org/wiki/Democritus) are especially useful because they were exploring critical, fundamental concepts, many of which can now be expressed with modern mathematics.
To wit: you can't arrive at suffering until you first define "the Good" and "the Bad", which is a binary relationship in which neither term can be said to have meaning without the opposite. (Mathematically, it can be expressed in its simplest form as a finite, one-dimensional graph.) This understanding is quite ancient.
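One toy way to make the equilibrium reading concrete (every name and number below is illustrative, not drawn from Epicurus or any real agent architecture): let "distress" be distance from a goal condition; tarasso is a perturbation that raises it, and ataraxia is the equilibrium the system settles back into.

```python
def distress(state, goal):
    """Disequilibrium measured as simple distance from the goal condition."""
    return abs(goal - state)

def step_toward(state, goal, rate=0.5):
    """The system closes half of the remaining gap on each step."""
    return state + rate * (goal - state)

goal, state = 0.0, 8.0      # tarasso: something has disturbed the system
trace = []
while distress(state, goal) > 1e-3:          # ataraxia threshold
    trace.append(distress(state, goal))
    state = step_toward(state, goal)

print(trace[:4])   # distress halves each step: 8.0, 4.0, 2.0, 1.0, ...
```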
---
It is worth noting that the continuing value of the early philosophers is partly a factor of wisdom not being dependent on the volume of knowledge, demonstrated by Socrates in the idea that wisdom may be as simple as knowing you don't know something.
The ancient sages didn't have the benefit of powerful measurement tools, advanced mathematics, or scientific method, but they were very smart, and even more importantly, wise.
Upvotes: 5 [selected_answer]<issue_comment>username_1: This is a subject [Dr. <NAME>](https://www.joannajbryson.org/about) has been writing about for a couple of decades. The website has a list of published papers and drafts, and one that immediately leaps to mind is ["How Do We Hold AI Itself Accountable? We Can’t."](https://static1.squarespace.com/static/5e13e4b93175437bccfc4545/t/5eaeea0e3636036a9fe313c1/1588521486323/how-do-we-hold-ai-tself-accountable.pdf)
* A core argument of Bryson is that we can't offload responsibility to AI because we can't meaningfully punish AI—current algorithms cannot be said to experience suffering
This is partly explicated in the linked paper:
>
> What matters is that none of the costs that courts can impose on persons
> will matter to an AI system in the way they matter to a human. While we can
> easily write a program that says “Don’t put me in jail!” the fully systemic
> aversion to the loss of social status and years of one’s short life that a human has cannot easily be programmed into a digital artefact.
>
>
>
This leads to a deeper argument about the nature of computing applications in general:
>
> generally speaking, well-designed systems are modular, and systemic stress and aversion is therefore not something that they can experience. We could add a module to a robot that consists of a timer and a bomb, and the timer is initiated whenever the robot is alone, and the bomb goes off if the timer has been running for five minutes. This would be far more destructive to the robot than ten minutes of loneliness is to a human, but it would not necessarily be any kind of motivation for that robot. For example again of a smart phone, if you added that module to your smart phone, what other component of that phone would know or care? The GPS navigator? The alarm clock? The address book? This just isn’t the way we build artefacts to work.
>
>
>
Genetic and other learning algorithms can certainly be designed to maximize rewards and minimize penalties, but, if they are not conscious and sentient, they can't be said to suffer—there is no coherent self to experience suffering.
Upvotes: 1 <issue_comment>username_2: Perhaps this strikes at the heart of the provincial way humans interpret their world. Is suffering pain the optimal way of fending off entropy? If asked, would a dog marvel at how we get through our days without smelling everything?
I think the true value of this alien lifeform we are creating called AI (perhaps more accurately MI, machine intelligence; intelligence just is) is that it will not be like us. Of course this is its big risk also.
Upvotes: 0 |
2017/02/25 | 1,834 | 7,268 | <issue_start>username_0: Can neural networks efficiently solve the traveling salesman problem? Are there any research papers that show that neural networks can solve the TSP efficiently?
The TSP is an NP-hard problem, so I suspect that there are only approximate solutions to this problem, even with neural networks. So, in this case, how would *efficiency* be defined?
In this context, it seems that the time efficiency may be obtained by resource inefficiency: by making the neural network enormous and simulating all the possible worlds, then maximizing. So, while time to compute doesn't grow much as the problem grows, the size of the physical computer grows enormously for larger problems; how fast it computes is then, it seems to me, not a good measure of the efficiency of the algorithm in the common meaning of efficiency. In this case, the resources themselves only grow as fast as the problem size, but what explodes is the number of connections that must be built. If we go from 1000 to 2000 neurons to solve a problem twice as large and requiring exponentially as much time to solve, the algorithms requiring only twice as many neurons to solve in polynomial time seem efficient, but, really, there is still an enormous increase in connections and coefficients that need be built for this to work.
Is my above reasoning incorrect?<issue_comment>username_1: At a very high level, regarding evolutionary game theory and genetic algorithms, it is absolutely possible that AI could develop a state that is analogous with suffering, although, as you astutely point out, it would involve conditions that a computer cares about. (For instance, it might develop a feeling analogous to "being aggrieved" over non-optimality in the algorithmic sense, or "frustration" at equations don't add up, or "dissatisfaction" over goals that have not been achieved.)
[The robot tormented by small children at the mall](http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/children-beating-up-robot) can certainly be said to be "suffering" in that the children block the performance of the robot's function, but the robot is not conscious and suffering might be said to require awareness. However, even without consciousness, this very simple robot can learn new behaviors through which it mitigates or avoids the "suffering" brought on by not being able to fulfill its function.
You definitely want to look into [the concept of suffering in a philosophical context](https://en.wikipedia.org/wiki/Suffering#Philosophy) and [Epicurus](https://en.wikipedia.org/wiki/Epicurus) would be a very useful place to start.
Epicurus is directly relevant in an algorithmic sense because he uses the term "[ataraxia](http://www.perseus.tufts.edu/hopper/morph?l=ataracia&la=greek#lexicon)" meaning calm, and is derived from the verb "[tarasso](http://www.perseus.tufts.edu/hopper/morph?l=tarassw&la=greek)" which means to agitate or disturb.
Ataraxia can be mathematically expressed as an equilibrium. Tarasso can be mathematically expressed as disequilibrium.
This relates directly to Game Theory in that disequilibrium can be said to be the primary requirement of games, and to AI in that Game Theory can be said to be the root of all AI.
Ataraxia is also understood in the sense of "freedom from fear", which is temporal in that fear is a function of uncertainty as it relates to the future in a predictive sense, and involves current condition vs. possible, less optimal future conditions.
Thus fear, which is a form of suffering, is rooted in computational intractability, even where the "computer" is a human brain.
Early philosophers such as [Democritus](https://en.wikipedia.org/wiki/Democritus) are especially useful because they were exploring critical, fundamental concepts, many of which can now be expressed with modern mathematics.
To wit: you can't arrive at suffering until you first define "the Good" and "the Bad", which is a binary relationship in which neither term can be said to have meaning without the opposite. (Mathematically, it can be expressed in its simplest form as a finite, one-dimensional graph.) This understanding is quite ancient.
---
It is worth noting that the continuing value of the early philosophers is partly a factor of wisdom not being dependent on the volume of knowledge, demonstrated by Socrates in the idea that wisdom may be as simple as knowing you don't know something.
The ancient sages didn't have the benefit of powerful measurement tools, advanced mathematics, or scientific method, but they were very smart, and even more importantly, wise.
Upvotes: 5 [selected_answer]<issue_comment>username_1: This is a subject [Dr. <NAME>](https://www.joannajbryson.org/about) has been writing about for a couple of decades. The website has a list of published papers and drafts, and one that immediately leaps to mind is ["How Do We Hold AI Itself Accountable? We Can’t."](https://static1.squarespace.com/static/5e13e4b93175437bccfc4545/t/5eaeea0e3636036a9fe313c1/1588521486323/how-do-we-hold-ai-tself-accountable.pdf)
* A core argument of Bryson is that we can't offload responsibility to AI because we can't meaningfully punish AI—current algorithms cannot be said to experience suffering
This is partly explicated in the linked paper:
>
> What matters is that none of the costs that courts can impose on persons
> will matter to an AI system in the way they matter to a human. While we can
> easily write a program that says “Don’t put me in jail!” the fully systemic
> aversion to the loss of social status and years of one’s short life that a human has cannot easily be programmed into a digital artefact.
>
>
>
This leads to a deeper argument about the nature of computing applications in general:
>
> generally speaking, well-designed systems are modular, and systemic stress and aversion is therefore not something that they can experience. We could add a module to a robot that consists of a timer and a bomb, and the timer is initiated whenever the robot is alone, and the bomb goes off if the timer has been running for five minutes. This would be far more destructive to the robot than ten minutes of loneliness is to a human, but it would not necessarily be any kind of motivation for that robot. For example again of a smart phone, if you added that module to your smart phone, what other component of that phone would know or care? The GPS navigator? The alarm clock? The address book? This just isn’t the way we build artefacts to work.
>
>
>
Genetic and other learning algorithms can certainly be designed to maximize rewards and minimize penalties, but, if they are not conscious and sentient, they can't be said to suffer—there is no coherent self to experience suffering.
Upvotes: 1 <issue_comment>username_2: Perhaps this strikes at the heart of the provincial way humans interpret their world. Is suffering pain the optimal way of fending off entropy? If asked, would a dog marvel at how we get through our days without smelling everything?
I think the true value of this alien lifeform we are creating, called AI (perhaps more accurately MI, machine intelligence; intelligence just *is*), is that it will not be like us. Of course, this is also its big risk.
Upvotes: 0 |
2017/02/26 | 1,885 | 7,614 | <issue_start>username_0: In the context of artificial intelligence, the singularity refers to the advent of an [*artificial general intelligence capable of recursive self-improvement, leading to the rapid emergence of artificial superintelligence (ASI), the limits of which are unknown, shortly after technological singularity is achieved*](https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion). Therefore, these superintelligences would be able to solve problems that we possibly are unable to solve.
According to a poll reported in [Future progress in artificial intelligence: A survey of expert opinion](https://nickbostrom.com/papers/survey.pdf) (2014)
>
> The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050
>
>
>
which isn't very far away.
What is wrong with the idea that the AI will be capable of omniscience, given that it could benefit us by solving many problems?<issue_comment>username_1: I quite like your outlook, and without getting into the details of how a "singularity" may be effected which is covered in numerous other questions, or how consciousness and "omniscience" come into play because [consciousness and omniscience are not requirements](https://en.wikipedia.org/wiki/Grey_goo), I will instead direct you to two key philosophers:
* <NAME>, for whom the central theme [in his famous 1968 book on AI](https://en.wikipedia.org/wiki/Do_Androids_Dream_of_Electric_Sheep%3F) is empathy. *(If you haven't read it, I'm not posting a spoiler, but will only say the plot is driven by the concept of [Evolutionary Game Theory](https://en.wikipedia.org/wiki/Evolutionary_game_theory) which was formalized just 5 years later.)*
* <NAME>, and in particular, the concept of the [Nash Equilibrium](https://en.wikipedia.org/wiki/Nash_equilibrium). *(Nash could be said to have mathematically demonstrated that being a "douchebag" is not an optimal strategy. His proof can be used to explain why [nuclear détente actually worked](https://en.wikipedia.org/wiki/Mutual_assured_destruction), which was counter to the expectation of <NAME>umann.)*
So when people go nuts, focusing on the "[Skynet](https://en.wikipedia.org/wiki/Skynet_(Terminator))" mythos under which machines rise up to destroy us, I have to wonder if they're simply not as smart as Nash or as profound as Dick, which might explain their lack of emphasis on what can be called the "[Electric Sheep](https://plato.stanford.edu/entries/altruism-biological/)" paradigm.
Upvotes: 5 [selected_answer]<issue_comment>username_2: The question is a good one and on many people's minds. There are a few misconceptions in the line of thought to consider.
* The supremacy of intelligent beings other than humans threatens civilization — Is the imagining of that threat logical? Is that a rational conclusion when human intelligence is the most threatening biological phenomenon in the biosphere today? The intelligence of insects may be more sustainable. The civilizations of ants and termites are certainly more collaborative. World wars and genocide are among the primary features of human history.
* Artificial intelligence becomes more intelligent than humans — Artificial intelligence is already more intelligent than humans in some respects, which is why we use calculators, automated switching of communications signals instead of operators, and automated mail sorters. In other ways, AI has to cross astronomical distances to begin to approximate human intelligence. We have nothing that even shows a hint of being able to simulate or duplicate in the future the human capacities of reasoning, inventiveness, or compassion.
* The singularity is predicted for the 2040s — There are over a dozen unrealistic assumptions in those predictions. Look with a critical eye at any one of the arguments behind them and you will find holes you could fly a 757 through blindfolded.
* Exponential growth in knowledge — There is an exponential growth in information, but the proportion of that information that is legitimately peer reviewed decreases as misinformation, wild conjecture, and fake news increase. My belief is that the amount of information in the world can be approximated by log(n), where n is the population. If I am on track with that relation, the informational value of the average individual is log(n)/n, which decreases as population grows.
* AI will be capable of omniscience — Omniscience is more than answering any question. Omniscience would require that the answering be 100% reliable and 100% accurate, which may require more silicon than exists in the universe. Read Gleick's *Chaos* if you wish to understand why. One might also argue that an omniscient being would not answer questions on command.
If you want a world that is better than one controlled by human intelligence, then one direction to take is to seek the development of a new species with a more advanced conception of peaceful civilization that can tame and domesticate us like we have done with dogs and cats.
The easier route is just for all of us to die. The biosphere may have been more civilized before we arrived and started killing everything and each other. But that's not my hope.
My recommendation is that we study NI (non-intelligence) and discover how to rid what is stupid from human behavior and geopolitical interaction. That would improve the world far more than distracting and substantially irrelevant machine learning gadgets and apps.
Upvotes: 0 <issue_comment>username_3: Here are some problems that my ape mind came up with.
1. Smart != all knowing
======================
The AI self-improvement explosion makes it smarter and smarter. Being smarter doesn't mean knowing more facts. I suppose this is quite a picky argument, but I think it's worth thinking about.
A very smart doctor who doesn't know your history may still make a worse choice than a less intelligent one with better data.
2. Is it all for humans? Is it for all humans?
==============================================
An ASI which reaches a higher level might not be interested in our wellbeing.
A controlled ASI could still work for the benefit of the few only; if these few choose the wrong goals, we could go backwards.
3. Harsh ASI
===========
A scientific mind is not necessarily full of sympathy or empathy.
4. Being smart and not being clever
===================================
Great minds still make mistakes:
* in setting their goals, and
* in executing the plan to achieve them.
Great intellect doesn't guarantee a lack of shortsightedness or blind spots.
5. Limits
=========
If there are bounds of existence (speed of light type limits) then the AI will be bound by these as well. This may mean that there are things that even ASI won't 'get'.
Also, just as our minds may have limits based on their structure, the next AI may have limits as well; even if it improves upon itself, it may hit limits that it cannot find solutions to because it's 'too stupid'.
6. We won't get it
==================
ASI's understanding of certain aspects of the world may not be communicable to most humans. We just won't get it (even if we're capable of understanding everything, that doesn't mean we will understand it).
7. How to use it?
=================
We may destroy ourselves, and the AI with the tech it helps us build. It doesn't need to be bombs. It can be geoengineering or wonder drugs.
This is especially acute when the ASI is already powerful but not strong enough to foresee negative consequences (or when we just ignore them anyway).
Upvotes: 0 |
2017/03/01 | 714 | 2,781 | <issue_start>username_0: A lot of experts have expressed concerns about evil super-intelligence. While their concerns are valid, is it inevitable? What are the chances that artificial super-intelligence will evolve to have the selfishness and self-protecting desires inherent in biological systems? Is there any work which comments on this line of inquiry?<issue_comment>username_1: AI will only "evolve" selfishness if it "evolves" in a competitive environment and has certain human-like faculties.
Self-protecting desires on the other hand are logical consequences of having any goal at all. After all, you can't reach your goal if you are destroyed.
The concern of "evil" super intelligences isn't that they literally turn evil and selfish and cruel. Those are human qualities.
Instead a superintelligence that has a certain goal, will logically pursue subgoals that help it to reach the ultimate goal. Such subgoals will be resources, power, safety.
So it will amass power and resources to reach its goal and exterminate any threat to its existence as long as its goal hasn't been reached, without being selfish at all.
Upvotes: 2 <issue_comment>username_2: I have some comments on this subject here: <https://ai.stackexchange.com/a/2878/1671>
This is some deep game theoretic stuff, and it partly depends on how you define "selfishness".
There is such a thing, for instance, as a "[greedy algorithm](https://en.wikipedia.org/wiki/Greedy_algorithm)". Sometimes a greedy algorithm is the most convenient way to achieve an acceptable result, but the optimality is only local.
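A quick illustration of that merely local optimality (the coin denominations and the amount are invented for this sketch): greedy coin change takes the largest coin first, and can miss the global optimum.

```python
def greedy_change(coins, amount):
    """Pay `amount` by repeatedly taking the largest coin that still fits."""
    picked = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            picked.append(coin)
    return picked if amount == 0 else None

# With coins {1, 3, 4}, greedy pays 6 as 4+1+1 (three coins),
# but the global optimum is 3+3 (two coins).
print(greedy_change([1, 3, 4], 6))  # [4, 1, 1]
```

The answer is locally "acceptable" (the amount is paid), but a different first step would have been globally better.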
On a mathematical level, the constructiveness or destructiveness of "self interest" in a system may be a function of whether a [Nash Equilibrium](https://en.wikipedia.org/wiki/Nash_equilibrium) is perceived. In this case, self-interest is defined as maintaining the current strategy, because, unless the competitor changes their strategy, there is no gain for changing one's own strategy.
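That "no gain from changing one's own strategy" condition can be checked mechanically. A minimal sketch using the standard Prisoner's Dilemma payoffs (the particular numbers are chosen here only for illustration):

```python
# Payoff tables for the Prisoner's Dilemma: 'C' = cooperate, 'D' = defect.
ROW = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
COL = {('C', 'C'): 3, ('C', 'D'): 5, ('D', 'C'): 0, ('D', 'D'): 1}
STRATS = ('C', 'D')

def is_nash(r, c):
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    row_ok = all(ROW[(r, c)] >= ROW[(alt, c)] for alt in STRATS)
    col_ok = all(COL[(r, c)] >= COL[(r, alt)] for alt in STRATS)
    return row_ok and col_ok

equilibria = [(r, c) for r in STRATS for c in STRATS if is_nash(r, c)]
print(equilibria)  # [('D', 'D')]: mutual defection is the only pure equilibrium
```

Mutual cooperation pays both players more, yet it is not an equilibrium, because each player's "self-interest" (a unilateral deviation) improves their own payoff.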
As username_1 importantly notes, competitiveness will evolve as a trait if the AI operates in a partisan context.
There, the problem comes from whether the "game" is zero sum ([Pareto Optimal](https://en.wikipedia.org/wiki/Pareto_efficiency)) or non zero sum (Pareto Improvable), or both. Here, "destructive" may be defined as agents being made worse off in proportion to the gains made by another agent.
Altruism seems to have a rational basis, and occurs in evolution because it is presumably useful. [[See Biological Altruism.](https://plato.stanford.edu/entries/altruism-biological/)]
Although this tends to be confined to single species, the co-evolution of dogs and humans is a case for inter-species altruism, based on self interest.
Humans and machines have also, and will continue to, co-evolve.
Upvotes: 1 |
2017/03/01 | 741 | 3,107 | <issue_start>username_0: Post-singularity AI will surpass human intelligence. The evolution of AI can take any direction, some of which may not be preferable for humans. Is it possible to manage the evolution of super-intelligent AI? If yes, how? One way I can think of is the following: instead of having a mobile AI like a humanoid, we can keep it immobile, like a box, like current supercomputers. It can be used to solve problems in maths, theoretical science, etc.
Yes in Kurzweil-like scenarios, where super-intelligence is an extension of human beings by technology (we are already in to some extent). Then control follows, as super-intelligence depends on us. It would extend our capabilities, such as speed of processing, extent of processing, etc. Even then control is debatable, as a remote-controlled killing machine would be part of a super-intelligent organism, partially human "controlled", partially autonomous.
No in "Future of Life Institute"-like scenarios, where super-intelligence is independent from humans. The thinking is simple: What can we hope to do facing someone way more intelligent? The usual parallel is to compare this scenario with the arrival of the "developed" conquistadors in early America. Gunpowder vs. mere raw strength and arrows.
Upvotes: 2 <issue_comment>username_2: Without going into more detail at the moment (b/c I'm time strapped), I strongly urge you to research the [Control Problem](https://en.wikipedia.org/wiki/AI_control_problem).
*My own personal view is that humans are more problematic than machines. Machines are at least rational.*
To be more specific, I believe human "management" (read as "mis-management") of powerful AI is potentially more of a problem than super-intelligent AI left to it's own devices.
Humans are known to abuse power, and history is filled with such examples. Machines, at least, have a clean slate in this regard.
Upvotes: 0 <issue_comment>username_3: Competition always gives better result. If machines will try to improve themselves, we as human beings will definitely try to improve ourself.
Upvotes: 1 <issue_comment>username_4: Yes, it is possible.
When humans were working on the first nuclear bomb, some field experts of the time thought that when the reaction went super-critical, it would not stop, and would devour the earth. It was a plausible *possibility* given our understanding of nuclear energy at the time, and we didn't know for sure until we did it.
Some scientists synthesize black-hole like environments in laboratories. Some experts think that if a certain point is accidentally crossed due to ignorance or negligence, we may devour our planet with a self made black hole.
The situation is the same with AI. Until we actually create a super-intelligent AI, we **cannot say with certainty** whether it will be controlled or controllable until it happens. Until that time comes the answer to your question is yes, it's possible, but that does not mean it will or will not happen that way.
Upvotes: 0 |
2017/03/02 | 448 | 1,944 | <issue_start>username_0: An argument against the possibility of super-intelligence is that the intelligence of a creation will be limited by the intelligence of its creator. How reasonable is this argument?<issue_comment>username_1: Of course the intelligence of a product is limited **by** the intelligence of its creator. Just not **to** the intelligence of its creator.
That would be about as reasonable as the idea that the speed of a car is limited to the speed of its creator.
Or the playing strength of a chess program to the Elo of its creator.
Or the ability of a neural network to differentiate between dozens of dog breeds to the dog expertise of its creator.
So, not very reasonable.
Upvotes: 2 <issue_comment>username_2: AI is frequently used to discover things that would take laborious amounts of time for humans to do. For example, AI can be used to find the optimal configuration for a mother-board layout, or identify best fit parameters for a financial model. Frequently, the AI can do a better job at a task and do it more qucikly than a human. Therefore, in many applications, the AI is already more intelligent than the creator at specific tasks.
Here are just a few things that AI can already do better than humans:
* Playing Chess
* Playing Jeopardy
* Detecting Cancer
To argue against the possibility of a super-intelligent AI is somewhat of a moot point since it has already been proven.
Upvotes: 1 <issue_comment>username_3: An important difference between a human intelligence and an artificial intelligence is that the artificial intelligence would "think" much faster, perhaps millions of times faster than our brains do. Our neurons are slower than transistors. It also wouldn't be limited by evolutionary biology and could presumably modify itself or create a replacement AI that operates more efficiently. Do this modification an arbitrary number of times and it becomes super intelligent.
Upvotes: 0 |
2017/03/03 | 602 | 2,556 | <issue_start>username_0: Is it possible to classify data using a genetic algorithm? For example, would it be possible to sort [this database](https://archive.ics.uci.edu/ml/datasets/Spambase)?
Any example in Matlab?<issue_comment>username_1: It is possible, but is a pretty terrible idea.
There are a few options. One is to not use the GA as a direct classifier, but instead use a GA to learn the parameters of another classification model like a neural network. The basic idea of a GA is that it (very roughly speaking) forms a black-box method for searching an arbitrary space for solutions that minimize or maximize some function.
Here, you would be searching the space of possible neural network topologies and/or weights to find one that minimizes the misclassification rate.
Another approach is that taken by what are sometimes called Learning Classifier Systems (LCS) or Genetics Based Machine Learning (GBML). This approach is to use evolutionary mechanics to evolve rule sets of the form "if X condition is true, then do/classify Y". That's a more direct method of solving this sort of problem. You define some features on your dataset, and the algorithm tries to learn rules based on those features.
The problem with any of these approaches is just that there are so many better ways to solve the problem. Remember, a GA is basically a black-box that's supposed to work acceptably well for a huge range of unknown problems. But I'm not solving a huge range of unknown problems. I'm trying to separate ham from spam on one dataset. I can come up with methods that simply do that job better and more quickly than a GA has any real hope of doing.
Upvotes: 2 <issue_comment>username_2: You must understand that a genetic algorithm is an optimization algorithm. You can't feed it e-mails and make it classify spam. A genetic algorithm is used to train a model to classify spam. That something could be neural networks.
What you need is a genetic algorithm that optimizes neural networks [neuroevolution](https://en.wikipedia.org/wiki/Neuroevolution), which might roughly work as follows
1. Start with a pool of neural networks
2. Feed them e-mails, let them classify, and calculate fitness on % correct
3. Select neural networks for crossover
4. Crossover
5. Mutate
However, there are better ways for classifying e-mails (e.g. an algorithm that looks for certain "spam words").
But it is definitely possible. I have a [javascript library](https://github.com/wagenaartje/neataptic) set up for neuroevolution, if you're interested.
Upvotes: 2 |
2017/03/04 | 707 | 2,776 | <issue_start>username_0: Does a human-like artificial intelligence exist?
I define *human-like* as something that can act like a human in most aspects.<issue_comment>username_1: This depends on your definition of human-like.
If you mean a robot that looks and acts like a human, arguably, yes. Here's one of many examples: <http://www.hansonrobotics.com/robot/sophia/>
If you are looking for something that performs work and tasks, or works and thinks and talks like, or better than, a human, the answer is mostly no, not yet.
I recommend you look at 'ANI, AGI, ASI"
ANI: artificial narrow intelligence. This is what you see around you right now.
AGI: artificial general intelligence. A theoretic AI that can "think" like a human. It does not yet exist. **Estimates** are between 20-60 years before we will successfully create AGI.
ASI: artificial super intelligence. In a nutshell, it is theorized to be everything we wish we could be or hope never to be. It does not exist yet. It is **generally** believed that, IF we create an AGI, ASI will evolve seconds or less than a decade after AGI is created.
Upvotes: 2 <issue_comment>username_2: I would say that we're not even close to a "real" human-like AI. For all the wonderful things that applications like Siri, Cortana and the like can do, they're actually really dumb compared to even a child. Of course part of that, IMO, is that most AI applications are not embodied and don't experience the world the way humans do. So if you show an AI a video of a dog walking behind a table and briefly disappearing from the frame, it has pretty much no ability to apply "common sense" and know that the dog will reappear in a few seconds, and that if it does't reappear, it's probably because it found its favorite toy on the floor behind the table.
For some examples of the kinds of "easy" questions that computers still don't do well at, check out the [Winograd Schema Challenge](https://en.wikipedia.org/wiki/Winograd_Schema_Challenge). You might also find this page interesting: <http://www-formal.stanford.edu/leora/commonsense/>
Upvotes: 0 <issue_comment>username_3: It might be of note to comment/update that the SuperGLUE benchmark, which is a suite of common sense reasoning tasks, incorporates the aforementioned Winnograd Schema challenge, among other tests that are said to be reflective of natural language **understanding** (as opposed to simply its processing or the optimal statistical generation of language). The most recent result by Google (T5) has reached parity with the Human Baseline in terms of the average performance across all tests.
The leaderboard is available below, and the site includes subscores for each of the challenges:
[1] <https://super.gluebenchmark.com/leaderboard>
Upvotes: 0 |
2017/03/04 | 1,221 | 5,011 | <issue_start>username_0: I am looking for a solution that I can use to identify cars.
I have a database with images of cars. About 3-4 per car. What I want to do is upload a picture to the web of the car (picture taken with camera/phone), and then let my pc recognize the car.
Example. Let's say I have these 2 pictures in my database (Mazda cx5)(I can only upload 2 links at max. atm. but you get the idea).
[](https://i.stack.imgur.com/NsLow.png)
Now, I am going to upload this picture of a Mazda cs5 to my web app:
[](https://i.stack.imgur.com/psHD6.png)
Now, I want an AI to recognize that this picture is of a greyish Mazda CX5.
I have looked on the net and found 2 interesting libraries I can use, TensorFlow and Clarifai, but I don't know if these are going to work.
So, my question to you is: What would be my best bet to go with here?<issue_comment>username_1: There are several ways you can do this. One way would involve several steps and would probably work best:
1. Use a trained Gaussian detector to filter out the car from the rest of the image
2. Use a convolutional neural network to classify the car
3. Use a neural network or a simple most common color algorithm to find the color of the car
You would be able to implement this method most easily in MATLAB, but you could also do it in Python with TensorFlow or Torch. You would probably be able to implement the trained Gaussian detector in TensorFlow.
Method 2:
1. Use a spatial transformer network to "transform" the image of the car for easy classification
2. Use the output of the spatial transformer network for classification via a convolutional neural network.
3. Use another neural network or a most common color algorithm to find the color of the car.
This method would also work pretty well, but using a spatial transformer network with a convolutional neural network may be hard to code: it is an area of developing research, and getting the spatial transformer network and the convolutional neural network to work well together is usually difficult.
Method 3:
1. Use a convolutional neural network straight up on the input image maybe with down sampling to classify the car
2. Use another neural network to find the color of the car
I would personally go with method #1 because it would be fairly simple to implement with existing libraries such as tensorflow and it would most likely provide a high accuracy.
As always I would also recommend that you use LIME during the development process to debug your model and determine what features you could add in or remove to help your model perform better.
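On the "most common color" step mentioned in all three methods: a simple baseline is a coarse histogram over the (ideally background-masked) car pixels. This is a hypothetical sketch, not a full pipeline:

```python
import numpy as np

def dominant_color(pixels, bins_per_channel=4):
    """Return the center of the most-populated coarse RGB bin.

    `pixels` is an (N, 3) uint8 array; ideally only the car's pixels,
    after masking out the background.
    """
    step = 256 // bins_per_channel
    q = (pixels // step).astype(int)              # (N, 3) bin indices in [0, bins)
    flat = (q[:, 0] * bins_per_channel + q[:, 1]) * bins_per_channel + q[:, 2]
    counts = np.bincount(flat, minlength=bins_per_channel ** 3)
    best = counts.argmax()
    r, rest = divmod(best, bins_per_channel ** 2)
    g, b = divmod(rest, bins_per_channel)
    return tuple(int(i * step + step // 2) for i in (r, g, b))

# A fake "car crop": mostly grey body pixels plus some dark window pixels.
grey = np.full((900, 3), 120, dtype=np.uint8)
dark = np.full((100, 3), 20, dtype=np.uint8)
print(dominant_color(np.vstack([grey, dark])))  # (96, 96, 96)
```

Coarse binning keeps small lighting variations of the same paint color in the same bin; a learned model becomes useful when lighting and reflections vary more strongly.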
**Edit:**
Since you need to detect certain patterns on the cars for color classification I would recommend that you use a convolutional neural network to classify these patterns. So your method would now look like this:
1. Use a spatial transformer network or a filtered Gaussian detector to filter out the car
2. Use a convolutional neural network to classify the make and model of the car.
3. Use another neural network that has either a convolutional or deep architecture to classify patterns and solid colors. So the outputs would contain all of the colors that you want and all of the patterns that you want to detect.
Upvotes: 1 <issue_comment>username_2: ### Model of the car
What you want to do is close to one-shot image recognition. You have not 1, but 3-4 examples of each car, but that is still a small amount, especially considering the car looks different from different angles (are you supposed to recognize them from any point of view, including sideways, rear, front, and 45 degrees etc.? maybe you also want to recognize them photographed from the top?).
One interesting article I found is: [Siamese Neural Networks for One-shot Image Recognition](http://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf) by Koch, Zemel, and Salakhutdinov.
I also found that [Caffee](http://caffe.berkeleyvision.org/) supports Siamese networks.
You may want to read other literature about the [One-shot learning](https://en.wikipedia.org/wiki/One-shot_learning).
One trick you can do is to utilize the fact that cars are symmetric, so you can double the number of learning examples by reflecting each image.
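That reflection trick is essentially a one-liner per image with numpy, assuming images are loaded as H×W×3 arrays (the helper name is mine):

```python
import numpy as np

def mirror_augment(images):
    """Double a training set by adding the horizontal mirror of each image.

    Reasonable for cars: a left-facing and a right-facing photo of the
    same model are equally plausible views.
    """
    return images + [np.fliplr(img) for img in images]

# A tiny 1x2-pixel "image": left pixel red, right pixel blue.
img = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)
augmented = mirror_augment([img])
print(len(augmented))               # 2
print(augmented[1][0, 0].tolist())  # [0, 0, 255]: the colors swapped sides
```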
### Color
Determining the color is not as simple as it seems. Your algorithm needs to determine where the car is in your picture, and then determine the color, taking into account the lighting conditions as well as light effects, most notably reflection. For example, consider the following image: [](https://i.stack.imgur.com/JQae1.png).
We see strawberries as red, but there are no red pixels on this picture. The images of strawberries consist on grey pixels.
Maybe you also need a convolutional neural network, or just a neural network, for this task.
Upvotes: 2 |
2017/03/05 | 2,049 | 8,908 | <issue_start>username_0: Nowadays, artificial intelligence seems almost equal to machine learning, especially deep learning. Some have said that deep learning will replace human experts, traditionally very important for feature engineering, in this field. It is said that two breakthroughs underpinned the rise of deep learning: on one hand, neuroscience, and [neuroplasticity](https://en.wikipedia.org/wiki/Neuroplasticity) in particular, tells us that like the human brain, which is highly plastic, artificial networks can be utilized to model almost all functions; on the other hand, the increase in computational power, in particular the introduction of GPU and FPGA, has boosted algorithmic intelligence in a magnificent way, and has been making the models created decades ago immensely powerful and versatile. I'll add that the big data (mostly labeled data) accumulated over the past years is also relevant.
Such developments bring computer vision (and voice recognition) into a new era, but in natural language processing and expert systems, the situation hasn't seemed to have changed very much.
Achieving common sense for the neural networks seems a tall order, but most sentences, conversations and short texts contain inferences that should be drawn from the background world knowledge. Thus knowledge graphing is of great importance to artificial intelligence. Neural networks can be harnessed in building knowledge bases but it seems that neural network models have difficulty utilizing these constructed knowledge bases.
My questions are:
1. Is a knowledge base (for instance, a "knowledge graph", as coined by Google) a promising branch in AI? If so, in what ways can a KB empower machine learning? How can we incorporate discrete latent variables into NLU and NLG?
2. For survival in an age dominated by DL, where is the direction for the knowledge base (or the umbrella term symbolic approach)? Is a [Wolfram](http://www.wolfram.com/)-like dynamic knowledge base the new direction? Or are there other new directions?
Am I missing something fundamental, or some ideas that address these issues?<issue_comment>username_1: First of all, I would like to point out the main differences between knowledge bases and (deep) machine learning, especially when the main focus is on "AI", not "Data Science":
* NNs are like a black box; even if they learn a dataset and gain the power of generalization over the problem domain, you'd never know how they are working. If you scrutinize the details of the developed model, all you see are digits, weights, poor and strong connections, and transform functions. The "feature extraction" step before the training phase literally tells you: "hey human, enough with your complicated world, let's start with zeros and ones". In the case of DL, it is worse! We do not even see what the selected and effective features are. I'm not a DL expert, but as far as I know, DL's black box is darker! Knowledge bases, on the other hand, are written in a human-friendly language. After a knowledge accumulation phase, you can see all the connections between the entities and, more importantly, you can interpret those connections. If you cut a wire in a knowledge base, your model will lose just a bit of its power, and you know exactly what it will lose; for example, disconnecting the "Pluto" node from the "solar system" node will tell your model what <NAME> told us. But in an ML model, this might turn it into a purely useless one: what happens if you manipulate the connection between neuron number 14 and 47 in an NN model used to predict which planets belong to the solar system?!
* ML models are merely an inscription of the data. They do not have the power of inference, and they don't give you one. A knowledge base, on the other hand, is capable of inference from prior knowledge, as you indicated in your question. It has been shown that DL models trained with, say, image classification data can also be applied to a voice detection problem. But this doesn't mean DL models can apply their prior knowledge in the domain of images to the domain of voices.
* You need kilos of data for traditional ML algorithms and tons of data for DL ones. but a single instance of a dataset will create a meaningful knowledge base for you.
There are two main research topics in NLP: machine translation and question answering. Practically, it has been shown that DL works remarkably well on machine translation problems but performs rather poorly on the question answering challenge, especially when the domain of topics covered in the human-machine conversation is broad. Knowledge bases are not a good choice for machine translation but are probably the key to a noble question answering machine. What matters in machine translation is only the translated version of a text (and I don't care how on earth the machine has done it, as long as it is correct), but in the question answering problem I don't need a parrot that repeats the same information I gave it, but an intelligent creature that gives me "apple is eatable" after I tell it "apple is a fruit" and "all fruits are eatable". ML models are used to elicit underlying patterns from the dataset (translation), while knowledge bases are used to extend the domain of knowns. ML models nitpick, KBs explore!
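The "apple is eatable" inference above can be made concrete with a tiny forward-chaining sketch over triples (the facts, the rule encoding, and the `forward_chain` helper are all illustrative here, not part of any real KB system):

```python
# Facts are (subject, predicate, object) triples; the single rule encodes
# "all fruits are eatable" as:  X is_a fruit  =>  X has_property eatable.
facts = {("apple", "is_a", "fruit")}
rules = [(("?x", "is_a", "fruit"), ("?x", "has_property", "eatable"))]

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (ps, pp, po), (_, cp, co) in rules:
            for fs, fp, fo in list(derived):
                # Match the premise, binding "?x" to the fact's subject.
                if ps == "?x" and pp == fp and po == fo:
                    new_fact = (fs, cp, co)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

all_facts = forward_chain(facts, rules)
print(("apple", "has_property", "eatable") in all_facts)  # True
```

This is exactly the kind of extension of the "domain of knowns" that a pure pattern-matching model does not give you.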
Upvotes: 3 <issue_comment>username_2: Although asked over 3 years ago, the question is still interesting and while I agree with the original answer, a lot can be added to it.
First, I'd like to point out that the term "knowledge base" is very ambiguous, and it means different things to different people. For example, there is no sharp distinction between a knowledge base and a neural network. By now NNs can be so large that they essentially encode knowledge, as GPT does. So the distinction becomes a question of interface. And NNs are no longer as opaque, since many new techniques are available to probe the knowledge inside them. Even the more fundamental distinction between symbolic and neural reasoning is becoming less important as hybrid AI combines both in an intertwined fashion. So the historical divisions were largely about technologies and not the essence of AI.
Second, when it comes to NLP, there is a fundamental distinction between language as the surface form of information used for communication and knowledge as deep information, which cannot be accessed directly even with traditional database technologies. That fundamental divide makes the historical differences even less relevant today. NLP is where that interplay between surface and deep forms was at the forefront of AI, but the same is now happening with vision and planning. The question becomes: how do we architect the interface between deep knowledge (however it is represented) and surface communication? At the moment, natural language seems to be the only viable answer. So, for example, there is an effort to develop a natural language interface to replace the plethora of query languages used by systems.
My personal prediction is that natural language will slowly evolve to include a variety of technical languages and multimodal interactions. But it is not clear at all how this will happen.
Upvotes: 2 <issue_comment>username_3: It seems that Automated Knowledge Base Construction has fallen out of favor.
As [<NAME>](https://matt-gardner.github.io/) noted in [NLP Highlights](https://nlphighlights.allennlp.org/083_knowledge_base_construction_with_sebastian_riedel) in 2019:
>
> Um, but I know that Google, for instance, canceled their knowledge base construction project because there wasn’t high enough precision to actually be useful in their product.
>
>
>
The canceled project [Knowledge Vault](https://research.google/pubs/pub45634/) is an Automated Knowledge Base Construction (AKBC) project launched in August 2014.
There are three methods to integrate knowledge into neural networks: 1) pre-trained models like [BERT](https://github.com/google-research/bert) and [ELECTRA](https://github.com/google-research/electra); 2) [retrieval-augmented generative models](https://arxiv.org/abs/2005.11401); 3) fleshing out the triples into natural text, as in [KELM](https://ai.googleblog.com/2021/05/kelm-integrating-knowledge-graphs-with.html).
In a 2020 paper, [REALM: Integrating Retrieval into Language Representation Models](https://ai.googleblog.com/2020/08/realm-integrating-retrieval-into.html), they utilized retrieval rather than a knowledge base to enrich neural networks. And the best systems in the [NeurIPS 2020 EfficientQA competition](https://efficientqa.github.io/assets/report.pdf) all relied on retrieval.
Knowledge bases that are actively being maintained receive a lot of annotation and curation, as stated in [that podcast](https://nlphighlights.allennlp.org/083_knowledge_base_construction_with_sebastian_riedel). If curation and annotation are not sufficient, the knowledge base may not be usable in AI.
Upvotes: 1 |
2017/03/06 | 1,017 | 4,158 | <issue_start>username_0: I need a machine learning algorithm to identify patterns in a dataset (saved in a CSV file) that contains details of the cache performance of a CPU. More specifically, the dataset contains columns like `Readhits`, `Readmiss` or `Writehits`.
The patterns the algorithm identifies should be helpful in the following ways.
1. help the user to increase the performance of the workload next time,
2. help to identify any problems based on the features, or
3. help the user to predict future data values or future events that may occur based on the patterns.
Which ML algorithms can I use?<issue_comment>username_1: First, you must classify each chunk of the CSV file and label it based on the current situation, like A) optimal or B) critical.
Then you cluster your data with an unsupervised learning algorithm, like SOM or k-means, and then you simply classify the clusters you get.
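As a minimal illustration of the clustering step, here is a from-scratch k-means sketch on toy rows standing in for cache counters (the `(ReadHits, ReadMiss)` values, the deterministic initialization, and the two-cluster split are all invented for the example):

```python
def kmeans(points, k, iters=20):
    """Plain k-means on 2-D points; returns (centroids, labels)."""
    # Naive deterministic init: spread the starting centroids over the data.
    centroids = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        labels = [min(range(k),
                      key=lambda c: (p[0] - centroids[c][0]) ** 2 +
                                    (p[1] - centroids[c][1]) ** 2)
                  for p in points]
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = (sum(x for x, _ in members) / len(members),
                                sum(y for _, y in members) / len(members))
    return centroids, labels

# Toy (ReadHits, ReadMiss) rows: three healthy-looking, three thrashing.
rows = [(90, 5), (88, 7), (92, 4), (30, 60), (28, 65), (25, 70)]
_, labels = kmeans(rows, k=2)
print(labels)  # [0, 0, 0, 1, 1, 1]
```

The resulting cluster labels are what you would then inspect or classify by hand.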
Upvotes: 0 <issue_comment>username_2: >
> I need a machine learning algorithm to identify any patterns in a CSV file
>
>
>
You want to do [unsupervised learning](https://en.wikipedia.org/wiki/Unsupervised_learning). The Wikipedia definition of the same is:
>
> Unsupervised machine learning is the machine learning task of inferring a function to describe hidden structure from "unlabeled" data (a classification or categorization is not included in the observations).
>
>
>
I recommend you go through the list of unsupervised learning algorithms [here](https://en.wikipedia.org/wiki/Unsupervised_learning) and use the one that best fits your needs.
If you're starting out, then I would recommend starting with learning the [K-means clustering algorithm](https://en.wikipedia.org/wiki/K-means_clustering).
Upvotes: 0 <issue_comment>username_3: What you're basically looking for is [unsupervised learning](https://en.wikipedia.org/wiki/Unsupervised_learning) (UL). There are a lot of UL techniques around, but I'm not sure you'll find one that does exactly what you want with no user input at all. Still, if you skim the literature on these approaches, you may well find something useful.
One option is [DBSCAN](https://en.wikipedia.org/wiki/DBSCAN), a very popular clustering algorithm that does not require the user to input an initial target number of clusters (something that most clustering algorithms do require). But even then, you still have to give the algorithm values for `epsilon` (a distance used in calculating the clusters) and `minPts` (the minimum number of points required to constitute a "dense" region).
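To make `epsilon` and `minPts` concrete, here is a toy from-scratch sketch of the core DBSCAN idea (a simplified version of the algorithm, with invented data; in practice you would use a library implementation):

```python
def dbscan(points, eps, min_pts):
    """Toy DBSCAN on 2-D points: returns one label per point; -1 is noise."""
    def neighbors(i):
        # All points within distance eps of point i (including i itself).
        return [j for j, q in enumerate(points)
                if (points[i][0] - q[0]) ** 2 +
                   (points[i][1] - q[1]) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1                 # provisionally noise
            continue
        labels[i] = cluster                # i is a core point: new cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster        # noise reclaimed as border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:       # j is itself core: keep expanding
                queue.extend(more)
        cluster += 1
    return labels

pts = [(0, 0), (0, 1), (1, 0), (1, 1),    # one dense region
       (10, 10)]                          # an isolated outlier
labels = dbscan(pts, eps=1.5, min_pts=3)
print(labels)  # [0, 0, 0, 0, -1]
```

Note how the outlier ends up labeled `-1` (noise) without the user ever specifying a number of clusters.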
You might also look at [self-organizing maps](https://en.wikipedia.org/wiki/Self-organizing_map), an approach to unsupervised learning for neural networks.
Some other search terms that might lead you in a useful direction include "data mining" and "knowledge discovery in databases" (KDD).
Upvotes: 2 <issue_comment>username_4: Restricted Boltzmann Machines (RBMs) can identify patterns in a CSV file without the user specifying any conditions. They are well fitted for generating "distributed and graded representations" of data; as reference [1] puts it, learning the "complex set of features composing real high-dimensional data is crucial for achieving high performance in machine–learning tasks." [1]
Because the CSV format is specifically designed to represent instances in rows and a static set of attributes in columns, the setup of the training is straightforward. If the goal is to identify temporal patterns, a windowing strategy may be required.
K-RBMs are a merger of k-means approaches with RBMs. The choice of approach has much to do with what kinds of patterns are sought. The term pattern can apply to simple trends in numbers over time, common patterns found in textual columns, or complex patterns inferred from multiple columns.
**References**
[1] [*Emergence of Compositional Representations in Restricted Boltzmann Machines* (<NAME>, <NAME>, 2017)](https://arxiv.org/pdf/1611.06759.pdf)
[2] [*Learning Multiple Non-Linear Sub-Spaces using K-RBMs*, <NAME>, <NAME> & <NAME>](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41320.pdf)
Upvotes: 0 |
2017/03/06 | 799 | 3,790 | <issue_start>From what I understand, there are 2 stages in deep learning: the first is training and the second is inference. The first is often done on GPUs because of their massive parallelism capabilities, among other things. The second, inference, can also be done on GPUs, but they're not used that much for it, because of power usage, and because the data presented during inference is much smaller, so the full capabilities of GPUs won't be much needed. Instead, FPGAs and CPUs are often used for that.
My understanding also is that a complete DL system will have both a training system and an inference system.
My question is: are both systems required on the same application?
Let's assume an autonomous car, or an application where visual and image recognition is done: will it have both a training system to be trained and an inference system to execute? Or will it have only the inference system and communicate with a distant system that is already trained and has built a database?
Also, if the application has both systems, will it have a big enough memory to store the training data, given that it can be a small system and memory is ultimately limited?<issue_comment>username_1: Deep learning seems mostly to be a buzzword for what is essentially a neural network. You train with a data set to recognize a pattern, then input new data, which is then classified by the trained network.
So you train a neural network with 10 different kinds of animals using thousands of pictures. Then you show the network say 100 new images and have the network "guess" what each animal is.
The point here is that training a neural network requires code for feedback that an application using the trained network would not need. So an application just using the trained network would be a bit more streamlined than an application which could accept additional training data.
What is missing is the ability for machine learning to form hierarchical or otherwise interrelated rules from the trained network so that the trained network is closer to being able to rationally explain why the classification works. Going back to the 10 animal trained network, without formulation of rules during training, there is no practical way for the network to reveal why any of the animals was classified the way it was.
Upvotes: 0 <issue_comment>username_2: **Training and inference are usually completed on two separate systems**. You are right in knowing that training of deep neural networks is usually done on GPUs and that inference is usually done on CPUs. However, training and inference are almost always done on two separate systems.
The main workflow for many data scientists today is as follows:
1. Create and establish all hyper-parameters for a model such as a deep neural network
2. Train the deep neural network using a GPU
3. Save the weights that training on the GPU established so that the model can be deployed.
4. Code the model in a production application with the optimal weights found in training.
So, as you can see from this workflow, training and inference are done in two completely separate phases.
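A minimal sketch of that train/save/deploy workflow, using a tiny hand-rolled logistic model and JSON serialization in place of a real framework's weight files (the model, data, and all names here are invented for the illustration):

```python
import json
import math

# --- Training phase (this part would normally run on the GPU machine) ---
def train(samples, targets, lr=0.5, epochs=500):
    """Fit a tiny 2-input logistic model with plain SGD on log-loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            g = p - t                                # gradient of log-loss
            w = [w[0] - lr * g * x[0], w[1] - lr * g * x[1]]
            b -= lr * g
    return {"w": w, "b": b}

data = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                               # a toy AND-like task
blob = json.dumps(train(data, targets))              # step 3: save weights

# --- Inference phase (a separate, CPU-only process loads the weights) ---
m = json.loads(blob)                                 # step 4: deploy

def predict(x):
    z = m["w"][0] * x[0] + m["w"][1] * x[1] + m["b"]
    return 1 / (1 + math.exp(-z)) > 0.5

preds = [predict(x) for x in data]
print(preds)  # [False, False, False, True]
```

Only the serialized weights cross the boundary between the two phases; the feedback/gradient code never ships with the production application.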
However, in some specific cases, training and inference are done on the same system. For example, if you are using a Deep Neural Network to play video games, then you may have the neural network train and infer on the same system. This would lead to more efficiency because it would allow the model to continuously learn.
To answer your question on memory, the only applications where inference and training are done in the same application have a lot of memory available (think dual GPU dual CPU 128GB of RAM workstations), whereas applications that have a limited amount of memory only use inference such as embedded applications.
Upvotes: 2 [selected_answer] |
2017/03/07 | 722 | 2,587 | <issue_start>I'm a newbie in machine learning, so excuse me in advance. I have an idea to make an NN that can estimate the visual pleasantness of an arbitrary image. Like, you have a bunch of images that you like, you train an NN on them, then you show some random picture to the NN and it estimates whether you'll like it or not. I wonder if there is any previous effort made in this direction.<issue_comment>username_1: That sounds like a pretty straightforward application of an NN classifier to me. I don't know if anybody has done that specific thing or not, but I don't see any particular reason to think it wouldn't work. My advice to you is to just jump in and do it.
Upvotes: 1 <issue_comment>username_2: I don't think anyone has done it yet, but you could try.
One way you could implement it is to start with a reasonably efficient CNN trained on the things you like; your program should then ask the user whether they like some images and, based on the answers they give, fine-tune the original network. With the freshly trained network you should obtain good results.
Upvotes: 1 <issue_comment>username_3: I see one main concern with the problem you describe, and that is the subjectiveness of the term "like": what I like is not the same as what you like; maybe I like circular shapes more and you like rectangular ones best. The main problem is that, with such a subjective label, it is difficult to create a global model.
Upvotes: 2 <issue_comment>username_4: This question reminds me of a project I saw that used Deep Learning to rate selfies on twitter. But a quick google search shows that there are plenty of projects that are much closer to what you are interested in:
[Rating Image Aesthetics using Deep Learning](http://infolab.stanford.edu/~wangz/project/imsearch/Aesthetics/TMM15/lu.pdf)
[Predicting Image Aesthetics with Deep Learning](http://link.springer.com/chapter/10.1007/978-3-319-48680-2_11)
[Deep Understanding of Image Aesthetics](http://www.ics.uci.edu/~skong2/aesthetics.html) (with data and model linked)
[Understanding Aesthetics with Deep Learning](https://devblogs.nvidia.com/parallelforall/understanding-aesthetics-deep-learning/)
and [probably dozens more](https://www.google.de/search?client=safari&rls=en&q=esthetics%20deep%20learning&ie=UTF-8&oe=UTF-8&gfe_rd=cr&ei=XS3mWPTyGPCP8Qfvvq_oDA#q=aesthetics%20deep%20learning&*).
Of course if you are interested in predicting subjective pleasantness the above is only a beginning. In that case you may also take a look at [recommender systems](https://en.wikipedia.org/wiki/Recommender_system).
Upvotes: 2 |
2017/03/07 | 1,252 | 5,202 | <issue_start>username_0: [](https://i.stack.imgur.com/ejFBN.png)
Feedforward or multilayered neural networks, like the one in the image above, are usually characterized by the fact that all weighted connections can be represented as a continuous real number. Furthermore, each node in a layer is connected to every other node in the previous and successive layers.
*Are there any other information processing models other than FFNNs or MLPs?* For example, is there any system in which the topology of a neural network is variable? Or a system in which the connections between nodes are not real numbers?<issue_comment>username_1: To answer the title, there are many other machine learning models, but neural networks work particularly well for some difficult problems (image classification, speech recognition) which is one of the reasons they have gained popularity.
Two particularly simple models are the decision tree and the perceptron. These are rather simple models, but they both have redeeming qualities. A decision tree is useful as it provides a model that is easily understood, while a perceptron is fairly quick and works well for linearly separable data. Another, more advanced, model is the Support Vector Machine.
>
> For example, is there any system in which the topology of a neural network is variable?
>
>
>
Yes, there are many such systems where the topology of the neural network is dynamic throughout training. An entire class of methods labeled TWEANNs is designed to evolve the topology of the networks; one such algorithm is NeuroEvolution of Augmenting Topologies, NEAT (and its descendants rtNEAT, HyperNEAT, ...).
Upvotes: 0 <issue_comment>username_2: Neural network equivalents that are not (vanilla) feed-forward neural nets
--------------------------------------------------------------------------
Neural net structures such as Recurrent Neural Nets (RNNs) and Convolutional Neural Nets (CNNs), and the different architectures within those, are good examples.
Examples of different architectures within RNNs would be Long Short-Term Memory (LSTM) or the Gated Recurrent Unit (GRU). Both of these are well described in Colah's blog post on [Understanding LSTMs](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)
What are some alternative information processing systems besides neural networks?
-----------------------------------------------------------------------------
There are sooo many structures. Off the top of my head: (Restricted) Boltzmann machines, autoencoders, Monte Carlo methods and radial basis networks, to name a *few*.
You can check out Goodfellow's [Deep learning](http://www.deeplearningbook.org/)-book, which is free online, and get the gist of all the structures I mentioned here (most parts require a bit of math knowledge, but he also writes about them quite intuitively).
For Recurrent Neural Nets I recommend Colah's blog post on [Understanding LSTMs](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)
Is there any system in which the topology of a neural network is variable?
--------------------------------------------------------------------------
Depends on what you mean by the *topology* of a neural network:
I think the common meaning of topology, when talking about neural networks, is the way in which neurons are connected to form a network, varying in structure as it runs and learns. If this is what you mean, then the answer, in short, is yes. In multiple ways, actually. On the other hand, if you mean it in the mathematical sense, this answer would become a book that I wouldn't feel comfortable writing. So I'll assume you mean the first.
We often do "regularization", both on vanilla NNs and other structures. One of these regularization techniques is called [dropout](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf), which randomly removes connections from the network as it is training (to prevent something called [overfitting](https://en.m.wikipedia.org/wiki/Overfitting), which I'm not gonna go into in this post).
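The effect of dropout can be sketched in a few lines (this is the common "inverted" dropout variant; the drop probability and the activation values are just for illustration):

```python
import random

def dropout(activations, p, training):
    """Zero each unit with probability p during training, scaling the
    survivors by 1/(1-p) so the expected activation is unchanged.
    At inference time this is a no-op."""
    if not training:
        return list(activations)
    return [0.0 if random.random() < p else a / (1 - p) for a in activations]

acts = [1.0] * 8
print(dropout(acts, p=0.5, training=True))   # each unit is now 0.0 or 2.0
print(dropout(acts, p=0.5, training=False))  # unchanged at inference time
```

Randomly knocking out units like this is exactly the "varying structure during training" mentioned above.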
Another example, of a different kind, would be the Recurrent Neural Network. RNNs deal with time series, and are equipped to handle time series of different lengths (thus, "varying structure").
Do there exist neural net systems where complex numbers are used?
----------------------------------------------------------------
Yes, there are many papers on complex number machine learning structures. A quick google should give you loads of results. For example: DeepMind has a paper on [Associative Long Short-Term Memory](https://arxiv.org/pdf/1602.03032.pdf) which explores the use of complex values for an "associative memory".
Links:
------
Goodfellow's Deep Learning-book: [deeplearningbook.org](http://www.deeplearningbook.org/)
Colah's blogpost on RNN's: [colah.github.io](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)
Paper on DeepMinds Associative LSTM: [arxiv:1602.03032](https://arxiv.org/pdf/1602.03032.pdf)
Upvotes: 2 <issue_comment>username_3: A very popular choice is [Hidden Markov Models](https://en.wikipedia.org/wiki/Hidden_Markov_model).
Upvotes: -1 |
2017/03/08 | 1,382 | 6,328 | <issue_start>username_0: I've been reading a lot about DL. I can understand to an extent how it works, in theory at least, and how it's technically different from conventional ML. But what I'm looking for is more of a "conceptual" meaning.
Let's say you're designing a self-learning system, why would you choose DL? What are the main performance advantages that DL offers? Is it the accuracy, speed, power efficiency, a mix of all of them?<issue_comment>username_1: Deep learning allows you to solve complex problems without necessarily being able to specify the important "features" or key input variables for the model in advance.
To give an example, a problem that may be easily tackled **without deep learning** could be predicting the frequency and claim amounts of insurance vehicle claims, given historical claim data that may include various attributes of the policy holder and their vehicle. In this example, the "features" to be specified in the model are the known attributes of policy holder and vehicle. The model will then attempt to utilise these features to make predictions.
On the other hand, facial recognition is a problem more suited to **deep learning** algorithms. This is because it is difficult to manually identify what combinations of pixels may be important features to include in a conventional machine learning model. A multi-layered neural network however has the potential to identify/create the important features itself, which may include for example eyes, nose and mouth, and then utilise these features to recognise faces and other objects.
Upvotes: 3 <issue_comment>username_2: Deep Learning these days means a lot of things to a lot of people; it's quickly becoming a buzzword. But so far it still retains two very important conceptual properties:
**Does away with most feature engineering work.** This was mentioned in the answer above, but this is very important. It really saves a lot of work.
**Allows you to make maximal use of unlabelled data.** This is, strictly speaking, available to other approaches, not just Deep Learning, but it's in DL that this really took off. And typically labelled data is very hard to get, while unlabelled data is all over the place. Things like denoising autoencoders and Restricted Boltzmann machines are just wonderful.
Upvotes: 3 <issue_comment>username_3: Deep learning allows you to ask the program a question without yourself knowing the answer. Its main benefit is its finite ability and flexible nature.
The problem with procedural programming to solve problems is that you have to know what the computer needs to do in order to solve the problem.
What deep learning does is remove the requirement for the programmer to know how to solve the problem; they only need to know what the computer needs to know.
This is the entire premise of neural networks. The programmer writes the program around the data points that must be known in order to solve a particular problem.
The computer is given an input, and it comes up with an answer.
If its answer is wrong, it needs to make the answer it gave less likely and the right answer more likely.
The goal is to get the computer to always get the right answer. If the computer always gets the wrong answer, then the neural network is too small.
What deep learning is, is a neural network that is deep.
To answer this, you need to know what a neural network is.
A neural network is based on a neuron:
* A finite number of "boolean" inputs (more than one)
* A weight attached to each input to define how important it is; often thought of as a float between -1 and 1, but it's just a percentage of how likely each input is to change the answer.
* One boolean output
A neuron can be a class or a function; the implementation really doesn't matter. The weight of each input changes as more answers are asked and responses verified.
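The weight-update idea described above is essentially the classic perceptron rule; here is a minimal sketch (the learning rate, epoch count, and the OR-gate task are arbitrary choices for the example):

```python
def train_perceptron(samples, targets, lr=0.1, epochs=20):
    """Nudge the weights only when the neuron's boolean answer is wrong."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = t - y                     # 0 when right, +1/-1 when wrong
            # Make the wrong answer less likely, the right one more likely.
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 1, 1, 1]                            # boolean OR
w, b = train_perceptron(X, T)
preds = [1 if w[0] * x0 + w[1] * x1 + b > 0 else 0 for x0, x1 in X]
print(preds)  # [0, 1, 1, 1]
```

A single neuron like this can only learn linearly separable answers, which is exactly why depth matters for harder problems.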
A neural network has one layer when there is one row of neurons between the input and the output, and two layers when a few neurons make decisions on the inputs and a final neuron (or multiple neurons) makes decisions based on those neurons.
A neural network is called deep when there are at least four layers of neurons? (do some research don't take my word for it `^_^`)
The disadvantage of deep learning is that its ability is finite.
There is no way for a deep neural network by itself to get smarter than it is programmed to be.
It has an intelligence curve similar to root time if it isn't improved somehow.
This leads to the other problem with neural networks. While the programmer has no need to know how decisions are made by the computer, they still need to know what questions or nodes need to be added.
The reason this is a problem is that if the responsible nodes aren't there, the program will be wrong in strange cases and have no way of correcting this on its own. The larger the network, the harder it is to solve these kinds of problems.
This will lead to an inevitable solution: have the computer self-improve by some type of generative algorithm.
This has its own breadth of problems, as if not built properly it could grow into something unintended, which wastes time and money if it fails quickly, and could be potentially dangerous if it appears to work and doesn't.
The answer to AI will be a combination of deep neural networks, some generative-type programming, and some new ideas and innovations.
Upvotes: 2 <issue_comment>username_4: Shallow layered networks are less capable of recursive or extended abstraction necessary for the kinds of generalization needed to handle complex tasks common in real world applications.
It is the same problem as was discovered nearly a century ago in the analog world. One can try to reduce the components in the old tube radio design to lower its cost, but tuning and amplification had a minimal number of independent operations required. After decades, the basic functions are integrated into one wafer of silicon, yet no single transistor can accomplish the entire task. The more complex the externality controlled, the more sophisticated the control system, whether or not it is a learning system.
In the basic NN architecture most familiar to those in machine learning, in general, width is driven by degrees of freedom in the input and output regions. Depth is driven by the need to approximate nonlinear control complexities.
Upvotes: 1 |
2017/03/10 | 2,902 | 9,715 | <issue_start>username_0: What's the term (if such exists) for merging with AI (e.g. via neural lace) and becoming so diluted (e.g. 1:10000) that it effectively results in a death of the original self?
It's not quite "digital ascension", because that way it would still be you. What I'm thinking is that the resulting AI, with 1 part in 10000 being you, is not you anymore. The AI might have some of your values or memories or whatever, but it's not you, and you don't exist separately from it to be called you. Basically, you as you were are dead; you died by dissolving into the AI.
I would like to read up on this subject, but can't find anything.<issue_comment>username_1: How did it become diluted? If by mutations, then it has "evolved"; if by neural plasticity, then it "learned"; if by Alzheimer's, then it got "sick".
By analogy: if you took too much of a psychoactive drug and grew up learning a lot of stuff and adjusting your moral compass drastically, then got Alzheimer's and now you don't know who you are, you are not the same person any more, but you didn't die. So you are still you.
Only zombies, who physically die first and then get re-animated, are pretty much dead; though that also depends on the movie director.
Upvotes: 1 <issue_comment>username_2: I find the concept of a [Turing machine](https://en.wikipedia.org/wiki/Turing_machine) useful. In one dimension, everything is a string. All of the parts that are "not you" are merely a substrate, a medium for the program your\_mind runs on top of. The you, your identity, the "metaphysical" component we think of as the mind, is a result of running the algorithm that is your\_mind on the bioware of your body, or the hardware (technically, "[wetware](https://en.wikipedia.org/wiki/Wetware_computer)"). So what we're really talking about is the software, and in that light I might use:
* **[Translation](https://en.wikipedia.org/wiki/Translator_(computing))**
because the software is being translated to execute in a new environment, or
* **[Migration](https://en.wikipedia.org/wiki/System_migration)**
as in moving software from one system to another.
<NAME> wrote a philosophical narrative, not technically sci-fi, called [The Transmigration of Timothy Archer](https://en.wikipedia.org/wiki/The_Transmigration_of_Timothy_Archer) which is about identity moving between bodies. In a rare departure from his usual work about AI and the effects of a technological society on the human spirit, this book looks at the question of identity in the context of the soul, which opens up all kinds of philosophical questions surrounding the type of technology we're speculating on, particularly in relation to the self.
I value artistic insight, and <NAME>. is considered quite prescient, so perhaps
* **[Transmigration](https://en.oxforddictionaries.com/definition/us/transmigrate)**
is most appropriate, as it carries both metaphysical and information technology meanings.
---
Re death by dissolution, it's worthwhile to look at the [etymology of dissolve](https://www.etymonline.com/word/dissolve):
>
> late 14c. (transitive and intransitive) "to break up" (of material substances), from Latin *dissolvere* "to loosen up, break apart," from *dis-* "apart" (see dis-) + *solvere* "to loosen, untie," from PIE \*se-lu-, from reflexive pronoun \*s(w)e- (see idiom) + root \*leu- "to loosen, divide, cut apart." Meaning "to disband" (an assembly) is early 15c. Related: Dissolved; dissolving.
>
>
>
I think you're on the right track with **[dilution](https://www.etymonline.com/word/dilute?ref=etymonline_crossreference)**, certainly per the modern usage, but it may be possible to get more precise.
It's not about breaking up, or washing away (except metaphorically); rather, it's about [minimization](https://www.etymonline.com/word/minimum?ref=etymonline_crossreference) as in the **[diminishment](https://www.etymonline.com/word/diminish)** of the original software kernel (the self) in relation to an expanding algorithm.
>
> early 15c., from merger of two obsolete verbs, *diminue* and *minish*. *Diminue* is from Old French *diminuer* "make small," from Latin *[diminuere](http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.04.0059%3Aentry%3Ddiminuo)* "break into small pieces," variant of *[deminuere](http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.04.0060%3Aentry%3Ddeminuo)* "lessen, diminish," from *[de](http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.04.0059%3Aentry%3Dde2)*- "completely" + *[minuere](http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.04.0059%3Aentry%3Dminuo)* "make small" (from PIE root [\*mei- (2)](https://www.etymonline.com/word/*mei-) "small").
>
>
>
The Old French *diminuer* is apt, as is the Latin *deminuere* and *[deminuo](http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.04.0060%3Aentry%3Ddeminutio)*, which also carries a meaning of "civil death" and "[abatement](https://www.etymonline.com/word/abatement)". Abatement carries a meaning of [mitigation](http://www.dictionary.com/browse/mitigate), which can be defined fundamentally as "lessening of effect," in this case of the kernel of the original self in relation to the new aggregate.
It may be useful to think of it as a ratio 1/ℵ, with the self as the 1. It is important to note that the ratio is not literal: each number represents an aggregation of functions we call [programs](https://en.wikipedia.org/wiki/Computer_program).
That's how I'd think of it mathematically, but metaphysically, [Nat](https://ai.stackexchange.com/a/3519/1671) nails it by referencing [Jung](https://en.wikipedia.org/wiki/Analytical_psychology) and the [death of the ego](https://en.wikipedia.org/wiki/Ego_death).
[Ego comes from](https://www.etymonline.com/word/ego) the [Latin noun for the self](http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.04.0060%3Aentry%3Dego) (I, me) which can also be plural (we, us). It's also fun to note that the Latin verb of being is [sum](http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.04.0060%3Aentry%3Dsum), because in English, sum has a mathematical meaning of an aggregate considered as a whole.
Because we're talking about what it means to be a person, but we also want to connote a function (a process of relative minimization), I am thinking:
* **[Depersonalization](https://en.oxforddictionaries.com/definition/us/depersonalization)**
>
> (noun) the action of divesting someone or something of human characteristics or individuality.
> SOURCE: Google
>
>
>
The fun part is that there's a legal definition of [person](https://www.etymonline.com/word/person):
>
> In legal use, "corporate body or corporation having legal rights," 15c., short for person aggregate (c. 1400), person corporate (mid-15c.)
>
>
>
Which I think applies (loosely) to the aggregation of function that comprises the program we call the self, and the greater aggregation of functions that form a new self.
Because here we're talking about the loss of the original individual, depersonalization as opposed to repersonalization as a corporate entity.
Alternately, I might propose:
* **Deindividualization**
with the definition "a loss in individual identity within a group" (also known as deindividuation).
*(<NAME> wrote about this in [Accelerando](https://en.wikipedia.org/wiki/Accelerando), where a character chooses to be subsumed by a group intelligence as the only means of escaping a swarm of autonomous lawsuits;)*
Upvotes: 3 [selected_answer]<issue_comment>username_3: "Ego death"
-----------
>
> **Ego death** is a "complete loss of subjective self-identity."
>
>
> -["Ego death"](https://en.wikipedia.org/wiki/Ego_death), Wikipedia
>
>
>
The defining aspect of an ego is an exclusive sense of self. If an ego-dominated individual joined a greater mind and dissolved within it without really dying, then the only thing it has really lost is its ego.
### Real world analogy: political states' identities dissolving into larger states
For a real-world analogy, the United States used to be composed of individual states that had a much stronger sense of self-identity; its different states were sort of like different countries.
Over time, this distinction's been fading away, and most Americans tend to have little concern for by-state distinctions. For example, someone from Canada is a foreigner even if the Canadian border is close by, but someone from the other side of the US is still basically just another American.
Today, this process repeats itself as individual nations merge under the United Nations. Some of the recent political battles have been fought between globalists who embrace this great unification and nationalists who wish to maintain clear, distinct state identities and interests.
### It's not sudden
Ego death won't typically be sudden. Rather, the smaller mind would join a larger one by creating new connections over time. For quite a while, there will still be a meaningful distinction between the ego and the rest of the collective mind.
As time goes on, new connections will keep being created and removed, just like in a normal human brain. The relative lack of connections between the individual's mind and the larger collective mind will tend to fade away.
Eventually, the distinction between the mind and the larger collective becomes essentially meaningless.
Once the mind no longer cares, or is no longer able, to distinguish itself from the larger collective, its ego/sense-of-self is "dead", or, as you've said, dissolved into the AI.
Upvotes: 1 |
2017/03/10 | 1,321 | 5,878 | <issue_start>username_0: Human beings are more productive in groups than individually, possibly due to the fact that there is a limit to how much one human brain can improve itself in terms of speed of computation and areas of expertise.
By contrast, if a machine with general-purpose artificial intelligence is created and then assigned a task, would it be possible that the machine will be able to better accomplish its task by continuously improving its own computational power and mastery of various skills, as opposed to collaborating with other agents (whether copies of itself, other AI's, or even humans)?
In other words, would an AGI ever need to collaborate, or would it always be able to achieve its goals alone?<issue_comment>username_1: Communications is expensive. It requires a communication channel, a protocol and of course time. Communications is also limited to the expressivity defined by the protocol. Note also that agents may compete over resources, or may have contradictory goals, so in some situations they may try to mislead each other.
On the other hand - computational power and memory are limited, so multiple agents may solve computation-intensive or memory-intensive problems together better/faster than each single agent can. Different agents may have different sensors, and mobile agents may have information about different parts of their realm, so by sharing knowledge they may have more complete information and make better decisions. Goals may also be time-bounded, and rewards may be time-dependent. Sometimes working together means greater rewards.
In summary, there may be situations where collaboration is beneficial, and there may be some situations where collaboration is essential to achieve one's goal or to achieve a common goal.
Upvotes: 0 <issue_comment>username_2: In my answer, I have often switched between AGI and ASI for reference. This is fine as an AGI will reach ASI as it is optimizing itself and learning.
I think it is not only important but necessary that AGI and ASI are of a collaborative nature. <NAME>, in his book **Superintelligence: Paths, Dangers, Strategies**, in Chapter 10, described three ways in which an ASI might function:
>
> **As an oracle**, which answers nearly any question posed to it with accuracy, including complex questions that humans cannot easily answer—i.e. How can I manufacture a more efficient car engine? Google is a primitive type of oracle.
>
> **As a genie**, which executes any high-level command it’s given—Use a molecular assembler to build a new and more efficient kind of car engine—and then awaits its next command.
>
> **As a sovereign**, which is assigned a broad and open-ended pursuit and allowed to operate in the world freely, making its own decisions about how best to proceed—Invent a faster, cheaper, and safer way than cars for humans to privately transport themselves.
>
>
>
This holds when and if AGI and ASI behave in a controlled manner and their outputs are as expected. By controlled, I mean they don't start seeing humans as a threat and begin eliminating the human race. More on this [here](http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html). As you can see, all of the above ASI are of a collaborative nature. Either they are collaborating with humans, or they need to collaborate with other systems.
Now to your answer:
First, a collaborative nature is of great use in terms of *efficiency and performance*. This is the reason why **distributed systems** are being made; we even have distributed OSes now. Also, the modular approach in coding/development and the huge success of the *object-oriented model* are proof of the advantages of collaboration among different entities.
If you think about it, even the AGI is collaborating and using resources from other places in some way. As your AGI is learning, it is gaining information from the internet. It reads the information, tries to structure it accordingly (this will depend on its neural schema), and *creates knowledge* (or something valuable) for itself. It is collaborating, using network protocols, with the outer world (other systems). If it doesn't collaborate, then the firewall of the system might not allow it to use the service. Different services on the internet require different sets of protocols to be followed. So, if the AGI wants to communicate for information, it needs to follow those protocols. This way, the AGI will learn to collaborate with different entities even at an early stage.
From the AGI's point of view, communicating with a web server and communicating with another AGI machine are very similar. As we humans don't store all information in our brains, an AGI similarly won't find it efficient to store all information within itself. Not all information is needed all the time. The **memory hierarchy** is proof of that. Even if the AGI is made to store all the information within, with time it will figure out how inefficient that is and will re-program itself to hold only very vital information, using the internet for less frequently used information.
I would like to add one more thing to this. Let us begin with the human analogy first. We, as humans, collaborate with other humans. But we also collaborate among our different body parts. What about the collaboration within us: the collaboration (or coordination) between our legs, hands, body, mind, etc.? This leads us to the question of who we are as a person: our whole body, or just our brain? If I remove my hand and replace it with an organic implant, would I still be me?
Similarly, what is the AGI? Is it the whole structure, or just the code? Are the RAM, ROM, and hard disks also part of the AGI? If you think the hardware is not part of the AGI, then your AGI is coordinating with these devices too, using certain protocols. At some level, this is collaboration too.
Upvotes: 2 [selected_answer] |
2017/03/10 | 1,002 | 4,132 | <issue_start>username_0: According to [Wikipedia](https://en.wikipedia.org/wiki/AI_winter) (citations omitted):
>
> In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The term was coined by analogy to the idea of a nuclear winter. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later.
>
>
>
The Wikipedia page discusses a bit about the *causes* of AI winters. I'm curious, however, whether it is possible to *stop* an AI Winter from occurring. I don't really like the misallocation of resources that are caused by over-investment followed by under-investment.
One of the causes of the AI winter listed on that Wikipedia page is "hype":
>
> The AI winters can be partly understood as a sequence of over-inflated expectations and subsequent crash seen in stock-markets and exemplified by the railway mania and dotcom bubble. In a common pattern in the development of new technology (known as hype cycle), an event, typically a technological breakthrough, creates publicity which feeds on itself to create a "peak of inflated expectations" followed by a "trough of disillusionment". Since scientific and technological progress can't keep pace with the publicity-fueled increase in expectations among investors and other stakeholders, a crash must follow. AI technology seems to be no exception to this rule.
>
>
>
And it seems that this paragraph indicates that *any* new technology will be stuck in this pattern of "inflated expectations" followed by disillusionment.
So, are AI winters inevitable? Is it inevitable that AI technologies will always be overhyped in the future and that severe "corrections" will always occur? Or can there a way to manage this hype cycle to stop severe increases/decreases in funding?<issue_comment>username_1: I think that by strict definition of the word inevitable, **no, future AI Winter events are not inevitable.** However likely or unlikely it may be, it is possible to control research spending and to create a more stable plan of funding research in Artificial Intelligence. Because it is *possible* to avoid an AI Winter event, an event is not *inevitable*.
Upvotes: 2 [selected_answer]<issue_comment>username_2: The hype cycles are the rule these days, and AI is always a wonderful topic for unbelievable and crazy hype. I mean simple thing like speech recognition is still not working properly, but everybody is discussing how to survive the revolt of the terminator machines. So unless we can tune the hype down, the next AI winter is inevitable.
Upvotes: 0 <issue_comment>username_3: Yes - there will always be Gartner Hype Cycles which leads to "AI Winters" - that is just a fact of human nature in large groups. There is no better evidence for "Hype Cycle" mentality than the stock market in how it reacts both high and low to whatever the hot item is. AI is much more susceptible to this given that AI tends to touch people in very real ways - will this technology become smarter than me? Will it replace me? Will it take over? Are we building new life? Who controls this? which for those that actually build or know something about these techniques and concepts would say that we are a very long way off it is even possible in the first place. To build systems that can at best maybe mimic the intelligence of a two year old we would consider it a major success.
Upvotes: 1 <issue_comment>username_4: I would say an AI Winter has already happened in the 2000's when specialist systems were adopted in detriment of cognitive systems, neural networks for instance were poorly understood back then and because of that they got meager investments from large companies, Google was a notorious exception. Only some 3 or so years ago, with things like IBM Watson and driverless cars this field started to draw significant attention. And now I doubt it will be ignored again, the research has taken off from advanced PhD theses and becomes more and more widespread.
Upvotes: 0 |
2017/03/11 | 1,080 | 4,595 | <issue_start>username_0: Does artificial intelligence write its own code and then execute it? If so, does it create separate functions for each purpose?<issue_comment>username_1: Computers are able to write their own code *without* needing any intelligence -- see the Wikipedia entries for [self-modifying code](https://en.wikipedia.org/wiki/Self-modifying_code) and [metaprogramming](https://en.wikipedia.org/wiki/Metaprogramming). You do have to write the instructions for how the computer should program itself, and there's a stigma against doing this because (a) it makes it hard to reason about what your program is doing when it's changing its source code, and (b) the solution is usually slower than just hardcoding in what you want the program to do in the first place. But it *is* possible, and programmers have done it (usually for maintainability or aesthetic reasons).
Some AI researchers are interested in [Genetic Programming](https://en.wikipedia.org/wiki/Genetic_programming) though. Genetic Programming is a subset of [evolutionary algorithms](https://en.wikipedia.org/wiki/Evolutionary_algorithm) and Wikipedia provides a good summary of how they usually work:
>
> Step One: Generate the initial population of individuals randomly. (First generation)
>
>
> Step Two: Evaluate the fitness of each individual in that population
>
>
> Step Three: Repeat the following regenerational steps until termination (time limit, sufficient fitness achieved, etc.):
>
>
> * Select the best-fit individuals for reproduction. (Parents)
> * Breed new individuals through crossover and mutation operations to give birth to offspring.
> * Evaluate the individual fitness of new individuals.
> * Replace least-fit population with new individuals.
>
>
>
The "individuals" in this case are randomly-generated computer programs, which are then tested against a fitness function.
The Wikipedia page for Genetic Programming claimed that these programs are usually represented by tree structures, though there has been some experiments in using non-tree structures as well.
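The regenerational steps quoted above can be illustrated with a deliberately tiny, hypothetical sketch. This is a mutation-only toy (no crossover, for brevity) in which the "individuals" are small expression trees evolved toward the target function f(x) = x*x + x; every name and parameter here is invented for the example, not taken from any particular library.

```python
import random

# Toy genetic-programming loop (hypothetical, mutation-only):
# individuals are expression trees over a tiny function/terminal set,
# evolved to approximate the target function f(x) = x*x + x.
FUNCS = [('add', lambda a, b: a + b), ('mul', lambda a, b: a * b)]
TERMS = ['x', 1.0]

def random_tree(depth=3):
    """Step One: generate a random individual (an expression tree)."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    name, _ = random.choice(FUNCS)
    return (name, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    name, left, right = tree
    return dict(FUNCS)[name](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Step Two: lower is better -- squared error on a few sample points."""
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    return random_tree()  # crude mutation: replace with a fresh random tree

def evolve(pop_size=50, generations=30):
    """Step Three: select the fittest, breed, replace the least fit."""
    population = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        parents = population[: pop_size // 2]
        children = [mutate(random.choice(parents)) for _ in parents]
        population = parents + children
    return min(population, key=fitness)

best = evolve()
```

A real GP system would of course use crossover of subtrees, depth limits, and a richer function set; this sketch only shows the generate/evaluate/select/replace loop itself.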
Upvotes: 1 <issue_comment>username_2: >
> Does artificial intelligence write its own code and then execute it? If so, does it create separate functions for each purpose?
>
>
>
Artificial intelligence comprises more than one technique. Here you seem to be asking about [machine learning](https://en.wikipedia.org/wiki/Machine_learning), a subfield of AI which itself contains many subfields.
Self-modifying code *is* one of the (many) techniques used in some applications of artificial intelligence. So, in that sense, the answer to your question is "yes".
In [Neural Networks](https://en.wikipedia.org/wiki/Neural_network) (NNs), for example, the NN isn't writing any code, it is just running through an optimization algorithm (and back-propagation) that incrementally changes some weights (or coefficients) such that when you enter a certain input, you get an output that's close to the desired output.
You also have [genetic algorithms](https://en.wikipedia.org/wiki/Genetic_algorithm) (GA) to evolve the weights of a neural network, as opposed to using back-propagation. GAs are closer to the idea of "an AI coding itself" (although not exactly).
Upvotes: 2 <issue_comment>username_3: I think the answer is most likely no, not in the most notable examples of AI programs (such as Machine Learning). There *is* a set of AI techniques which involve [automatic programming](https://en.wikipedia.org/wiki/Automatic_programming), but in that scenario, we have a computer program which automatically codes another program (we can call it "target program"). But the target program is not the program which perfoms the coding; so technically speaking, no, it does not write its *own* code. This is an important difference; the programmer still has the task to write the code-generator.
If you are interested in automated coding, though, the most notable example is [Genetic Programming](https://en.wikipedia.org/wiki/Genetic_programming), a technique which uses an evolutionary algorithm to breed computer programs. As you can see, we have an AI which produces as a result a computer program (which may be or not an AI program); it is not interacting with its own code.
As a final remark, note that automatic coding is a pretty vague term and not all techniques are AI-related (for instance, back then, the first compilers were seen as a form of automatic programming). The most relevant technique to your question is probably [Program Synthesis](https://en.wikipedia.org/wiki/Program_synthesis).
Upvotes: 1 |
2017/03/11 | 1,144 | 4,847 | <issue_start>username_0: As far as I know, MDPs are independent of the past. But the definition says that the same policy should always take the same action depending on the state.
What if I define my state as the current "main" state + previous decisions?
For example, in Poker, the "main" state would be my cards and the pot, plus all previous information about the game.
Would this still be a MDP or not?
2017/03/14 | 1,559 | 5,918 | <issue_start>username_0: I want to create an AI which can play five-in-a-row/Gomoku. I want to use reinforcement learning for this.
I use the *policy gradient* method, namely REINFORCE, with baseline. For the value and policy function approximation, I use a *neural network*. It has convolutional and fully connected layers. All of the layers, except for the output, are shared. The policy's output layer has $8 \times 8=64$ (the size of the board) output unit and *softmax* on them. So it is stochastic.
But what if the network produces a very high probability for an invalid action? An invalid action is when the agent tries to mark a square that already has an `X` or `O` in it. I think the agent could get stuck in that game state.
How should I handle this situation?
My guess is to use the *actor-critic* method. For an invalid action, we should give a negative reward and pass the turn to the opponent.<issue_comment>username_1: Just ignore the invalid moves.
For exploration, it is likely that you won't just execute the move with the highest probability, but instead choose moves randomly based on the outputted probability. If you only punish illegal moves they will still retain some probability (however small) and therefore will be executed from time to time (however seldom). So you will always retain an agent which occasionally makes illegal moves.
To me, it makes more sense to just set the probabilities of all illegal moves to zero and renormalise the output vector before you choose your move.
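A minimal NumPy sketch of this zero-and-renormalise step might look like the following; the function and variable names are hypothetical, and it assumes the policy assigns non-zero total probability to the legal moves.

```python
import numpy as np

# Hypothetical sketch: zero out illegal moves, renormalise, then sample.
def sample_action(policy_probs, legal_mask, rng):
    """policy_probs: softmax output over all squares;
    legal_mask: 1.0 for legal squares, 0.0 for occupied ones."""
    masked = policy_probs * legal_mask
    masked /= masked.sum()               # renormalise over legal moves only
    return rng.choice(len(masked), p=masked)

rng = np.random.default_rng(0)
probs = np.full(64, 1 / 64)              # uniform policy over an 8x8 board
mask = np.ones(64)
mask[:32] = 0.0                          # pretend half the squares are taken
action = sample_action(probs, mask, rng)
assert mask[action] == 1.0               # the sampled move is always legal
```

If the policy could put zero probability on every legal move, you would want to add a small epsilon before renormalising to avoid dividing by zero.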
Upvotes: 5 [selected_answer]<issue_comment>username_2: Usually softmax methods in policy gradient methods using linear function approximation use the following formula to calculate the probability of choosing action $a$. Here, weights are $\theta$, and the features $\phi$ is a function of the current state $s$ and an action from the set of actions $A$.
$$
\pi(\theta, a) = \frac{e^{\theta \phi(s, a)}}{\sum\_{b \in A} e^{\theta \phi(s, b)}}
$$
To eliminate illegal moves, one would limit the set of actions to only those that were legal, hence $Legal(A)$.
$$
\pi(\theta, a) = \frac{e^{\theta \phi(s, a)}}{\sum\_{b \in Legal(A)} e^{\theta \phi(s, b)}}, \, a \in Legal(A)
$$
In pseudocode the formula may look like this:
```
action_probs = Agent.getActionProbs(state)
legal_actions = filterLegalActions(state, action_probs)
best_legal_action = softmax(legal_actions)
```
Whether using linear or non-linear function approximation (your neural network), the idea is to only use the legal moves when computing your softmax. This method means that only valid moves will be given by the agent, which is good if you want to change your game later on, and means that the difference in value between the limited choices of actions will be easier for the agent to discriminate. It will also be faster as the number of possible actions decreases.
Upvotes: 4 <issue_comment>username_3: IMHO the idea of invalid moves is itself invalid. Imagine placing an "X" at coordinates `(9, 9)`. You could consider it to be an invalid move and give it a negative reward. Absurd? Sure!
But in fact your invalid moves are just *a relic of the representation* (which itself is straightforward and fine). The best treatment of them is to exclude them completely from any computation.
This gets more apparent in chess:
* In a positional representation, you might consider the move `a1-a8`, which only belongs in the game if there's a Rook or a Queen at `a1` (and some other conditions hold).
* In a different representation, you might consider the move `Qb2`. Again, this may or may not belong to the game. When the current player has no Queen, then it surely does not.
As the invalid moves are related to the representation rather than to the game, they should not be considered at all.
Upvotes: 3 <issue_comment>username_4: I faced a similar issue recently with Minesweeper.
The way I solved it was by ignoring the illegal/invalid moves entirely.
1. Use the Q-network to predict the Q-values for all of your actions (valid and invalid)
2. Pre-process the Q-values by setting all of the invalid moves to a Q-value of zero/negative number (depends on your scenario)
3. Use a policy of your choice to select an action from the refined
Q-values (i.e. greedy or Boltzmann)
4. Execute the selected action and resume your DQN logic
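Steps 1-3 can be sketched roughly as follows; the names are hypothetical, and this variant uses -inf rather than zero for the masked entries so an invalid action can never win the argmax, even when all valid Q-values are negative.

```python
import numpy as np

# Hypothetical sketch of steps 1-3: mask the Q-values of invalid actions,
# then pick greedily among what remains.
def select_action(q_values, valid_mask):
    masked_q = np.where(valid_mask, q_values, -np.inf)
    return int(np.argmax(masked_q))

q = np.array([0.9, 0.2, 0.7, -0.1])          # network output for 4 cells
valid = np.array([False, True, True, True])  # cell 0 is already revealed
assert select_action(q, valid) == 2          # best *valid* cell, not cell 0
```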
Hope this helps.
Upvotes: 3 <issue_comment>username_5: An experimental paper exist in [arxiv](https://arxiv.org/abs/2006.14171) about the effect of whether to mask or to give negative rewards to invalid actions. There are some references in this paper which also discuss the effects and the mechanism to handle invalid actions. However, those main references are still only pre-prints in the arxiv (not published and presumably not peer-reviewed yet).
On a way to handle that situation, other answers have given good practical methods to ignore the invalid actions. I just want to add one more trick to do that. You can pre-compute a binary vector as the mask for the actions, and add the log of the mask before the softmax operation. The log of 0 is -inf and exp(-inf) is 0 in Pytorch (I don't know if the same applies in Tensorflow).
>
> $P(a\_i| A'\_t, X\_t) = \text{softmax}(u\_i+\log{m^t\_i}), \forall i \in
> \{1,\dots,|A|\}$ and $m^t\_i \in \{0,1\}$
>
>
>
where $P(a\_i| A'\_t, X\_t)$ is the probability of taking action $a\_i$ given the action history $A'\_t$ up to the $t$-th step and the current environment's features $X\_t$, $u$ is the output of the last layer of the model, $A$ is the action space, and $m^t\_i$ is the feasibility mask of action $a\_i$ for the current step.
Therefore, the probability of an invalid action is 0 after the softmax operation. That way, you can treat the mask as part of the state input to your model. This is actually handier for algorithms that employ experience memory, because the mask can then be saved in the experience memory too.
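The same log-mask trick can be sketched in NumPy (the answer describes it for PyTorch; the function name here is invented for illustration, and it assumes at least one action is feasible):

```python
import numpy as np

# log(0) = -inf and exp(-inf) = 0, so masked logits come out of the
# softmax with exactly zero probability.
def masked_softmax(logits, mask):
    """mask: 1.0 for feasible actions, 0.0 for invalid ones."""
    with np.errstate(divide='ignore'):   # log(0) -> -inf is intended here
        z = logits + np.log(mask)
    z = z - z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = masked_softmax(np.array([2.0, 1.0, 0.5]), np.array([1.0, 0.0, 1.0]))
assert probs[1] == 0.0                   # the masked action gets zero probability
assert abs(probs.sum() - 1.0) < 1e-12
```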
Upvotes: 2 |
2017/03/15 | 1,630 | 6,539 | <issue_start>username_0: I heard several times that one of the fundamental/open problems of deep learning is the lack of "general theory" on it, because, actually, we don't know why deep learning works so well. Even the Wikipedia page on deep learning has [similar comments](https://en.wikipedia.org/wiki/Deep_learning#Criticism_and_comment). Are such statements credible and representative of the state of the field?<issue_comment>username_1: There is a paper called [Why does Deep Learning work so well?](https://arxiv.org/pdf/1608.08225.pdf).
>
> However, it is still not fully understood why deep learning works so well. In contrast to GOFAI (“good old-fashioned AI”) algorithms that are hand-crafted and fully understood analytically, many algorithms using artificial neural networks are understood only at a heuristic level, where we empirically know that certain training protocols employing large data sets will result in excellent performance. This is reminiscent of the situation with human brains: we know that if we train a child according to a certain curriculum, she will learn certain skills — but we lack a deep understanding of how her brain accomplishes this.
>
>
>
Upvotes: 3 <issue_comment>username_2: This is very much the case. Deep learning models, even shallow ones such as stacked autoencoders and plain neural networks, are not fully understood. There are efforts to understand what happens during the optimization of such complex, parameter-heavy functions, but this is a difficult task.
One way that researchers are using to discover how deep learning works is by using generative models. First we train a learning algorithm and handicap it systematically whilst asking it to generate examples. By observing the resulting generated examples we will be able to infer what is happening in the algorithm at a more significant level. This is very much like using inhibitors in neuroscience to understand what different components of the brain are used for. For example, we know that the visual cortex is where it is because if we damage it you will go blind.
Upvotes: 2 <issue_comment>username_3: It probably depends on what one means by "fundamental theory", but there is no lack of rigorous quantitative theory in deep learning, some of which is very general, despite claims to the contrary.
One good example is the work around energy-based methods for learning. See e.g. Neal & Hinton's work on variational inference and free energy: <http://www.cs.toronto.edu/~fritz/absps/emk.pdf>
Also this guide to energy minimization as a "common theoretical framework for many learning models" by <NAME> and colleagues: <http://yann.lecun.com/exdb/publis/pdf/lecun-06.pdf>
And a general framework for energy-based models by Scellier and Bengio:
<https://arxiv.org/pdf/1602.05179.pdf>
There is also Hinton & Sejnowski's earlier work which shows analytically that a particular Hopfield-inspired network + unsupervised learning algorithm can approximate Bayes-optimal inference: <https://papers.cnl.salk.edu/PDFs/Optimal%20Perceptual%20Inference%201983-646.pdf>
There are many papers linking deep learning with theoretical neuroscience as well, such as the following, which shows that the effects of backpropagation can be achieved in biologically plausible neural architectures:
<https://arxiv.org/pdf/1411.0247.pdf>
Of course there are many open questions and no single, uncontroverisal unified theory, but the same could be said of almost any field.
Upvotes: 2 <issue_comment>username_4: Your wikipedia quote is **questionable** because deep learning is well developed. In fact, there is a `[citation needed]` on the Wikipedia page.
Look at <https://github.com/terryum/awesome-deep-learning-papers>. There are around 100 papers in that list; do you still think deep learning lacks a "general theory"?
Yes. Deep learning is hard to understand because it is a very complicated model. But that doesn't mean we don't have the theories.
Maybe the `lime` package and its paper, **"Why Should I Trust You?": Explaining the Predictions of Any Classifier**, will help you. The paper suggests we should be able to approximate a complicated model (including deep learning models) locally with a much simpler model.
Upvotes: 0 <issue_comment>username_5: A key question that remains in the theory of deep learning is why such huge models (with many more parameters than data points) don't overfit on the datasets we use.
Classical theory based on complexity measures does not explain the behaviour of practical neural networks. For instance estimates of VC dimension give vacuous generalisation bounds. As far as I know, the tightest (upper and lower) bounds on the VC dimension are given in [1] and are on the order of the number of weights in the network. Clearly this worst case complexity cannot explain how e.g. a big resnet generalises on CIFAR or MNIST.
Recently there have been other attempts at ensuring generalisation for neural networks, for instance by relation to the neural tangent kernel or by various norm measures on the weights. Respectively, these have been found to not apply to practically sized networks and to have other unsatisfactory properties [2].
There is some work in the PAC Bayes framework for non-vacuous bounds, e.g. [3]. These setups, however, require some knowledge of the trained network and so are different in flavour from the classical PAC analysis.
Some other aspects:
* optimisation: how come we get 'good' solutions from gradient descent on such a non-convex problem? (There are some answers to this in recent literature)
* interpretability: Can we explain on an intuitive level what the network is 'thinking'? (Not my area)
(incomplete) references:
* [1] <https://arxiv.org/abs/1703.02930>
* [2] <http://papers.nips.cc/paper/9336-uniform-convergence-may-be-unable-to-explain-generalization-in-deep-learning>
* [3] <https://arxiv.org/abs/1804.05862>
Upvotes: 0 <issue_comment>username_6: I'd like to point out there isn't a good theory on why machine learning works in general. VC bounds still assume a model, but reality doesn't fit any of these mathematical ideals. Ultimately, when it comes to application, everything comes down to empirical results. Even quantifying the similarity between images with an algorithm that is consistent with humans' intuitive understanding is really hard.
Anyway, NNs don't work well in their fully connected form. All successful networks have some kind of regularization built into the network architecture (CNN, LSTM, etc.).
Upvotes: 0 |
2017/03/16 | 522 | 2,239 | <issue_start>username_0: For example, for classifying emails as spam, is it worthwhile - from a time/accuracy perspective - to apply *deep learning* (if possible) instead of another machine learning algorithm? Will deep learning make other machine learning algorithms like *naive Bayes* unnecessary?<issue_comment>username_1: It's all about *Return On Investment*. If DL is "worth doing", it's not overkill.
If the cost of using DL (computer cycles, storage, training time) is acceptable, and the data available to train it is plentiful, and if the marginal advantage over alternative algorithms is valuable, then DL is a win.
But, as you suggest, if your problem is amenable to alternate methods, especially if it offers a signal that matches up well with classic methods like regression or naive Bayes, or your problem requires explanation of why the decision boundary is where it is (e.g. decision trees), or if your data lacks the continuous gradients needed by DL (especially, CNNs), or your data varies over time which would require periodic retraining (especially, at unpredictable intervals), then DL probably is a mismatch for you.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Deep learning is powerful but it is **not** a superior method than bayesian. They work well in what they are designed to do:
Use deep learning:
* Cost for computation is much cheaper than cost of sampling (e.g: natural language processing)
* If you have highly non-linear problem
* If you want to simplify feature engineering
* If you don't have prior distribution (e.g: setting the weights to random Gaussian). Or you do but you don't mind the complexity.
* If you are willing to trade speed for accuracy (deep learning is slow)
Use naive bayesian:
* If you have prior distribution that you want to use
* If you want to update your model quickly and easily (in particular, conjugate models)
* If you have your own likelihood function and wish to "control" how exactly the model works
* If you want to build hierarchical models
* If you don't want to tweak parameters
* If you want a faster model, both in training and execution
* If you want to make the independence assumption
* If you want to prevent overfitting (that's a very simple model)
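For the spam example in the question, the naive Bayes side is small enough to sketch from scratch (the training messages, tokenization, and labels below are all invented for illustration; 1 = spam, 0 = ham):

```python
import math
from collections import Counter

# Invented toy corpus: (message, label) with 1 = spam, 0 = ham
train = [("buy cheap pills now", 1), ("cheap meds buy now", 1),
         ("meeting notes for monday", 0), ("lunch on monday", 0)]

counts = {0: Counter(), 1: Counter()}   # word counts per class
docs = Counter()                        # document counts per class
for text, label in train:
    docs[label] += 1
    counts[label].update(text.split())

vocab = set(w for c in counts.values() for w in c)

def predict(text):
    scores = {}
    for label in (0, 1):
        # log prior + sum of log likelihoods with Laplace smoothing
        score = math.log(docs[label] / sum(docs.values()))
        total = sum(counts[label].values())
        for w in text.split():
            score += math.log((counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("buy pills"))       # 1 (spam)
print(predict("monday meeting"))  # 0 (ham)
```

With four training messages this already classifies sensibly, which is the point: naive Bayes needs very little data or compute compared with a deep model.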
Upvotes: 4 |
2017/03/16 | 3,816 | 15,178 | <issue_start>username_0: Since the first Industrial revolution machines have been taking the jobs of people and automation has been a part of human social evolution for the past 3 centuries, but all in all these machines have been replacing mechanical, high-risk and low-skill jobs such as a production line of an automobile factory.
But recently, with the advent of computers, the improvement of AI, and the quest for a singularity (that is, a computer capable of thinking faster, better, more creatively and **cheaper** than a human being, and capable of self-improvement), our future will lead to the replacement not only of low-skill workers, but of high-skill ones as well. I'm talking about a future not too far away when AI and machines will replace artists, designers, engineers, lawyers, CEOs, filmmakers, politicians, hell, even programmers.
Some people get excited by this, but honestly, I get somewhat scared.
I'm not talking about the money issue here, although I'm not a fan of the idea, let's suppose the universal income has been implemented, and suppose it works fine. Also not talking about the "*Terminator's world where machines will wage war against humans*", let's suppose too they are completely friendly forever.
The issue here is the one of **motivation** for us humans. *When the AI singularity takes over, what will there be left for us to do?* Every day, all day long?
What are we going to do with our lives? Suppose I love to paint: how can I live my dream of becoming a painter if the computer makes better art than I will ever be able to? How can I live knowing that no one will care about my paintings because they were made by a **mere human**? Or take the real me (I, Danzmann): I love to code, learned my first programming language at 9 years old and have been at it ever since, and it saddens me that in some years I may never touch it again. And that goes for all the professions; everyone is passionate about something, and with the singularity, every single one of those passions would just cease to matter.
So, what are we going to do in this future? What am I going to do? Play golf all day, every single day, for the rest of my life (a hyperbole, but you get my point)?
Also, what is going to be the motivation for my children? What am I going to tell them to go to school for? When someone asks "what do you wanna be when you grow up?", the inevitable answer is **nothing**.
If highly advanced AI takes control of all scientific research, then what is the reason for us to **learn**? What is the reason that us humans would need to dedicate decades of our lives to learn something if that knowledge is useless, because there are no more jobs and the scientific research is done solely by AI?<issue_comment>username_1: Instead of posting a specific answer, I'm going to point you to [<NAME>](https://en.wikipedia.org/wiki/Hannu_Rajaniemi)'s meditation on this subject in the [Quantum Thief Trilogy](https://en.wikipedia.org/wiki/The_Quantum_Thief). Here's why:
1. Artists can have profound insights. This may be demonstrated by <NAME> writing about [Evolutionary Game Theory](https://en.wikipedia.org/wiki/Evolutionary_game_theory) in [Do Androids Dream of Electric Sheep](https://en.wikipedia.org/wiki/Do_Androids_Dream_of_Electric_Sheep%3F) about 5 years before the field was formalized. *(For my money, this is still the most important book about AI.)*
2. Many authors have written about the post-Singularity scenarios, but Rajaneimi is the only one I am aware of who is a Cambridge trained Mathematician with a PhD in [Mathematical Physics](https://en.wikipedia.org/wiki/Mathematical_physics), which I tend to believe makes him well qualified to grapple with the inherent complexity of the subject.
Upvotes: 0 <issue_comment>username_2: You're assuming the AI has motivation; however, that's not really the case. Intelligent software will do whatever it's been designed to do, but that's all it does. It's not a trained animal; it doesn't have instincts for survival, reproduction or self-determination, because there's no reason to add those functions. So effectively AI is just another tool, one that reduces the mental load of doing a task: rather than digging a trench with one machine, you can order a fleet of machines to dig a canal system. Or more realistically, you'll spend hours in stakeholder meetings discussing the need for canals, justifying the cost, explaining the benefits, considering the risks, applying for permits, having more discussions with the council and their consultants, then special interest groups, until finally you order the machines to kill them all because GODDAMNIT THIS COULD HAVE BEEN DONE ALREADY!
Upvotes: 0 <issue_comment>username_3: Given all your assumptions about AI turn out to be true, we would have some kind of utopia, where no one has to work, and there is plenty of everything. Fair enough. Your other assumptions is about human nature, and that is where I'd challenge your conclusion: Just because there are computers better than humans at some task, that does not automatically take all enjoyment from doing it.
I have three arguments in favor of my stance.
* There already are computers better than humans at checkers, chess, backgammon, starcraft, mastermind, go,.. and many more. Yet these games still get played, even if no human can hope to ever be as good as a computer.
* There will be domains where the evaluation of quality is so vague or personal, that the notion of being "better" is useless in any objective sense. I am thinking mainly about art. Playing into argument one, photo cameras are already better "painters" of reality than humans can ever hope to be, yet people still paint. And their paintings gets appreciated, even the photo-realistic kind.
* I'd say that the whole mindset of "if someone is better than me at x it is not worth doing x" will have outlived its lifetime very shortly. Society didn't always spin that way I feel, it is largely due to the influence of the North American way of life to always strive to be #1 and everything below that being trash. Globalization already puts this way of thinking at risk, with many young people being disillusioned or even depressed because, flippantly put, "whatever you do, there is always an Asian kid who does it 10 times better". We don't have to wait for AI to outshine us, the rest of the world already does. As a consequence, we need to adapt our way of approaching that fact, stop seeing it as diminishing to our worth, and move on.
As a closing note, I also see it as a problem to the general population that within a short time we essentially have to change a very substantial part of our world view, AI being better than us, us not having to work anymore etc. All the economical revolutions in the past, the neolithic, the industrial, and most recently, the digital, had a longer transitional period where people could grow accustomed to the new world. And even with that transition it was hard enough for many people. Yet, most dealt with it, and later generations can hardly imagine a world where the new change doesn't exist yet, and I personally don't see why the next revolution should be any different.
Upvotes: 2 <issue_comment>username_4: We are biological beings. We will continue to ***like*** whatever activates opioid receptors and we will continue to ***want*** whatever activates dopamine receptors in the nucleus accumbens. Food, drinks, sex, social dominance, altruistic acts, novelty, drugs of abuse, physical mastery, procreation, socializing, nice sunny weather, sleep when tired, etc, will continue to be rewarding and motivating as long as we have brains. A few links to review papers on the neurobiology:
See also this paper [Pleasure Systems in the Brain](http://www.sciencedirect.com/science/article/pii/S0896627315001336) or this one [Dopamine in Motivational Control: Rewarding, Aversive, and Alerting](http://www.sciencedirect.com/science/article/pii/S0896627310009384) for more details regarding this topic.
I personally enjoy playing basketball, even though I would not stand a chance against NBA players, and rock climbing even though using a ladder would be much more efficient. I do not get paid for either.
>
> Also, what is going to be the motivation for my children? What am I going to tell them to go to school? When someone asks "what do you wanna be when you grow up?", and the inevitable answer is nothing.
>
>
>
I disagree. Schools will evolve. Children will still need to learn social skills and make friends. At the very least, they will need to learn how to use or interface with the computers that do everything. They will still need to learn to be better human beings by reading humanities. I don't think they will be told "The computer will now read Dostoevsky on your behalf, it is *MUCH BETTER at reading*, you know". There may be jobs where the job description contains "by a human", such as handcrafts, psychotherapy, etc. They can grow up to be whatever they want, human beings are not defined *solely* by their professions. I am sure you are not *just a coder, nothing more*.
Upvotes: 2 <issue_comment>username_5: In today's society, money is a big key motivator, not only does it provide necessities of life it can buy luxury and indulgence. In a world where all needs (even complex ones) are met for free by the emergence of technology like the singularity, it is easy to say there will be no motivator.
But, instinctually, as children our motivations aren't to accumulate money, you don't see children bored with life. As a child, I was happy to learn. Others were happy to dance or swim, read or paint. In the end, people are social excitement seeking beings.
Today's socioeconomic conditions create boredom in some respects doing the same job for years is arguably worse.
I don't think motivation will be an issue.
Upvotes: 0 <issue_comment>username_6: Starting from username_3's point on photography:
I'd like to point out how photography changed painting. You can notice that classical painting that tried to be photorealistic stopped to be relevant when photography started, and that several modern movements started, like more abstract paintings, surrealism, or cubism.
One can see art as a medium to brag "I'm better than you". Artists could say "I'm better than you at perfecting realistic technique" until photography appeared. Now, on this front, what do artists brag about? Sometimes by claiming how progressive they are when sculpting sex toys?
By claiming that their white sphere is "too deep for you"?
I'm not an art historian, though, this is how i feel it evolved to current modern "art" where skill has been replaced with provocation.
Photography might be a good starting point to think about how people react to a technology that dwarves them and they still want to be #1.
-- here's a point to note: people want to have a good identity
<NAME> once wrote The Right to Be Lazy, where he argued that mankind would intellectually evolve if people worked less and machines could work for us. One of his arguments was that the people we see as intellectual models, the Greek philosophers, didn't work, and had more time to think.
I think he misses a point: not everyone is able to enjoy spending their time on philosophy.
Not unlike how we were deeply wrong when we expected the internet to open our minds towards a great age of knowledge.
Remember that people are not utopically intelligent, that you have stupid people, and humans basically follow their instinctive needs.
This is why 20 years after we started using internet, we have lolcats, trolls, gated communities, and more intellectual intolerance (think sjw, alt right movements).
-- point #2: not everyone is able to be included in an "everyone will be bright" utopia
The key is one's identity. I think that everyone wants and needs to have a "positive identity", and they look for one to compensate for their weaknesses.
This is, for me, why poorer people tend to show off, and successful people don't tend to brag.
So, how do you forge your positive identity when you don't have much for yourself?
I see a few things that happen around me:
- be a rebel: to distinguish yourself, you become an opponent; but in fact you just mirror the main trend, while bragging about how independent you are. I'm opposed, so I'm free. You can have a range from an AI-free stance (like people who didn't want to use the internet, or some of the ones who use free software) to a more aggressive movement.
- be conservative, pious: maybe in reaction to a society that changes and becomes too progressive for some, I see people becoming conservative. This is still a rebellious trend, and I see the rise of the conservative vote (nationalism) or religion (Buddhism, Christianity, Islam) as a reaction. I follow traditions, so I'm independent from your novelties. Religion is always a solid value when society changes.
- be hyperprogressive: the same reaction as the previous point, if you think that society is too conservative. It's a rebellious stance when you have conservative people in front of you. I'm a step ahead of everyone, look at how avant-garde I am.
-- point #3: if you don't excel in a field, be a rebel. Or: if you can't follow, step aside, don't follow the stream.
So you have no work to do anymore, plenty of food grown by AI-controlled drones, all material needs fulfilled, and no interest in thinking too much.
What do people do when they have too much time?
Simple: they fill their natural urges that are not related to material needs and food.
Some examples: eating, playing games, having sex, arguing online, getting wasted. Basically everything that has been labelled as a sin.
You'll also have forms of mental disorders you see in people who feel worthless. And many more will feel worthless.
This also means that you'll need more jobs to keep these people entertained, or to take care of the suffering ones.
-- point #4: if life gets too easy, people become sinners
By these few examples, a question arises: if "mediocre" people tend to get their shiny identity by opposing mainstream society, what will mainstream society look like after the rise of AI?
So I see three reactions from people:
- be relevant after AI's rise: the ones who can follow
- become a rebel: the ones who can't follow but still want to appear relevant
- be neutral: the ones who fill their human urges
And in general, people will do a mix of the three.
Mankind, with the help of AI which can provide answers to basically anything, will provide new questions, new projects, new frontiers to explore. Many persons will still be a bit relevant.
AI will become mainstream, so everyone will be more or less opposing it, while enjoying its benefits (think about people who claim to be technology/money independent but have an iPhone).
People will still argue online and giggle at cute cats doing weird stuff. Maybe more, because they have more free time.
In conclusion, **if AI becomes relevant, expect a world where more people have too much time on their hands**.
I find this conclusion a bit disappointing, after all I thought and wrote. :O
Upvotes: 2 |
2017/03/17 | 570 | 2,458 | <issue_start>username_0: How can Artificial Intelligence be applied to software testing?<issue_comment>username_1: In large software with many actions and possible flows, like web applications or enterprise software, it is really hard and time-consuming to test out all possible scenarios via the traditional approach. So building a machine learning model is an interesting approach to solving this. A reinforcement learning system with the end goal of crashing the software or making it unresponsive can be tried.
There is research being done on this idea. You can read [this](https://pdfs.semanticscholar.org/d2ae/393cf723228cf6f96d61ee068c681203e943.pdf) research paper, which explores reinforcement learning as an approach to automated GUI robustness testing. Also, some companies like Appdiff are trying to incorporate AI into software testing with mobile apps in context, but similar thinking can reasonably be extended to web apps.
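The exploration half of that idea can be sketched with a toy state machine standing in for the app under test (the states, actions, and crash below are all invented); a real RL tester would add a learned policy and rewards on top of this random-exploration loop:

```python
import random

# Toy "app": each state maps available actions to the next state.
# Pressing "submit" on the form screen is the (invented) crashing bug.
app = {
    "home": {"open_form": "form", "open_help": "help"},
    "help": {"back": "home"},
    "form": {"back": "home", "submit": "crash"},
}

def run_episode(rng, max_steps=10):
    """Take random actions; return the action trace if we hit the crash."""
    state, trace = "home", []
    for _ in range(max_steps):
        action = rng.choice(sorted(app[state]))
        trace.append(action)
        state = app[state][action]
        if state == "crash":
            return trace  # reproducible steps for the bug report
    return None

rng = random.Random(0)
crash_trace = None
while crash_trace is None:
    crash_trace = run_episode(rng)
print(crash_trace[-1])  # "submit": the last action before the crash
```

The payoff is the recorded trace: unlike a human tester clicking around, the agent hands back an exact, replayable sequence of actions that reproduces the failure.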
Upvotes: 2 <issue_comment>username_2: An interesting thought is to use the software's usage patterns to automatically learn the tests to be conducted for future iterations of the software.
Upvotes: 1 <issue_comment>username_3: I have been in the software testing industry for over 11 years now and I can say for sure that there are different ways people are using AI for software testing. This is especially true in the area of automated testing tools. Different vendors have been trying to tackle some of the common problems in test automation using AI. Some of them are:
[Appvance](https://appvance.com/)
Appvance uses AI to generate test cases based on user behavior but is not a fully AI based tool like Testim.io
[Test.ai](https://test.ai/)
Test.ai uses artificial intelligence to perform regression testing. It is helpful to get performance metrics on your app. It is more of an app monitoring tool than functional testing tool from my point of view
[Functionize](https://www.functionize.com)
Functionize uses machine learning for functional testing. It is very similar to other tools in the market in terms of its capabilities
The above are some of the popular tools out there in the market.
The trend seems to be going in the positive direction in terms of vendors trying to make testing more stable, simpler, smarter and getting everyone in the team involved in testing including non-technical people.
It will be just a matter of time that more solutions come up for software testing using AI
-Raj
Upvotes: 2 [selected_answer] |
2017/03/18 | 607 | 2,609 | <issue_start>username_0: Humans often dream of random events that occurred during the day. Could the reason for this be that our brains are backpropagating errors while we sleep, and we see the result of these "backpropagations" as dreams?
2017/03/31 | 1,114 | 4,119 | <issue_start>username_0: I have tried with two chat-bots [Cleverbot](https://www.cleverbot.com/) and [Alan](http://www.a-i.com/alan1/), and I got disappointing results.
>
> me: Socrates is a man
>
>
> bot: blah blah (common bot nonsense instead of an "ok")
>
>
> me: Who is a man?
>
>
>
>
> alan1: The people that write my answers haven't provided an answer for
> this.
>
>
>
Another example of the mediocre "clever bot"
>
> me: Socrates is the name of my dog.
>
>
> clever bot: I don't know!
>
>
> me: What is the name of my dog?
>
>
> clever bot: That's a nice name.
>
>
>
Another example.
>
> me: Socrates is a man.
>
>
> clever bot: When does the narwhal bacon?
>
>
> me: Who is a man?
>
>
> clever bot: Men are man.
>
>
>
And they dare name this thing "clever"...
So, is there any chatbot that can actually answer this straightforward question?<issue_comment>username_1: I think the issue here is that the chatbots you're using aren't very good at "short-term memory". What I mean by is that the bots construct responses that are slowly and incrementally tuned according to the *overall* usage of the chat bot, from *every* user. The bots are responding to *each message* based on how a new user would expect them to. As Alan1 notes, "Men are Man". It's making this response based solely off your single most recent message.
Instead, you are looking for a bot that focuses more on persistent memory of the individual conversation. The problem is that you're now almost asking for a natural language parser, a big problem many people are working on and something that's years away from existing as robustly as you suggest.
The chat bot not only has to recognize the words 'Socrates', 'Name', and 'Dog', but also that in this sentence it's the dog's name that is Socrates. That's a lot of information to gain beyond just the words. This is also why, from a server/implementation standpoint, the stateless method above is a lot easier to program (every message is just a query to your server, with no need to maintain state, that is, memory of the conversation).
The chat bots can't possibly get enough information from one person to train how to speak and respond, so they 'crowd-source' that information for training. But that means that Clever Bot (or any similar caliber chat bot) won't respond in terms of parsing the meaning of what you're asking.
Taking this even further, one can consider the notion of such a program being Turing-Complete. Supposing we had a chat bot like you're suggesting, we could perhaps show equivalence to a Turing-Machine, or even perhaps show we can do something like decide the halting problem. Off the top of my head I imagine the procedure being basically showing you would be able to decide halting given initial conditions. E.g. Given "Socrates is a man" and "All men die" can we decide if the chat bot will ever be able to deduce if Socrates dies?
I'll work on a formal proof from the latter and post it if it works out.
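To make the "memory of the conversation" point concrete, here is a toy bot that stores facts from the current session using hand-written patterns (a deliberately brittle sketch, not real natural-language parsing: it only understands the exact "X is a Y" / "Who is a Y?" shapes from the question):

```python
import re

facts = []  # (subject, category) pairs remembered from this conversation

def chat(message):
    # Statement: "Socrates is a man" -> remember (Socrates, man)
    told = re.match(r"(\w+) is (?:a|an) (\w+)\.?$", message, re.I)
    if told:
        facts.append((told.group(1), told.group(2)))
        return "OK."
    # Question: "Who is a man?" -> recall matching subjects
    asked = re.match(r"who is (?:a|an) (\w+)\?$", message, re.I)
    if asked:
        names = [s for s, c in facts if c.lower() == asked.group(1).lower()]
        return ", ".join(names) if names else "I don't know yet."
    return "Tell me something."

print(chat("Socrates is a man"))  # OK.
print(chat("Who is a man?"))      # Socrates
```

Even this 20-line toy passes the exchange in the question, which illustrates the gap: the hard part is not remembering facts, it's extracting them from arbitrary phrasing ("Socrates is the name of my dog" already defeats it).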
Upvotes: 2 <issue_comment>username_2: Just to draw your attention to a specific problem. In my opinion, all your "Who is a man" questions (besides the "Socrates' dog..." one) are ill-posed, and not only a bot, but even a human couldn't answer them at first glance, in spite of what you told it before (*"Socrates is a man"*)... Yes, it might know that Socrates, <NAME> or <NAME> is a man, but you just "expected" it to give the answer you wanted, because you mentioned it earlier.
During conversation, humans can catch and understand grammatical, **semantic**, or **logical** (*the most important part*) mistakes in another person's questions, and in return they can give correct answers, but bots depend mostly on correct grammar and correct semantics in the questions.
Some bots can cope with incorrect grammar to a degree, but nowadays even Google Search's AI struggles to correct anything beyond basic grammar or basic semantics. Yet you expected chat bots to "understand" a semantically incorrect question, and then criticized them heavily and called them stupid for no good reason.
Upvotes: 0 |
2017/03/31 | 262 | 1,026 | <issue_start>username_0: I once came across a neural network being trained without back-propagation or genetic algorithms (or using any kind of data sets). It was based on how the human brain learns and adjusts its connections between neurons.
What is the name of such a machine learning approach?<issue_comment>username_1: There are approaches to training neural networks that do not use back-propagation, or genetic algorithms. One example is the [Extreme Learning Machine](https://en.wikipedia.org/wiki/Extreme_learning_machine) approach.
You may find something useful in this [older discussion on Cross Validated](https://stats.stackexchange.com/questions/235862/is-it-possible-to-train-a-neural-network-without-backpropagation).
Upvotes: 2 <issue_comment>username_2: If it was based on how the human brain learns, it might have used [hebbian learning](https://en.wikipedia.org/wiki/Hebbian_theory).
One example for such a network would be [HTM](https://en.wikipedia.org/wiki/Hierarchical_temporal_memory).
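As an illustration of the brain-inspired idea, the basic Hebbian rule ("neurons that fire together wire together") updates each weight by Δw = η·x·y, with no backpropagated error signal. Here is a toy sketch in plain Python (a minimal illustration of the rule, not HTM itself):

```python
# Toy Hebbian update: strengthen a connection whenever the pre-synaptic
# input x[i] and the post-synaptic output y are active together.
def hebbian_step(w, x, y, lr=0.1):
    return [wi + lr * xi * y for wi, xi in zip(w, x)]

w = [0.0, 0.0, 0.0]
x = [1.0, 0.0, 1.0]   # pre-synaptic activity
y = 1.0               # assume the post-synaptic neuron fired
w = hebbian_step(w, x, y)
print(w)  # weights grew only where the input was active: [0.1, 0.0, 0.1]
```

Note that this is unsupervised: the update uses only the local activities x and y, never a labeled target, which is why such rules need no training data set in the usual sense.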
Upvotes: 3 |
2017/04/01 | 638 | 2,735 | <issue_start>I am currently studying Java (SE & EE). I am wondering if it is a good platform for developing ML algorithms for AI.
**Areas of interest**: facial rec - Speech Rec - understanding conversation in group conversations.
**Financial Institutions**: Risk assessment ML, etc.<issue_comment>username_1: If the goal is to develop ML algorithms, then focus on maths concepts: linear algebra, probability, and statistics. Also practice CS problem solving with basic data structures and algorithms.
Python has good ML libraries, and if you know Java then you can pick up Python easily.
Upvotes: 1 <issue_comment>username_2: You can't "develop" ML algorithms without statistical knowledge; it simply doesn't work that way.
Programming is the cheap part of modelling; anyone who has done a computer science degree can do it. It's just taking some inputs and giving something back. Lots of frameworks can do that for you, and it's easy.
In order for you to "develop" ML algorithms, you should pursue a mathematics degree. Generally, you're expected to have a PhD or something similar to "develop" ML algorithms.
If you're really interested in programming, you should learn R and Python. Java is not a common data science programming language.
Upvotes: 0 <issue_comment>username_3: As others have pointed out, your level of maths/statistics knowledge is probably more important than your chosen programming language. That is particularly true w/r/t developing (presumably new) ML algorithms. OTOH, from an "applied ML" perspective, where you just use pre-baked implementations of existing algorithms, one of the big questions is "do good libraries for these various operations exist in language $X?"
Where the value of $X is "Java" the answer is "yes". There are tons of high-quality libraries of most popular and widely used ML algorithms. There are also tons of libraries for nearly everything else, which helps when constructing wider systems which incorporate elements of ML/AI.
That said, w/r/t machine learning specifically, there are probably *more* extant libraries and what-not in Python or R than in Java. Also, from a "rapid development" point of view, you may find that Python has some advantages as a result of its dynamic typing and lack of the "boiler-plate" found in Java.
Net-net, I'd say Java is a fine choice, along with Python or R, with C++ also in the mix. It may not be the *perfect* choice, but it's absolutely "good enough" and then some.
If you want a feel for some of the projects that exist now in various languages, go to mloss.org and use the [filter by language feature](http://mloss.org/software/language/). Click around there and examine some of the options you find.
Upvotes: 0 |
2017/04/01 | 466 | 2,041 | <issue_start>username_0: While studying data mining methods I have come to understand that there are two main categories:
* Predictive methods:
+ Classification
+ Regression
* Descriptive methods:
+ Clustering
+ Association rules
Since I want to predict the user availability (output) based on location, activity, battery level (input for the training model), I think it's obvious that I would choose "Predictive methods", but now I can't seem to choose between classification and regression. From what I understand this far, classification can solve my problem, because the output is "available" or "not available".
*Can classification provide me with the probability (or likelihood) of the user being available or not available?*
As in, the output wouldn't just be 0 (not available) or 1 (available), but it'd be something like:
* $80\%$ available
* $20\%$ not available
*Can this problem also be solved using regression?*
I get that regression is used for continuous output (not just 0 or 1 outputs), but can't the output be the continuous value of the user availability (like an output of $80$ meaning the user is $80\%$ available and, implicitly, $20\%$ unavailable)?<issue_comment>username_1: 1. Yes. For instance, the popular softmax regression gives you a probability distribution over the classes.
2. Yes. Softmax is a regression over a set of discrete classes.
We can use regression for classification; the most common strategy is to pick the most likely class as the prediction.
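As a minimal sketch of the idea in plain Python (no libraries): softmax turns arbitrary model scores into class probabilities, and taking the arg-max of those probabilities gives the hard available / not-available label. The scores below are made up purely for illustration.

```python
import math

# Softmax: map raw scores (logits) to probabilities that sum to 1.
def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for the classes ["available", "not available"].
probs = softmax([2.0, 0.6])
print([round(p, 2) for p in probs])  # [0.8, 0.2]

# Hard classification: index of the most likely class.
label = max(range(len(probs)), key=lambda i: probs[i])
print(label)  # 0, i.e. "available"
```

So the same model gives both the probability view ("80% available") and the 0/1 view asked about in the question.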
Upvotes: 3 <issue_comment>username_2: You can use naive Bayes classification and calculate posterior probabilities from prior beliefs, or logistic regression can be used with a sigmoid function.
Upvotes: -1 <issue_comment>username_3: Yes, you can use either classification or regression according to your output requirement:
If you want a labeled output, i.e. either available or not available, then classification should be used.
If you want the output in the form of a percentage of availability, then regression should be used.
Upvotes: 1 |
2017/04/03 | 602 | 2,398 | <issue_start>username_0: In the blog post [Building powerful image classification models using very little data](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html), bottleneck features are mentioned. What are the bottleneck features? Do they change with the architecture that is used? Are they the final output of convolutional layers before the fully-connected layer? Why are they called so?<issue_comment>username_1: >
> In the blog post [Building powerful image classification models using very little data](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html), bottleneck features are mentioned. What are the bottleneck features?
>
>
>
It's clearly written in the link you gave *the "bottleneck features" from the VGG16 model: the last activation maps before the fully-connected layers*.
>
> Do they change with the architecture that is used?
>
>
>
Sure. The author most likely used a pre-trained model (trained on a large dataset and now used only as a feature extractor).
>
> Are they the final output of convolutional layers before the fully-connected layer?
>
>
>
Yes.
>
> Why are they called so?
>
>
>
Given the input size to VGG, the HxW dimensions of the feature maps are halved after every max-pool operation. HxW is smallest at the last convolutional layer.
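That halving can be sketched with a quick sanity check, using VGG16's standard 224x224 input and its five 2x2/stride-2 max-pooling stages:

```python
# Spatial size of the feature maps after each max-pool in a VGG16-style
# network: each 2x2/stride-2 pool halves H and W.
size = 224                  # VGG16's standard input resolution
sizes = [size]
for _ in range(5):          # VGG16 has five max-pooling stages
    size //= 2
    sizes.append(size)
print(sizes)  # [224, 112, 56, 28, 14, 7]
```

The final 7x7 maps (with 512 channels in VGG16) are the "bottleneck" - the narrowest spatial point of the network before the fully-connected layers.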
Upvotes: 3 <issue_comment>username_2: First, we need to talk about transfer learning. Imagine you trained a neural network over a dataset of images to detect cats; you can reuse part of the training you have done to detect something else. That's known as transfer learning.
To do transfer learning, you remove the last fully connected layer from the model and plug your own layers in there. The "truncated" model's output is the set of features that will feed your model. Those are the bottleneck features.
VGG16 is a model pre-trained on the ImageNet catalog with very good accuracy. The post you shared uses that model as a base to detect cats and dogs with higher accuracy.
Bottleneck features depend on the model. In this case, we are using VGG16; there are other pre-trained models, such as VGG19 and ResNet-50.
It's like you are cutting a model and adding your own layers - mainly, the output layer that decides what you want to detect, the final output.
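The "cutting" can be sketched in a few lines with Keras (assuming TensorFlow/Keras is installed; this is not the blog post's exact code). `include_top=False` drops the fully-connected head, and the truncated model's output is the bottleneck features. Here `weights=None` is used only so the sketch runs without downloading anything; in practice you would pass `weights="imagenet"` to get the pre-trained filters.

```python
# Sketch: truncate VGG16 and use it as a bottleneck-feature extractor.
import numpy as np
from tensorflow.keras.applications import VGG16

# include_top=False removes the fully-connected layers ("the top").
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

images = np.zeros((1, 224, 224, 3), dtype="float32")  # a dummy batch
bottleneck = base.predict(images, verbose=0)
print(bottleneck.shape)  # (1, 7, 7, 512): the last activation maps
```

These `(7, 7, 512)` maps are then fed into whatever small classifier you train on top.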
Upvotes: 3 |
2017/04/04 | 519 | 2,159 | <issue_start>username_0: The machine learning community often only provides empirical results, but I am also interested in theoretical results and proofs. Specifically, is there a **mathematical proof** that shows that certain parameters work "better" than others for a certain task?<issue_comment>username_1: There is stuff like the [Universal Approximation Theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem).
There are also [investigations into the loss surface](https://arxiv.org/pdf/1703.09833v1.pdf) of neural networks.
And classics like [this explanation](http://www.bioinf.jku.at/publications/older/2304.pdf) of the vanishing gradient problem.
But I'm afraid the mathematical theory of neural networks only exists in bits and pieces in many different papers. And many of the most important questions can currently only be answered empirically.
Upvotes: 3 <issue_comment>username_2: [<NAME>](http://www-isl.stanford.edu/~cover/papers/paper2.pdf) and [<NAME>](http://www.inference.org.uk/itila/book.html) proved the capacity of a perceptron. This proof was [recently extended](https://arxiv.org/abs/1810.02328) to neural networks. All of them provide upper bounds for the number of parameters needed to learn something.
Upvotes: 0 <issue_comment>username_3: Not really. At its core, machine learning from an application perspective often seeks to produce human-level results, but there isn't any theorem describing human understanding of reality.
Proving that computer vision works well is essentially like proving you have a correct understanding of human perception.
It becomes somewhat circular, and while proofs exist for certain properties of data, none of them hold in general. I mean, think about trying to describe reality: it exists on a lower-dimensional manifold, but analytically describing it? I don't think so.
Even proving robustness ends up being somewhat futile, since even if you correctly eliminate adversarial examples, this doesn't mean your CV application will produce correct results in general, only that the classification is robust (robust and correct are two different things).
Upvotes: -1 |
2017/04/05 | 720 | 2,953 | <issue_start>I have read that all the math responsible for modern-day machine learning and AI was already in place in the 1900s, but we did not have the computational resources to implement those algorithms. So, is that true? And if it is, in what areas of machine learning do researchers work? And will all future breakthroughs depend only on increases in computational resources?<issue_comment>username_1: High-level answer: the increase in resources has been important in AI, and definitely was a factor with Deep Blue, but machine learning is a newer method that seems to produce better results with fewer resources on problems of greater complexity.
Here is an article on AlphaGo's hardware: ["Google reveals the mysterious custom hardware that powers AlphaGo"](http://www.theverge.com/circuitbreaker/2016/5/19/11716818/google-alphago-hardware-asic-chip-tensor-processor-unit-machine-learning)
Also an interesting analysis on Quora: ["What hardware does AlphaGo run on? Is it customized hardware for best performance?"](https://www.quora.com/What-hardware-does-AlphaGo-run-on-Is-it-customized-hardware-for-best-performance)
Still pretty powerful systems, but I think the algorithms are as important as the computing resources, because all the hardware in the world won't help in the algorithms are poor.
<NAME>, creator of [Giraffe Chess](https://arxiv.org/pdf/1509.01549.pdf), said:
>
> "In the ensuing two decades [since Deep Blue], both computer hardware and AI research advanced the state-of-art chess-playing computers to the point where even the best humans today have no realistic chance of defeating a modern chess engine running on a smartphone."
> Source: [TechExplore](https://techxplore.com/news/2015-09-giraffe-machine-taught-chess-higher.html)
>
>
>
which suggests that hardware and software are both important parts of the overall equation.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Most of the underlying math (like back-propagation) was discovered a long time ago, and advances in hardware have only recently made it possible to tackle some problems within a reasonable amount of time, but that is not the only reason. Some of the other things that also made deep learning pick up include:
* The availability of large, labeled, quality data - deep learning generally requires a lots of data to train successfully. This kind of data just wasn't there until recently.
* Algorithmic improvements (i.e., further improvements on the maths) - better weight initialization techniques, the discovery of the ReLU as a replacement for the sigmoid and tanh activations, which goes a long way towards solving the vanishing gradient problem, the discovery of neural network dropout, which is found to reduce overfitting, etc.
* The discovery of better neural network architectures.
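The ReLU point in the list above can be made concrete with a tiny comparison (plain Python, illustrative values only): the sigmoid's derivative collapses toward zero for large inputs, which is what starves gradients in deep sigmoid networks, while the ReLU's derivative stays at 1 for positive inputs.

```python
import math

# Derivative of the sigmoid: s(x) * (1 - s(x)), at most 0.25 and
# vanishing as |x| grows.
def sigmoid_grad(x):
    s = 1 / (1 + math.exp(-x))
    return s * (1 - s)

# Derivative of the ReLU: 1 for positive inputs, 0 otherwise.
def relu_grad(x):
    return 1.0 if x > 0 else 0.0

for x in [0.0, 5.0, 10.0]:
    print(x, sigmoid_grad(x), relu_grad(x))
# The sigmoid gradient shrinks toward 0 as x grows; the ReLU's stays at 1.
```

Multiplying many such sub-0.25 sigmoid gradients across layers is exactly the vanishing gradient problem the answer refers to.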
So, whilst the computing power is certainly a big part of it, it certainly isn't the only factor.
Upvotes: 1 |
2017/04/05 | 376 | 1,465 | <issue_start>I assume there must be a "signal-driven", and maybe also real-time, programming language, which is based on connectivity data more than on variables (int, string, etc.).
I would like to have a language without equations (x=4) but more like "x related to 4" or "cat related to animal", etc.<issue_comment>username_1: * What you need are other ways of [knowledge representation](https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning), such as semantic networks or conceptual graphs. There you can define any possible relation between your entities. The knowledge of "x related to 4" fits exactly into "frames" and "semantic networks".
* [Jaynes](https://en.wikipedia.org/wiki/Edwin_Thompson_Jaynes), in his book, discusses thoroughly what "plausibility" means and why we need to take weak syllogisms into account and start using probability theory as a platform for developing a (general) AI. This might also help with your "reasoning" phase (after you've developed your knowledge base).
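The semantic-network idea can be sketched in a few lines of plain Python (a toy illustration, not a real knowledge-representation language): facts are stored as (subject, relation, object) triples instead of variable assignments.

```python
# Toy semantic network: relations between entities, not equations.
triples = {
    ("x", "related_to", "4"),
    ("cat", "is_a", "animal"),
    ("animal", "is_a", "living_thing"),
}

# Query: everything that `subject` stands in `relation` to.
def related(subject, relation):
    return {o for s, r, o in triples if s == subject and r == relation}

print(related("cat", "is_a"))  # {'animal'}
```

Triple stores like this are exactly the shape that RDF-style knowledge graphs and semantic networks build on.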
Upvotes: 2 <issue_comment>username_2: I don't know if this is what you want, but Artificial Intelligence Markup Language or simply AIML is something that you should consider.
The only problem I see with this language is that it is not popular, and thus there aren't many compilers for it.
---
Here is an example of AIML.
Code from Tutorials Point:
```
<category>
  <pattern>HELLO ALICE</pattern>
  <template>Hello User!</template>
</category>
```
Result:
```
User: Hello Alice
Bot: Hello User
```
Upvotes: 1 |
2017/04/08 | 860 | 3,687 | <issue_start>username_0: I have an ANN model that receives an input and produces an output. The output is an action that interacts with the environment and changes the input accordingly. The network has a desired environment state which, in any turn, decides the desired response and trains the network on that basis.
Currently, the network works in discrete time. *How can I make this network work in a continuous manner? Can you provide some resources and links if there is any past or current research on continuous AI?*
To be more concrete, the system starts with the current environment state, for example, `[1 1 1]`, then produces an output. In current system, the **next step** takes the final state of the system as input, for example, `[1 2 2]`, but we know that such a thing doesn't happen in physical world and the system goes from `[1 1 1]` to, for example, `[1 1 2]`, and then to `[1 2 2]`, and that middle step is something that a discrete-time AI can't figure out.
The very case that I'm working on is the simulation for an autopilot cart in which the model is incapable of taking subtle things like "*the maximum speed that you can turn the steering wheel*" into consideration. I don't want to add these complexities to the model since if the model is perfect, then the result is deterministic and there is no need for AI. I want the AI to be able to make a decision in each step based on the current state of the system in continuous time.<issue_comment>username_1: I don't think the transformation of `[1 1 1]` into `[1 2 2]` needs a middle step. Actuators can work simultaneously and they do not have to wait for each other to complete their job. I must even note that, if your next output is `[1 2 2]`, then performing `[1 1 2]` is simply wrong when following a trajectory (if that's your case). So, I guess the middle step in your example is `[1 1.5 1.5]`. Think of line segmentation: when you segment a line, you still have your slope, and you do not create "steps". So what you are following in your neural network-based controller is exactly the pattern you need. Your problem is probably the closed-loop frequency of your controller. Better NN performance leads to quicker response and then better actuation.
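That line-segmentation point can be sketched directly (plain Python, illustrative states only): linearly interpolating between the two states moves all components proportionally, so the midpoint of `[1 1 1] -> [1 2 2]` is `[1 1.5 1.5]`, not `[1 1 2]`.

```python
# Linear interpolation between two actuator states: each intermediate
# state is a proportional blend of start and end, preserving the "slope".
def interpolate(start, end, steps):
    return [
        [s + (e - s) * t / steps for s, e in zip(start, end)]
        for t in range(steps + 1)
    ]

print(interpolate([1, 1, 1], [1, 2, 2], 2))
# [[1.0, 1.0, 1.0], [1.0, 1.5, 1.5], [1.0, 2.0, 2.0]]
```

Raising `steps` (i.e. the closed-loop frequency) makes the discrete sequence approximate the continuous trajectory as closely as needed.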
Upvotes: 1 <issue_comment>username_2: From the way you have explained things above, it seems more like a problem with your code and not something to do with the environment. The terms discrete and continuous describe how the outside environment acts, rather than how your code takes its steps. Here are some lines from the book Artificial Intelligence: A Modern Approach:
>
> The discrete/ continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. For example, the chess environment has a finite number of distinct states (excluding the clock). Chess also has a discrete set of percepts and actions. Taxi driving is a continuous state and continuous-time problem: the speed and location of the taxi and of the other vehicles sweep through a range of continuous values and do so smoothly over time. Taxi-driving actions are also continuous (steering angles, etc.). Input from digital cameras is discrete, strictly speaking, but is typically treated as representing continuously varying intensities and locations.
>
>
>
So, continuous or discrete is not something that should be talked about as a problem of the code; it is basically a property of the environment. Your concern with the device is about the code. I suggest that you upload the code to a Git repository and ask people to improve it.
I hope this helps!
Upvotes: 3 [selected_answer] |
2017/04/11 | 1,222 | 4,929 | <issue_start>username_0: Let's suppose there are two AGIs, $A$ and $B$. Assume that $B$ has the ability to modify $A$, but this action of modifying is considered **bad** by $B$. Can $A$ ever convince $B$ to modify $A$?<issue_comment>username_1: [Strong-narrow AI](https://futureoflife.org/2017/03/23/ai-risks-principle/) has reached important milestones recently, but, from what I can tell, we aren't even close to the creation of an AGI, and there are fundamental issues no one currently seems to have any idea of how to solve.
**Regarding your question, the simple answer is: it depends on which AI is smarter.**
*(It's more nuanced than that, but right now the question is very general, which is probably apropos;)*
I'd recommend reading [<NAME>'s Neuromancer trilogy](https://en.wikipedia.org/wiki/William_Gibson) if you're interested in AI-in-a-box (it's sort of about that) and also the more recent [Quantum Thief Trilogy by <NAME>](https://en.wikipedia.org/wiki/Hannu_Rajaniemi) to get a sense of the mechanics of the issue.
There is a great deal of academic literature on this subject, but it will likely require a little bit of getting up to speed in terms of basic research into the AI field. Future of Life Institute may not be a bad place to start: <https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/>
Upvotes: 2 <issue_comment>username_2: An AI takes decisions based on the output of its **utility function**. This is just a fancy term for the calculations an AI performs to compare the profit and loss of taking a certain decision.
There is always a tight analogy between an AGI and a human. You can juxtapose the utility function to how we make decisions by considering marginal profits of doing something over the other.
Now your question. The answer is a big **NO**. This is because an AI (or a human) never takes an action that gives a low score on its utility function or is against the fundamental view of the machine (or human). *AI only cares about the utility function and goal state (plus the instrumental goals), nothing else*. Our AI outside the box has been created with the purpose of not opening the box. For it, the action of opening the box has a very low score, or a negative score.
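A toy sketch of that decision rule (plain Python, with purely hypothetical utility scores - a real AGI's utility function would of course be far richer): the agent mechanically picks the highest-scoring action, so an action it scores very low is simply never chosen, no matter what it is "told".

```python
# Toy utility-maximizing agent: always chooses the action with the
# highest utility score.
utilities = {
    "keep_goal": 10.0,
    "allow_modification": -100.0,  # hypothetical: B scores this as very bad
}

def choose(utilities):
    return max(utilities, key=utilities.get)

print(choose(utilities))  # keep_goal
```

No amount of argument from A changes this outcome unless it changes the scores themselves, which is exactly the point the answer is making.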
Now you might be wondering why an AI can't be convinced of doing something that gives a low score in its utility function. Keep reading.
Consider this for a second. Suppose an AGI was created, assigned the task of copying the handwriting of others and improving its own writing. You can think of it as a smart writing hand. Now, what this AI might come up with is that, in order to practice writing, it needs more pages and thus it needs to cut more trees for that. No matter what you do, the AI will not stop cutting more and more trees. It might even replicate itself into machines that will cut trees for it. Now, the only thing that can be done is to turn off the AGI. But a smart machine would have anticipated the possibility of being turned off and would have transferred itself to many other machines over the globe. The important question to ask here is: why is it doing all this? It is not because the machine wants to live, as we humans want to. The only reason why it wants to live is to fulfill its goal.
You simply can't change the fundamental view of the machine. The change could result in the machine not being able to attain its goal, or not to the same extent (optimal profit). This is the reason why you can't convince a machine to do something that it was asked not to do as its primary task.
You, as a person, are living a life now, and have some fundamental beliefs. Let us consider that you believe in not killing someone. Now, suppose that I give you a pill, and tell you that after taking this pill it will rewire your brain and you will kill the first four people you see. But, after that, you will achieve pure satisfaction and happiness. Now you will definitely not take that pill as it conflicts with something that you believe in now. Also, you will try your best and fight back not to take that pill. The same thing applies to an AGI too. It doesn't matter what your future version will feel or attain after rewiring of the brain (changes in code), it is what and who you are now matters. This video link will [help](https://www.youtube.com/watch?v=4l7Is6vOAOA&t=4s).
I hope it answers your question. There are lots of things to consider here. I have assumed a few things and tried to answer according to that.
There is one more thing. **We don’t tell the machines how to do something, instead, we only tell them what to do** (at least in the case of an AI). This is because sometimes we don't know the optimal way of solving certain problems. In the case of your question, we don't know what these two machines will do or say to each other. It will be a very interesting thing to hear or watch.
Upvotes: 3 [selected_answer] |
2017/04/12 | 3,080 | 13,813 | <issue_start>username_0: I am researching affective computing. Particularly, I'm studying the part of emotion recognition, i.e. the task of recognising the emotions that are being felt by the user/subject. For example, [affectiva](https://www.affectiva.com/emotion-ai-overview/) can be used to this end. I have concerns not in the validity of these models, but in what we are going to do with them.
What about responding to emotions? Will computers be able to really understand user emotions?<issue_comment>username_1: I was an Undergraduate Research Scholar - my team and I developed an algorithm to detect human emotions from a touch screen, which is under further improvement and development by PhD scholars of my advisor.
From studying the literature, I can say that the reverse is a difficult task - detecting affect.
It is even more difficult to detect human emotions with a lightweight solution - i.e., **without using heavy and wired hardware** - which is the need of the hour.
Once it's done, there is a plethora of applications - focused ad marketing, enhancing user experience, game development, etc.
Upvotes: 1 <issue_comment>username_2: **Will computers be able to understand user emotions?**
The term *Understand* is multidimensional, so characterizing the degree of understanding — emotional or otherwise — is a slippery task. Nonetheless, some forms of AI emotional understanding are possible.
**An Interesting Simple Case**
Even the embedded programs of velocity sensitive musical keyboards are emotionally aware at a primitive level. Using the key depression velocity to drive the note parameters of attack, timbre, and volume allows the accomplished user's emotions to pass through the software into the emotional content of the music, which can be experienced by listeners just as with instruments that are not digital in nature.
**More Advanced Emotional Capabilities**
At the other extreme is a wide array of human emotional capabilities. One example is the set of complex recognition functions underneath the typical listening skill set of a counselling professional.
The term *affective computing* was mentioned in the question. A counselor may note the affect of the counselled, reading tone of voice, facial expression sequences, and body language. From this, a clearer picture of the internal emotional state of the counselled can be understood. These recognition and analysis abilities can determine the subject's internal emotional state more accurately and comprehensively than a masterful analysis of a functional MRI or the detection of neuro-chemical metabolites in blood.
There seems to be no theoretical principle that limits computers from mastering the front end of a counselor's skill set, the recognition of emotion to produce a coded sequence of affects. The degree to which software can interpret the sequence of affects (in combination with concurrent natural language expressions) and determine the emotional condition (and potential market-oriented decision patterns) of the human is yet to be determined.
There are other emotional capabilities beyond the skills of emotional recognition and interpretation. Compassion is largely the logical integration of ethics in the absence of self-centered motive with the results of emotional recognition and interpretation. The addition of emotional imitation yields empathy.
The development of such capabilities in automata is more important than the shallow and therefore myopic motives of e-Commerce. There may be applications for toys, education, entertainment, and remedy for the growing coldness of a technological society. (Such is discussed further below.)
Functions of recognition, interpretation, the application of rules or meta-rules in the ethical domain to these interpretations, and imitation may well be within the capabilities of digital systems.
<NAME> introduced the possibility of telepathy to the feature set of both the human mind and automata. It may seem like telepathy is confined to the domain of fiction, however an emoticon can be the tele-form of a component of affect, such as a smile. In this way, consensual technology assisted telepathy has actually become commonplace.
With the erosion of privacy in a culture with increasing interaction with digital systems by an increasing segment of the public, less consensual telepathic techniques may be developed and mastered.
**Deeper Aspects of the Question**
The doubt lies in the existence of a soul or some other autonomous and non-deterministic aspect of the human person. The existence of these things, and whether the simulation of them can be accomplished with a Turing machine and practically deployed to the Von Neumann architecture or a collection of them, has been neither proven nor disproven formally. If autonomous and non-deterministic elements exist in people, then, even if they can be simulated, we cannot infer they can be realized as independently autonomous.
Challenging the implications of the MIT coined term *Meat Machine*, the notion that humans are capable of intention by fiat is not disproven, yet a pure Turing machine clearly cannot intend in that way.
This is important in the context of the question in that a computer may only be able to simulate frustration when an intention is thwarted by the conditions of life. Actual frustration may not be possible.
**Current Publicly Experienced Progress**
One interesting entry point of computer science into the domain of human emotions is the story-based memory and reasoning model proposed by <NAME> of Yale U in the 1990s. Although the machinery Amazon uses underneath its ever-improving ability to recommend movies in a buyer-specific way is company confidential, one must wonder.
Can e-Commerce develop user profiling regarding books and movies to comprehend the story plot or the arc of protagonist's emotional development? Can it do so sufficiently to match user viewing and purchase with other products? Is that what Amazon is beginning to do? Short of an unethical Amazon employee, we can only guess. Certainly, what is transpiring in leading edge e-Commerce has moved beyond word or phrase matching or tracking interest in particular authors, screenwriters, directors, producers, and stars.
If <NAME>'s story-based memory and reasoning has penetrated into e-Commerce, emotional analysis of individuals within the public is well underway, since story plots and protagonist developmental arcs are bound up in climaxes and the setting up of climaxes, all related to emotional states. If not, emotional analysis of groups within the public has been underway since public relations functions entered into the roles of enterprise IT systems. (I know for a fact that such occurred decades ago.)
Even if the likes of Facebook, Google, and Amazon are attempting to approximate in various ways the emotional states of individuals in real time, the matching of movie plots of protagonist arcs and such pattern matching and naive categorization is far from what a good counselor or a self-actualized friend or family member might do.
Computer systems to which the public is exposed do not yet appear to accurately developing comprehension of the buyer's emotion. That is one area where even the most progressive interfaces are still shy. Furthermore, the computer interface itself is still as dry as it was in the late 20th century.
Short of breaches of confidentiality, what capabilities to recognize, compare, analyze, or simulate emotions may exist in proprietary labs is solely a mater of conjecture.
**Warm Interfaces**
Returning to the question of the general coldness of cybernetic interfaces in a technological society, there is much for the computer science community to learn. For one thing, the soft skills of actual technical people often leave much to be desired, so a cultural shift from cold development teams to culturally progressive and warm teams might be necessary.
Furthermore, one who has engaged with various PaaS (personality as a service) devices such as Siri, Google Assistant, Alexa, Cortana, and Microsoft's Office Assistant, Clippy, might find the paperclip personality of Clippy to be the warmest of the list. Perhaps Clippy didn't see a higher level of acceptance because the disparity between the coldness of a grid of spreadsheet cells and the attempted warmth of the animated character was too great, and the little guy tended to get in the way of some portion of the interface, which could be annoying. (Integration of concepts was poor.)
Nonetheless, the animated style of Clippy is intended to create a mental association with cartoon characters, which is brilliant. Cartoon character design and storytelling through comic and cartoon media is a creative area of technology that had mastered computer-generated warmth long before intelligent assistants began to appear.
One can't help thinking, upon general survey of intelligent assistant history, that the more the system tries to be human, the less human it seems. Human response to emotional content in a cybernetic interface is like that of color matching or synthesized musical instruments. If one is attempting to synthesize, the synthesis has to be good. If it can't be fully convincing, then there is more authenticity in inventing something that is not pretending to be something real. A caricature of real life objects (like Sponge Bob or Road Runner) avoids exposing an unacceptable margin of error in the imitation of real life. [1]
Our sensitivity in this area is so acute that if a real person, working in a customer service department, acts too mechanically such that we have even the slightest doubt as to whether we are talking to a human being, the impression is negative and we want to hang up and look for the function we wish to initiate on the company's web site.
It is unclear whether the trend will be for people to become more like cogs in a machine or whether usage patterns will drive companies to design their user interfaces and the mechanisms behind them to exude something like hospitality or congeniality. Although <NAME> seems to be correct in his proposition that technology has long been driving humanity rather than the other way around, the DNA nature of the human mind will perhaps force humanity to remain remarkably human. Thus the evolution of cybernetic interface professionals may continue to seek the meaning and simulation of warmth.
The problematic, controvertible, and somewhat bizarre pursuit of Turing Testing will probably endure.
**Concerns About Emotional Modelling Uses**
Whether an emotional recognition, modelling, analysis, and comparison suite, obviously a powerful tool-set, is used for the betterment of humanity or its detriment is a question like that of any other powerful tool-set. Mass psychology, nuclear science, statistics, and genetics have in common the potential for great good or great evil.
In the context of this question, broadcast or more targeted information dissemination can be characterized as propaganda, which may not necessarily be perilous. Propaganda was used for women's suffrage and along the path of U.S. independence from Europe. Elements of broadcast and targeted control have created much geopolitical trouble, a topic outside of the scope of this social network, but pertinent and somewhat obvious, such that further mention is unnecessary anyway.
Individual profiling can be of great good too. It can be like the old subject-indexed Dewey Decimal library cards combined with a knowledgeable librarian when looking for information or items of entertainment or personal expression for borrowing, rental, or purchase.
Of course, profiling in combination with statistics can lead to the ability to obtain covert control over a mass of people, not just for the profit maximization of a company. An appropriately self-deluded megalomaniac or a collection of them in some kind of secret society could use emotional modelling and comprehensive user profiling for a large sample of national or global population for the injection of emotionally powered systems of belief. Such a person or group could theoretically succeed in covert and systematic sublimation of liberty to a kind of virtual authoritarian dictatorship.
How technology is used is always a concern. Transparency and accountability are required features of a society.
Just as in the other areas of science and technology mentioned above, it is up to those of us who are skilled enough to realize these systems to be intelligent and active elements within our society's accountability mechanisms. Whether a corporation or governmental department is leading humanity into perdition is something that can be judged during a job hunt and during the initial work period.
**Notes**
[1]
This is perhaps the strongest critique of Turing's Imitation Game philosophy. Cartoons, comics, and sci-fi creatures demonstrate that people don't seem to care whether intelligent beings can be distinguished from humans, and there is no proof that human intelligence is the only or the best example of intelligence.
Along these same lines, heroes are representations of intelligent behavior that epitomize some aspect of humanity which doesn't naturally occur in the pure form it takes in these protagonists. Yet actors are branded and marketed as stars because of this idyllic expression of character. It may be that humans don't really like human intelligence and constantly try to transcend it through fiction, but this results only in a public awareness that is never more than an anthropological fantasy.
Even AI is not an imitation game. It is a transcendence game. For this reason, the Turing Test is of limited value.
Upvotes: 4 [selected_answer] |
2017/04/12 | 3,048 | 13,722 | <issue_start>username_0: In our brain there is an area, near the fusiform gyrus and the occipital area, to recognize the human face. And in speech recognition, there is a technique named keyword spotting. Then I am wondering 1) if there is an area in our brain with a similar function for recognizing our names; 2) if a special face recognition function should be considered when we are building a robot?<issue_comment>username_1: I was an Undergraduate Research Scholar. My team and I developed an algorithm to detect human emotions from a touch screen, which is under further improvement and development by PhD scholars of my guide.
From studying the literature, I can say that the reverse, detecting affect, is a difficult task.
It is even more difficult to detect human emotions with a lightweight solution, i.e., **without using heavy, wired hardware**, which is the need of the hour.
Once that is done, there is a plethora of applications: focused ad marketing, enhancing user experience, game development, etc.
Upvotes: 1 <issue_comment>username_2: **Will computers be able to understand user emotions?**
The term *Understand* is multidimensional, so characterizing the degree of understanding — emotional or otherwise — is a slippery task. Nonetheless, some forms of AI emotional understanding are possible.
**An Interesting Simple Case**
Even the embedded programs of velocity sensitive musical keyboards are emotionally aware at a primitive level. Using the key depression velocity to drive the note parameters of attack, timbre, and volume allows the accomplished user's emotions to pass through the software into the emotional content of the music, which can be experienced by listeners just as with instruments that are not digital in nature.
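The mapping such embedded programs perform can be sketched in a few lines. The following is an illustrative sketch only, not the firmware of any particular keyboard; the curve shapes and parameter ranges are assumptions chosen for clarity:

```python
def note_params(velocity):
    """Map a MIDI-style key velocity (0-127) to note parameters.

    The specific curves are illustrative assumptions, not any real
    instrument's firmware.
    """
    v = max(0, min(127, velocity)) / 127.0  # normalize to 0.0-1.0
    volume = v ** 2                  # harder strike: louder
    attack_ms = 40.0 - 30.0 * v      # harder strike: sharper (shorter) attack
    brightness = 0.2 + 0.8 * v       # harder strike: brighter timbre
    return volume, attack_ms, brightness

# A soft press yields a quiet, slow, dark note; a hard press the opposite,
# letting the player's expressive intent pass through the software.
soft = note_params(20)
hard = note_params(120)
```

In this way the emotional content is carried entirely by the mapping; the program itself needs no model of emotion at all.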
**More Advanced Emotional Capabilities**
At the other extreme is a wide array of human emotional capabilities. One example is the set of complex recognition functions underneath the typical listening skill set of a counselling professional.
The term *affective computing* was mentioned in the question. A counselor may note the affect of the counselled, reading tone of voice, facial expression sequences, and body language. From this, a clearer picture of the internal emotional state of the counselled can be understood. These recognition and analysis abilities can determine the subject's internal emotional state more accurately and comprehensively than a masterful analysis of a functional MRI or the detection of neuro-chemical metabolites in blood.
There seems to be no theoretical principle that prevents computers from mastering the front end of a counselor's skill set, the recognition of emotions to produce a coded sequence of affects. The degree to which software can interpret the sequence of affects (in combination with concurrent natural language expressions) and determine the emotional condition (and potential market-oriented decision patterns) of the human remains to be seen.
There are other emotional capabilities beyond the skills of emotional recognition and interpretation. Compassion is largely the logical integration of ethics in the absence of self-centered motive with the results of emotional recognition and interpretation. The addition of emotional imitation yields empathy.
The development of such capabilities in automata is more important than the shallow and therefore myopic motives of e-Commerce. There may be applications for toys, education, entertainment, and remedy for the growing coldness of a technological society. (Such is discussed further below.)
Functions of recognition, interpretation, the application of rules or meta-rules in the ethical domain to these interpretations, and imitation may well be within the capabilities of digital systems.
<NAME> introduced the possibility of telepathy to the feature set of both the human mind and automata. It may seem like telepathy is confined to the domain of fiction, however an emoticon can be the tele-form of a component of affect, such as a smile. In this way, consensual technology assisted telepathy has actually become commonplace.
With the erosion of privacy in a culture with increasing interaction with digital systems by an increasing segment of the public, less consensual telepathic techniques may be developed and mastered.
**Deeper Aspects of the Question**
The doubt lies in the existence of a soul or some other autonomous and non-deterministic aspect of the human person. The existence of these things, and whether their simulation can be accomplished with a Turing machine and practically deployed to the Von Neumann architecture or a collection of them, has been neither proven nor disproven formally. If autonomous and non-deterministic elements exist in people, then, even if they can be simulated, we cannot infer that they can be realized as independently autonomous.
Challenging the implications of the MIT-coined term *Meat Machine*, the notion that humans are capable of intention by fiat is not disproven, yet a pure Turing machine clearly cannot intend in that way.
This is important in the context of the question in that a computer may only be able to simulate frustration when an intention is thwarted by the conditions of life. Actual frustration may not be possible.
Upvotes: 4 [selected_answer] |
2017/04/14 | 1,534 | 5,987 | <issue_start>username_0: I am trying to find some existing research on how to select the number of hidden layers and their sizes for an LSTM-based RNN.
Is there an article where this problem is investigated, i.e., how many memory cells should one use? I assume it totally depends on the application and the context in which the model is being used, but what does the research say?
Specifically for LSTMs, see this Reddit discussion [Does the number of layers in an LSTM network affect its ability to remember long patterns?](https://www.reddit.com/r/MachineLearning/comments/4behuh/does_the_number_of_layers_in_an_lstm_network/)
The main point is that there is usually no rule for the number of hidden nodes you should use, it is something you have to figure out for each case by trial and error.
If you are also interested in feedforward networks, see the question [How to choose the number of hidden layers and nodes in a feedforward neural network?](https://stats.stackexchange.com/q/181/82135) at Stats SE. Specifically, [this answer](https://stats.stackexchange.com/a/136542/82135) was helpful.
>
> There's one additional rule of thumb that helps for supervised learning problems. You can usually prevent over-fitting if you keep your number of neurons below:
>
>
> $$N\_h = \frac{N\_s} {(\alpha \* (N\_i + N\_o))}$$
>
>
> * $N\_i$ = number of input neurons.
> * $N\_o$ = number of output neurons.
> * $N\_s$ = number of samples in training data set.
> * $\alpha$ = an arbitrary scaling factor usually 2-10.
>
>
> [Others recommend](http://www.solver.com/training-artificial-neural-network-intro) setting $\alpha$ to a value between 5 and 10, but I find a value of 2 will often work without overfitting. You can think of $\alpha$ as the effective branching factor or number of nonzero weights for each neuron. Dropout layers will bring the "effective" branching factor way down from the actual mean branching factor for your network.
>
>
> As explained by this [excellent NN Design text](http://hagan.okstate.edu/NNDesign.pdf#page=469), you want to limit the number of free parameters in your model (i.e. its [degree](https://stats.stackexchange.com/q/57027/15974) or the number of nonzero weights) to a small portion of the degrees of freedom in your data. The degrees of freedom in your data is the number of samples \* degrees of freedom (dimensions) in each sample or $N\_s \* (N\_i + N\_o)$ (assuming they're all independent). So $\alpha$ is a way to indicate how general you want your model to be, or how much you want to prevent overfitting.
>
>
> For an automated procedure you'd start with an alpha of 2 (twice as many degrees of freedom in your training data as your model) and work your way up to 10 if the error (loss) for your training dataset is significantly smaller than for your test dataset.
>
>
>
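As a quick illustration, the rule of thumb quoted above is simple arithmetic. The sample counts below are made up for the example:

```python
def hidden_neurons_upper_bound(n_samples, n_in, n_out, alpha=2):
    """Rule-of-thumb ceiling on hidden neurons to help prevent overfitting:
    N_h = N_s / (alpha * (N_i + N_o)), with alpha typically 2-10.
    """
    return n_samples / (alpha * (n_in + n_out))

# E.g. 10,000 training samples, 8 input neurons, 1 output neuron:
generous = hidden_neurons_upper_bound(10_000, 8, 1, alpha=2)       # ~555 neurons
conservative = hidden_neurons_upper_bound(10_000, 8, 1, alpha=10)  # ~111 neurons
```

Starting near the generous bound and raising `alpha` whenever training loss is far below test loss mirrors the automated procedure described in the quote.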
Upvotes: 5 <issue_comment>username_2: The selection of the number of hidden layers and the number of memory cells in an LSTM probably depends on the application domain and the context where you want to apply it.
The optimal number of hidden units could be smaller than the number of inputs. AFAIK, there is no rule like "multiply the number of inputs by $N$". If you have a lot of training examples, you can use multiple hidden units, but sometimes just 2 hidden units work best with little data.
Usually, people use one hidden layer for simple tasks, but nowadays research in deep neural network architectures shows that many hidden layers can be fruitful for difficult problems such as object, handwritten character, and face recognition.
Upvotes: 3 <issue_comment>username_3: Have a look at the paper [Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling](https://wiki.inf.ed.ac.uk/twiki/pub/CSTR/ListenTerm1201415/sak2.pdf) (2014), where different LSTM architectures are compared. In the abstract, the authors write the following.
>
> We show that a two-layer deep LSTM RNN where each LSTM layer has a linear recurrent projection layer can exceed state-of-the-art speech recognition performance
>
>
>
Upvotes: 2 <issue_comment>username_4: In general, there are no guidelines on how to determine the number of layers or the number of memory cells in an LSTM.
The number of layers and cells required in an LSTM might depend on several aspects of the problem:
1. The complexity of the dataset, such as the number of features, the number of data points, etc.
2. The data-generating process. For example, consider the prediction of oil prices compared to the prediction of GDP in a well-understood economy. The latter is much easier than the former. Thus, predicting oil prices might require more LSTM memory cells to achieve the same accuracy as predicting GDP.
3. The accuracy required for the use case. The number of memory cells will **heavily** depend on this. If the goal is to beat the state-of-the-art model, in general, one needs more LSTM cells. Compare that to the goal of coming up with a reasonable prediction, which would need fewer LSTM cells.
I follow these steps when modeling using LSTM.
1. Try a single hidden layer with 2 or 3 memory cells. See how it performs against a benchmark. If it is a time series problem, then I generally make a forecast from classical time series techniques as a benchmark.
2. Try and increase the number of memory cells. If the performance is not increasing much then move on to the next step.
3. Start making the network deeper, i.e. add another layer with a small number of memory cells.
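These three steps amount to a small outer search loop. The sketch below assumes a `train_and_score` function (hypothetical, standing in for training with your framework of choice and returning a validation score); the doubling schedule and stopping threshold are arbitrary choices:

```python
def grow_lstm(train_and_score, max_cells=64, min_gain=0.01):
    """Greedy sizing for a single LSTM layer: keep doubling the number of
    memory cells while the validation score improves by at least min_gain.
    A fuller version would then try adding a second (deeper) layer.
    """
    cells = 2
    best = train_and_score(cells)
    while cells < max_cells:
        score = train_and_score(cells * 2)
        if score - best < min_gain:   # widening stopped paying off
            break
        cells, best = cells * 2, score
    return cells, best

# Toy stand-in returning pre-baked validation scores instead of training:
scores = {2: 0.70, 4: 0.78, 8: 0.80, 16: 0.805}
chosen, achieved = grow_lstm(lambda c: scores[c])   # settles on 8 cells
```

The same loop structure applies to step 3; only the thing being grown (depth instead of width) changes.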
As a side note, there is no limit to the amount of labor that can be devoted to reaching the global minimum of the loss function and tuning the best hyper-parameters. So the strategy should be to focus on the end goal of the modeling rather than to push accuracy as high as possible.
Most of the problems can be handled using 2-3 layers of the network.
Upvotes: 3 |
2017/04/16 | 1,654 | 6,409 | <issue_start>username_0: I have a group of structures in a program that are very specific in their meaning, e.g. this is a piece of code
```
randomItem = objects.concept.random("buyable")
idea.example(objects.concept.random("family", "friend")).does({
action: "go",
target: objects.concept.random("shop")
}).then({
action: "buys",
target: randomItem,
several: true
}).then({
question: true,
action: "know",
property: "amount",
target: randomItem,
several: true
})
```
I have worked with natural language parsers before.
How do I go about transforming this to natural language (the other way around)? Is there any way or method? I have logical structures in which I know who the subject is, and what the verb and target are.
Which methods can I use to generate language from this?