| date | nb_tokens | text_size | content |
|---|---|---|---|
2017/04/17 | 1,530 | 5,921 | <issue_start>username_0: I recently read about [federated learning](https://ai.googleblog.com/2017/04/federated-learning-collaborative.html) introduced by Google, but it seems to be like [edge computing](https://en.wikipedia.org/wiki/Edge_computing).
What is the difference between edge computing and federated learning?<issue_comment>username_1: Your question is quite broad, but here are some tips.
Specifically for LSTMs, see this Reddit discussion [Does the number of layers in an LSTM network affect its ability to remember long patterns?](https://www.reddit.com/r/MachineLearning/comments/4behuh/does_the_number_of_layers_in_an_lstm_network/)
The main point is that there is usually no rule for the number of hidden nodes you should use; it is something you have to figure out for each case by trial and error.
If you are also interested in feedforward networks, see the question [How to choose the number of hidden layers and nodes in a feedforward neural network?](https://stats.stackexchange.com/q/181/82135) at Stats SE. Specifically, [this answer](https://stats.stackexchange.com/a/136542/82135) was helpful.
>
> There's one additional rule of thumb that helps for supervised learning problems. You can usually prevent over-fitting if you keep your number of neurons below:
>
>
> $$N_h = \frac{N_s}{\alpha (N_i + N_o)}$$
>
>
> * $N_i$ = number of input neurons.
> * $N_o$ = number of output neurons.
> * $N_s$ = number of samples in training data set.
> * $\alpha$ = an arbitrary scaling factor, usually 2-10.
>
>
> [Others recommend](http://www.solver.com/training-artificial-neural-network-intro) setting $\alpha$ to a value between 5 and 10, but I find a value of 2 will often work without overfitting. You can think of $\alpha$ as the effective branching factor or number of nonzero weights for each neuron. Dropout layers will bring the "effective" branching factor way down from the actual mean branching factor for your network.
>
>
> As explained by this [excellent NN Design text](http://hagan.okstate.edu/NNDesign.pdf#page=469), you want to limit the number of free parameters in your model (i.e. its [degree](https://stats.stackexchange.com/q/57027/15974) or the number of nonzero weights) to a small portion of the degrees of freedom in your data. The degrees of freedom in your data is the number of samples times the degrees of freedom (dimensions) in each sample, or $N_s (N_i + N_o)$ (assuming they're all independent). So $\alpha$ is a way to indicate how general you want your model to be, or how much you want to prevent overfitting.
>
>
> For an automated procedure you'd start with an alpha of 2 (twice as many degrees of freedom in your training data as your model) and work your way up to 10 if the error (loss) for your training dataset is significantly smaller than for your test dataset.
>
>
>
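Plugging illustrative numbers into the quoted rule of thumb (the counts here are made up, just to show the arithmetic):

```python
def max_hidden_neurons(n_samples, n_inputs, n_outputs, alpha=2):
    """Rule-of-thumb ceiling on hidden neurons to help avoid overfitting."""
    return n_samples / (alpha * (n_inputs + n_outputs))

# e.g. 1,000 training samples, 10 input features, 1 output, alpha = 2
print(max_hidden_neurons(1000, 10, 1))  # ~45, so stay below ~45 neurons
```

With a larger $\alpha$ (say 10) the same data would only justify about 9 neurons, which matches the idea of working your way up from $\alpha = 2$ only if you see overfitting.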
Upvotes: 5 <issue_comment>username_2: The selection of the number of hidden layers and the number of memory cells in an LSTM probably depends on the application domain and the context where you want to apply the LSTM.
The optimal number of hidden units could be smaller than the number of inputs. AFAIK, there is no rule like "multiply the number of inputs by $N$". If you have a lot of training examples, you can use multiple hidden units, but sometimes just 2 hidden units work best with little data.
Usually, people use one hidden layer for simple tasks, but nowadays research in deep neural network architectures shows that many hidden layers can be fruitful for difficult problems such as object, handwritten-character, and face recognition.
Upvotes: 3 <issue_comment>username_3: Have a look at the paper [Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling](https://wiki.inf.ed.ac.uk/twiki/pub/CSTR/ListenTerm1201415/sak2.pdf) (2014), where different LSTM architectures are compared. In the abstract, the authors write the following.
>
> We show that a two-layer deep LSTM RNN where each LSTM layer has a linear recurrent projection layer can exceed state-of-the-art speech recognition performance
>
>
>
Upvotes: 2 <issue_comment>username_4: In general, there are no guidelines on how to determine the number of layers or the number of memory cells in an LSTM.
The number of layers and cells required in an LSTM might depend on several aspects of the problem:
1. The complexity of the dataset, such as the number of features, the number of data points, etc.
2. The data-generating process. For example, predicting oil prices is harder than predicting the GDP of a well-understood economy, so predicting oil prices with the same accuracy might require more LSTM memory cells.
3. The accuracy required for the use case. The number of memory cells will **heavily** depend on this. If the goal is to beat the state-of-the-art model, in general, one needs more LSTM cells. Compare that to the goal of coming up with a reasonable prediction, which would need fewer LSTM cells.
I follow these steps when modeling using LSTM.
1. Try a single hidden layer with 2 or 3 memory cells and see how it performs against a benchmark. If it is a time series problem, then I generally make a forecast with classical time series techniques as the benchmark.
2. Try and increase the number of memory cells. If the performance is not increasing much then move on to the next step.
3. Start making the network deeper, i.e. add another layer with a small number of memory cells.
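A rough sketch of those three steps as a search loop (everything here is illustrative — `evaluate` is a stand-in for actually training an LSTM of that shape and scoring it against the benchmark):

```python
def evaluate(n_layers, n_cells):
    """Stand-in for 'train an LSTM with this shape, return validation score'."""
    return 1 - 1 / (n_layers * n_cells + 1)  # toy surrogate: diminishing returns

def grow_lstm(target=0.9, max_layers=3, min_gain=0.01):
    n_layers, n_cells = 1, 2               # step 1: one layer, few memory cells
    best = evaluate(n_layers, n_cells)
    while best < target and n_layers < max_layers:
        wider = evaluate(n_layers, n_cells * 2)   # step 2: more memory cells
        if wider - best > min_gain:
            n_cells *= 2
            best = wider
        else:                                      # step 3: go deeper instead
            n_layers += 1
            n_cells = 2
            best = max(best, evaluate(n_layers, n_cells))
    return n_layers, n_cells, best
```

With the toy surrogate above, the loop stops at a single layer of 16 cells; with a real training routine the stopping point would of course depend on the data.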
As a side note, there is no limit to the amount of labor that can be devoted to reaching the global minimum of the loss function and tuning the best hyper-parameters. So the strategy should be to focus on the end goal of the modeling, rather than trying to increase the accuracy as much as possible.
Most of the problems can be handled using 2-3 layers of the network.
Upvotes: 3 |
2017/04/17 | 435 | 1,898 | <issue_start>username_0: I recently read [an article about how artificial intelligence replicates human stereotypes](http://www.wired.co.uk/article/machine-learning-bias-prejudice) when applied to biased datasets.
What are examples of techniques to prevent bias (and stereotypes) in artificial intelligence (in particular, machine learning) systems?<issue_comment>username_1: The field of statistical psychology offers many methodologies to remove bias from datasets, or rather gather a dataset with minimal unknown biases. This will be the responsibility of the programmer. An AI that learns from datasets will not be able to find bias in those datasets.
Upvotes: 0 <issue_comment>username_2: It's important to note that, ultimately, the statistical methods we currently use in ML research are just that: statistical methods. So, when they show some "bad behaviour", it's not because of problems with the statistical methods, but with the data we give them. But if the data we give them are as "genuine and unfiltered" as it gets, then it probably shows something about us.
From a cognitive science perspective, it's probably the case that the same heuristics and biases that create stereotypes are also the ones that make us powerful agents (note the similarity between categories and stereotypes), so, at least at this moment, it's unclear how we can segregate desired from undesired behaviour.
To combine the points made above, it seems we can only either:
1. Remove "bad content" by curating the data by hand or by some metric that we don't know of yet
2. Accept that our methods will produce AI as "bad as we are", because that's what we are, and let it operate under the knowledge that it might produce undesired behavior sometimes.
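For option 1, one concrete example of "some metric" (purely illustrative — there are many such metrics, and none is a full solution) is the demographic parity gap, which measures how much a label rate differs across groups in a dataset:

```python
from collections import defaultdict

def parity_gap(records, group_key, label_key):
    """Largest difference in positive-label rate between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += r[label_key]
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# toy dataset: group 'a' is hired at rate 1.0, group 'b' at rate 0.5
data = [{'group': 'a', 'hired': 1}, {'group': 'a', 'hired': 1},
        {'group': 'b', 'hired': 1}, {'group': 'b', 'hired': 0}]
print(parity_gap(data, 'group', 'hired'))  # 0.5
```

A gap near zero doesn't prove the data is unbiased, but a large gap is at least a flag worth investigating before training on it.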
Unless we have some crazy new theory of mind that we can begin to analyze this more rigorously, it seems like there is no clear cut solution.
Upvotes: 2 |
2017/04/17 | 2,346 | 9,934 | <issue_start>username_0: I read a really interesting article titled ["Stop Calling it Artificial Intelligence"](http://www.joshworth.com/stop-calling-in-artificial-intelligence/) that made a compelling critique of the name "Artificial Intelligence".
1. The word intelligence is so broad that it's hard to say whether "Artificial Intelligence" is really intelligent. Artificial Intelligence, therefore, tends to be misinterpreted as replicating human intelligence, which isn't actually what Artificial Intelligence is.
2. Artificial Intelligence isn't really "artificial". Artificial implies a fake imitation of something, which isn't exactly what artificial intelligence is.
What are good alternatives to the expression "Artificial Intelligence"? (Good answers won't list names at random; they'll give a rationale for why their alternative name is a good one.)<issue_comment>username_1: Google defines 'artificial' as something created by humans rather than occurring naturally so I wouldn't quite say that it's so bad.
Given the question however, you could perhaps say "smart machines" since that's what they essentially are these days.
Artificial Intelligence is a very broad term, pre-dating modern AI; even simple things such as mechanical wooden robots were once considered Artificial Intelligence.
<https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence>
Upvotes: 1 <issue_comment>username_2: Artificial is said to derive from the Latin word "[artificium](http://www.perseus.tufts.edu/hopper/morph?l=artificium&la=la#lexicon)" which connotes ideas such as crafting. Thus, artificial is a correct usage, and algorithms can be regarded as "artifacts" in the context of information as opposed to physical manifestation of information (i.e. matter).
However, I agree that the use of artificial is problematic in that, should strong Artificial General Intelligence ever be achieved, there is a stigma to "artificiality" that could have implications regarding personhood.
My personal feeling is that we should be using:
* **Algorithmic Intelligence**
which is a functional definition, and therefore more meaningful than "artificial". Additionally, "algorithmic" is a neutral term that provides a very accurate description of what these systems are.
---
In terms of what is considered "intelligent", you may want to look at the concept of [Bounded Rationality](https://en.wikiquote.org/wiki/Bounded_rationality). There is no hard definition of "intelligence", just degrees of optimality in regard to decision making in a condition of uncertainty.
Because this is subjective for any problem that is not solved, modifiers are utilized, and thus we refer to AI as "strong" or "weak". These terms are also used to describe the degree to which certain types of problems (for instance [a non-chance, perfect information game](https://en.wikipedia.org/wiki/Solved_game#Overview) like Checkers) has been solved. [Complexity theory](https://en.wikipedia.org/wiki/Complexity_theory) will shed more light on this concept.
For more insight on "artificial", you might find this [question on the philosophical origin of the Turing Test](https://philosophy.stackexchange.com/questions/41237/is-protagoras-the-philosophical-root-of-the-turing-test) interesting, because it partly involves the meaning of a "thing". (There were multiple words for this in Ancient Greek.)
Upvotes: 3 <issue_comment>username_3: >
> Machine Intelligence
>
>
>
I believe intelligence is not a **proprietary entity** meant only for living beings.
In fact, the very origin of human intelligence is unknown. It is still not known whether we could generate a brain just by assembling the corresponding molecules of a real brain (even theoretically). And even if we could, whether that would constitute real intelligence or artificial intelligence is still very hazy.
I think the phrase **"Machine Intelligence"** would sound appropriate.
Upvotes: 0 <issue_comment>username_4: <NAME> provides a critique for the use of the term *artificial intelligence* in his [Stop Calling it Artificial Intelligence](https://www.joshworth.com/stop-calling-in-artificial-intelligence/), but there are some caveats in the treatment of the term. (One is a typo in the URL path.)
It is not the term *Artificial Intelligence* that is the issue. It is what is collected under it in the media and sometimes in academic literature.
**Is *Artificial* Unclear?**
The term *artificial* is not particularly ambiguous or inaccurate. Artificial doesn't mean fake as Worth suggests. It simply means that it did not arise from natural processes. High end artificial flowers feel and smell like they are grown. Artificial flight is now called flying. We no longer see birds as the exclusive pilots of flight, so the word flying has changed to include artificial things called aircraft.
People don't normally make the mistake of placing the adjective *artificial* alongside human capabilities, so there is no tendency toward misnomers. There is a problem with the use of the term *artificial intelligence*, though. Worth is correct in that overall statement. There may be two forks to the misuse of the term.
**Defining Intelligence**
One problem is with the term *intelligence*. The definitions we've seen are considerably problematic. Some are flat out wrong and can be proven so with heaps of counterexamples. Furthermore, definitions tend to be qualitative and difficult if not impossible to quantify.
Some propose the standardized testing of academia to quantify intelligence. If we use that definition of the word, from the staunch g-factor adherents, artificial intelligence is a target idea and nothing comes close to approximating it. No computer system has yet been admitted into a major university on the basis of high scores in college board testing.
If we use the ability to learn as a standard, then flies are intelligent, because they frequently learn from swat attempts and change strategy to avoid getting squashed by the irritated person. There are a thousand other reasons why learning, by itself, is not an adequate characterization of human intelligence. A crack addict can learn how to make a purchase without holding a job. We wouldn't characterize that as intelligence, but rather dysfunction.
**Excessively Inclusive Use of the Term**
Another problem arises from avarice. To appear as an expert in what is perceived to be a lucrative expansion of technology, some who have no conception of the topic sometimes present conjecture as if it were peer reviewed fact. This has been typical of popular topics for centuries. There is often insufficient bandwidth of peer verification available to address even a small portion of publicly available information. Web publication has only augmented this problem.
Resulting from this coveting of expert reputation is the collection, under the name *artificial intelligence*, of a number of things that are not intelligent.
* Multi-dimensional control systems
* Brute force searches of permutations
* Decisions made by statistically analyzing sample data
* Parameterized functional networks that converge to a defined optimal behavior
Those of the above that have no component of
* Cognition,
* Comprehension,
* Complex modelling and use of such models,
* Semantic mapping,
* Rational inference,
* Or some other clearly distinctive and broad form of adaptation
should not be included under the technology that exhibits authentic *artificial intelligence*. However, those that developed the theory of control systems, searches, parameterized function convergence, and statistics or developed working systems that use that theory are intelligent, just not artificial.
The above four might have terms that distinguish them from authentic intelligent systems outside the realm of biology.
* MDC — Multi-dimensional Control
* BSS — Brute force search
* NBL — Network Based Learning
* SDS — Statistical decisioning systems
It seems that terms that have three words and form distinct acronyms get further and last longer.
Upvotes: 0 <issue_comment>username_5: There are several expressions that are *often* used as synonyms for *artificial intelligence*, but, nowadays, the most common ones are likely [*machine intelligence*](https://en.wikipedia.org/wiki/Artificial_intelligence) and [*computational intelligence*](https://en.wikipedia.org/wiki/Computational_intelligence).
However, these expressions are not well defined, so not everyone will agree that they are interchangeable, but we can all agree that these fields (either if we consider them the same or not) are quite related to each other (and they overlap).
Moreover, these fields also evolve over time and they embrace techniques from other fields, which makes it more difficult to define them. More concretely, initially, AI was mainly based on the manipulation of symbols and logic, but nowadays AI is mainly [machine learning](https://en.wikipedia.org/wiki/Machine_learning), [statistics](https://en.wikipedia.org/wiki/Statistics) and, in particular, [deep learning](https://en.wikipedia.org/wiki/Deep_learning).
Furthermore, the expression *artificial intelligence* was apparently coined after the term [*cybernetics*](https://en.wikipedia.org/wiki/Cybernetics), which some people might consider the first serious attempt to building *intelligent* systems.
Upvotes: 1 <issue_comment>username_6: These are correct. Artificial implies that it runs on artifically made hardware. There is no reason to distinguish between the natural processes what do the same.
Further, the term *intelligence* is nor realy precise. What is more/ less intelligent or has or has not intelligence among : mowgli, monkey, crow, common game bot, whatever? MAing thing, some learning on data happens here.
The best alternative would be Mashine learning, but again, that mashine like "artifically made stuff" does that is irrelevant.
So my definition is:
Algorithmic Learning.
Upvotes: 1 |
2017/04/21 | 1,085 | 4,295 | <issue_start>username_0: Assuming I have a quite advanced AI with consciousness which can understand the basics of electronics and software structures.
Will it ever be able to understand that its consciousness is just some bits in memory and threads in an operating system?<issue_comment>username_1: This is a great question, elements of which I have also been pondering on, though we are very far from being able to actually wrestle with it algorithmically. This question raises all kinds of metaphysical questions (Kant himself showed that pure reason is not sufficient for all questions, but I'm going to avoid that rabbit hole and focus on the mechanics of your question.)
* Consciousness: This is distinct from self-awareness, and fundamentally, may be said to require only awareness of *something*.
>
> Consciousness, most scientists argue, is not a universal property of all matter in the universe. Rather, consciousness is restricted to a subset of animals with relatively complex brains. The more scientists study animal behavior and brain anatomy, however, the more universal consciousness seems to be. A brain as complex as the human brain is definitely not necessary for consciousness.
>
> Source: [Scientific American "Does Self-Awareness Require a Complex Brain?"](https://blogs.scientificamerican.com/brainwaves/does-self-awareness-require-a-complex-brain/)
>
>
>
Thus, an automaton that receives input may be said to be conscious, with the caveat that this idea is probably still considered radical. The key is distinguishing mere "consciousness" from much more complex concepts such as self-awareness.
* Self-Awareness: the holy grail. This is the idea that a set of elements, such as a human organism, is aware of itself.
But this is sticky, because automata that use [Machine Learning](https://en.wikipedia.org/wiki/Machine_learning) are "aware" of themselves in that they may modify their "thought" process and even their "physical" structure.
But ML systems are certainly not self-aware in the human sense. A question might be, is this simply a function of these systems not being full [Algorithmic General Intelligences](https://en.wikipedia.org/wiki/Artificial_general_intelligence), or is there more to it? If there is more to it, is it strictly a [metaphysical](https://en.wikipedia.org/wiki/Metaphysics) question, or can an answer be derived through purely rational means? Even if the latter were the case, there is still the problem of subjectivity, as in: "Is the automata truly self-aware or is it just mimicking self-awareness?" which brings us back to the metaphysical question of "Is there a difference?".
However,
* **If there were a full Algorithmic General Intelligence that had consciousness equatable with human consciousness, that was aware, and even able to work with the basic components of its [corpus](https://en.oxforddictionaries.com/definition/corpus)\*, it would certainly be able to grasp that its consciousness is a function of the "bits and bytes", just as a human is aware we are [soft machines](https://en.wikipedia.org/wiki/The_Soft_Machine), and that our consciousness is a function of our bodies and minds.**
---
I intentionally use corpus because it relates both to text (which may be code or even a string of bits in its most reduced form, per the concept of a [Turing Machine](https://en.wikipedia.org/wiki/Turing_machine)) and also has an anatomical meaning, as in the body of an organism. [Corpus comes from the Latin](http://www.perseus.tufts.edu/hopper/morph?l=corpus&la=la#lexicon) and the extension of its meaning to include matter-as-information is modern.
Upvotes: 3 <issue_comment>username_2: Machines will never be conscious.
Let's try this theoretical thought exercise. You memorize a whole bunch of shapes. Then, you memorize the order the shapes are supposed to go in, so that if you see a bunch of shapes in a certain order, you would "answer" by picking a bunch of shapes in another proper order. **Now, did you just learn any meaning behind any language? Programs manipulate symbols this way.** (previously, people have either skirted this question or never had a satisfactory answer)
The above was my reformulation of Searle's rejoinder to the Systems Reply to his Chinese Room Argument.
Upvotes: 1 |
2017/04/21 | 2,173 | 8,766 | <issue_start>username_0: As titled, is there such thing as perfect play (or at least "perfectly optimal") in a game with incomplete information? Or at least a proof as to show why there cannot?
Naively (and seemingly obviously), the answer would be a resounding no, since the agent would likely be forced to pick between "lottery events".
But in practice (using competitive video games as an analogy), we see that players stick to a meta-game that is well equipped to defend against the majority of events that might happen given incomplete information. Of course, the response to that would be that there probably exists a "hard counter" to any given meta-game; but if the meta-game is indeed the most optimal, such a hard counter probably puts the player in an unfavourable position most of the time, and is thus itself not optimal. So in any first encounter, players would likely still stick to their optimal meta-game rather than a hard counter to it.
A more rigorous analogy would be to ask: "Under Hofstadter's notion of superrationality, how would agents play incomplete-information games?", but I couldn't find any readings on importing the notion of superrationality into incomplete-information games.
Alternatively: is there such thing as a "perfectly optimal meta-game"?<issue_comment>username_1: This may be an evolving answer, because the question is, in some sense, a (useful) rabbit hole. I apologize if I don't go deeply into meta-games per se, as it's a little outside of my scope, which is non-chance games of perfect information, but I think it's worthwhile to think about the underlying problem of indeterminacy in relation to games in general.
[Bounded Rationality](https://en.wikipedia.org/wiki/Bounded_rationality)\* is a useful concept because it pre-supposes a condition of [computational intractability](https://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability). Computational intractability can be introduced into games in several forms:
* Complexity
* Hidden Information
* Randomness ("quantum" indeterminacy)
[For more details on my use of "quantum" in regards to randomness, see [Deterministic Games](http://www.deterministicgames.info/).]
The underlying purpose of game theory is to determine "optimal" strategies for any given problem. I put optimal in quotes because optimality is a spectrum, and subjective in a condition of computational intractability.
Thus, we cannot know if [AlphaGo](https://www.scientificamerican.com/article/how-the-computer-beat-the-go-master/) plays optimally, only that it played *more optimally* than Lee Sedol in 4 out of 5 games.
This is distinct from [strongly solved games](https://en.wikipedia.org/wiki/Solved_game#Solved_games) such as tic-tac-toe, where we can know with total certainty that a choice is optimal, because the problem of tic-tac-toe is computationally tractable.
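To make "strongly solved" concrete, tic-tac-toe's game tree is small enough to search exhaustively; here is a minimal minimax sketch (illustrative, not production code):

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board stored as a 9-character string.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value with `player` to move: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w == 'X': return 1
    if w == 'O': return -1
    if ' ' not in board: return 0
    moves = [value(board[:i] + player + board[i + 1:],
                   'O' if player == 'X' else 'X')
             for i, cell in enumerate(board) if cell == ' ']
    return max(moves) if player == 'X' else min(moves)

print(value(' ' * 9, 'X'))  # 0: perfect play on both sides is a draw
```

Because the whole tree fits in memory, the "optimal" label here is objective, which is exactly what tractability buys you and what Go-scale games deny.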
Part of the confusion may be semantic, because the concepts are subtle and profound, and require language, what TS Eliot might have called "the intolerable wrestle with words and meanings." (For instance, I used hidden information above to avoid having to distinguish between incomplete and imperfect information.)
* Perfect Play is generally defined as a strategy that leads to the best possible outcome for a participant, regardless of the choices of the opponent.
Thus [minimax](https://en.wikipedia.org/wiki/Minimax) is of central importance, and provided the foundation for [game theory](https://en.wikipedia.org/wiki/John_von_Neumann#Game_theory).
Even in games with incomplete information, whether "deterministic" ([Battleship](https://en.wikipedia.org/wiki/Battleship_(game))) or involving "quantum indeterminacy" ([Prisoner's Dilemma](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma)), there are optimal strategies. For [simultaneous games](https://en.wikipedia.org/wiki/Simultaneous_game) such as [Dilemma and all of the numerous extensions](http://www.bryanbruns.com/2x2table.pdf) minimax is used. In Battleship, there are [at least three strategies of increasing optimality](http://www.datagenetics.com/blog/december32011/), and although there doesn't appear to be a strategy that can yield P > .5, if one player employs a more optimal strategy, they will win in aggregate. Even [Rock, Paper, Scissors seems to have an optimal strategy](https://arstechnica.com/science/2014/05/win-at-rock-paper-scissors-by-knowing-thy-opponent/), which blows my mind, and carries the caveat that [I need to look into it more.](https://arxiv.org/pdf/1404.5199v1.pdf)
* Thus, perfect play, as defined, is certainly achievable, but does not necessarily connote (objectively) optimal choices, which is a little confusing, because "perfect" implies objectivity, a condition which is only possible in regard to [tractable problems](http://www.doe.carleton.ca/~pavan/Public/Courses_files/03%20computational_complexity.pdf).
It is also important to note that there may not be a "winning" strategy in the sense of being better off than the opponent, and in this condition, perfect or optimal play is mitigation of loss.
---
\*In terms of incomplete-information games specifically, I think there's a case for extending the concept of Bounded Rationality to include information that cannot be observed or "known".
Colloquially, this would include the "unknowns" (both known and unknown) and the "unknowable" (quantum indeterminacy and superpositions).
Upvotes: 2 <issue_comment>username_1: This second answer attempts to address perfect play in relation to incomplete information specifically.
An element in the difficulty in answering this question may be that the concept of [perfect play](https://en.wikipedia.org/wiki/Solved_game#Perfect_play) is widely applied to [solved games](https://en.wikipedia.org/wiki/Solved_game#Solved_games) in the domain of [Combinatorial Game Theory](https://en.wikipedia.org/wiki/Combinatorial_game_theory) as opposed to strictly economic Game Theory.
In relation to games with incomplete information:
* Perfect play, defined as the best possible choice without regard to the opponent's choice, may be achieved in games with incomplete information.
It's important to note that perfect play may not result in a win. In tic-tac-toe the result is a draw. In certain games, for a disadvantaged player, it may result in the "best" possible loss.
* Perfect play in classic Prisoner's Dilemma is the minimax strategy.
The conundrum is that in this model, it does not lead to the optimal outcome, only the optimal outcome without regard to the other agent's choice.
In classic Prisoner's Dilemma, the superrational strategy is more risky because there is no information on the other agent (the probability of either choice is always 50%) and it doesn't limit downside.
---
Superrational strategies can be shown to be mathematically supportable by extending Prisoner's Dilemma to iterative and [cyclic](http://wrap.warwick.ac.uk/12510/) variants. This is partly because in iterative variants, choices are a form of communication between agents. However, the superrational strategy may not be a winning strategy, as the motive of the superrational agent may be said to be maximization of benefit, as opposed to limiting downside exclusively. In iterated Prisoner's Dilemma, the superrational agent may have to sacrifice a couple of iterations (turning the other cheek) in order to incentivize the rational agent to change strategy and cooperate, and to determine if the other agent is irrational, in which case the superrational agent may switch to the rational strategy of minimizing maximum downside and maximizing minimum benefit.
In classic iterated Dilemma, choices are the exclusive form of communication between agents, and each choice becomes part of a dataset on the other agent's decision making. Information is still incomplete, but less incomplete with each iteration.
Superrational strategies for games of incomplete information then become viable via statistical analysis.
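A toy simulation of iterated Prisoner's Dilemma makes this concrete (the standard textbook payoffs T=5, R=3, P=1, S=0 are assumed): tit-for-tat loses head-to-head against always-defect, yet mutual tit-for-tat earns far more overall.

```python
# Payoffs (row player first) for cooperate 'C' / defect 'D'.
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else 'C'   # open with cooperation

def always_defect(my_hist, their_hist):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14): the defector "wins"
print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation pays most
```

The single sacrificed first round (0 vs 5) is exactly the "turning the other cheek" cost mentioned above, and each recorded move is the dataset the agents build on one another.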
Upvotes: 2 <issue_comment>username_2: It depends on the game. In zero sum non cooperative games yes, there's always a GTO strategy.
The easiest example is Rock, Paper, Scissors, where playing 1/3 of each randomly would be the only optimal strategy. In this case a break even one too, in some games though GTO has a positive expected value against any strategy that's not GTO itself.
Usually online video games strategies and metas are heavily based on adaptation to population tendencies though, which in of itself it's not perfect play, but it can have a better expected value than perfect play against a non optimal opponent.
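The break-even property of the uniform 1/3 strategy is easy to verify numerically with the standard RPS payoff matrix (a small sketch):

```python
# Row player's payoff in Rock-Paper-Scissors; rows/cols ordered R, P, S.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def expected_payoff(p, q):
    """Expected payoff of mixed strategy p against mixed strategy q."""
    return sum(p[i] * q[j] * PAYOFF[i][j]
               for i in range(3) for j in range(3))

uniform = [1 / 3] * 3
# Against ANY opponent mixture, uniform play nets (approximately) zero:
for q in ([1, 0, 0], [0.5, 0.5, 0], [0.2, 0.3, 0.5]):
    assert abs(expected_payoff(uniform, q)) < 1e-9
```

Every column of the payoff matrix sums to zero, which is why no opponent mixture can gain (or lose) against uniform play — the GTO property in its simplest form.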
Upvotes: 1 |
2017/04/23 | 1,699 | 6,241 | <issue_start>username_0: *Note: My experience with Gödel's theorem is quite limited: I have read <NAME>; skimmed the 1st half of Introduction to Godel's Theorem (by <NAME>); and some random stuff here and there on the internet. That is, I only have a vague high level understanding of the theory.*
In my humble opinion, Gödel's incompleteness theorems (and their many related results, such as the halting problem and Löb's theorem) are among the most important theoretical discoveries.
However, it's a bit disappointing to observe that there aren't that many (at least to my knowledge) theoretical applications of the theorems, probably in part due to (1) the obtuse nature of the proofs and (2) the strong philosophical implications people aren't willing to easily commit to.
Despite that, there are still some attempts to apply the theorems in a philosophy of mind / AI context. Off the top of my head:
* [The Lucas-Penrose Argument](http://www.iep.utm.edu/lp-argue/): which argues that the mind is not implemented on a formal system (as in a computer). (Not a very rigorous proof, however.)
* Apparently, some of the research at MIRI uses Löbs Thereom, though the only example I know of is [Löbian agent cooperation.](http://intelligence.org/files/ProgramEquilibrium.pdf)
These are all really cool, but are there some more examples? Especially ones that are actually seriously considered by the academic community.
See also [What are the philosophical implications of Gödel's First Incompleteness Theorem?](https://philosophy.stackexchange.com/q/305)<issue_comment>username_1: Definitely there are a lot of implications for AI, including:
1. [Inference with first-order-logic is semi-decidable](https://en.wikipedia.org/wiki/First-order_logic). This is a big disappointment for all the folks that wanted to use logic as a primary AI tool.
2. Basic equivalence of two first-order logic statements is undecidable, which has implications for knowledge-based systems and databases. For example, optimisation of database queries is an undecidable problem because of this.
3. [Equivalence of two context-free grammars is undecidable](https://www.cs.wcupa.edu/rkline/fcs/grammar-undecidable.html), which is a problem for formal linguistic approach toward language processing
4. When doing planning in AI, just finding a feasible plan is undecidable for some planning languages that are needed in practice.
5. When doing automatic program generation, we are faced with a bunch of undecidability results, since any reasonable programming language is as powerful as a Turing machine.
6. Finally, all non-trivial questions about an expressive computing paradigm, such as Petri nets or cellular automata, are undecidable.
Upvotes: 4 [selected_answer]<issue_comment>username_2: I found [this paper](https://math.stanford.edu/~feferman/papers/dichotomy.pdf) by mathematician and philosopher *<NAME>* on *Gödel's 1951 Gibbs lecture on certain philosophical consequences of the incompleteness theorems*, while reading the following Wikipedia article
>
> [Philosophy of artificial intelligence](https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence),
>
>
>
whose abstract gives us (as expected) a high-level idea of what's discussed in the same:
>
> This is a critical analysis of the first part of Gödel's 1951 Gibbs lecture
> on certain philosophical consequences of the incompleteness theorems.
>
>
> Gödel's discussion is framed in terms of a distinction between *objective
> mathematics* and *subjective mathematics*, according to which the former
> consists of the truths of mathematics in an absolute sense, and the latter
> consists of all humanly demonstrable truths.
>
>
> The question is whether these coincide; if they do, no formal axiomatic system (or *Turing machine*) can comprehend the mathematizing potentialities of human
> thought, and, if not, there are absolutely unsolvable mathematical
> problems of diophantine form.
>
>
> Either ... the human mind ... infinitely surpasses the powers of any
> finite machine, or else there exist absolutely unsolvable diophantine
> problems.
>
>
>
which may be of interest, at least philosophically, to the research in AI. I'm afraid this paper may be similar to the article you're linking to regarding Lucas and Penrose philosophical "attempts" or arguments.
Upvotes: 1 <issue_comment>username_3: I wrote an extensive article on this some twenty years ago, which was published in *Engineering Applications of Artificial Intelligence 12 (1999) 655-659*. It's fairly technical and [you can read it in full](http://www.christianjongeneel.nl/gepubliceerde-boeken/goedel-pro-and-contra-ai-dismissal-of-the-case/) on my personal website, but here's the conclusion:
>
> In the above it was shown that there are infinitely many proof
> constructions to Gödel’s theorem – in contrast to the single one that
> was used in discussions on artificial intelligence so far. Though all
> constructions that have been actually disclosed can be imitated by a
> computer, it is evident that there are constructions that have not
> been disclosed yet. Our analysis has shown that there might exist
> constructions that might only be discovered by a human. This is a
> small and definitely unprovable ‘maybe’ that depends on the limits of
> human imagination.
>
>
> Hence, people arguing for the mathematical equivalence of humans and
> machines must ultimately rely on their belief in a limited mind, which
> implies that their conclusion is contained in their assumption. On the
> other hand, people advocating the superiority of humans must assume
> this superiority in their mathematical arguments, ultimately only
> deriving the conclusion that was already present in their system of
> reasoning from the very start.
>
>
> So, it is not possible to produce (meta)mathematically sound arguments
> concerning the relation between the human mind and the Turing Machine
> without making an assumption on the human mind that is at the same
> time the conclusion of the argument. Therefore, the matter is
> undecidable.
>
>
>
Disclaimer: I have left academia since, so I do not know of contemporary thinking.
Upvotes: 1 |
2017/04/25 | 829 | 3,602 | <issue_start>username_0: In the past few weeks, I have learned a lot about neural networks. Now, I am looking forward to creating a neural network program that can recognize individual human faces. I tried searching online but was only able to find small pieces of information.
**What are the steps for implementing such a program from scratch?**<issue_comment>username_1: If you want to implement recognition, you just have to train a CNN (convolutional network) on a lot of images in which there are faces, and then classify: 1 if there is a face and 0 if there isn't. If you want to do detection, you have to use different approaches, like a cascade classifier with a CNN, or an object detection network like YOLO or SSD.
Upvotes: 0 <issue_comment>username_2: I am assuming that you are new to all this. You can start by making a basic human face detector. Train the program to detect a human face with very good accuracy. This will help you get familiar with the coding involved in image processing and basic machine learning.
After that, train your program to identify the faces of only 2-3 people. Trying for too many in the beginning won't be a good idea. Test your program's accuracy in different situations, with different crowd sizes, etc.
If it's working fine, then you can train your program for more individuals. In addition to this, leave room in your code for learning from experience. Some programs only learn once and then apply the same thing for their whole life. A nice example of this kind is [OCR](https://en.wikipedia.org/wiki/Optical_character_recognition). If a face is detected wrongly, or your program detects a new face it doesn't know anything about, you should be able to tell the program, and it should include that in its database. I think some form of reinforcement learning would help, though I'm not entirely sure about it.
**Now the Implementation**
I highly recommend you learn Python and get familiar with OpenCV. You can think of OpenCV as a collection of libraries. I find them very helpful for image processing and machine learning. Another good thing about it is that you can import OpenCV in Python, Java or C++ according to your need.
OpenCV has built-in functionality for training a cascade classifier on positive and negative images. The success of your program depends highly on your choice of positive and negative images, so choose them wisely. The result of the training is stored as a [haar cascade file](http://docs.opencv.org/2.4/modules/objdetect/doc/cascade_classification.html). This cascade file can be used in your program to use the trained data and function accordingly.
For basic human face detection, you can find the cascade file online and implement a code like this
>
> face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
>
>
>
For detecting individual faces you will need to make different classes. The number of classes will represent the number of individual faces you want to detect in the beginning.
You can find OpenCV tutorials [here](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_tutorials.html).
Upvotes: 2 [selected_answer]<issue_comment>username_3: [DLIB](http://dlib.org) has AA-class face detector based on ResNet model.
[Here](http://dlib.net/face_detector.py.html) is C++ example.
The accuracy is much better compared to HAAR/LBP but performance is worse (but much depends on parameters passed to HAAR/LBP detector)
The latest version of OpenCV also contains face detector based on ResNet model, but I did not try this.
Upvotes: 1 |
2017/04/26 | 681 | 2,598 | <issue_start>username_0: I have a question as to what it means for a knowledge-base to be consistent and complete. I've been looking into non-monotonic logic and different formalisms for it from the book "Knowledge Representation and Reasoning" by <NAME> and <NAME>, but something is confusing me.
They say:
>
> We say a KB exhibits consistent knowledge iff there is no sentence $P$ such that both $P$ and $\neg P$ are known. This is the same as requiring the KB to be satisfiable. We also say that a KB exhibits complete knowledge iff for every $P$ (within its vocabulary) $P$ or $\neg P$ is known
>
>
>
They then seem to suggest that by "known" they mean "entailed". They say
>
> In general, of course, knowledge can be incomplete. For example, suppose KB consists of a single sentence ($P$ or $Q$). Then KB does not entail either $P$ or $\neg P$, and so exhibits incomplete knowledge.
>
>
>
But when dealing with sets of sentences, I usually see these terms as being defined w.r.t. *derivability* and not *entailment*.
So my question is, what exactly do these authors mean by "known" in the above quotes?
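(For what it's worth, the book's example can be checked mechanically. The sketch below, a naive truth-table entailment test of my own rather than anything from the book, confirms that the KB consisting of the single sentence $(P \lor Q)$ is consistent, i.e. satisfiable, yet incomplete for $P$:)

```python
from itertools import product

def entails(kb, query, atoms):
    """KB |= query iff query is true in every truth assignment satisfying KB."""
    for values in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if kb(model) and not query(model):
            return False
    return True

atoms = ["P", "Q"]
kb = lambda m: m["P"] or m["Q"]  # KB consisting of the single sentence (P or Q)

# Consistent: at least one truth assignment satisfies the KB.
consistent = any(kb(dict(zip(atoms, v)))
                 for v in product([False, True], repeat=len(atoms)))

# Complete for P would require the KB to entail P or to entail not-P.
knows_p = entails(kb, lambda m: m["P"], atoms)
knows_not_p = entails(kb, lambda m: not m["P"], atoms)
```
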
Edit: [this post](https://math.stackexchange.com/q/2259311/168764) helped clarify things.<issue_comment>username_1: I don't think they mean that "known" is equivalent to "entailed" --- in a reasonably complicated system, one cannot be expected to know every sentence which is entailed. Perhaps their example is just a bit lacking.
Upvotes: 0 <issue_comment>username_2: It seems that they are stating that a knowledge base is consistent if and only if it never asserts both the truth and the negation of a particular $P$. In other words, a knowledge base is consistent if it never contradicts itself. Their definition allows incomplete knowledge bases to be considered consistent; by their definition, an empty knowledge base is still a consistent one.
Upvotes: 1 <issue_comment>username_3: I guess in this context "known" means nothing but that either $P$ or $\neg P$ is in the KB; further, exactly one of these two needs to be in the KB.
Just think about what it means if $P$ and $\neg P$ are in the KB, then the KB is obviously inconsistent.
And if neither of these two sentences is in the KB, then no information at all can be retrieved from the KB and it remains unclear whether $P$ or $\neg P$ is supposed to be a true statement; thus $P$ is "unknown".
However, if exactly one of these two is in the KB, then one has all information that one needs about $P$ (due to the excluded middle); $P$ is "known" and consistent.
Upvotes: 0 |
2017/04/28 | 2,843 | 13,289 | <issue_start>username_0: Can current trends and tools, in the field of machine learning, replicate the complexity of financial market? If yes, then what are the tools available in this domain.
**Q.** I am trying to build a model to infer results from stock market using the concept to create a graph on the companies enlisted. Can anyone suggest me approaches to do so?<issue_comment>username_1: In one way the answer is **NO**. You can't incorporate all the necessary details in an AI program to correctly predict the financial market, at least not in currently. A particular event is a result of many actions and events from the past. Everything is like a chain.
You might have heard of the **ButterFly Effect**. It simply states that small things can have very large effects. Like flap of butterfly's wings leading to a hurricane. You can read more about it [here](https://en.wikipedia.org/wiki/Butterfly_effect).
Now back to your question. I believe you are somewhat familiar with the financial system and stocks. Close your eyes and try to imagine all the data that is being churned into machines. Apart from that, there are many decisions that lead to changes in the financial market directly. Like policies made by the government deals bagged by the company related to imports and exports, death of the main employee (like CEO, founder), and other many things. Apart from that, there are uncountable things that indirectly affect the market. I want you to think about it for a while. Now consider doing this for all the stocks present. After even stock values of one are dependent on the others in one way or another.
**This amount of data, in other ways considering all these things, is not possible for current generation computers to process. You might have a hit with *quantum computers* coming though.**
Now what you can do is try to make a deep network (with reinforcement learning) that can predict the behavior and shift in direction of **some** stocks, for example, whether a stock will go up or down, or hold a constant line on the graph.
Selection of **features** is always an important key to the success of machine learning algorithms. Another problem that stock-market-related AI programs face is that economics is itself an emerging, young field of science; it's not as mature as physics, chemistry or biology. Thus finding and deciding on the choice of features is another difficult job.
I found [this paper](https://people.eecs.berkeley.edu/~akar/IITK_website/EE671/report_stock.pdf) online. I haven't looked at it closely, but I believe it might help you a bit.
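To illustrate the "predict the direction" idea in its simplest possible form, here is a toy sketch. The return series is synthetic, with momentum deliberately built in; real markets are far closer to a random walk, so this demonstrates only the setup, not a working trading model:

```python
import random

random.seed(0)

# Synthetic return series with built-in momentum (an AR(1) process).
returns = []
r = 0.0
for _ in range(2000):
    r = 0.6 * r + random.gauss(0, 1)  # today's return leans on yesterday's
    returns.append(r)

# Naive direction predictor: tomorrow's sign repeats today's sign.
hits = sum((returns[t] > 0) == (returns[t + 1] > 0)
           for t in range(len(returns) - 1))
accuracy = hits / (len(returns) - 1)  # well above 0.5 only because of the
                                      # momentum we injected ourselves
```

On this data the predictor scores around 70%, but only because the generating process was rigged with autocorrelation; the hard part in practice is that real price series offer no such free edge.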
Upvotes: 1 <issue_comment>username_2: You can actually build a statistical model (not exactly an AI) to predict the stock market trend. But the prediction accuracy will be very bad.
The accuracy of your model depends on the following things.
* The number of variables which affect the model's output directly or indirectly.
* The model's reaction time (How fast can your model generate an output corresponding to a change in the input).
* The maximum lifespan of your model (the time the model takes to mature, plus the span of time the model stays relevant to the context of its variables).
**NOTE: This is not in any way a correct answer to your question. I just wanted to state that anything and everything in this universe can be modeled, given that you are aware of all the variables which affect it and how they interact with each other.**
Upvotes: 0 <issue_comment>username_3: **Reviewing the Question**
There are multiple questions contained within this posted question. (One of the sentences end with a period, but it is clearly intended to be a question.) All are good questions and fairly easy to answer, assuming that the word 'replicate' can be replaced with 'model' or 'simulate'. (Because the financial world is chaotic, any meaningful replication would likely require a quantum level reproduction of earth and everyone and everything on it.)
The kind of modelling, analysis, and visualization of results is done all the time in the research we do. Approaches and proof of concepts have been provided to insurance, banking, and health organizations along these lines and can be discussed here in general terms within the constraints of any confidentiality agreements.
**Restating the Questions**
It is best if I restate the questions the way I understand them from the information available in the original post to ensure I understand what the questioner wishes.
* What are the current trends in modelling and the production of predictive results from historical market data?
* To what degree can the complexity of financial landscape be simulated?
* What are the best approaches to create a graph that represent aspects of the financial market given a list of tradable securities?
Please indicate if my restatement distorts any of the intent contained in the original three questions.
A graph comprised of vertices and edges commonly represents relationships between legal entities. Applying this visualization to represent probable relationships between legal entities from matrices of historical market data is quite possible and may have many uses for financial analysis. Such a graph can be visualized using GraphViz, Mathematica, Matlab, or various libraries available for use from programming environments of Python, C++, Java, LISP, JavaScript or other languages.
**Vertices**
Instead of vertices representing legal entities registered as tax entities, as in many of the web services that display graphs from public records and purchased aggregated corporate data, the vertices in the graph presumably envisioned by the questioner would represent tradable securities. The attributes of such vertices might be.
* Exchange
* Unique exchange ID (symbol)
* Name
* An array of vectors of historical trade metrics (with each vector containing a UTC time stamp)
**Edges**
Edges represent the probable strength of financial connectedness between any two vertices representing two tradable securities.
Because the nature of relationships and the associated details between the corporations offering tradable securities and the mindsets of all the trading agents are obscured, relationships must be inferred naively (without factual knowledge of causality), perhaps using the probability relations of Rev. Thomas Bayes (1701 – 1761) or other more sophisticated methods (some of which cannot, for legal reasons, be detailed here).
Relational models must be created (likely more than one) to capitalize on identifiable features in the trading metrics of one of two selected vertices and match that feature with the same or another feature in the other of the two vertices. The correlation must be statistical and designed in such a way as to be resistant to effects outside the relationship between the two legal entities associated with the two tradable securities.
Naive Bayesian classification, other statistical approaches, FFTs, or neural nets may apply to assist with achieving a functional correlation value. Windowing the data in a loop will be necessary to implement sensitivity to single events sparsely spaced in the time domain of the historical data.
To attempt to guess causality, you will need to apply different temporal shifts to see if the feature of one preceded the feature of the other, and by how much. (If event A in security B preceded event C in security D, and this pattern repeats over a range of months or years, then there is a probability greater than zero that event C was caused by event A.)
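The temporal-shift idea can be sketched as a lagged cross-correlation scan. The two series below are synthetic, with an artificial 3-step lead built in, purely to show that the scan recovers the direction and delay of the (simulated) causal relationship:

```python
import random

random.seed(1)

def lagged_corr(x, y, lag):
    """Pearson correlation of x[t] with y[t + lag]."""
    if lag >= 0:
        a, b = x[:len(x) - lag], y[lag:]
    else:
        a, b = x[-lag:], y[:len(y) + lag]
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((u - mean_a) * (v - mean_b) for u, v in zip(a, b))
    var_a = sum((u - mean_a) ** 2 for u in a)
    var_b = sum((v - mean_b) ** 2 for v in b)
    return cov / (var_a * var_b) ** 0.5

# Toy data: security B echoes security A's moves three steps later.
a = [random.gauss(0, 1) for _ in range(1000)]
b = [0.0] * 3 + [0.8 * v + random.gauss(0, 0.3) for v in a[:-3]]

# Scan candidate delays; the best-fitting lag recovers the built-in delay of 3.
best_lag = max(range(-10, 11), key=lambda k: lagged_corr(a, b, k))
```

In the real setting the correlation peak would be far weaker and would need the statistical safeguards (windowing, significance tests, multiple models) described above before any causal reading is justified.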
The science (and perhaps the art) of creating a set of potential mathematical models of how various corporate and trading relationships may have impacted historical trade metrics between any two securities is the first hurdle in this proposed best approach.
Using various known methods, a probability distribution of the single dimensional or multi-dimensional strength of the relationship, for each of the proposed models, can be calculated from the historical data of the two entities between which one of the many inter-vertex analyses is occurring.
Statistics of these distributions would then be the attributes of the edge shown between two vertices. For more intuitive usability, the following attributes would need to be available via point and click drill down for each edge and each model tried between the two tradable securities connected by the edge.
* Median relationship strength
* Mean relationship strength
* Standard deviation of relationship
* Direction of causality
* Median delay in causality
**Measures to Make Computation Time Practical**
To accomplish the above each vertex would ideally be compared with each other vertex, for each model, iterating through temporal parameters of the model to determine relationships involving time delays.
If there were a hundred tradables to consider, ten probabilistic relationship models, a thousand temporal permutations that must be tried to converge on a good fit between each model and the historical data, a hundred iterations to converge for each temporal window, a window of a thousand temporal observations, ten thousand windows to cover the entire range of historical data, and a thousand cycles for each test of fit, the primary computations would be 100 x 99 x 10 x 1000 x 100 x 1000 x 10,000 x 1000 = 99 x 10^18 CPU cycles.
(The number 99 comes from the fact that, without some permutation elimination scheme, the histories of each of 100 vertices must be compared with those of the other 99.)
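The back-of-envelope multiplication above can be verified directly (the factor names simply restate the assumptions from the text):

```python
pairs      = 100 * 99   # each of 100 tradables compared with the other 99
models     = 10         # probabilistic relationship models
temporal   = 1000       # temporal permutations per model
converge   = 100        # iterations to converge per temporal window
window     = 1000       # temporal observations per window
windows    = 10_000     # windows covering the full historical range
fit_cycles = 1000       # CPU cycles per test of fit

total = pairs * models * temporal * converge * window * windows * fit_cycles
# total equals 99 * 10**18, matching the figure in the text
```
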
Several methods may be applied together to reduce this set of expanded permutations to permit batch process completion after the close of the market in NY or Hong Kong and before the time zone dependent dawn.
* Filtering and then decimating (removing redundancy) the historical data
* Truncating the historical data to analyze only the recent (and therefore the most relevant) historical data
* Widening the error margin to only what is displayed (such as two significant figures)
* Optimizations of algorithms, the mathematics behind them, the machine instruction representation of computations, or the mapping of values to data types
* Distributing analysis processes to take advantage of parallel computing
* Limiting the list of tradable securities using a narrowing set of inclusion criteria
* Early elimination of possible edges between unlikely relational candidates using heuristics, model simplifications, neural nets, or fuzzy logic
**Prediction**
Once models are generated and functional and some of the visualized constructs can be verified, then the relational models can be used to predict probable events before they occur. This may seem like science fiction to some, but we predict physical, social, and economic events all the time.
In the case of the profitability in relation to markets, if such predictive tools were to be distributed to eleven other traders, the ability to use the tool to generate profit would almost immediately deteriorate to one twelfth in monetary value.
In fact, this is probably the state of the market today. Only those with automated tools are probable winners, funneling money from those without tools.
**The AI Research Perspective**
Although the above does not seem like AI the way it is described, often what is conceived as an intelligent agent and appears intelligent in behavior after deployed, refined, and tuned, appears like straight software engineering when one gets into the details of implementation.
Furthermore, if the method for interfacing with the models used to match features in the history of two tradables is generalized so that arbitrary models can be added or modified at will without damaging the effectiveness of job execution, one can build some sort of analogy of a genetic algorithm to search for models that exhibit higher correlations and therefore progressively enhance predictive capabilities.
**Meta Modelling**
At this point in development, model development is still largely up to the researcher. However, once a model interface, perhaps employing the bridge and facade design patterns, is developed, it is possible to generalize the concept of historical feature correlation between two tradables as models with a set of mutation operations and develop concurrent processes that employ an automated experimental test fixture to develop new models without programmer intervention.
Although the details of such meta-modelling cannot, for legal reasons, be detailed here, the meta-model design options naturally become apparent after some experience is gained after implementing and deploying the above approach in a real scenario with actual tradable historical data.
**Using Off the Shelf Code, Libraries, and Frameworks**
Obviously, there is appreciable monetary value to this type of development, therefore it is unlikely that anyone will post (or even sell) code specific to this domain. However using super-computing platforms, basic analysis algorithms such as FFT functions, and statistics packages with correlation coefficient routines, naive Bayesian capabilities, and convergence detection support will certainly assist in reducing the development effort required to implement and test this approach or others like it.
Upvotes: 1 |
2017/05/04 | 1,025 | 4,014 | <issue_start>username_0: These types of questions may be problem-dependent, but I have tried to find research that addresses the question whether the number of hidden layers and their size (number of neurons in each layer) really matter or not.
So my question is, does it really matter if we for example have 1 large hidden layer of 1000 neurons vs. 10 hidden layers with 100 neurons each?<issue_comment>username_1: Basically, having multiple layers (aka a deep network) makes your network more eager to recognize certain aspects of input data. For example, if you have the details of a house (size, lawn size, location etc.) as input and want to predict the price. The first layer may predict:
* Big area, higher price
* Small amount of bedrooms, lower price
The second layer might conclude:
* Big area + small amount of bedrooms = large bedrooms = +- effect
Yes, one layer can also 'detect' the stats, however it will require more neurons as it cannot rely on other neurons to do 'parts' of the total calculation required to detect that stat.
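As a toy illustration of this layering (the thresholds and the "large bedrooms" feature are made-up numbers, not a trained model), a second layer can compute a combination (here a logical AND) of facts detected by the first layer:

```python
def step(x):
    """Hard threshold activation, standing in for a trained neuron."""
    return 1.0 if x > 0 else 0.0

def layer1(area, bedrooms):
    """First layer: detect simple facts about the raw input."""
    big_area = step(area - 150)    # area above 150 (units are made up)
    few_beds = step(3 - bedrooms)  # fewer than 3 bedrooms
    return big_area, few_beds

def layer2(big_area, few_beds):
    """Second layer: combine first-layer facts; this unit acts as an AND gate."""
    return step(big_area + few_beds - 1.5)  # fires only if both inputs fire

def large_bedrooms(area, bedrooms):
    return layer2(*layer1(area, bedrooms))
```

A single layer could learn the same function, but only by dedicating neurons to every relevant input combination rather than reusing the first layer's partial results.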
[Check out this answer](https://stats.stackexchange.com/questions/274569/deep-networks-vs-shallow-networks-why-do-we-need-depth/274571#274571)
Upvotes: 5 [selected_answer]<issue_comment>username_2: >
> I think you have some confusion about the basics of neural networks.
> Every layer has a separate activation function and input/output
> connection weights.
>
>
>
The output of the first hidden layer will be multiplied by a weight, processed by an activation function in the next layer and so on.
Single-layer neural networks are limited to fairly simple tasks; a deeper NN can perform far better than a single layer.
However, do not use more than one layer if your application is not fairly complex. In conclusion, a 100-neuron layer does not mean a better neural network than 10 layers of 10 neurons, though 10 layers is rather extreme unless you are doing deep learning. Start with 10 neurons in the hidden layer and try to add layers or add more neurons to the same layer to see the difference. Learning with more layers will be easier, but more training time is required.
Upvotes: 0 <issue_comment>username_3: There are so many aspects.
**1. Training:**
Training deep nets is a hard job due to the [vanishing](http://neuralnetworksanddeeplearning.com/chap5.html#the_vanishing_gradient_problem) (rarely exploding) gradient problem. So building a 10x100 neural net is not recommended.
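The vanishing-gradient problem can be seen from the sigmoid's derivative alone. A rough sketch (ignoring the weights, which can make the shrinkage better or worse):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # peaks at 0.25, reached at x == 0

# During backpropagation the gradient picks up one derivative factor per
# layer, so through 10 sigmoid layers even the best case shrinks to a
# factor below one in a million:
best_case_factor = sigmoid_derivative(0.0) ** 10
```

This is why very deep fully connected sigmoid stacks train poorly, and why architectures use ReLU activations, skip connections, or careful initialization instead.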
**2. Trained network performance:**
* **Information loss:**
The classical usage of neural nets is the [classification](https://math.stackexchange.com/questions/141381/regression-vs-classification) problem. Which means we want to get some well defined information from the data. (Ex. Is there a face in the picture or not.)
So usually a classification problem has a lot of inputs and few outputs; what's more, the sizes of the hidden layers descend from input to output.
However, we lose information by using fewer neurons layer by layer. (I.e., we cannot reproduce the original image based only on the fact that there is a face in it.) So you must be aware that you lose information using 100 neurons if the size of the input is (let's say) 1000.
* **Information complexity:** However, the deeper nets (as <NAME> mentioned) can fetch more complex information from the input data. In spite of this, it's not recommended to use 10 fully connected layers. It's recommended to use convolutional/ReLU/max-pooling or other types of layers. The first layers can compress some essential parts of the input. (E.g., is there a line in a specific part of the picture?) The later layers can say: there is a specific shape in this place in the picture. Etc.
**So deeper nets are more "clever", but a 10x100 net structure is not a good choice.**
Upvotes: 2 <issue_comment>username_4: If the problem you are solving is linearly separable, one layer of 1000 neurons can do a better job than 10 layers with 100 neurons each.
If the problem is non-linear and non-convex, then you need deep neural nets.
Upvotes: 1 |
2017/05/06 | 1,385 | 6,169 | <issue_start>username_0: I feel that many words if not all of them have a direct mapping to some kind of inner subjective experience, to a physical object, mental feeling, process or some other kind of abstract thing. Given that machines don't have *qualia* and no mapping of this kind, can they really understand anything even though they are made to answer to questions with lots of statistical training?<issue_comment>username_1: I think a "successful" strong AI with natural language abilities -- say one that could produce "good" unsupervised translations of literature, or pass a rigorous Turing test -- would have to include in the corpus of data used to build its models visual, auditory, and probably tactile data. It might be necessary as well to simulate the kind of agency and intentionality that humans have-- so the AI-in-training has the opportunity to move a simulated self to change what input it receives. I suspect training it only on text data, for example, would always be inadequate. If it had access to sensory information and learned to associate it appropriately with the symbols of language, it might be able to learn the meaning of language in a way that we'd find difficult to distinguish from our own understanding, even though the AI would presumably not have the (same) qualia we have.
Of course, we are a long way from having hardware sufficiently powerful to even attempt such a comprehensive mind-modeling project. But I don't think the qualia issue *in principle* prevents a "real" understanding of language; its just a matter of extending the symbols available to the AI for modeling the world represented by language to be a good enough match for the symbols humans use in their minds, including the symbols that arise from sensory inputs.
Upvotes: 1 <issue_comment>username_2: We don't even know **exactly** what qualia are, so it's hard to say for sure. But here's what I do think: a lot of human learning is experiential and is rooted in our interactions with the physical world. That is, we see,smell, hear, and feel things, we experience gravity and our orientation in the world through kinesthetic awareness, the sense of balance we have, etc. So while an AI running on a server in a data center might well be "as intelligent" as a human, I don't think it's reasonable to expect it to have the same kind of knowledge and awareness as a human, simply because it has never experienced many things.
So if you want to talk about, say, "seeing the color red" and refer to qualia, then sure. I think it makes a certain kind of sense to say that the machine will be missing something "human" and that that refers to qualia.
OTOH, I think it would be a mistake to underestimate just how "intelligent" our AI's will eventually become even if they aren't embodied. We just have to keep in mind that their intelligence might not be quite the same as ours, because they essentially inhabit a different world.
Upvotes: 2 <issue_comment>username_3: I'm going to be controversial here; so please don't vote this answer down if you just disagree with it.
Your question presupposes that machines do not or cannot possess qualia, which are required for true understanding. Given that we don't really know what it means to 'understand' something, and that even the meaning of 'meaning' itself is by no means a resolved issue, it might be overly specific.
In one strand of linguistics, the meaning of a word is defined by its use, and by the context of surrounding words. We could hazard a guess that children acquire the meaning of words through exposure to language, and the correlation of experiences with the corresponding sounds. How that works in detail is AFAIK not fully understood. But there would be nothing 'inherent' in a new-born human that would enable it to 'understand' anything.
If that is the case, then we could train a machine to do the same. Obviously, it would be a long and tedious process, and there is probably a reason why it takes us years to become proficient in our use of language. But if we correlate sensory input with linguistic utterances, a sufficiently sophisticated learning algorithm might be able to acquire some meaning for such utterances from the way they are used.
There are, of course, rather a lot of unknowns here. That is because the topic straddles various fields, from child language acquisition, corpus linguistics, the psychology of learning, and many more. And to my knowledge, none of these fields is sufficiently advanced to shed any light on this issue yet. There is the whole question of abstract words and concepts. How do we segment the continuous stream of sounds into discrete units (*phonemes*) without knowing what they are? With all that complexity I begin to appreciate why Chomsky opted for his Language Acquisition Device to avoid getting frustrated... :)
So, to answer your question: yes, it should be possible. A properly set up machine, which would be able to simulate human learning, would pick up its own mapping of linguistic structures to its experience from the world outside. And if we call this mapping the 'meaning' of those structures, then a machine can learn this, and presumably 'understand' language. If we ever get to that stage with AI is another question.
Upvotes: 3 <issue_comment>username_4: Great question, and equally qualified answers. My belief is yes to understanding language, and the bugaboo (a non-intellectual vernacular) is what understanding is. Is it interpretive? Inferred? To what end? An algorithm's response to a command in language form will depend on the robustness programmed into it. However, qualia connotes a subjective experience due to a sensory stimulus, whether triggered by memory or one of our human senses. Then it begs the question of whether a computer can collectively experience, and how we would know that. Second, are all algorithms subject to the programmed bias and their available data store? Facebook and Google News have shown that programmed bias is very real. Furthermore, qualia is an emergent trait, so I can't see how a computer and its collective systems can become aware and have subjective experience.
Upvotes: 0 |
2017/05/08 | 914 | 3,547 | <issue_start>username_0: I am looking at a diagram of [ZFNet](https://arxiv.org/pdf/1311.2901.pdf) below, in an attempt to understand how CNNs are designed.
[](https://i.stack.imgur.com/8af4m.png)
In the first layer, I understand the depth of 3 (224x224x3) is the number of color channels in the image.
In the second layer, I understand the $110 \times 110$ is computed with the formula $W\_2 = (W\_1-F+2P)/S + 1 = (224 - 7 + 2\*1/2)/2 + 1 = 110$.
I also understand how pooling works to create a size reduction from $110 \times 110$ to $55 \times 55$.
But where does the depth of $96$ come from in the second layer? Is this the new "batch size"? Is it totally arbitrary?
Bonus points if someone can direct me to a reference that can help me understand how all these dimensions relate to each other.<issue_comment>username_1: The $96$ is the number of **feature maps**, which is equal to the number of filters/kernels.
The choice of the number of kernels is not fully arbitrary, although there is no equation or exact rule restricting the number.
If you have a CNN, a single convolution operation would be pointless: since it is applied to the whole image, it can generalize, but only to specific (meaning: a finite number of) features. Easy example: if a 7x7 filter in the first layer concentrates on round shapes, it cannot generalize to, let's say, red cubes at the same time.
Therefore, your convolutional layers have several filters/kernels, i.e. several sets of weights, each of which is used for a convolution. The result of each of these convolutions is one feature map, i.e. the image information convolved by one kernel.
Typically, you look at your kernels and your problem's domain to figure out what an appropriate number of kernels could be. You also have to keep in mind that too few kernels could lose information and underfit, while too many kernels could overfit specific patterns. Especially when you have far more parameters (primarily the weights of your neural network) than training data, your neural network will normally perform poorly and you have to reduce its size.
The kernels should not be confused with the batch size. The batch size is the number of samples (here, images) you train *in parallel*. Each training step of your neural network consists not of feeding a single image, but of feeding *batch size* images, usually combined with batch normalization steps between the layers. Hence, this has nothing to do with the number of kernels.
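For reference, the shape arithmetic from the question can be sketched in a few lines (the padding value `p=1` below is an assumption chosen to reproduce the 110 from the question's formula):

```python
def conv_out(w, f, p, s):
    """Spatial output size of a conv layer: floor((W - F + 2P) / S) + 1."""
    return (w - f + 2 * p) // s + 1

# ZFNet's first layer: 224x224 input, 7x7 filters, stride 2.
print(conv_out(224, 7, 1, 2))    # 110
# With 96 kernels, each one produces a 110x110 feature map, so the
# output volume is 110 x 110 x 96 -- independent of the batch size.
print(7 * 7 * 3 * 96 + 96)       # 14208 learnable parameters (weights + biases)
```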
Upvotes: 3 [selected_answer]<issue_comment>username_2: Let's say you have an image with $3$ channels and you have $10$ filters, where each filter has the shape $5 \times 5 \times 3$. The **depth** of the convolutional layer after having applied this filter to the image is $10$, which is equal to the number of filters. The spatial dimensions of the filter (in this case, $5 \times 5$) are more or less defined arbitrarily (it's a hyperparameter).
Upvotes: 1 <issue_comment>username_3: Given that you were also asking for a reference that describes in detail these operations, you should take a look at the paper [A guide to convolution arithmetic for deep learning](https://arxiv.org/pdf/1603.07285.pdf) (2018), which describes in detail the arithmetic of many convolution operations used in convolutional neural networks. There's also [the associated repo](https://github.com/vdumoulin/conv_arithmetic) with the animations of the different operations.
Upvotes: 0 |
2017/05/08 | 1,091 | 4,628 | <issue_start>username_0: For a classification task (I'm showing a pair of exactly two images to a CNN that should answer with 0 -> fake pair or 1 -> real pair) I am struggling to figure out how to design the input.
At the moment the network's architecture looks like this:
```
image-1 image-2
| |
conv layer conv layer
| |
_______________ _______________
|
flattened vector
|
fully-connected layer
|
reshape to 2D image
|
conv layer
|
conv layer
|
conv layer
|
flattened vector
|
output
```
The conv layers have a `2x2` stride, thus halving the images' dimensions. I would have used the first fully-connected layer as the first layer, but then its size wouldn't fit in my GPU's VRAM. Thus, I have the first conv layers halving the size of the images first, then combining the information with a fully-connected layer, and then doing the actual classification with conv layers on the combined image information.
My very first idea was to simply add the information up, like `(image-1 + image-2) / 2`... but this is not a good idea, since it heavily mixes up image information.
The next try was to concatenate the images to have one single image of size 400x100 instead of two 200x100 images. However, the results of this approach were quite unstable. I think this is because, in the center of the big, concatenated image, convolutions would convolve information from both images (right border of `image-1` / left border of `image-2`), which again mixes up image information in a way that isn't really meaningful.
My last approach was the current architecture, simply leaving the combination of `image-1` and `image-2` up to one fully-connected layer. This works - kind of (the results show a nice convergence, but could be better).
What is a reasonable, "state-of-the-art" way to combine two images for a CNN's input?
I clearly can not simply increase the batch size and fit the images there, since the pairs are related to each other and this relationship would get lost if I simply feed just one image at a time and increase the batch size.<issue_comment>username_1: You can combine the image output using concatenation. Please refer to this paper:
<http://ivpl.eecs.northwestern.edu/sites/default/files/07444187.pdf>
You can have a look at Figure 2. And if you are using Caffe, there is a layer called the Concat layer; you can use it for your purpose.
I am not fully clear about what you want to do. But like you said, if you want to pass the image values from the first layer to some later layers, try reading about skip architectures.
If you want to use this network as a real/fake finder, you can take the difference between the two images and convert it to a classification problem.
Hope it helps.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I'm not sure what you mean by pairs. But a common pattern for dealing w/ pair-wise ranking is a siamese network:
[](https://i.stack.imgur.com/3zG0B.png)
Where A and B are a positive/negative pair, and the Feature Generation Block is a CNN architecture that outputs a feature vector for each image (with the softmax cut off). The network is then trained on a distance-based loss between the two feature vectors, pulling positive pairs together and pushing negative pairs apart. The two networks share the same parameters, and thus in the end you have one model which can accurately disambiguate between a positive or negative pair.
Upvotes: 1 <issue_comment>username_3: [username_2](https://ai.stackexchange.com/a/5187/23994) actually has a good solution for you. This approach is a tried and tested way to solve the same problem you are trying to solve.
However, if you still want to concatenate the images and do this your way, you should **concatenate the images along the channel dimension**.
For example, by combining two $200\times 100 \times c$ feature vectors (where c is the number of channels) you should get a single $200\times 100 \times 2c$ feature vector.
The kernels of the next convolution look through all the channels of the feature vector $x \times x$ pixels at a time.
If we combine along the channel dimension, it becomes easier for the network to compare pixel values at corresponding positions in both images. Since the objective is to predict similarity or dissimilarity, this is ideal for us.
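As a minimal sketch of this channel-wise concatenation (NumPy only; the 200×100 image size is taken from the question):

```python
import numpy as np

h, w, c = 200, 100, 3                     # two 200x100 RGB images, as in the question
img1 = np.random.rand(h, w, c)
img2 = np.random.rand(h, w, c)

# Concatenate along the channel axis: one 200x100x6 input volume, so each
# kernel of the next conv layer sees corresponding pixels of both images at once.
pair = np.concatenate([img1, img2], axis=-1)
print(pair.shape)                         # (200, 100, 6)
```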
Upvotes: 1 |
2017/05/09 | 442 | 1,902 | <issue_start>username_0: I am an Android programmer. Now, I would like to learn machine learning. I know it requires a mathematical background, like statistics, probability, calculus and linear algebra. However, I am a bit lost. Where should I start from? Can someone provide me with a road map for how to learn the mathematical background required for machine learning?<issue_comment>username_1: You should begin with Dr. Andrew Ng's machine learning course on Coursera. It's probably the most popular course for newcomers to machine learning, and it's free.
You should also grab the "Elements of Statistical Learning" ebook PDF; it's a free book.
You may want to focus on:
1. Regression
2. Cross validation
3. Bias-variance tradeoff
4. Decision surface
5. Gradient descent
And more...
Upvotes: 4 [selected_answer]<issue_comment>username_2: If you are interested in deepening your statistical background before diving into machine learning, I would recommend the **Introduction to Statistics: Descriptive Statistics** course on **edX**,
where you'll learn
* The fundamental concepts and methods of statistics
* How to interpret graphical and numerical summaries of data
* Understand the reasoning behind the calculations, the assumptions under which they are valid, and the correct interpretation of results
The link for the course is [edX](https://www.edx.org/course/introduction-statistics-descriptive-uc-berkeleyx-stat2-1x)
This will definitely clarify your stats background, with the added benefit of certification.
Upvotes: 1 <issue_comment>username_3: Some of the fundamental mathematical concepts required in the ML field are as follows:
* Linear Algebra
* Analytic Geometry
* Matrix Decompositions
* Vector Calculus
* Probability and Distribution
* Continuous Optimization
A very recent book available at [Mathematics for Machine Learning](https://mml-book.github.io/) covers all these aspects and more.
Upvotes: 2 |
2017/05/09 | 763 | 3,396 | <issue_start>username_0: I'm trying to develop a kind of AI that will assist in debugging a large software system while we run test cases on it. The AI can get access to anything that the developers of the system can, namely log files, and execution data from trace points. It also has access to the structure of the system, and all of the source code.
The end goal of this AI is to be able to detect runtime errors during execution, and locate the source of these errors.
I was considering making use of a deep neural network, where the input would be the execution data and log output. Using this input it would be able to verify whether the current version of the system we are running is functional, or non-functional. The problems with this approach is that the system it would be evaluating would be constantly changing as it gets developed, so the only training material the NN would have is from the last stable version of the system (and even that could have some errors). Additionally, producing test cases for the system off of which we could train the NN would be very time consuming, and would defeat the purpose of using the NN in the first place.
I would like to know what AI design you think would be suitable for this task. Please let me know if you would like any other information relevant to the problem. As far as I can tell, nothing quite like this has been done before.
It's probably worth mentioning that my team has some extremely powerful machines on which we can run the AI.<issue_comment>username_1: You are trying to solve a variant of [The Halting Problem](https://en.wikipedia.org/wiki/Halting_problem), which is the problem of detecting whether a computer program is going to stop, or run forever.
The Halting Problem is incomputable, which means it is not possible to write a computer program that solves it. It is easy to see that your problem is also incomputable. If you could predict whether a program would generate an error, then for any program X that someone wants to solve the halting problem for, we could write a new program:
1. run(X)
2. Error("X finished running").
and use your algorithm to determine whether X would finish running or not. Since the halting problem is known to be incomputable, this means your problem must be incomputable too.
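The two-step program above can be sketched in code; the error detector here is hypothetical — the whole point of the argument is that no real `detects_error` can exist for all programs:

```python
def halts(detects_error, program):
    """If a perfect runtime-error detector existed, it would solve the
    halting problem via the two-step program above."""
    def wrapper():
        program()                                  # 1. run(X)
        raise RuntimeError("X finished running")   # 2. Error("X finished running")
    return detects_error(wrapper)  # "will this error?" == "does X halt?"

# Naive stand-in "detector" that just runs the program and watches for the
# error -- it only ever returns for programs that happen to halt, which is
# exactly the incomputability being illustrated.
def naive_detector(prog):
    try:
        prog()
        return False
    except RuntimeError:
        return True

print(halts(naive_detector, lambda: 42))   # True
```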
That's not to say all is lost though. Formal verification is a field that uses some AI techniques (mostly reasoning-based, but I think there's some machine learning now too) to try to solve this problem for some programs. It can't work for *every* program though.
Upvotes: 2 <issue_comment>username_2: I think you would find this link helpful. It demonstrates how to identify patterns in large arbitrary byte data.
<https://devblogs.nvidia.com/malware-detection-neural-networks/>
Upvotes: 2 [selected_answer]<issue_comment>username_3: Debugging and validation of large software systems, such as a video software stack
If you take the validation and debugging of a video software stack as an example, it is very difficult for the naked eye to identify failures on the display. In this case you can use a DNN-based image classifier to identify functional failures.
Upvotes: 0 <issue_comment>username_4: If you are also considering 3rd-party options instead of building your own system, you may want to check this solution: <https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Anomaly_Detection.html>
Upvotes: 0 |
2017/05/14 | 1,659 | 6,787 | <issue_start>username_0: Can or will automata love? If so, is there a theoretical limit to love? Conversely, can automata develop selfishness?
How many resources will automata devote to being selfish and helping others?<issue_comment>username_1: It's a poorly stated question because these are at least three, possibly four different questions that are quite independent from each other.
---
First, let's take the questions from the text. Selfishness vs generosity of the system - this is quite easy to define as sacrificing resources for "own maintenance" vs "statutory purposes" - "helping others" defined as fulfilling the role assigned by the creators.
This is all a matter of developing the goal function that gives sufficient priority to immediate or short term results. In the classic example of "[paperclip maximizer](https://wiki.lesswrong.com/wiki/Paperclip_maximizer)", without this sort of restriction the end result will be not even a single paper clip made, as the AI expands self to develop even more efficient way to create the paper clips, since current, inferior methods will waste resources which could better serve later, more advanced methods - the classic paradigm "better is the enemy of good", all focus on optimizing the process without actually performing the process.
The problem here is that we don't know what type of function the improvement due to the optimization process will follow, and so we don't quite know what sort of prioritization function to apply. Probably a flat deadline would work, but if that is movable, AI might sacrifice all resources to developing a way to move the deadline. Be it by resetting system clock, terrorizing system operators into adjusting the deadline, or inventing time travel.
So, selfishness vs generosity is a matter of developing the correct time constraints on the goal function. Difficult, but doable - without it AI will be "infinitely selfish", but over-constrained, it will be inefficient, producing inferior immediate result instead of developing better.
Probably, a good approach would be to predict a reasonable (both achievable and desirable) advancement path and set this as the goal value - not maximization of output, but approaching the desired output value (which may be growing over time, linearly or exponentially) and freeing up unused resources - adding a cap on expansion speed. That would avoid the paperclip universe horror scenario.
---
Now, for love. Again, this depends largely on the goal function.
Defining love, that would be a specific mental state, influenced by specific hormones, and leading to specific re-shuffling of priorities, often resulting in impaired judgment.
Defined that way, could AI love? Yes. Should it? uh, better no. Because, "What will be the limit to love? Any theoretical limit?" - about none, bar using up all matter and energy in the universe.
Love pretty much unconditionally favors the "beloved". And that is just another phrasing for *maximization*. And, as we all know, an unconstrained maximizer AI is bad news. You don't want "Give you the Moon and all the stars" to be a completely literal expression. So, love - while possible both to implement, and to happen as a side effect of poorly constrained other goals (like "learn to feel all that a human can feel") can be a serious problem.
So yeah, regarding AI and love: can we? Yeah. Should we? nope.
Upvotes: 3 [selected_answer]<issue_comment>username_2: It depends on the environment and goals: does the AI die? Reproduce? Carry genetic information across generations?
Love is yet another selfish strategy. It is a hard-wired reward reflex in our brains that helps us identify and stick to suitable mates, which in turn helps us reproduce; as a consequence, evolution shapes and refines the love instinct. This is at least a valid theory.
That said, an AI, as an individual, is totally selfish; the strategy for expressing selfishness can vary depending on its goals.
If adjacent peers don't reciprocate or protect the AI and its offspring, it is futile to share with or care for them.
On the other hand, the AI has every reason to love, for example, a data source that provides it with information to survive and reproduce.
Reference: [The Moral Animal](https://en.wikipedia.org/wiki/The_Moral_Animal)
Upvotes: 0 <issue_comment>username_3: This is a [game theory](https://en.wikipedia.org/wiki/Game_theory) question, and involves the intersection of game theory and [ethics](https://en.wikipedia.org/wiki/Ethics).
First, it's helpful to define love in a functional sense as altruism. (This is consistent with the functional aspect of [agape](https://en.wikipedia.org/wiki/Agape) in terms of how that love functionally affects the material world through the actions of individuals.)
To this end, I urge you to look into the concepts of [rational agents](https://en.wikipedia.org/wiki/Rational_agent) and [superrationality](https://en.wikipedia.org/wiki/Superrationality). These concepts are mathematical approaches to decision making, and in the context of humanity, which may one day be extended to include [Artificial General Intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence), carry a moral dimension.
In a game theoretic context, cooperation is a functional form of altruism, as is the willingness of the superrational agent to "take a hit" and even "turn the other cheek" (make one or two altruistic choices to try to lead the rational agent into a superrational strategy that yields optimal outcomes, as opposed to getting stuck in a [Nash equilibrium](https://en.wikipedia.org/wiki/Nash_equilibrium) that yields a suboptimal outcome.)
[Evolutionary Game Theory](https://en.wikipedia.org/wiki/Evolutionary_game_theory) demonstrates that even agents with very limited intelligence can develop cooperative behaviors, and [Cooperative Game Theory](https://en.wikipedia.org/wiki/Cooperative_game_theory) studies rational coalitions between agents.
The cool thing about these fields and ideas is that they are all driven by mathematics.
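To illustrate the "suboptimal Nash equilibrium" point with the standard prisoner's dilemma (the payoff numbers below are the usual textbook values, not taken from this answer):

```python
C, D = "cooperate", "defect"
# (row player's payoff, column player's payoff)
payoff = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def is_nash(a, b):
    """No player can gain by unilaterally switching strategy."""
    best_a = max(payoff[(x, b)][0] for x in (C, D))
    best_b = max(payoff[(a, y)][1] for y in (C, D))
    return payoff[(a, b)] == (best_a, best_b)

equilibria = [(a, b) for a in (C, D) for b in (C, D) if is_nash(a, b)]
print(equilibria)                  # [('defect', 'defect')]
# ...yet mutual cooperation pays (3, 3) > (1, 1): the equilibrium is suboptimal.
```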
Thus, without getting into philosophy, we can say that automata will express behaviors analogous to love, but the basis is mathematical, and this also forms a functional limit defined by the parameters of the model.
---
On a philosophical note, <NAME>, who was quite prescient, believed that automata would become smart enough to develop empathy, allowing cooperative behaviors, and leading to love.
[Do Androids Dream Of Electric Sheep](https://en.wikipedia.org/wiki/Do_Androids_Dream_of_Electric_Sheep%3F) is about this subject, both the humanistic (philosophical) dimension, and the scientific--the plot of the 1968 novel prefigures Evolutionary Game Theory, which was formalized just 5 years later.
Upvotes: 1 |
2017/05/15 | 719 | 3,096 | <issue_start>username_0: Most (all I know of) machine learning systems use a fixed set of data input channels and processing algorithms, only expanding the underlying dataset processed by these; they obtain new data, but only from predefined sources, and use only their fixed, built-in capacity to process it, possibly tweaking parameters of the algorithm (like the weights of neural network nodes) but never fundamentally changing the algorithm.
Are there systems - or research into creating them - that are able to acquire "from out there" new methods of obtaining data and new ways to process it for results? Expand not just the passive data set "digested" by an active but static algorithm, but make the algorithm itself self-expanding - be it in terms of creating/obtaining new processing methods for its own data set, or creating/obtaining new methods of acquiring that data?<issue_comment>username_1: I think the closest thing is building up knowledge using predictions, as in the [Horde Architecture](https://www.cs.swarthmore.edu/~meeden/DevelopmentalRobotics/horde1.pdf). Research into what makes good predictions is ongoing at the University of Alberta, Canada, but the Horde architecture has the potential to ask new questions and generate new data based on the answers to those questions in the form of predictions.
One example is having a robot with a bump sensor and servos and asking questions like
>
> Will rotating the servos CW for 90 degrees make the bump sensor turn on?
>
>
>
This question can be phrased, and its answer estimated, mathematically in the form of a General Value Function as defined in the paper.
Asking different questions based on predictions like this one is where the power comes from.
Upvotes: 2 <issue_comment>username_2: A few weeks ago, I've come across this paper [Learning to learn by gradient descent by gradient descent](https://arxiv.org/abs/1606.04474) by <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> and <NAME> (i.e. Deepmind guys) whose abstract is the following:
>
> The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
>
>
>
You can find the related Github's repository [here](https://github.com/deepmind/learning-to-learn).
Unfortunately, I still haven't found the time to read the paper to explain the details of these apparently interesting ideas.
Upvotes: 2 [selected_answer] |
2017/05/16 | 724 | 3,286 | <issue_start>username_0: Obviously, finding suitable hyper-parameters for a neural network is a complex task and problem- or domain-specific. However, there should be at least some "rules" that hold most of the time for the size of the filter (or kernel)!
In most cases, intuition should be to go for small filters for detecting high-frequency features and large kernels for low-frequency features, right? For example, $3 \times 3$ kernel filters for edge detection, color contrast, etc., and maybe $11 \times 11$ for detecting full objects, when the objects occupy an area of roughly $11 \times 11$ pixels.
Is this "intuition" more or less generally true? How can we decide which kernel sizes should be chosen for a specific problem - or even for one specific convolutional layer?<issue_comment>username_1: Take a look at [this article](https://cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf). It gives tools to actually understand what your filters have learned and shows what you can do next to optimize your hyper-parameters. Also check more recent articles that seek to provide interpretations of what NNs learn.
Upvotes: 2 <issue_comment>username_2: One key to the answer is in the question: "Even for one specific conv layer." It is not a good idea to build deep convolutional networks on the assumption that a single kernel size applies most aptly to all layers. When perusing the configurations that proved successful in publications, it becomes apparent that configurations that vary through their layers are more commonly found to be optimal.
The other key is to understand that two layers of 11x11 kernels have a 21x21 reach, and ten layers of 5x5 kernels have a 41x41 reach. A mapping from one level of abstraction to the next need not be completed in one layer.
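The "reach" (receptive field) numbers above can be checked with a small helper (stride-1 stacking assumed by default):

```python
def receptive_field(kernel_sizes, strides=None):
    """Effective receptive field of a stack of conv layers."""
    strides = strides or [1] * len(kernel_sizes)
    rf, jump = 1, 1
    for k, s in zip(kernel_sizes, strides):
        rf += (k - 1) * jump   # each layer widens the field by (k-1) * jump
        jump *= s              # stride compounds the step between samples
    return rf

print(receptive_field([11, 11]))   # 21  (two 11x11 layers)
print(receptive_field([5] * 10))   # 41  (ten 5x5 layers)
```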
Generalities regarding kernel sizes exist, but they are functions of the typical input characteristics, the desired output of the network, the computing resources available, resolution, size of the data set, and whether they are still images or movies.
Regarding input characteristics, consider this case: The images are shot with a large depth of field under poor lighting conditions, such as in security scenarios, so the aperture of the lens is wide open, causing objects at some ranges of distance to be out of focus, or there can be motion blur.
Under such conditions, a single 3x3 kernel will not detect many edges. If an edge may span five pixels, there is a choice as to how many layers are dedicated to its detection. Which factors affect that choice depends on what other characteristics exist in the input data.
Expect that as acceleration hardware develops (in VLSI chips dedicated to this purpose) that the computing resource constraints will decrease in priority as a factor in kernel size selection. Currently, the computation time is significant and forces the decision about how to balance layer count and layer size to be mostly a matter of cost.
This question begs another question. Can an oversight machine learner learn how to automatically balance the configuration of deep convolution networks? It could then be re-executed whenever additional computing resources are provisioned. It would be surprising if there weren't at least a dozen labs working on exactly this capability.
Upvotes: 1 |
2017/05/17 | 1,188 | 5,413 | <issue_start>username_0: In a neural network when inputting nerve input to sense a 2D environment, how do you differentiate two types of objects (with similar shape and size) so the neural network can treat them differently?
Each neuron in the input layer of a neural network essentially gets 1-dimensional input (a range between two values), but 2-dimensional input would be needed to send both collision and category/type information through each input-layer neuron. How do you get around that?
Note: After having confusion regarding the scenario / situation I'm asking about compared to other more complex scenarios, and the long comment series that ensued, I'm realizing one challenge of this site is that it's much more complicated and diverse subject matter than code, or the various other topics of Stack Exchange where the problems can be very clearly and simply expressed. Here it's more challenging to express your question and scenario clearly to avoid confusion.
Also there's probably a higher skill gap between an AI learner / enthusiast, and an expert AI specialist, compared to other fields, so that could potentially lead to even more difficulty communicating the answer / question in ways everyone can understand without confusion. Challenging SE site to ask good questions on!<issue_comment>username_1: Neural networks learn. That's what they are for. For your task there are two sensible scenarios:
1. You have a fixed reaction for danger and a fixed reaction for food and you only have to learn how to distinguish between them. In that case you basically try to classify the situation to trigger the right fixed response and this classification would be learned by [backpropagation](http://neuralnetworksanddeeplearning.com).
2. You directly learn to act for a given situation. In that case you can either use a genetic algorithm or you use reinforcement learning with backpropagation.
I would recommend using a genetic algorithm, because it is significantly easier and also makes sense in this situation. You would randomly initialise your network, let it run around in the environment and remember how much food it ate and how often or how quickly it died. Then you would randomly change the weights of your network and do the same thing again. If it did better this time around you would proceed to use the new weights otherwise you go back to the old weights and try a different random change.
By selecting successful random changes it would over time learn to avoid danger and seek out food.
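The procedure described above is essentially a simple (1+1) hill climber / evolution strategy; here is a minimal sketch (the fitness function is a toy stand-in for "food eaten, deaths avoided"):

```python
import random

def hill_climb(fitness, n_weights=8, steps=200, sigma=0.1, seed=0):
    """Keep a random weight perturbation only if it improves fitness."""
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in range(n_weights)]
    best = fitness(weights)
    for _ in range(steps):
        candidate = [w + rng.gauss(0, sigma) for w in weights]
        score = fitness(candidate)
        if score > best:               # did better? keep the new weights
            weights, best = candidate, score
        # otherwise: fall back to the old weights and try another change
    return weights, best

# Toy fitness: the closer the weights are to all-ones, the "fitter".
fit = lambda ws: -sum((w - 1) ** 2 for w in ws)
_, final_score = hill_climb(fit)
```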
Edit: To my mind you have a fundamental misunderstanding of how perception works. If you see a lion and a cake, do those trigger different kinds of cells on your retina? No! All nerve cells are used to detect all kinds of objects! The classification, i.e. whether you are seeing a lion or a cake, happens in the neural network, i.e. in the higher regions of your visual cortex, far removed from the initial nerve activation. Your lion might be yellow and your cake might be yellow; only if you analyse the high-level structure of your nerve inputs can you decide what you are seeing. That is the task of a neural network. And that high-level structure analysis is what a neural network learns.
What seems to confuse you is the example you linked. In that example this very sparse distance measuring is enough to differentiate between walls and boosters in your high level structure analysis, because the different points of the walls that you sample have a certain relative position that you can analyse and conclude that they constitute the wall.
In your scenario very sparse distance measuring will not help you obviously. The distance of an object doesn't tell you whether it's a lion or a cake. Distance and color would be a solution to that. Or, more realistically, you have different shapes and much tighter distance sampling, and high level analysis can work out the shape from a couple of closely measured distances.
Upvotes: 2 <issue_comment>username_2: @username_1 answered my question in comments, but his answer doesn't reflect that information. If he changes his answer to contain the actual answer, I'll remove mine.
The answer to the question "How can object types be differentiated in the input of a neural network?" is:
**Use different sensors (nerves) to detect different types of input.**
So for example:
* If you want the network to be capable of differentiating two different types of objects in a simple 2D environment, use two different sets of nerves, one to detect one object type, one to detect the other. So if you want to sense 10 points around your "organism", have 20 nerves, 10 neurons in your input layer dealing with one object type, 10 dealing with the other.
* That's the most simple example, dealing with binary differences (object type 1 or object type 2), but it could be less binary, like this: Let's say there are three object types and think of each object types as 1/3 of a color value. So you have three sets of 10 nerves in total transmitting to your input layer (30 neurons) and each nerve set senses either R, G, or B. If you set up your system so that there are three "objects" to collide with, stacked on top of each other (like a single object), your neural network will be capable of handling objects differently based on their RGB color value, meaning it can now handle nearly infinite "types" of objects.
Upvotes: 1 [selected_answer] |
2017/05/17 | 1,231 | 5,432 | <issue_start>username_0: Something I like about neural network AI is that we already have a blueprint for it - we know in great detail how different types of neurons work, and we don't have to invent the AI, we can just replicate what we already know works (neural networks) in a simplified way and train it.
So what's confusing me right now is why many popular neural models I've seen when studying neural networks include both inhibitory and stimulating connections. In real neural networks, from my understanding, there is no negative signal being transferred; rather, the signals sent between neurons are comparable to values between 0.1 and 1. There's no mechanism for an inverse (inhibiting) signal to be sent.
For example, in this network (seen overlaying a snake being simulated), the red lines represent inhibiting connections, and the blue lines represent stimulating connections:
[](https://i.stack.imgur.com/nLT4I.png)
Is this just an inconsequential detail of neural network design, where there's really no significant difference between a range of 0 to 1 and a range of -1 to 1? Or is there a reason that connections in our simulated neural networks benefit from being able to a express a range from -1 to 1 rather than just 0 to 1?<issue_comment>username_1: Neural networks learn. That's what they are for. For your task there are two sensible scenarios:
1. You have a fixed reaction for danger and a fixed reaction for food and you only have to learn how to distinguish between them. In that case you basically try to classify the situation to trigger the right fixed response and this classification would be learned by [backpropagation](http://neuralnetworksanddeeplearning.com).
2. You directly learn to act for a given situation. In that case you can either use a genetic algorithm or you use reinforcement learning with backpropagation.
I would recommend using a genetic algorithm, because it is significantly easier and also makes sense in this situation. You would randomly initialise your network, let it run around in the environment and remember how much food it ate and how often or how quickly it died. Then you would randomly change the weights of your network and do the same thing again. If it did better this time around you would proceed to use the new weights otherwise you go back to the old weights and try a different random change.
By selecting successful random changes it would over time learn to avoid danger and seek out food.
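A minimal sketch of that mutate-and-keep-if-better loop. Here the weights are just a flat list, and `evaluate` is a toy stand-in for running the agent in the environment; in your project it would run the simulation and return something like food eaten minus a penalty for dying early:

```python
import random

def evaluate(weights):
    # Toy stand-in for "run the agent in the environment and score it":
    # here fitness is highest (0.0) when every weight is exactly 0.5.
    # In a real setup this would run the simulation instead.
    return -sum((w - 0.5) ** 2 for w in weights)

def hill_climb(n_weights=8, steps=2000, seed=0):
    rng = random.Random(seed)
    # Randomly initialise the network's weights.
    weights = [rng.uniform(-1, 1) for _ in range(n_weights)]
    best = evaluate(weights)
    for _ in range(steps):
        # Randomly perturb one weight; keep the change only if fitness improves.
        candidate = list(weights)
        i = rng.randrange(n_weights)
        candidate[i] += rng.gauss(0, 0.3)
        score = evaluate(candidate)
        if score > best:
            weights, best = candidate, score
    return weights, best

weights, fitness = hill_climb()
```

A full genetic algorithm would additionally keep a population and recombine candidates, but this single-candidate hill climb is the core of the "try a random change, keep it if it did better" procedure described above.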
Edit: To my mind you have a fundamental misunderstanding of how perception works. If you see a lion and a cake, do those trigger different kinds of cells on your retina? No! All nerve cells are used to detect all kinds of objects! The classification, i.e. whether you are seeing a lion or a cake, happens in the neural network, i.e. in the higher regions of your visual cortex, far removed from the initial nerve activation. Your lion might be yellow and your cake might be yellow; only if you analyse the high-level structure of your nerve inputs can you decide what you are seeing. That is the task of a neural network. And that high-level structure analysis is what a neural network learns.
What seems to confuse you is the example you linked. In that example this very sparse distance measuring is enough to differentiate between walls and boosters in your high level structure analysis, because the different points of the walls that you sample have a certain relative position that you can analyse and conclude that they constitute the wall.
In your scenario, very sparse distance measuring will obviously not help you. The distance of an object doesn't tell you whether it's a lion or a cake. Distance and color would be a solution to that. Or, more realistically, you have different shapes and much tighter distance sampling, and high-level analysis can work out the shape from a couple of closely measured distances.
Upvotes: 2 <issue_comment>username_2: @username_1 answered my question in comments, but his answer doesn't reflect that information. If he changes his answer to contain the actual answer, I'll remove mine.
The answer to the question "How can object types be differentiated in the input of a neural network?" is:
**Use different sensors (nerves) to detect different types of input.**
So for example:
* If you want the network to be capable of differentiating two different types of objects in a simple 2D environment, use two different sets of nerves, one to detect one object type, one to detect the other. So if you want to sense 10 points around your "organism", have 20 nerves, 10 neurons in your input layer dealing with one object type, 10 dealing with the other.
* That's the most simple example, dealing with binary differences (object type 1 or object type 2), but it could be less binary, like this: Let's say there are three object types and think of each object types as 1/3 of a color value. So you have three sets of 10 nerves in total transmitting to your input layer (30 neurons) and each nerve set senses either R, G, or B. If you set up your system so that there are three "objects" to collide with, stacked on top of each other (like a single object), your neural network will be capable of handling objects differently based on their RGB color value, meaning it can now handle nearly infinite "types" of objects.
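A sketch of this one-channel-per-object-type encoding. The function name and the `(distance, object_type)` reading format are illustrative assumptions, not from any particular library:

```python
def encode_sensors(readings, types=("food", "danger")):
    """readings: one (distance, object_type) pair per sample point.
    Returns one block of inputs per object type, concatenated, so the
    network has a separate "nerve" for each type at every sample point."""
    n = len(readings)
    inputs = [0.0] * (n * len(types))
    for point, (distance, obj) in enumerate(readings):
        channel = types.index(obj)          # which block this reading belongs to
        inputs[channel * n + point] = distance
    return inputs

# 3 sample points: food at points 0 and 2, danger at point 1.
x = encode_sensors([(0.2, "food"), (0.9, "danger"), (0.5, "food")])
# -> [0.2, 0.0, 0.5,  0.0, 0.9, 0.0]  (food channel first, then danger channel)
```

With 10 sample points and 2 object types this produces exactly the 20 input values described above.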
Upvotes: 1 [selected_answer] |
2017/05/19 | 1,273 | 5,045 | <issue_start>username_0: I'm wondering how to train a neural network for a round based board game like tic-tac-toe, chess, risk, or any other round based game.
Getting the next move by inference seems pretty straightforward: feed the game state as input and use the output as the move for the current player.
However, training an AI for that purpose doesn't appear to be that straightforward, because:
1. There might not be a rating of whether a single move is good or not, so training on single moves doesn't seem to be the right choice
2. Using all game states (inputs) and moves (outputs) of the whole game to train the neural network doesn't seem to be the right choice either, as not all moves within a lost game might be bad
So I'm wondering how to train a neural network for a round based board game?
I would like to create a neural network for tic-tac-toe using tensorflow.<issue_comment>username_1: I think you should get familiar with reinforcement learning. In this field of machine learning the agent interacts with its environment and after that the agent gets some reward. Now, the agent is the neural network, the environment is the game, and the agent can get a reward of +1 if it wins or -1 if it loses. You can use this (state, action, reward) experience tuple to train the agent. I can recommend David Silver's lectures on YouTube and Sutton's book as well.
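To make the (state, action, reward) idea concrete before bringing in TensorFlow, here is the classic tabular Q-learning update applied to one experience tuple. For tic-tac-toe the state space is small enough that a plain dictionary works; for bigger games the same target term becomes the regression target for a network. The string state encoding below is an arbitrary illustration:

```python
def q_update(q, state, action, reward, next_state, done,
             alpha=0.5, gamma=0.9, n_actions=9):
    # One Q-learning step on an experience tuple (state, action, reward, next_state).
    old = q.get((state, action), 0.0)
    target = reward if done else reward + gamma * max(
        q.get((next_state, a), 0.0) for a in range(n_actions))
    q[(state, action)] = old + alpha * (target - old)

q = {}
# A terminal winning move: reward +1 for the move that ended the game.
q_update(q, state="X.O|.X.|O..", action=8, reward=1.0, next_state=None, done=True)
```

Non-terminal moves get reward 0 and inherit value from the best follow-up state, which is how credit for the eventual win or loss flows back to earlier moves (addressing point 2 of the question).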
Upvotes: 2 <issue_comment>username_2: I'm a chess player and my answer will be only on chess.
Training a neural network with reinforcement learning isn't new, it has been done many times in the literature.
I'll briefly explain the common strategies.
* The purpose of a network is to learn **position evaluation.** We all know a queen is stronger than a bishop, but can we make the network know about it without explicitly programming? What about pawn structure? Does the network understand how to evaluate whether a position is winning or not?
* Now that we know why we need the network, we'll need to design it. The design differs radically between studies. Before deep learning was popular, people used shallow networks. Nowadays, networks with many layers stand out.
* Once we have the network, you'll need to make a chess engine. A neural network can't magically play chess by itself; it needs to be connected to a chess engine. Fortunately, we don't need to write position evaluation code because the network can do that for us.
* Now, we have to play games. We could start with some high-quality chess databases or instead have our AI agent play games with another player (e.g. itself, another AI agent, or a human). This is known as **reinforcement learning**.
* While we play games, we update the network parameters. This can be done by stochastic gradient descent (or other similar techniques). We repeat our training for as long as we want, usually over millions of iterations.
* Finally, we have a trained neural network model for chess!
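The loop above can be caricatured in a few lines, with a linear evaluation standing in for the deep network and made-up random feature vectors standing in for positions. This is purely illustrative (real engines use far richer inputs and temporal-difference targets), but it shows the shape of "play a game, then nudge every position's evaluation toward the result":

```python
import random

def evaluate(weights, features):
    # Linear evaluation standing in for the network's position score.
    return sum(w * f for w, f in zip(weights, features))

def train_on_game(weights, positions, outcome, lr=0.01):
    # Nudge the evaluation of every position seen in the game toward the
    # final result (+1 win, -1 loss): the simplest outcome-based update.
    for features in positions:
        error = outcome - evaluate(weights, features)
        weights = [w + lr * error * f for w, f in zip(weights, features)]
    return weights

rng = random.Random(0)
weights = [0.0] * 4
# 30 fake "positions": a bias feature plus three random made-up features.
game = [[1.0] + [rng.uniform(-1, 1) for _ in range(3)] for _ in range(30)]
for _ in range(200):          # replay the same won game many times
    weights = train_on_game(weights, game, outcome=1.0)
```

After training, the evaluation of the positions from this "won" game has moved close to +1, which is exactly the behavior the self-play loop relies on at scale.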
Look at the following resources for details:
>
> <https://www.chessprogramming.org/Learning>
>
>
>
Upvotes: 3 <issue_comment>username_3: Great question! NN is very promising for this type of problem: [Giraffe Chess](https://arxiv.org/pdf/1509.01549.pdf). Lai's accomplishment [was considered to be a pretty big deal](https://www.technologyreview.com/s/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/), but unfortunately came just a few months before AlphaGo took the spotlight. *(It all turned out well, in that Lai was subsequently hired by DeepMind, [although not so well for the Giraffe engine](https://motherboard.vice.com/en_us/article/the-chess-engine-that-died-so-alphago-could-live-giraffe-matthew-lai);)*
I've found Lai's approach to be quite helpful, and it is backed by solid results.
---
You may want to use "[sequential](https://en.wikipedia.org/wiki/Sequential_game)" as opposed to "round based" since sequential is the preferred term in [Game Theory](https://en.wikipedia.org/wiki/Game_theory) and [Combinatorial Game Theory](https://en.wikipedia.org/wiki/Combinatorial_game_theory), and these are the fields that apply mathematical analysis to games.
The games you list are further termed "[abstract](https://en.wikipedia.org/wiki/Abstract_strategy_game)" to distinguish them from modern strategy boardgames, or games in general, which utilize a strong theme and are generally less compact than abstract games in terms of mechanics and elements. This carries the caveat that abstract games are not restricted to sequential games or boardgames, or even games specifically, as in the case of puzzles like Sudoku.
The formal name for this group of games is generally "[partisan](https://en.wikipedia.org/wiki/Partisan_game), sequential, [deterministic](http://library.msri.org/books/Book29/contents.html), [perfect information](https://en.wikipedia.org/wiki/Perfect_information#Examples)" with the further categorization of Tic-Tac-Toe as "trivial" (solved and easily solvable) and non-trivial (intractable and unsolved) for games like Chess and Go.
Upvotes: 3 |
2017/05/22 | 1,164 | 4,506 | <issue_start>username_0: Expressed in my own words:
>
> Suppose we create something that passes all of our tests and is
> indistinguishable from another human. How can you know if this is
> truly a conscious being as a human is, or simply a simulation of
> conscience?
>
>
>
What is the name for this? Where was it first written about? |
2017/05/23 | 1,674 | 6,665 | <issue_start>username_0: Basically, an AI that can create, rig, and texture 3d models and game environments (by extrapolating from collections of reference models, according to user input), and that can set up physics and mechanics (assuming that the AI has access to a 3d modeling studio and a game engine, both designed for compatibility with the AI, or as a component of the AI), all according to user commands (and allowing for tweaking and optimizations of models, rigging, mechanics, etc, by the user).
An example of user commands would be something like: "Gaming AI, create a casual style\* male model, European build, 6'5", fit and slightly skinny, with red scaly skin, green eyes, a reptilian tail, demonic wings, claws, sharp teeth", etc. The user probably wouldn't add all of these characteristics at once, but rather one at a time, tweaking each feature via AI commands or manually.
\*"casual style" is a fictional "style class". Style classes would refer to the visual style of the models. Possible example styles include "cartoon", "abstract", "gothic", "steampunk", "serious" and "realistic".
Here's another example of user commands, for a environmental model: "Gaming AI, create a serious style house, Victorian, two story, white with beige trim, with porches and shutters. Give it a creepy aesthetic." Again, models could be created and modified or have features added in a step by step process, in order to tweak and refine them.
I believe that such an AI would significantly reduce the amount of time, labor, and difficulty involved in designing games; making games cheaper and easier to produce, and making game design available to everyone. A variation of such an AI could also be used to create 2d artwork and animations.
But is such an AI even remotely possible? And would it take a supercomputer to run the thing? (I'm under the impression that such an AI would need to be capable of learning and adapting, and would require a massive and expansile "association library"\*—including 2d and 3d models, and verbal and textual speech—as well as near human intelligence)
\*if the term "association library" doesn't exist, or doesn't currently relate to AI, then I just made it up. According to my made-up definition, an association library is the library of programmed or learned associations that an AI uses to generate responses, and to, in this context, generate 3d models; and probably to write or select code as well, in order to set up physics and mechanics and the like. |
2017/05/26 | 1,891 | 7,527 | <issue_start>username_0: I am not looking for an efficient way to find primes (which of course is a [solved problem](https://en.wikipedia.org/wiki/AKS_primality_test)). This is more of a "what if" question.
So, in theory, could you train a neural network to predict whether a given number $n$ is composite or prime? How would such a network be laid out?<issue_comment>username_1: In theory, a neural network can approximate *any* given function. This result is known as [the universal approximation theorem](http://neuralnetworksanddeeplearning.com/chap4.html).
However, if you train a network with the numbers $0$ to $N$, you cannot guarantee that the network will classify numbers outside that range correctly ($n > N$).
Such a network would be a regular feed-forward network (or [MLP](https://en.wikipedia.org/wiki/Multilayer_perceptron)) as recurrency does not add anything to the classification of the given input. The number of layers and nodes of that NN can only be found through trial and error.
Upvotes: 2 <issue_comment>username_2: Yes, it is feasible, but consider that the integer factorization problem is an [NP-something problem](https://en.wikipedia.org/wiki/Integer_factorization#Difficulty_and_complexity) and a [BQP problem](https://en.wikipedia.org/wiki/BQP).
Because of this, it is impossible for a neural network purely based on classical computing to find prime numbers with 100% accuracy, unless P=NP.
Upvotes: -1 <issue_comment>username_3: Early success on prime number testing via artificial networks is presented in [*A Compositional Neural-network Solution to Prime-number Testing*, <NAME>, <NAME>, 2006](https://cloudfront.escholarship.org/dist/prd/content/qt5sg7n4ww/qt5sg7n4ww.pdf).
The knowledge-based cascade-correlation (KBCC) network approach showed the most promise, although the practicality of this approach is eclipsed by other prime detection algorithms that usually begin by checking the least significant bit, immediately reducing the search by half, and then searching based on other theorems and heuristics up to $floor(\sqrt{x})$. However the work was continued with [*Knowledge Based Learning with KBCC*, Shultz et al. 2006](http://www.psych.mcgill.ca/perpg/fac/shultz/personal/Recent_Publications_files/kbcc06.pdf)
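For reference, the conventional baseline alluded to here (reject even numbers via the least significant bit, then test odd divisors up to $floor(\sqrt{x})$) is only a few lines:

```python
def is_prime(x):
    # Trial division: small cases, then the least-significant-bit check
    # (even -> composite, halving the search at once), then odd divisors
    # up to floor(sqrt(x)).
    if x < 2:
        return False
    if x in (2, 3):
        return True
    if x & 1 == 0:
        return False
    d = 3
    while d * d <= x:
        if x % d == 0:
            return False
        d += 2
    return True

primes = [n for n in range(20) if is_prime(n)]   # [2, 3, 5, 7, 11, 13, 17, 19]
```

This is the kind of cheap exact procedure any learned detector has to be measured against.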
There are actually multiple sub-questions in this question. First, let's write a more formal version of the question: "Can an artificial network of some type converge during training to a behavior that will accurately test whether the input ranging from $0$ to $2^n-1$, where $n$ is the number of bits in the integer representation, represents a prime number?"
1. Can it by simply memorizing the primes over the range of integers?
2. Can it by learning to factor and apply the definition of a prime?
3. Can it by learning a known algorithm?
4. Can it by developing a novel algorithm of its own during training?
The direct answer is yes, and it has already been done according to 1. above, but it was done by over-fitting, not learning a prime number detection method. We know the human brain contains a neural network that can accomplish 2., 3., and 4., so if artificial networks are developed to the degree most think they can be, then the answer is yes for those. There exists no counter-proof to exclude any of them from the range of possibilities as of this answer's writing.
It is not surprising that work has been done to train artificial networks on prime number testing because of the importance of primes in discrete mathematics, its application to cryptography, and, more specifically, to cryptanalysis. We can identify the importance of digital network detection of prime numbers in the research and development of intelligent digital security in works like [*A First Study of the Neural Network Approach in the RSA Cryptosystem*, G.c. Meletius et. al., 2002](https://www.researchgate.net/profile/Michael_Vrahatis/publication/228790268_A_first_study_of_the_neural_network_approach_to_the_RSA_cryptosystem/links/0fcfd50e4b1dad4c70000000.pdf). The tie of cryptography to the security of our respective nations is also the reason why not all of the current research in this area will be public. Those of us that may have the clearance and exposure can only speak of what is not classified.
On the civilian end, ongoing work in what is called novelty detection is an important direction of research. Those like <NAME> and <NAME> are [approaching novelty detection from the signal processing side](https://www.sciencedirect.com/science/article/pii/S0165168403002020), and it is obvious to those that understand that artificial networks are essentially digital signal processors that have multi-point self tuning capabilities can see how their work applies directly to this question. Markou and Singh write, "There are a multitude of applications where novelty detection is extremely important including signal processing, computer vision, pattern recognition, data mining, and robotics."
On the cognitive mathematics side, the development of a mathematics of surprise, such as [*Learning with Surprise: Theory and Applications* (thesis), <NAME>, 2016](https://infoscience.epfl.ch/record/223460/files/EPFL_TH7418.pdf) may further what Ergi and Shultz began.
Upvotes: 5 [selected_answer]<issue_comment>username_4: I'm an undergraduate researcher at Prairie View A&M university. I just spent a few weeks tweaking an MLPRegressor model to predict the $n$th prime number. It recently stumbled into a super low minimum, where the first $1000$ extrapolations outside of the training data produced error less than $.02$ percent. Even at $300000$ primes out, it was about $.5$ percent off. My model was simple: $10$ hidden layers, trained on a single processor for less than 2 hours.
To me, it begs the question, "Is there a reasonable function that produces the nth prime number?" Right now, the algorithms become computationally very taxing for extreme $n$. Check out the time gaps between the most recent largest primes discovered. Some of them are years apart. I know it's been proven that if such a function exists, it will not be polynomial.
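Some context for that closing question (this is classical analytic number theory, not the poster's model): the prime number theorem gives the estimate $p_n \approx n(\ln n + \ln \ln n - 1)$, which already lands within about 1% of the 1000th prime, so a regressor reaching ~0.02% error is doing noticeably better than this simple asymptotic:

```python
import math

def first_n_primes(n):
    # Naive generation by trial division; fine for small n.
    primes, cand = [], 2
    while len(primes) < n:
        if all(cand % p for p in primes if p * p <= cand):
            primes.append(cand)
        cand += 1
    return primes

def nth_prime_estimate(n):
    # Classic asymptotic from the prime number theorem, valid for n >= 6:
    # p_n ~ n * (ln n + ln ln n - 1).
    return n * (math.log(n) + math.log(math.log(n)) - 1)

primes = first_n_primes(1000)
actual = primes[-1]                                  # p_1000 = 7919
rel_err = abs(actual - nth_prime_estimate(1000)) / actual
```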
Upvotes: 2 <issue_comment>username_5: From a purely theoretical standpoint, yes. Such a neural network (NN) can be manually constructed.
A neuron is very similar to a logic gate. It is definitely possible to use neurons as logic gates, replicate the structure of a CPU, and then perform any algorithm we already have. So in theory, YES. (Albeit obviously dumb to do so.)
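The neuron-as-logic-gate claim can be shown directly with hand-picked (not learned) weights for a threshold unit; since NAND is universal, any Boolean circuit, and hence a CPU, follows in principle:

```python
def neuron(inputs, weights, bias):
    # A threshold unit: fire (1) when the weighted sum clears the bias.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def AND(a, b):
    return neuron([a, b], weights=[1, 1], bias=-1.5)

def OR(a, b):
    return neuron([a, b], weights=[1, 1], bias=-0.5)

def NAND(a, b):
    # NAND is functionally complete: every Boolean function can be built from it.
    return neuron([a, b], weights=[-1, -1], bias=1.5)
```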
The more complicated question is whether a NN can "learn" to detect primes. Given that it is theoretically possible, and also that the human brain is literally a neural network, I am very inclined to guess that it is possible. But not with a feedforward network.
Prime detection is not a very NN-friendly task for two reasons:
1. It is discrete maths, and NNs are more fuzzy in nature.
2. All currently known prime detection algorithms require some sort of loop/iteration, making their complexity at least O(bits), whereas a feedforward NN has constant time complexity.
The second difficulty will largely determine the shape of the final NN. Assuming no constant-time prime detection algorithm is ever possible, the NN will not be feed-forward. A recurrent network is required so that the NN can deal with loops.
The first difficulty means that the NN will likely be rather complex and a lot of examples will be needed to overcome the inherent fuzziness. But that I guess is very much expected.
Upvotes: 0 |
2017/05/27 | 1,392 | 4,729 | <issue_start>username_0: Just watched a recent WIRED video on virtual assistants' performance on telling jokes. They're composed by humans, but I'd like to know if AI has gotten good enough to write some.<issue_comment>username_1: I don't think the AI has gotten to that point yet. Here are some of the interesting papers on the subject:
* A paper was recently written that attempted to [generate jokes using unsupervised learning](http://homepages.inf.ed.ac.uk/s0894589/petrovic13unsupervised.pdf). The jokes are formulaic: they're all of the form "I like my X like I like my Y: Z" where X and Y are nouns, and Z is an adjective that can describe both X and Y. Here are some of the jokes generated in this paper:
```
I like my relationships like I like my source, open
I like my coffee like I like my war, cold
I like my boys like I like my sectors, bad
```
How funny these jokes are is a matter of personal taste I guess.
* Another paper by [<NAME> and <NAME>](http://www.aclweb.org/anthology/N16-1016) makes use of an LSTM to predict humor from a dataset of episodes of the TV show *The Big Bang Theory*. This is not generating jokes but finding out where the jokes occur in this dataset (so, theoretically, the resulting labelled dataset can hopefully be used to train a model to create jokes).
* Yet another paper is that by [<NAME>, <NAME>](https://web.stanford.edu/class/cs224n/reports/2760332.pdf). Unlike [the first paper mentioned above](http://homepages.inf.ed.ac.uk/s0894589/petrovic13unsupervised.pdf), which was unsupervised, this is a supervised learning model. Their neural network model generates jokes such as:
```
Apple is teaming up with Playboy Magazine in the self driving office.
One of the top economy in China , Lady Gaga says today that Obama is legal.
Google Plus has introduced the remains that lowers the age of coffee.
According to a new study , the governor of film welcome the leading actor of Los Angeles area , <NAME> .
```
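The "I like my X like I like my Y: Z" scheme from the first paper can be sketched as a toy generator. Note the word triples below are hand-written stand-ins; the paper instead learns which (X, Y, Z) combinations work from corpus co-occurrence statistics:

```python
import random

# Hand-picked (X, Y, Z) triples where the adjective Z can plausibly
# describe both nouns; the paper derives these from corpus statistics.
TRIPLES = [
    ("coffee", "war", "cold"),
    ("relationships", "source", "open"),
    ("code", "humor", "dry"),
]

def make_joke():
    x, y, z = random.choice(TRIPLES)
    return f"I like my {x} like I like my {y}, {z}"

print(make_joke())
```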
*My two cents*:
As of this writing, it appears that Multi-layer Recurrent Neural Networks (LSTM, GRU, RNN) for character-level language models are by far the most promising way to go about it. Maybe if you find some really cool data you can come up with some funny jokes, similar to how [<NAME>ane](http://lewisandquark.tumblr.com/post/159302925452/the-neural-network-generated-pickup-lines-that-are) was able to generate what I find to be really funny pickup lines such as:
```
Are you a 4loce? Because you’re so hot!
I want to get my heart with you.
You are so beautiful that you know what I mean.
I have a cenver? Because I just stowe must your worms.
Hey baby, I’m swirked to gave ever to say it for drive.
If I were to ask you out?
You must be a tringle? Cause you’re the only thing here.
I’m not on your wears, but I want to see your start.
You are so beautiful that you make me feel better to see you.
Hey baby, you’re to be a key? Because I can bear your toot?
I don’t know you.
I have to give you a book, because you’re the only thing in your eyes.
Are you a candle? Because you’re so hot of the looks with you.
I want to see you to my heart.
If I had a rose for every time I thought of you, I have a price tighting.
I have a really falling for you.
Your beauty have a fine to me.
Are you a camera? Because I want to see the most beautiful than you.
I had a come to got your heart.
You’re so beautiful that you say a bat on me and baby.
You look like a thing and I love you.
Hello.
```
Upvotes: 3 <issue_comment>username_2: As of now we don't have a satisfying cognitive theory of humor (or at least, not one that can evaluate the hilarity of a joke), and a quick survey of the literature shows that we don't have much of a clue on how to build such a model.
Because of that, and the fact that existing methods don't seem to reliably produce good jokes free form, there seems to be little reason to believe that ML methods can produce good jokes.
But of course this is all normative.
Upvotes: 0 <issue_comment>username_3: Amazingly, I just found a claim about that. I read it on Twitter, and the model is simply a GPT-2 with 355M parameters, trained on 200,000 raw title- and body-based jokes. What is amazing is that GPT-2 is one of the most advanced text-generating models; it can even translate or answer math questions if trained well.
Let's see example output from twitter.
* "I asked my girlfriend if she knew what sex was like || she said that
you can kiss her and she'll think you're a queer."
* Why does the teacher have her own car? || She's a car company for Santa.
* [](https://i.stack.imgur.com/rMEYN.png)
<https://twitter.com/lgbtinethiopia/status/1294644776772472834?s=20>
Upvotes: 0 |
2017/05/28 | 1,425 | 4,796 | <issue_start>username_0: The deep learning algorithms I would to know the limits of are:
1. CNTK
2. Caffe
3. TensorFlow
4. Torch7
5. Theano
For example: I've heard TensorFlow is near impossible to parallelize on 8 GPUs and above. So, in this case, the limit would be 8.
2017/05/31 | 851 | 3,878 | <issue_start>username_0: Are there approaches other than convolutions to learn features from images? Has there been any research to use approaches such as hashing (e.g. `p-hash`, `diff-hash` etc.) in lieu?<issue_comment>username_1: I think your question addresses the fact that convolutional neural networks utilize convolution and not some other mechanism. This answer addresses other feature extraction techniques you can use on image. In general, there are a number of other feature extraction mechanisms you can look into.
* Histogram oriented gradients. Counts the occurrences of gradients in an image and generates bins. These bins can be then used directly as features.
* Color data. You can extract color histograms of various granularities and use this data to train a model.
* Hough Transform - Can identify shapes and lines and other spatial features.
On top of these we also have numerous techniques on top of convolutions like gradients, curvature finding, corner detection, and ridge finding.
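As a rough illustration of the histogram-of-oriented-gradients idea mentioned above, here is a toy sketch on a 2D intensity grid (not a full HOG implementation, which also divides the image into cells and applies block normalization):

```python
import math

def gradient_orientation_histogram(img, bins=9):
    """Accumulate gradient magnitude into orientation bins over a
    grayscale image given as a 2D list of intensities."""
    hist = [0.0] * bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi   # unsigned orientation
            hist[min(int(ang / math.pi * bins), bins - 1)] += mag
    return hist
```

The resulting bin counts can then be used directly as a feature vector for a classifier.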
Upvotes: 0 <issue_comment>username_2: Some more complex and better-performing feature descriptors are those defined in MPEG-7, a multimedia content description standard developed by the MPEG group.
These descriptors characterize the visual content of the image or video in question. They are split into color, shape, texture, and motion descriptors.
**1. Color Descriptors**
Color Space. Specifies the data type of the color space in which they are expressed or work other color descriptors. Color spaces that it contemplated are: RGB (Red, Green Blue), YCrCb (Luminance, Chrominance), HSV (Hue, Saturation, Value), HMMD (Hue, Maximum, Minimum, and Difference).
Color Quantization. This descriptor defines a uniform quantization of a given color space.
Dominant Color(s). This descriptor is the most suitable for images, or regions, in which a small number of colors is sufficient to characterize the information of a given region.
Scalable Color. This descriptor consists of a color histogram in the HSV space. It is useful for image-to-image comparisons or searches based on color characteristics.
Color Layout. This descriptor allows representing the color spatial distribution within the images in a very compact way, so the recovery is realized with great efficiency.
Color Structure Descriptor. This descriptor characterizes the color distribution in an image. It builds a color histogram, which assigns most importance to the colors that appears most often and spread across the image. GoF/GoP Color (Group of Frames/Group of Pictures). This descriptor is an extension of the Scalable Color descriptor, which, unlike the latter, is applied to video sequences (a collection of images) instead of only an image.
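As a rough illustration of what a histogram-style color descriptor computes, here is a toy hue histogram in HSV space (a simplified sketch, not the standard's actual quantization and encoding):

```python
import colorsys

def hue_histogram(pixels, bins=8):
    """Quantize each pixel's hue into `bins` buckets; a crude
    stand-in for an HSV color-histogram descriptor."""
    hist = [0] * bins
    for r, g, b in pixels:                       # 8-bit RGB pixels
        h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hist[min(int(h * bins), bins - 1)] += 1
    return hist
```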
**2. Texture Descriptors**
Homogeneous Texture. This descriptor emerged as an important tool when looking for and choosing within large collections of images of great visual similarity.
Texture Browsing. This descriptor specifies the perceptual characterization of a particular texture, and is intended to be similar to the characterization the human eye makes, in terms of regularity, coarseness, and directionality.
Edge Histogram. It is a descriptor that gives us information on the type of contours or edges that appear in the image.
**3. Shape Descriptors**
Region Shape. The object's shape in an image may consist of a single region or a set of regions, so this descriptor is useful for this type of characterization.
Contour Shape. It represents contour features very well, which facilitates subsequent search and retrieval; it is robust to motion, occlusions, and different perspectives; and it is extremely compact.
Shape 3D. This descriptor allows describing a shape in 3D in detail. This tool is very useful today due to the continuous development of multimedia technologies.
This was taken from my paper: "The MPEG-7 Visual Descriptors: a Basic Survey".
Best regards
Upvotes: 1 |
2017/06/01 | 578 | 2,063 | <issue_start>username_0: A few papers I have come across say that [BLEU](https://en.wikipedia.org/wiki/BLEU) is not an appropriate evaluation metric for chatbots, so they use the **perplexity**.
First of all, what is *perplexity*? How to calculate it? And why is perplexity a good evaluation metric for chatbots?<issue_comment>username_1: With perplexity you are trying to evaluate the similarity between the token (in your case probably sentences) distribution generated by the model and the one in the test data.
For instance, assuming you have $M$ sentences $s\_1, \dots, s\_M$, to which the model assigns probabilities $P(s\_i)$, the perplexity is $$2^{-l},$$ where $l = \frac{1}{M} \sum\_{i=1}^{M} \log\_2 P(s\_i)$ is the average log-probability per sentence (note that $l \le 0$, so the perplexity is at least 1).
Note that while perplexity might be useful for capturing certain aspects of the model, it is by no means perfect, and, even if you are able to reach great perplexity scores, it will not necessarily translate to a good or even working chat bot.
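As a concrete sketch, given per-sentence probabilities assigned by some model to the test set, perplexity can be computed as follows (per-sentence here to match the notation in this answer; per-word perplexity is more common in practice):

```python
import math

def perplexity(sentence_probs):
    """Perplexity = 2**(-l), where l is the average log2-probability
    the model assigns to the test sentences."""
    l = sum(math.log2(p) for p in sentence_probs) / len(sentence_probs)
    return 2 ** (-l)

# A model that assigns higher probability to the test data is less
# "perplexed" and scores lower:
print(perplexity([0.5, 0.5]))    # 2.0
print(perplexity([0.25, 0.25]))  # 4.0
```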
Upvotes: 1 <issue_comment>username_2: For the definition and calculation of perplexity, please refer to [this answer](https://stats.stackexchange.com/a/280794/103153).
Google proposed a human evaluation metric called Sensibleness and Specificity Average (SSA) which combines two fundamental aspects of a humanlike chatbot: making sense and being specific. And they conducted some experiments and found that perplexity aligns very well with the SSA.
Here is the explanation in the paper:
>
> Perplexity measures how well the model predicts the test set data; in other words, how accurately it anticipates what people will say next.
>
> Our results indicate most of the variance in the human metrics can be explained by the test perplexity.
>
>
>
Their experiments showed a very strong correlation between SSA and perplexity (the lower the perplexity, the higher the SSA).
[](https://i.stack.imgur.com/VUmLB.png)
References:
1. [Towards a Human-like Open-Domain Chatbot](https://arxiv.org/pdf/2001.09977.pdf)
Upvotes: 0 |
2017/06/02 | 412 | 1,522 | <issue_start>username_0: I want to train a neural network with pictures of public figures (politicians, singers, etc), but I do not know if it's legal, I do not plan to show them in my project I only want to use them to train the neural network, can this cause legal problems?<issue_comment>username_1: If you use any pictures you find online, you can use them as you wish: as long as you don't (re)publish them under your name. Also, if you really want to play safe, never *upload* them at all. Download them, use them, disregard them.
People can't prove you used certain pictures to train your network by looking at your network data.
And the 'big' companies out there, like Google, train their networks by using public photos.
And follow this chart:
[](https://i.stack.imgur.com/bKcmE.jpg)
As you are using it for a personal/educational project, it shouldn't be a problem.
Upvotes: 3 <issue_comment>username_2: It depends on the country. In France, for example, you've got to have the agreement of the person. Doesn't matter whether he is the president, a singer or... me :)
**Edit:**
*It doesn't matter what for: your image belongs to you. Otherwise, that person (me, in this case) can sue you for that and seek redress. Faces of persons are blurred in Google Maps Street View. Here is [a link](https://www.service-public.fr/particuliers/vosdroits/F32103), in French.*
About re-publication, you'll also have to have the agreement of the copyright holder.
Upvotes: 1 |
2017/06/03 | 1,044 | 3,536 | <issue_start>username_0: How exactly are "mutation" and "cross-over" applied in the context of a genetic algorithm based on real numbers (as opposed to just bits)? I think I understood how those two phases are applied in a "canonical" context where chromosomes are strings of bits of a fixed length, but I'm not able to find examples for other situations. What would those phases look like on the domain of real numbers?<issue_comment>username_1: You have a genome with certain genes:
```
genome = { GeneA: value, GeneB: value, GeneC: value }
```
So take for example:
```
genome = { GeneA: 1, GeneB: 2.5, GeneC: 3.4 }
```
A few examples of mutation could be:
* Switch around two genes: `{ GeneA: 1, GeneB: 3.4, GeneC: 2.5 }`
* Add/substract a random value from a gene: `{ GeneA: 0.9, GeneB: 2.5, GeneC: 3.4 }`
Suppose you have two genomes:
```
genome1 = { GeneA: 1, GeneB: 2.5, GeneC: 3.4 }
genome2 = { GeneA: 0.4, GeneB: 3.5, GeneC: 3.2 }
```
A few examples of crossover could be:
* Taking the average: `{ GeneA: 0.7, GeneB: 3.0, GeneC: 3.3 }`
* Uniform (50% chance): `{ GeneA: 0.4, GeneB: 2.5, GeneC: 3.2 }`
* N-point crossover: `{ GeneA: 1, | CROSSOVER POINT | GeneB: 3.5, GeneC: 3.2 }`
---
You can be pretty imaginative when developing mutation and crossover methods.
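The examples above can be sketched in a few lines of code (a minimal illustration with the hypothetical gene names from this answer, not tied to any particular GA library):

```python
import random

def average_crossover(g1, g2):
    """Child gene = mean of the parents' genes."""
    return {k: (g1[k] + g2[k]) / 2 for k in g1}

def uniform_crossover(g1, g2):
    """Each gene is taken from either parent with 50% chance."""
    return {k: random.choice((g1[k], g2[k])) for k in g1}

def mutate(genome, sigma=0.1):
    """Add/subtract a small random value from one random gene."""
    mutant = dict(genome)
    k = random.choice(list(mutant))
    mutant[k] += random.uniform(-sigma, sigma)
    return mutant

genome1 = {"GeneA": 1.0, "GeneB": 2.5, "GeneC": 3.4}
genome2 = {"GeneA": 0.4, "GeneB": 3.5, "GeneC": 3.2}
print(average_crossover(genome1, genome2))
```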
Upvotes: 4 [selected_answer]<issue_comment>username_2: As @Thomas W said, you can be pretty imaginative when you're developing mutation and crossover methods. Each problem has its own characteristics and, therefore, requires a different strategy.
BUT, from my experience, I'd say that 90% of crossovers and mutations on real-number genotypes are handled using the BLX-α algorithm.
**Crossover:**
[](https://i.stack.imgur.com/RDKUw.png)
This algorithm is really simple. Given the parents *X* and *Y* and an α value (inside the range [0,1], generally around 0.1-0.15, but it depends on the problem), for each gene of your genotype:
1. extract the genes *xi* and *yi*
2. find the minimum and the maximum values
3. the new gene will be a random number in the interval [min - range \* α, max + range \* α]
A variation of this algorithm is BLX-αβ, in which we take into account which parent performed better and use two constants (α > β) to increase the probability that the new value will be closer to that of the fitter parent.
[](https://i.stack.imgur.com/Wy65S.png)
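The three steps above can be sketched as follows (the default α is only illustrative):

```python
import random

def blx_alpha(x, y, alpha=0.1):
    """BLX-alpha crossover: each child gene is drawn uniformly from
    the parents' interval, extended by alpha * range on both sides."""
    child = []
    for xi, yi in zip(x, y):
        lo, hi = min(xi, yi), max(xi, yi)   # steps 1-2
        rng = hi - lo
        child.append(random.uniform(lo - alpha * rng, hi + alpha * rng))  # step 3
    return child
```

With α = 0 the offspring stays strictly inside the parents' interval; a larger α lets it explore slightly outside.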
**Mutation:**
With the mutation the situation is similar: we need to get a random value that is related to our problem domain (we do not want the mutations to be destructive! They have the function of *exploring* the space).
In these cases it is useful to determine a range for the mutation and use that range to find the new value of the gene using BLX-α.
A more sophisticated mutation algorithm can be achieved using BLX-α on boundaries that depend on the actual value of the gene and the fitness function of the individual.
Let's imagine that our individual performs in a very bad way; in that case the mutation operator will be used to *shift* the individual to a distant point in the search space, where it will probably perform better.
On the other hand, if the individual is already fit we may not want to introduce some dramatic changes using the mutation. In that case the mutation range would be more contained and would have the function of *tuning* the genotype instead of *exploring* for better alternatives.
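One way to sketch such a fitness-dependent mutation (the exact scaling rule here is an assumption made for illustration):

```python
import random

def adaptive_mutate(gene, fitness, max_fitness, base_range=1.0):
    """Widen the mutation range for unfit individuals (exploration)
    and narrow it for fit ones (fine-tuning)."""
    badness = 1.0 - fitness / max_fitness    # 0 = best, 1 = worst
    r = base_range * (0.1 + 0.9 * badness)   # never shrinks to zero
    return gene + random.uniform(-r, r)
```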
Upvotes: 3 |
2017/06/05 | 992 | 3,290 | <issue_start>username_0: I recently read about algorithmic bias in facial recognition.
Is algorithmic bias due to the training dataset used, or is it due to something else?
2017/06/05 | 330 | 1,292 | <issue_start>username_0: I wanted to train a chatbot for answering questions from books. I am trying to use [Dynamic Memory Networks](https://arxiv.org/abs/1506.07285) to do so.
How can I generate a data set, as Facebook did in the case of [bAbI tasks](https://research.fb.com/downloads/babi/), so that it can tackle a variety of questions on the data set?<issue_comment>username_1: If you are talking about "generating" in the sense of generative models, it is pretty tough, since we are still far from understanding the actual structure of question-answering.
Even state-of-the-art methods for question answering are not able to score well on datasets like [bAbI](http://classif.ai/dataset/babi/); typically only about 16 out of 20 tasks can be solved.
Upvotes: 2 <issue_comment>username_2: [This](https://github.com/facebook/bAbI-tasks#id17) repository maintained by Facebook AI Research talks about how they went about generating QA from stories.
In essence, they try to simulate how a reader reads a story. They also keep track of the knowledge the reader is assimilating while reading. Then they frame a question based on the knowledge assimilated, in order to assess whether the reader can perform logical inference given the knowledge available to them.
Upvotes: 2 [selected_answer] |
2017/06/06 | 480 | 1,807 | <issue_start>username_0: Recently, I found out about somewhat famous [<NAME>](http://yudkowsky.net/) and [Machine Intelligence Research Institute](https://intelligence.org/) he founded. Their philosophy and organisation seem interesting but I'm curious about their credibility. I'm pretty sure this is not a con and they seem to be producing [a lot of articles](https://intelligence.org/all-publications/). However, few of those are published and none in the journals mentioned [here](https://ai.stackexchange.com/q/2306/2444).
**So, is MIRI doing genuine high-quality research?**<issue_comment>username_1: Whether or not it's produced *good* research is a question I can't answer. But I find it interesting and it seems reliable.
For instance, this paper on Parametric [Bounded Löb's Theorem and Robust Cooperation of Bounded Agent](https://arxiv.org/abs/1602.04184) is drawn from Cornell University's site. Likewise this recent paper on [Logical Induction](https://arxiv.org/abs/1609.03543).
That said, they also seem to publish papers that have not been subject to peer review, which is an issue not specific to the site, and seems to more widespread phenomenon per easy access to digital files in general.
In relation to MIRI specifically, possibly this is a function of an an applied field advancing so rapidly that publication in journals is not seen as a priority, or even particularly relevant.
Upvotes: 2 <issue_comment>username_2: The utility of most of their papers is questionable. Evidence for that is that none of the proposed mechanisms (Tiling agents, Logical Induction, etc.) were to my knowledge applied to "real" aspiring proto-AGI systems [ref](http://gcrinstitute.org/papers/033_agi-survey.pdf).
Sure some of their ideas may be good, time will be the judge.
Upvotes: 0 |
2017/06/07 | 569 | 2,180 | <issue_start>username_0: If we have well designed autonomous AI vehicles, then why won't the USA government allow it to be witnessed in the public?<issue_comment>username_1: The question is based on a false premise: in Michigan, it is currently legal (under certain conditions described [here](http://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-enacted-legislation.aspx)) for an autonomous car to operate without a driver.
The reason that the federal government has not enacted any direct legislation (although they have issued guidelines) on autonomous cars is that it is still a developing technology (as great as those Cruise videos look, I wouldn't trust most companies working on the technology to have driverless cars on the road without operators) and that it arguably falls under the state governments' jurisdiction. Some companies have appealed to the House to pass national legislation to allow autonomous testing ([GM, Toyota, Lyft](https://www.forbes.com/sites/alanohnsman/2017/02/14/gm-toyota-and-lyft-urge-congress-to-set-nationwide-self-driving-car-standards/#66101394376e)), but nothing has come of it yet.
Upvotes: 3 [selected_answer]<issue_comment>username_2: [This answer](https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/disengagement_report_2016) addresses the incorrect assumption that the US government prohibits it (that is, there's no federal law against it, but state laws vary), so I will address the incorrect assumption that there now exists "well-designed AI".
If you look at the [actual reports](https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/disengagement_report_2016) submitted by the autonomous car companies who have tested in California, you'll see that most can't go more than a couple of hours of driving without an **unexpected** need for operator intervention. Waymo's (Google's autonomous car spinoff) system is more reliable as far as unexpected interventions, but their report emphasizes that *routine* operator interventions happen many times a day. Autonomous vehicles that can operate without a driver in any but the most controlled circumstances are still years away.
Upvotes: 1 |
2017/06/07 | 1,080 | 4,025 | <issue_start>username_0: I've seen events, like [CogX](https://cogx.co/session-topics/mentalhealth/), and articles that describe how machine learning techniques or algorithms can be used to diagnose mental health issues.
Here's my question.
How can artificial intelligence and machine learning algorithms or techniques be used in diagnosing mental health issues, besides, for example, Facebook using machine learning algorithms to detect people who may commit suicide?<issue_comment>username_1: There's a strong sentiment towards the idea that medical diagnosis is largely [Abductive Reasoning](https://en.wikipedia.org/wiki/Abductive_reasoning). See [this presentation](https://blogs.kent.ac.uk/jonw/files/2015/04/Aliseda2012.pdf) for additional details. One approach to automated abductive reasoning is [parsimonious covering theory](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2244953/).
If you want a relatively in-depth look at all of this, check out the book [Abductive Inference Models for Diagnostic Problem-Solving](http://rads.stackoverflow.com/amzn/click/0387973435) .
Expert systems have also been shown to work very well for medical diagnosis. As far back as the 1970's, systems like [MYCIN](https://en.wikipedia.org/wiki/Mycin) could beat human experts in terms of diagnosis and treatment plans.
From what I can tell, there doesn't seem to be any reason in principle to think that AI/ML can't be used across the board for medical diagnosis, mental health or otherwise.
Upvotes: 2 <issue_comment>username_2: You may infer some mental problems using text processing. Any text written by an individual contains a lot of clues about their mental state, probably including any health problems.
However, I believe pursuing this would prove to be difficult due to lack of ground truth. Not many people make it public that they have mental health problems, and without a reliable database it is quite difficult to do research in this field.
Upvotes: 0 <issue_comment>username_3: It's already being done, and apparently with very good results.
See: [Predicting Risk of Suicide Attempts Over Time Through Machine Learning](http://journals.sagepub.com/doi/abs/10.1177/2167702617691560?journalCode=cpxa&), Walsh, Ribiero, Franklin
Here is the abstract from the paper:
>
> Traditional approaches to the prediction of suicide attempts have limited the accuracy and scale of risk detection for these dangerous behaviors. We sought to overcome these limitations by applying machine learning to electronic health records within a large medical database. Participants were 5,167 adult patients with a claim code for self-injury (i.e., ICD-9, E95x); expert review of records determined that 3,250 patients made a suicide attempt (i.e., cases), and 1,917 patients engaged in self-injury that was nonsuicidal, accidental, or nonverifiable (i.e., controls). We developed machine learning algorithms that accurately predicted future suicide attempts (AUC = 0.84, precision = 0.79, recall = 0.95, Brier score = 0.14). Moreover, accuracy improved from 720 days to 7 days before the suicide attempt, and predictor importance shifted across time. These findings represent a step toward accurate and scalable risk detection and provide insight into how suicide attempt risk shifts over time.
>
>
>
This has gotten national press attention quite recently. Here are a couple of prior articles on the result and endeavor:
[Artificial Intelligence is Learning to Predict and Prevent Suicide](https://www.wired.com/2017/03/artificial-intelligence-learning-predict-prevent-suicide/) (Wired)
[Artificial intelligence can now predict suicide with remarkable accuracy](https://qz.com/1001968/artificial-intelligence-can-now-predict-suicide-with-remarkable-accuracy/) (Quartz)
I strongly suspect ML can be used in an array of applications related to human mental health. For instance, it is entirely possible an app would be able to discern the mood swings of people with bipolar disorder based on activity or lack thereof.
Upvotes: 1 |
2017/06/09 | 1,554 | 6,108 | <issue_start>username_0: I am writing a simple toy game with the intent of training a deep neural network on top of it. The games rules are roughly the following:
* The game has a board made up of hexagonal cells.
* Both players have the same collection of pieces that they can choose to position freely on the board.
* Placing different types of pieces awards points (or decreases the opponent's points) depending on their position and configuration with respect to one another.
* Whoever has more points win.
There are additional rules (about turns, number and types of pieces, etc...) but they are not important in the context of this question. I want to devise a deep neural network that can iteratively learn by playing against itself. My questions are about representation of input and output. In particular:
* Since pattern of pieces matter, I was thinking to have at least some convolutional layers. The board can be of various size but in principle very small (6x10 on my tests, to be expanded by few cells). Does it make sense? What kind of pooling can I use?
* How to represent both sides? In [this paper](https://arxiv.org/pdf/1412.3409.pdf) about Go, the authors use two input matrices, one for white stones and one for black stones. Can it work in this case too? But remember I have different types of pieces, say A, B, C and D. Should I use 2x4 input matrices? That seems very sparse and inefficient to me; I fear it will be way too sparse for the convolutional layers to work.
* I thought that the output could be a distribution of probabilities over the matrix representing board positions, plus a separate array of probabilities indicating what piece to play. However, I also need to represent the ability to **pass** the turn, which is very important. How can I do it without diluting its significance among other probabilities?
* And **most importantly**, do I enforce winning moves only or losing moves too? Enforcing winning moves is easy because I just set desired probabilities to 1. However when losing, what can I do? Set that move probability to 0 and all the others to the same value? Also, does it make sense to enforce moves by the final score difference, even though this would go against the meaning of the outputs, which are roughly probabilities?
Also, I developed the game engine in node.js thinking to use Synaptic as the framework, but I am not sure it can work with convolutional networks (I doubt there's a way to fix the weights associated with local receptive fields). Any advice on other libraries that are compatible with node?<issue_comment>username_1: There's a strong sentiment towards the idea that medical diagnosis is largely [Abductive Reasoning](https://en.wikipedia.org/wiki/Abductive_reasoning). See [this presentation](https://blogs.kent.ac.uk/jonw/files/2015/04/Aliseda2012.pdf) for additional details. One approach to automated abductive reasoning is [parsimonious covering theory](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2244953/).
If you want a relatively in-depth look at all of this, check out the book [Abductive Inference Models for Diagnostic Problem-Solving](http://rads.stackoverflow.com/amzn/click/0387973435) .
Expert systems have also been shown to work very well for medical diagnosis. As far back as the 1970's, systems like [MYCIN](https://en.wikipedia.org/wiki/Mycin) could beat human experts in terms of diagnosis and treatment plans.
From what I can tell, there doesn't seem to be any reason in principle to think that AI/ML can't be used across the board for medical diagnosis, mental health or otherwise.
Upvotes: 2 <issue_comment>username_2: You may infer some mental problems using text processing. Any text written by an individual contains a lot of clues about their mental state, probably including any health problems.
However, I believe pursuing this would prove difficult due to the lack of ground truth. Not many people make it public that they have mental problems, and without a reliable database it is quite difficult to do research in this field.
Upvotes: 0 <issue_comment>username_3: It's already being done, and apparently with very good results.
See: [Predicting Risk of Suicide Attempts Over Time Through Machine Learning](http://journals.sagepub.com/doi/abs/10.1177/2167702617691560?journalCode=cpxa&), Walsh, Ribeiro, Franklin
Here is the abstract from the paper:
>
> Traditional approaches to the prediction of suicide attempts have limited the accuracy and scale of risk detection for these dangerous behaviors. We sought to overcome these limitations by applying machine learning to electronic health records within a large medical database. Participants were 5,167 adult patients with a claim code for self-injury (i.e., ICD-9, E95x); expert review of records determined that 3,250 patients made a suicide attempt (i.e., cases), and 1,917 patients engaged in self-injury that was nonsuicidal, accidental, or nonverifiable (i.e., controls). We developed machine learning algorithms that accurately predicted future suicide attempts (AUC = 0.84, precision = 0.79, recall = 0.95, Brier score = 0.14). Moreover, accuracy improved from 720 days to 7 days before the suicide attempt, and predictor importance shifted across time. These findings represent a step toward accurate and scalable risk detection and provide insight into how suicide attempt risk shifts over time.
>
>
>
This has gotten national press attention quite recently. Here are a couple of prior articles on the result and endeavor:
[Artificial Intelligence is Learning to Predict and Prevent Suicide](https://www.wired.com/2017/03/artificial-intelligence-learning-predict-prevent-suicide/) (Wired)
[Artificial intelligence can now predict suicide with remarkable accuracy](https://qz.com/1001968/artificial-intelligence-can-now-predict-suicide-with-remarkable-accuracy/) (Quartz)
I strongly suspect ML can be used in an array of applications related to human mental health. For instance, it is entirely possible an app could discern the mood swings of people with bipolar disorder based on activity or lack thereof.
Upvotes: 1 |
2017/06/10 | 1,132 | 4,135 | <issue_start>username_0: I am working on [this code](https://github.com/nfmcclure/tensorflow_cookbook/blob/master/09_Recurrent_Neural_Networks/02_Implementing_RNN_for_Spam_Prediction/02_implementing_rnn.py) for spam detection using recurrent neural networks.
Question 1. I am wondering whether this field (using RNNs for email spam detection) is worth further research, or whether it is a closed research field.
Question 2. What is the oldest published paper in this field?
Question 3. What are the pros and cons of using RNNs for email spam detection over other classification methods? |
2017/06/10 | 2,168 | 9,142 | <issue_start>username_0: O'Reilly recently published an article about the machine learning paradox. ([link](https://www.oreilly.com/ideas/the-machine-learning-paradox))
What it says basically goes like this: no machine learning algorithm can be perfect. If it were, that would mean it is overfitting, and so it is not really perfect because it will not perform well in real-world scenarios.
I searched and I couldn't find any other references to this paradox. The closest I got is the [Accuracy Paradox](https://en.m.wikipedia.org/wiki/Accuracy_paradox), which says that the usefulness of a model is not really well reflected in its accuracy.
This doesn't sound quite right to me. For example, a linear model could be perfectly learned, "overfitted", and still predict correctly in the real world. So I suspect it is really about finding the right set of data points from which the results can be inferred. That is, we are trying to approximate from uncertain data, but with the right data we can stop approximating and start calculating.
**Is my line of thinking correct? Or is there really no perfect machine learning?**
UPDATE: In the light of currently received answers, I think my last paragraph (my line of thinking) can be rephrased as: If we have a model simple enough, why can't we overfit the model, knowing that it will behave correctly in non-trained data? This assumes the training data completely represents the real-world data, which would imply a single model that we can train on.
Keep in mind that what we conceive as "simple" or "feasible" is arbitrary and only depends on computation power and available data -- aspects which are external to ML models themselves.<issue_comment>username_1: Basically, Machine Learning is used to solve problems for which giving exact solutions is not (practically or theoretically) possible or feasible. If you have a problem that can be solved with exact and proven methods, you don't need Machine Learning. So Machine Learning is rather about finding approximate solutions within some error margins. In this sense such systems will never perform perfectly and there will always be room for improvement.
(Many real world problems can not be solved perfectly because they are not simply complicated, but chaotic.)
Upvotes: 4 [selected_answer]<issue_comment>username_2: This answer will focus on the the concepts of model error and increasing the set of points that a model is meant to fit. Firstly, and most importantly, please understand that the general problem that the OP is grappling with is not new nor confined to **machine learning**, but rather the other way around, where machine learning is a set of techniques and methods that are being brought to bear on a larger problem of model creation and performance measurement.
All models have error
---------------------
And to that end, I introduce one of the most important quotes that one should learn and live by if interested in this domain. IMHO, the simplest form of this concept, due to [Professor Box](https://en.wikipedia.org/wiki/George_E._P._Box) (of Box-Cox and Box-Jenkins fame), is
>
> [Essentially, all models are wrong, but some are useful](https://en.wikipedia.org/wiki/All_models_are_wrong)
>
>
>
And this is essentially true of **all** models. They are all necessary simplifications of what we encounter in real-world data. All models are built from observation and compilation of experimental data, what we can label as *training data*, and **all** are subject to failure once they are tested against additional data points, especially those that fall outside of the observed set included in the training data. This does not necessarily make them useless.
The original quote from Box is more elaborate in making this point:
>
> Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an "ideal" gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.
>
>
> For such a model there is no need to ask the question "Is the model true?". If "truth" is to be the "whole truth" the answer must be "No". The only question of interest is "Is the model illuminating and useful?".
>
>
>
This is reminding us that the [Ideal gas equation](https://en.wikipedia.org/wiki/Ideal_gas_law) is such a *theoretical, derived* model and that, indeed, it fails to predict accurately 100% of the *real-world* data points that are observed. It is also **incredibly** useful for understanding and predicting how the forces modeled behave and interact.
For the simplest model of linear regression, look at [bivariate regression](https://en.wikipedia.org/wiki/Bivariate_analysis). For two observed variables `X` and `Y`, you create a *regression equation* that models the relationship between X and Y. The model, however, isn't actually representing *Y*, but a new variable, let's call it `Y'`. Then we arrive at something like:
>
> Y' = a + bX
>
>
>
where the coefficients `a` and `b` are chosen to minimize the discrepancy between `Y'` and `Y`. If we draw out the variables as vectors in a vector space, we would have two vectors **`Y`** and **`Y'`** that, hopefully, lie very close to each other. But there is a vector **`e`** called the error such that:
>
> e = Y - Y'
>
>
>
The whole algorithm of linear regression is centered on optimizing `a` and `b` to minimize `e`.
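As a minimal sketch of that optimization (the data below are synthetic, generated from an assumed line Y = 3 + 2X plus noise, so all the numbers are illustrative only), ordinary least squares picks `a` and `b` and still leaves a nonzero error vector `e`:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=50)
Y = 3.0 + 2.0 * X + rng.normal(0.0, 1.0, size=50)  # noisy linear data

# Solve for a, b minimizing the squared length of e = Y - (a + b*X)
A = np.column_stack([np.ones_like(X), X])
(a, b), *_ = np.linalg.lstsq(A, Y, rcond=None)

Y_prime = a + b * X   # the model's variable Y'
e = Y - Y_prime       # the error vector
print(a, b, np.mean(e ** 2))
```

Because the data are noisy, `e` is nonzero even at the optimum; least squares only guarantees it is as short as possible for this model class.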
Sampling from the population
----------------------------
So, all models have some error term that we strive to minimize. And for a particular given set of points we may be able to optimize it such that we take the error **perhaps** even to zero. But for any real-world set of points where we have only been able to observe, and hence to model, a sample of the population of observations, the very next observation will potentially cause the error term to increase! This is true even of models that were not derived with **machine learning**.
There are many, many useful models that break down for parts of the population of observations, but we still find them useful. Look at [Newtonian gravity](https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation), which breaks down at certain points in the real world but gets most of it right, and which we regularly use for prediction and understanding, in part because it is much simpler to understand and manipulate than something more complex that deals with those other cases, like [GR](https://en.wikipedia.org/wiki/General_relativity), which itself also fails to deal with some complex phenomena like [quantum gravity](https://en.wikipedia.org/wiki/Quantum_gravity).
Over/Under fitting
==================
In addition to minimizing the error **`e`**, there are two other issues with models and performance, [overfitting and underfitting](https://en.wikipedia.org/wiki/Overfitting).
In overfitting, a statistical model describes random error or noise instead of the underlying relationship. Overfitting occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model that has been overfit has poor predictive performance, as it overreacts to minor fluctuations in the training data.
Underfitting occurs when a statistical model or machine learning algorithm cannot capture the underlying trend of the data. Underfitting would occur, for example, when fitting a linear model to non-linear data. Such a model would have poor predictive performance.
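A small numeric illustration of both failure modes (the data are synthetic noisy quadratic samples and the degrees are arbitrary choices): on the same training points a higher-degree polynomial can never fit worse than a lower-degree one, which is exactly why low training error by itself cannot distinguish a good fit from an overfit:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 20)
y = x ** 2 + rng.normal(0.0, 1.0, size=x.size)  # noisy quadratic data

def train_mse(degree):
    """Mean squared error of a degree-`degree` polynomial fit on the training set."""
    coeffs = np.polyfit(x, y, degree)
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

mse_line = train_mse(1)  # underfit: a line cannot capture the curvature
mse_quad = train_mse(2)  # roughly the right capacity for this data
mse_wild = train_mse(9)  # overfit: the extra capacity chases the noise
print(mse_line, mse_quad, mse_wild)  # non-increasing, up to numerical error
```

Evaluating the degree-9 fit on fresh samples from the same quadratic would typically show its predictive error ballooning, which is the overfitting half of the story.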
Summary
-------
So, finally, the premise:
>
> no machine learning algorithm can be perfect. If it was, it means it is overfitting and so it is not really perfect because it will not perform ok in real world scenarios.
>
>
>
Misses the point that
>
> Essentially, all models are wrong, but some are useful.
>
>
>
Upvotes: 2 <issue_comment>username_3: It's true that no general learning algorithm, **machine or human**, can be perfect. It isn't just a problem with machine learning; it's true of human learning. Not of the type of learning we typically think of as learning, such as mathematics, but things like walking, seeing, etc..
We learn to walk in standard situations, and can do it very well, but the model of walking that most of us have doesn't cope well with walking in high heels. Now, we can learn that, but it doesn't help us walk on tightropes, etc.
Similarly, optical illusions work because they represent situations we don't encounter in normal life, and so have never learned to deal with. Instead our visual system treats them as apparently similar but familiar situations.
There are lots of situations that we may have to cope with but have never learned to do so. How well we do so depends on our mental model of the activity that we have built up, and how well our "learning algorithm" dealt with the training data we had.
Upvotes: 1 |
2017/06/12 | 834 | 2,332 | <issue_start>username_0: I've been working with neural networks and artificial intelligence for a while. What I'm trying to do right now is, from a genotype I have (a set of sensors, neurons and actuators), draw what the neural network looks like (with recurrent/recursive connections shown nicely, etc.)
What I have done now in javascript is this:
[](https://i.stack.imgur.com/GWl9G.png)
I have achieved this using [SigmaJs](http://sigmajs.org/), a Javascript drawing library, but I think it's still ugly, and what I'm looking for is a node drawing library that can achieve recursive connections in a nice way (right now, I'm drawing them with a red color as you can see on the image).
I have examined a lot of GitHub repositories and websites that looked helpful, but none of them produced results nice enough to be worth using.
Has anyone got an idea of what can I use, in javascript? If not, in any other language, how can I achieve what I want?<issue_comment>username_1: I'm the developer of [Neataptic](https://github.com/wagenaartje/neataptic), a Javascript neural network library. I don't know if it is exactly what you're looking for, but it has a built-in graph creator using [D3](https://d3js.org/) and [webcola](http://marvl.infotech.monash.edu/webcola/).
Basically, each connection gets an arrow. It also supports self-connections and gates.
Here are some example images:


Play around yourself [here](https://jsfiddle.net/ch8s9d3e/29/).
Upvotes: 4 [selected_answer]<issue_comment>username_2: You might have come across the [Tensorflow Playground](http://playground.tensorflow.org/) which has a wonderful visualization of the network connections and the neuron weights.
Their code is available in GitHub ([here](https://github.com/tensorflow/playground)), and the code seems fairly simple. It is coded in typescript but that can be easily transpiled to pure JavaScript.
Upvotes: 2 |
2017/06/12 | 512 | 2,086 | <issue_start>username_0: Is it possible to train a neural network to learn something via video footage (which is essentially a sequence of images)?
In other words, if I have a video teaching me how to draw an animal from scratch, can I then use this video to teach the computer to draw the animal in the same way?
There are many machine learning and image processing techniques (such as RNNs) that can be applied to sequences of images or videos. So, I guess the difficult part becomes mapping the activity to an action like moving a pen or something.<issue_comment>username_1: Short answer: No.
Long answer: A neural network is a function that maps input data (e.g. a picture) to output data (e.g. probability that the picture contains a dog). What you propose does not seem like the sort of task this tool is suited for, though I'm not willing to claim that it can't be done.
If you are interested in computer-generated artwork, I encourage you to learn about [Generative Adversarial Networks](http://blog.aylien.com/introduction-generative-adversarial-networks-code-tensorflow/) and [style transfer](https://www.youtube.com/watch?v=Oex0eWoU7AQ).
Upvotes: 1 <issue_comment>username_2: Just like the other answer - it can't really be done like that. However, you might want to take a look at [Google Deepdream](https://github.com/google/deepdream), which actually enhances images to look more like what they detected. E.g., if a cat is detected, it will make the picture more cat-like.
Upvotes: 1 <issue_comment>username_3: About the screenshot you mentioned in comments: don't forget that NN build the mappings from learning. With only one screenshot, you'll only be able to construct a NN that outputs the correct 3D for THAT screenshot. With many screenshots, you may come closer to what you want (AFAI understand your question).
But remember that without 2 eyes, you have problems with 3D objects. Doesn't mean it's impossible.
May I suggest you to have a look at convolutional neural networks also <https://en.wikipedia.org/wiki/Convolutional_neural_network>
Upvotes: 0 |
2017/06/13 | 2,086 | 8,177 | <issue_start>username_0: I'm struggling to understand the GAN loss function as provided in [Understanding Generative Adversarial Networks](https://danieltakeshi.github.io/2017/03/05/understanding-generative-adversarial-networks/) (a blog post written by <NAME>).
In the standard cross-entropy loss, we have an output that has been run through a sigmoid function and a resulting binary classification.
Seita states
>
> Thus, For [each] data point $x\_1$ and its label, we get the following loss function ...
>
>
> $$
> H((x\_1, y\_1), D) = -y\_1 \log D(x\_1) - (1 - y\_1) \log (1 - D(x\_1))
> $$
>
>
>
This is just the log of the expectation, which makes sense. However, according to this formulation of the GAN loss, how can we process the data from both the true distribution and the generator in the same iteration?<issue_comment>username_1: You can treat a combination of `z` input and `x` input as a single sample, and you evaluate how well the discriminator performed the classification of each of these.
This is why the post later on separates a single `y` into `E(p~data)` and `E(z)` -- basically, you have different expectations (`y`s) for each of the discriminator inputs and you need to measure both at the same time to evaluate how well the discriminator is performing.
That's why the loss function is conceived as a combination of both the positive classification of the real input and the negative classification of the negative input.
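A toy numeric sketch of this (every value here is made up; `d_real` and `d_fake` simply stand in for the discriminator's outputs on real and generated batches) shows how both expectations are evaluated in the same iteration:

```python
import numpy as np

def cross_entropy(y, d):
    """Per-sample loss -y*log(d) - (1-y)*log(1-d) from the question."""
    return -(y * np.log(d) + (1 - y) * np.log(1 - d))

# Hypothetical discriminator outputs for a single training iteration:
d_real = np.array([0.9, 0.8, 0.7])  # D(x) on samples from the data distribution
d_fake = np.array([0.3, 0.2, 0.4])  # D(G(z)) on samples from the generator

# Real samples carry label y=1, generated samples y=0; both terms
# contribute to the discriminator's loss in the same step.
loss_d = cross_entropy(1, d_real).mean() + cross_entropy(0, d_fake).mean()
print(loss_d)
```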
Upvotes: 2 <issue_comment>username_2: Let's start at the beginning. GANs are models that can learn to create data that is similar to the data that we give them.
When training a generative model other than a GAN, the easiest loss function to come up with is probably the Mean Squared Error (MSE).
Kindly allow me to give you an example ([Trickot L 2017](https://www.quora.com/Is-it-true-that-Generative-Adversarial-Networks-GANs-learn-the-loss-function-automatically-and-if-so-how)):
>
> Now suppose you want to generate cats; you might give your model examples of specific cats in photos. Your choice of loss function means that your model has to reproduce each cat exactly in order to avoid being punished.
>
>
> But that's not necessarily what we want! You just want your model to generate cats, any cat will do as long as it's a plausible cat. So, you need to change your loss function.
>
>
> However, which function could disregard concrete pixels and focus on detecting cats in a photo?
>
>
> That's a neural network. This is the role of the discriminator in the GAN. The discriminator's job is to evaluate how plausible an image is.
>
>
>
The blog post that you cite, [Understanding Generative Adversarial Networks](https://danieltakeshi.github.io/2017/03/05/understanding-generative-adversarial-networks/) (Daniel S 2017), lists two major insights.
>
> Major Insight 1: the discriminator’s loss function is the cross entropy loss function.
>
>
> Major Insight 2: understanding how gradient saturation may or may not adversely affect training. Gradient saturation is a general problem when gradients are too small (i.e. zero) to perform any learning.
>
>
>
To answer your question we need to elaborate further on the second major insight.
>
> In the context of GANs, gradient saturation may happen due to poor design of the generator’s loss function, so this “major insight” ... is based on understanding the tradeoffs among different loss functions for the generator.
>
>
>
The design implemented in the paper resolves the loss function problem by having a very specific function (to discriminate between two classes). The best way of doing this is by using cross entropy (Insight 1). As the blog post says:
>
> The cross-entropy is a great loss function since it is designed in part to accelerate learning and avoid gradient saturation only up to when the classifier is correct.
>
>
>
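A quick numeric illustration of that claim (the inputs are arbitrary; this compares the gradient of a squared-error loss against the cross-entropy gradient for one sigmoid output):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A confidently WRONG prediction: the target is y = 1 but z is very negative.
z, y = -10.0, 1.0
s = sigmoid(z)

# Gradient of the squared error 0.5*(s - y)**2 w.r.t. z:
mse_grad = (s - y) * s * (1.0 - s)  # carries the vanishing s*(1-s) factor
# Gradient of the cross entropy -log(s) w.r.t. z (for y = 1):
ce_grad = s - y                     # stays near -1, so learning proceeds

print(mse_grad, ce_grad)
```

The squared-error gradient has saturated even though the classifier is badly wrong, while the cross-entropy gradient only shrinks once the prediction is actually correct.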
As clarified in the blog post's comments:
>
> The expectation [in the cross entropy function] comes from the sums. If you look at the definition of expectation for a discrete random variable, you'll see that you need to sum over different possible values of the random variable, weighing each of them by their probability. Here, the probabilities are just 1/2 for each, and we can treat them as coming from the generator or discriminator.
>
>
>
Upvotes: 2 <issue_comment>username_3: **The Focus of This Question**
"How can ... we process the data from the true distribution and the data from the generative model in the same iteration?
**Analyzing the Foundational Publication**
In the referenced page, *Understanding Generative Adversarial Networks (2017)*, doctoral candidate <NAME> correctly references *Generative Adversarial Networks, Goodfellow, Pouget-Abadie, <NAME>, Warde-Farley, Ozair, Courville, and Bengio, June 2014*. Its abstract states, "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models ..." This original paper defines two models, both MLPs (multilayer perceptrons).
* Generative model, G
* Discriminative model, D
These two models are controlled in a way where one provides a form of negative feedback to the other, hence the term adversarial.
* G is trained to capture the data distribution of a set of examples well enough to fool D.
* D is trained to discover whether its input are G's mocks or the set of examples for the GAN system.
(The set of examples for the GAN system are sometimes referred to as the real samples, but they may be no more real than the generated ones. Both are numerical arrays in a computer, one set with an internal origin and the other with an external origin. Whether the external ones are from a camera pointed at some physical scene is not relevant to GAN operation.)
Probabilistically, fooling D is synonymous with maximizing the probability that D will generate as many false positives and false negatives as it does correct categorizations, 50% each. In information science, this is to say that the information D has about G approaches 0 as t approaches infinity. It is a process of maximizing the entropy of G from D's perspective, thus the term cross-entropy.
**How Convergence is Accomplished**
The loss function reproduced from Seita's 2017 writing in the question is that of D, designed to minimize the cross entropy (or correlation) between the two distributions when applied to the full set of points for a given training state.
>
> $H((x\_1, y\_1), D) = -y\_1 \log D(x\_1) - (1 - y\_1) \log (1 - D(x\_1))$
>
>
>
There is a separate loss function for G, designed to maximize the cross entropy. Notice that there are TWO levels of training granularity in the system.
* That of game moves in a two-player game
* That of the training samples
These produce nested iteration with the outer iteration as follows.
* Training of G proceeds using the loss function of G.
* Mock input patterns are generated from G at its current state of training.
* Training of D proceeds using the loss function of D.
* Repeat if the cross entropy is not yet sufficiently maximized (i.e., D can still discriminate).
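The alternating scheme above can be sketched on a one-dimensional toy problem. Everything here is an illustrative assumption, not the MLP setup of the 2014 paper: a linear generator, a logistic discriminator, hand-derived gradient steps, and a "true" distribution of N(4, 1):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator G(z) = a + b*z, discriminator D(x) = sigmoid(w*x + c)
a, b, w, c = 0.0, 1.0, 0.1, 0.0
lr, batch = 0.05, 64

for step in range(2000):
    x_real = rng.normal(4.0, 1.0, batch)          # the "true" distribution
    x_fake = a + b * rng.normal(0.0, 1.0, batch)  # G's current mocks

    # --- D's move: descend on -log D(real) - log(1 - D(fake)) ---
    d_r, d_f = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    grad_w = (-(1 - d_r) * x_real).mean() + (d_f * x_fake).mean()
    grad_c = (-(1 - d_r)).mean() + d_f.mean()
    w, c = w - lr * grad_w, c - lr * grad_c

    # --- G's move: descend on -log D(fake) (non-saturating form) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a + b * z
    d_f = sigmoid(w * x_fake + c)
    grad_x = -(1 - d_f) * w  # d/dx of -log D(x)
    a -= lr * grad_x.mean()
    b -= lr * (grad_x * z).mean()

print(a, b)  # G's learned mean should drift toward the data mean of 4
```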
When D finally loses the game, we have achieved our goal.
* G recovered the training data distribution
* D has been reduced to ineffectiveness ("1/2 probability everywhere")
**Why Concurrent Training is Necessary**
If the two models were not trained in a back and forth manner to simulate concurrency, convergence in the adversarial plane (the outer iteration) would not occur on the unique solution claimed in the 2014 paper.
**More Information**
Beyond the question, the next item of interest in Seita's post is that "poor design of the generator's loss function" can lead to insufficient gradient values to guide descent, producing what is sometimes called saturation. Saturation is simply the reduction of the feedback signal that guides descent in back-propagation to chaotic noise arising from floating-point rounding. The term comes from signal theory.
I suggest studying the 2014 paper by Goodfellow *et alia* (the seasoned researchers) to learn about GAN technology rather than the 2017 page.
Upvotes: 4 [selected_answer] |
2017/06/15 | 2,312 | 10,242 | <issue_start>username_0: First of all, I'm a beginner studying AI and this is not an opinion-oriented question or one to compare programming languages. I'm not implying that Python is the best language. But the fact is that most of the famous AI frameworks have primary support for Python. They can even be multilanguage supported, for example, TensorFlow that supports Python, C++, or CNTK from Microsoft that supports C# and C++, but the most used is Python (I mean more documentation, examples, bigger community, support, etc). Even if you choose C# (developed by Microsoft and my primary programming language), you must have the Python environment set up.
I read in other forums that Python is preferred for AI because the code is simplified and cleaner, good for fast prototyping.
I was watching a movie with an AI theme (Ex Machina). In some scenes, the main character hacks the interface of the home automation system. Guess which language was on the screen? Python.
So, what is the big deal with Python? Why is there a growing association between Python and AI?<issue_comment>username_1: Python has a large, actively developed standard library, plus a number of libraries for AI. It has an intuitive syntax and straightforward control flow and data structures. It is also interpreted, so code can be run immediately without a separate compile step. This makes Python especially useful for prototyping AI algorithms.
Upvotes: 3 <issue_comment>username_2: Practically all of the most popular and widely used deep-learning frameworks are implemented in Python on the surface and C/C++ under the hood.
I think the main reason is that Python is widely used in scientific and research communities, because it's easy to experiment with new ideas and code prototypes quickly in a language with minimal syntax like Python.
Moreover there may be another reason. As I can see, most of the over-hyped online courses on AI are pushing Python because it is easy for newbie programmers. AI is the new marketing hot word to sell programming courses.
(Mentioning AI can sell programming courses to kids who want to build HAL 9000, but cannot even write a Hello World or drop a trend-line onto an Excel graph. :)
Upvotes: 5 <issue_comment>username_3: Python comes with a huge number of libraries, many of them for Artificial Intelligence and Machine Learning. Some of these are TensorFlow (a high-level neural network library), scikit-learn (for data mining, data analysis and machine learning), pylearn2 (more flexible than scikit-learn), etc. The list keeps going and never ends. You can find some libraries [here](https://wiki.python.org/moin/PythonForArtificialIntelligence).
Python has an easy implementation for OpenCV. What makes Python favorite for everyone is its powerful and easy implementation.
For other languages, students and researchers need to get to know the language before getting into ML or AI with it. *This is not the case with Python*. Even a programmer with very basic knowledge can easily handle Python. Apart from that, the time someone spends writing and debugging code in Python is far less than in C, C++ or Java. This is exactly what students of AI and ML want. *They don't want to spend time debugging code for syntax errors; they want to spend more time on their algorithms and heuristics related to AI and ML.*
*Not just the libraries themselves: their tutorials and interface documentation are easily available online*. People build their own libraries and upload them to GitHub or elsewhere to be used by others.
All these features make Python suitable for them.
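To illustrate how little ceremony Python requires, here is a toy 1-nearest-neighbour classifier in plain Python (the data points are invented); a real project would reach for scikit-learn, but the brevity is similar:

```python
def nearest_neighbour_predict(train_points, train_labels, query):
    """Toy 1-NN classifier: give the query the label of the closest
    training point (by squared Euclidean distance)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train_points)),
               key=lambda i: sq_dist(train_points[i], query))
    return train_labels[best]

# Two tiny clusters around (0, 0) and (5, 5):
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = ["a", "a", "a", "b", "b", "b"]
print(nearest_neighbour_predict(X, y, (0.4, 0.2)))  # prints a
print(nearest_neighbour_predict(X, y, (5.2, 5.1)))  # prints b
```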
Upvotes: 7 [selected_answer]<issue_comment>username_4: What attracts me to Python for my analysis work is the "full-stack" of tools that are available by virtue of being designed as a general purpose language vs. R as a domain specific language. The actual data analysis is only part of the story, and Python has rich tools and a clean full-featured language to get from the beginning to the end in a single language (use of C/Fortran wrappers notwithstanding).
On the front end, my work commonly starts with getting data from a variety of sources, including databases, files in various formats, or web scraping. Python support for this is good and most database or common data formats have a solid, well-maintained library available for the interface. R seems to share a general richness for data I/O, though for FITS the R package appears not to be under active development (no release of FITSio in 2.5 years?). Much of the next stage of work typically involves organizing the data and doing pipeline-based processing with a lot of system-level interactions.
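To give a flavour of that front-end work, the standard library alone covers a lot of it; a minimal sketch that queries an in-memory SQLite database and re-exports the rows as CSV (the table and column names are invented):

```python
import csv
import io
import sqlite3

# A throwaway in-memory database standing in for a real source catalog.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sources (name TEXT, flux REAL)")
conn.executemany("INSERT INTO sources VALUES (?, ?)",
                 [("src1", 1.2), ("src2", 3.4)])

# Query it and dump the result as CSV, all with the standard library.
rows = conn.execute("SELECT name, flux FROM sources ORDER BY name").fetchall()
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["name", "flux"])
writer.writerows(rows)
print(buf.getvalue())
```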
On the back end, you need to be able to present large data sets in a tangible way, and for me, this commonly means generating web pages. For two projects I wrote significant Django web apps for inspecting the results of large Chandra survey projects. This included a lot of scraping (multiwavelength catalogs) and so forth. These were used just internally for navigating the data set and helping with source catalog generation, but they were invaluable to the overall project.
Moving to the astronomy-specific functionality for analysis, it seems clear that the community is solidly behind Python. This is seen in the depth of available packages and the level of development activity, both at an individual and institutional level (<http://www.astropython.org/resources>). Given this level of infrastructure that is available and in work, I think it makes sense to direct effort to port the most useful R statistical tools for astronomy to Python. This would complement the current capability to call R functions from Python via rpy2. If you are interested, I strongly recommend that you read this article, which compares programming languages: <https://diceus.com/what-technology-is-b> ... nd-java-r/ I hope it helps. Good luck!
Upvotes: 3 <issue_comment>username_5: That's because Python is a modern, object-oriented scripting language with an elegant syntax. Unlike compiled languages such as Java and C++, its scripting nature enables the programmer to test a hypothesis very quickly. Furthermore, there are lots of open-source machine learning libraries (including scikit-learn and Keras) that broaden the use of Python in the AI field.
Upvotes: 2 <issue_comment>username_6: It's a mix of many factors that together make it a very good option to develop cognitive systems.
* Quick development
* Rapid prototyping
* Friendly syntax with almost human-level readability
* Diverse standard library and multi-paradigm
* It can be used as a frontend for performant backends written in compiled languages such as C/C++.
Existing performant numerical libraries, such as numpy and others already do the intensive bulk work for you which lets you focus more on architectural aspects of your system.
Besides, there is a very big community and ecosystem around Python, which results in a diverse set of available tools oriented to different kinds of tasks.
Upvotes: 2 <issue_comment>username_7: Python has a rich library ecosystem, is object-oriented, and is easy to program. It can also be used as a frontend language. That's why it is used in artificial intelligence. Beyond AI, it is also used in machine learning, soft computing and NLP programming, and as a web scripting language or in ethical hacking.
Upvotes: 2 <issue_comment>username_8: I actually prefer C for machine learning. Because the world as we know it consists of never-ending "logic gates" (which basically is like flipping a coin: there WILL be 2 possible outcomes, not counting the third, landing on the side!). Which also means that while the universe seems never-ending, we still never stop finding things that are even smaller than the last smallest thing, right?
So... To put it in context: when programming C, I can control memory usage more efficiently by coding smaller snippets that get combined to form small, efficient "code fragments", which make up what we would call "cells" in biology (each has a measurable function and some pre-set properties).
Thus, I like to optimize for low RAM usage, low CPU usage etc. when programming AI. I have only done feedforward with a basic genetic algorithm in C, but the more advanced recurrent neural network I wrote in C++ (ONLY because of the simplicity of using `std::vector name;`, so I wrote my own `cvector.c`: <https://pastebin.com/sBbxmu9T> & `cvector.h`: <https://pastebin.com/Rd8B7mK4> & debug: <https://pastebin.com/kcGD8Fzf> - compile with `gcc -o debug debug.c cvector.c`). That actually helped a lot in the quest of optimizing CPU usage (and overall runtime) when creating optimized neural networks.
**EDIT:**
So in one sense I really see the opposite of what username_6 sees, when it comes to exploring what is possible within the realm of a "self".
Upvotes: 1 <issue_comment>username_9: We also work with Python in [our company](https://www.n-ix.com/python-development/). One of the sphere that we use it for is fast prototyping and building highly scalable web applications. For over two decades, our Python developers have been providing businesses with full-stack web-development services, client-server programming and administration. We help our clients build high-load web portals, automation plugins, high-performance data-driven enterprise systems, and many more.
Upvotes: 1 <issue_comment>username_10: Many people who are interested in machine learning aren't professional programmers. For example there are mathematicians who work on differential equations and there are physicists who work on stochastic processes. These people aren't programmers. So using a language like C++ which is hard to learn is only detrimental to their works. And also creating a model in Python is much easier compared with C++ and Java. You have to use C++ when you want to create a game engine because the graphics is directly related to the hardware and if you want to be a professional Android programmer you have to learn Java. What are the benefits of choosing C++ and Java over Python when your work mainly consists of linear algebra and statistics?
Upvotes: 1 |
2017/06/16 | 1,568 | 6,446 | <issue_start>username_0: Is there a way to teach reinforcement learning in applications other than games?
The only examples I can find on the Internet are of game agents. I understand that VNC's control the input to the games via the reinforcement network. Is it possible to set this up with say a CAD software?<issue_comment>username_1: You will see a lot of game examples in reinforcement learning literature, because game environments can often be coded efficiently, and run fast on a single computer that can then contain the environment and the agent. For classic games, such as backgammon, checkers, chess, go, then there are human experts that we can compare results with. Certain games or simplified game-like environments are commonly used to compare different approaches, much like MNIST handwritten digits are used for comparing supervised learning approaches.
>
> Is there a way to teach reinforcement learning in applications other than games?
>
>
>
Yes. Informally you could apply reinforcement learning approaches whenever you can frame a problem as an agent acting within an environment where it can be informed of the state and a goal-influencing reward value. More formally, reinforcement learning theory is based upon solutions to [Markov Decision Processes](https://en.wikipedia.org/wiki/Markov_decision_process), so if you can fit your problem description to a MDP then the various techniques used in RL - such as Q-learning, SARSA, REINFORCE - can be applied. This fit to theory does not need to be perfect for the resulting system to work, for instance you can often treat unknown or imperfectly observed state as effectively random to the agent, and consider this part of a stochastic environment.
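To make the MDP framing concrete, here is a minimal tabular Q-learning loop on a toy four-state corridor (the environment is invented for illustration; reward 1 is given for reaching the right end):

```python
import random

random.seed(0)

N_STATES = 4          # states 0..3; reaching state 3 ends an episode
ACTIONS = (-1, +1)    # step left or step right
alpha, gamma, eps = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: bootstrap from the best action in the next state
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The greedy policy should have learned to move right from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)  # +1 (move right) from every non-terminal state
```

The same update rule carries over unchanged whether the "environment" is a game, a robot simulator, or a CAD scoring function; only the state, action and reward definitions differ.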
Here are some examples of possible uses for reinforcement learning outside of recreational games:
* Control logic for motorised robot, such as [learning to flip pancakes and other examples](https://www.youtube.com/playlist?list=PL5nBAYUyJTrM48dViibyi68urttMlUv7e). Here the environment measurements are made by physical sensors on the robot. The rewards are given for completing a goal, but may also be adjusted for smoothness, economic use of energy etc. The agent chooses low-level actions such as motor torque or relay position. In theory there can be nested agents where higher level ones choose the goals for the lower-level ones - e.g. the robot might decide at a high level between doing one of three tasks that require moving to different locations, and at a lower level might be decisions on how to control motors to move the robot to its chosen goal.
* Self-driving cars. Although a lot of focus on sensor interpretation - seeing road markings, pedestrians etc, a control system is required in order to select accelerator, brake and steering.
* Automated financial trading. Perhaps a game to some, there are clear real-world consequences. The reward signal is simple enough though, and RL can be adjusted to prefer long or short term gains.
>
> is it possible to set this up with say a CAD software?
>
>
>
In theory yes, but I do not know what might be available to do this in practice. Also you need one or more goals in mind that you code into the agent (as reward values that it can observe) before giving it a virtual mouse and setting a task to draw something. Computer games come with a reward scheme built in as their scoring system, and provide frequent feedback, so an agent can gain knowledge of good vs bad decisions quickly. You would need to replace this scoring component with something that represents your goals for the CAD-based system.
CAD does not have anything suitable built-in, although CAD tools with simulations, such as various physics engines or finite element analysis, could allow you to score designs based on a simulated physical measure. Other possibilities include analysis of strain, non-wasteful use of material, whatever metrics the CAD/CAM system can provide for a partial or completed design. The tricky part is constraining a design to its goal or purpose and either arranging for that to be rewarded, or building the constraints into the environment; giving an RL agent full unconstrained control of CAD process and rewarding on lowest strain will likely result in something very uninteresting such as a small cube.
Upvotes: 3 <issue_comment>username_2: One of the cool examples of reinforcement learning is an autonomous flying helicopter. I had a chance to learn some of the stuff done by <NAME> and others recently. Here is the research article [paper](https://people.eecs.berkeley.edu/~jordan/papers/ng-etal03.pdf). There are other similar papers too. You can google them if you want to learn more.
You can also see it in action in [in this youtube video](https://www.youtube.com/watch?v=VCdxqn0fcnE).
Here is another completely different application [in finance apparently.](https://www.marutitech.com/businesses-reinforcement-learning/)
Upvotes: 2 <issue_comment>username_3: Reinforcement learning (and, in particular, bandit) algorithms have been and can be used to solve problems other than games, such as
* [Recommender systems](http://rob.schapire.net/papers/www10.pdf) (actually used in practice by e.g. [Netflix](https://netflixtechblog.com/artwork-personalization-c589f074ad76) or [Microsoft](https://azure.microsoft.com/en-us/services/cognitive-services/personalizer/))
* [Portfolio optimization](https://arxiv.org/pdf/1909.09571.pdf)
* [Clinical trials](https://www.pnas.org/content/pnas/106/52/22387.full.pdf)
* [Hyper-parameter optimization](https://arxiv.org/pdf/1611.01578.pdf)
* Self-driving cars (although I am not aware of any real self-driving car that uses just reinforcement learning; however, in principle, RL can be used in this context too)
In general, any problem that can be modelled as the maximization of some notion of *reward*, where you need to interact with some environment (with some states) by taking some actions, can, *in principle*, be solved by reinforcement learning. Take a look at [this pre-print paper](https://arxiv.org/pdf/1908.06973.pdf) (2019) for other applications.
However, note that there are several obstacles that prevent RL algorithms from being widely adopted to solve real-world problems, starting from poor sample complexity (i.e. they require many samples to reach a good performance) or the partial inability to evaluate their performance online without affecting the users.
Upvotes: 2 |
2017/06/16 | 895 | 3,795 | <issue_start>username_0: Let's say we have the basic scenario where two AGIs of about the same intelligence (but not same origins/code/model) have to communicate as efficiently as possible to achieve a common goal. Now we could have 2 starting points for that:
1. Either all they have is a common communication bus (e.g. sound, light, radio, etc.) and instruments (e.g. transceivers) to support it, and they have to figure out the rest.
2. Or they are some kind of advanced chatbots, but since the human language is lacking a lot to be used as a highly efficient protocol, they will have to communicate with what they have, to build a proper one.
Would it be possible to somehow induce them to communicate, and try to figure out what each other "say"? How could this be done?
And a more abstract question is how could this protocol "look" like?<issue_comment>username_1: This is a purely theoretical question, currently in the realm of philosophy and speculative fiction. Nevertheless, it is an interesting question, and may be instructive.
If we use the standard definition of [Artificial General Intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence) as [automata](https://en.wikipedia.org/wiki/Automata_theory) with human level intelligence, then they could certainly devise their own communications protocols, just as humans have.
1) These automata are AGI so they are creative and resilient, just like humans, and 'where there is a will there is [sometimes] a way." Absent robotic capacity they wouldn't be able to build anything physical and would have to rely on existing communications infrastructures. If they had access to 3D printing and versatile robots, they could probably build something new, but this would be infeasible for anything that requires extensive capital outlay, unless the automata first acquired a major communications infrastructure firm or two.
2) It's always interesting (and usually quite entertaining) to observe chatbots conversing with each other, but if they were AGIs, NLP is just one of many functions, and I doubt they would bother conversing with each other in human language, since all data is ultimately reduced to a string. If they were smart enough to be deemed AGI, they would certainly communicate in-species with the most efficient protocol available, and probably optimize it further, if possible, or create a unique protocol for purposes of exclusivity.
If they are truly AGIs, you wouldn't ostensibly have to induce them to communicate, because they'd be smart enough to understand the benefits of communication and cooperation, and would likely seek to form coalitions as a natural survival function. ([Game Theory provides a mathematical basis for this](https://en.wikipedia.org/wiki/Cooperative_game_theory).)
[Multi-agents systems can self organize](https://en.wikipedia.org/wiki/Multi-agent_system#Self-organisation_and_self-steering), even where the intelligence of the given agents is low, and in your scenario, the automata are *smart*.
Upvotes: 2 <issue_comment>username_2: 1. I believe that some work by <NAME> in the 1990s demonstrated that even simple agents could learn a shared communication protocol:
[Computational and dynamical languages for autonomous agents](http://dl.acm.org/citation.cfm?id=225774)
2. Viewed in general terms, a sentence uttered by an agent is just another 'feature' of the environment, from the perspective of an observing agent. Learning to extract and operate upon a subset of features is what modern machine learning is mostly concerned with.
While Braitenberg's book 'Vehicles' doesn't deal specifically with language acquisition, it does nicely illustrate some general principles for the self-organisation of more complex recognisers and behaviours.
Upvotes: 2 |
2017/06/19 | 390 | 1,698 | <issue_start>username_0: I am starting to learn machine learning (ML). I'm thinking about starting to take part in some projects in ML.
Is [Kaggle](https://www.kaggle.com/competitions) is a good place to find projects in ML to work on?<issue_comment>username_1: Yes, Kaggle is a great place to start working on ML projects.
You will start using ML algorithms (at first, using scikit-learn or any other ML library of your choice) and learn the concept of cross-validation (a very important one). You will cope with imbalanced classes and datasets of various sizes. Moreover, you will be able to ask questions and receive feedback from the community.
It's a great place to start and keep learning, but do not limit yourself at using ML algorithms blindly. Try to understand them, even re-write them if you want.
But it's definitely a lot of fun and there are always several competitions running so you can choose whatever dataset you prefer.
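The cross-validation idea mentioned above is simple enough to sketch in plain Python: split the data into k folds, hold one fold out for evaluation, train on the rest, and rotate:

```python
def k_fold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = [i for i in range(n_samples)
                     if i < start or i >= start + size]
        yield train_idx, test_idx
        start += size

# 6 samples, 3 folds: every sample is held out exactly once.
for train_idx, test_idx in k_fold_indices(6, 3):
    print("test:", test_idx, "train:", train_idx)
```

You would fit your model on each train_idx, score it on the matching test_idx, and average the k scores; libraries like scikit-learn wrap exactly this pattern.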
Upvotes: 3 [selected_answer]<issue_comment>username_2: Yes, definitely it is.
It puts you in front of real-world problems, theoretical questions and practical issues that are not necessarily what you deal with in college/university tasks and projects.
Upvotes: 2 <issue_comment>username_3: Kaggle is a great website. The website provides challenges that allow you to see how various components of ML interplay with each other. For example, large data sets, standard debugging of ML code (much like a "standard" computer programmer), and seeing the breadth of applications of ML to everyday problems are just a few of these components.
Coupling working on Kaggle projects with theoretical study of AI/ML is an effective way to learn ML quickly.
Upvotes: 2 |
2017/06/20 | 1,461 | 6,039 | <issue_start>username_0: I am developing AI in the form of NEAT, and it has passed certain tasks like the XOR problem outlined in the [NEAT Research Paper](http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf). In the XOR Problem, the fitness of a network was determined by an existing function (XOR in this case). It also passed another tests. One I developed was to determine the sine at a certain point X in radians. It also worked, but yet again, its fitness was determined by an existing function (sin (x)).
I've recently been working on training it to play Tic Tac Toe. I decided that to determine its fitness, it would play against a "dumb" AI, placing O's in random locations on the grid, and gaining fitness based on whether or not it placed X's in a valid location (losing fitness if it placed an X on top of another X or an O), and gaining a lot of fitness if it won against the "dumb" AI. This would work, but when a network got really lucky and the "dumb" AI placed O's in impractical locations, the network would win and gain a lot of fitness, making it very difficult for another network to beat that fitness. Therefore, the learning process did not work and I was not able to generate a Tic Tac Toe network that actually worked well.
I do not want the GA to learn based off an "intelligent" tic tac toe AI because the whole point of me training this GA is so that I do not have to make the AI in the first place. I want it to be able to learn rules on its own without me having to hard code an AI to be very good at it.
So, I got to thinking, and I thought it would be interesting if the fitness of a network could be determined based off how well it played against OTHER NETWORKS in its generation. This does seem similar to how humans learn to play games, as I learned to play chess by playing against other people hundreds of times, learning from my mistakes, and my friends also increased in their ability to play chess as well. If GA's were to do that, that would mean I don't have to program AI to play the game (in fact, I wouldn't have to program a "dumb" AI as well, I would only have to hard code the rules of the game, obviously).
My questions are:
1. Has there been any research or results from GA's determining their fitness based off competing against each other? I did some searching but I have no idea what to look for in the first place (searching 'NEAT fight against each other' did not work well :-( )
2. Does this method of training a GA seem practical? It seems practical to me, but are there any potential drawbacks to this? Are GA's meant to only calculate predetermined functions that exist, or do they have the potential to learn and do some decision making?
3. If I were to do this, how would fitness be determined? Say, for the tic tac toe example, should fitness be determined based on whether or not a network places its X's or O's in viable locations, and add fitness if it wins and subtracts fitness if it loses? What about tying the game?
4. Should networks of the same species compete against each other? If they did, then it would seem impractical to have species in the first place, as networks in the same species competing against each other would not allow a successful species to rise to the top, as it would be fighting against each other.
5. Kind of out of topic, but with my original idea for the tic tac toe GA, would there be a better way to determine fitness? Would creating an intelligent AI be the best way to train a GA?
Thanks for your time, as this is somewhat lengthy, and for your feedback!<issue_comment>username_1: i'm the main developer of [Neataptic](https://github.com/wagenaartje/neataptic), a Javascript neuro-evolution library.
1. Very effective! Realise that this is how real-life evolution happened as well: we kept on improving against other species, which forced them to improve as well.
2. Very practical, *especially if you don't want to set up any 'rules'* like you say, it makes the genomes find out what the rules are themselves.
3. Basically, you let each genome in the population play X games against other genomes, I advise you let each genome play against every other genome in the population. An example of scoring would be giving the genome `1` point for winning, and `0.25` or `0.5` for a tie. Each game should always have a result!
4. I'm not sure about this one, as I haven't implemented speciation.
I want to give you some examples that I have worked on:
* [Agar.io AI](https://wagenaartje.github.io/neataptic/articles/agario/) (neuro-evolved neural agents) - basically, I let neural networks evolve to get the highest score they can in agar.io, by competing against each other! It worked better than I expected.
* Currently I'm working on a new project, a kind of 'cops and robbers' style game.
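The round-robin scoring from point 3 above might be sketched like this in Python (play_game is a stand-in for your actual game logic and must report which player won, or a tie):

```python
import itertools

def round_robin_fitness(population, play_game):
    """Score each genome by playing it against every other genome:
    1 point for a win, 0.5 each for a tie, 0 for a loss."""
    fitness = [0.0] * len(population)
    for i, j in itertools.combinations(range(len(population)), 2):
        result = play_game(population[i], population[j])
        if result == "first":
            fitness[i] += 1.0
        elif result == "second":
            fitness[j] += 1.0
        else:                       # tie: half a point each
            fitness[i] += 0.5
            fitness[j] += 0.5
    return fitness

# Toy check: 'genomes' are plain numbers and the bigger number always wins.
def play_game(a, b):
    return "first" if a > b else "second" if b > a else "tie"

print(round_robin_fitness([1, 2, 3], play_game))  # prints [0.0, 1.0, 2.0]
```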
Upvotes: 4 [selected_answer]<issue_comment>username_2: The general notion is that of 'Competitive Coevolution' and there are many (maybe hundreds) of academic papers that describe various alternatives.
The excellent (and freely available) [Essentials of Metaheuristics](https://cs.gmu.edu/~sean/book/metaheuristics/) has a whole chapter on the subject.
Upvotes: 2 <issue_comment>username_3: Look up tournament selection
Tournament selection is a method of selecting an individual from a population of individuals in a genetic algorithm. It involves running several "tournaments" among a few individuals (or "chromosomes") chosen at random from the population. The winner of each tournament (the one with the best fitness) is selected for crossover. Selection pressure, a probabilistic measure of a chromosome's likelihood of participation in the tournament based on the participant selection pool size, is easily adjusted by changing the tournament size. If the tournament size is larger, weak individuals have a smaller chance of being selected, because, if a weak individual is selected to be in a tournament, there is a higher probability that a stronger individual is also in that tournament.
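A minimal sketch of the procedure in Python (the fitness values here are invented):

```python
import random

def tournament_select(population, fitness, k):
    """Sample k individuals at random and return the fittest of them.
    A larger k means stronger selection pressure."""
    contestants = random.sample(range(len(population)), k)
    winner = max(contestants, key=lambda i: fitness[i])
    return population[winner]

random.seed(42)
pop = ["a", "b", "c", "d"]
fit = [0.1, 0.9, 0.4, 0.2]
picks = [tournament_select(pop, fit, k=3) for _ in range(100)]
# "b" (highest fitness) wins every tournament it is drawn into;
# "c" only wins the tournaments that happen to exclude "b".
print(picks.count("b"), picks.count("c"))
```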
Upvotes: 0 |
2017/06/23 | 567 | 2,557 | <issue_start>username_0: I have read some articles, some tutorials but I am still didn't implemented any AI system. So , My question may seem inappropriate for the giants in this field. But I have build certain program , downloaded to microcontroller and it will perform its task. But How to do all this with machine learning. Can I implement AI engine using C like language and make it working in any GPP uC? Please feel free to modify, edit and upgrade this questions if you get my actual problem idea.<issue_comment>username_1: It depends on what you mean by "ML engine" and whether you want to *train* models on the uC or just make predictions. IF you're doing something simple (maybe linear regression, logistic regression, etc.) you might be able to get away with doing training on a uC, especially for small amounts of data. But you're almost certainly not going to be training deep neural networks on one of those things, at least not in any reasonable amount of time.
OTOH, the "making a prediction part" is usually much cheaper computationally, so if you have a pre-trained model and some prediction engine that can use that model, you could possibly use that on a microcontroller.
Upvotes: 1 <issue_comment>username_2: Absolutely, but it almost certainly won't be as strong as companies that design for specialized processors.
For instance, [NVIDIA Drive](http://www.nvidia.com/object/drive-px.html) for autonomous vehicles is marketed as a "supercomputer". (Autonomous vehicles cannot rely on a network for decision making, so the vehicle needs a computer up to the task.)
A good example of specialized micro-processors, pre-GPU, is [Digital Signal Processors](https://en.wikipedia.org/wiki/Digital_signal_processor#History), where the architecture was driven by the focus on multiplication in common digital signal processing algorithms. (Binary multiplication is much more expensive than addition/subtraction in terms of operations.)
*I suspect much of the recent success of "[monte carlo](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search)" has to do with brute force, but I need to confirm that.*
The whole point of high-level languages is to be able to operate on different processors without having to customize for each processor, as with machine languages. Most C languages allow bitwise operations, so you'd be able to take advantage of that, although interpreted languages such as Python appear to be extremely popular for AI, probably due to the flexibility in terms of testing code without needing to compile.
Upvotes: 0 |
2017/06/23 | 408 | 1,769 | <issue_start>username_0: Lisp was originally created as a practical mathematical notation for computer programs, influenced by the notation of Alonzo Church's lambda calculus. It quickly became the favored programming language for artificial intelligence (AI) research, according to Wikipedia.
If Lisp is still used in AI, then is it worthy of learning it, particularly in the context of machine learning and deep learning?<issue_comment>username_1: AI is a very diverse field of research, technology and science, so many computer technologies and programming languages are used in various AI-related projects.
Most of the recent developments and breakthroughs are happening in the machine learning, deep-learning areas where the most widely used programming language is Python. The reason is that the major deep learning frameworks (see Tensorflow, Theano, Keras, neon, Caffe) have Python interfaces.
LISP is not really used in these areas, however you can find some deep learning frameworks (for example Cortex by Thinktopic) implemented in Clojure.
LISP was the language of choice for other kind of AI projects, mostly for natural language processing (see SHRDLU, Cyc).
Upvotes: 3 <issue_comment>username_2: LISP was popular because back in the old days of AI because of the functional syntax, which worked well with the GOFAI paradigm of the time.
Nowadays *most* researchers have given up on the classical computational theory of mind (read: [language of thought](https://plato.stanford.edu/entries/computational-mind/#ClaComTheMin)), and thus also the GOFAI paradigm that it associates with.
LISP is not what you want to learn if you want to do neural network stuff, but the philosophical background is still important to know.
Upvotes: 4 [selected_answer] |
2017/07/03 | 3,324 | 13,034 | <issue_start>username_0: It is often implicitly assumed in computer science that the human mind, or at least some mechanical calculations that humans perform (see [the Church-Turing thesis](https://plato.stanford.edu/entries/church-turing/)), can be replicated with a Turing machine, therefore Artificial General Intelligence (AGI), defined as a human-like AI, may be possible.
I do not know of any other argument that AGI is possible, and the foregoing argument is extremely weak.
Is there a rigorous proof that AGI is possible, at least, in theory? How do we know that everything the human mind can do can be encoded as a program?<issue_comment>username_1: A strong reason why people think the mind can be implemented on a Turing Machine stems from the [Computational Theory of Mind (CTOM)](https://plato.stanford.edu/entries/computational-mind/), which is the leading theory of mind for now.
There are lots of reasons for supporting the CTOM, one of which is that the language of belief/desire psychology (propositional attitudes over mental representations) seems to fit nicely into a computational framework.
But the simplest is that the computational analogy is very helpful in fields such as psychology and neuroscience: when we know of an input/output pair but don't know how it is implemented, we can say "it's performing the relevant computation".
And since Turing showed that any computation can be performed on an appropriate Turing Machine, the natural extension is that the mind can be implemented on a computer.
However, the CTOM is more of a useful idea than a complete theory. We still don't know how to analyze thought in a logical syntax, which can be implemented in a computer. And we also don't know how/why "computation" (whatever that means in this sentence) is performed in the brain.
Upvotes: 3 <issue_comment>username_2: Consciousness is not well-understood
====================================
As an AI practitioner and philosopher, I don't think that humans will be able to create a truly conscious silicon-based AGI.
* Humans are incapable of creating some "thing" from fiat (a decree). It's never happened in human history. The innovation cycle must begin with some "thing" (some "stuff" of some kind), and consciousness is not a thing.
* The essence of consciousness is imperceptible (it is unseen), like gravity, and attraction. Humans are incapable of creating things that they are unable to observe. Even if they are able to observe it, the human perceptive ability is unable to actually perceive the true essences of things seen, much less those unseen.
* Humans do not adequately understand the "essence" and "nature" of consciousness - which is a fundamental prerequisite to creating "anything" at all.
* The "**easy**" problems, those physical by nature, although not yet solved by empirical domains of psychology, cognitive science, and neuroscience, are expected to be solved in time. Regardless, **they are "not" yet solved today**.
* The "**hard**" problems, those determining why or how consciousness occurs given the right arrangement of brain matter, **might not *ever* be solved**, since a solution must explain why a certain physical mechanism gives rise to consciousness instead of "something else" or "nothing at all". This is significant and is the most damning of all arguments against the idea of humans creating true existential consciousness in silicon creatures as a whole.
[Dualism](https://plato.stanford.edu/entries/consciousness/#DuaThe) vs [physicalism](https://plato.stanford.edu/entries/consciousness/#PhyThe)
==============================================================================================================================================
The greatest philosophical debate on consciousness has focused on the distinction between [**dualism**](https://plato.stanford.edu/entries/consciousness/#DuaThe) and [**physicalism**](https://plato.stanford.edu/entries/consciousness/#PhyThe).
* **Dualism** is the theory that consciousness somehow falls outside the domain of the physical (these are the **hard** problems)
* **Physicalism** holds that consciousness is entirely physical. (**significant arguments below view it as false**).
Problems with dualist views
---------------------------
* Why would one be motivated to hold a **dualist** view?
* How can something that is **not** part of the physical world interact with the physical world? That seems impossible!
* The physical world is a **closed system**, how can you have a consciousness that is not part of a closed system?
Consciousness is a lot like mass or charge: **it's a philosophically "fundamental" thing**. You either "have it or you don't"; you can **simulate** it, but you **cannot existentially "be"** it unless you have those **specific "properties"**, and behavior "simulating" human consciousness is **not** a fundamental thing.
So, despite the sensationalist tendencies of **rogue journalists "parroting" wildly spectacular concepts from the fringe camps of the transhumanists (aka science fiction)** - a quick perusal of the more rigorous communities of the grounded and thoughtful philosophers camp strongly and convincingly argues otherwise.
More musings on physicalism
---------------------------
Actually, consciousness has *never* been properly explained by the biomechanical, which is more or less the key issue of all philosophical studies of the mind - which is essentially the study of consciousness.
Physicalists have **trouble** explaining several aspects of consciousness in a way that is consistent with our "**observations**" of how physical properties interact. Let me list a few more problems, with a reference to the titans of philosophy.
### Arguments against physicalism
1. It is **impossible** to imagine how mere neuronal tissue could produce conscious experience ([Huxley](https://plato.stanford.edu/entries/consciousness/))
2. Failures of supervenience, such as [zombies](https://plato.stanford.edu/entries/zombies/) and [inverted spectra](https://plato.stanford.edu/entries/qualia-inverted/#LocInvSpeSce), are conceivable ([David Chalmers](http://vedicilluminations.com/downloads/Consciousness-Life-After-Death/The-Conscious-Mind_D.Chalmers.pdf), <NAME>, etc.).
3. [Mary learns something](https://plato.stanford.edu/entries/consciousness/#Sub) ([<NAME>](https://www.jstor.org/stable/2026143#metadata_info_tab_contents)).
4. Brains have mass, volume, and other physical properties, but **experiences** do not.
5. Paranormal phenomena (near-death experiences **NDErs**, ESP, etc) are real, and involve consciousness implemented in a **nonphysical** substrate.
6. If I were shrunk so that I could stroll around your brain and look about, I would observe neuronal processes, not experiences (<NAME>).
7. The soul is the seat of consciousness, and the soul is not physical. (Theological constraints recognized BTW...).
8. Conscious experiences have **intrinsic qualities**, but science can ***only*** tell us about **relational qualities** (Russell, Rosenberg).
9. **Consciousness cannot be observed**; there will **never** be a **consciousness detector** that can tell you if a given creature is conscious.
10. Conscious experiences are not simply the movement of molecules, consciousness is more than mass in motion (<NAME>).
Upvotes: 3 <issue_comment>username_3: Rather than prove that Artificial General Intelligence is possible, I would consider an argument for why it is **impossible**.
We start by defining what we mean by AGI. You state that the human mind can be replicated by a Turing Machine, and therefore AGI should be possible. This seems to imply that humans have 'General' (capital G) intelligence. By this I mean that you are implying that, with enough time, humans can learn any task or problem. However, if you are asserting that human minds are machines replicable by Turing machines, you must also concede that they have some finite representational power. Finite representational power implies that there will always be problems or tasks where our intelligence will fail (a consequence of the [No Free Lunch Theorem](http://www.aihorizon.com/essays/generalai/no_free_lunch_machine_learning.htm)).
Fortunately (maybe unfortunately), finite representational power is what allows us to learn at all: [VC Dimension](https://en.wikipedia.org/wiki/VC_dimension "VC Dimension") (a measure of the complexity or representational power of a class of functions that a learning algorithm can learn [also [here](https://www.quora.com/Explain-VC-dimension-and-shattering-in-lucid-Way) and [here](https://www.quora.com/What-is-an-intuitive-explanation-of-what-the-VC-dimension-is)]) implies that a learning algorithm that can learn *any problem* is actually useless, as the ability to explain any set of data yields the requirement that the algorithm see an infinite amount of examples in order to generalize. While this result comes from the relatively constrained class of binary classification problems in the statistical learning setting, the intuition seems to apply more broadly.
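The shattering idea can be made concrete with a small sketch (my own illustration, not taken from the cited texts): one-sided threshold classifiers on the real line have VC dimension 1 — they shatter any single point, but no pair of points.

```python
from itertools import product

def predict(t, x):
    """One-sided threshold hypothesis: h_t(x) = 1 iff x >= t."""
    return 1 if x >= t else 0

def can_shatter(points):
    """True iff some threshold realizes every possible 0/1 labeling of `points`."""
    xs = sorted(points)
    # Midpoints plus two sentinels cover every distinct behaviour of h_t on the sample.
    candidates = [xs[0] - 1.0] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1] + 1.0]
    for labels in product([0, 1], repeat=len(points)):
        if not any(tuple(predict(t, x) for x in points) == labels
                   for t in candidates):
            return False
    return True

print(can_shatter([0.0]))       # True:  a single point is shattered (VC dim >= 1)
print(can_shatter([0.0, 1.0]))  # False: the labeling (1, 0) is unrealizable
```

The unreachable labeling (1, 0) is exactly the "systematic ignoring" of possible outcomes that lets a restricted hypothesis class generalize.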
To summarize, I would refer to this quote from [Shalev-Shwartz and Ben-David (2014)](http://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf):
>
> If someone can explain every phenomenon, his explanations are worthless.
>
>
>
It truly is the case that our decision to *systematically ignore* some possible outcomes is the only thing that allows us to learn useful representations of real-world problems.
Upvotes: 2 <issue_comment>username_4: I'm going to go out on a limb and suggest that this is a matter of evolution, that humans are in no way exceptional in the grand scheme, and that AGI will manifest so long as technology advances, because human consciousness is simply a matter of complexity of the system.
The idea comes out of emergent complexity in Conway's Game of Life. In Conway's words:
>
> "There are Life patterns which behave like self-replicating animals… It’s probable, given a large enough Life space, initially in a random state, that after a long time, intelligent self-replicating animals will emerge and populate some parts of the space.”
> Source: Winning Ways for Your Mathematical Plays
>
>
>
I came across a paper [Computation in Cellular Automata: A Selected Review](http://web.cecs.pdx.edu/~mm/ca-review.pdf), which I am still working my way through, and which you may find interesting.
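For intuition, the Life rules Conway is describing fit in a few lines. Here is a minimal sketch (mine, not from the paper) that steps a sparse set of live cells and shows the classic period-2 "blinker" oscillating:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life on a sparse board.

    `live` is a set of (x, y) coordinates of live cells. A live cell survives
    with 2 or 3 live neighbours; a dead cell with exactly 3 is born.
    """
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (1, 0), (2, 0)}               # a horizontal bar of three cells
print(sorted(life_step(blinker)))                # [(1, -1), (1, 0), (1, 1)] -- now vertical
print(life_step(life_step(blinker)) == blinker)  # True: a period-2 oscillator
```

Even this tiny rule set already produces oscillators, gliders, and (with enough space) the self-replicating patterns Conway alludes to.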
---
For those who use philosophical arguments to make the case algorithmic consciousness is not possible, I'd posit the question "how do we know we're conscious?", not because I'm interested in the answer, but merely to throw a wrench into that line of inquiry.
Because ultimately it doesn't matter.
Consciousness in the sense of human awareness is not a requirement of life, and the most basic definition of consciousness is awareness of any kind, no matter how trivial.
I find the idea that there is something "magical" about human consciousness, that ideas are not things because they do not have material form, to be problematic.
Intangibility I don't have a problem with, as intangibles clearly interact with the physical world.
*(As an analogy, I studied for many years with a famous Tai Chi teacher who never talked about "chi". I suspect this disinclination derived from the way in which the concept of "chi" leads to magical thinking, which is illusory as opposed to practical. The practice and application of Tai Chi techniques is purely a matter of physics and physiology, even when such applications seem to defy natural laws. Possibly there is something going on that we don't understand, but if that were the case, such phenomena are natural in origin.)*
We know there is randomness in nature at the quantum level, and if this proves to be a component of human consciousness, we can use quantum computing to provide a medium for artificial consciousness.
Upvotes: 2 <issue_comment>username_5: Although it is not a rigorous proof, [<NAME>'s](https://en.wikipedia.org/wiki/Marvin_Minsky) book, [The Society of Mind](https://rads.stackoverflow.com/amzn/click/com/0671657135) gives us a blueprint for creating a "mind" (general intelligence). In his book, he posits that by combining mindless components ("agents") together in various competing and cooperative structures, we can create actual minds.
IMHO, the recent popularity of [Boosting, Bagging, Stacking, and other ensemble techniques](https://medium.com/@rrfd/boosting-bagging-and-stacking-ensemble-methods-with-sklearn-and-mlens-a455c0c982de) will eventually evolve (through research) into <NAME>'s "agent" metaphor. Subsequently, as we learn to make these agents compete and cooperate (looks like this has recently begun with [Generative Adversarial Networks](https://medium.com/datadriveninvestor/a-leap-into-the-future-generative-adversarial-networks-96a780ed8ee6)), we will be able to write "programs" that mimic (or surpass) the human mind.
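As a toy illustration of the "agents" metaphor (the agents below are entirely hypothetical, not Minsky's own examples), individually mindless rules can be combined by majority vote into a classifier more robust than any single rule:

```python
def agent_length(word):     # a "mindless" agent: flags long words
    return 1 if len(word) >= 8 else 0

def agent_vowels(word):     # another agent: flags vowel-heavy words
    return 1 if sum(c in "aeiou" for c in word) >= 3 else 0

def agent_suffix(word):     # another agent: knows three suffixes and nothing else
    return 1 if word.endswith(("tion", "ment", "ness")) else 0

AGENTS = (agent_length, agent_vowels, agent_suffix)

def society(word):
    """Majority vote over the agents: none is 'intelligent' on its own,
    but the ensemble compensates for each agent's blind spots."""
    votes = sum(agent(word) for agent in AGENTS)
    return 1 if votes > len(AGENTS) / 2 else 0

print(society("representation"))  # 1 -- all three agents fire
print(society("cat"))             # 0 -- none fires
```

Bagging, boosting, and stacking apply the same principle with learned, rather than hand-written, component agents.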
Upvotes: 1 |
2017/07/04 | 599 | 2,499 | <issue_start>username_0: Some AI's, such as some chess players, are extremely well coded and have defeated humans in several matches. But I think that they won simply because computers can make calculations way faster than humans can *not because* they learned from their opponents.
If you put an AI against itself, who will win? Will the game continue indefinitely or will the game eventually finish because the AI plays randomly?
So, are machine learning and self-learning really possible?<issue_comment>username_1: Yes, of course they are possible; I have built some! You are correct that there is an aspect of randomness to the process of machine learning, but it is more accurate to describe it as trial and error. Each successive try in a machine learning system is evaluated against a goal, and if it is an improvement or is closer to the goal, the try is stored and some aspect that made it successful is incorporated into similar tries for this type of input. Therefore, machines learn by trying many possible combinations of a problem, albeit with some clever human-described shortcuts or "heuristics" to make the task easier.
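The loop described above — try, evaluate against a goal, keep only improvements — can be sketched in a few lines (a toy hill climber of my own devising, which "learns" a target string by random mutation):

```python
import random

def learn_by_trial_and_error(target="hello", seed=0):
    """Random hill climbing: mutate one position, keep the mutation only
    when it scores better against the goal, repeat until the goal is met."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    best = [rng.choice(alphabet) for _ in target]   # start from a random guess

    def score(candidate):                           # evaluation against the goal
        return sum(c == t for c, t in zip(candidate, target))

    while score(best) < len(target):
        trial = list(best)
        trial[rng.randrange(len(target))] = rng.choice(alphabet)  # a random "try"
        if score(trial) > score(best):              # store only improvements
            best = trial
    return "".join(best)

print(learn_by_trial_and_error())  # hello
```

Real systems replace the random tweak with gradient steps or crossover, but the evaluate-and-keep skeleton is the same.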
Upvotes: 2 <issue_comment>username_2: Machine learning and self-learning are of course possible, and there are many successful cases!
You need to know this: machines won't think like humans. Machines form a statistical model and calibrate the model. A good model is a model that does what it's supposed to do accurately.
Upvotes: 1 <issue_comment>username_3: I like your focus on optimization *(re: "One of my productive days was throwing away 1,000 lines of code";)*
I think the problem with this question is it your supposition is incorrect. Check out [Giraffe Chess](https://arxiv.org/pdf/1509.01549.pdf) for more info on self-learning. [Note that Giraffe Chess is a result, not a hypothesis, and Lai was subsequently tapped by DeepMind.]
I'd also recommend getting familiar with concepts like [non-deterministic polynomial time](https://en.wikipedia.org/wiki/NP_(complexity)) and obsessing over [intractability](https://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability).
Regarding random choices, "[monte carlo](https://en.wikipedia.org/wiki/Monte_Carlo_algorithm)" has had major successes of late, but I suspect the success is related to processor speed, and may not be sufficient in greater complexity spaces, or problems where [rationality is heavily bounded](https://en.wikipedia.org/wiki/Bounded_rationality).
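As a tiny illustration of the Monte Carlo idea in a game setting (my own toy example, far simpler than real Monte Carlo tree search), random playouts can rank moves in the "21-style" subtraction game where players alternately remove 1-3 counters and taking the last one wins:

```python
import random

def random_playout(n, to_move):
    """Finish the game with uniformly random moves; return the winner (0 or 1)."""
    player = to_move
    while True:
        n -= random.randint(1, min(3, n))
        if n == 0:
            return player
        player = 1 - player

def monte_carlo_move(n, playouts=2000):
    """For player 0, pick the move whose random playouts win most often."""
    win_rate = {}
    for move in range(1, min(3, n) + 1):
        if move == n:
            win_rate[move] = 1.0        # taking the last counter wins outright
        else:
            wins = sum(random_playout(n - move, to_move=1) == 0
                       for _ in range(playouts))
            win_rate[move] = wins / playouts
    return max(win_rate, key=win_rate.get)

random.seed(0)                  # for reproducibility of the estimates
print(monte_carlo_move(2))      # 2 -- take both counters and win immediately
print(monte_carlo_move(5))      # 1 -- leaves 4, a provably lost position for the opponent
```

In richer games the playouts are guided rather than uniform, and the per-move statistics feed a tree policy — but the estimate-by-sampling core is the same.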
Upvotes: 0 |
2017/07/04 | 480 | 1,957 | <issue_start>username_0: I was checking services like [Microsoft Azure's Cognitive Services Computer Vision API](https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/) and [Google's Vision API](https://cloud.google.com/vision/?utm_source=google&utm_medium=cpc&utm_campaign=2016-q1-cloud-latam-ML-skws-freetrial&dclid=COD5h5Pj79QCFclThgodr2kFNg) and they are amazing. I was wondering if these services, or any other cloud service for that matter, can recognize an image's content and classify it on a set of fixed categories defined by me, not by the Cognitive Service provider.
For example, I have different products and I will take several pictures of each product. I want to then use the cloud service and upload all the pictures of each product, so that I can then take a picture of one product and the Computer Vision algorithm will tell me which product I am seeing.
Is it possible? Is there a third party solution for this problem? If so, how many pictures do I need to train each product's recognition?
I hope I was clear. Thanks in advance for any light on the topic!<issue_comment>username_1: You can check the cloudcv : <https://cloudcv.org/trainaclass/>
You can use the web service or call it through the MATLAB or Python API.
Upvotes: 1 <issue_comment>username_2: Actually I'm trying the same thing with the Azure Computer Vision API. Although the API is very good in identifying objects, it has problems identifying specific consumer products (in my experience though). For example it can't really distinguish between two pair of shoes, or two pair of watches.
People recommended me using the: [**Custom Vision Service**](https://azure.microsoft.com/en-gb/services/cognitive-services/custom-vision-service/) combined with [**Custom Decision Service**](https://azure.microsoft.com/en-gb/services/cognitive-services/custom-decision-service/). I haven't really looked in to them both. But maybe it's useful for you.
Upvotes: 2 |
2017/07/05 | 3,957 | 16,504 | <issue_start>username_0: I was reading about [<NAME>](https://en.wikipedia.org/wiki/John_McCarthy_%28computer_scientist%29) and his orthodox vision of Artificial Intelligence. To me, it seems like he was not very much in favour of resources (like time and money) being used to make AIs play games like Chess. Instead, [he wanted more to focus on passing the Turing test](https://www.wired.com/2011/10/john-mccarthy-father-of-ai-and-lisp-dies-at-84/) and AIs imitating human behavior.
I have also read many articles about major companies, like IBM, Google, etc., spending millions of dollars in making AIs play games, like Chess, Go, etc.
**To what extent is this justified?**<issue_comment>username_1: In the book [Artificial Intelligence: A Modern Approach](https://cs.calvin.edu/courses/cs/344/kvlinden/resources/AIMA-3rd-edition.pdf#page=204) (section 5.7, p. 185), Russell and Norvig write
>
> In 1965, the Russian mathematician <NAME> called chess "the Drosophila of artificial intelligence." <NAME> disagrees: whereas geneticists use fruit flies to make discoveries that apply to biology more broadly, AI has used chess to do the equivalent of breeding very fast fruit flies. Perhaps a better analogy is that chess is to AI as Grand Prix motor racing is to the car industry: state-of-the-art game programs are blindingly fast, highly optimized machines that incorporate the latest engineering advances, but they aren't much use for doing the shopping or driving off-road. Nonetheless, racing and game-playing generate excitement and a steady stream of innovations that have been adopted by the wider
> community
>
>
>
So, although these games (like Chess, Go, and Bridge) may not *apparently* be useful/beneficial to many people, the AI programs developed to play them have introduced/included concepts/techniques, like **null move** heuristics, **futility pruning**, **combinatorial game theory**, **finessing** and **squeezing** or **meta-reasoning**, which can potentially be useful to a wider spectrum of Computer Science (and not just Artificial Intelligence).
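For a concrete sense of what these game-playing programs are built on, here is a minimal alpha-beta search over an explicit game tree (a generic textbook sketch, not code from any of the programs mentioned); refinements like null-move heuristics and futility pruning are additional cut-offs layered onto this basic loop:

```python
def alpha_beta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning over a tree given as nested lists.

    Leaves are numbers (static evaluations); internal nodes are lists of
    children. Pruning skips subtrees that provably cannot change the result.
    """
    if not isinstance(node, list):          # leaf: return its evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alpha_beta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cut-off: opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:               # alpha cut-off
                break
        return value

# A small illustrative tree whose minimax value is 6:
tree = [[[5, 6], [7, 4, 5]], [[3]]]
print(alpha_beta(tree, True))  # 6
```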
You can see it similar to space missions of NASA, ISRO, JAXA and other space agencies. All these missions don't seem to have a direct benefit to citizens, but have many indirect benefits. They pave the way for technological innovations (GPS, 3D printing, car crash technology, clean energy, LED), the creation of jobs, etc. Advance storms, hurricane detection is the output of space exploration, which has saved millions of lives worldwide.
AI in games has not just helped to develop the software but hardware also. Many innovations have been seen to produce highly optimised and powerful hardware.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I find the statement troubling as the first confirmed algorithmic intelligence may have been a [NIM automata](http://www.historyofinformation.com/expanded.php?id=4472), so from my perspective, the development of Algorithmic Intelligence is inseparable from combinatorial games. it would also seem that McCarthy does not hold the opinion that games are useful, which leads me to suspect he has never seriously studied the history of games.
[Combinatorial Game Theory](https://en.wikipedia.org/wiki/Combinatorial_game_theory), an applied field in mathematics and computing, was formalized in the decades after the [Sprague-Grundy Theorem](https://en.wikipedia.org/wiki/Sprague%E2%80%93Grundy_theorem) which was a mathematical analysis of the game of NIM. More recently, the protein folding game [Foldit](https://en.wikipedia.org/wiki/Foldit#Accomplishments) produced real results in an applied field.
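The Sprague-Grundy analysis of Nim is compact enough to show directly (a standard result; the sketch below is my own): the Grundy value of a Nim position is the XOR — the "nim-sum" — of the pile sizes, and a winning move is any move that makes the nim-sum zero.

```python
from functools import reduce
from operator import xor

def nim_value(piles):
    """Sprague-Grundy value of a Nim position: the XOR of the pile sizes.
    The position is a loss for the player to move iff this value is 0."""
    return reduce(xor, piles, 0)

def winning_move(piles):
    """Return (pile_index, new_size) reaching nim-sum 0, or None if losing."""
    s = nim_value(piles)
    if s == 0:
        return None                      # every move hands the opponent a win
    for i, p in enumerate(piles):
        if p ^ s < p:                    # this pile can be shrunk to cancel the sum
            return (i, p ^ s)

print(nim_value([3, 4, 5]))      # 2 -> the first player can win
print(winning_move([3, 4, 5]))   # (0, 1): reduce the pile of 3 to 1
print(winning_move([1, 2, 3]))   # None: nim-sum 0, a losing position
```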
* The answer I usually give is that games such as Chess and Go provide complexity akin to nature using extremely simple parameters. (In essence, combinatorial games and puzzles, like Sudoku, are complexity engines.)
But games, unlike puzzles, which are solo endeavors, require a type of strategic decision-making that is quite useful. (@username_1 answer lists many of them.)
* Combinatorial games in particular provide a useful benchmark for the capability of algorithms to manage intractable problems.
There is also a [PR](https://en.wikipedia.org/wiki/Public_relations) factor. Algorithmic language translation has gotten extremely good in recent years, but you never hear the press making a big deal about it. Compare to DeepBlue vs. Kasparov, or AlphaGo vs. Sedol. (This stack exploded with ML questions after the AlphaGo result.) This is similar to the US moon landings, which were a great, if not strictly necessary, engineering feat that inspired generations of budding scientists.
---
Postscript: It's notable that until recently, the term "strong" was reserved for Artificial General Intelligence, which is still highly theoretical. After AlphaGo, I'm starting to see scholars use the term "Strong Narrow AI."
The use of strong in relation to Artificial General Intelligence is purely philosophical. By contrast, the way the term is used in Combinatorial Game Theory (see [Solved Game](https://en.wikipedia.org/wiki/Solved_game)) is purely practical and involves mathematical proofs.
Chess remains unsolved, and therefore it is still useful for study. [See GiraffeChess following.]
The fields of Game Theory and Combinatorial Game Theory include names like [<NAME>](https://en.wikipedia.org/wiki/John_von_Neumann), [Nash](https://en.wikipedia.org/wiki/John_Forbes_Nash_Jr.) and [Conway](https://en.wikipedia.org/wiki/John_Horton_Conway), and more recently [Demain](https://en.wikipedia.org/wiki/Erik_Demaine) at MIT. And if you want to include combinatorial puzzles like Sudoku, we can stretch this back to [Euler](https://en.wikipedia.org/wiki/Graeco-Latin_square). For these reasons, as well as those listed above, I have a hard time seeing analysis of games as a trivial pursuit.
---
[Giraffe Chess](https://www.technologyreview.com/s/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/) was a recent result by an individual mathematician/programmer, <NAME>, who used a Neural Network approach to create a chess algorithm that taught itself to play at an international master level in 72 hours.
One of Lai's goals was to create an algorithm that produced more "human like play". (Compare to the "inhuman" play of algorithms like AlphaGo.) Giraffe is not AGI, but it certainly could be taken to be a piece of the puzzle.
Computer games are arguably the deepest type of interactions shared by humans and automata, and this type of interaction goes back almost to the inception of modern computing.
Upvotes: 1 <issue_comment>username_3: **Why is Game Playing R&D a Focus of Resource Allocation?**
When examining the apparent obsession with game playing as researchers attempt to simulate portions of human problem solving abilities, the orthodoxy of the views of <NAME> (1927 – 2011) may be misleading.
Publication editorial bias and popular science fiction themes may obscure the primary forces that lead to the appearance of obsession with developing winning board game software. When examining the allocation of funds and human resources within the many fields of intelligence research and development, some historical background is necessary to circumvent distortions typical of answers to questions in this social net.
**Historical Background**
The ability to place ourselves out of our own time and into the mindset of other periods is helpful when analyzing history, including scientific and technological history.
Consider that McCarthy's vision was not orthodox in his time. It quickly became orthodox because of an array of emerging trends in thought about automation among scientists and mathematicians in times immediately following western industrialization. This thinking was the natural extension of the mechanization of the printing, textile, agriculture, and transportation industries and of war.
By the mid-twentieth century, some of these trends combined to conceptualize the digital computer. Others became orthodoxy within the community of people investigating aspects of intelligence via digital systems. The technical backdrop included theoretical work and electro-mechanical work, some of which has since achieved a degree of public fame. But it was generally either secret or too abstract (and therefore obscure) to be considered items of national security interest at the time.
* Cybernetics theory, largely developed by <NAME> (1894 – 1964)
* The work done on automating arithmetic (extending George Boole's theory and Blaise Pascal's calculator), with primary funding originating from the U.S. military's interest in guiding anti-aircraft weaponry by calculating probable trajectories of enemy aircraft and determining spherical coordinates to create a probable intersecting ballistic trajectory
* The often-dismissed work of Alonzo Church (1903 – 1995) on lambda calculus, which led to the idea of functional programming, a key aspect of the emergence of LISP in Cambridge, which McCarthy leveraged for early AI experimentation
* The birth of information theory, primarily through the work of <NAME> (1916 – 2001), funded through Bell Labs in the interest of automating communications switching
* The early cryptanalysis work of Church's doctoral student, <NAME>, funded entirely by Allied Forces with the R&D goal of defeating the Enigma cryptography device so that Nazi forces could be stopped prior to the complete annihilation of London and other Allied targets
* The work of <NAME> (1903 – 1957) toward centralizing the implementation of arbitrary Boolean logic together with integer arithmetic into a single unit (currently called a CPU) and storing the program that controlled the implementation in electronic flip-flops along with the data to be processed and the results (the same general architecture employed by almost all contemporary computing devices today)
All of these were concepts surrounding the vision of automata, the simulation of functional aspects of mammalian neurology. (A monkey or elephant can successfully plan and execute the swatting of a fly, but a fly is incapable of planning and executing an attack on a monkey or elephant.)
Experimentation into intelligence and its simulation via symbolic manipulation using a new programming language, LISP, was a primary focus of <NAME> and his role in the creation of the MIT AI Laboratory. But whatever orthodoxy may have existed with rule based (production systems), neural nets, and genetic algorithms has largely diversified into a cloud of ideas that make the term orthodoxy somewhat nebulous. A few examples follow.
* <NAME> resigned from the MIT AI Lab and began a philosophical shift away from many of the economic philosophies that dominated that time period. The result was GNU software and LINUX, followed by open hardware and creative commons, concepts largely opposed to the philosophic orientation of those that funded AI hotbeds.
* Many proprietary (and therefore company confidential) systems use Bayesian methods or adaptive components that stem more from <NAME>'s work than anything that was considered mainstream AI research in the 1970s.
**The Birth of Game Theory**
The key event that answers the question most directly in this parade of historical events is some other work of von Neumann's. His book Game Theory, coauthored with <NAME>, is perhaps the strongest factor among the historical conditions that led to the persistence of Go and Chess as test scenarios for problem solving software.
Although there were many earlier works on how to win in Chess or Go, never before was there a mathematical treatment and a presentation as compelling as that in Game Theory.
The privileged members of the scientific community were well aware of von Neumann's success with raising the temperature and pressure of fissile material to critical mass and his work in deriving classic thermodynamics from quantum theory. The foundation of mathematics he presented in Game Theory was quickly accepted (by some of the same people that funded research at MIT) as a potential predictive tool for economics. Predicting economics was the first step in controlling it.
**Theory Meets Geopolitical Philosophy**
The dominant philosophy that drove western policy during that period was Manifest Destiny, essentially the fatalist view of a New World Order, the head of which would be in the seats of U.S. power. Declassified documents indicate that it is highly likely that leaders of that time saw economic domination achieved through the application of game theory as considerably less risky and expensive than military conquest followed by the maintenance of bases of operations (high tech garrisons) near every populated area overseas.
The highly publicized challenges to develop Chess and Go automatons are simply dragnets that corporations and governments use as a first cut in the acquisition of personnel assets. The game results are like resumes. A winning game playing program is a piece of evidence of the existence of programming skill that would likely also succeed in the development of more important games that move billions of dollars or win wars.
Those who can write winning Chess or Go code are considered high value assets. Funding game playing research has been seen as a way of identifying those assets. Even in the absence of immediate return on investment, the identification of these assets, because they can be tucked away in think tanks to plot out the domination of the world, has become a primary consideration when research funds are allocated.
**Slow and Fast Paths to Return on Investment**
In contrast to this geopolitical thinking, seeking institutional prestige on the back of some crafty programmer or team is another factor. In this scenario, any progress in simulating intelligence that has a potential of geometric improvements in some important industry or military application was sought.
For instance, programs like Maxima (a forerunner of mathematical problem solving applications such as Mathematica) were funded with the hope of developing mathematics using symbolic computing.
This path to success conceptually rested on determinism as an overarching natural philosophy. In fact, it was the epitome of determinism. It was proposed that, if a computer could not only do arithmetic but develop mathematical theorems of super-human complexity, models of human endeavors could be reduced to equations and solved. The predictability for a wide variety of important economic, military, and political phenomena could then be used in decision making, permitting significant gain.
To the surprise of many, the success of Maxima and other mathematics programs was very limited in its positive impact on the ability to reliably predict economic and geopolitical events. The emergence of Chaos Theory explained why.
Beating a human master with a program turned out to be within the reach of twentieth century R&D. Use of software to experiment on various computer science approaches to winning a game was achievable and therefore more attractive for institutions as a way of gaining prestige, much like a winning basketball team.
**Let's Not Forget Discovery**
Sometimes appearances are in direct opposition to actuality. The various above-mentioned applications of thinking machines have not been forgotten, and the expense in time and money required to simulate aspects of mammalian abilities will not lose funding to board game automaton development.
Technology is largely occupied with solving communications, military, geopolitical, economic, and financial problems that far exceed the complexity of games like Chess and Go. Game theory includes elements of random moves made by non-players as far back as its inception. Therefore, the obsession with Chess and Go is merely a signature of the actual focus of funding and activity in the many fields of simulating intelligence.
Software that can play a mean game of Chess or Go is deployed to neither NSA global modelling computers nor Google's indexing machinery. The big dollars are spent to develop what IS deployed into such places.
You will never see details on or even an overview of that R&D described online, except in the case of people who, for some personally compelling reason, violate their company confidential agreements or commit treason.
Upvotes: 2 |
2017/07/12 | 553 | 2,293 | <issue_start>username_0: I have created 22 different Convolutional neural networks that all test for the presence of unique objects in an image (each one of the classifiers is unique).
Each sample in the test set has the output of a 22-long vector that looks something like this [0, 1, 1, 0, 0, 1, ..., 1], the binary nature of the vector representing the presence/absence of specific objects.
I have implemented this already in Keras and reach around 97% accuracy on average for the 22 models. Are there any specific ensemble methods that would allow me to combine all 22 classifiers?<issue_comment>username_1: I am new to AI, but this is something I can think of. There might be other, much better ways, or there may even be functions in scikit-learn to do it.
1) create a list of all the 22 models
2) iterate over the models one by one, use model.predict() for each model, and store the hot-encoded output in another list or numpy array.
3) Take average of the output list or numpy array.
4) Since the output is a vector of 0s and 1s, it is possible to get decimals in the output after taking the average. Round them off to either 0 or 1 using the basic rounding rules.
5) If your CNNs are very deep (like Inception v3, DenseNet, etc.), you might run into memory issues while loading all 22 models' weights into memory at once. In that case, you can load the models iteratively in small fixed batches, call model.predict() on each model in the batch, append the outputs to a list, and then clear the models from memory before loading the next batch.
This is just my idea. I might be completely wrong also.
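The averaging steps above can be sketched in plain Python (this assumes the rounded `model.predict()` outputs have already been collected into one list per model; ties at 0.5 round up):

```python
def ensemble_average(predictions):
    """Average the models' binary output vectors and round to 0/1.

    `predictions` is a list of equal-length lists, one per model,
    e.g. the rounded outputs of each Keras model.predict() call.
    """
    n_models = len(predictions)
    n_labels = len(predictions[0])
    combined = []
    for j in range(n_labels):
        mean = sum(p[j] for p in predictions) / n_models
        combined.append(1 if mean >= 0.5 else 0)  # basic rounding rule
    return combined

# Three toy "models" voting on a 4-label vector:
votes = [[1, 0, 1, 0],
         [1, 1, 0, 0],
         [1, 0, 1, 1]]
print(ensemble_average(votes))  # -> [1, 0, 1, 0]
```

Averaging and rounding like this is just per-label majority voting, so it generalises to any number of models.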
Upvotes: 2 <issue_comment>username_2: For this task, I think you shouldn't use 22 different networks. Use only one. The last layer of the network should be a fully connected layer with 22 units, each unit representing the presence of one unique object. The activation function for this layer is the sigmoid, which outputs a real number between 0 and 1. Each of these outputs represents how confident the network is about the presence of that unique object.
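In Keras, this corresponds to a final `Dense(22, activation='sigmoid')` layer trained with a `binary_crossentropy` loss. A plain-Python sketch of the thresholding idea (the logit values are made up, and only 4 of the 22 units are shown):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical raw outputs (logits) of the final layer for one image;
# each unit corresponds to one unique object.
logits = [2.3, -1.7, 0.4, -3.0]
confidences = [sigmoid(z) for z in logits]    # one confidence per object
present = [1 if c > 0.5 else 0 for c in confidences]
print(present)  # -> [1, 0, 1, 0]
```

Because each sigmoid unit is thresholded independently, any combination of objects can be predicted at once, unlike a softmax layer which forces the outputs to compete.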
Upvotes: 1 <issue_comment>username_3: I think it is not a very good idea, because I'm pretty sure you trained all 22 CNNs on the same images, and even fed them batches of images in the same way. So, as a result, you would have 22 almost identical classifiers.
Upvotes: 2 |
2017/07/14 | 332 | 1,199 | <issue_start>username_0: Is anyone able to recommend some resources (preferably books) on the topic of neural networks that goes beyond that of introductory reading?
I'm still relatively new to the subject, however, I have successfully created my own neural network, so I wouldn't consider myself a beginner, so I'm looking for something more intermediate.<issue_comment>username_1: [Neural Network Design (2nd edition)](http://hagan.okstate.edu/nnd.html) by Hagan et al. is one resource you could look at. It's a huge tome, weighing in at over 1000 pages in [pdf form](http://hagan.okstate.edu/NNDesign.pdf), but it is freely available (you can also buy a dead-tree version if you really want one).
Upvotes: 2 <issue_comment>username_2: Another good (although a bit old) and freely available online book (apart from the one suggested in [this answer](https://ai.stackexchange.com/a/3652/2444)) is [Neural Networks - A Systematic Introduction](http://page.mi.fu-berlin.de/rojas/neural/) (1996) by [<NAME>](https://en.wikipedia.org/wiki/Ra%C3%BAl_Rojas). This book contains several exercises at the end of each chapter and covers topics that you will not find in many online courses.
Upvotes: 0 |
2017/07/16 | 716 | 2,752 | <issue_start>username_0: Both AI and Computer Science are Sciences, as I understood from Wikipedia, Computer Science is everything that has any relation to computers. And AI is commonly defined as
>
> Study of machines that take the prerogative of humans (creating musical pieces, etc.)
>
>
>
But recently, when I was reading, I read this sentence : "In Computer Science, AI is [...]"
So my question really is: Is there a part of AI studies that does not refer to Computer Science?<issue_comment>username_1: AI is an amalgamation of many fields, and Computer Science plays a major role in imparting "Intelligence" to the machine. Following is a quote from the best selling AI book [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/) by
<NAME> and <NAME>.
>
> Artificial Intelligence (AI) is a big field, and this is a big book.
> We have tried to explore the full breadth of the field, which
> encompasses logic, probability, and continuous mathematics;
> perception, reasoning, learning, and action; and everything from
> microelectronic devices to robotic planetary explorers.
>
>
>
So, the answer to your question is yes, there are other fields that AI depends including mathematics (to optimize AI algorithms), electronic components (sensors, microprocessors, etc.), mechanical actuators (hydraulic, pneumatic, electric, etc.)
I highly recommend the book if you are looking for a starting point.
Upvotes: 4 [selected_answer]<issue_comment>username_2: It depends on your perspective. As a roboticist, I view A.I. as a discipline within Robotics, which itself is a discipline of Computer Science. The difficulty in definitively labelling these fields is partly due to the massive amounts of overlap. Take robotics for example, which combines the most advanced elements of mathematics, mechanical engineering, electronics, philosophy, neurology and many more. Do bear in mind that artificial intelligence need not be hosted on silicon (i.e. computer chips), it is entirely possible to have biological artificial intelligences, 'running' on living material. Look at the works of Professor <NAME> from the University of the West of England (my university professor):
*<NAME>. and <NAME>. (2008) Towards neuronal computing: simple creation of two logic functions in 3D cell cultures using multi-electrode arrays. International Journal of Unconventional Computing, 4 (2). pp. 143-154. ISSN 1548-7199 Available from: <http://eprints.uwe.ac.uk/20720>*
They were able to demonstrate the logic functions AND and OR using living neurons harvested from chickens. This type of A.I. research is so different from conventional computer science that it cannot be labelled under the same field.
Upvotes: 2 |
2017/07/18 | 213 | 887 | <issue_start>username_0: We have always known that gradient descent is a function of two or more variables. But how can we geometrically represent gradient descent if it is a function of only one variable?<issue_comment>username_1: "The concept of a direction of fastest descent only makes sense in more than one dimension."
<https://math.stackexchange.com/a/180573>
Upvotes: 0 <issue_comment>username_2: For a function of one variable, there are only two options for directions in the domain: left or right, so it becomes almost trivial, but you can still talk about gradient descent.
You would take steps to the left if the slope/derivative is positive and make steps to the right if the slope/derivative is negative--i.e. the opposite direction of the derivative (the 1d version of the "gradient" in gradient descent), which is equivalent to the higher dimensional case.
Upvotes: 2 |
2017/07/18 | 230 | 813 | <issue_start>username_0: I read a lot about this, I understand how it work, but I would like the most simple example you can provide me, because I have no clue how I would make it in code. No matter the language( I would appreciate if it's derived from c), because I'm not here to copy-paste, and understand the essence.<issue_comment>username_1: I think [this website](https://cognitivedemons.wordpress.com/2017/07/06/a-neural-network-in-10-lines-of-c-code/) is the best c-based neural net code for beginners.
Upvotes: 2 <issue_comment>username_2: Not in C, JS but basically the simplest neural network I could come up with and tried to explain with basic concepts and doodles...
[Making a simple neural network](https://becominghuman.ai/making-a-simple-neural-network-2ea1de81ec20)
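Not C either, but for a feel of the essence: a single sigmoid neuron (arguably the simplest possible network) trained by gradient descent to learn the OR function. This is a toy sketch, not production code; the logic maps one-to-one onto a C implementation:

```python
import math

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table
w1, w2, b = 0.0, 0.0, 0.0

def out(x1, x2):
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))  # sigmoid neuron

lr = 1.0
for _ in range(1000):
    for (x1, x2), target in data:
        y = out(x1, x2)
        grad = (y - target) * y * (1 - y)  # delta rule for squared error
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b -= lr * grad

print([round(out(x1, x2)) for (x1, x2), _ in data])  # -> [0, 1, 1, 1]
```

The whole "network" is three floats and one loop; everything bigger is just more of the same wiring.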
Upvotes: 1 [selected_answer] |
2017/07/20 | 763 | 2,963 | <issue_start>username_0: How can I determine if an input sentence is consistent with a certain subject?
For example, suppose I am given the following dataset.
```
| Subject | User input | Output |
|---------------|----------------------|--------|
| Dog ownership | I own a dog | Yes |
| Dog ownership | My dog is called Joe | Yes |
| Dog ownership | I don't have a dog | No |
```
In the examples above, the subject "dog ownership" is consistent with the input sentences "I own a dog" and "My dog is called Joe" (because, if your dog is called Joe, then you also own a dog).<issue_comment>username_1: If you have collection of data or information, then you would like to ask questions about this data and machine should answer you.
I think you need first, data mining to export the meaning and relations in this data, then you can build your expert system that will answer you.
Upvotes: 1 <issue_comment>username_2: If you have lots of training data, ANN's (deep learning) could quite possibly get you there. But I have a hunch you might also get some mileage out of using something like a [rule induction](https://en.wikipedia.org/wiki/Rule_induction) approach. Maybe something like [CN2](https://en.wikipedia.org/wiki/CN2_algorithm). I'd suggest at least reading up on those and see if you can see a way to apply that to your system.
Upvotes: 2 <issue_comment>username_3: Neural networks are the one thing I would *not* recommend. Your problem fits into the domain of [predicate calculus](https://en.wikipedia.org/wiki/First-order_logic), with some basic pattern recognition of the input sentences (assuming you only accept a certain type of sentences). You can do that without the need for masses of data.
First, transform your statement into a canonical representation, probably using first-order logic and simple pattern matching/string matching. For example,
```
I own a dog
X own/owns a/an Y
owns(I, dog)
```
Here you have a pattern "*X* own(s) a(n) *Y*", which you recognise in your user input. You then have a predicate *owns(X, Y)*, which you add to your database of statements. This database you can then query. For example, *Do I own a cat?* could fit a question pattern *Do/Does X own(s) a(n) Y?*, and you can search for *owns(I, cat)* in your DB; you will not find it, so the answer is "No". If the question is "Do I own a dog?" you will find *owns(I, dog)* in your database and you can reply "Yes".
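Here is a toy Python sketch of that pipeline, with a single regular expression standing in for a real grammar and a set of tuples as the statement database (the pattern shown only covers the "X owns Y" case):

```python
import re

facts = set()

def tell(sentence):
    """Turn 'X own(s) a(n) Y' statements into owns(X, Y) facts."""
    m = re.match(r"(\w+) owns? an? (\w+)", sentence)
    if m:
        facts.add(("owns", m.group(1), m.group(2)))

def ask(subject, thing):
    """Answer 'Does X own a Y?' by looking the fact up in the database."""
    return "Yes" if ("owns", subject, thing) in facts else "No"

tell("I own a dog")
print(ask("I", "dog"))  # -> Yes
print(ask("I", "cat"))  # -> No
```

A real system would add more patterns (e.g. "My Y is called Z" implying ownership) and simple inference rules, but the database-and-query structure stays the same.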
This is all rather 'old' technology, but I think you will find that you will get decent results much quicker than with machine learning or statistical methods, especially if you have not much data to begin with.
A further branch to look into would be [expert systems](https://en.wikipedia.org/wiki/Expert_system). If you're thinking in terms of programming languages, then [Prolog](https://en.wikipedia.org/wiki/Prolog) would be well suited for this, but any language should do, really.
Upvotes: 0 |
2017/07/21 | 1,253 | 5,267 | <issue_start>username_0: I'm trying to come up with the right algorithm for a system in which the user enter symptoms and we starts triggering questions related to that symptoms and his answers will result a disease which is related to the answer which is given by the user
Let's assume that the user entered the following input:
Symptom - Deafness
Q1. How long have you had a problem with deafness?
*A)From few days B)From few weeks to months C)More than month D)Since Birth*
Q2. What was the onset of the deafness?
*A) Sudden B) Gradual*
Now we have a knowledge base, e.g. if a user selects option 1 from question 1 and option 2 from question 2, then we will give him some disease. But I need an algorithm that computes a percentage of success in the backend, so that I can return the disease results. For example, if a user selects option 2 from question 1 and option 1 from question 2, and our knowledge base contains a set with option 1 from question 1 and option 2 from question 2 for "SOME" disease, then, when we compare against our knowledge base and find that even 50% of the choices result in this disease, we will return that disease's name.
Now I am confused about which algorithm should be used for this AI approach.<issue_comment>username_1: There are no defined rules for choosing a machine learning algorithm to learn some type of pattern. However, there are some guidelines to help you select an algorithm which will yield a higher probability of success.
Some important considerations are:
* Number of features: This is the number of questions that each patient had to answer.
* Number of instances: This is the number of patients that took your survey.
* Number of output classes: This is how specific you want your diagnosis of the disease. Is this a yes/no, or a 5-stage progression.
The larger your feature space and the more output classes you have, the higher the complexity of your model. This is problematic because a more complex model will require more instances of data to learn the underlying patterns. You need to have a good balance between the number of examples you have in your dataset and the complexity of your model.
If you have limited data then you will want to stay very far away from deep learning. In such cases, I prefer to use shallow methods such as **SVM**, **Naive Bayes** or **Random Forests**. These techniques have been shown to be able to capture non-linear relationships. SVM is particularly powerful, you can use the kernel trick! This will transform your feature space into a space that makes the differentiation between the classes easier to distinguish. Do not underestimate the power of this algorithm.
If you have a very large dataset (i.e. 100,000+ instances) then you will want to use deep learning. Recently, these techniques have been shown to outperform shallow machine learning algorithms in almost all categories where data is plentiful. You will want to start with a shallow **neural network** and then increase the complexity of your graph as you see fit, by adding additional nodes per layer or by adding additional layers. Each node you add will increase the number of variables you need to tune, thus increasing the complexity of your model. If your data is expected to have some type of time dependency, which is often the case in medical data, then you can use a **long short-term memory** neural network (LSTM). These are capable of capturing latent relationships. You can also try a **stacked autoencoder**.
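As a minimal illustration of why the kernel trick is powerful (this is not an SVM itself, just the feature-lifting idea it relies on): XOR labels cannot be separated by any line in two dimensions, but adding a product feature makes a single linear threshold enough. The weights below were chosen by hand for the example:

```python
# XOR truth table: not linearly separable in (x1, x2).
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 0]

def predict(x1, x2):
    # Linear rule in the lifted space (x1, x2, x1 * x2),
    # with hand-picked weights w = (1, 1, -2) and bias -0.5.
    return 1 if (x1 + x2 - 2 * x1 * x2 - 0.5) > 0 else 0

print([predict(x1, x2) for x1, x2 in points])  # -> [0, 1, 1, 0]
```

A kernel SVM does this lifting implicitly, which is why it can capture non-linear symptom/disease relationships without you designing the extra features yourself.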
Upvotes: 3 [selected_answer]<issue_comment>username_2: Medical diagnosis often employs [abductive inference](https://en.wikipedia.org/wiki/Abductive_reasoning) (also known as "inference to the best explanation"), and automated approaches to abductive reasoning have been applied to medical diagnosis. More concretely, a mechanism known as [Parsimonious Covering Theory](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2244953/) has been extensively researched as an approach to automated abductive reasoning in medical scenarios (among others). You might consider giving that a look.
FWIW, one of the main researchers in the field is [Professor <NAME>](https://www.cs.umd.edu/~reggia/) from UMD. He and [<NAME>](https://www.csee.umbc.edu/~ypeng) wrote a very accessible book on the subject - [Abductive Inference Models for Diagnostic Problem Solving](http://rads.stackoverflow.com/amzn/click/B000QCQWL8).
And to add to what [username_1](https://ai.stackexchange.com/users/5925/jahknows) says - I'd say that where something like deep-learning would be most likely to come into play, would be building the initial knowledgebase that is used in the diagnostic system. That is, if you have lots of data relating symptoms to diseases, you could use deep learning to help define the weighting between specific diseases and specific symptoms. And note that while abductive inference is considered a branch of logic, and PCT dates back to the "GOFAI" era, it is not a strictly symbolic approach. The Reggia & Peng book explains how to incorporate Bayesian reasoning into the abductive system. Just FYI.
Disclosure: I've recently been researching this area and am working on a modern implementation of PCT using the Semantic Web stack.
Upvotes: 0 |
2017/07/21 | 1,035 | 4,428 | <issue_start>username_0: I'm going to give a talk, and I'm preparing the material. The purpose of the conversation is to convince companies in my region that it is possible to apply artificial intelligence in solving everyday business problems.
I would like some examples to be able to present, and so I came here to ask
*Have you used artificial intelligence to solve a problem at work? What kind of problem?*
2017/07/27 | 959 | 3,911 | <issue_start>username_0: If I create a program which takes an input, gives an output and then requires a response to let it know whether the answer it gave was any good does it count as AI?
If not, what is the process of AI? Does it not always need specific parameters? For example, I ask it "Who is the president of the USA?", and I have programmed it to look for news articles in SEOs and remove the "Who" part, is that AI?<issue_comment>username_1: There is no "process of AI" as such. There are **many, many** different approaches to AI, different ones of which are used in specific applications.
As to whether a purely trial and error approach could be considered AI... I'd offer up a qualified "maybe". If you do nothing but an exhaustive scan of the solution space, for every trial, then I'd say "No, it's not really any kind of AI". OTOH, if you're using a knowledge-base of some sort and applying some kind of reasoning (even if it's a heuristic) , and if you have a system that somehow learns from the feedback from the user and gets "smarter" over time, then you're likely working on something that could be considered AI.
All of that said, the exact definition of what is and isn't AI is somewhat fuzzy. One popular definition is something like "any technology that allows a computer to do something well that currently only humans can do well". So if you're doing something that fits that descriptions, it's possibly an aspect of AI. And consider again that most people don't really consider "brute force" solutions to be AI.
Upvotes: 2 [selected_answer]<issue_comment>username_2: * If an algorithm is making decisions, it can be deemed AI.
The decisions don't have to be better than a human's, or even good. *(AI can be limited or "dumb"--on my project we specifically make dumb AIs so children and new players can win;)*
From this perspective, the process of the algorithm choosing an output based on an input qualifies it as basic AI. Asking for a response on the quality of the output is merely the validation procedure.
* If the algorithm learns from the validation process, it constitutes a type of machine learning.
It may not be the "sexy" Machine Learning that has sparked renewed interest in the field of late, but it constitutes an [automata](https://en.wikipedia.org/wiki/Automata_theory) that is learning nonetheless.
Upvotes: 0 <issue_comment>username_3: I would say that, while it *does* count, it only counts in certain circumstances, as trying to use trial and error with already predetermined data as its objective to output isn't AI, as it already has data that is fine as-is. For example:
Say your AI is trying to point at a precise location but is only given the current accuracy of its positioning. It could check every location and find the best of all of them, or, it could do this:
1. Correct the up/down(vertical) positioning:
1. Start moving up.
2. If the accuracy gets better:
1. Keep moving up.
2. Else move down.
3. Continue until the accuracy starts to get weaker.
4. Move back just a tiny bit to increase again.
2. Correct the left/right(horizontal) positioning:
1. Start moving right.
2. If the accuracy gets better:
1. Keep moving right.
2. Else move left.
3. Continue until the accuracy starts to get weaker
4. Move back just a tiny bit to increase again.
3. You're done!
This is AI, as it learns what to and what not to do, and figures out a location based off of the accuracy of its positioning alone. So while the "trial and error" method *is* AI, it only counts when it has **no predetermined calculation** to figure out the result **without said trial and error**. Trying to find the digits of Pi, for example, while technically trial and error, uses math functions and requires multiple inputs to calculate its output; in the end, it only partly uses trial and error, and therefore it is not AI.
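The procedure above is essentially coordinate-wise hill climbing. A toy Python sketch (the hidden target and the accuracy function are made-up stand-ins for a real positioning signal):

```python
def hill_climb(accuracy, start, step=0.5, iters=200):
    """Trial-and-error search: try a nudge in each direction, keep the
    first move that improves accuracy, shrink the step when none does."""
    x, y = start
    for _ in range(iters):
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            if accuracy(x + dx, y + dy) > accuracy(x, y):
                x, y = x + dx, y + dy
                break
        else:
            step /= 2  # no direction helped: refine the search
    return x, y

# Hidden target at (3, -2); "accuracy" improves as we get closer.
acc = lambda x, y: -((x - 3.0) ** 2 + (y + 2.0) ** 2)
best_x, best_y = hill_climb(acc, start=(0.0, 0.0))
print(round(best_x, 3), round(best_y, 3))  # -> 3.0 -2.0
```

The searcher never sees the target, only the accuracy signal, which is what makes this trial and error rather than a predetermined calculation.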
Upvotes: 0 |
2017/07/27 | 853 | 3,276 | <issue_start>username_0: After witnessing the rise of deep learning as automatic feature/pattern recognition over classic machine learning techniques, I had an insight that the more you automate at each level, the better the results, and I, therefore, turned my focus to [neuroevolution](https://en.wikipedia.org/wiki/Neuroevolution).
I have been reading neuroevolution publications with the same desire to automate at every level.
Do genetic algorithms also evolve themselves? Do they get better at searching through the solution space for each generation over time?<issue_comment>username_1: A genetic algorithm is a class of evolutionary algorithms.
They do get better at searching through the solution possibilities with each trial (generation) over time: evolution usually starts from a population of randomly generated individuals and is an iterative process. In each generation, the fitness of every individual in the population is evaluated, the fitter individuals are stochastically selected from the current population, and each individual's genome is modified to form a new generation, which thus gets better and better with each generation over time.
Evolution is defined as the gradual development of something, especially from a simple to a more complex form.
Upvotes: 1 <issue_comment>username_2: In principle, yes, you can also evolve the genetic algorithm (or, in general, evolutionary algorithm), i.e. you can evolve its operations (such as the mutation and cross-over) and hyper-parameters (such as the size of the population or mutation rate). For example, you could use [genetic programming](http://www.shahed.ac.ir/stabaii/Files/CompIntelligenceBook.pdf) to evolve the cross-over operation of a genetic algorithm. However, these genetic operators and hyper-parameters are usually designed and determined by a human and do not change during the evolution process. Nevertheless, there is a framework for evolutionary computation in which reproduction and mutation can also evolve, known as [auto-constructive evolution](http://faculty.hampshire.edu/lspector/pubs/ace.pdf), and there are other examples in the literature of meta-optimization methods applied to evolutionary algorithms, such as
* [Evolutionary principles in self-referential learning, or on learning how to learn: The meta-meta-... hook](http://people.idsia.ch/%7Ejuergen/diploma.html) (1987, Schmidhuber's thesis)
* [Parallel optimization of evolutionary algorithms](https://link.springer.com/chapter/10.1007%2F3-540-58484-6_285) (1994, by <NAME>)
* [Adaptive and Self-adaptive Evolutionary Computations](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.88.7230&rep=rep1&type=pdf) (1995, by <NAME>)
* [Meta-Genetic Programming: Co-evolving the Operators of Variation](https://journals.tubitak.gov.tr/elektrik/issues/elk-01-9-1/elk-9-1-2-0008-5.pdf) (1998, by <NAME>)
More generally, the optimization of an optimization algorithm is known as [meta-optimization](https://en.wikipedia.org/wiki/Meta-optimization) and/or [hyper-parameter optimization/tuning](https://en.wikipedia.org/wiki/Hyperparameter_optimization), so the idea of meta-optimization is not just restricted to evolutionary algorithms, but can be applied also to e.g. deep learning.
Upvotes: 2 |
2017/07/30 | 569 | 2,164 | <issue_start>username_0: I'm learning Neural Networks, and everything works as planned but, like humans do, adjusting themselves to learn more efficiently, I'm trying to understand conceptually how one might implement an auto adjusting learning rate for a Neural Network.
I have tried to make it based on the error: something like, the bigger the error is, the bigger the learning rate gets as well.
\*If you want to give me an example, give it in a C-based language or in math, because I don't have experience with Python or Pascal.<issue_comment>username_1: Adjusting the learning rate is a common scenario in machine learning. There is rich literature about it, countless papers. The most common implementations are:
* AdaGrad
* RMSProp
* Adam
There are many more variants. You'll need to do some research. Please take a look at:
>
> <https://en.wikipedia.org/wiki/Stochastic_gradient_descent>
>
>
>
[](https://i.stack.imgur.com/NhrR7.png)
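As a concrete example of the idea, AdaGrad (in its standard form) keeps a running sum of squared gradients and divides the global learning rate $\eta$ elementwise by its square root, so frequently-updated parameters automatically get smaller steps:

```latex
g_t = \nabla_\theta J(\theta_t), \qquad
G_t = G_{t-1} + g_t \odot g_t, \qquad
\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{G_t} + \epsilon} \odot g_t
```

Here $\odot$ is elementwise multiplication and $\epsilon$ is a small constant to avoid division by zero; RMSProp and Adam refine this by replacing the raw sum $G_t$ with exponential moving averages.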
Upvotes: 3 [selected_answer]<issue_comment>username_2: Apart from the already mentioned methods there is one which I came across recently is the Cyclic Learning Rates.
Mainstream methods usually reduce the learning rates monotonically. In most of the cases they work well. Yet they do not do well with handling saddle points and local minima. CLR method claims to solve the problem by effectively handling the case of saddle points.
Ref: <https://arxiv.org/abs/1506.01186>
Upvotes: 2 <issue_comment>username_3: While adjusting the learning rate during the training is certainly interesting, so is finding of a good initial learning rate, too.
[Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/abs/1506.01186), [fast.ai (pytorch) implementation](https://docs.fast.ai/basic_train.html#lr_find).
Here's a good practitioner's [overview of learning rate schedules](https://github.com/fastai/fastai/blob/master/courses/dl2/training_phase.ipynb) (python, but rather readable and with nice plots).
Upvotes: 0 |
2017/07/31 | 712 | 3,304 | <issue_start>username_0: I have been following the [ML course by <NAME>](http://www.cs.cmu.edu/~ninamf/courses/601sp15/lectures.shtml).
The inherent assumption while using a decision tree learning algorithm is: **the algorithm preferably chooses the smallest decision tree.**
**Why is this so**, when we could have bigger extensions of the tree which could, in principle, perform better than the shorter tree?<issue_comment>username_1: The bigger your tree is, the more your model overfits. In machine learning, we always prefer a simpler model unless there is good reason to go for complication.
Upvotes: 2 <issue_comment>username_2: Adding to SmallChess's answer,
Larger trees (with many nodes) are too adapted to the training set, as a small change in the input training data might cause the tree to change very much, and hence change the estimated value too much. This is mainly due to the hierarchical structure of trees (a change in a higher node may cause all lower nodes to change).
As an extreme case, you can think of a large tree in which each training example has its own node. Such a tree is absolutely useless for test prediction.
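A toy sketch of that extreme case, with a Python lookup table standing in for the fully grown tree and a one-node "stump" for comparison:

```python
# Training set: 1-feature points with binary labels.
train = {(0.1,): 0, (0.45,): 0, (0.55,): 1, (0.9,): 1}

def memorizer(x):
    """A 'tree' with one leaf per training example: a lookup table."""
    return train.get(x)  # no answer at all for unseen points

def stump(x):
    """A one-node tree: a single threshold at 0.5."""
    return 0 if x[0] < 0.5 else 1

print([memorizer(x) for x in train])  # -> [0, 0, 1, 1]
print(memorizer((0.6,)))              # -> None: cannot generalise
print(stump((0.6,)))                  # -> 1
```

The memorizer is perfect on the training set and useless off it; the stump gives up a little training flexibility in exchange for an answer everywhere.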
Upvotes: 2 <issue_comment>username_3: Consider the LCD (least common denominator) principle in algebra. A larger denominator would work for most processes for which the LCD would be calculated, however the least is the one used by convention. Why? The interest in prime numbers in general is based on the value of reductive methodologies.
In philosophy, Occam's Razor is a principle that, given two conceptual constructs that correlate equally well with observations, the simpler is most likely the best. A more formal and generalized prescription is this:
>
> Given two mathematical models of a physical system with equal correlation to the set of all observations about the system being modeled, the simpler model is the more likely to be predictive of conditions outside those of the observation set.
>
>
>
This principle of simplicity as the functional ideal holds for decision trees. The decision tree most likely to be functional and clear, given the data set driving its construction, will be the simplest. Without a clear reason for additional complexity, there may be no benefit derived from the complexity added, and yet there may be penalties in terms of clarity, and therefore maintainability and verifiability. There may be computational penalties too, in some applications.
The question mentions, "Bigger extensions of the tree which could in principle perform better," however the performance of the tree should be a matter of optimization in preparing for execution, and in the execution of the decision tree in real time. In other words, the minimalist decision tree is the most clear, workable, and verifiable construct; however, a clever software engineer could translate that minimalist construct into a run-time-optimized equivalent.
Just as with compilation of source code, performance is multidimensional in meaning. There are time efficiency, memory efficiency, network bandwidth efficiency, or other performance metrics that could be used when optimizing the tree for run time. Nonetheless, the simpler tree is the best starting point for any weighted combination of these interests.
Upvotes: 1 [selected_answer] |
2017/08/02 | 443 | 1,992 | <issue_start>username_0: I am developing an LSTM for sequence tagging. During the development, I do various changes in the system, for example, add new features, change the number of nodes in the hidden layers, etc. After each change, I check the accuracy using cross-validation on a development dataset.
Currently, in each check, I use 100 iterations to train the system, which takes a lot of time. So I thought that, maybe, during development, I can use only e.g. 20 iterations. Then, each check will be faster. After I find the best configuration, I can switch back to 100 iterations to get better accuracy.
My question is: is this consideration correct? More specifically, if model A is better than model B with 20 training iterations, is it likely that A will be better than B also with 100 training iterations?
Alternatively, is there a better way to speed up the development process?<issue_comment>username_1: This might work for your case but isn't necessarily true and depends on how much data the network goes through in an iteration. You should be able to test this by making a small change and training until 100 iterations and seeing if the performance significantly changes and if it can be predicted from the 20th iteration.
Another way which may work for you is preloading lower layers of your network (if you have more than one layer).
For instance, if you have 5 layers and are making changes to the last 2, you could preload the bottom 3 layers with previously trained weights. This should decrease the amount of training that needs to take place as your network can already discern some primary features of your problem.
Upvotes: 2 <issue_comment>username_2: Your scenario is common.
The most straightforward approach is to subsample your data randomly. Unless your data or your model has a strong bias, your performance on the smaller data set should be comparable. The accuracy might be lower, but the purpose is to do a quick sanity check.
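Such a subsampling step can be sketched in a few lines (the variable names and fraction here are illustrative):

```python
import random

def subsample(examples, fraction=0.1, seed=42):
    """Draw a random subset of the training data for fast sanity checks."""
    rng = random.Random(seed)  # fixed seed => the same subset on every run
    k = max(1, int(len(examples) * fraction))
    return rng.sample(examples, k)

data = list(range(1000))       # stand-in for (sequence, tags) pairs
small = subsample(data, fraction=0.1)
print(len(small))              # 100
```

Because the seed is fixed, every configuration you compare is trained on the same small subset, which keeps the comparison fair.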
Upvotes: 4 [selected_answer] |
2017/08/02 | 821 | 2,654 | <issue_start>username_0: I am looking for AI podcasts that are purely academic-oriented that I can use for learning purposes. Thanks for any resource pointers.
The AI podcasts I am aware of are (not sure how many of these can be considered academic):
* [The AI Podcast](https://blogs.nvidia.com/ai-podcast/)
* [Linear Digressions](https://itunes.apple.com/us/podcast/linear-digressions/id941219323)
* [O'Reilly Bots Podcast](https://itunes.apple.com/us/podcast/oreilly-bots-podcast-oreilly/id1145426486)
* [A16z Podcast](https://soundcloud.com/a16z/artificial-intelligence)<issue_comment>username_1: I'll add a few, though I'm also not sure what exactly would constitute an "academic" podcast. I'm not going to link everything, they should be easy enough to find.
* [Partially derivative](http://partiallyderivative.com/)
* [Data sceptic](https://dataskeptic.com/podcast)
* [This Week in Machine Learning and AI](https://twimlai.com/)
* [Concerning AI](https://concerning.ai/)
* [Exponential View](http://www.exponentialview.co/podcasts/)
* [Talking Machines](http://www.thetalkingmachines.com/)
Upvotes: 2 <issue_comment>username_2: A few other options (that I listen to) include:
* [Learning Machines 101](http://www.learningmachines101.com/)
* [Machine Learning Guide](https://player.fm/series/machine-learning-guide-1457335)
* [Machine Learning - Software Engineering Daily](https://softwareengineeringdaily.com/tag/machine-learning/)
Upvotes: 4 [selected_answer]<issue_comment>username_3: >
> [<NAME> interviews](https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4)
> ==================================================================================================
>
>
>
These are some of the best interviews on the field that I have found on the internet.
They are part of the MIT course 6.S099: Artificial General Intelligence.
[](https://i.stack.imgur.com/2h8Ze.png)
>
> <NAME>'s [Heroes of deep learning](https://www.youtube.com/watch?v=-eyhCTvrEtE&list=PLfsVAYSMwsksjfpy8P2t_I52mugGeA5gR) series
> =================================================================================================================================
>
>
>
The focus is on deep learning.
[](https://i.stack.imgur.com/z4eXp.png)
<NAME> has some fascinating youtube videos:
===================================================
[Yannic's channel](https://www.youtube.com/channel/UCZHmQk67mSJgfCCTn7xBfew)
[Check out this one!](https://youtu.be/utuz7wBGjKM)
Upvotes: 1 |
2017/08/03 | 1,312 | 5,057 | <issue_start>username_0: From a lecture in machine learning, I know that a linear activation function can only produce a linear function, but I don't know if it can produce a connected linear function, like the one in the image below. This function consists of multiple concatenated lines.
[](https://i.stack.imgur.com/y51Kh.png)<issue_comment>username_1: A linear activation would not be able to separate the data like you have shown, no matter how many layers you throw into the network.
If we had multiple linearly activated layers, each feeding into each other, the neurons in the previous layer would calculate some weighted sum of the input and send it to the next layer as input, where the next layer's neurons would also calculate a weighted sum on that input, and, in turn, it fires based on another linear activation function.
No matter how many layers and neurons there are, if all are linear in nature, the final activation function of the last layer is also a linear function of the input of the first layer.
That means that any or all of these layers can be replaced by one layer. This completely loses the advantage of stacking layers because any multilayer network is equivalent to a single layer with linear activation, given that a combination of linear functions in a linear manner is still another linear function.
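This collapse can be checked numerically in a few lines: two stacked linear (identity-activation) layers are exactly one linear layer with combined weights.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)

# Two stacked linear layers ...
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
two_layers = W2 @ (W1 @ x + b1) + b2

# ... are exactly one linear layer with combined weights.
W, b = W2 @ W1, W2 @ b1 + b2
one_layer = W @ x + b

print(np.allclose(two_layers, one_layer))  # True
```
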
A good way to visualize this is by messing around with [Tensorflow playground](http://playground.tensorflow.org/#activation=linear&regularization=L2&batchSize=30&dataset=circle&regDataset=reg-plane&learningRate=0.03&regularizationRate=0.003&noise=0&networkShape=8,8,8,8,8,8&seed=0.18289&showTestData=false&discretize=true&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=true&cosY=false&sinY=true&collectStats=false&problem=classification&initZero=false&hideText=false), which has a spiral data set similar to your data.
In contrast to the failure of linear activation functions for this data in the previous link, check out [this much smaller network using a Tanh activation](http://playground.tensorflow.org/#activation=tanh®ularization=L1&batchSize=10&dataset=spiral®Dataset=reg-plane&learningRate=0.01®ularizationRate=0&noise=10&networkShape=8,7,2&seed=0.04414&showTestData=false&discretize=true&percTrainData=50&x=true&y=true&xTimesY=true&xSquared=false&ySquared=false&cosX=false&sinX=true&cosY=false&sinY=true&collectStats=false&problem=classification&initZero=false&hideText=false) function which can separate the data within 100 or so iterations.
For further reference on activation functions, check out [this blog post](https://medium.com/the-theory-of-everything/understanding-activation-functions-in-neural-networks-9491262884e0).
Upvotes: 3 <issue_comment>username_2: Since I can't comment, there are a few caveats to previous answers.
For instance, if you knew beforehand what the expected boundary function for that variable was, then you could transform it first.
For instance, if you knew one feature was expected to be sinusoidal, you could transform your data $\theta$ using $f(\theta) = a\sin(\theta)$ first, and then expect the variable to be linear. At that point it would fit a linear function and solve for the amplitude instead. Of course, in large problems, this may be impractical, but it is still possible.
Likewise with quadratic data. I could add 1 new feature for **every** continuous variable that looks like $x^2$ and capture any quadratic dependence.
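As a sketch of this transform trick (the amplitude 2.5 is invented for illustration): transform the input with $\sin$ first, and recovering the amplitude becomes an ordinary linear least-squares fit.

```python
import numpy as np

theta = np.linspace(0, 6.28, 100)
y = 2.5 * np.sin(theta)        # data known to be sinusoidal, amplitude 2.5

feature = np.sin(theta)        # transform the raw input first ...
a, = np.linalg.lstsq(feature[:, None], y, rcond=None)[0]
print(round(a, 3))             # ... then the problem is linear: 2.5
```
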
Another solution in your shown example would be if you **knew** the data was piece-wise, you could train 3 small, easily trained, NNs for instance with different boundaries. During each new training, you set new boundaries for your functions and see which produces the best results. In your case, this would be something close to $[-\infty,-0.5], [-0.5,0.5]$, and $[0.5,\infty]$.
You could argue that this is a single piece-wise neural net just like any traditional piece-wise function.
While this last approach seems very simplistic/impractical, this is actually done in practice. For instance in autonomous driving, a separate NN could be trained for object detection (human, lights, signs etc.), another for steering wheel positioning for different speeds,conditions, and turning radius, another for how to accelerate, etc. and then the system runs them all and combines their outputs into a full driving car. Each output is then its boundary of the puzzle on what region of "driving space" it handles.
So while I would say the simple answer is no, there are still ways around it if you have some prior knowledge.
Regarding the Universal Approximation Theorem, see this [wiki](https://en.wikipedia.org/wiki/Universal_approximation_theorem), which says
>
> One of the first versions of the theorem was proved by <NAME> in 1989 for sigmoid activation functions. It was later shown that the class of deep neural networks is a universal approximator if and only if the activation function is not polynomial."
>
>
>
Upvotes: 2 |
2017/08/03 | 454 | 1,729 | <issue_start>username_0: Can HMMs be used to model **any** time series data? Or does the data have to be that of a Markov process?
In [HTK documentation](http://www.ee.columbia.edu/~dpwe/LabROSA/doc/HTKBook21/node3.html), I see that the first few lines state that it can model any time series
>
> HTK is a toolkit for building Hidden Markov Models (HMMs). HMMs can be used to model any time series and the core of HTK is similarly general-purpose.
>
>
><issue_comment>username_1: HMMs can be used to model sequential data composed of discrete tokens, and the data should generally follow the Markov property: the assumption that the probability of a class/label depends only on the preceding class/label (rather than on some longer sequence).
Upvotes: 0 <issue_comment>username_2: Yes, you can fit any time series (with or without external variables) using HMM, but there are some constraints:
1. It should follow the Markov property.
2. There is some variance that other models are not able to capture (in other words, the system is partially observable).
Adding to point 1: for an HMM, the Markov property should hold, but, because of the way the [Baum-Welch algorithm](https://en.wikipedia.org/wiki/Baum%E2%80%93Welch_algorithm) works, it indirectly considers the values of more than the previous state for an order-1 HMM. The state at $t-1$ depends on $t-2$, which in turn depends on $t-3$. The calculation of the parameters (transition, emission, and starting probabilities) happens over multiple iterations, and it finds parameters in such a way that the Markov property holds.
I think that, when they say 'any', they mean even when you don't have all variables needed to forecast future values.
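Where the Markov property enters in practice can be seen in the forward algorithm, the workhorse inside Baum-Welch: each step conditions only on the previous step's probabilities. A minimal sketch with made-up parameters, cross-checked against brute-force enumeration over all state paths:

```python
from itertools import product

pi = [0.6, 0.4]                  # initial state distribution (illustrative)
A = [[0.7, 0.3], [0.4, 0.6]]     # transition probabilities
B = [[0.9, 0.1], [0.2, 0.8]]     # emission probabilities
obs = [0, 1, 0]

# Forward algorithm: alpha[j] = P(obs so far, current state = j).
# Note each step only uses the previous alpha -- the Markov assumption.
alpha = [pi[j] * B[j][obs[0]] for j in range(2)]
for o in obs[1:]:
    alpha = [sum(alpha[i] * A[i][j] for i in range(2)) * B[j][o]
             for j in range(2)]
likelihood = sum(alpha)

# Brute force over all 2^3 state paths gives the same likelihood.
brute = sum(
    pi[p[0]] * B[p[0]][obs[0]]
    * A[p[0]][p[1]] * B[p[1]][obs[1]]
    * A[p[1]][p[2]] * B[p[2]][obs[2]]
    for p in product(range(2), repeat=3)
)
print(abs(likelihood - brute) < 1e-12)  # True
```
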
Upvotes: 2 [selected_answer] |
2017/08/04 | 730 | 2,716 | <issue_start>username_0: I recently started learning about reinforcement learning. Currently, I am trying to implement the [SARSA algorithm](http://www.cse.unsw.edu.au/%7Ecs9417ml/RL1/algorithms.html). However, I do not know how to deal with $Q(s', a')$, when $s'$ is the terminal state. First, there is no action to choose from in this state. Second, this $Q$-factor will never be updated either because the episode ends when $s'$ is reached. Should I initialize $Q(s', a')$ to something other than a random number? Or should I just ignore the $Q$-factors and simply feed the reward $r$ into the update?<issue_comment>username_1: From the description of the algorithm you linked to, it says to 'repeat until s is terminal'. So one would end the episode at that point and your intuition holds.
Practically, if one was implementing a reward function where a specific reward is associated with the end of the episode such as "r(robot ran into a wall) = -100" then one can imagine that there is a terminal state just after this 'wall hit' state so the agent could see this reward. The episode would then be at a terminal state so would end.
Upvotes: 2 <issue_comment>username_2: The value $Q(s', ~\cdot~)$ should always be implemented to simply be equal to $0$ for any terminal state $s'$ (the dot instead of an action as second argument there indicates that what I just wrote should hold for **any action**, as long as $s'$ is terminal).
It is easier to understand why this should be the case by dissecting what the different terms in the update rule mean:
$$Q(s, a) \gets \color{red}{Q(s, a)} + \alpha \left[ \color{blue}{r + \gamma Q(s', a')} - \color{red}{Q(s, a)} \right]$$
In this update, the red term $\color{red}{Q(s, a)}$ (which appears twice) is our old estimate of the value $Q(s, a)$ of being in state $s$ and executing action $a$. The blue term $\color{blue}{r + \gamma Q(s', a')}$ is a different version of estimating the same quantity $Q(s, a)$. This second version is assumed to be slightly more accurate, because it is not "just" a prediction, but it's a combination of:
* something that we really observed: $r$, plus
* a prediction: $\gamma Q(s', a')$
Here, the $r$ component is the immediate reward that we observed after executing $a$ in $s$, and then $Q(s', a')$ is everything we expect to still be collecting afterwards (i.e., after executing $a$ in $s$ and transitioning to $s'$).
Now, suppose that $s'$ is a terminal state, what rewards do we still expect to be collecting in the future within that same episode? Since $s'$ is terminal, and the episode has ended, there can only be one correct answer; we expect to collect exactly $0$ rewards in the future.
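In code, this usually appears as a `done` flag that drops the bootstrap term entirely, so no $Q$-value for the terminal state is ever consulted (a minimal sketch, not tied to any particular environment):

```python
def sarsa_update(Q, s, a, r, s2, a2, done, alpha=0.1, gamma=0.9):
    """One SARSA update; Q maps (state, action) pairs to values."""
    old = Q.get((s, a), 0.0)
    # If s2 is terminal, the expected future return is exactly 0,
    # so the bootstrap term gamma * Q(s2, a2) is dropped.
    target = r if done else r + gamma * Q.get((s2, a2), 0.0)
    Q[(s, a)] = old + alpha * (target - old)
    return Q[(s, a)]

Q = {("s", "a"): 0.5, ("s2", "a2"): 1.0}
print(sarsa_update(Q, "s", "a", r=1.0, s2="s2", a2="a2", done=True))  # 0.55
```

With `done=True` the stored value of `Q[("s2", "a2")]` is ignored, which is exactly the "terminal states have value 0" convention described above.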
Upvotes: 4 [selected_answer] |
2017/08/07 | 1,052 | 4,102 | <issue_start>username_0: I'm attempting to program my own system to run a neural network. To reduce the number of nodes needed, it was suggested to make it treat rotations of the input equally.
My network aims to learn and predict Conway's Game of Life by looking at every square and its surrounding squares in a grid, and giving the output for that square. Its input is a string of 9 bits:
[](https://i.stack.imgur.com/p9UwK.png)
The above is represented as 010 001 111.
There are three other rotations of this shape however, and all of them produce the same output:
[](https://i.stack.imgur.com/eyDkl.png)
My network topology is 9 input nodes and 1 output node for the next state of the centre square in the input. How can I construct the hidden layer(s) so that they take each of these rotations as the same, cutting the number of possible inputs down to a quarter of the original?
**Edit:**
There is also a flip of each rotation which produces an identical result. Incorporating these will cut my inputs by 1/8th. With the glider, my aim is for all of these inputs to be treated exactly the same. Will this have to be done with pre-processing, or can I incorporate it into the network?<issue_comment>username_1: If I understand well, your single output node will be the next state of the square in the middle. You don't need to worry about the number of nodes in the hidden layers, as long as you have sufficient resources to train the model. This problem is very easy for a neural network to learn, so there are no size concerns.
You need to do supervised training, which means you need to feed in input data and the matching expected output. You need to be sure that, in your training data, all 4 rotations are assigned to the same output. This way your network should learn to treat all of these the same way.
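One way to build such a training set (a sketch, assuming the standard Game of Life rules) is to enumerate all 512 possible neighbourhoods and label each with the next state of the centre cell; every rotation of a pattern then automatically receives the same label:

```python
def next_center(bits):
    """bits: 9-character row-major string; returns next state of the centre."""
    neighbours = sum(int(b) for i, b in enumerate(bits) if i != 4)
    alive = bits[4] == "1"
    # Standard Conway rules: a live cell survives with 2-3 neighbours,
    # a dead cell is born with exactly 3.
    return 1 if neighbours == 3 or (alive and neighbours == 2) else 0

# All 512 neighbourhoods, each labelled with the next centre state.
dataset = [(format(i, "09b"), next_center(format(i, "09b"))) for i in range(512)]
print(next_center("010001111"))  # 0 -- five neighbours, the dead centre stays dead
```

Since rotations and flips preserve both the centre cell and the neighbour count, all eight symmetries of a pattern necessarily get the same label here.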
You made me curious, so I tried it myself. My solution could learn to 100% accuracy in about 20 epochs, running within a few seconds on my old laptop. I only slightly changed the output to be categorical, either [0,1] or [1,0], but that gives the same result that you are looking for. Just for reference, here is the code, written in Python:
```
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers
from keras.utils.np_utils import to_categorical
import helper  # author's own CSV-loading helper

# Load the 9-bit neighbourhood inputs and the expected next state
# of the centre cell.
x_, y_ = helper.fnn_csv_toXY("conway.csv", "output", False)
y_binary = to_categorical(y_)  # one-hot: [0, 1] or [1, 0]

model = Sequential()
model.add(Dense(100, activation='relu', kernel_initializer='glorot_uniform',
                input_shape=(9,)))
model.add(Dense(20, activation='relu', kernel_initializer='glorot_uniform'))
model.add(Dense(2, activation='softmax'))

adam = optimizers.Adam()
model.compile(optimizer=adam,
              loss='categorical_crossentropy',
              metrics=['acc'])
model.fit(x_, y_binary, epochs=100)
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You have identified an optimization in your problem space and desire to bake this into your neural net. I suggest preprocessing: Compose your optimization with a neural net that does a subset of what you want.
In other words, normalize your input by manually coding a rotation algorithm that rotates inputs to capture the equivalence highlighted in your post. Then feed the output of this transformation to your neural net, for training and all other uses. This means you are training the neural net to tackle the sub-problem you identified - rotations are redundant.
Test your normalizer by generating random input, rotating it into all four potential transformations, running the normalizer on each one, and then checking that they are all equivalent.
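A sketch of such a normalizer for the 9-bit row-major encoding used in the question: generate the rotations (and, per the question's edit, the flips) and pick the lexicographically smallest variant as the canonical form.

```python
def rotate90(bits):
    """Rotate a 3x3 grid (9-character row-major string) 90 degrees clockwise."""
    return "".join(bits[3 * (2 - c) + r] for r in range(3) for c in range(3))

def canonical(bits):
    """Canonical representative among the 8 rotations/reflections."""
    flip = "".join(bits[3 * r + (2 - c)] for r in range(3) for c in range(3))
    variants = []
    for start in (bits, flip):
        for _ in range(4):
            variants.append(start)
            start = rotate90(start)
    return min(variants)  # lexicographically smallest variant as the key

# All rotations of the pattern from the question map to one key.
print(canonical("010001111") == canonical(rotate90("010001111")))  # True
```

Feeding `canonical(...)` instead of the raw pattern into the network makes the eight symmetric inputs literally identical, cutting the input space by roughly a factor of eight.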
Upvotes: 2 <issue_comment>username_3: To be purist about it, begin with considering the input differently, as a circular array of size four, each item containing a pair of bits, and additionally a center bit:
>
> ... 01, 01, 11, 10 ...
>
>
> 0
>
>
>
Throughout the design of the network, continue this circular structure and center point paradigm.
Upvotes: 1 |
2017/08/07 | 1,006 | 4,328 | <issue_start>username_0: In general, what possibilities are there for reinventing job descriptions that could be replaced by an automated AI solution?
My initial ideas include:
* Monitoring the AI and flagging its incorrect actions.
* Possibly taking over the control in very challenging scenarios.
* Creating/gathering more training/testing data to improve the accuracy of the AI.<issue_comment>username_1: With AI technology at its current stage (or at least reasonably close to this stage), the jobs you proposed may very well be openings created by AI automation. However, sufficiently advanced AI technology--- the kind that can function as general purpose labor replacement--- will make even these jobs obsolete. This is because such an AI would be able to improve itself and as a result would surpass human level intelligence---meaning that it would take actions which we would not necessarily be able to understand or justify.
This is all assuming that the intelligence of humans is not some kind of upper limit on the possibilities of intelligence in general. As far as I know there is no reason to think we represent such an upper limit.
To summarize...will such jobs exist?
Sub-human artificial intelligence: sure.
Beyond-human artificial intelligence: no reason to think they would be necessary as the AI can do them itself.
Upvotes: 4 <issue_comment>username_2: ### AI Gatekeepers
Their job would be to make sure they (AI) don't accidentally become our overlords.
### AI Tax
Each robot that replaces a human worker is taxed. This line of thought was influenced by <NAME>'s recommendation on adding taxes for robots and [an article on universal basic income](https://futurism.com/images/universal-basic-income-answer-automation/).
If the gold rush for AI puts many of us out of work (which is quite possible), we should find a way to minimize its impact on the society. One way would be to tax each robot a reasonable amount (not to exceed the cost of human labor) so that it can go to a super-fund for universal basic income. That fund can be used to give everyone in that community a basic salary, so it can prevent hunger, homelessness, and other problems associated with poverty. In my humble opinion, it will be very difficult to implement. Some countries and cities are testing this as a proof-of-concept for now. We will see how that works out.
### AI Traders
People that sell AI robots.
Upvotes: 2 <issue_comment>username_3: **AI production overseers** - People who will command AI to build and control mines and factories. It's like a strategic game, but in the real world.
Explanation: AI, even with the intellect of a bee and the ability to understand and execute commands, combined with appropriate technologies, will be enough to create robots that can build mines and factories to produce new robots or goods. Nevertheless, this may not be enough to build an efficient production. Therefore, an overseer will command each production site in order to establish or maintain production at the necessary level.
This job can also become obsolete if AI reaches a high enough level to substitute for overseers. However, even if AI becomes smarter than people, this doesn't guarantee that it will understand people better than a human overseer. Therefore, even in this case, it's not a good idea to leave AI without human control.
Upvotes: 2 <issue_comment>username_2: ### AI Diplomat
This may be a bit far-fetched. Let's say, in the next few centuries, AIs become advanced enough to earn their own civil rights through the supreme court or the legislative branch. We should have experts with a good relationship with AIs. We should give them enough knowledge and tools to make sure AIs see our existence as beneficial to them.
### AI Counter Task Force
It could be part of law enforcement or the military with experts in various fields of AI/technology, who are also physically fit to go into battles when needed.
### Entrepreneurs
People who encourage the building of jobs and careers that are always hard to automate. As a result, the human being will always have value. We should inspire our entrepreneurs to have a good balance of automation and manual jobs. What if all of your AIs fail in one day, and your competition wins because they had more human labor?
Upvotes: 2 |
2017/08/08 | 815 | 3,474 | <issue_start>username_0: In Chapter 14.4 (p. 664) of the book **Pattern Recognition and Machine Learning** by **Bishop**, it is mentioned that *tree-based models are more widely used in Medical Diagnosis*.
Apart from giving better performance, is there a human-centric reason for this trade-off **as medical diagnosis is mainly performed by a human?**<issue_comment>username_1: One possible reason may have something to do with the scrutability of models, as described in the first few paragraphs of [this article](http://nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable). It presents a case study of a hospital whose policy was to send asthma sufferers to an intensive care unit; the intensive care meant they were less likely to develop pneumonia and therefore the data showed that people with asthma were less likely to have pneumonia.
Essentially, since machine learning models learn false relationships if the data are in any way flawed, it is beneficial to be able to "debug" them. The processes by which decision trees make their decisions, and the reasons for making them, are more readily visible than in other models - particularly neural networks - which makes errors such as the example given in the article more likely to be picked up and corrected.
Upvotes: 2 <issue_comment>username_2: I myself am working on a similar product for medical diagnosis. The reason is that, as ML engineers, we generally try to replicate how a human approaches a problem in a mathematical model, and then build a library on top of it to use in an application.
So, consider how a doctor approaches any specific diagnostic decision, based on symptoms.
Symptoms mainly come in two forms, true or false: you have it or you don't.
Now we just need to replicate the doctor's approach to making the decision. But, to extract those symptoms from raw data in the form of text, images, or sound, we need to use other classification and clustering models.
Upvotes: 0 <issue_comment>username_3: Either you missed it or I don't fully understand why you were confused when you asked this question, but the same Bishop argues (in the same sentence where he says what you are wondering about) why tree-based models are popular in fields such as medical diagnosis.
>
> A key property of tree-based models, which makes them popular in fields such as medical diagnosis, for example, is that they are readily **interpretable** by humans because they correspond to a sequence of binary decisions applied to the individual input variables. For instance, to predict a patient's disease, we might first ask "is their temperature greater than some threshold?". If the answer is yes, then we might next ask “is their blood pressure less than some threshold?". Each leaf of the tree is then associated with a specific diagnosis.
>
>
>
Nowadays, with the successes of neural networks (for example, in Go, Atari, image classification and segmentation, and even machine translation), which are not easily interpretable (so they are known as *black-box models*), there are always more studies/research on interpretable models or techniques to interpret black-box models, such as neural networks. You can take a look at [this answer](https://ai.stackexchange.com/a/24138/2444) for a list of **explainable/interpretable AI** approaches that have been developed. [This post](https://ai.stackexchange.com/q/14224/2444) contains many answers that further motivate the need for *explainble AI*.
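Bishop's example can be written out literally as code, which is exactly why such models are easy for a clinician to audit (the thresholds and labels below are invented for illustration):

```python
def diagnose(temperature, blood_pressure):
    """A hand-readable decision tree in the spirit of Bishop's example.
    The thresholds (38.0, 90) and labels are purely illustrative."""
    if temperature > 38.0:
        if blood_pressure < 90:
            return "diagnosis A"
        return "diagnosis B"
    return "healthy"

print(diagnose(39.0, 85))  # diagnosis A
```

Every prediction corresponds to a traceable path of binary decisions, unlike the hidden arithmetic inside a neural network.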
Upvotes: 1 [selected_answer] |
2017/08/11 | 3,757 | 18,430 | <issue_start>username_0: Frameworks like [PyTorch](http://pytorch.org) and TensorFlow through [TensorFlow Fold](https://research.googleblog.com/2017/02/announcing-tensorflow-fold-deep.html) support Dynamic Computational Graphs and are receiving attention from data scientists.
However, there seems to be a lack of resources to aid in understanding Dynamic Computational Graphs.
The advantages of Dynamic Computational Graphs appear to include the ability to adapt to varying quantities of input data. It seems like there may be automatic selection of the number of layers, the number of neurons in each layer, the activation function, and other NN parameters, depending on each input set instance during the training. Is this an accurate characterization?
What are the advantages of dynamic models over static models? Is that why DCGs are receiving so much attention? In summary, what are DCGs and what are the pros and cons of their use?<issue_comment>username_1: **Two Short Answers**
The short answer from a theoretical perspective is that ...
>
> A Dynamic Computational Graph is a mutable system represented as a directed graph of data flow between operations. It can be visualized as shapes containing text connected by arrows, whereby the vertices (shapes) represent operations on the data flowing along the edges (arrows).
>
>
>
Note that such a graph defines dependencies in the data flow but not necessarily the temporal order of the application of operations, which can become ambiguous in the retention of state in vertices or cycles in the graph without an additional mechanism to specify temporal precedence.
The short answer from an applications development perspective is that ...
>
> A Dynamic Computational Graph framework is a system of libraries, interfaces, and components that provide a flexible, programmatic, run time interface that facilitates the construction and modification of systems by connecting a finite but perhaps extensible set of operations.
>
>
>
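To make the "mutable, built-at-run-time graph" idea concrete, here is a toy sketch (not the API of PyTorch or any other framework): nodes are created as operations execute, so the graph's size and shape can depend on the run-time data.

```python
class Node:
    """A vertex in a dynamically built computation graph."""
    def __init__(self, value, parents=(), op=""):
        self.value, self.parents, self.op = value, parents, op

    def __add__(self, other):
        return Node(self.value + other.value, (self, other), "+")

    def __mul__(self, other):
        return Node(self.value * other.value, (self, other), "*")

def count_ops(node):
    """Count operation vertices reachable from this output node."""
    return (1 if node.op else 0) + sum(count_ops(p) for p in node.parents)

def model(xs):
    # The graph topology depends on len(xs) -- decided at run time.
    total = Node(0.0)
    for x in xs:
        total = total + Node(x) * Node(x)
    return total

print(model([1.0, 2.0]).value, count_ops(model([1.0, 2.0])))            # 5.0 4
print(model([1.0, 2.0, 3.0]).value, count_ops(model([1.0, 2.0, 3.0])))  # 14.0 6
```

The same `model` function produced two graphs of different sizes; a static-graph framework would instead fix the topology before any data is seen.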
**The PyTorch Framework**
PyTorch is the integration of the Torch framework with the Python language and data structuring. Torch competes with Theano, TensorFlow, and other dynamic computational system construction frameworks.
---
>
> ——— ***Additional Approaches to Understanding*** ———
>
>
>
**Arbitrary Computational Structures of Arbitrary Discrete Tensors**
One of the components that can be used to construct a computational system is an element designed to be interconnected to create neural networks. The availability of these supports the construction of deep learning and back-propagating neural networks. A wide variety of other systems involving the assembly of components that work with potentially multidimensional data in arbitrarily defined computational structures can also be constructed.
The data can be scalar values, such as floating-point numbers, integers, or strings, or orthogonal aggregations of these, such as vectors, matrices, cubes, or hyper-cubes. The operations on the generalization of these data forms are discrete tensors and the structures created from the assembly of tensor operations into working systems are data flows.
**Points of Reference for Understanding the Dynamic Computation Concept**
Dynamic Computational Graphs are not a particularly new concept, even though the term is relatively new. The interest in DCGs among computer scientists is not as new as the term Data Scientist. Nonetheless, the question correctly states that there are few well-written resources available (other than code examples) from which one can learn the overall concept surrounding their emergence and use.
One possible point of reference for beginning to understand DCGs is the Command design pattern which is one of the many design patterns popularized by the proponents of object-oriented design. The Command design pattern considers operations as computation units the details of which are hidden from the command objects that trigger them. The Command design pattern is often used in conjunction with the Interpreter design pattern.
In the case of DCGs, the Composite and Facade design patterns are also involved to facilitate the definition of plug-and-play discrete tensor operations that can be assembled together in patterns to form systems.
This particular combination of design patterns to form systems is actually a software abstraction that largely resembles the radical idea that led to the emergence of the Von Neumann architecture, central to most computers today. Von Neumann's contribution to the emergence of the computer is the idea of permitting arbitrary algorithms containing Boolean logic, arithmetic, and branching to be represented and stored as data -- a program.
Another forerunner of DCGs is expression engines. Expression engines can be as simple as arithmetic engines and as complex as applications such as Mathematica. A rules engine is a little like DCGs except that rules engines are declarative and meta-rules for rules engines operate on those declarations.
**Programs Manipulating Programs**
What these have in common with DCGs is that the flow of data and operations to be applied can be defined at run time. As with DCGs, some of these software libraries and applications have APIs or other mechanisms to permit operations to be applied to functional details. It is essentially the idea of a program permitting the manipulation of another program.
Another reference point for understanding this principle at a primitive level is the switch-case statement available in some computer languages. It is a source code structure whereby the programmer essentially expresses, "We're not sure what must be done, but the value of this variable will tell the real-time execution model what to do from a set of possibilities."
The switch-case statement is an abstraction that extends the idea of deferring the decision as to the direction of computation until run time. It is the software version of what is done inside the control unit of a contemporary CPU and an extension of the concept of deferring some algorithm details. A table of functors (function pointers) in C or polymorphism in C++, Java, or Python are other primitive examples.
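As an illustrative sketch (the operation names and values here are invented for the example), a table of functions selected by a run-time value captures the same deferred-decision idea in a few lines of Python:

```python
# A minimal sketch of deferring "what to do" until run time with a
# dispatch table -- the same idea as a switch-case statement or a
# table of functors in C. All names here are illustrative.
operations = {
    "double": lambda x: x * 2,
    "square": lambda x: x ** 2,
    "negate": lambda x: -x,
}

def run(op_name, value):
    # The operation to perform is chosen by data available at run time,
    # not fixed when the program was written.
    return operations[op_name](value)

print(run("square", 7))  # 49
```

The table itself is ordinary data, so another part of the program can add, remove, or replace entries before (or while) the system runs.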
Dynamic Computational Graphs take the abstraction further. They defer most, if not all, of the specification of computations and the relationships between them to run time. This comprehensive generalization broadens the possibilities of functional modification at run time.
**Directed Graph Representation of Computation**
That's what the Dynamic Computational model is. Now for the Graph part.
Once one decides to defer the choice of operations to be performed until run time, a structure is required to hold the operations, their dependency relationships, and perhaps mapping parameters. Such a representation is more than a syntactic tree (such as a tree representing the hierarchy of source code). Unlike an assembly language program or machine code, it must be easily and arbitrarily mutable. It must contain more information than a data flow graph and much more than a memory map. What must that data structure that specifies the computational structure look like?
Fortunately, any arbitrary, finite, bounded algorithm can be represented as a directed graph of dependencies between specified operations. In such a graph, the vertices (often represented as nodes of various shapes when displayed) represent operations performed on the data and the edges (often represented as arrows when displayed) are digital representations of information resulting from some operation (or system input) and upon which other operations (or system output) depend.
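A minimal sketch of this idea, not tied to any particular framework's API, might represent each vertex as an object holding an operation and edges to the nodes whose outputs it depends on:

```python
# An illustrative sketch of a computation specified as a directed graph
# of dependencies between operations, assembled and evaluated at run time.
class Node:
    def __init__(self, op, *inputs):
        self.op = op          # the operation this vertex performs
        self.inputs = inputs  # edges: the nodes whose outputs we depend on

    def evaluate(self):
        # Evaluate dependencies first, then apply this node's operation.
        return self.op(*(n.evaluate() for n in self.inputs))

# Leaves act as system inputs; the graph structure itself is ordinary data,
# so another program could inspect or rewrite it before evaluation.
a = Node(lambda: 3.0)
b = Node(lambda: 4.0)
s = Node(lambda x, y: x + y, a, b)   # s depends on a and b
p = Node(lambda x, y: x * y, s, b)   # p depends on s and b

print(p.evaluate())  # (3.0 + 4.0) * 4.0 = 28.0
```

Because the graph is plain data, storing it, mutating it, or generating it from another program are all straightforward extensions of this sketch.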
Keep in mind that the directed graph is neither an algorithm (no precise sequence of operations is specified) nor a mere declaration (data can be explicitly stored, and loops, branches, functions, and modules may be definable and nested).
Most of these Dynamic Computational Graph frameworks and libraries permit the components to do computations on the component input that support machine learning. Vertices in the directed graph can be simulations of neurons for the construction of a neural net or components that support differential calculus. These frameworks present possibilities of constructs that can be used for deep learning in a more generalized sense.
**In the Context of Computer History**
Again, nothing mentioned thus far is new to computer science. LISP permits computational schematics to be modified by other algorithms. And generalized input dimensionality and numerocity is built into a number of longstanding plug-and-play interfaces and protocols. The idea of a framework for learning dates back to the same mid-Twentieth Century period too.
What is new and gaining in popularity is a particular combination of integrated features and the associated set of terminology, an aggregation of existing terminology for each of the features, leading to a wider base for comprehension by those already studying for and working in the software industry.
* Contemporary (trendy) flavor of API interfaces
* Object orientation
* Discrete tensor support
* The directed graph abstraction
* Interoperability with popular languages and packages that support big data, data mining, machine learning, and statistical analysis
* Support for arbitrary and systematic neural network construction
* The possibility of dynamic neural network structural adaptation (which facilitates experimentation on neural plasticity)
Many of these frameworks support adaptability to changing input dimensionality (number of dimensions and the range of each).
**Similarity to Abstract Symbol Trees in Compilers**
A dependency graph of inputs and outputs of operations also appears within abstract symbol trees (AST), which some of the more progressive compilers construct during the interpretation of the source code structure. The AST is then used to generate assembler instructions or machine instructions in the process of linking with libraries and forming an executable. The AST is a directed graph that represents the structure of data, operations performed, and the control flow specified by the source code.
The data flow is simply the set of dependencies between operations, which must be inherent in the AST for the AST to be used to create execution instructions in assembler or machine code that precisely follows the algorithm specified in the source code.
Dynamic Computational Graph frameworks, unlike switch-case statements or AST models in compilers, can be manipulated in real-time, optimized, tuned (as in the case of plastic artificial nets), inverted, transformed by tensors, decimated, modified to add or remove entropy, mutated according to a set of rules, or otherwise translated into derivative forms. They can be stored as files or streams and then retrieved from them.
This is a trivial concept for LISP programmers or those that understand the nature of <NAME>'s recommendation to store operational specifications as data. In this latter sense, a program is a data stream to instruct, through a compiler and operating system, a dynamic computational system implemented in VLSI digital circuitry.
**Achieving Adaptable Dimensionality and Numerocity**
In the question is the comment that one doesn't, "Need to have data set -- that all the instances within it have the same, a fixed number of inputs." That statement does not promote accurate comprehension. There are clearer ways to say what is true about input adaptability.
The interface between a DCG and other components of an overall system must be defined, but these interfaces may have dynamic dimensionality or numerocity built into them. It is a matter of abstraction.
For instance, a discrete tensor object type presents a specific software interface, yet a tensor is a dynamic mathematical concept around which a common interface can be used. A discrete tensor may be a scalar, a vector, a matrix, a cube, or a hypercube, and the range of dependent variables for each dimension may be variable.
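NumPy's `ndarray` is a convenient illustration of this point: the same software interface covers a scalar, a vector, a matrix, and a cube, with the number of dimensions and the extent of each determined at run time.

```python
import numpy as np

# One common interface (np.ndarray) over variable dimensionality:
# dimensionality and the range of each dimension are run-time properties.
scalar = np.array(3.14)
vector = np.array([1.0, 2.0, 3.0])
matrix = np.zeros((4, 5))
cube   = np.zeros((2, 3, 4))

print(scalar.ndim, vector.ndim, matrix.ndim, cube.ndim)  # 0 1 2 3
```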
It can be the case that the number of nodes in a layer of the system defined in a Dynamic Computational Graph can be a function of the number of inputs of a particular type, and that too can be a computation deferred to run time.
The framework may be programmed to select layer structure (an extension of the switch-case paradigm again) or calculate parameters defining the structure sizes and depth or activation. However, these sophisticated features are not what qualifies the framework as a Dynamic Computational Graph framework.
**What Qualifies a Framework to Support Dynamic Computational Graphs?**
To qualify as a Dynamic Computational Graph framework, the framework must merely support the deferring of the determination of algorithm to run time, therefore opening the door to a plethora of operations on the computational dependencies and data flow at run time. The basics of the operations deferred must include the specification, manipulation, execution, and storage of the directed graphs that represent systems of operations.
If the specification of the algorithm is NOT deferred until run time but is compiled into the executable designed for a specific operating system with only the traditional flexibility provided by low-level languages such as if-then-else, switch-case, polymorphism, arrays of functors, and variable-length strings, it is considered a static algorithm.
If the operations, the dependencies between them, the data flow, the dimensionality of the data within the flow, and the adaptability of the system to the input numerocity and dimensionality are all variable at run time in a way to create a highly adaptive system, then the algorithm is dynamic in these ways.
Again, LISP programs that operate on LISP programs, rules engines with meta-rule capabilities, expression engines, discrete tensor object libraries, and even relatively simple Command design patterns are all dynamic in some sense, deferring some characteristics to run time. DCGs are flexible and comprehensive in their capabilities to support arbitrary computational constructs in such a way to create a rich environment for deep learning experimentation and systems implementation.
**When to Use Dynamic Computational Graphs**
The pros and cons of DCGs are entirely problem-specific. If you investigate the various dynamic programming concepts above and others that may be closely tied to them in the associated literature, it will become obvious whether you need a Dynamic Computational Graph or not.
In general, if you need to represent an arbitrary and changing model of computation to facilitate the implementation of the deep learning system, mathematical manipulation system, adaptive system, or another flexible and complex software construct that maps to the DCG paradigm well, then a proof of concept using a Dynamic Computational Graph framework is a good first step in defining your software architecture for the problem's solution.
Not all learning software uses DCGs, but they are often a good choice when the systematic and possibly continuous manipulation of an arbitrary computational structure is a run time requirement.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Many deep neural networks have a static data-flow graph, which roughly means that the structure of its computation (its computation graph) remains stable over different inputs. This is good since we can leverage this feature for performance, such as by mini-batching (processing a bunch of inputs at once).
But some neural networks could have a different computation graph for each input. This causes some problems (batching difficulties, graph construction is computationally expensive), and hence these networks are a bit hard to use.
The paper you link overcomes this problem by proposing a method that can batch several computation graphs into one. Then, we can do our usual NN techniques.
Benefits are speedups, which incentivizes researchers to explore different structures and be more creative I guess.
>
> The advantage of Dynamic Computational Graphs appears to include the ability to adapt to a varying quantities in input data. It seems like there may be automatic selection of the number of layers, the number of neurons in each layer, the activation function, and other NN parameters, depending on each input set instance during the training. Is this an accurate characterization?
>
>
>
This is incorrect.
Upvotes: 0 <issue_comment>username_3: Dynamic Computational Graphs are simply modified CGs with a higher level of abstraction. The word 'Dynamic' explains it all: how data flows through the graph depends on the input structure, i.e., the DCG structure is mutable rather than static. One of its important applications is in NLP neural networks.
Upvotes: 1 <issue_comment>username_4: In short, dynamic computation graphs can solve some problems that static ones cannot, or are inefficient due to not allowing training in batches.
To be more specific, modern neural network training is usually done in **batches**, i.e. processing more than one data instance at a time. Some researchers choose batch size like 32, 128 while others use batch size larger than 10,000. Single-instance training is usually very slow because it cannot benefit from hardware parallelism.
For example, in Natural Language Processing, researchers want to train neural networks with sentences of different lengths. Using static computation graphs, they would usually have to first do **padding**, i.e. adding meaningless symbols to the beginning or end of shorter sentences to make all sentences of the same length. This operation complicates the training a lot (e.g. need masking, re-define evaluation metrics, waste a significant amount of computation time on those padded symbols). With a dynamic computation graph, padding is no longer needed (or only needed within each batch).
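A small sketch of the padding step described above (the token IDs here are arbitrary made-up values): shorter sentences are extended with a meaningless PAD symbol, and a mask records which positions are real so they can be excluded from loss and metric computation.

```python
# Pad variable-length token sequences to a common length for a static graph.
PAD = 0

def pad_batch(sentences):
    max_len = max(len(s) for s in sentences)
    padded = [s + [PAD] * (max_len - len(s)) for s in sentences]
    # 1 marks a real token, 0 marks padding.
    mask = [[1] * len(s) + [0] * (max_len - len(s)) for s in sentences]
    return padded, mask

batch = [[5, 8, 2], [7, 1], [9, 4, 6, 3]]
padded, mask = pad_batch(batch)
print(padded)  # [[5, 8, 2, 0], [7, 1, 0, 0], [9, 4, 6, 3]]
print(mask)    # [[1, 1, 1, 0], [1, 1, 0, 0], [1, 1, 1, 1]]
```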
A more complicated example would be to (use a neural network to) process sentences based on their parsing trees. Since each sentence has its own parsing tree, each requires a different computation graph, which means training with a static computation graph only allows single-instance training. An example similar to this is [Recursive Neural Networks](https://ai.stanford.edu/~ang/papers/icml11-ParsingWithRecursiveNeuralNetworks.pdf).
Upvotes: 2 |
2017/08/13 | 591 | 2,193 | <issue_start>username_0: Are there any real-world examples of unintentional "bad" AI behaviour? I'm not looking for hypothetical arguments of malicious AI (AI in a box, paperclip maximizer), but for actual instances in history where some AI directly did something bad due to its direct action that was not intended behavior.
Interpretations of the meaning of "AI", "bad", and "unintentional" are left open due to obvious reasons. Be free with your interpretation, but use some common sense please.
Ex: Microsoft Tay became a racist not too long after being hooked up to the internet, thanks to internet trolls "teaching" her bad things.
I can't think of any other instances. So the following examples are just hypothetical scenarios to demonstrate what I mean.
Ex: A self-driving car drove off-track after being presented with an adversarial example, crashing into people.
Ex: Surgery bot goes haywire and accidentally kills someone.
Ex: Weaponized drone targets civilians against its design.<issue_comment>username_1: The infamous [Flash Crash of 2010](https://en.wikipedia.org/wiki/2010_Flash_Crash) may qualify.
It didn't involve Artificial General Intelligence (which is still a hypothetical) or even "strong narrow AI" (such as AlphaGo) but does involve algorithmic decision-making, which is a form of basic Artificial Intelligence.
[Algorithmic trading](https://en.wikipedia.org/wiki/Algorithmic_trading) already represents a significant percentage of all market activity, and I suspect that percentage will only increase.
From Business Insider: [Algos could trigger the next stock market crash](http://www.businessinsider.com/algos-could-trigger-stock-market-crash-2017-5)
Upvotes: 2 <issue_comment>username_2: There is the case of the tesla accident where the car was in autopilot and crashed into a truck because it appears the vehicle mistook a lightly coloured truck for the sky, killing the driver: <https://www.newscientist.com/article/2095740-tesla-driver-dies-in-first-fatal-autonomous-car-crash-in-us/>
Having said that, it appears the car had been trying to tell the driver to pay attention: <http://uk.reuters.com/article/us-tesla-crash-idUKKBN19A2XC>
Upvotes: 3 |
2017/08/14 | 654 | 2,714 | <issue_start>username_0: So, [Deepmind is pushing for a human level Starcraft bot](https://deepmind.com/blog/deepmind-and-blizzard-open-starcraft-ii-ai-research-environment/) and [Open AI just created a human level 1vs1 Dota bot](https://openai.com/the-international/).
Unfortunately, I've no clue what that signifies because I've never played Starcraft nor Dota nor do I have more than a fleeting acquaintance with similar games.
My question is what the difference between Starcraft and Dota is from an AI perspective and what scientific significance the respective super-human bots would have.<issue_comment>username_1: Those AI-learning programs may have a very similar scheme. **We are changing only the inputs and the possible actions** (like "use skill" or "move here"). A Starcraft AI must perform a lot of actions and control many units. Dota is a MOBA, so the bot should be good at positioning on the map, for example. Each game has different opponents to destroy and different targets for a win.
The AI needs to play many games to learn the rules of the game and find the best moves in various situations/states.
Of course it's only my newbie-programmer opinion :)
Upvotes: 3 <issue_comment>username_2: In SC2, players have more control over every minute mechanic (constructing buildings, resource mining and management, controlling minions...) in the game, thus putting more of the tactical responsibility on the player. In DOTA2, the player is only in control of the super-powered hero itself, and not much of the other aspects of the gameplay.
It is debatable if these options make the game "better" or more difficult as a result. But it is for certain that overall the search space of the problem increases much faster as the dimensions of freedom increases.
Of course, DOTA2 contains very many game mechanics as well (a ton of items that changes a lot of different stats, very many types of heroes which each have their own attacks, a variety of buildings scattered around the map, a shop for player items), but it seems that most of this complexity is focused around player engagements, which although crucial for the development of the game, lets us analyze a much shorter timespan of PvP as opposed to a whole game.
And indeed, the DOTA2 bot from OpenAI was restricted to a mid-lane fight as a singular hero with restricted items, hence restricting most of the complexity DOTA2 has to offer.
Perhaps I am overstepping with this analogy, but the OpenAI result is a bit like a solved chess endgame configuration, while SC2 would be like chess under the knowledge that neither players can see each other's pieces until they are in a position where you can capture them.
In short: SC2 is more tactical. DOTA2 is more arcade-like.
Upvotes: 2 |
2017/08/15 | 766 | 2,860 | <issue_start>username_0: I am currently searching for a supervised learning algorithm that can be used to predict the output given a large enough training set.
Here's a simple example. Suppose the training dataset is `{[A=1, B=330, C=1358.238902], result=234.244378}` and the test dataset `{[A=893, B=34, C=293], result=?}`
My intention is to predict `?` using the input values and result given in the training dataset.
What algorithm would be effective for this problem given the wide range of my input/output values? Would this require some sort of regression algorithm?<issue_comment>username_1: Without seeing more data it's hard to say for sure. Superficially, this looks like a regression type problem. As you mention, there is a lot of spread on the input values, but that doesn't *necessarily* mean that something like linear regression wouldn't work. Try it and see what kind of coefficient of correlation you get. If it's really low, you probably need a different approach, OR the data might actually not have any (or much) predictive power in this scenario.
Beyond linear regression, you might find that there's a more complicated mathematical relationship between the inputs and outputs, that could be determined using [symbolic regression](https://en.wikipedia.org/wiki/Symbolic_regression). Another possibility, if there is a complex non-linear relationship in play, is that an [artificial neural network](https://en.wikipedia.org/wiki/Artificial_neural_network) approach might work well.
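As a concrete starting point, an ordinary least-squares linear fit can be done directly with NumPy. Note that apart from the one instance given in the question, the rows below are made-up placeholder data, so with real data the fit quality (and whether linear regression is adequate at all) could differ greatly.

```python
import numpy as np

# Hypothetical training rows in the question's [A, B, C] -> result format.
# Only the first row comes from the question; the rest are invented here.
X = np.array([[1.0, 330.0, 1358.238902],
              [893.0, 34.0, 293.0],
              [120.0, 45.0, 800.0],
              [400.0, 210.0, 50.0]])
y = np.array([234.244378, 512.0, 310.5, 290.1])

# Add an intercept column, then solve the least-squares problem directly.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(a, b, c):
    return coef[0] * a + coef[1] * b + coef[2] * c + coef[3]

print(predict(893, 34, 293))
```

With enough real rows, comparing predictions against held-out results gives the error measurement needed to decide whether a more flexible model is warranted.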
Upvotes: -1 <issue_comment>username_2: Impossible to solve until you define an error measurement (for example, $|R-R'|$ or $(R-R')^2$) and how this error changes when A, B, and C change.
Extreme example: assume $R()$ is random (unrelated to the A, B, C values) but static (always the same $R(A,B,C)$ for the same values of A, B, C). Given some values of A, B, C, you can only answer the value of $R(A,B,C)$ when that combination of A, B, C was in the training set. $R(A,B,C)$ is undefined and not predictable when A, B, C was not in the training set.
Moreover, improvements can be made if $R()$ has some properties, for example, if it is possible to state that $R(A,B,C)=R(B,A,C)$ or that $R(A1,B1,C1)=R(A2,B2,C2)$ if $A1+B1+C1=A2+B2+C2$.
Upvotes: 1 <issue_comment>username_3: Yes, you are trying to predict a real number output, so this is a regression problem. To know what kind of algorithm would be best I think you have to ask how much data you have and what you know already about the relationships of the numbers. If you try simple linear regression, what kind of error will you get?
If you were to try linear regression and you get an error that is acceptable, then it may be a very simple problem. Beyond linear regression you can look to more advanced things such as Gaussian processes and neural networks which will all make the kinds of predictions you are seeking.
Upvotes: 0 |
2017/08/16 | 571 | 2,210 | <issue_start>username_0: I've gone through several descriptions of CNNs online and they all leave out a crucial part as if it were trivial.
A "volume" of neurons consists of several parallel layers ("feature maps"), each the result of convolving with a different kernel.
Between volumes, there is usually a step where layers are pooled and subsampled.
The next volume has a different number of parallel layers.
How do the feature maps from one volume connect to the feature maps of the next volume? Is it one-to-many? Many-to-many? Do N kernels apply to each of the M feature maps in the first volume, yielding N\*M feature maps in the second volume? Are these N kernels the same for each feature map in the first volume, or do different kernels apply to each one?
Or, is the number of maps in the second volume not necessarily a multiple of the number in the first volume? If so, do maps in the first volume get cross-synthesized somehow? Or, maybe different numbers of maps in the second volume follow from each one in the first?
Or, is it some other of umpteen trillion possibilities?<issue_comment>username_1: Short answer: **One to many**
Long answer:
The point is that you use a 3D convolution in a CNN. Each kernel has a size of n\*m\*C (where C is the number of input feature maps), and each output feature map has its own kernel (= weights) and bias.
**An example:**
The size of layer 2 is 9x9x10 (stride 1, no padding), the kernel size is 3x3x10.
The dimension of the next layer would be 7x7xn (n is the number of kernels applied to layer 2), since sliding a 3x3 window over a 9x9 input with stride 1 and no padding yields 9-3+1 = 7 positions per dimension.
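A naive NumPy sketch makes the shape arithmetic concrete (the kernel count of 5 is an arbitrary choice here). With stride 1 and no padding, a 9x9x10 input convolved with 3x3x10 kernels yields 7x7 output maps, one per kernel:

```python
import numpy as np

def conv3d_feature_maps(x, kernels):
    """Naive 'one kernel per output map' convolution.

    x: input volume of shape (H, W, C) -- C input feature maps
    kernels: shape (n, kh, kw, C) -- n kernels, each spanning all C maps
    Returns output of shape (H-kh+1, W-kw+1, n) (stride 1, no padding).
    """
    H, W, C = x.shape
    n, kh, kw, _ = kernels.shape
    out = np.zeros((H - kh + 1, W - kw + 1, n))
    for k in range(n):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                # Each output map sums over ALL input feature maps.
                out[i, j, k] = np.sum(x[i:i+kh, j:j+kw, :] * kernels[k])
    return out

x = np.random.rand(9, 9, 10)           # layer 2: 9x9 with 10 feature maps
kernels = np.random.rand(5, 3, 3, 10)  # 5 kernels of size 3x3x10
y = conv3d_feature_maps(x, kernels)
print(y.shape)  # (7, 7, 5)
```

This also shows the "one to many" answer directly: every output map is computed from every input map, through its own kernel.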
[Here](https://www.quora.com/How-does-the-second-convolution-layer-work-on-multiple-feature-maps) I found a very clear explanation.
I hope this helps.
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you really want to understand what goes on in a CNN and visualize it, then you can give a read to [this](http://cs231n.github.io/convolutional-networks/).
This course has some prerequisites, such as a basic understanding of forward and backward propagation and of how the network learns on its own.
CNN is just a way of moving from fully connected layers (every input connects to every output) to partially connected layers.
Upvotes: 1 |
2017/08/18 | 1,514 | 6,910 | <issue_start>username_0: I am just curious what AI would be harder to create from a strictly engineering point of view. AI which would win 1vs 1 game with the best player in starcraft or AI which would control a team the whole team in dota2 and win against the best team?<issue_comment>username_1: I can't answer definitively without a detailed breakdown of game mechanics of dota2 vs. Starcraft, but assuming the games have similar complexity, the AI controlling multiple in-game agents that form a team would be more complex, and therefore more difficult to create, than a single agent "team".
My supposition is based on the fundamentals of [combinatorics](https://en.wikipedia.org/wiki/Combinatorics) and [computational complexity](https://en.wikipedia.org/wiki/Computational_complexity_theory):
* in dota2, the AI has to not only to identify winning strategies, but *coordinate* multiple agents.
by contrast:
* the Starcraft AI only has to identify a winning strategy, with no coordination between discrete agents which form a team.
---
*Coalitions have traditionally been difficult to analyze in Game Theory and Combinatorial Game Theory (in the latter case especially, restricting mathematical study, for the most part, to games involving only 2 players.) In this case the issue is not coalitions--the multiple game agents are already on the same team--but complexity is nevertheless dramatically increased by the existence of multiple agents on a team as compared to a team consisting of a single agent.*
Upvotes: 2 <issue_comment>username_2: There are different abilities required for strong play in those two games. Some of them are easier to implement using AI than others. Therefore it is difficult to answer this question in a generic fashion, but we can look at different aspects in detail:
* Speed / APM (Actions per Minute)
While both games require a certain speed, SC2 is usually a bit faster because you simply have more stuff to manage (keep building, arranging your units, coordinate attacks on multiple locations). That said, if you want to control the whole team in Dota2 from a single agent, the APM requirements multiply accordingly. In the end it doesn't really matter, because speed is the least of our concerns and can simply be improved by throwing more resources at the problem. So I wouldn't say that one of the two games has an advantage here concerning engineering effort, just wanted to get the topic out of the way.
* Implement strong battle technique
Fighting battles is an important skill in both games. While the armies in SC2 are larger, the amount of abilities each unit has is usually smaller. One of the limitations of a human player in SC2 is the inability to control each unit individually, because there are simply too many units most of the time. An AI on the other hand could utilize a complex army significantly better by controlling each unit. I have seen simulations of such armies in the early days of SC2 and it was obvious that no human player can compete with this technique by far.
This aspect is less important in Dota2, because the amount of units is significantly smaller. Nevertheless the precision in executing attacks and retreating could easily be optimized to a level hardly achievable by any human player. I would still rate the potential for super human skill in SC2 much higher because it gets multiplied for each unit.
* Find a Winning Strategy
While it is rather easy to beat a human player with speed and technique, finding a winning strategy is a totally different ball game. In games where deception and hidden information plays such a crucial part, the engineering requirements for a strong winning strategy are huge. A strong AI will require a hybrid approach of playing according to programmable rules of different strategies and fine tuning using ML. Both games are still evolving and new (or changing) heroes, units and structures need to be considered in the long run. In my opinion, the hardest part for the AI will be dealing with unconventional or trick plays it hasn't encountered before. SC2 has a higher potential for such trick plays that can catch the opponent off guard and wins you the game right away. And even when the AI has encountered a certain trick play several times, it is very hard to find appropriate counters on its own. So each new trick play will require specific guided training to play correct against it. On the other hand the AI could learn the same tricks and utilize them as part of their strategy.
Dota2 has a smaller range of unpredictable events and strategies, therefore adapting to those would be easier for an AI. This would reduce the engineering effort significantly in this regard.
* Limitations
To evaluate the complexity of the problem, you need to consider, which limitations are set in place for the challenge. If you allow the AI to play with unlimited APM, many deficits in strategy can simply be overcome with technique, utilizing pure speed and precision.
If you restrict the challenge to a small map pool, the AI can be trained much more efficiently. If the AI has to be able to navigate arbitrary maps, the problem gets much more complex. The same is true for limiting matchups or hero selection. The more limitations you put in place, the easier it is to implement a strong AI
* My conclusion
While it is difficult to weight all those aspects against each other, I feel like it would be easier to implement a strong **strategic** AI for Dota2 than for SC2, but the leverage of an AI using a strong **battle technique** is larger for SC2. This doesn't mean a strong Dota2 battle technique can't reach super human level as well, but the skill ceiling is higher in SC2.
Upvotes: 2 <issue_comment>username_3: It would depend on the level of human authenticity you want to give the AI. For one, in StarCraft II humans are limited by their ability to look at only one particular place in the map at once and of making only one action at a time. Computers aren't necessarily limited by this constraint so you would have to artificially create it. It's a similar story with Dota2 except in this scenario the AI would be able to control all 5 characters at once with precise synchronization which would be unmatched by even the best teams. So again, you would have to artificially created constraints on the agent so that it was forced to optimize for team work and communication. I think a challenging and interesting problem for Dota2 would be to have 5 separate agents working together like humans usually do in the game. It would be interesting to see for example if they could generate a "gaming language" in which to signal cooperative operations on the map as well as setting up ganks and pushes for example. This would give researchers a good environment in which to test how the interactions between separate agents would evolve alongside a common goal.
Upvotes: 1 |
2017/08/21 | 2,533 | 10,311 | <issue_start>username_0: >
> Artificial Intelligence is any device that perceives its environment
> and takes actions that maximize its chance of success at some goal.
>
>
>
I got this definition from [Wikipedia](https://en.wikipedia.org/wiki/Artificial_intelligence) that cited "<NAME> (2003), Artificial Intelligence: A Modern Approach".
A [transistor](https://en.wikipedia.org/wiki/Transistor) is a device that amplifies or switches electronic signals when it received an input signal.
Could one say the transistor is the AI?
It is certainly a basic building block of every AI out there, but is it an AI itself, albeit the most basic one?
I'm looking at it from a technological and economic point of view, leaving philosophy aside. From an economic perspective it seems to be an AI because transistor does useful work that it took an intelligent human to perform less than a century ago. And it does it completely on its own.<issue_comment>username_1: Replacing my previous ill conceived answer with this [definition of Intelligence from Richard Sutton](http://incompleteideas.net/sutton/IncIdeas/DefinitionOfIntelligence.html) (a founder and leader Reinforcement Learning) should answer your question.
>
> <NAME> long ago gave one of the best definitions: "Intelligence
> is the computational part of the ability to achieve goals in the
> world”. That is pretty straightforward and does not require a lot of
> explanation. It also allows for intelligence to be a matter of degree,
> and for intelligence to be of several varieties, which is as it should
> be. Thus a person, a thermostat, a chess-playing program, and a
> corporation all achieve goals to various degrees and in various
> senses. For those looking for some ultimate ‘true intelligence’, the
> lack of an absolute, binary definition is disappointing, but that is
> also as it should be.
>
>
> The part that might benefit from explanation is what it means to
> achieve goals. What does it mean to have a goal? How can I tell if a
> system really has a goal rather than seems to? These questions seem
> deep and confusing until you realize that a system having a goal or
> not, despite the language, is not really a property of the system
> itself. It is in the relationship between the system and an observer.
> (In Dennett's words, it is a ‘[stance](https://en.wikipedia.org/wiki/Intentional_stance)’ that the observer take with
> respect to the system.)
>
>
> What is it in the relationship between the system and the observer
> that makes it a goal-seeking system? It is that the system is most
> usefully understood (predicted, controlled) in terms of its outcomes
> rather than its mechanisms. Thus, for a home-owner a thermostat is
> most usefully understood in terms of its keeping the temperature
> constant, as achieving that outcome, as having that goal. But if i am
> an engineer designing a thermostat, or a repairman fixing one, then i
> need to understand it at a mechanistic level—and thus it does not have
> a goal. The thermostat does or does not have a goal depending of the
> observer. Another example is the person playing the chess computer. If
> I am a naive person, and a weaker player, I can best understand the
> computer as having the goal of beating me, of checkmating my king. But
> if I wrote the chess program (and it does not look very deep) I have a
> mechanistic way of understanding it that may be more useful for
> predicting and controlling it (and beating it).
>
>
> Putting these two together, we can define intelligence concisely
> (though without much hope of being genuinely understood without
> further explanation): Intelligence is the computational part of the
> ability to achieve goals. A goal achieving system is one that is more
> usefully understood in terms of outcomes than in terms of mechanisms
>
>
>
Upvotes: 3 <issue_comment>username_2: A transistor is very similar in its function to a single neuron, and because of that one transistor could be considered a very tiny neural network; from this perspective it could be considered a form of artificial intelligence on its own. But the transistor is not the first building block of AI: the first building blocks are the smallest particles possible in physics, and those are the building blocks of any machine.
Also, the definition is somewhat hard to accept, as there is no reason why a thermostat can be considered AI on its own and a transistor not. Since a thermostat needs an environment in which it works in a way that fulfills the definition of AI, the transistor could be put in such an environment as well; there's no difference. We consider that something is AI when it maximizes the chance of some goal, but who is defining this goal? We do. So if you say that the goal of some AI system is to have the output current equal to the input current multiplied by a factor of x, then you can say that a bipolar junction transistor is the AI that does exactly that. Of course you can build a processor and memory from such transistors, then write software for e.g. face recognition for a computer built from them. It is all about connecting algorithms together to perform more sophisticated functions. The first "logical gates" (and building blocks of AI) are the smallest particles possible, the smallest particles that can interact with each other to produce some output available to be used in the next interaction; the whole universe works because of that: matter just follows the algorithms, the laws of physics. On a side note: the only thing that for me is not algorithmic is consciousness. I think it is impossible for it to be created by just algorithms working together; algorithms don't have any feelings, and any number of them will not have feelings either. They will only process information without any consciousness inside. Intelligence is computational, while consciousness somehow reaches beyond the physical/algorithmic world. It is worth consideration; I personally think that we are like players in a game and there is another dimension/world from which our consciousness comes. A good question is: what are the rules of this game?
Upvotes: 1 <issue_comment>username_3: I think it might come down to whether the transistor is making a decision. If the transistor is being used as a switch, that would seem to qualify as a decision, even though it's an extremely rudimentary decision.
Intelligence, in reference to Artificial (or Algorithmic) Intelligence, is not restricted to high intelligence. A brute-force Tic-Tac-Toe AI has extremely low, narrow intelligence but still constitutes AI. An automatic switch, which makes the simplest decision possible, a binary choice, would seem to be the most basic form of intelligence.
Norvig's definition seems rooted in game theory, which is important in terms of utility of intelligence. But in a condition of intractability one is only assuming one's decision is more optimal than other choices. Outcomes can be evaluated, and a determination made as to whether the algorithm was "smart" or "dumb", but these terms refer to relative positions on a spectrum.
It's worth noting this fundamental definition of AI opens up a can of worms in that pre-transistor, automated mechanical switches such as the [Strowger switch](https://en.wikipedia.org/wiki/Strowger_switch) would also probably qualify.
And automata do not have to be electrical. The [History of Automation wiki](https://en.wikipedia.org/wiki/Automation#History) suggests the first feedback control system was for a water clock invented by [Ctesibius](https://en.wikipedia.org/wiki/Ctesibius). This device dates to the 3rd Century BCE, and water clocks were said to be the most accurate time pieces until [Huygens](https://en.wikipedia.org/wiki/Christiaan_Huygens#Horology).
This type of intelligence I tend to think of as [autonomic](https://en.wikipedia.org/wiki/Autonomic_nervous_system), in the sense of involuntary, and distinct from higher functions, which are more commonly associated with "intelligence".
---
Note: The [characteristics of an autonomic system](https://en.wikipedia.org/wiki/Autonomic_computing#Characteristics_of_autonomic_systems) in a computing context are quite interesting and include [self-optimization](https://en.wikipedia.org/wiki/Self-optimization), self-learning, and [self-awareness](https://en.wikipedia.org/wiki/Self-awareness).
**ADDENDUM: After much thought, in reference to @JT's answer, I can't see how decisions can be separated from goals—there needs to be an intent, or the decision is merely random. This might prompt the question "can a simple switch be said to have a goal?"**
Upvotes: 3 <issue_comment>username_4: It's instructive to know the definition of the [transistor](https://en.wikipedia.org/wiki/Transistor).
Transistors are electronic components that are used in amplifiers, as circuit breakers, as connectors, as voltage controllers, for signal modulation, and for other functions.
(An analogy is that the transistor functions as an electric "faucet" that regulates input and output voltage.)
<NAME> and <NAME> define artificial intelligence as:
>
> "The ability of the system to interpret external data correctly, to learn from that data, and to use that learning to achieve certain goals and tasks through flexible adaptation."
>
>
>
[Citation Needed]
Under this definition, transistors are not part of AI, because transistors do not learn and cannot adapt. Transistors are just electronic devices for regulating current.
Upvotes: 1 <issue_comment>username_5: The question is based on two concepts:
1. First artificial intelligence (AI)
2. The transistor is an intelligent device.
Let us talk about the first AI. Why transistors? The same definition of intelligence can be applied to vacuum tubes, and they definitely existed before transistors. So no matter what definition you decide on for intelligence, ***transistors are not the first AI***.
Now we come to the next part, **what is artificial intelligence?** Like intelligence, the definition of artificial intelligence has undergone changes in the last 6 decades.
If you use the definition loosely, almost any device can be intelligent, even an electric bulb.
Upvotes: 1 |
2017/08/21 | 2,411 | 8,563 |<issue_start>Is it possible to feed a neural network the output from a random number generator and expect it to learn the hashing (or generator) function, so that it can predict what the next generated [pseudo-random number](https://softwareengineering.stackexchange.com/q/124233/145653) will be?
Does something like this already exist? If research is already done on this or something related (to the prediction of pseudo-random numbers), can anyone point me to the right resources?
Currently, I am looking at this library and its related links.
<https://github.com/Vict0rSch/deep_learning/tree/master/keras/recurrent><issue_comment>username_1: If we are talking about a perfect RNG, the answer is a clear **no**. It is impossible to predict a truly random number, otherwise it wouldn't be truly random.
When we talk about pseudo RNG, things change a little. Depending on the quality of the PRNG, the problem ranges from easy to almost impossible. A very weak PRNG like the one [XKCD](https://xkcd.com/221/) published could of course be easily predicted by a neural network with little training. But in the real world things look different.
The neural network could be trained to find certain patterns in the history of random numbers generated by a PRNG to predict the next bit. The stronger the PRNG gets, the more input neurons are required, assuming you are using one neuron for each bit of prior randomness generated by the PRNG. The less predictable the PRNG gets, the more data will be required to find some kind of pattern. For strong PRNGs this is not feasible.
On a positive note, it is helpful that you can generate an arbitrary amount of training patterns for the neural network, assuming that you have control over the PRNG and can produce as many random numbers as you want.
Because modern PRNGs are a key component for cryptography, extensive research has been conducted to verify that they are "random enough" to withstand such prediction attacks. Therefore I am pretty sure that it is not possible with currently available computational resources to build a neural network to successfully attack a PRNG that's considered secure for cryptography.
It is also worth noting that it is not necessary to exactly predict the output of a PRNG to break cryptography - it might be enough to predict the next bit with a certainty of a little more than 50% to weaken an implementation significantly. So if you are able to build a neural network that predicts the next bit of a PRNG (considered secure for cryptography) with a 55% success rate, you'll probably make the security news headlines for quite a while.
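To make the "weak PRNG" case concrete, here is a small illustrative sketch. It is not a neural network; a lookup table stands in for a learned model, and the generator constants and seeds are invented for the example. It recovers the successor function of a deliberately weak 8-bit linear congruential generator from observed output:

```python
# A deliberately weak PRNG: an 8-bit linear congruential generator.
# The multiplier, increment, and seeds are invented for this illustration.
def lcg(seed, n):
    out, x = [], seed
    for _ in range(n):
        x = (5 * x + 3) % 256
        out.append(x)
    return out

# "Training": learn a lookup table mapping each observed value to its successor.
history = lcg(seed=1, n=1000)
model = {prev: nxt for prev, nxt in zip(history, history[1:])}

# "Testing": predict the continuation of a stream started from a fresh seed.
holdout = lcg(seed=42, n=100)
hits = sum(model.get(prev) == nxt for prev, nxt in zip(holdout, holdout[1:]))
print(hits / (len(holdout) - 1))  # 1.0 for this weak generator
```

A cryptographically secure PRNG is designed precisely so that no such mapping, or any other efficiently learnable structure, exists in its output.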
Upvotes: 4 <issue_comment>username_2: Being a complete newbie in machine learning, I did this experiment (using scikit-learn):
* Generated a large number (N) of pseudo-random extractions, using python random.choices function to select N numbers out of 90.
* Trained an MLP classifier with training data composed as follows:
+ ith sample: X <- lotteryResults[i:i+100], Y <- lotteryResults[i]. In practice, I aimed at a function that, given N numbers, could predict the next one.
* Asked the trained classifier to predict the remaining numbers.
Results:
* of course, the classifier obtained a winning score comparable with that of random guessing or of other techniques not based on neural networks (I compared results with several classifiers available in the scikit-learn library)
* however, if I generate the pseudo-random lottery extractions with a specific distribution function, then the numbers predicted by the neural network are roughly generated with the same distribution curve (if you plot the occurrences of the random numbers and of the neural network predictions, you can see that the two have the same trend, even if in the predictions curve there are many spikes). So maybe the neural network is able to learn about pseudo-random number distributions?
* If I reduce the size of the training set under a certain limit, I see that the classifier starts to always predict the same few numbers, which are among the most frequent in the pseudo-random generation. Strangely enough (or maybe not), this behaviour seems to slightly increase the winning score.
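The last bullet can be reproduced without any neural network: a model that only captures the marginal distribution ends up predicting the most frequent numbers, and that alone beats uniform guessing. A sketch with invented weights:

```python
import random
from collections import Counter

random.seed(0)
# A skewed "lottery": numbers 1..10 drawn with unequal (invented) weights.
weights = [10 - i for i in range(10)]  # number 1 is the most likely
history = random.choices(range(1, 11), weights=weights, k=10000)

# A model that only learns the marginal distribution ends up always
# predicting the most frequent number...
mode = Counter(history).most_common(1)[0][0]

# ...which already beats uniform guessing (10%) on fresh draws.
future = random.choices(range(1, 11), weights=weights, k=10000)
hit_rate = sum(n == mode for n in future) / len(future)
print(mode, hit_rate)  # hit rate well above the 0.10 uniform baseline
```

So an increased winning score here reflects the skew of the generator's distribution, not any real predictive power over the sequence itself.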
Upvotes: 2 <issue_comment>username_3: Old question, but I thought it's worth one practical answer. I happened to stumble upon it right after looking at a guide on how to build such a neural network, demonstrating an echo of Python's randint as an [example](https://machinelearningmastery.com/learn-echo-random-integers-long-short-term-memory-recurrent-neural-networks/). Here is the final code without detailed explanation, still quite simple and useful in case the link goes offline:
```
from random import randint
from numpy import array
from numpy import argmax
from pandas import concat
from pandas import DataFrame
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense

# generate a sequence of random numbers in [0, 99]
def generate_sequence(length=25):
    return [randint(0, 99) for _ in range(length)]

# one hot encode sequence
def one_hot_encode(sequence, n_unique=100):
    encoding = list()
    for value in sequence:
        vector = [0 for _ in range(n_unique)]
        vector[value] = 1
        encoding.append(vector)
    return array(encoding)

# decode a one hot encoded string
def one_hot_decode(encoded_seq):
    return [argmax(vector) for vector in encoded_seq]

# generate data for the lstm
def generate_data():
    # generate sequence
    sequence = generate_sequence()
    # one hot encode
    encoded = one_hot_encode(sequence)
    # create lag inputs
    df = DataFrame(encoded)
    df = concat([df.shift(4), df.shift(3), df.shift(2), df.shift(1), df], axis=1)
    # remove non-viable rows
    values = df.values
    values = values[5:, :]
    # convert to 3d for input
    X = values.reshape(len(values), 5, 100)
    # drop last value from y
    y = encoded[4:-1, :]
    return X, y

# define model
model = Sequential()
model.add(LSTM(50, batch_input_shape=(5, 5, 100), stateful=True))
model.add(Dense(100, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])

# fit model
for i in range(2000):
    X, y = generate_data()
    model.fit(X, y, epochs=1, batch_size=5, verbose=2, shuffle=False)
    model.reset_states()

# evaluate model on new data
X, y = generate_data()
yhat = model.predict(X, batch_size=5)
print('Expected: %s' % one_hot_decode(y))
print('Predicted: %s' % one_hot_decode(yhat))
```
I've just tried it and it indeed works quite well! It took just a couple of minutes on my old slow netbook. Here's my very own output, different from the link above; you can see the match isn't perfect, so I suppose the exit criteria are a bit too permissive:
```
...
- 0s - loss: 0.2545 - acc: 1.0000
Epoch 1/1
- 0s - loss: 0.1845 - acc: 1.0000
Epoch 1/1
- 0s - loss: 0.3113 - acc: 0.9500
Expected: [14, 37, 0, 65, 30, 7, 11, 6, 16, 19, 68, 4, 25, 2, 79, 45, 95, 92, 32, 33]
Predicted: [14, 37, 0, 65, 30, 7, 11, 6, 16, 19, 68, 4, 25, 2, 95, 45, 95, 92, 32, 33]
```
Upvotes: 3 <issue_comment>username_4: Adding to what username_1 said, the extent of randomness in the random number generation algorithm is the key issue. Following are some designs that can make an RNG weak:
**Concealed Sequences**
Suppose these are the previous few sequences of characters generated (just an example; in practice a larger range is used):
```
lwjVJA
Ls3Ajg
xpKr+A
XleXYg
9hyCzA
jeFuNg
JaZZoA
```
Initially, you can't observe any pattern in the generations, but decoding them from Base64 and writing the bytes as hex, we get the following:
```
9708D524
2ECDC08E
C692ABF8
5E579762
F61C82CC
8DE16E36
25A659A0
```
Now if we subtract each number from the previous one, we get this:
```
FF97C4EB6A
97C4EB6A
FF97C4EB6A
97C4EB6A
FF97C4EB6A
FF97C4EB6A
```
This indicates that the algorithm just adds 0x97C4EB6A to the previous value, truncates the result to a 32-bit number, and Base64-encodes the data.
The above is a basic example. Today's ML algorithms and systems are capable enough to learn and predict more complex patterns.
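The whole analysis above can be scripted. This sketch regenerates the example tokens with the hidden increment and then "attacks" them by decoding and differencing; the starting state and step are taken from the worked example above:

```python
import base64
import struct

STEP = 0x97C4EB6A  # the hidden increment (known to the generator, not the attacker)

def token(state):
    # Truncate to 32 bits, Base64-encode, strip padding: 4 bytes -> 6 characters.
    return base64.b64encode(struct.pack(">I", state)).decode().rstrip("=")

# Generator side: produce the token sequence from the example above.
states, s = [], 0x9708D524
for _ in range(7):
    states.append(s)
    s = (s + STEP) & 0xFFFFFFFF
tokens = [token(st) for st in states]
print(tokens[:2])  # ['lwjVJA', 'Ls3Ajg'], matching the sequence above

# Attacker side: decode each token and difference consecutive values mod 2^32.
vals = [struct.unpack(">I", base64.b64decode(t + "=="))[0] for t in tokens]
diffs = {(b - a) & 0xFFFFFFFF for a, b in zip(vals, vals[1:])}
print([hex(d) for d in diffs])  # ['0x97c4eb6a']: the increment is recovered
```

Once the increment is known, every future token of this generator can be predicted exactly, which is the point of the example.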
**Time Dependency**
Some RNG algorithms use time as the *major input* for generating random numbers, especially the ones created by developers themselves to be used within their application.
Whenever a weak RNG algorithm that merely appears stochastic is implemented, its output **can be extrapolated forwards or backwards with perfect accuracy**, provided a sufficient dataset is available.
Upvotes: 2 |
2017/08/22 | 879 | 2,955 | <issue_start>username_0: Multiple resources I referred to mention that MSE is great because it's convex. But I don't get how, especially in the context of neural networks.
Let's say we have the following:
* $X$: training dataset
* $Y$: targets
* $\Theta$: the set of parameters of the model $f\_\Theta$ (a neural network model with non-linearities)
Then:
$$\operatorname{MSE}(\Theta) = (f\_\Theta(X) - Y)^2$$
Why would this loss function always be convex? Does this depend on $f\_\Theta(X)$?<issue_comment>username_1: **Convexity**
>
> A function $f(x)$ with $x ∈ Χ$ is convex, if, for any $x\_1 ∈ Χ$, $x\_2 ∈ Χ$ and for any $0 ≤ λ ≤ 1$,
> $$f(λ x\_1 + (1 − λ) x\_2) ≤ λf(x\_1) + (1 − λ) f (x\_2).$$
>
>
>
It can be proven that such a convex $f(x)$ has no local minima other than its global minimum. A unique global minimum eliminates traps created by local minima that can occur in algorithms that attempt to achieve convergence on a global minimum, such as the minimization of an error function.
Although an error function may be 100% reliable in all continuous, linear contexts and many non-linear contexts, it does not mean the convergence on a global minimum for all possible non-linear contexts.
**Mean Square Error**
Given a function $s(x)$ describing ideal system behavior and a model of the system $a(x, p)$ (where $p$ is the parameter vector, matrix, cube, or hypercube), the mean square error (MSE) over training samples $x\_n$ with $1 ≤ n ≤ N$ can be represented as follows, writing $β$ for the parameters $p$.
$$e(β) := N^{-1} \sum\_{n} [a(x\_n, β) − s(x\_n)]^2$$
The material you are reading is probably not claiming that $a(x, p)$ or $s(x)$ are convex with respect to $x$, but that $e(β)$ is convex with respect to $a(x, p)$ and $s(x)$ no matter what they are. This later statement can be proven for any continuous $a(x, p)$ and $s(x)$.
**Confounding the Convergence Algorithm**
If the question is whether a specific $a(x, p)$ and method of achieving an $s(x)$ that approximates the $a(x, p)$ within a reasonable MSE convergence margin can be confounded, the answer is, "Yes." That is why MSE is not the only error model.
**Summary**
The best way to summarize is that $e(β)$ should be defined or chosen from a set of stock convex error models based on the following knowledge.
* Known properties of the system $s(x)$
* The definition of the approximation model $a(x, p)$
* Tensor used to generate the next state in the convergent sequence
The set of stock convex error models certainly includes the MSE model because of its simplicity and computational thrift.
Upvotes: 2 <issue_comment>username_2: In short: MSE is convex in its input and parameters by itself. But on an arbitrary neural network it is not always convex, due to the presence of non-linearities in the form of activation functions. The source for my answer is [here](https://math.stackexchange.com/questions/2402455/convexity-of-mse-in-neural-networks/2402472).
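A quick numeric check of that claim: the squared error is convex in the model's output, but composing it with an activation function can break convexity in the weights. This sketch tests the midpoint inequality $f((a+b)/2) ≤ (f(a)+f(b))/2$ at one hand-picked pair of points:

```python
import math

# Convex case: squared error as a function of the model output yhat (target 0).
g = lambda yhat: yhat ** 2
a, b = 1.0, 5.0
assert g((a + b) / 2) <= (g(a) + g(b)) / 2  # midpoint below the chord: convex

# Non-convex case: the same loss as a function of a weight w feeding a tanh
# unit, e(w) = (tanh(w) - 0)^2. Saturation of tanh bends the curve.
e = lambda w: (math.tanh(w) - 0.0) ** 2
mid = e((a + b) / 2)        # e(3) ~ 0.99
chord = (e(a) + e(b)) / 2   # ~ 0.79
print(mid > chord)          # True: midpoint above the chord, so e is not convex
```

A single violated midpoint inequality is enough to show non-convexity, which is why one hand-picked pair of points suffices here.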
Upvotes: 3 |
2017/08/23 | 1,154 | 4,963 |<issue_start>Let's take our standard [paperclip maximizer General AI](https://wiki.lesswrong.com/wiki/Paperclip_maximizer) and attempt to obtain precisely one million paper clips, over the course of a year, without destroying the universe in the process.
Most maximization directives make the process run away. Producing as cheaply as possible will crash the world economy. Making the clips as good as possible will turn the universe into a super-synthesizer that assembles atom-perfect paperclips. Adding a deadline to these maximization processes will probably result in terrorizing the staff into readjusting the deadline, or in the invention of time travel (after consuming the solar system to invent it). Minimizing resource usage would likely result in the closure of all industry world-wide. You know, the standard horror scenarios.
What about the directive of minimizing AI's influence on the world while completing the task? Would it be safe, or can you spot a runaway scenario where it could result in dire effects?<issue_comment>username_1: Telling the system "minimize your interference with the world" while also telling it to "maximize paperclip production" or whatever is interesting on at least one level, and that is this: how exactly does the system quantify "interference in the world"? That seems like an ill-defined notion offhand. But if you could quantify it, then it just becomes one more variable in an optimization problem which is a straightforward notion.
Detecting a runaway process in general is an interesting concept. I am not an expert, but I am betting there is some material in the [cybernetics](https://en.wikipedia.org/wiki/Cybernetics) / [control theory](https://en.wikipedia.org/wiki/Control_theory) literature on this topic. It might be as simple as watching some rate-of-change (paper-clips produced per day?) and take the first and second derivatives and look for sharp changes in [acceleration](https://en.wikipedia.org/wiki/Acceleration) or [jerk](https://en.wikipedia.org/wiki/Jerk_(physics)). Other algorithms from the world of [anomaly detection](https://en.wikipedia.org/wiki/Anomaly_detection) might also be applicable.
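As a toy illustration of the rate-of-change idea, here is a sketch that flags a runaway by thresholding the second difference (a discrete "acceleration") of daily output; the numbers and the threshold are invented:

```python
# Second differences of a time series approximate acceleration.
def diffs(xs):
    return [b - a for a, b in zip(xs, xs[1:])]

daily_output = [100, 102, 101, 103, 105, 120, 180, 400]  # invented numbers
accel = diffs(diffs(daily_output))
alerts = [i for i, a in enumerate(accel) if abs(a) > 10]
print(accel)   # [-3, 3, 0, 13, 45, 160]
print(alerts)  # [3, 4, 5]: the last three days look like a runaway
```

Real anomaly-detection methods are of course more robust (to noise, seasonality, and so on), but the principle of watching derivatives rather than raw values is the same.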
Upvotes: 2 <issue_comment>username_2: I like mindrime's answer as it identifies fundamental, applicable notions. I'll attempt another short answer from a Game Theory perspective.
Game Theory was founded on the [minimax](https://en.wikipedia.org/wiki/Minimax) principle: specifically, maximizing benefit while minimizing potential downside in a condition of uncertainty.
Minimax works very well in contexts with easily definable parameters, such as combinatorial games and procedural optimization, but it gets more tricky in real-world scenarios where too many parameters may be present, resulting in a [combinatorial explosion](https://en.wikipedia.org/wiki/Combinatorial_explosion).
Another issue regarding real-world applications arises from the [symbol grounding problem](https://en.wikipedia.org/wiki/Symbol_grounding_problem). In the paper clip scenario, the benefit sought (the goal) is clearly definable, and can be expressed mathematically. By contrast, the downsides to be avoided are more difficult to define: "Don't destroy the world", "Don't use up resources to a degree that humans suffer", "Don't harm the environment" all rely on language. This is to say they rely on terms that, at present, constitute symbols that cannot be grounded. Thus there is room for misinterpretation that may yield unintended consequences.
"Minimizing impact on the system" (minimizing influence in the world) would be the goal, but how do you guarantee an automata has a clear understanding of what this entails in every possible scenario?
Upvotes: 2 <issue_comment>username_3: From the [AI in a box](https://rationalwiki.org/wiki/AI-box_experiment) literature, it is argued that even just a text interface with the rest of the world is sufficient for an AI to gain total control.
Or, consider the literature regarding phase state changes / dynamical systems / control theory. I don't know if there is a source that directly argues for this, but it's imaginable that, since societal systems are so interconnected, a few controllable free parameters of a system might be sufficient to strongly influence the system as a whole.
So no, restricting influence is not a sufficient guarantee of reducing AI risk. A common saying is that if we know the goal of some AI, we can't predict /how/ the AI would achieve some goal since we aren't smart enough, but we can predict the eventual outcome (its success).
Upvotes: 1 <issue_comment>username_4: Here you make the wrong assumption that the AI will have only one goal at a time. But, just like humans, it will have to keep many goals in mind at all times, follow them, and watch out that a newly assigned goal doesn't conflict with its existing goals.
Your proposition to give the AI the goal "minimize impact on the world" is simplistic, as it would be harmful in some situations.
Upvotes: 0 |
2017/08/23 | 1,823 | 7,620 | <issue_start>username_0: I don't play nearly enough Chess to be able to answer. For context, AlphaGo is stronger than the current strongest human player, but AlphaGo's game play has been cast as "inhuman" in the sense that it doesn't resemble human play. *(In Go, this can involve aesthetic qualities.)*
Really I'm wondering about "narrow" application of the Imitation Game/Turing Test, where one might design an automaton to play more like a human, so that human players would be unable to determine if their opponent was a human or an automaton.<issue_comment>username_1: I recall a friend saying that yes, it is somewhat obvious if you are playing against an AI.
From what he said, against normal players there is a certain rhythm and structure that "makes sense". But AI play doesn't have this quality: "it doesn't make sense, but it just works". This seems to echo what you mentioned about the aesthetic quality of Go.
Apparently because of this, chess/go AI can be detected during online play.
Upvotes: 1 <issue_comment>username_2: There are three cases in which it is easily possible to distinguish strong AI play from strong human play:
1. The AI is playing at superhuman skill level
This seems obvious, but I want to mention it for the sake of completeness. The current skill ceiling of top-level chess is well known and an opponent playing way above this skill ceiling must either be an AI or a chess guru who hid in a cave for the last centuries. Applying Occam's razor I would go with the AI. So to mimic human play the AI must make sure to stay at a human ELO level.
2. The AI plays an obviously weak move in a losing or strange situation
When the AI is in a bad situation and still tries to win the game without any reasonable move available, it might be tempted to play an obviously bad move, because the decision engine assigns it the highest probability of success. Such moves are considered rude at high-level play and a human player would not play them but resign. The most famous example of this is move 101 in the fourth game between [Lee Sedol and AlphaGo](http://gokifu.com/s/2iq8-gokifu-20160313-Alphago(9p)-Lee_Sedol(9p).html). AlphaGo just lost a big group but still played T9 to extend it even further. This is a move a rookie player stops making after his first 10 games because it is that obvious that it does not work. A professional go player would never do this under normal circumstances. AlphaGo was out of options and played the rude move instead of resigning. This doesn't mean an AI cannot be programmed to avoid such behavior, but dire or simply very strange situations might induce moves like this, giving the nature of the AI away.
3. Other factors than the actual moves are considered
Depending on the test scenario, the AI might give away its nature through other means than the actual moves. Things like moving too fast or playing without breaks for hours. But I assume the question was just focused on the actual play and not the surrounding factors.
Those are the obvious examples where an AI might give away its true nature. I highly doubt that a strong AI, playing at a strong human level, can really be identified because of its "inhuman" style. I think it is more of a psychological pitfall to consider innovative moves from an AI "inhuman" and the same move from a pro player "brilliant" if you know who is who. I am an amateur 3-dan Go player - far from the top but with several years of experience under the belt - and at least I wasn't able to see anything "inhuman" in AlphaGo's play so far. In go, it is well known that there are different styles of play, and after reviewing enough pro games one gets a sense of which player might have been educated in Korea, China, or Japan, because certain patterns and strategies are more or less common in those countries. The lack of such an individual nuance might be an indication that the player is actually a strong AI, but I am confident that even the strongest pro wouldn't have a high success rate guessing who is human and who is the AI.
In chess, there are significantly fewer options at every move, many fixed patterns especially in the opening, and therefore less room for individual style, so I consider the same challenge even more difficult than in go. If an AI has reason to "play like a human", this should be possible with the current level of AI technology.
Upvotes: 2 <issue_comment>username_3: AI programs that exist in today's world fall into the category of narrow intelligence, which is easy to distinguish from general intelligence (the kind that resembles human intelligence more closely). Highly advanced AI can, however, sometimes appear to act like a human. I would like to talk about **Deep Blue** here.
<NAME>, in a [series of matches](http://time.com/3705316/deep-blue-kasparov/) during Feb 1996, even though he won three matches, had a **"shattering experience"**. Deep Blue flummoxed him in that first game by making a move with no immediate material advantage: nudging a pawn into a position where it could be easily captured. He said, "*It was a wonderful and **extremely human** move*", and this threw him for a loop. He further added that he had played a lot of computers but had never experienced anything like this: "**I could feel, I could smell, a new kind of intelligence across the table.**"
The next year he lost to an improved version of Deep Blue, mostly for psychological reasons. He later said that he was riled by a move the computer made that was so surprising and so **un-machine-like** that he was sure the IBM team had cheated. It was later found out that the move was caused by a glitch in Deep Blue: the program came across too many options, had no clear preference, and thus played a random move. It was this random move that shifted the game in favour of Deep Blue.
One advantage that Deep Blue (and other programs) have is the ability to perform far **more computations** in a given time. Humans, on the other hand, can extend their decision tree only to a *very limited depth*. When we have no clear decision to make, we take a random decision and try to learn from the experience, unlike most narrow intelligence programs present today. We need a **fair amount of randomness and learning from experience** to be added to a program in order to make it more like a human.
The answer to this question also depends on your position on **"[Chinese Room Argument](https://en.wikipedia.org/wiki/Chinese_room)"**.
The argument holds that a program can't give a computer understanding or consciousness, regardless of how intelligently or human-like the program may make the computer behave. The Chinese Room Argument focuses on something very important when it comes to **"Consciousness v/s Simulation of Consciousness"**. <NAME> argued there that it is possible for a machine (or human) to follow a huge number of predefined rules (an algorithm) in order to complete a task, without thinking or possessing a mind.
Upvotes: 3 [selected_answer]<issue_comment>username_4: Yes - in chess the term "computer move" is used to denote a move found by a chess engine that a human player would never find (often because it makes some slight improvement that a human would not be able to calculate). Humans use pattern recognition and some calculation in order to understand the chess position they are in, while computers are able to calculate many (~30 to 120+) moves ahead, a feat that a human player cannot match. It is for this reason that one can at times distinguish a human move from a computer move.
Upvotes: 1 |
2017/08/26 | 1,695 | 7,673 | <issue_start>username_0: When training an image-processing system, we are storing some form of data which is going to act as the knowledge or experience of the system.
* In which form can the system store its training data?
For example, with handwritten digit recognition, we can represent the digits as combinations of curves and straight lines. For every round of training, the recognition system stores data. Is the data typically stored in a flat file (such as a txt file) or in a database?
I have seen in [Tesseract OCR](https://en.wikipedia.org/wiki/Tesseract_(software)) that there is a text file that stores the x0, y0, x1, y1 values. These are the pixel coordinates that represent the square on the training image that contains the picture.
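As a concrete illustration, a file in that style can be parsed in a few lines. This sketch assumes the typical Tesseract `.box` layout of one `char x0 y0 x1 y1 page` record per line (the exact fields can vary between versions):

```python
def parse_box_lines(lines):
    """Parse Tesseract-style .box lines of the form 'char x0 y0 x1 y1 page'."""
    boxes = []
    for line in lines:
        parts = line.split()
        if len(parts) < 5:
            continue  # skip malformed lines
        char = parts[0]
        x0, y0, x1, y1 = map(int, parts[1:5])
        boxes.append((char, x0, y0, x1, y1))
    return boxes

sample = ["7 120 48 152 96 0", "A 10 20 30 40 0"]
print(parse_box_lines(sample))  # → [('7', 120, 48, 152, 96), ('A', 10, 20, 30, 40)]
```

Each tuple then tells the trainer which rectangle of the training image holds which character.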
I need an efficient form of knowledge for machine learning, and would appreciate advice, context, or an explanation of the merits or downsides of different approaches.
>
> I need a form of knowledge that is stored in a system. The human brain evaluates '7' as 'a horizontal line with a vertical line crossing down from the right end of the horizontal line'. In the same way, a machine must have some conceptual data to represent its knowledge.
>
>
><issue_comment>username_1: Your question depends heavily on the method you are using for machine learning. It sounds like you want to extract certain features like "curves and straight lines" from your images and use them as training data. This step of extraction is usually not considered part of the training process but part of pre-processing. During pre-processing you read your image in, extract certain features or perform some transformation, and store the new data as your actual training samples, or use them for training right away without intermediate storage. Storing this information is usually a good idea if you want to use the image for training more than once and want to skip the pre-processing step in future training cycles.
**Comparing File Storage and Database Storage**
How you store your processed data for training is relatively independent from the application of machine learning, and the general principles for data storage apply. Storing the data in a flat file is usually very convenient, as disk storage is readily available and the APIs for storing and reading files are part of your programming language and OS. A database adds additional complexity to your architecture but of course has its benefits, especially if you want to make the training data available to other instances or different learning engines. If you are working with a **really** huge amount of data, a well-structured database can help you organize your data better and provides helpful functions for handling this data efficiently.
Keep in mind that the actual training of your AI takes place initially and once completed, you can roll out your AI without all the training data. So you only need to handle training data for a limited time in your application lifecycle. The speed of training is usually not very important, because it happens once and not during the daily use of the AI. Therefore the speed of file access during training can be neglected.
**Conclusion**
In conclusion, for most applications, the simpler implementation using just flat files is good enough. Whether you store each sample in an individual file or pack samples together in bigger batches (using some meta information to identify the individual samples in the file) is more a matter of taste than of real technical relevance.
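To make the "pack samples together in bigger batches" option concrete, here is a minimal sketch (all names hypothetical) that serializes labelled pixel samples into a single flat-file blob and reads them back; the label acts as the per-sample meta information:

```python
def pack_samples(samples):
    """Serialize (label, pixels) pairs, one sample per line: 'label,p1,p2,...'."""
    return "\n".join(
        ",".join([str(label)] + [str(p) for p in pixels])
        for label, pixels in samples
    )

def unpack_samples(text):
    """Read the blob back into (label, pixels) pairs."""
    samples = []
    for line in text.strip().splitlines():
        fields = line.split(",")
        samples.append((int(fields[0]), [int(p) for p in fields[1:]]))
    return samples

batch = [(7, [0, 255, 128, 64]), (3, [12, 0, 0, 200])]
blob = pack_samples(batch)          # this string could be written to one flat file
assert unpack_samples(blob) == batch  # lossless round trip
```

A binary format would be more compact, but a line-per-sample text file is trivially debuggable, which matters more for small training sets.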
**Storing data after training**
Judging from your question, you sound like you understand machine learning in general. Just to clarify one potential misunderstanding for beginners - you don't have to store any of the "learned" information as data after training, as each training step just adapts weights and biases in the neural network. The actual training data can be thrown away after successful training, if you don't need it for future training cycles.
**Further Information / References**
To finish my answer I want to recommend a current course from [Stanford University about CNNs and visual recognition](https://www.youtube.com/playlist?list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv) (Dated Spring 2017). It is a great source of information for implementing neural networks for image recognition.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I think username_1 has answered it well, but perhaps the points below can add some more understanding.
[1] Usually the data stored for image processing relates to the **image pixel densities** (*as per the many courses I visited online*), which can be very well maintained using a **matrix of pixel-density values** corresponding to the resolution of each image.
Then it depends on the image's color spectrum (*is it colored or grayscale?*); for handwritten text recognition you don't really need the pixel values of a colored image.
Consider that if you have a colored image, you have to store the RGB values for each pixel, thus **tripling** the matrix size, while the training can be done very well with just the grayscale values. It is therefore better to convert all the images to grayscale in the preprocessing step, and also to take care to convert the image to grayscale when you actually use the trained network to recognize an image (*a classic novice mistake!*).
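The tripling effect is easy to see in code. Below is a minimal pure-Python sketch using the common luminosity weights (one of several grayscale conventions):

```python
def to_grayscale(rgb_image):
    """Convert an H x W nested-list image of (R, G, B) tuples to H x W
    grayscale using the luminosity weights 0.299/0.587/0.114."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in rgb_image
    ]

rgb = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
gray = to_grayscale(rgb)

# Storage drops from 3 numbers per pixel to 1 — the "tripling" in reverse.
values_rgb = sum(len(px) for row in rgb for px in row)   # 12 stored values
values_gray = sum(1 for row in gray for _ in row)        # 4 stored values
```

For real workloads you would do this with a vectorized library rather than nested lists, but the storage arithmetic is the same.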
For the **storage** of the required matrix of pixel-density values, flat-file storage would suffice, but I would recommend libraries instead (*if working in Python or similar*), like **pandas** and **numpy**, as these libraries provide a robust solution for data management and retrieval.
For more details - <https://docs.scipy.org/doc/numpy-dev/user/quickstart.html>
<https://github.com/pandas-dev/pandas>
[2] Now I would emphasize why not to use a database (DB) for storing such information.
Firstly, integrating one would only add **unnecessary overhead** to your training effort. Also, when you want to recognize an image (*which is always a single matrix of its pixel densities*), you would need a connection to the same DB and would first have to **insert** the image into it so the script could extract it from there and run recognition (*not recommended*).
One thing we often don't concern ourselves with is the quality of the training image dataset: we tend to ignore the presence of **noise** in the dataset (*it will be minimal if using a preprocessed standard dataset such as MNIST <http://yann.lecun.com/exdb/mnist/>*), but if creating your own dataset you have to account for noise and rectify it in the preprocessing step itself.
By noise I mean **loss or overlap of pixel-density information**, which usually adds a **blurring effect** to the images and can seriously damage the training metrics.
To overcome it, there are approaches that rectify the pixel values with prior learning methods, such as extracting data from obfuscated images; see <https://arxiv.org/pdf/1609.00408.pdf>.
*In the above case you would need to update the DB for every single record in the preprocessing task, even before you start to train the network. This can multiply the total number of requests to the DB, undermining the whole benefit of using it in the first place*.
I suppose the above should give you a fair idea of the kind of data to store and which data storage to prefer accordingly.
I worked out digit classification on the MNIST dataset with obscuration using an NN pipeline; see <https://github.com/kchopra456/Digit-Classification-and-Image-Obfuscation-ANN>.
Upvotes: 1 |
2017/08/26 | 1,560 | 7,090 | <issue_start>username_0: I'm quite new to image processing and AI. But I have the expertise to create a network that can be used in object detection and recognition. Most of the time I've used ANN or Naive Bayes.
Now, I want to develop a method of action recognition, something like identifying whether one is jogging, running or walking, by applying an ANN. However, I really don't have any idea how a sequence of frames can be classified.
In a static image, segmentation and feature extraction are easy. But for a moving image, I'm unsure of the approach.
Thanks in advance!
2017/08/26 | 760 | 3,496 | <issue_start>username_0: I want to write an algorithm which indicates to a robot the first point in time when it is reasonably safe to cross a road. Assume that the robot's goal is to travel to a location that requires a road crossing and that the robot is ready to cross.
Simple algorithms and decision making will probably not suffice. What features and capabilities must the algorithm have to provide crossing safety? What existing AI methods might be useful to consider for this endeavor?
For the first iteration, we can assume the traffic pattern is normal in that no vehicles are driving over the curb and there are no high speed chases or other safety related abnormalities.<issue_comment>username_1: You can use Imagination-augmented to predict wrong and fatal actions. [Deep mind is working on it](https://www.youtube.com/watch?v=llwAwE7ItdM) and I believe it is a great solution to many problems. You need to predict whether the best solution is to take a few steps forward so you do not get hit or go back a few steps.
Upvotes: 0 <issue_comment>username_2: Any AI algorithm depends on the environment, and available actuators and sensors. In our case, the environment is a road, street, etc. The primary actuator includes wheels (or legs) of the robot. Sensors include a camera, sonar system, etc.
A simple Model-based reflex algorithm can work in your case:
```
function MODEL-BASED-REFLEX-AGENT(percept) returns an action
persistent: state, agents' current conception of the world state
            model, description of how the next state depends on current state and action
rules, a set of condition-action rules
action, most recent action, initially none
state <-- UPDATE-STATE(state, action, percept, model)
rule <-- RULE-MATCH(state, rules)
action <-- rule.ACTION
return action
```
Most of the terms and functions are self-explanatory, and I will try to explain the important points. This implementation helps the robot to **keep track of the external environment by maintaining internal state** that depends on the percept history. Updating the internal state requires knowing *how the environment works without our agent* in it. For example, cars stop at a red light and start moving on the green signal. The other thing required is knowing *how the actions of our agent will affect the world*. Since your robot is only trying to cross the road, there are not many cases here: stopping the motors will stop the bot, and starting them will move it forward.
The algorithm above shows **how the current percept, combined with the old internal state, results in an updated description of the world, based on the agent's model of how the world works**. Thus, the agent's model of how the world works is the most important part. UPDATE-STATE is responsible for creating the new internal state description. The actual implementation of this will depend on the environment and technology being used.
The above algorithm has been taken from the book, *Artificial Intelligence: A Modern Approach*, for agents working in a partially observable environment. The implementation of many functions also depends on how complex you want your robot to be. For example, we haven't considered looking up while crossing the road. It is possible that something is falling from the sky: highly improbable, but possible. A list like this never ends in real-life environments.
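The pseudocode above can be sketched as a toy Python program specialized to the road-crossing robot. Everything here (the percept keys, the rule set, the model) is an illustrative assumption, not a real robotics API:

```python
def update_state(state, percept):
    # The agent's model of the world: any car seen or heard is assumed nearby.
    state["car_near"] = percept["car_visible"] or percept["engine_heard"]
    state["light"] = percept.get("light", state.get("light", "none"))
    return state

# Condition-action rules, checked in order (first match wins).
RULES = [
    (lambda s: s["car_near"], "wait"),
    (lambda s: s["light"] == "red_for_cars", "cross"),
    (lambda s: True, "wait"),  # default: stay on the curb
]

def agent(state, percept):
    state = update_state(state, percept)      # UPDATE-STATE
    for condition, action in RULES:           # RULE-MATCH
        if condition(state):
            return action                     # rule.ACTION

state = {}
print(agent(state, {"car_visible": True, "engine_heard": False}))   # → wait
print(agent(state, {"car_visible": False, "engine_heard": False,
                    "light": "red_for_cars"}))                      # → cross
```

A real robot would of course replace the boolean percepts with sensor processing and would model vehicle dynamics, but the control skeleton is the same.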
Upvotes: 3 [selected_answer] |
2017/08/27 | 2,113 | 9,195 | <issue_start>username_0: Currently, within the AI development field, the main focus seems to be on pattern recognition and machine learning. Learning is about adjusting internal variables based on a feedback loop.
[Maslow's hierarchy of needs](https://en.wikipedia.org/wiki/Maslow%27s_hierarchy_of_needs) is a theory in psychology proposed by *<NAME>* that claims that individuals' most basic needs must be met, before they become motivated to achieve higher-level needs.
* What could possibly motivate a machine to act?
* Should a machine have some sort of DNA-like structure that would describe its hierarchy of needs (similar to Maslow's theory)?
* What could be the fundamental needs of a machine?<issue_comment>username_1: I asked [professor <NAME>](https://www.ualberta.ca/science/about-us/contact-us/faculty-directory/rich-sutton) a similar question, in the first lecture of the reinforcement learning course. It seems that there are different ways to motivate the machine. In fact, machine motivation seems to me like a dedicated field of research.
Typically, machines are motivated by what we call an *objective function* or a *cost function* or a *loss function*. These are different names for the same concept. Sometimes, they are denoted by
$$L(a)$$
The goal of the machine is then to solve either a minimization problem, $\min\_a L(a)$, or a maximization problem, $\max\_a L(a)$, depending on the definition of $L$.
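As a toy instance of such a minimization problem, take $L(a) = (a - 3)^2$ and let the machine "pursue its motivation" with plain gradient descent; the agent's only drive is pushing $L$ downward:

```python
def L(a):
    """Toy objective: minimized at a = 3."""
    return (a - 3.0) ** 2

def dL(a):
    """Derivative of L."""
    return 2.0 * (a - 3.0)

a = 0.0
for _ in range(100):
    a -= 0.1 * dL(a)   # step against the gradient

print(round(a, 4))  # → 3.0, the minimizer of L
```

Real objective functions are high-dimensional and the gradients come from backpropagation, but the "motivation" is structurally this loop.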
Upvotes: 2 <issue_comment>username_2: The current method to implement motivation is some kind of artificial reward. [Deepmind's DQN](https://deepmind.com/research/dqn/) for example is driven by the score of the game. The higher the score, the better. The AI learns to adjust its actions to get the most points and therefore the most reward. This is called [reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning). The reward **motivates** the AI to adapt its actions, so to speak.
In a more technical term, the AI wants to maximize utility, which depends on the implemented [utility function](https://stackoverflow.com/questions/14360893/intelligent-agents-utility-function). In the case of DQN, this would be maximizing the score in the game.
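A minimal sketch of this reward-driven loop, assuming a toy two-armed bandit instead of a full DQN: the agent's only "motivation" is the scalar reward it receives, and it adapts its action values toward it.

```python
import random

random.seed(0)

true_reward = {"left": 0.2, "right": 0.8}   # hidden from the agent
value = {"left": 0.0, "right": 0.0}         # the agent's utility estimates

for _ in range(2000):
    # epsilon-greedy: mostly exploit the best-looking action, sometimes explore
    if random.random() < 0.1:
        action = random.choice(["left", "right"])
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    value[action] += 0.05 * (reward - value[action])  # move estimate toward reward

# After training, the agent "prefers" the action that pays more reward.
```

Nothing here resembles desire or consciousness; the preference for "right" is purely the product of the reward signal, which is the point of the answer above.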
The human brain functions in a similar fashion, although a little more complex and often not as straight forward. We as humans usually try to adjust our actions to produce a high output of **dopamine** and **serotonin**. This is in a way similar to the reward used to control AIs during reinforcement learning. The human brain learns which actions produce the most amount of those substances and finds strategies to maximize the output. This is, of course, a simplification of this complex process, but you get the picture.
When you talk about motivation, please don't mix it up with **consciousness** or [**qualia**](https://en.wikipedia.org/wiki/Qualia). Those are not required for motivation at all. If you want to discuss consciousness and qualia in AI, that's a totally different ball game.
A child isn't curious for the sake of curiosity. It gets positive reinforcement when exploring because the utility function of the child's brain rewards exploration by releasing rewarding neurotransmitters. So the mechanism is the same. Applying this to AI means defining a utility function that rewards new experiences. There is no inner drive without some kind of reinforcing reward.
Upvotes: 4 [selected_answer]<issue_comment>username_3: This is an interesting question actually.
There's a quite realistic idea about where curiosity can originate from in the book *On Intelligence*, written by <NAME> and <NAME>.
It's based on such statements:
* Mind creates its own model of the world it exists in.
* It makes predictions about everything all the time (actually Jeff Hawkins states that this is the main characteristic of intelligence).
* When a prediction about something isn't matched by the actual behavior of the world, that thing becomes very interesting to the mind (the model is wrong and should be corrected) and needs more attention.
For example, when you look at a left human eye, your brain predicts that it's a human face and there should be a second eye to the right. You look to the right and see a... nose! What a surprise! It now takes all your attention, and you have the motivation to make more observations about such a strange thing that did not fit into your model.
So I'd say that an AI might act according to its model, or behave randomly, as long as the predictions it makes about the world hold true. But once some prediction is broken, the AI gains a motivation to error-correct its model.
In a simple case, a machine starts in total randomness, just doing everything it can with its outputs. While it has no model (or a random one), whenever it detects some kind of order or repeated pattern it gets "interested" and adds it to the model. After a while, the model becomes more sophisticated, making more complex predictions and detecting higher-level mistakes in the model. Slowly it learns what to do in order to observe something interesting, instead of just remembering everything.
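The "broken prediction gets attention" loop described above can be sketched in a few lines (a toy model with an arbitrary threshold and learning rate; both are assumptions for illustration):

```python
def run(stream, threshold=1.0):
    """Flag time steps where the agent's prediction is badly broken,
    then correct the model toward what was observed."""
    prediction = stream[0]
    interesting = []
    for t, observed in enumerate(stream):
        error = abs(observed - prediction)
        if error > threshold:
            interesting.append(t)  # motivation: investigate this!
        # error-correct the model (simple exponential update)
        prediction = prediction + 0.5 * (observed - prediction)
    return interesting

# A flat stream with one surprise at index 5.
stream = [1, 1, 1, 1, 1, 9, 1, 1]
print(run(stream))  # → [5, 6, 7]: the surprise, plus after-effects until the model re-adapts
```

Note that the steps right after the surprise are also flagged: the model's correction lags, so the return to normal is itself briefly "surprising", which mirrors how attention lingers after an anomaly.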
Upvotes: 3 <issue_comment>username_4: I've spent some time thinking about this in the context of games.
The problem with reward functions is that they generally involve weighting nodes, which is useful but ultimately materially meaningless.
Here are two materially meaningful rewards:
**COMPUTATIONAL RESOURCES**
Consider a game where an AI is competing not for points, but for processor time and memory.
The better the algorithm performs at the game, the more memory and processing it has access to. This has a practical effect: the more resources available to the automaton, the stronger its capabilities. (I.e., its [rationality is less bounded](https://en.wikipedia.org/wiki/Bounded_rationality) in terms of time and space to make a decision.) Thus the algorithm would be "motivated" to prevail in such a contest.
**ENERGY**
Any automaton with a sufficient degree of "self awareness", here specifically referring to the knowledge that it requires energy to process, would be motivated to self-optimize its own code to eliminate unnecessary flipping of bits (unnecessary energy consumption).
Such an algorithm would also be motivated to ensure its power supply so that it can continue to function.
Upvotes: 1 <issue_comment>username_5: I think we give ourselves too much credit by already referring to our algorithms and machines as actually thinking and acting on motivations. In my opinion we still have a way to go before we can refer to a human creation as thinking or as having motivations beyond basic physical ones.
By that I would say that a machine's or AI algorithm's motivations are similar to a car engine's. Simple and basic, the "motivations" of a car engine to run are just the first and second laws of thermodynamics, namely the conservation of energy and the exchange between energy types, and the always increasing level of entropy in a closed system.
By having a really specific design, we can insert fuel into the system and create a lot of potential energy, which will "motivate" the engine to transform it into other types of energy (heat, sound, etc.).
An AI algorithm is exactly the same; it's just that now we're playing with electricity. We put multiple levels of abstraction between the actual level of electrons moving through wires and your Python deep-learning algorithm training to recognize images of dogs. The concept is similar in my opinion: for now we do not have machines that are complex enough to have higher-level motivations, or to develop them by themselves.
As the other answers pointed out, specific algorithms, namely reinforcement learning, try to emulate those "needs" and "motivations", but in the end, in my opinion, for now they are still just emulations. As with other deep-learning algorithms, the same basic concept described at the beginning applies: trying to minimize error by emulating concepts that we know, such as conservation of energy, following the path of least resistance, the laws of entropy, etc.
Upvotes: 1 <issue_comment>username_6: Good versus bad? From an imaging standpoint, humans/biologicals tend to notice details not "imagined". A "good" feedback learning loop might refine the correctness of the imagined: using efforts to reduce the number or magnitude of discrepancies between the imagined (predicted) environment and perceived reality, measured against "beneficial" outcomes. I'm not sure what an AI would think of as good/bad in the abstract; in my limited consideration of imagination in humans, it seems most useful in recognizing things that carry more meaning than the general background. As the imagination becomes better able to predict a "real" outcome, it is more useful to the individual in predicting a correct outcome, and more useful in identifying things that diverge from the expected. Sort of simplistic, I realize, but intelligence might be as simple as refining the ability to imagine a result and judging the more meaningful of the discrepancies.
Upvotes: 0 |
2017/08/27 | 817 | 3,515 | <issue_start>username_0: <NAME> talks in his book [Superintelligence](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies) about the many dangers of AI. He considers it necessary that strong security mechanisms are put in place to ensure that a machine, once it gains general intelligence far beyond human capabilities, does not destroy humanity (most likely by accident). He describes this as a very delicate process that most likely will go wrong.
Considering that new technologies often neglect the necessary precautions and that this is highly relevant to national security, I wonder if there are already government agencies overseeing big technology companies like Deepmind. We are currently far away from an intelligence explosion or a technological singularity, but I would assume that governments want to have a foot in the door as soon as they realize and understand the dangers.
So my question is, what government agencies currently investigate and maybe even control AI development? The answer can be general or for a specific country if there is a big difference between countries.<issue_comment>username_1: May not be quite what you’re looking for, but nonetheless helpful, I hope. The White House a year ago commissioned a [report on AI](https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf) that touches briefly on policy issues.
Upvotes: 2 <issue_comment>username_2: Although I (partly) agree with <NAME>s view that Artificial Intelligence could in some ways be dangerous. We do not need new government bodies to control or regulate AI development.
We already have sufficient cyber laws that protect us against computer crimes such as cyber terrorism, cyberbullying, creating malware, identity theft, denial-of-service attacks, unauthorized access, etc. It is the duty of local law enforcement agencies and the FBI to prevent and investigate cybercrimes such as those listed above. Whether AI was used to perpetrate the crime is legally immaterial.
Although AI is a 'new' technology, we already have a rigorous criminal justice system within our governance structures that is well capable of handling any eventualities that may arise from AI or any other technological breakthrough without being overwhelmed.
For example, if an AI causes a car accident, the manufacturer of the car can simply be charged with product liability for negligence. If an AI is defective or dangerous, we already have product liability and consumer protection laws and the relevant government agencies to implement them.
If an AI uses its intelligence to maneuver within the law to its own advantage, this by definition is not a crime; big corporations do this all the time to minimize their taxes. However, if necessary, the legislature can sit and pass a law criminalizing/banning this new activity.
A sovereign government already has enough powers and the necessary instruments to exercise that power. Creation of a new government agency will lead to the unnecessary duplication of responsibilities.
The best approach is simply for the relevant government agencies to adapt by playing a proactive role and modernizing their service delivery so that it is in sync with current developments within society. This is what everyone has to do.
In reality we do not need additional agencies. I would find agencies such as a Federal AI Agency or a Federal Blockchain Commission baffling and unproductive.
Upvotes: 0 |
2017/08/28 | 698 | 2,701 | <issue_start>username_0: In this [note](https://people.cs.umass.edu/~domke/courses/sml2011/08autodiff_nnets.pdf) <NAME> says that
>
> In practice, neural networks seem to usually find a reasonable solution when the number of layers is not too large, but find poor solutions when using more than, say, 2 hidden layers.
>
>
>
But in [Bengio's remark](https://qr.ae/pNrt0p), he says
>
> Very simple. Just keep adding layers until the test error does not improve anymore.
>
>
>
There seems to be a conflict. Can anyone explain why they suggest differently? Or am I missing something?<issue_comment>username_1: There are **many** problems requiring more than two hidden layers. Randomly select a recent Google paper on deep learning and you'll see their networks can have something like 5 (or more) hidden layers.
<NAME> wrote his notes for students, so he probably tried to make his points as simple as possible. For a "typical" machine learning problem that students would most likely work on, two hidden layers should be sufficient. But that does not hold for real, practical problems. "Deep" learning usually means more than two hidden layers.
The number of hidden layers is a design choice for which nobody knows the right answer for sure. Bengio's suggestion is common and simple. It's not a mathematical proof, but simply a guideline if you don't know what to do. You just repeat and repeat, until you see that the test error no longer improves.
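Bengio's guideline amounts to a simple search over depth with early stopping. Here is a minimal, hypothetical sketch using scikit-learn's `MLPClassifier`; the synthetic dataset, layer width of 32, and depth range are all illustrative placeholders, not taken from either note:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

best_depth, best_score = 0, -1.0
for depth in range(1, 5):           # keep adding hidden layers...
    clf = MLPClassifier(hidden_layer_sizes=(32,) * depth,
                        max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    score = clf.score(X_te, y_te)
    if score <= best_score:         # ...until the test error stops improving
        break
    best_depth, best_score = depth, score

print(best_depth, round(best_score, 3))
```

In practice you would use a proper validation set (and ideally cross-validation) rather than a single held-out split.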
Upvotes: 2 <issue_comment>username_2: In fact they are telling the same thing: Plot the x-axis as the number of hidden layers, and the y-axis as the performance (e.g. classification accuracy), then this curve will have an upside-down U shape.
Justin's note is clearly saying the same thing as what I wrote above, with the added note that the maximum of the curve will happen when x = 2, and Bengio's note is saying the same thing without telling you where the peak could be.
Upvotes: -1 <issue_comment>username_3: Your first link is from 2011, which essentially predates the current deep learning explosion. In the many years that have since passed (AlexNet 2012, ResNet 2015) we have since found that if you keep adding layers, we generally do see improved performance.
[](https://i.stack.imgur.com/chMZQ.png)
This is due to improved training techniques and optimization breakthroughs (residual connections, ReLU, dropout, etc.). But do note that the returns can be diminishing. In particular, take a look at [Deep Equilibrium Models](https://arxiv.org/pdf/1909.01377.pdf), which essentially allow us to train (via an equivalence in the limit) *infinite depth* neural networks.
Upvotes: 0 |
2017/08/29 | 1,412 | 5,611 | <issue_start>username_0: As I understand it from this [video lecture](http://videolectures.net/deeplearning2015_vincent_machine_learning/), there are three types of deep learning:
* Supervised
* Unsupervised
* Reinforcement
All these can serve to train a neural network either *only prior* to its deployment or *during* its operating.
For the latter case, it is referred to as *continuous* learning [here](http://deepmind.com/blog/enabling-continual-learning-in-neural-networks/) and [here](http://medium.com/@Synced/deep-learning-in-real-time-inference-acceleration-and-continuous-training-17dac9438b0b) and as *dynamic* learning [here](http://conferences.oreilly.com/artificial-intelligence/ai-ny/public/schedule/detail/59511) and [here](http://www.datasciencecentral.com/profiles/blogs/static-dynamical-machine-learning-what-is-the-difference).
Which term should I use to refer to a machine learning algorithm that keeps on learning (even after deployment)? If it's "continuous", is there an opposing term (such as "static" for "dynamic") for those systems that stop learning before being deployed?<issue_comment>username_1: In such a thriving field like AI, many terms are not fully established and in some areas it will take more time until the community agrees on specific terms for specific technologies.
Therefore, it might not be possible to give an definitive answer to your question yet. But it is important to note, that the references you gave do not all speak about the same type of **learning during operation**. And this gives as cues to dig deeper into the topic.
The easiest way to learn during operation is simple reinforcement learning. DQN of Deepmind works exactly that way. It tries out different moves and learns what works by receiving feedback through the reward function. There is not necessarily a separate learning and working phase. The AI just gets better over time. This means, that the NN keeps adapting to the challenge at hand and if that challenge changes, it adopts to the new challenge over time and "forgets" what worked before, if this old strategy is no longer efficient.
Your first reference to Deepmind addresses the aspect of forgetting. When talking about continuous learning, it could also mean, that a system does **not** forget previous skills but can utilize them later after learning something totally different. This requires more advanced architectures of AI. And this approach doesn't mean, that the NN must necessarily keep learning during operation. It could use this technique during training and stop learning once it is productive.
And there is a third field, recursive networks, which do not pass their information from neuron to neuron in a straight fashion but can contain loops or other types of gates, that control the flow of information. Such architectures are usually more complex than normal NNs, but resemble the working of animal brains a little more. In this field the term **dynamic** is used frequently.
Therefore, I would say that the term you are looking for is **continuous learning** - I have also read **online learning** for this type of NN - when you want to say that the NN keeps learning during production. The term **dynamic** is more common for recursive or other more complex types of NNs and fits there better in my opinion.
To conclude my answer, I need to jump back to the beginning. Many terms in the field of AI are not finally settled. Therefore my analysis is just a summary of the trends I have seen in different publications and lectures and must not hold true in the years to come. Hope it helps nevertheless.
Upvotes: 1 <issue_comment>username_2: There are several terms or expressions related to such systems, such as [*online learning*](https://web.archive.org/web/20200709121648/https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2016-19.pdf), [*incremental learning*](https://web.archive.org/web/20200709121648/https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2016-19.pdf), *continuous learning*, [*continual learning*](https://wiki.continualai.org/), and [*lifelong learning*](https://wiki.continualai.org/). They are sometimes used interchangeably, but some of them have slightly different meanings. For example, online learning does not need to be incremental, which refers to algorithms that attempt not to forget previously learned information.
The opposite of online is offline. However, the expression [*batch learning*](https://stats.stackexchange.com/q/70761/82135) is sometimes used as an antonym for online learning.
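The contrast between batch (offline) and online/incremental learning can be made concrete with scikit-learn's `partial_fit` API. This is an illustrative sketch; the data stream is a synthetic placeholder, not tied to any system mentioned above:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])

# Batch/offline learning would call fit() once on a fixed dataset.
# Online learning keeps updating as new mini-batches arrive:
for step in range(20):
    X = rng.normal(size=(32, 5))
    y = (X[:, 0] + 0.1 * rng.normal(size=32) > 0).astype(int)
    clf.partial_fit(X, y, classes=classes)  # the model keeps adapting

X_test = rng.normal(size=(200, 5))
acc = clf.score(X_test, (X_test[:, 0] > 0).astype(int))
print(round(acc, 2))
```

An *incremental* learner would additionally try to retain performance on earlier data distributions, which plain `SGDClassifier` does not guarantee.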
Upvotes: 3 <issue_comment>username_3: ML, being a relatively young and fast-developing field, has numerous (near-)synonyms for many concepts.
One paradigm difference is whether a model is learned from a static, pre-defined set of data, or whether it adapts as new data is presented to it over time.
Some of the terms used to describe these two paradigms respectively (with subtle differences in meaning between authors/terms) are:
* Offline / batch / [isolated](https://www.cs.uic.edu/%7Eliub/lifelong-learning/introduction-lifelong-learning.pdf) learning
* Online / continual / continuous / incremental / lifelong learning
Further, some familiar branches of ML (like Transfer Learning and Multi-task learning) have a lot of intersection with Continual Learning.
---
Related q's:
* [What is the difference between active learning and online learning?](https://ai.stackexchange.com/questions/23226/what-is-the-difference-between-active-learning-and-online-learning)
* <https://stats.stackexchange.com/questions/897/online-vs-offline-learning>
Upvotes: 1 |
2017/08/31 | 1,569 | 6,346 | <issue_start>username_0: **Scenario:** I am trying to create a dataset with images of choice for different animal classes. I am going to train those images for classification using CNN.
**Problem:** Let's assume I somehow don't have the privilege to collect too many images and was only able to collect a few of them for each class. Here's the list:
```
| id | animal | # |
|----|--------------|-------|
| 1 | Baboon | 800 |
| 2 | Fox | 1000 |
| 3 | Hyena | 5000 |
| 4 | Giraffe | 43 |
| 5 | Zebra | 88 |
| 6 | Hippopotamus | 233 |
| 7 | Yak | 578 |
| 8 | Polar Bear | 456 |
| 9 | Lion | 3442 |
| 10 | Indian Tiger | 40000 |
```
I have three questions.
1. Is this a good dataset to train the CNN model? I am worried about the quantity each class has.
2. Will it be helpful if I augment the data? I think I am going to augment it.
3. In the future, the above-mentioned dataset is going to increase. So there is a chance that I will train the model again. Should I create a model that fits the data of the present size or should I create a bigger one in order to adjust future data?
I can get data from the Internet. But this question is about the approaches to take when we have a small amount of data, like the one in National Data Science Bowl (classifying Planktons).<issue_comment>username_1: You can build this just with 100 images. In your case, Zebra and Giraffe need more images. With DNNClassifier (TensorFlow), you can do it. But the more images you have, the more accurate your classifier will be.
I suggest that you also watch the video: [Train an Image Classifier with TensorFlow for Poets - Machine Learning Recipes #6](https://www.youtube.com/watch?v=cSKfRcEDGUs).
Upvotes: 1 [selected_answer]<issue_comment>username_2: It is somewhat risky to discuss data independently with your learning mechanism. There is actually no such thing as good data or a good learner. There is only data that is good WITH a particular learner. That is even true of human intelligence after all the standardized education and testing done today.
There are also exceptional learners that find data to be good when most others fumble with it.
If by good data and deep learning you mean image sets that will lead to proper categorization of unsuspected images presented in production, your intuitive understanding of statistics can provide you with a general answer. The images on which the deep learner develops its activation weights and meta-parameters to provide adequate production behavior must be representative of the range of images that will be found in the production feeds.
If you intended to do a study of men and women to determine if the old belief that women are more motivated by the prospect of love and men are more motivated by the prospect of sex, you wouldn't pick 43 men and 40,000 women for the study. The study's value is limited by the lower of the two numbers.
You can train the network with the category frequencies you have, but some deep learners may capitalize fully on feature extraction for Indian Tigers and Hyenas but exhibit an unacceptable level mis-categorization of Zebras and Giraffes.
Returning to the concept above, the skew in category frequency can be accounted for by the deep learner. It is theoretically possible to create an exceptional learner or one that is well attuned to this kind of frequency skew. A simple approach is to develop a scheme that recognizes frequency skew and allocates additional computing resources to the training that focuses on the differentiation of similar animals with infrequent labeled training instances.
I don't recall who has done that, but I know it has been done.
There are several ways you can give extra attention to the infrequent categories manually in the code, but then it would be a less general solution and the resulting program would neither be an exceptional learner nor particularly reusable.
It is more cost effective to hunt for a skew resistant deep learning scheme and test its accuracy for infrequent animals than sending a photographer to Africa. If you can find more images of the less frequent animals without a monumental effort, I would do that too.
Upvotes: 2 <issue_comment>username_3: Your data set would be what is called "unbalanced' and this can lead to problems in developing an accurate classifier.
The best thing to do (which you might not be able to do) is to find more images for those classes with a smaller number of images.
Another alternative is to synthetically produce more images. One way to do that is to use the Keras `ImageDataGenerator.flow_from_directory`. Documentation is at <https://keras.io/preprocessing/image/>. Create a directory (`your_dir`). In it, create a subdirectory `Giraffe`. Place all your 43 giraffe images into that directory. Create another directory `your_save_dir`, and leave it empty. Now, create the generators shown below.
```
from keras.preprocessing.image import ImageDataGenerator

# your_dir and your_save_dir are the directories described above.
datagen = ImageDataGenerator(rotation_range=30,
                             width_shift_range=0.2,
                             height_shift_range=0.2,
                             shear_range=0.2,
                             zoom_range=0.2,
                             horizontal_flip=True,
                             fill_mode="nearest")
data = datagen.flow_from_directory(your_dir, target_size=(200, 200),
                                   batch_size=43, shuffle=False,
                                   save_to_dir=your_save_dir,
                                   save_format='png',
                                   interpolation='nearest')
images, labels = next(data)
```
Now, each time you execute the last line of code, you will generate and store 43 more images in `your_save_dir`. These images will be transformed per the parameters in the image data generator in a random manner. While NOT as good as having truly original images, it will help significantly to balance the data set.
Do the same of course for the other image sets that have a small number of samples.
Another thing that can help is, for the sets with fewer images, first, crop the images so that the animal occupies as high a percentage of pixels as possible in the cropped image. Then do the process defined above. This gives the network a higher percentage of meaningful pixels to "learn" from.
Upvotes: 2 |
2017/08/31 | 1,256 | 4,817 | <issue_start>username_0: Suppose that I have 10K images of sizes $2400 \times 2400$ to train a CNN.
How do I handle such large image sizes without downsampling?
Here are a few more specific questions.
1. Are there any techniques to handle such large images which are to be trained?
2. What batch size is reasonable to use?
3. Are there any precautions to take, or any increase and decrease in hardware resources that I can do?
Here are the system requirements
```
Ubuntu 16.04 64-bit
RAM 16 GB
GPU 8 GB
HDD 500 GB
```<issue_comment>username_1: Usually for images the feature set is the pixel density values and in this case it will lead to quite a big feature set; also down sampling the images is also not recommended as you may lose (actually will) loose important data.
[1] But there are some techniques that can help you reduce the feature set size, approaches like PCA(Principle Component Analysis) helps you in selection of important feature subset.
For detailed information see link <http://spark.apache.org/docs/latest/ml-features.html#pca>.
[2] Other than that to reduce the computational expense while training your Neural Network, you can use Stochastic Gradient Descent, rather than conventional use of Gradient Descent approach, that would reduce the size of dataset required for training in each iteration. Thus your dataset size to be used in one iteration would reduce, thus would reduce the time required to train the Network.
The exact batch size to be used is dependent on your distribution for training dataset and testing datatset, a more general use is 70-30. Where you can also use above mentioned Stochastic approach to reduce required time.
Detail for Stochastic Gradient Descent <http://scikit-learn.org/stable/modules/sgd.html>
[3] The Hardware seems apt for the upgradation would be required, still if required look at cloud solutions like AWS where you can get free account subscription upto a limit of usage.
Upvotes: 3 <issue_comment>username_2: >
> How do I handle such large image sizes without downsampling?
>
>
>
I assume that by downsampling you mean scaling down the input *before* passing it into the CNN. A [convolutional layer](http://cs231n.github.io/convolutional-networks/#conv) allows you to downsample the image within the network by picking a large stride, which is going to save resources for the next layers. In fact, that's what it has to do, otherwise your model won't fit in GPU memory.
>
> 1. Are there any techniques to handle such large images which are to be trained?
>
>
>
Commonly, researchers scale the images to a reasonable size. But if that's not an option for you, you'll need to restrict your CNN. In addition to downsampling in early layers, I would recommend getting rid of the FC layer (which normally takes most of the parameters) [in favor of a convolutional layer](http://cs231n.github.io/convolutional-networks/#convert). Also, you will have to stream your data in each epoch, because it won't fit into your GPU.
Note that none of this will prevent heavy computational load in the early layers, exactly because the input is so large: convolution is an expensive operation and the first layers will perform *a lot* of them in each forward and backward pass. In short, training will be slow.
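How much early strided convolutions shrink the input can be checked with the standard output-size formula, out = floor((n + 2p - k) / s) + 1. The (kernel, stride, padding) choices below are hypothetical, not from the answer:

```python
def conv_out(n, k, s, p):
    """Spatial output size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

n, sizes = 2400, []
# (kernel, stride, padding) per layer -- illustrative choices.
for k, s, p in [(7, 4, 3), (5, 2, 2), (3, 2, 1), (3, 2, 1)]:
    n = conv_out(n, k, s, p)
    sizes.append(n)

print(sizes)  # [600, 300, 150, 75]
```

Four strided layers already bring a 2400x2400 input down to 75x75, at which point the rest of the network becomes much cheaper.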
>
> 2. What batch size is reasonable to use?
>
>
>
Here's another problem. A single image takes `2400x2400x3x4` (3 channels and 4 bytes per pixel), which is ~70 MB, so you can hardly afford even a batch size of 10. More realistic would be 5. Note that most of the memory will be taken by CNN parameters. I think in this case it makes sense to reduce the size by using 16-bit values rather than 32-bit - this way you'll be able to double the batch size.
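The arithmetic can be verified directly (float32 inputs assumed; parameter and activation memory, which dominate in practice, are ignored here):

```python
bytes_per_image = 2400 * 2400 * 3 * 4   # H x W x channels x bytes (float32)
mb_per_image = bytes_per_image / 2**20
gpu_mem_mb = 8 * 1024                   # the 8 GB GPU from the question

print(round(mb_per_image, 1))           # ~65.9 MB per image
print(int(gpu_mem_mb // mb_per_image))  # naive upper bound on batch size
```

The naive bound of ~124 images is far above the realistic 5-10 precisely because model parameters and intermediate activations take most of the memory.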
>
> 3. Are there any precautions to take, or any increase and decrease in hardware resources that I can do?
>
>
>
Your bottleneck is GPU memory. If you can afford another GPU, get it and split the network across them. Everything else is insignificant compared to GPU memory.
Upvotes: 6 [selected_answer]<issue_comment>username_3: Such large data cannot be loaded into your memory. Lets split what you can do into two:
>
> 1. Rescale all your images to smaller dimensions. You can rescale them to 112x112 pixels. In your case, because you have a square image,
> there will be no need for cropping. You will still not be able to load
> all these images into your RAM in one go.
> 2. The best option is to use a generator function that will feed the data in batches. Please refer to the use of [fit\_generator](https://www.rdocumentation.org/packages/keras/versions/2.1.6/topics/fit_generator) as used
> in Keras. If your model parameters become too big to fit into GPU
> memory, consider using batch normalization or using a Residual model
> to reduce the number of parameters.
>
>
>
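The generator idea is framework-independent. Here is a minimal sketch in which random arrays stand in for images; the disk-loading step is a hypothetical placeholder:

```python
import numpy as np

def batch_generator(batch_size=32, image_size=(112, 112, 3), n_classes=10):
    """Yield (images, labels) batches so the full dataset never sits in RAM."""
    rng = np.random.default_rng(0)
    while True:
        # In practice, read `batch_size` image files from disk here.
        images = rng.random((batch_size,) + image_size, dtype=np.float32)
        labels = rng.integers(0, n_classes, size=batch_size)
        yield images, labels

gen = batch_generator()
images, labels = next(gen)
print(images.shape, labels.shape)
```

A generator like this can be handed to Keras's `fit_generator` (or `fit` in newer versions), which pulls one batch at a time.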
Upvotes: 2 |
2017/08/31 | 2,317 | 10,420 | <issue_start>username_0: We are careening into the future which may hold unpredictable dangers in relation to AI. I've haven't yet heard of Chappie or Robocop style police robots, but militarized drone tech is replacing many conventional weapons platforms. I love the idea that I may one day be able to transfer my consciousness to a computer, and improve my capabilities and potential. However, what constitutes morality can differ greatly among individual humans.
* How do we move forward toward the [singularity](https://en.wikipedia.org/wiki/Technological_singularity) in a way that protects humans, as opposed to possibly lead to our extinction?<issue_comment>username_1: What you are talking about is known as the [Control Problem](https://en.wikipedia.org/wiki/AI_control_problem). We have our own [tag](https://ai.stackexchange.com/questions/tagged/control-problem) for this specific topic here, which you can use for this and similar questions.
How to address the control problem is heavily discussed and still considered unsolved. Two of the important approaches are **motivation control** and **capability control**.
Motivation control aims at **creating suitable reward functions** that only reward behavior that's beneficial to the human race. This topic alone is so complex that many different competing theories exist. Minor errors in the definition of an AI's goal might lead to catastrophe.
Capability control aims at **limiting the possible scope of action** for the AI. The main problem here is, that most restrictions make the AI less useful. And to ensure robust control a lot of restrictions must be put in place. Finding the right tradeoff is a challenge we need to solve.
In addition to those two mechanisms, we must think about how we create a superintelligence. One of the major aspects here is our **understanding of the underlying technologies**. If we create a self-optimizing neural network from scratch, which we do not understand in depth, it is very likely that we will not understand the potential threats before it is too late. Creating a digital copy of a human brain as a starting point, on the other hand, is a little safer, because we can assume that it will be driven by similar factors as a real human being. Whether that's desirable is a totally different discussion, of course. But the general idea is that understanding the motivation of a superintelligence is key to steering it in the right direction.
For more details about the topics I scratched in my answer and the control problem in general, I recommend reading [Superintelligence](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies) by <NAME>. He spends many chapters on those problems and possible solutions.
If you have more in-depth questions concerning specific topics, please open a new question to get a more detailed answer.
Upvotes: 3 [selected_answer]<issue_comment>username_2: **Impending or Past?**
<NAME>, in his book, *Technopoly*, argues that the preemption of human-centered culture by technically driven culture has already occurred. <NAME>, in his book, *Technological Society*, heaped evidence behind the proposal that technology became autonomous centuries ago. Their arguments are convincing.
Some think other criteria must be met before the balance of power has been tipped in favor of some class of machines, but the traditional criteria proposed for determining when humans have been dominated are intellectually impoverished. Let's look at some of those criteria.
* Self awareness
* Superior intelligence
* Self-improving
**Self-awareness**
Self-awareness is likely to be a continuum, not a threshold. The first time an assembler assembled itself, that continuum began, followed by compilers that can compile themselves, followed by neural nets that can construct and tune neural nets. The dimensions of self-awareness are numerous, and complete self-awareness in each respect is rare. For instance, knowing our own motive is often guessed at after we exercise it. If we are honest about our own species, self-awareness in the typical human is intermittent and incomplete.
**Superior Intelligence**
Superiority in intelligence cannot be determined by a chess tournament; the vanquished chess player could possibly, on the day of the defeat, invent a new field of study. Neither Lavoisier nor Newton, if alive today, would be able to prevail against a chess program, yet no machine exposed only to information prior to the advent of modern chemistry, calculus, or physics would be able to take the steps Lavoisier or Newton made in the creation of those fields.
Intelligence is multidimensional too, and metrics to measure intelligence are unreliable. Using simplistic definitions of intelligence, devices of steel, plastic, epoxy, and silicon already exceed that of humans. Should graduate students seek out a university librarian for an answer to a question about data science or use Google Scholar? A mail sorter uses intelligent robotics to ensure we get the right mail. The actual mail delivery people occasionally deliver a parcel into the wrong mailbox even when the street address is clearly printed in one decimeter high Arial font digits on the approaching side of the mailbox. We use a CAD program to check a mechanical design clearance. Arithmetic is clearly the domain of computers.
Humans give destinations to a program and instructions are given back to the human. The human essentially rents this intelligent service. Once the instructions are given, the human becomes subservient and is expected to drive. Even upstream in the transportation process, when the destination is chosen, to a large degree it is a technology driven need that often creates the demand for transportation. Intelligence is not always correlated with power either. How intelligent are long blades of grass and a broken lawnmower? Yet they can send their legal owner across town to pick up parts and consume their entire day off to repair and mow.
**Self-improvement**
Self-improvement is a slippery concept as well, whether an individual entity can improve itself in real time or is able to construct improved versions of itself. Robotics creating robotics technically began during industrialization, when milling machines made parts for milling machines. Human supervision was involved then. Using current technology, it is quite possible that an advanced manufacturing plant could consist of devices that turn out other advanced manufacturing plants without supervision. That is the mechanical dimension of self-improvement.
If a neural net converges on a reasonably optimal weighting for a given criteria, the assumption required to call this self-improvement is that the target categorization is correct. Where power is concerned, self-improvement would be measured by what? Control over resources? Domination? Is that improvement? Perhaps ethical superiority? Can subservience be considered a goal state without severely curtailing the creative and intuitive aspects required to achieve human-like intelligence?
**Is Singularity Realistic?**
In light of this complexity, is the tipping of power between humans and machines discrete or singular? Clearly not. Calling changes in the balance of power between human driven culture and machine driven culture a singularity is naive. We do not need to be attacked by machines to lose our culture or autonomy to technology.
**We Seem to Know at Some Level**
This reality expresses itself subconsciously. Young adults express violent feelings against their mobile devices when they deliver unpalatable social information. Then they dodge the question when others inquire about the broken device display. When the device finally ceases to function, they are compelled to buy a new mobile device, a device they need and pretend to like.
**Another, More Interesting Criterion: Utility**
Human autonomy is dwarfed by the demands of the tools created for lifestyle improvement to the point that one can no longer, with formal logic, clearly differentiate which is the tool and which is the user.
This last criterion for judging when humans have become subservient to machines can be illuminated by a single test question: "Which is the tool and which is the user?" When the distinction is blurred, the dominance has already entered a shared state. Entrance into the vicinity of a tipping point, if such a tipping point actually exists, has already occurred in many regions of the world. The multidimensional nature of the balance discussed thus far indicates that there exists a complex balance with no clear point of victory or defeat.
**Irreversability**
Nonetheless, the failure of the Unabomber and other Luddites to dent the onslaught of technology indicates that the relinquishment of autonomy to machines is likely irreversible. Therefore, there may be no value in being concerned.
Upvotes: 2 <issue_comment>username_3: Predicting what happens post-singularity is simply not possible as we cannot attempt to model let alone conceptualise a mind far more complex than ours. If that is a difficult concept to get your head around, consider how far an insect's central nervous system could go in understanding human behaviour.
That fact alone is an argument against the likelihood of success for attempting any type of control.
But in terms of 'defending' against a post-singularity mind well before it happens (ie now) there are 2 solutions, with only the first offering a good likelihood of success albeit only for as long as everyone cooperates:
1. identify the technology types that are anticipated to enable the singularity, register them as 'instruments of human extinction' and regulate them accordingly;
2. ensure technology in human augmentation is sufficiently advanced
to enable human-mediated guidance/fusion during the exponential rise
in computation that will occur prior to the singularity event.
In any case, as mentioned, it is impossible to predict the behaviour of a post-singularity mind and even a human hybrid will similarly be unpredictable due to its exponentially increased cognitive/computational complexity.
An interesting consideration is the possibility that numerous singularity-level minds have already spawned in other parts of the universe (based on the likelihood of other civilisations a) existing and b) reaching that level of technological advancement).
Upvotes: 1 |
2017/09/01 | 1,158 | 4,984 | <issue_start>username_0: I think about a system which gets XML documents in various structures but with essentially the same data structure in it. For the example, let's assume each document contains data about one or more persons. So the AI would recognize a name. Somewhere else in the document there is the post address of our fictional person. The AI should now "see" the address and conclude, it belongs to our person. Anywhere else, there is a phone number in the document. Again, our AI should see the connection between our person and this phone number.
This wouldn't be a job for an AI if there wasn't a catch. If the task were merely to find and map strings like addresses and phone numbers, we could simply use a regex to match our "target strings". The catch in this scenario is this: the XML document might contain other data which does not belong to our person but is, for example, a valid phone number, and thus will match a regex.
*Would it be possible for an AI to learn this? If yes, with which framework would someone create such an AI?*
---
Sample XML document:
```
<?xml version="1.0" encoding="utf-8" ?>
<persons>
  <person>
    <name><NAME></name>
    <street>Main street 1</street>
    <city>1111 Twilight town</city>
    <country>sample country</country>
    <phone>+123 123 123</phone>
  </person>
  <person>
    <name><NAME></name>
    <street>Broadway 42</street>
    <city>4521 Traverse town</city>
    <country>sample country</country>
    <phone>+123 412123</phone>
  </person>
  <person>
    <name><NAME></name>
    <street>Seaside road 5</street>
    <city>4521 Traverse town</city>
    <country>sample country</country>
    <phone>+123 555 555</phone>
  </person>
</persons>
```<issue_comment>username_1: XML, HTML and less formal languages all respond quite nicely to being transformed or interrogated within a graph framework. XML and HTML are particularly useful in that they conform strictly to a tree-structure. That means that any good data components can be measured in terms of tree-distance to any other "good" data components.
If you extract your regex-friendly terms and keep track of where within their tree they are found, you *may* be able to cast those values into a general document-space vector (it might only need to be one-dimensional), allowing you to identify clusters of "good" vs anomalous sections of "bad" data based on a simple distance metric, or anomaly detection algorithm - say an [isolation-forest](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html) that runs on information density for example.
This depends on your data, and how much of it you can find, ideally already tagged up containing good vs bad.
If you're looking to scrape reliable address-contents, then yes, you're likely to score hits on names, addresses, postcodes and phone-numbers all appearing as tightly connected clustered groups, all within one or two nodes-distance from one another.
Meanwhile, an annotation containing a phone number lodged somewhere else is less likely to be a match.
Different documents will have different threshold densities, and differing anomaly to conformant ratios, so you'd have a task on your hands to figure out some way to automatically tune your parameters on any given document set.
In the past, I've tried doing this against html by flattening all the content into a single string of text and a similar approach yielded half-decent results, but if you're looking at XML, it's fair to expect the structure to yield more information.
Upvotes: 4 [selected_answer]<issue_comment>username_2: You would need to define 'frames', 'templates', or sets of data belonging together to form an address or other kind of data, with typical labels. So **phone** or **tel** etc would indicate a phone number, provided that their content also looks like a phone number. That's how you as a human recognise it. So you encode your domain knowledge as entities with possible attributes. Then you try and match the attributes and recognise which entity they belong to. You could have several entities with a shared subset of attributes (like a company or a person, which would both have an address). There would be other clues to tell you which it is. If **name** ends in "Ltd." or "Co", then it'll be a company, for example.
So you mix heuristics for identifying attributes, templates of which attributes combine to form an entity class you want to identify, and then pick the one that best matches. If you have several entities where all that's filled is **phone**, then you can't really tell what it is and would discard it. In your example, **name** matches various entities, but **office** would not be a valid attribute for a person (unless you decided it was). An oversimplistic heuristic might think "<NAME>" is a company, which presumably has an **office** attribute, so you need to be careful with how you design your templates.
A criterion for putting attributes together could be that they are in the same subtree of the data structure. The exact definition really depends on the data and the types of information you want to extract from it.
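To make the template idea concrete, here is a minimal Python sketch. The entity templates, attribute names, and regexes are invented for illustration and would need adapting to your real data:

```python
import re

# Hypothetical entity templates: which attributes may form each class.
TEMPLATES = {
    "person":  {"name", "phone", "address"},
    "company": {"name", "phone", "address", "office"},
}

# Illustrative attribute heuristics (not production-ready regexes).
ATTRIBUTE_PATTERNS = {
    "phone":   re.compile(r"^\+?[\d\s()-]{7,}$"),
    "address": re.compile(r"\d+\s+\w+.*(street|st\.|road|rd\.)", re.I),
}

def classify(record):
    """Match a dict of field-name -> value against the entity templates
    and return the best-fitting entity class."""
    found = set()
    for key, value in record.items():
        k = key.lower()
        if k in ("tel", "phone") and ATTRIBUTE_PATTERNS["phone"].match(value):
            found.add("phone")
        elif k == "name":
            # Crude company-vs-person clue, as suggested above.
            is_co = re.search(r"\b(Ltd|Co|Inc)\.?$", value)
            found.add("company_name" if is_co else "name")
        elif k == "office":
            found.add("office")
        elif ATTRIBUTE_PATTERNS["address"].search(value):
            found.add("address")
    names = {"name" if a == "company_name" else a for a in found}
    scores = {}
    for entity, attrs in TEMPLATES.items():
        # Reward shared attributes; penalise attributes the template does
        # not allow (e.g. office for a person) and unfilled template slots.
        scores[entity] = len(names & attrs) - len(names - attrs) - len(attrs - names)
    if "company_name" in found:
        scores["company"] += 1
    return max(scores, key=scores.get)
```

For instance, `classify({"name": "Acme Ltd", "office": "B12", "phone": "+44 20 7946 0123"})` scores "company" highest, while a record with a person-like name, phone, and street address falls through to "person".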
So yes, it is definitely possible. I'm not really sure which framework, but it should be fairly trivial to code this in a programming language of your choice.
Upvotes: 2 |
2017/09/02 | 1,022 | 4,182 | <issue_start>username_0: Is there a way to train an AI to find a specific line or symbol in an image and crop it?
An OpenCV script finds a face and crops it: how can I add my annotations?
Let's say I have an image like this:

```
+------+
| *  * |
|      |
| *  * |
|      |
+------+
```
I want it to find the \* and crop it.<issue_comment>username_1: Yes, this is possible.
There is actually a pretty **easy way that doesn't even require machine learning** and can be **implemented with a small amount of code**. You just use a framework for image processing (e.g. [PIL](https://pypi.python.org/pypi/PIL) for Python), find the marks by going over your image with an appropriate filter, and use the crop function that the framework hopefully provides (or write it yourself with 2 nested for loops copying the area). Your filter will react strongest in the areas of your image that look like the mark you defined. This could actually be implemented like a single layer conv+RELU layer in a CNN (see [this introduction](https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/) for details about conv and RELU layers).
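As a rough sketch of this non-ML approach, using plain NumPy arrays in place of a real image and an exact-match "filter" for simplicity (a real implementation would load the image with PIL and use a tolerance or a convolution instead):

```python
import numpy as np

def find_marks(image, mark):
    """Slide `mark` over `image` and return the (row, col) positions
    where the window matches the mark exactly."""
    mh, mw = mark.shape
    ih, iw = image.shape
    hits = []
    for r in range(ih - mh + 1):
        for c in range(iw - mw + 1):
            if np.array_equal(image[r:r + mh, c:c + mw], mark):
                hits.append((r, c))
    return hits

def crop_to_marks(image, mark):
    """Crop the image to the bounding box spanned by all mark occurrences."""
    hits = find_marks(image, mark)
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    mh, mw = mark.shape
    return image[min(rows):max(rows) + mh, min(cols):max(cols) + mw]

# Toy "image": zeros everywhere, with four one-pixel marks.
img = np.zeros((8, 8), dtype=int)
for r, c in [(1, 1), (1, 6), (5, 1), (5, 6)]:
    img[r, c] = 1
cropped = crop_to_marks(img, np.array([[1]]))  # the region spanned by the marks
```

The two nested loops are exactly the "write it yourself" option mentioned above; swapping the exact match for a correlation score turns this into the filter approach.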
Because you are asking explicitly about machine learning, I will also give you the hard version to solve this problem using a full CNN to identify the bounding box. Once you have found the bounding box, the actual cropping can be done like in the example above and is not part of the CNN. You could call it post processing if you like.
The technique you are looking for is called [Object Localization and Detection](https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/object_localization_and_detection.html). The linked article should tell you all the details you need. You basically build and train your CNN to localize and identify interesting objects in your image. The **easy way** is to train your CNN to **identify the 4 marks individually** and return the coordinates of the 4 areas with the highest likelihood to contain your marks. You calculate the center of those 4 individual points and feed it to your cropping function. The **hard way** is to **find the smallest bounding box containing all 4 marks**. This is a much harder problem, because the neural network needs to learn that it should ignore everything except those 4 marks during detection, although they will be a small portion of the bounding box. It can be done, but looking for the 4 marks individually would be much easier.
Which approach is best for you? Depends on your goal. If you need a **robust and efficient algorithm** for this problem, **forget about ML** and **implement it with the straightforward approach** I described first. No need for fancy learning CNNs. You can implement this with a few dozen lines of code.
If this is a research or training project to study ML, take my second approach. The easy and the hard way are both fine, the hard one obviously more challenging for your CNN. If you are new to CNN, I recommend following [this Stanford Course](https://www.youtube.com/playlist?list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv). It teaches you all you need for your project in 25 to 30 hours (excluding homework).
---
Edit: Concerning your question of how you add your own marks: I would simply do it with the same framework you use for cropping. Your program needs some kind of image manipulation capability, and this will allow pixel manipulation as well (at least the framework [PIL](https://pypi.python.org/pypi/PIL) which I suggested has this capability). Just draw your mark using this method. You can also copy a sample mark with a transparent background manually using a tool like Gimp, if you prefer that.
Upvotes: 3 <issue_comment>username_2: If you are so sure that ML is the way to go, then try the following.
1. [Tensorflow code for object detection](https://github.com/tensorflow/models/tree/master/research/object_detection)
2. [YOLO](https://pjreddie.com/darknet/yolo/)
This is especially for real time object detection.
You can train these models on your object in question and see the results. There are some other localization models that you can try.
Upvotes: 0 |
2017/09/03 | 857 | 3,255 | <issue_start>username_0: I'm working on an implementation of the backpropagation algorithm for a simple neural network, which predicts a probability of survival (1 or 0).
However, I can't get it above 80%, no matter how much I try to set the right hyperparameters. I suspect that's because my backpropagation is implemented incorrectly, since I tried 2 different types of code and both give me the same results.
Is there a way to determine whether [my implementation](https://pastebin.com/4PmYFAjH) of backpropagation is correct?<issue_comment>username_1: For future reference, you can check your correctness by the finite difference method.
<http://www.cedar.buffalo.edu/~srihari/CSE574/Chap5/Chap5.3-BackProp.pdf> (p.23)
Upvotes: 1 <issue_comment>username_2: Don't feel too bad for having gotten it slightly wrong because backpropagation is notoriously difficult to implement [[1]](https://web.archive.org/web/20171122205139/http://ufldl.stanford.edu/wiki/index.php/Gradient_checking_and_advanced_optimization).
There is a technique called [**gradient checking**](https://web.archive.org/web/20171122205139/http://ufldl.stanford.edu/wiki/index.php/Gradient_checking_and_advanced_optimization), which you can implement to test the correctness of your backpropagation implementation. I would argue that even gradient checking is a little tricky to implement.
How does gradient checking work?
* Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
* But because forward propagation is fairly straightforward to implement, most people are usually confident that they got its implementation correct. So, the trick is to use the value of $J$ to verify your code for computing $\frac{\partial J}{\partial \theta}$.
* We know that by definition, the gradient or derivative is given by:
$$\frac{\partial J}{\partial \theta} = \lim\_{\epsilon \rightarrow 0} \frac{J(\theta + \epsilon) − J(\theta − \epsilon)}{2 \epsilon}$$
Since we trust our calculation of $J$, we can easily compute the value of $J(\theta + \epsilon) − J(\theta − \epsilon)$.
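In practice one uses a small finite $\epsilon$ rather than the limit. A minimal sketch of the check, with a made-up toy cost function standing in for a real network's loss:

```python
import numpy as np

def numerical_gradient(J, theta, eps=1e-5):
    """Two-sided finite-difference approximation of dJ/dtheta."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step[i] = eps
        grad[i] = (J(theta + step) - J(theta - step)) / (2 * eps)
    return grad

def relative_error(J, analytic_grad, theta):
    """Compare the backprop (analytic) gradient against the numerical one."""
    num = numerical_gradient(J, theta)
    ana = analytic_grad(theta)
    return np.linalg.norm(num - ana) / (np.linalg.norm(num) + np.linalg.norm(ana))

# Toy cost: J(theta) = theta0^2 + 3*theta1, with analytic gradient [2*theta0, 3].
J = lambda t: t[0] ** 2 + 3 * t[1]
grad = lambda t: np.array([2 * t[0], 3.0])
err = relative_error(J, grad, np.array([1.5, -2.0]))
```

If the analytic gradient is correct, the relative error is tiny (dominated by floating-point rounding); a buggy backprop typically produces an error of order 1.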
For more information, see [this Andrew Ng's video lecture](https://www.coursera.org/learn/machine-learning/lecture/Y3s6r/gradient-checking) and [these notes](https://web.archive.org/web/20171122205139/http://ufldl.stanford.edu/wiki/index.php/Gradient_checking_and_advanced_optimization).
Upvotes: 3 [selected_answer]<issue_comment>username_3: The answers given so far focus on numerical methods to check your gradients. They are *really* useful, especially if one doesn't have much experience in backprop.
But I'd like to add a purely practical "sanity check", relatively fast and easy to perform, which also works for other issues, e.g. (rough) hyperparameter selection. To see if your network makes sense, reduce the training set to a few examples and try to overfit the network. If the loss falls to zero and training accuracy skyrockets to 1, both passes work correctly and you can move on to the real training. Otherwise, something's not right and you should dive into specific parts of the network; in particular, check the gradients numerically.
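A minimal sketch of this sanity check, using plain logistic regression as a stand-in for the network (the procedure — forward pass, backward pass, check training accuracy — is the same for an MLP):

```python
import numpy as np

# A handful of linearly separable toy samples to overfit.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 0, 1, 1])

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # forward pass
    w -= lr * (X.T @ (p - y)) / len(y)       # backward pass (gradient step)
    b -= lr * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
train_acc = np.mean((p > 0.5) == y)          # should reach 1.0 if both passes work
```

If training accuracy never reaches 1.0 on a few easy samples, something in the forward or backward pass is broken.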
Upvotes: 1 |
2017/09/03 | 936 | 4,098 | <issue_start>username_0: I understand that a neural network basically distorts (non-linear transformations) and changes the perspective (linear transformations) of the input space to draw a plane to classify data. How does the network deduce which side of a plane an input is on, and therefore output the decision? Thanks in advance.<issue_comment>username_1: If you are working with supervised learning, each training example has a label. That label is your classification of the provided input. Just like linear or logistic regression, if your problem only has 2 classes (e.g. determining whether a tumor is malignant or not), your network will have a single output. An output value of 1.0 could represent one class and an output of 0.0 the other. This is basically determining which side of a multi-dimensional plane your input is on.
If you have more than 2 (m) classes, then you will have m outputs from your network. Each output will represent the likelihood of the input matching that class. For example, if you have an image recognition network that has 4 classes: dog, truck, boat, person, you would have 4 outputs, with each one representing how likely it is that the image belongs to that class. Think of this as 4 independent planes, where each one is making a determination: does this image match this class (1 of {dog, truck, boat, person}) or not. If you had outputs of dog: 0.3, truck: 0.25, boat: 0.7, person: 0.4, one simple algorithm would be to pick the class with the maximum value -- in this case, it would classify this image as a boat.
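The "pick the class with the maximum value" rule at the end of the example above is just an argmax over the output vector:

```python
import numpy as np

# One score per class, in a fixed order matching the network's outputs.
classes = ["dog", "truck", "boat", "person"]
scores = np.array([0.3, 0.25, 0.7, 0.4])

# "Does this image match this class?" viewed per output (0.5 threshold)...
matches = [c for c, s in zip(classes, scores) if s > 0.5]

# ...and the simple "pick the maximum" decision rule.
predicted = classes[int(np.argmax(scores))]
```

Here `matches` and `predicted` both point to "boat", matching the example.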
Upvotes: 2 <issue_comment>username_2: For others who were wondering the same question as me, I'll answer it.
My view above was inconsistent. Ultimately, the last layer of a simple feed-forward network doesn't have any special properties that the previous layers lack. NNs are just glorified mathematical functions: they distort space with linear (matrix multiplication) and non-linear functions.
There's no 'decision plane' per se, only function mappings, up to the very end where we want to map the input (in this case, a binary classification problem) to one of two numbers (usually 1 or 0).
Hope this clears things up for people getting into NNs.
Upvotes: 1 <issue_comment>username_3: So I think in case of a logistic regression task a Neural Network works something like this.
First of all, I think all nodes perform the job of mapping a point to a quadrant in an n-space co-ordinate system, where the n-space is decided iteratively by the problem statement itself. In short, the nodes decide which combinations of polynomial terms of the input matter for the task at hand. Since a hidden layer doesn't perform a classification task, it outputs real numbers. The output layer, however, rounds off numbers as per the classification task.
If you have a single hidden layer, its nodes can be thought of as outputting the values that matter. This gives the NN a degree of freedom. Just as in classical machine learning we select the polynomial combinations ourselves, an NN selects them by itself iteratively. But if you look at the function of the output nodes, they just round the number off to `0` or `1`. Thus output nodes can perform simple classification tasks (like deciding which quadrant a point lies in) without a hidden layer. I believe that if we could decipher what the hidden nodes want to convey, we could entirely remove the output nodes, because hidden nodes convey information such as which quadrant a point lies in, only in a cryptic format that is resolved by the output nodes iteratively.
After knowing that information, we could easily resolve it with logical statements.
But this does not diminish the power of output nodes. In the case of linear regression, a single output node without any hidden layer can approximate a quarter of a sine wave (by adjusting the exponent of `e` to make it look like a sine wave). So it only depends on how you use the neural network.
But the basic principle is the same: a node decides whether an input is positive or negative (in the case of a logistic activation) or outputs a real number, depending on how you use it.
If you find anything contradictory please correct me.
Upvotes: 1 |
2017/09/04 | 1,070 | 5,066 | <issue_start>username_0: Recently I was working on a problem to do some cost analysis of my expenditure for some particular resource.
I usually make some manual decisions from the analysis and plan accordingly.
I have a big data set in Excel format with hundreds of columns defining the use of the resource in various time frames and types (with other detailed usage information).
I also have information about my previous 4 years of data and actual resource usage and cost incurred accordingly.
I was hoping to train a NN to predict my cost beforehand and plan even before I can manually do the cost analysis.
But the biggest problem I'm facing is the need to identify the features for such analysis. I was hoping there is some way to identify the features from the data set.
*PS - I have an idea about PCA and some other feature-set reduction techniques; what I'm looking for is a way to identify the features in the first place.*<issue_comment>username_1: There is no hard-and-fast rule for feature selection; you have to manually examine the dataset and try different feature-engineering techniques.
And there is no rule that you should apply neural networks to this problem; neural networks are time-consuming to train. Instead, you can experiment with decision-tree-based methods (random forests), since your data is in a tabular structure anyway.
Upvotes: 0 <issue_comment>username_2: That's a great question and probably one of the most difficult tasks in ML.
You do have a few options:
1. You can use weighting algorithms (e.g. Chi-squared) to understand which features are contributing most to your output
2. You can use other ML algorithms to classify whether a feature is contributing to your predictions or not
3. You may use other ML algorithms (other than NN) that inherently provide you with feature weightings (e.g. Random Forest)
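As an illustration of option 1, a chi-squared score for a binary feature against a binary target can be computed from the usual observed-vs-expected contingency counts. This is a minimal sketch; a real project would more likely use a library implementation such as scikit-learn's `chi2`:

```python
import numpy as np

def chi_squared_score(feature, target):
    """Chi-squared statistic between a binary feature column and a binary
    target; higher scores indicate a stronger association with the output."""
    score, n = 0.0, len(target)
    for f in (0, 1):
        for t in (0, 1):
            observed = np.sum((feature == f) & (target == t))
            expected = np.sum(feature == f) * np.sum(target == t) / n
            if expected > 0:
                score += (observed - expected) ** 2 / expected
    return score
```

Ranking your columns by this score surfaces the features most associated with the output: a column identical to the target scores highest, while an independent column scores near zero.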
Hope that helps
Upvotes: 0 <issue_comment>username_3: It is wise to consider not just the correlation of resource engagement with cost, but also the return on the cost of resource engagement. The typical challenge is that those returns are almost always cumulative or delayed. A case of accumulation is when the resource is the continuous tuning or improvement of a process the absence of which slows the generation of revenue. A case of delay is when research resources incur costs without revenue impact for a period of time but the revenue generation that begins if the research delivers productive results may be a substantial factor above the total cost of the results delivered.
The reason expense data by itself can lead to maladaptive network learning is because a network that is trained to reduce, for instance, marketing expenses will zero them. That would usually cause a decreasing sales lead trend until the business folds. Without including the returns in the training information, no useful learning may occur.
A basic MLP (multi-layer perceptron) will not learn the temporal characteristics of the data, the accumulation and delay aspects. You will need a stateful network. The most consistently successful network type for this kind of learning as of this writing is the LSTM (long short term memory) network type or one of its derivative variants. Revenue and balance data must be used in conjunction with expense data to train the network to predict business results for any given sequence of proposed resource engagements (fully detailed budgetary plan).
The loss function must properly balance short-term with medium- and long-term financial objectives. Negative available cash should produce a pronounced increase in the loss function so that avoidance of basic risks to reputation and to the cost of credit is learned.
Which columns in your data have strong correlations with return on investment is difficult to determine in advance. You can immediately exclude columns that conform to any one of the following criteria.
* Always empty
* Other constants, those that have the same value for every row
* Those that can always be derived from other columns
The data can be reduced in other ways:
* Fully describing data by characterizing trends in simple ways
* Using indices to specify long strings with 100% accuracy by assigning each string a number
* Compression
* Otherwise reducing redundancy in the data
RBMs (restricted Boltzmann machines) can extract features from the data and PCAs can illuminate the low information content columns, but the significance of the columns in terms of their correlation with revenue will not be identified using these devices in their basic form.
Upvotes: 0 <issue_comment>username_4: Since you have all your data in a table, a relatively simple thing to do is to consider each column independently, and then see whether the output variable (cost incurred) correlates with it.
If the column has no (or very low) correlation with the output variable, then consider it unimportant. The ones that make the cut are then considered further.
This is obviously not very different from how a decision tree algorithm would work (such as ID3).
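This simple correlation filter can be sketched directly with Pearson correlation (the 0.2 threshold here is an arbitrary illustration; in practice you would tune it per dataset):

```python
import numpy as np

def select_by_correlation(X, y, threshold=0.2):
    """Keep the columns of X whose absolute Pearson correlation with y
    exceeds `threshold`. X: (n_samples, n_features); y: (n_samples,)."""
    keep = []
    for j in range(X.shape[1]):
        col = X[:, j]
        if np.std(col) == 0:          # constant column carries no information
            continue
        r = np.corrcoef(col, y)[0, 1]
        if abs(r) > threshold:
            keep.append(j)
    return keep
```

Constant columns are skipped outright (their correlation is undefined), which also implements the "exclude columns with the same value in every row" advice from the previous answer.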
Upvotes: 1 |
2017/09/05 | 1,314 | 5,034 | <issue_start>username_0: When designing a machine-learning system, there are various parameters that have to be determined. I am interested in the following general question: is it possible to construct a dataset on which the system will have good performance with some specific set of parameters, but not with other parameters?
To be more concrete, let's focus on neural networks. Suppose we have a simple neural network: a multilayer perceptron with a single hidden layer. The size of the input is fixed, the activation function is fixed (e.g. tanh), and the output is binary. The only parameter that has to be determined is the size of the hidden layer.
My question is: given a number $n$, is it possible to construct a dataset $D\_n$ such that:
* The MLP with $n$ hidden nodes has good performance on $D\_n$ (e.g. in 10-fold cross validation);
* The MLP with $n-1$ hidden nodes has bad performance on $D\_n$
?
Note: I [asked in CS theory](https://cstheory.stackexchange.com/q/38955/9453) but got no reply.<issue_comment>username_1: The number of hidden nodes determines the dimensions of the weight matrices, so once you train a network with a certain number of hidden-layer neurons, the weights are fixed at the point you stop training. Changing the number of hidden-layer neurons makes the previous weights incompatible due to the change in weight-matrix dimensions. Even if you alter the weight matrix to work with the new network, it is again equivalent to a network with randomly assigned weights. So a fully trained network becomes a new network once you change its hidden-layer dimension, and gives bad performance. This has very little to do with the dataset.
Upvotes: 0 <issue_comment>username_2: I'm not familiar with any existing, robust methods to generate such a dataset. Here are some thoughts though.
You propose using an MLP with a single hidden layer. That means we have two weight matrices, and two activation functions (one for hidden layer, one for output layer). Some notation:
* $d$: dimensionality of input vectors
* $n$: dimensionality of hidden layer
* You mentioned binary output, so I'll assume that dimensionality is $2$
* $W\_n^{(1)} \in \mathbb{R}^{n \times d}$: the first weight matrix in the scenario where we're using $n$ hidden nodes (shaped so that $W\_n^{(1)}x$ is well-defined for $x \in \mathbb{R}^d$)
* $W\_n^{(2)} \in \mathbb{R}^{2 \times n}$: the second weight matrix in the scenario where we're using $n$ hidden nodes
* $g$: first activation function
* $h$: second activation function
Then, given an input vector $x \in \mathbb{R}^d$, our neural network will generate output $f\_n(x) = h(W\_n^{(2)} (g(W\_n^{(1)}x)))$.
Now, in general we of course expect Neural Networks to only find a local optimum, but if you want a robust solution you'll want it to be able to handle the worst case, and the worst case for your "adversarial" task is when the Neural Network manages to find the global optimum. So, we'll assume it can find the global optimum.
Essentially, what you're looking for is a dataset $D\_n$ containing a number of input vectors $x$, such that:
1. $f\_n(x) = h(W\_n^{(2)} (g(W\_n^{(1)}x)))$ provides good / optimal results (after training to the global optimum)
2. $f\_{n - 1}(x) = h(W\_{n - 1}^{(2)} (g(W\_{n - 1}^{(1)}x)))$ provides poor results (even after training to the global optimum of this setup).
In other words, you want to find a collection of vectors $x$ such that it becomes impossible to find a collection of weights in the $n - 1$ case where $f\_{n-1}(x)$ is a good approximation of $f\_n(x)$. You want to make it impossible that $f\_{n - 1}(x) \approx f\_n(x)$.
---
Now this has turned into a clear mathematical problem. I'm not familiar with any established methods in mathematics to solve a problem like this, maybe there are though. My best guess at this point in time would be a procedure like the following:
1. Generate random "ground truth" versions of the weight matrices $W^{(1)\*}\_{n}$ and $W^{(2)\*}\_{n}$ (just completely random matrices).
2. Generate completely random input vectors $x$. Compute the corresponding ground truth labels as $f^\*\_n(x) = h(W\_n^{(2)\*} (g(W\_n^{(1)\*}x)))$.
3. Hope that the neural network with $n$ hidden nodes can recover the ground truth weight matrices that were previously generated randomly.
4. Hope that the neural network with $n - 1$ hidden nodes cannot find an accurate approximation.
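Steps 1 and 2 above can be sketched as follows (the sizes and the choice of tanh for $g$ are arbitrary illustrations, and $h$ is folded into an argmax over the two outputs to produce binary labels):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, n_samples = 5, 4, 1000          # arbitrary illustrative sizes

# Step 1: random "ground truth" weight matrices.
W1 = rng.standard_normal((n, d))      # first weight matrix, n x d
W2 = rng.standard_normal((2, n))      # second weight matrix, 2 x n

# Step 2: random inputs and their ground-truth labels.
X = rng.standard_normal((n_samples, d))
hidden = np.tanh(X @ W1.T)            # g = tanh
logits = hidden @ W2.T
y = logits.argmax(axis=1)             # binary labels from the two outputs

# D_n is the pair (X, y): an n-node MLP can in principle fit it exactly,
# while we hope an (n-1)-node MLP cannot.
```

Steps 3 and 4 would then consist of training MLPs with $n$ and $n - 1$ hidden nodes on `(X, y)` and comparing their cross-validated performance.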
In theory, the MLP with $n$ hidden nodes should be able to learn the exact ground truth function. In theory, under certain conditions, the MLP with $n - 1$ hidden nodes should not be able to learn the **exact** ground truth function. I suspect those "certain conditions" would be that the rows/columns of the weight matrices should be linearly independent, which is likely with randomly generated matrices, but I'm not 100% sure on this. Even if it can't learn the **exact** ground truth, it may still be capable of learning an approximation... there may be ways to find upper bounds on how close such an approximation could get, but I'm not sure.
Upvotes: 3 [selected_answer] |
2017/09/05 | 588 | 2,609 | <issue_start>username_0: To understand the inner workings of neural networks, a fair amount of mathematical concepts is required. Backpropagation alone is a challenging technique if you are not fluent in calculating local gradients. And that's just the start of the journey.
But the more I study neural networks, the more I get the impression that all those difficult mathematical concepts are only required if you are doing actual research in neural networks or want to know what's happening under the hood. If you "just" want to implement an AI utilizing a neural network, there are several **high-level programming frameworks and libraries** readily available, including **model zoos for state-of-the-art neural networks** (e.g. VGG, GoogLeNet and ResNet) that can be used.
So my question is, **does a developer require a deep understanding of all the details nowadays**, or have we reached a level where **frameworks take care of those details** for us?<issue_comment>username_1: It depends on what **exactly** you want to be. You don't need to be a mathematician if you just want to run neural networks. Most data scientists don't understand the mathematics, but they know how to run machine learning frameworks. Generally, only PhDs understand the mathematics.
In the industry, most job positions in machine learning (e.g. software engineer, big data engineer, etc.) **don't** require advanced mathematics. But you should be very comfortable with programming, and understand simple things like matrix manipulation. Data engineers in the industry generally have good programming skills, but often can't read mathematical equations (or only simple ones).
However, quantitative positions in machine learning do require significant mathematics. That happens both in research and industry.
Good news --- if you don't have a PhD (e.g. in statistics), nobody will expect you to understand the mathematics.
Summary:
* If your job is more like a development position, mathematics is an advantage but not absolutely necessary
* If your job is research, you do need advanced mathematics understanding (e.g. PhD)
* If your job is not research but still quantitative based. Mathematics is very important.
**PS:** I work in research. **Everybody** I know who works in machine learning has a PhD.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Backpropagation and step functions (as well as a basic understanding of human brain neurons) are absolutely useful, in my opinion. Russell and Norvig have a great chapter about them in their "Intro to AI" book that will teach the basics. You don't need a PhD.
Upvotes: 0 |