date | nb_tokens | text_size | content |
|---|---|---|---|
2018/08/03 | 1,454 | 5,941 | <issue_start>username_0: I'm now reading [the following blog post](https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-7-action-selection-strategies-for-exploration-d3a97b7cceaf) but, regarding the epsilon-greedy approach, the author implied that it takes a random action with probability epsilon and takes the best action with probability 1 - epsilon.
So for example, suppose that the epsilon = 0.6 with 4 actions. In this case, the author seemed to say that each action is taken with the following probability (suppose that the first action has the best value):
* action 1: 55% (.40 + .60 / 4)
* action 2: 15%
* action 3: 15%
* action 4: 15%
However, I feel like I learned that epsilon-greedy only takes a random action with probability epsilon, and otherwise the policy function decides which action to take. And the policy function returns a probability distribution over actions, not the identifier of the action with the best value. So, for example, suppose that epsilon = 0.6 and the policy assigns the actions 50%, 10%, 25%, and 15%. In this case, the probability of taking each action should be the following:
* action 1: 35% (.40 \* .50 + .60 / 4)
* action 2: 19% (.40 \* .10 + .60 / 4)
* action 3: 25% (.40 \* .25 + .60 / 4)
* action 4: 21% (.40 \* .15 + .60 / 4)
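A quick numeric sketch of the two interpretations (the helper names are mine, not from the blog post):

```python
import numpy as np

def eps_greedy_best(q_values, eps):
    """Interpretation 1: with probability eps act uniformly at random,
    otherwise take the single best action."""
    n = len(q_values)
    probs = np.full(n, eps / n)
    probs[np.argmax(q_values)] += 1.0 - eps
    return probs

def eps_greedy_policy(policy, eps):
    """Interpretation 2: with probability eps act uniformly at random,
    otherwise sample from the policy's own distribution."""
    policy = np.asarray(policy, dtype=float)
    return (1.0 - eps) * policy + eps / len(policy)

# The numbers from the question (eps = 0.6, 4 actions):
print(eps_greedy_best([1.0, 0.2, 0.1, 0.0], 0.6))        # [0.55 0.15 0.15 0.15]
print(eps_greedy_policy([0.50, 0.10, 0.25, 0.15], 0.6))  # [0.35 0.19 0.25 0.21]
```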
Is my understanding not correct here? Does the non-random part (probability 1 - epsilon) always take the best action, or does it select the action according to the probability distribution?<issue_comment>username_1: Epsilon-greedy is most commonly used to ensure that you have some element of exploration in algorithms that otherwise output deterministic policies.
For example, value-based algorithms (Q-Learning, SARSA, etc.) do not directly have a policy as output; they have values for states or state-action pairs as outputs. The standard policy we "extract" from that is a deterministic policy that simply tries to maximize the predicted value (or, technically, a "slightly" nondeterministic policy in that, in proper implementations, it should break ties (where there are multiple equal values at the top) randomly). For such algorithms, there is not sufficient inherent exploration, so we typically use something like epsilon-greedy to introduce an element of exploration. In these cases, both of the possible explanations in your question are identical.
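The random tie-breaking mentioned above can be sketched in a few lines (a minimal illustration, not from any particular library):

```python
import numpy as np

def greedy_action(q_values, rng=None):
    """Greedy policy extracted from Q-values, breaking ties uniformly at random."""
    rng = rng or np.random.default_rng()
    q_values = np.asarray(q_values, dtype=float)
    best = np.flatnonzero(q_values == q_values.max())  # all argmax indices
    return int(rng.choice(best))

print(greedy_action([0.1, 0.9, 0.3]))  # 1 (unique maximum, deterministic)
# With ties, e.g. [1.0, 1.0, 0.2], it returns 0 or 1 with equal probability.
```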
In cases where your algorithm already outputs a complete probability distribution that does not concentrate all of the probability mass on one or a few actions, like the distribution you gave as an example in your question, it's generally not necessary to use epsilon-greedy on top of it; you already get exploration inherently, since every action has a decent probability assigned to it.
Now, I've personally mostly worked with value-based methods so far and not so much with e.g. policy gradient methods yet, so I'm not sure whether there tends to be a risk that they too quickly "converge" to situations where they place too much probability mass on some actions and too little on others. **If that's the case**, I would expect an additional layer of epsilon-greedy exploration to be useful. And, in that case, I would indeed find your explanation the most natural. Looking through, for example, the [PPO paper](https://arxiv.org/abs/1707.06347), I didn't find anything about epsilon-greedy in a quick glance. So, I suppose the combination of epsilon-greedy with "nondeterministic" policies (ignoring the case of tie-breaking in value-based methods here) simply isn't a common combination.
Upvotes: 1 <issue_comment>username_2: Let me give an example of where epsilon-greedy comes unstuck. Imagine you have an environment with a very large branching factor, like Go. If you were using epsilon-greedy for your exploration, you might find that levels higher up the search tree are very well explored, because they are hit more regularly, so you would want to select more greedily in those well-explored areas; further down the tree, where actions have not been explored, you would want to encourage more random exploration. Epsilon-greedy doesn't enable you to do that: it's one probability for all situations. So it's fine to use where the state space is quite small, but not for large state spaces.
Actor-critic methods, such as PPO, use the entropy (the measure of randomness) of the policy as part of the loss function, which inherently encourages exploration. To elaborate: early in training you would expect high entropy, as all actions may have near-equal probability of being selected; but as the model explores the actions and garners rewards, it will gradually favour the actions that lead to higher rewards, so the entropy will decrease as training progresses and the policy gradually behaves more greedily.
Because the entropy is added to the loss (or reward in case of RL, i.e. negative loss), the agent will get a higher reward if it were to select an action with low probability but ended up getting a high reward. That will have the effect of pushing up its probability of being re-selected again in future, while pushing down the previously more-favoured action probability.
This is good because it ensures there is always some element of curiosity: the agent acts greedily in areas of the state space it has already explored well, but continues to explore areas of the state space with which it is unfamiliar, which is why (in my opinion) it is superior to using epsilon-greedy.
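The entropy trajectory described above can be made concrete with a small sketch (plain numpy, hypothetical numbers):

```python
import numpy as np

def entropy(policy):
    """Shannon entropy of an action distribution; high early in training, lower later."""
    p = np.asarray(policy, dtype=float)
    p = p[p > 0]                          # 0 * log(0) is taken as 0
    return -np.sum(p * np.log(p))

uniform = [0.25, 0.25, 0.25, 0.25]       # early training: near-uniform, high entropy
greedy  = [0.97, 0.01, 0.01, 0.01]       # converged: low entropy
print(entropy(uniform))                  # ln(4) ~ 1.386, the maximum for 4 actions
print(entropy(greedy))                   # ~ 0.168, much lower

# In an actor-critic loss the entropy enters with a small positive coefficient,
# e.g. loss = policy_loss + value_loss - beta * entropy(pi), so that collapsing
# the entropy too quickly is penalised.
```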
That said, what I personally do is, once training has converged, which I define as the agent no longer garnering higher rewards and the policy having very low entropy, add some small amount of epsilon-greedy just to force some level of exploration.
Upvotes: 0 |
2018/08/03 | 474 | 2,099 | <issue_start>username_0: I need to retrieve just the text from emails. The emails can be in HTML format, and can contain huge signatures, disclaimer legalese, and broken HTML from dozens of forwards and replies. But, I only want the actual email message and not any other cruft such as the whole quotation block, signatures, etc.
This isn't really a problem that could be solved with regex because HTML mail can get very, VERY messy.
Could a neural network perform this task? What kind of problem is this? Classification? Feature selection?<issue_comment>username_1: It's certainly possible to treat this as a natural language processing problem, basically you're looking to assign "salience" scores to the text.
Really, though, that's overkill for this kind of problem. Writing a regex or a CFG parser (or better: finding an existing parser) is likely to be easier and more reliable.
Upvotes: 2 <issue_comment>username_2: It is a surmountable problem for someone experienced in software architecture and machine learning.
1. Render the message to a virtual display such as xvfb, headless Chrome, or phantomjs.
2. Capture the text with selenium, watir, or some other DOM controller, addressing your HTML and DHTML complexity concern.
3. OCR the text in inline images and insert it appropriately.
4. Once you have text with only word, line, list item, and paragraph breaks as structural separators, you have adequate separation of style and language content to then use naive Bayesian or one of the more recent forms of unsupervised categorization to find the separation point between the body and the signature block.
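For the plain-HTML part of step 2, a minimal standard-library sketch (no rendering or OCR; those need the external tools named above) might look like this:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script/style, keeping block-level breaks."""
    SKIP = {"script", "style"}
    BLOCK = {"p", "div", "br", "li", "tr"}

    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip += 1
        elif tag in self.BLOCK:
            self.parts.append("\n")      # structural separator for step 4

    def handle_endtag(self, tag):
        if tag in self.SKIP:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

    def text(self):
        raw = "".join(self.parts)
        lines = [" ".join(l.split()) for l in raw.split("\n")]
        return "\n".join(l for l in lines if l)

html = "<div>Hi Bob,</div><div>see attached.</div><style>p{color:red}</style>"
p = TextExtractor()
p.feed(html)
print(p.text())
```

The line breaks this keeps are exactly the "word, line, list item, and paragraph breaks" that step 4 needs as structural separators.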
Extending your line of thinking, you may even be able to engineer a generative strategy for automated reply, but beware, this last feat is a dozen orders of magnitude more difficult than extracting text from HTML, DHTML, and typeset images and machine learning the separating signature blocks.
This last feat, if done poorly, would get you in trouble with many of your email reply recipients, and, if done well, would place you ahead of Amazon, Apple, and Google.
Upvotes: 1 |
2018/08/03 | 699 | 2,965 | <issue_start>username_0: I'm building a deep neural network to serve as the policy estimator in an actor-critic reinforcement learning algorithm for a continuing (not episodic) case. I'm trying to determine how to explore the action space. I have read through [this text book](http://incompleteideas.net/book/the-book-2nd.html) by Sutton, and, in section 13.7, he gives one way to explore a continuous action space. In essence, you train the policy model to give a mean and standard deviation as an output, so you can sample a value from that Gaussian distribution to pick an action. This just seems like the continuous action-space equivalent of an $\epsilon$-greedy policy.
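The Gaussian exploration described in that section can be sketched as follows; `policy_net` is a stand-in for the trained network, and the action bounds are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_policy_action(state, policy_net, low=-2.0, high=2.0):
    """Sample a continuous action from the Gaussian the policy outputs.

    `policy_net` stands in for the trained network: it maps a state to a
    mean and a (positive) standard deviation."""
    mean, std = policy_net(state)
    action = rng.normal(mean, std)
    return float(np.clip(action, low, high))   # respect action bounds

# Stub networks: a wide Gaussian early in training means lots of exploration;
# as the std shrinks during training the policy becomes near-deterministic.
early = lambda s: (0.0, 1.0)
late  = lambda s: (0.5, 0.01)

print(gaussian_policy_action(None, early))  # scattered around 0.0
print(gaussian_policy_action(None, late))   # very close to 0.5
```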
*Are there other continuous action space exploration strategies I should consider?*
I've been doing some research online and found some articles related to RL in robotics and found that the [PoWER](https://papers.nips.cc/paper/3545-policy-search-for-motor-primitives-in-robotics.pdf) and [PI^2](https://arxiv.org/pdf/1206.4621) algorithms do something similar to what is in the textbook.
*Are these, or other, algorithms "better" (obviously depends on the problem being solved) alternatives to what is listed in the textbook for continuous action-space problems?*
I know that this question could have many answers, but I'm just looking for a reasonably short list of options that people have used in real applications that work. |
2018/08/04 | 1,390 | 5,081 | <issue_start>username_0: In [Introduction to Reinforcement Learning (2nd edition)](http://incompleteideas.net/book/RLbook2020.pdf) by Sutton and Barto, there is an example of the Pole-Balancing problem (Example 3.4).
In this example, they write that this problem can be treated as an *episodic task* or *continuing task*.
I think that it can only be treated as an *episodic task*, because it has a terminal event: the pole falling.
I have no idea how this can be treated as a continuing task. Even in the [OpenAI Gym cartpole env](https://gym.openai.com/envs/CartPole-v0), there is only the episodic mode.<issue_comment>username_1: The key is that reinforcement learning through something like, say, [SARSA](https://en.wikipedia.org/wiki/State%E2%80%93action%E2%80%93reward%E2%80%93state%E2%80%93action), works by splitting up the state space into discrete points, and then trying to learn the best action at every point.
To do this, it tries to pick actions that maximize the *reward signal*, possibly subject to some kind of exploration policy like [epsilon-greedy](https://jamesmccaffrey.wordpress.com/2017/11/30/the-epsilon-greedy-algorithm/).
In cart-pole, two common reward signals are:
1. Receive 1 reward when the pole is within a small distance of the topmost position, 0 otherwise.
2. Receive a reward that linearly increases with the distance the pole is off the ground.
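Both signals can be written down in a few lines; `reward_height` here uses the pole tip's height (the cosine of the angle), which is one plausible reading of the second signal:

```python
import math

def reward_near_top(pole_angle, tolerance=0.1):
    """Signal 1: +1 while the pole stays within `tolerance` radians of upright."""
    return 1.0 if abs(pole_angle) < tolerance else 0.0

def reward_height(pole_angle):
    """Signal 2: grows with the height of the pole tip; cos(angle) is 1 when
    upright and -1 when hanging down, rescaled here to [0, 1]."""
    return (math.cos(pole_angle) + 1.0) / 2.0

print(reward_near_top(0.05), reward_near_top(1.0))  # 1.0 0.0
print(reward_height(0.0), reward_height(math.pi))   # 1.0 0.0
```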
In both cases, an agent can continue to learn after the pole has fallen: it will just want to move the pole back up, and will try to take actions to do so.
However, an *offline* algorithm wouldn't update its policy while the agent is running. This kind of algorithm wouldn't benefit from a continuing task. An [online](https://link.springer.com/article/10.1023/A:1007465907571) algorithm, in contrast, updates its policy as it goes, and has no reason to stop between episodes, except that it might become stuck in a bad state.
Upvotes: 1 <issue_comment>username_2: From [Sutton & Barto's book](http://incompleteideas.net/book/RLbook2020.pdf#page=78) (p. 56)
>
> **Example 3.4: Pole-Balancing** The objective in this task is to apply forces to a cart moving along a track so as to keep a pole hinged to the cart from falling over: A failure is said to occur if the pole falls past a given angle from vertical or if the cart runs off the track. The pole is reset to vertical after each failure. This task could be treated as episodic, where the natural episodes are the repeated attempts to balance the pole. The reward in this case could be $+1$ for every time step on which failure did not occur, so that the return at each time would be the number of steps until failure. In this case, successful balancing forever would mean a return of infinity. Alternatively, we could treat pole-balancing as a continuing task, using discounting. In this case the reward would be $-1$ on each failure and zero at all other times. The return at each time would then be related to $-\gamma^{K-1}$, where $K$ is the number of time steps before failure (as well as to the times of later failures). In either case, the return is maximized by keeping the pole balanced for as long as possible.
>
>
>
Upvotes: 1 <issue_comment>username_3: It's a *continuing task* in that, after failure, the agent always gets a reward of $0$ at each time-step *ad infinitum*.
From the book:
>
> we could treat pole-balancing as a continuing task, using discounting.
> In this case the reward would be -1 on each failure and zero at all
> other times. The return at each time would then be related to $-\gamma^K$, where $K$ is the number of time steps before failure.
>
>
>
(Here I have used $\gamma$ as the discount factor).
Said another way, assuming the agent fails at the (K + 1)th step, the reward is $0$ until that step, $-1$ for it, and then $0$ for eternity.
So the return: $$G\_t = R\_{t+1} + \gamma R\_{t+2} + \gamma^2 R\_{t+3} + ... + \gamma^K R\_{t+K+1} + ... = -\gamma^K$$
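A quick numeric check of this return (hypothetical values $\gamma = 0.9$, $K = 5$):

```python
def continuing_return(K, gamma=0.9, horizon=1000):
    """G_t when the only nonzero reward is the -1 on failure, K steps ahead."""
    rewards = [0.0] * horizon
    rewards[K] = -1.0    # rewards[i] is R_{t+1+i}, so this is R_{t+K+1} = -1
    return sum(gamma**i * r for i, r in enumerate(rewards))

K, gamma = 5, 0.9
print(continuing_return(K, gamma))   # matches the closed form below
print(-gamma**K)                     # -gamma^K
```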
Upvotes: 2 <issue_comment>username_4: In this case in Sutton & Barto, the authors are talking about removing the episode termination. They then treat the falling pole being reset to a given (originally the starting) state distribution as a transition with a negative reward within a longer continuing problem. This is a change to the environment description, and it comes with some requirements, such as needing to use discounting\*.
This is different to the "absorbing state" treatment used elsewhere to put episodic and continuing tasks on the same mathematical footing.
You might use this view of a problem in any environment where the goal is to maintain a steady state. It is in part motivated by the fact that even in the episodic framing of the problem, a perfect agent would never end an episode. However, converting an episodic problem to a continuing one with resets (from limited set of transitions) as part of state transition and reward scheme may be reasonable for other purposes too.
---
\* Or the average reward setting, although they have not covered that option at that point in the book.
Upvotes: 0 |
2018/08/04 | 1,770 | 6,919 | <issue_start>username_0: Imagine I have a list (in a computer-readable form) of all problems (or statements) and proofs that math relies on.
Could I train a neural network in such a way that, for example, I enter a problem and it generates a proof for it?
Of course, those proofs would then need to be checked manually, but maybe the network could then create proofs for yet-unsolved problems from combinations of older proofs.
Is that possible?
Would it be possible, for example, to solve the Collatz-conjecture or the Riemann-conjecture with this type of network? Or, if not solve, but maybe rearrange patterns in a way that mathematicians are able to use a new "proof method" to make a real proof?<issue_comment>username_1: It's possible, but probably not a good idea.
Logical proof is one of the oldest areas of AI, and there are purpose-built techniques that don't need to be trained, and that are more reliable than a neural-network approach would be, since they don't rely on statistical reasoning, and instead use the mathematician's friend: deductive reasoning.
The main field is called "[Automated Theorem Proving](https://en.wikipedia.org/wiki/Automated_theorem_proving)", and it's old enough that it's calcified a bit as a research area. There are not a lot of innovations, but some people still work on it.
The basic idea is that theorem proving is just classical or heuristic guided search: you start from a state consisting of a set of accepted premises. Then you apply any valid logical rule of inference to generate new premises that must also be true, expanding the set of knowledge that you have. Eventually, you can prove a desired premise, either through enumerative searches like [breadth first search](https://en.wikipedia.org/wiki/Breadth-first_search) or [iterative deepening](https://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search), or through something like [A\*](https://en.wikipedia.org/wiki/A*_search_algorithm) with a domain-specific heuristic. A lot of solvers also use just one logical rule ([unification](https://en.wikipedia.org/wiki/Unification_(computer_science))) because it's complete, and reduces the branching factor of the search.
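A toy sketch of this search, using forward chaining over propositional Horn rules (no unification, purely illustrative):

```python
def prove(premises, rules, goal):
    """Saturate the knowledge base by forward chaining.

    `rules` is a list of (antecedents, consequent) pairs: if every antecedent
    is known, the consequent is a new premise that must also be true."""
    known = set(premises)
    changed = True
    while changed and goal not in known:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in known and set(antecedents) <= known:
                known.add(consequent)    # a valid inference extends our knowledge
                changed = True
    return goal in known

rules = [({"p"}, "q"), ({"q", "r"}, "s")]
print(prove({"p", "r"}, rules, "s"))   # True: p -> q, then (q, r) -> s
print(prove({"p"}, rules, "s"))        # False: r is never derivable
```

Real provers work over first-order logic and use heuristics to decide which inference to apply next, but the skeleton is this same search.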
Upvotes: 3 <issue_comment>username_2: Your idea may be feasible in general, but a neural network is probably the wrong *high level* tool to use to explore this problem.
A neural network's strength is in finding internal representations that allow for a highly nonlinear solution when mapping inputs to outputs. When we train a neural network, those mappings are learned statistically through repetition of examples. This tends to produce models that *interpolate* well when given data similar to the training set, but that *extrapolate* badly.
Neural network models also lack context, such that if you used a generative model (e.g. an RNN trained on sequences that make up valid or interesting proofs), it could easily produce statistically pleasing but meaningless rubbish.
What you will need is some organising principle that allows you to explore and confirm proofs in a combinatorial fashion. In fact something like your idea has already been done more than once, but I am not able to find a reference currently.
None of this stops you using a neural network within an AI that searches for proofs. There may be places within a maths AI where you need a good heuristic to guide searches for instance - e.g. in context X is sub-proof Y likely to be interesting or relevant. Assessing a likelihood score *is* something that a neural network can do as part of a broader AI scheme. That's similar to how neural networks are combined with reinforcement learning.
It may be possible to build your idea entirely out of neural networks in principle. After all, there are good reasons to suspect human reasoning works similarly using biological neurons (it is not proven either way that artificial ones can match this). However, the architecture of such a system is beyond any modern NN design or training setup. It definitely will not be a matter of just adding enough layers and then feeding in data.
Upvotes: 3 <issue_comment>username_3: Not in such a straightforward way as described, but neural networks have been successfully applied to guide the search for proofs. There are automated theorem provers. What they do looks roughly like this:
1. Get the mathematical statement
2. Apply one of the known mathematical equivalence transformations (theorems, axioms, etc)
3. Check if the resulting statement is trivially true. If so, our sequence of transformations is the proof (since they were all equivalence transformations). Otherwise, go to 2.
The tricky part here is to choose which transformation to apply at step 2. A neural network can be trained to predict function like
>
> Statement, Transformation --> usefulness of that transformation to that statement
>
>
>
Then, during the search, we can apply the transformation that the neural network considers the most useful. Also, proving a theorem can be considered a game, where the axioms are the rules, and you win when you've reached the proof. In this form, reinforcement learning agents can be applied to prove theorems (this has also been done successfully).
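A toy sketch of such scorer-guided search; the hand-written `score` function stands in for the trained network, and the "statements" are just integers to keep it runnable:

```python
def guided_step(statement, transformations, usefulness):
    """Apply the transformation a learned scorer rates as most useful.

    `usefulness(statement, transformation)` stands in for the trained
    (Statement, Transformation) -> usefulness network from the answer above."""
    best = max(transformations, key=lambda t: usefulness(statement, t))
    return best(statement)

# Toy setting: statements are integers, the "trivially true" goal is 0,
# and the two available transformations shrink the statement.
halve = lambda n: n // 2
decrement = lambda n: n - 1
score = lambda n, t: n - t(n)          # prefer the biggest step toward the goal

n, steps = 13, 0
while n != 0:
    n = guided_step(n, [halve, decrement], score)
    steps += 1
print(steps)  # 4: the scorer steers 13 -> 6 -> 3 -> 1 -> 0
```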
Here are papers that do similar things:
* [Generating Correctness Proofs with Neural Networks](https://arxiv.org/pdf/1907.07794.pdf) ([Proverbot9001](https://github.com/UCSD-PL/proverbot9001))
* [A Learning Environment for Theorem Proving](https://arxiv.org/pdf/1806.00608.pdf) ([GamePad](https://github.com/ml4tp/gamepad))
* [Learning to Prove with Tactics](https://arxiv.org/pdf/1804.00596.pdf) (TacticToe)
* [Learning to Prove Theorems via Interacting with Proof Assistants](https://arxiv.org/pdf/1905.09381.pdf) ([CoqGym](https://github.com/princeton-vl/CoqGym))
* [HOList: An Environment for Machine Learning of Higher-Order Theorem Proving](http://proceedings.mlr.press/v97/bansal19a/bansal19a.pdf)
* [Reinforcement Learning of Theorem Proving](https://papers.nips.cc/paper/8098-reinforcement-learning-of-theorem-proving.pdf) (rlCoP)
Upvotes: 3 <issue_comment>username_4: I've published an article with the corresponding new method based on the generative grammars of first-order theories:
[Thoughts on generative grammars and their use in automated theorem proving based on neural networks](https://www.academia.edu/43984841/Thoughts_on_generative_grammars_and_their_use_in_automated_theorem_proving_based_on_neural_networks_with_a_type_0_grammar_of_Nicods_propositional_logic_system_added_)
This approach makes it possible not to rely on previous data but to generate as much of it as is needed for machine learning. In the article, you will find the necessary theory on logic, grammars, and neural networks. You will also find examples of Python functions that literally generate proofs. I've added a grammar for propositional logic that can be naturally extended to the "real" cases of first-order theories (say, group theory or number theory).
Upvotes: 1 |
2018/08/04 | 2,239 | 9,622 | <issue_start>username_0: Do AI algorithms exist which are capable of healing themselves or regenerating a hurt area when they detect so?
For example: in humans, if a certain part of the brain gets hurt or removed, neighbouring parts take up the job. This probably happens because we are biologically unable to grow new nerve cells. Some other body parts (liver, skin), in contrast, will regenerate most kinds of damage.
Now my question is: do AI algorithms exist which take care of this, i.e. regenerating a damaged area? From my [understanding](https://www.reddit.com/r/MachineLearning/comments/2uogqt/what_is_coadaptation_in_the_context_of_neural/), this can (probably) be achieved in a NN using dropout. Is that correct? Do additional algorithms (for both AI/NN) or measures exist to make sure healing happens if there is some damage to the algorithm itself?
This can be particularly useful in cases where, say, there is a burnout in a processor cell processing some information about the environment. The other processing nodes have to compensate or fully take over the functions of the damaged cell.
(Intuitionally this can mean 2 things:
* We were not using the system of processors to its full capability.
* The performance of the system will take a hit due to other nodes taking over functionality of the damaged node)
Does this happen in the case of brain damage also? Or are my inferences wrong? (Kindly throw some light.)
**NOTE**: I am not looking for hardware compensations like re-routing; I am asking about non-tangible healing: adjusting the behavior or some parameters of the algorithm.<issue_comment>username_1: Yes, this was an active area of research in a number of different AI fields.
Probably the most directly related work is Bongard, Zykov & Lipson's [self-repairing robots](https://pdfs.semanticscholar.org/015c/b6e0ae8859ae5ae4095ab01495331310839b.pdf) from the early 2000's.
There's some more recent work from <NAME> that you can see [here](https://www.youtube.com/watch?v=uIn-sMq8-Ls) too.
There are lots of different ways to do this, but Bongard et al's approach was probably the most elegant. The basic idea was to frame it as a learning problem: the robot is able to learn the shape of its body by performing controlled experiments. When the body is damaged, the robot can detect that its body has changed shape (sensors don't report the expected values when it tries to move), perform new experiments to determine the extent of the damage, and then generate new movements that work around the damaged area. Lipson covers the basics of this system very briefly in [this video](https://www.youtube.com/watch?v=iNL5-0_T1D0).
The more modern system uses a similar approach, but tries to repair its body, rather than working around the damage. It has an internal model of what its body should look like, and then a set of cameras that help it locate the various pieces and move them to reassemble itself.
Dropout is sort of a similar idea, but dropout is usually done to encourage redundancy during training, which can help a model avoid overfitting. It's usually not done explicitly to heal a damaged system, although it would make a system more resistant to damage in the first place.
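For reference, a minimal sketch of (inverted) dropout; note that it is applied during training to encourage redundancy, not as a repair mechanism after damage:

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(activations, p):
    """Inverted dropout: zero each unit with probability p and rescale the
    survivors by 1/(1-p), so the expected activation is unchanged."""
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

h = np.ones(8)
print(dropout(h, 0.5))   # survivors scaled to 2.0, the rest zeroed
```

Because no single unit can be relied on during training, the network spreads its representation across units, which incidentally makes it more tolerant of a unit being lost later.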
Upvotes: 2 <issue_comment>username_2: The question and the example are a few contradictory.
The example is about a physical brain damage. Computer systems with the ability to self-repair exists from 1970's. They can repair a damaged disk (RAID), replace a CPU by an idle one (active/passive), mark faulty memory blocks, redirect network traffic from broken links to available ones, ... nowadays near than all hardware failures are covered.
However, the question is about "**algorithms** capable of healing themselves", that has a parallelism in "persons capable of healing from a psychological problem".
Like in the case of persons, it depends of the problem, and the amount of recovery expected.
Some easier cases are:
* Lots of non-AI systems have the ability to re-synchronize, auto-calibrate, and so on.
* Any minimally intelligent system can "stop" if it detects that it is producing continuously wrong results.
Going a step further, thinking of ML (neural nets, etc.), we can note that all unsupervised learning machines can recover from a misalignment of their parameters simply by re-executing the learning process (or by executing it continuously).
Finally, we could ask: "can a machine recover from an error in its reward function?" At this point, my answer is "I do not know of any system capable of that, because they have no common sense".
Upvotes: 1 <issue_comment>username_3: Good question. It is related to the genetic algorithm concept, automated bug detection, and continuous integration.
**Early Genetically Inspired Algorithms**
Some of the Cambridge LISP code in the 1990s worked deliberately toward self-improvement, which is not the same as self-repair, but the two are conceptual siblings.
Some of those early LISP algorithms were genetically inspired but not pure simulations of DNA mutation with natural selection through sexual reproduction. A few of these evolution-like algorithms evaluated their own effectiveness based on a fixed effectiveness model. The effectiveness model would accept reported objective metrics at run time and analyze them. When the analysis returned an assessment of effectiveness below a minimum threshold, the LISP code would perform this procedure.
* Copy itself (which is easy in LISP)
* Mutate the algorithm in the copy according to some meta-rules
* Run the mutation in parallel as a production simulation for a while
* Check whether the mutation's effectiveness outperformed its own
If the mutation was gauged as more effective, it would perform four more selfless steps.
* Make a record of itself
* Attach its own performance for later meta-rule use
* Load the mutation it created in its own place
* Perform apoptosis
Unlike biological apoptosis, apoptosis in these algorithms simply passes computational resources and run-time control to the mutation that was loaded.
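The procedure above can be sketched in a language-agnostic way (Python here rather than LISP; the "algorithms" are reduced to parameter vectors and the fixed effectiveness model is a stand-in):

```python
import random

random.seed(0)

def self_improving_loop(algorithm, effectiveness, mutate, generations=50):
    """Sketch of the LISP-style loop: copy, mutate the copy, evaluate both,
    and let the mutant take over only when it measures as more effective."""
    for _ in range(generations):
        mutant = mutate(algorithm)            # copy itself, mutate the copy
        if effectiveness(mutant) > effectiveness(algorithm):
            algorithm = mutant                # 'apoptosis': hand over control
    return algorithm

# Toy stand-ins: effectiveness is a fixed model that rewards small parameters.
effectiveness = lambda a: -sum(x * x for x in a)
mutate = lambda a: [x + random.gauss(0, 0.3) for x in a]

start = [2.0, -1.5]
best = self_improving_loop(start, effectiveness, mutate)
print(effectiveness(best) >= effectiveness(start))  # True, by construction
```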
This procedure was and probably still is easier in LISP than in other languages, although lovers of other languages would argue endlessly that point.
**Extensions of Continuous Integration**
This is also the closed loop continuous improvement strategy intended when bug reporting is integrated with continuous integration development platforms and tools. We see extensions of continuous integration in the feeding of bug lists from automated detection, especially for crashes, in many applications, frameworks, libraries, drivers, and operating system today. Many of the elements of closed loop self-repair are already in general practice among the most progressive development teams.
The bug fixes themselves are not yet automated in the way researchers were attempting in the LISP code above. Developers and team leaders are following a process similar to this.
* Developer or team lead associates (assigns) bug to developer
* Developer attempts to replicate the bug with the corresponding version of the code
* If replicated, the root cause is found
* A design for a fix occurs at some level
* The fix is implemented
If continuous integration and proper configuration management are in place, then when a change is committed to the team repository, it is applied to the correct branches and the suite of unit, integration, and functional tests is run to detect any breakage the fix may have inadvertently caused.
**Several Pieces of Full Automation are Already in Use**
As one can see, many of the pieces are in place for automatic algorithm, configuration, and deployment package self-repair. There are even projects underway in several corporations to automatically create functional tests by recording user behavior and user answers to questions like, "Was this helpful?"
**What is Missing**
What needs further development to more completely see full life cycle self-improving and self-repairing software?
* Automatic bug replication
* Automatic unit test creation
* Automatic repair design
* Automatic creation of code from design
**Next Steps**
I suggest that the next steps to be done are these.
* Assess work already done on the four missing automations above
* Review the LISP procedure that was perhaps shelved in the 1990s, or perhaps not, since we cannot see (and should not see) what was classified or made company confidential
* Consider how the machine learning building blocks that have emerged within the last two decades may help
* Find stakeholders to provide project resources
* Get working
**A Note on Demand, Ethics, and Technological Displacement**
Truth be told, the quality of software was a problem in the 1980s, 1990s, 2000s, and 2010s. Just today, I found over a dozen bugs in software that is considered a stable release, when performing some of the most basic functions the software was designed to do.
Given the size of bug lists, just as accidents call into question whether humans should be driving cars, it is questionable whether humans should be maintaining software quality.
Humanity has survived replacement in a number of things already.
* Arithmetic with a pencil and eraser is gone
* Professional farming with garage tools is gone
* Creating advertising mechanicals with Exacto knives is gone
* Sorting mail by hand is gone
* Communicating by horse-back courier is gone
Few software engineers are happy just fixing bugs. They seem to be happiest creating new software filled with bugs that someone else is supposed to fix. Why not let that someone else be artificial?
Upvotes: 3 [selected_answer] |
2018/08/05 | 848 | 3,099 | <issue_start>username_0: I am looking for books or state-of-the-art papers about current **development** trends for strong AI.
Please do not include opinions about the books; just give the reference and a brief description. To emphasize, I am not looking for books on applied AI (e.g. neural networks or the book by Norvig). Furthermore, do not consider *AGI proceedings*, which contain papers that focus on very concrete aspects. The related Wikipedia article describes some active lines of investigation in AGI (cognitive, neuroscience, etc.), but it cannot be considered an educational/introductory resource. Finally, I am not interested in philosophical questions about AI safety, risk, or morality unless they relate to development. Development does not exclude its mathematical foundations.
For example, if I look at this list "<https://bigthink.com/mike-colagrossi/the-10-best-books-on-ai>", the final **candidate list becomes empty**.<issue_comment>username_1: There is actually a book called [Artificial General Intelligence](https://books.google.com/books?id=ZRKMsqOIv60C&printsec=frontcover&dq=artificial%20general%20intelligence&hl=en&sa=X&ved=0ahUKEwjm-qGAw9jcAhUCxVkKHc10ClYQ6AEILDAB#v=onepage&q=artificial%20general%20intelligence&f=false) by <NAME> and <NAME>. It's a bit out of date (from 2008), and published as a Springer-Verlag monograph (which tends to have fairly low editorial standards). This one is also an anthology, with each chapter written by a different author. It's probably not suitable as an undergraduate-level book, but it does seem to contain something like the information that's wanted.
Upvotes: 2 <issue_comment>username_2: The paper [Artificial General Intelligence: Concept, State of the Art, and Future Prospects](https://content.sciendo.com/view/journals/jagi/5/1/article-p1.xml) (2014), by <NAME> (one of the people that are really still very interested in AGI), surveys the field of artificial general intelligence (AGI), its progress, approaches, mathematical formalisms, engineering, and biology-inspired perspectives, and metrics for assessing AGI.
Just to give a little bit more context and whet your appetite, let me briefly describe the different approaches to AGI (section 3, p. 14).
* **symbolic** approach (which is based on the [Physical Symbol System Hypothesis](http://ai.stanford.edu/users/nilsson/OnlinePubs-Nils/PublishedPapers/pssh.pdf); examples of this approach are *ACT-R* or *SOAR*),
* **emergentist** approach (aka **sub-symbolic**, i.e. the use of neural networks, and similar *sub-symbolic* models, from which abstract symbolic processing/reasoning can or is expected to *emerge*; so examples of this approach is *deep learning*, *computational neuroscience*, and *artificial life*),
* **hybrid** approach (a combination of the symbolic and sub-symbolic approaches; examples of this approach are *CLARION* and *CogPrime*), and
* **universalist** approach (examples of this approach are the *[AIXI](https://ai.stackexchange.com/a/10377/2444)* and *Gödel machine*).
Upvotes: 1 |
2018/08/06 | 1,926 | 7,665 | <issue_start>username_0: Having analyzed and reviewed a number of articles and questions, it appears that the expression *computational intelligence* (CI) is not used consistently, and the relationship between CI and artificial intelligence (AI) is still unclear.
According to [IEEE computational intelligence society](https://cis.ieee.org/about/mission-vision-foi)
>
> The Field of Interest of the Computational Intelligence Society (CIS) shall be the theory, design, application, and development of biologically and linguistically motivated computational paradigms emphasizing neural networks, connectionist systems, genetic algorithms, evolutionary programming, fuzzy systems, and hybrid intelligent systems in which these paradigms are contained.
>
>
>
which suggests that CI could be a sub-field of AI or an umbrella term used to group certain AI sub-fields or topics, such as genetic algorithms or fuzzy systems.
What is the difference between artificial intelligence and computational intelligence? Is CI just a synonym for AI?<issue_comment>username_1: >
> What is the difference between Artificial Intelligence and Computational Intelligence?
>
>
>
**The short answer** is that they are two parallel research efforts working on similar problems, but with different methodologies and histories. Essentially, they study similar things, but with different tools. In the modern context, computational intelligence tends to use bio-inspired computing, like evolutionary and genetic algorithms. AI tends to prefer techniques with stronger theoretical guarantees, and still has a significant community focused on purely deductive reasoning. The main area of overlap is in machine learning, especially neural networks.
---
**The longer answer** is that your source from 1948 says they are synonyms in part because it predates the split in the research community, which took place later.
The two communities have always had some overlap in topics but, in my experience, are mostly skeptical of each other's methodologies and mostly publish in separate journals. Some authors, however, consider CI to be a subset of AI, particularly those [writing in the 1990s](https://link.springer.com/chapter/10.1007/978-3-642-58930-0_2).
Example topics that are solidly in AI but definitely not in CI are logical and expert systems, and statistical approaches to machine learning like regression.
Example topics that are solidly in CI but perhaps not in AI (depending on whether one views CI as a subset of AI or not) are genetic programming, fuzzy logic, and ant colony optimization.
As a rule, AI-rooted techniques have better theoretical guarantees, and better developed theory in general (there are exceptions though). For example, Fuzzy Logic has been strongly criticized for the lack of a solid theoretical foundation (good modern summary [here](https://link.springer.com/chapter/10.1007/978-3-540-93802-6_10)), as have genetic and evolutionary approaches (most famously, both lack a proof of convergence within finite time to a global optimum on a smooth surface, even though they do quite well in practice).
CI-rooted techniques nonetheless often see major performance advantages in specific problems (see, for instance, deep learning results), and tend to have a strong experimental and engineering tradition. The [No Free Lunch theorems](http://www.no-free-lunch.org/) are often used to justify their use when theoretical certainty is missing. Basically, the theorems say that, in learning and optimization problems, a technique can only perform well on a problem by performing poorly on some other problem. CI authors argue that there are some problem domains in which their techniques work well (which must be true, because simpler algorithms like hill-climbing outperform them on simple problems).
Check out this paper for lots more [references](https://link.springer.com/chapter/10.1007/978-3-642-58930-0_2) on CI, or this [book](http://jasss.soc.surrey.ac.uk/7/1/reviews/ramanath.html) for a list of core topics in the field.
Upvotes: 3 <issue_comment>username_2: The book [Computational Intelligence: An Introduction](https://papers.harvie.cz/unsorted/computational-intelligence-an-introduction.pdf) (2nd edition, 2007) by [<NAME>](https://ieeexplore.ieee.org/author/37276400500), which has been cited more than 3000 times, defines **artificial intelligence** as follows
>
> These intelligent algorithms include artificial neural networks, evolutionary computation, swarm intelligence, artificial immune systems, and fuzzy systems. Together with logic, deductive reasoning, expert systems, case-based reasoning and symbolic machine learning systems, these intelligent algorithms form part of the field of **Artificial Intelligence (AI)**. Just looking at this wide variety of AI techniques, AI can be seen as a combination of several research disciplines, for example, computer science, physiology, philosophy, sociology and biology.
>
>
>
and **computational intelligence** as follows
>
> This book concentrates on a sub-branch of AI, namely **Computational Intelligence (CI)** – the study of adaptive mechanisms to enable or facilitate intelligent behavior in complex and changing environments. These mechanisms include those AI paradigms that exhibit an ability to learn or adapt to new situations, to generalize, abstract, discover and associate. The following CI paradigms are covered: artificial neural networks, evolutionary computation, swarm intelligence, artificial immune systems, and fuzzy systems.
>
>
>
He then notes
>
> At this point it is necessary to state that there are different definitions of what constitutes CI. This book reflects the opinion of the author, and may well cause some debate. For example, swarm intelligence (SI) and artificial immune systems (AIS) are classified as CI paradigms, while many researchers consider these paradigms to belong only under Artificial Life. However, both particle swarm optimization (PSO)
> and ant colony optimization (ACO), as treated under SI, satisfy the definition of CI given above, and are therefore included in this book as being CI techniques. The same applies to AISs.
>
>
>
So, there may be different definitions of CI (given by different people), but, given that this book has been cited so many times, I would just stick to these definitions and use this book as a reference (I have actually consulted it a few times in the past). My university library even contains a copy of it.
To summarise, CI is a sub-field of AI, which studies (or is associated with) the following topics
* artificial neural networks (NN),
* evolutionary computation (EC),
* swarm intelligence (SI),
* artificial immune systems (AIS), and
* fuzzy systems (FS).
which are also part of AI, which additionally studies
* logic,
* deductive reasoning,
* expert systems,
* case-based reasoning, and
* symbolic machine learning systems.
Just to give further credibility to these definitions, <NAME> [has an h-index of 59, has been cited 22557 times](https://scholar.google.com/citations?user=h9pOfj0AAAAJ&hl=en), and is an [IEEE Senior Member](https://ieeexplore.ieee.org/author/37276400500). You can find more info about him [here](https://engel.pages.cs.sun.ac.za/). Note that I have no affiliation with him. I am just providing this information so that people start to follow these definitions (rather than just looking at definitions given by people who have not extensively studied the field). Moreover, note that the definition of CI given by Engelbrecht is consistent with the definition given by IEEE that you are quoting.
Upvotes: 2 |
2018/08/07 | 699 | 2,887 | <issue_start>username_0: I am using a neural network as my function approximator for reinforcement learning. In order to get it to train well, I need to choose a good learning rate. Hand-picking one is difficult, so I read up on methods of programmatically choosing a learning rate. I came across this blog post, [*Finding Good Learning Rate and The One Cycle Policy*](https://medium.com/@nachiket.tanksale/finding-good-learning-rate-and-the-one-cycle-policy-7159fe1db5d6), about finding cyclical learning rate and finding good bounds for learning rates.
All the articles about this method talk about measuring loss across batches of the data. However, as I understand it, [reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning) tasks do not really have any "batches"; they just have episodes that can be generated by an environment as many times as one wants, which also yield rewards that are then used to optimize the network.
Is there a way to translate the concept of batch size into reinforcement learning, or a way to use this method of cyclical learning rates with reinforcement learning?<issue_comment>username_1: Potentially.
If you do offline reinforcement learning, you're basically learning to approximate a function by sampling input/output pairs, rather than episode-by-episode. Here, your batch size could be set exactly as in an ordinary supervised learning problem.
If you do online learning, then it's not clear to me that the techniques used to set the learning rate in supervised learning can be directly applied though.
Both approaches are well covered in the RL chapter of [Russell & Norvig](https://rads.stackoverflow.com/amzn/click/0136042597) (17? 18?).
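For reference, the triangular schedule from the cyclical-learning-rate method mentioned in the question is easy to implement independently of how "batches" are defined; the bounds and cycle length below are purely illustrative:

```python
def triangular_lr(step, base_lr, max_lr, cycle_steps):
    """Triangular cyclical learning rate: the rate ramps linearly
    from base_lr up to max_lr and back down once per cycle."""
    cycle_pos = step % cycle_steps
    half = cycle_steps / 2
    # frac goes 0 -> 1 -> 0 over one full cycle
    frac = 1.0 - abs(cycle_pos - half) / half
    return base_lr + (max_lr - base_lr) * frac
```

In online RL, `step` could simply count environment transitions (or episodes) instead of supervised-learning batches.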
Upvotes: 2 <issue_comment>username_2: From my understanding of reinforcement learning, you will have an agent and an environment.
In each episode, the agent observes the state $s$, takes some action $a$, gets some reward $r$, and finally observes the next state $s'$, repeating this again and again until the end of the episode.
The above process does not by itself involve any "learning". So when and where exactly do you "learn"? You learn from your history. In traditional Q-learning, the Q matrix is updated every time you have a new observation $(s_t, a_t, r_t, s_{t+1})$. Just like in supervised learning, you put in training samples one by one.
Similarly, you can feed in training samples in a "batch" when you train, which means you "remember" the past $N$ observations and train on them together. I think that is the answer to your question.
Furthermore, the past $N$ observations could have strong correlations that you don't want. To break these, you may keep a larger "memory" that stores many observations and only sample a few of them (this number is your new batch size) randomly every time you train your model. This is called *experience replay*.
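A minimal sketch of such a replay memory (the class name and capacity are illustrative, not taken from any particular library):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size memory of (s, a, r, s_next) transitions."""

    def __init__(self, capacity=10_000):
        # Old transitions are discarded automatically once capacity is reached.
        self.memory = deque(maxlen=capacity)

    def push(self, s, a, r, s_next):
        self.memory.append((s, a, r, s_next))

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation
        # between consecutive transitions.
        return random.sample(list(self.memory), batch_size)

    def __len__(self):
        return len(self.memory)
```

Each training step would then call `sample(batch_size)` and run one gradient update on the resulting batch, exactly as in supervised mini-batch training.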
Upvotes: 2 |
2018/08/07 | 804 | 3,438 | <issue_start>username_0: My work's quality control department is responsible for taking pictures of our products at various phases of our QC process, and currently the process goes:
1. Take picture of product
2. Crop the picture down to only the product
3. Name the cropped picture to whatever the part is and some other relevant data
Depending on the type of product, the pictures will be cropped a certain way. So my initial thought would be to use [an object identifier](https://cloud.google.com/vision/), and then, once the object is identified, apply a cropping method specific to that product. There will also be QR codes within the pictures, to be used for naming via OCR in the future, so I can probably identify the parts that way if this proves slow or problematic.
The part I am unsure about is how to get the program to know how to crop based on a part. For example, I would like to present the program with a couple of before-crop and after-crop photos of product X, then derive a specific cropping formula for product X from those inputs.
Also, if it makes any difference, my code is in C#.<issue_comment>username_1: This sounds like you have a [supervised learning](https://en.wikipedia.org/wiki/Supervised_learning) problem. Microsoft provides [a C# library](https://www.microsoft.com/net/learn/apps/machine-learning-and-ai), but it may not be suitable for your problem.
There are *many* different algorithms you could try, most of which will be within the sub-area of [computer vision](https://en.wikipedia.org/wiki/Computer_vision). Probably some kind of deep neural network is the best bet these days, but the right choice will probably depend on the details of your problem. [Goodfellow et al.](https://www.deeplearningbook.org/) have a recent book that might be a good resource for deciding what to use.
Maybe someone who works in computer vision can give you a more specific suggestion.
Upvotes: 1 <issue_comment>username_2: Depending on kind and amount of data you posess, there are few approaches that you might consider.
1. Mark target objects in the dataset and train a CNN that returns the coordinates of the target object. In this case, remember that training is usually faster when the ROIs in the training data have their coordinates relative to the image size.
2. Use some kind of focus mechanism, like a spatial transformer network:
	* <https://arxiv.org/abs/1506.02025>

	This kind of network component is able to learn an image transformation (including a crop) that maximizes the target metric for the main classifier. This PyTorch tutorial:
	* <https://pytorch.org/tutorials/intermediate/spatial_transformer_tutorial.html>

	shows some nice visualizations of STN results. A good thing about this kind of network is that, given enough data, it might learn the proper transformation from image classification data (photo -> class). One does not need to explicitly mark target objects in the image!
3. Object detection networks, like YOLO or Faster R-CNN. There are many tutorials on the matter, e.g.:
	* <https://www.datacamp.com/community/tutorials/object-detection-guide>
4. Saliency extraction. The simple idea is to generate a heatmap showing which parts of the input image activate the classifier the most. You could then try to calculate a bounding box based on such a heatmap. Example research paper:
	* <https://arxiv.org/abs/1805.08249>
Points 1 and 2 are probably the easiest to implement, so I would start with them.
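Whichever of the above produces the bounding box, the final crop is then a simple geometric step. A minimal sketch (the margin and box format are assumptions, not part of any of the cited methods):

```python
def padded_crop_box(box, image_size, margin=0.05):
    """Given a detected box (left, top, right, bottom) and the image size
    (width, height), return a crop rectangle padded by a relative margin
    and clamped to the image bounds."""
    left, top, right, bottom = box
    width, height = image_size
    pad_w = (right - left) * margin
    pad_h = (bottom - top) * margin
    return (
        max(0.0, left - pad_w),
        max(0.0, top - pad_h),
        min(float(width), right + pad_w),
        min(float(height), bottom + pad_h),
    )
```

The resulting rectangle can then be fed to whatever imaging API is in use (e.g. `Image.crop` in Pillow, or bitmap cropping in C#).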
Upvotes: 3 [selected_answer] |
2018/08/08 | 1,615 | 7,234 | <issue_start>username_0: From [Meta-Learning with Memory-Augmented Neural Networks](http://proceedings.mlr.press/v48/santoro16.pdf) in section 4.1:
>
> To reduce the risk of overfitting, we performed data augmentation by randomly translating and rotating character images. We also created new classes through 90◦, 180◦ and 270◦ rotations of existing data.
>
>
>
I can maybe see how rotations could reduce overfitting by allowing the model to generalize better. But if augmenting the training images through rotations prevents overfitting, then what is the purpose of adding new classes to match those rotations? Wouldn't that cancel out the augmentation?<issue_comment>username_1: Over-fitting in the context of convergence in a neural network can have many causes. When the model implied in the design of the network is not well fitted for the task, the network may still converge within the time frame allowed and the example set presented but it will take more time and a greater number of examples than necessary, and the reliability and accuracy of the trained circuit may be far below what could be achievable with a solid design.
Gross over-fitting can be one of the causes of decreased reliability. A more slight over-fit will exhibit accuracy somewhat diminished from the accuracy found by the end of training.
This is why various designs have emerged with functionally specific circuit simulations between more general multi-layer perceptron networks.
* Convolution kernels
* Rotations
* Other basic translations
* Hash lookups
* Other patterned circuits that remove burden from general convergence
In the case of rotation, convergence on an optimal angle in one specialized layer or longitudinal stack element can remove considerable burden and allow overall convergence with fewer general activation layers, using fewer examples, and with a significantly more reliable and accurate result.
Consider what perceptrons must do to rotate an image arbitrarily. They must wire what is essentially rotational trigonometry into the parameters of everything that is orientation-dependent within the network, creating what is essentially a pliable helix, possibly in many locations within the trained network. Creating the pliable helix functionality, parameterized in advance of training and carefully handling back-propagation to adjust to its existence, drastically reduces the complexity of convergence.
If done well, over-fitting will be much less of an issue. If done poorly, there could be worse over-fitting or other problems such as non-convergence.
In summary, the best practice is to leave to general network training what must, by its nature, be complex but handle with specific functionality what is well understood and for which mathematical and algorithmic approaches already exist.
Upvotes: 2 <issue_comment>username_2: How can data augmentation reduce overfitting?
---------------------------------------------
You write that you can already maybe see how data augmentation can help prevent overfitting in general, but it sounds a bit uncertain and it's still asked in the title of the question, so I'll address this first:
Generally, when we use Machine Learning for classification problems, we would ideally learn a classifier that can perform well on a **population**. An example of a population would be: **the set of all handwritten characters in the entire world**. Generally, we don't have that complete population available for training, we only have a (much smaller) **training dataset**. If a training set is large enough, it might be a good approximation of the true population we're interested in (a "dense sampling" of the space we're interested in), but it's still just that; an approximation.
We say that a learning algorithm is **overfitting** if it performs significantly better on the training set than on the population (which we generally approximate again using a separate test set).
Now, data augmentation (like adding rotations / translations of images in the training set to the training set) can help combat overfitting **because it bridges the gap between training set and population**. The population (all handwritten characters in the entire world) will likely include characters at various offsets from the middle (e.g. translations) and at various rotations. So, data augmentation is simply adding more examples (and possibly more *varied* examples) to our training set, which importantly are considered to be a part of the population we're interested in. If, for example, the population we are interested in were only the set of all handwritten characters at a specific position in the image (e.g., centered), then augmenting the dataset by adding various translations would not help; we'd be adding instances that are outside the population we want to learn about.
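To make the two kinds of rotation in the paper concrete, here is a toy sketch on images represented as 2-D lists (the shift range is an illustrative assumption): small random translations play the role of augmentation within a class, while a 90° rotation produces a genuinely different-looking character, i.e. a candidate new class.

```python
import random

def translate(image, dx, dy, fill=0):
    """Shift a 2-D image (list of rows) by (dx, dy), padding with `fill`."""
    h, w = len(image), len(image[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = image[y][x]
    return out

def augment(image, max_shift=2):
    """One 'small' augmentation: a random shift of a few pixels.
    (A real pipeline would also apply small rotations, e.g. within ±10°.)"""
    dx = random.randint(-max_shift, max_shift)
    dy = random.randint(-max_shift, max_shift)
    return translate(image, dx, dy)

def rotate90(image):
    """A 90° clockwise rotation, used to create a *new class*,
    not to augment an existing one."""
    return [list(row) for row in zip(*image[::-1])]
```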
---
Why doesn't adding extra classes for rotations cancel out augmentations?
------------------------------------------------------------------------
There are **two possible explanations** I can come up with:
1. **Maybe the "extra-class" rotations are different from the "data augmentation" rotations.**
Here is the exact quote that's relevant from the paper:
>
> "To reduce the risk of overfitting, we performed data augmentation by randomly translating and rotating character images. We also created new classes through 90◦, 180◦ and 270◦ rotations of existing data."
>
>
>
That first sentence is not 100% clear in my opinion. I imagine the translations they use for data augmentation are relatively small (e.g. offsets of a few pixels), so maybe the rotations they use for data augmentation are also only "small" rotations (for example, between -10◦ and +10◦). The "larger" rotations (multiples of 90◦) described in the second sentence may then no longer be a part of the "data augmentation to reduce the risk of overfitting" in the first sentence; they're simply parts of a different action performed to increase the number of classes in the dataset (and, I imagine, for each of these larger rotations they may again perform "smaller rotations" for data augmentations).
This explanation is kind of hypothetical though, it's not 100% clear from the paper exactly what they mean here in my opinion.
2. **"Overfitting" can have a slightly different interpretation in the case of one-shot learning than in traditional learning.**
Note that this paper is about "one-shot learning", where the goal is to be able to classify accurately after being presented only a single example ("one shot") of a never-before-seen class. In such one-shot problems, you could in some sense say that an algorithm might "overfit" to the "distribution of classes" if it can only perform one-shot learning well on a certain set of similar classes, but not on others.
For example, if you only train one-shot learning on a set of handwritten characters that are "upright" (close to 0 rotation), your algorithm might be able to perform well in terms of one-shot learning when presented with new classes (new handwritten characters) that are also upright, but might be incapable of proper one-shot learning when presented with new classes (new handwritten characters) that are upside-down.
Upvotes: 2 |
2018/08/08 | 1,494 | 6,853 | <issue_start>username_0: I have only a general understanding of General Topology, and want to understand the scope of the term "topology" in relation to the field of Artificial Intelligence.
In what ways are topological structure and analysis applied in Artificial Intelligence?
2018/08/09 | 737 | 3,089 | <issue_start>username_0: I am reading a book that states
>
> As the mini-batch size increases, the gradient computed is closer to the 'true' gradient
>
>
>
So, I assume that they are saying that mini-batch training only focuses on decreasing the cost function in a certain 'plane', sacrificing accuracy for speed. Is that correct?<issue_comment>username_1: The basic idea behind mini-batch training is rooted in the exploration / exploitation tradeoff in [local search and optimization algorithms](https://en.wikipedia.org/wiki/Local_search_(optimization)).
You can view training of an ANN as a local search through the space of possible parameters. The most common search method is to move all the parameters in the direction that reduces error the most (gradient descent).
However, ANN parameter spaces do not usually have a smooth topology. There are many shallow local optima. Following the global gradient will usually cause the search to become trapped in one of these optima, preventing convergence to a good solution.
[Stochastic gradient descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent) solves this problem in much the same way as older algorithms like simulated annealing: you can escape from a shallow local optimum because you will eventually (with high probability) pick a sequence of updates based on a single point that "bubbles" you out. The problem is that you'll also tend to waste a lot of time moving in wrong directions.
Mini-batch training sits between these two extremes. Basically, you average the gradient across enough examples that you still have some global error signal, but not so many that you'll get trapped in a shallow local optimum for long.
Recent research by [<NAME>](https://arxiv.org/abs/1804.07612) suggests that in fact, most of the time you'd want to use *smaller* batch sizes than what's being done now. If you set the learning rate carefully enough, you can use a big batch size to complete training faster, but the difficulty of picking the correct learning rate increases with the size of the batch.
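The core claim here, that a larger mini-batch gives a gradient estimate closer to the full-batch one, is easy to check numerically. The sketch below is a toy linear-regression setup of my own (not from any particular paper) that measures how far mini-batch gradients scatter around the full-batch gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression: loss(w) = mean over all N points of (w * x - y)^2.
N = 10_000
x = rng.normal(size=N)
y = 3.0 * x + rng.normal(scale=0.5, size=N)

def gradient(w, idx):
    """Gradient of the squared error w.r.t. w, averaged over the points in idx."""
    return np.mean(2 * (w * x[idx] - y[idx]) * x[idx])

w = 0.0
full_grad = gradient(w, np.arange(N))  # the "true" gradient on this dataset

def grad_std(batch_size, trials=500):
    """Spread of the mini-batch gradient estimate around the full-batch one."""
    estimates = [gradient(w, rng.choice(N, size=batch_size, replace=False))
                 for _ in range(trials)]
    return float(np.std(np.asarray(estimates) - full_grad))

# Larger mini-batches give estimates closer to the full-batch gradient.
print(grad_std(4), grad_std(64), grad_std(1024))
```

The spread shrinks roughly as 1/sqrt(batch size), which is the noise that lets small batches "bubble out" of shallow optima and makes large batches track the global gradient.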
Upvotes: 3 [selected_answer]<issue_comment>username_2: It's like you have a class of 1000 children and you, being the teacher, want all of them to learn something at the same time. It is difficult because they are not all the same; they have different adaptability and reasoning strength.
So one can have alternate strategies for the same task.
1) Take one child at a time and train them. This may be a good approach, but it will take a long time. `here your batch size is 1`
2) Take a group of 10 children and train them; this can be a good compromise between time and learning. In a smaller group, you can handle the naughty ones better. `here your batch size is 10`
3) If you take all 1000 children and teach them together, it will take a very short time, but you will not be able to give proper attention to the mischievous ones. `here your batch size is 1000`
Same with machine learning: take a reasonable batch size and tune the weights accordingly.
I hope this analogy will clear your doubt.
Upvotes: 0 |
2018/08/11 | 1,166 | 4,193 | <issue_start>username_0: I've been wanting to make my own Neural Network in Python, in order to better understand how it works. I've been following [this](https://www.youtube.com/watch?v=ZzWaow1Rvho) series of videos as a sort of guide, but it seems the backpropagation will get much more difficult when you use a larger network, which I plan to do. He doesn't really explain how to scale it to larger ones.
Currently, my network feeds forward, but I don't have much of an idea of where to start with backpropagation. My code is posted below, to show you where I'm currently at (I'm not asking for coding help, just for some pointers to good sources, and I figure knowing where I'm currently at might help):
```
import numpy

class NN:
    def __init__(self, input_length):
        self.layers = []
        self.input_length = input_length
        self.prediction = []

    def addLayer(self, layer):
        self.layers.append(layer)
        if len(self.layers) > 1:
            # Each neuron needs one weight per neuron in the previous layer.
            self.layers[-1].setWeights(len(self.layers[-2].neurons))
        else:
            self.layers[0].setWeights(self.input_length)

    def feedForward(self, inputs):
        _inputs = inputs
        for layer in self.layers:
            layer.process(_inputs)
            _inputs = layer.output
        self.prediction = _inputs

    def calculateErr(self, target):
        return [(p - t) ** 2 for p, t in zip(self.prediction, target)]

class Layer:
    def __init__(self, length, function):
        # These must be instance attributes; class-level lists would be
        # shared between every Layer instance.
        self.neurons = [Neuron(function) for _ in range(length)]
        self.biases = [numpy.random.randn() for _ in range(length)]
        self.weights = []
        self.output = []

    def setWeights(self, inlength):
        # One weight vector per neuron, with one entry per input.
        self.weights = [[numpy.random.randn() for _ in range(inlength)]
                        for _ in self.neurons]

    def process(self, inputs):
        self.output = []  # reset so repeated forward passes don't accumulate
        for neuron, weights, bias in zip(self.neurons, self.weights, self.biases):
            self.output.append(neuron.run(inputs, weights, bias))

class Neuron:
    def __init__(self, function):
        self.function = function
        self.output = 0

    def run(self, inputs, weights, bias):
        self.output = self.function(inputs, weights, bias)
        return self.output

def sigmoid(n):
    return 1 / (1 + numpy.exp(-n))  # note the -n; exp(n) would flip the curve

def inputlayer_func(inputs, weights, bias):
    return inputs

def l2_func(inputs, weights, bias):
    out = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(out)

NNet = NN(2)
l2 = Layer(1, l2_func)
NNet.addLayer(l2)
NNet.feedForward([2.0, 1.0])
print(NNet.prediction)
```
So, is there any resource that explains how to implement the back-propagation algorithm step-by-step?<issue_comment>username_1: Backpropagation isn't too much more complicated, but understanding it well will require a bit of mathematics.
[This tutorial](https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/) is my go-to resource when students want more detail, because it includes fully worked through examples.
Chapter 18 of [Russell & Norvig's](https://rads.stackoverflow.com/amzn/click/com/0136042597) book includes pseudocode for this algorithm, as well as a derivation, but without good examples.
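As a minimal, self-contained sketch of what those resources derive, here is a tiny two-layer network of my own construction, trained on XOR with plain gradient descent; the backward pass is just the chain rule applied layer by layer:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny 2-4-1 network trained on XOR with full-batch gradient descent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))
lr = 1.0

def forward():
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out = forward()
loss_before = float(np.mean((out - y) ** 2))

for _ in range(5000):
    h, out = forward()
    # Backward pass: the chain rule, layer by layer.
    # dL/dz2 = (out - y) * sigmoid'(z2), with sigmoid'(z) = s * (1 - s).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

_, out = forward()
loss_after = float(np.mean((out - y) ** 2))
print(loss_before, "->", loss_after)
```

Scaling to more layers is the same pattern repeated: propagate `d_h` backwards through each layer, multiplying by that layer's weights transposed and its activation derivative.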
Upvotes: 3 [selected_answer]<issue_comment>username_2: Nowadays, there are many resources that cover the back-propagation algorithm and some of them provide step-by-step examples.
However, in addition to [the other answer](https://ai.stackexchange.com/a/7545/2444), I would like to mention the online book [Neural Networks and Deep Learning](http://neuralnetworksanddeeplearning.com/) by Nielsen, which covers [the back-propagation algorithm](http://neuralnetworksanddeeplearning.com/chap2.html) (and other topics) in detail and, at the same time, intuitively, although some could disagree. You can find the associated source code [here](https://github.com/mnielsen/neural-networks-and-deep-learning) (which I had consulted a few years ago when I was learning about the topic).
Upvotes: 0 |
2018/08/12 | 2,738 | 11,970 | <issue_start>username_0: In fields such as Machine Learning, we typically (somewhat informally) say that we are overfitting if improve our performance on a training set at the cost of reduced performance on a test set / the true population from which data is sampled.
More generally, in AI research, we often end up testing performance of newly proposed algorithms / ideas on the same benchmarks over and over again. For example:
* For over a decade, researchers kept trying thousands of ideas on the game of Go.
* The ImageNet dataset has been used for huge amounts of different publications
* The Arcade Learning Environment (Atari games) has been used for thousands of Reinforcement Learning papers, having become especially popular since the DQN paper in 2015.
Of course, there are very good reasons for this phenomenon where the same benchmarks keep getting used:
* Reduced likelihood of researchers "creating" a benchmark themselves for which their proposed algorithm "happens" to perform well
* Easy comparison of results to other publications (previous as well as future publications) if they're all consistently evaluated in the same manner.
However, there is also a risk that the **research community as a whole** is in some sense "overfitting" to these commonly-used benchmarks. If thousands of researchers are generating new ideas for new algorithms, and evaluate them all on these same benchmarks, and there is a large bias towards primarily submitting/accepting publications that perform well on these benchmarks, **the research output that gets published does not necessarily describe the algorithms that perform well across all interesting problems in the world**; there may be a bias towards the set of commonly-used benchmarks.
---
**Question**: to what extent is what I described above a problem, and in what ways could it be reduced, mitigated or avoided?<issue_comment>username_1: Great question Dennis!
This is a perennial topic at AI conferences, and sometimes even in special issues of journals. The most recent one I recall was [Moving Beyond the Turing Test](http://www.aaai.org/ojs/index.php/aimagazine/article/view/2650) in 2015, which ended up leading to a collection of articles in AI magazine later that year.
Usually these discussions cover a number of themes:
1. "Existing benchmarks suck". This is usually the topic that opens discussion. In the 2015/2016 discussion, which focused on the Turing Test as a benchmark specifically, criticisms ranged from "[it doesn't incentivize AI research on the right things](https://www.aaai.org/ojs/index.php/aimagazine/article/view/2646)", to claims that it was poorly defined, too hard, too easy, or not realistic.
2. General consensus that we need new benchmarks.
3. Suggestions of benchmarks based on various current research directions. In the latest discussion this included [answering standardized tests for human students](https://www.aaai.org/ojs/index.php/aimagazine/article/view/2636) (well-defined success, clear format, requires linking and understanding many areas), [playing video games](https://link.springer.com/chapter/10.1007/978-3-319-48506-5_1) (well-defined success, requires visual/auditory processing, planning, coping with uncertainty), and switching focus to [robotics competitions](https://www.cambridge.org/core/journals/knowledge-engineering-review/article/robotics-competitions-as-benchmarks-for-ai-research/A189836D56D754C1F0524F8C98C1BB9C).
I remember attending very similar discussions at machine learning conferences in the late 2000's, but I'm not sure anything was published out of it.
Despite these discussions, AI researchers seem to incorporate the new benchmarks, rather than displacing the older ones entirely. The Turing Test is still [going strong](https://www.aisb.org.uk/events/loebner-prize) for instance. I think there are a few reasons for this.
First, benchmarks are useful, particularly to provide context for research. Machine learning is a good example. If the author sets up an experiment on totally new data, then even if they apply a competing method, I have to trust that they did so *faithfully*, including things like optimizing the parameters as much as with their own methods. Very often they do not do this (it requires some expertise with competing methods), which inflates the reported advantage of their own techniques. If they *also* run their algorithm on a benchmark, then I can easily compare it to the benchmark performances reported by other authors, for their own methods. This makes it easier to spot a technique that's not really effective.
Second, even if new benchmarks or new problems are more useful, nobody knows about them! Beating the current record for performance on ImageNet can slingshot someone's career in a way that top performance on a new problem simply cannot.
Third, benchmarks tend to be things that AI researchers think can actually be accomplished with current tools (whether or not they are correct!). Usually iterative improvement on them is fairly easy (e.g. extending an existing technique). In a "publish-or-perish" world, I'd rather publish a small improvement on an existing benchmark than attempt a riskier problem, at least pre-tenure.
So, I guess my view is fixing the dependence on benchmarks involves fixing the things that make people want to use them:
1. Have some standard way to compare techniques, but require researchers to *also* apply new techniques to a real world problem.
2. Remove the career and prestige rewards for working on benchmark problems, perhaps by explicitly tagging them as artificial.
3. Remove the incentives for publishing often.
Upvotes: 4 [selected_answer]<issue_comment>username_2: In addition to the points already listed in [John's answer](https://ai.stackexchange.com/a/7544/1641), some factors that can help to reduce / mitigate the risk of overfitting to commonly-used benchmarks as a research community are:
1. **Competitions with instances of problems hidden from entrants:** as far as I'm aware this is particularly popular in game AI (see the General Game Playing competition and General Video Game Playing competitions). The basic idea is that submissions should be able to tackle a relatively broad class of problems (playing *any* game defined in a specified format, or generating levels for *any* video game with rules described in a specific format, etc.). To some extent, using a large suite of problems as a standard benchmark (such as the large collection of Atari games supported by ALE) also fits in with this idea, though there is value in hiding the problems that are ultimately used for testing from the people writing submissions. Of course, the idea is that entries submitted to these kinds of competitions will involve new research which may be published.
2. **Using very simple toy problems:** By "simple" I do not necessarily mean that they are simple to solve, but simple to describe / understand (they may still, for example, have a large state space and be difficult for current techniques to solve). Simple toy problems often help to test for a very specific "skill", and can more easily give insight into specifically why/when an algorithm may be expected to fail or succeed. Of course, large non-toy problems are also important to demonstrate the "real-world" usefulness of algorithms, but they may often give less understanding / insight into an algorithm.
3. **Theoretical work:** Theoretical work can also give more insight into and understanding of new algorithms. Algorithms with strong theoretical foundations are often more likely to generalize to a multitude of problem domains, assuming that the initial assumptions hold (big assumption here - there are plenty of cases where assumptions required for strong proofs do not hold!). This is not always possible / "needed"; sometimes new research based purely on intuition and with relatively little theoretical foundation still turns out to work well (or theory is only developed after promising empirical results)... but it can certainly help. Theoretical work can take many different forms, such as proofs of convergence (often under strict conditions), proofs of upper or lower bounds on important measures (such as regret, or the probability of making a "wrong" choice, etc.), proofs that an algorithm or a problem is a more general or more specific case of an existing, well-understood algorithm or problem, proofs that a model has or does not have a certain representational capacity, proofs of algorithmic equivalence (that an algorithm computes exactly the same quantities as another well-understood algorithm, typically with lower computation and/or memory requirements), etc.
Upvotes: 2 <issue_comment>username_3: To better understand my point of view, I am using deep learning for geomatics and teledetection purposes.
So after reading with interest the great two previous answers, I would like to add my small contribution to this thread.
First, I would like to emphasize **insight knowledge**: I do agree with the second point of [Dennis's answer](https://ai.stackexchange.com/a/10011/51249): "simple" and well-understood benchmarks help pinpoint the strengths of AI methods, which is good. But this is known by AI researchers. And if they work toward general improvement, and not just on one benchmark to "slingshot" their career as [John's second point](https://ai.stackexchange.com/a/7544/51249) mentions, general improvement will be made. These thoughts link directly to my first point:
1. **Multiplicity of benchmarks = Variety of problems = overfitting avoidance**: If a paper presents an increment of performance on only one benchmark, it ultimately says that the method is a specific one. So even if one benchmark can lead to "overfitting", a group of benchmarks will offer a better variety of problems and thus better insight into the generalization properties of an AI technique. A good application case of this is the [PointNet paper](https://arxiv.org/abs/1612.00593) (the first end-to-end 3D point cloud neural network), where the authors tested their approach against ModelNet40 (a well-known 3D classification benchmark), but also against MNIST, to verify the generalization capabilities of the network. Could "meta-benchmarks" be a thing? Benchmarks that are a concatenation of actual well-known ones (or of ones known to give insight into one specific perk).
Then there is my field, where AI is seen more as a tool than in theoretical deep learning research. It is known that there is a gap between benchmark results and "real life" (or applied) cases. This, in my opinion, is induced by the very high quality of benchmark ground truth, which isn't available for all cases. This leads to my second point:
2. **Un/Semi-supervised benchmarks = as-noisy-as-the-reality benchmarks**: As generalization is the capacity of dealing with the unknown, and because creating benchmarks on complex data is often time- and money-consuming, we should get out of the supervised way of thinking. This will increase the number of benchmarks available, which is good in regards to the first point, but will also force theoretical AI research to focus more on these techniques. It is well known that something is off with the current supervised learning methods: why do we need to show every possible case to a network? Why can it not infer new knowledge from unlabeled data? The biggest drag on this will be the data assessment. While semi-supervised benchmarks will still give classical metrics, we will need a way of analyzing results on unlabeled data. In applied cases where ground truth is often not available, this is usually done by visualization. But progress can be made to create a better, unbiased way of ranking methods. One idea could be a double-blinded random quality assessment of the results, in a captcha-like manner. I know this idea works only for human-understandable data, but other ways can and need to be found.
Upvotes: 2 |
2018/08/12 | 1,583 | 7,128 | <issue_start>username_0: If you've been attacked by a spider once, chances are you'll never go near a spider again.
In a neural network model, having a bad experience with a spider will slightly decrease the probability you will go near a spider depending on the learning rate. This is not good.
How can you program fear into a neural network, such that you don't need hundreds of examples of being bitten by a spider in order to ignore the spider (and also that it doesn't just lower the probability that you will choose to go near a spider)?<issue_comment>username_1: I think there are 2 ways to make this happen: 1) explicitly program fear as a constraint or parameter in some logical expression, or 2) utilize a large set of training data to teach fear.
Think about a basic Pacman game-- whether Pacman fears the ghosts or not is hard to tell, but they ARE ghosts and Pacman avoids them, so I think it's safe to say we can use this as a basic example of "fear". Since, in this game, fear = avoidance, you could logically program avoidance as some sort of distance. I tried this with Pacman reinforcement learning. I set a distance of 5 squares to the ghosts, and any time Pacman could see a ghost within 5 squares, he would move in a different direction. What I found is that while Pacman will try to avoid ghosts, he doesn't know strategy (or have intelligence). Pacman would simply move away from ghosts until he got boxed in.
My point is that you can program your network to avoid spiders so as not to get bitten, but without training, you will just be creating a basic parameter that might cause problems if there are 100 super-aggressive spiders coming at you! The better way is to use some base logic to avoid spiders, but then train the network so that it is rewarded the better it avoids spiders.
Now, there are many situations of fear so this one example with Pacman would not necessarily apply to all... Just trying to give some insight in my experience with teaching fear with reinforcement learning in Pacman.
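A minimal sketch of the hard-coded avoidance rule described above might look like this (the grid representation and function names are my own, not from an actual Pacman framework):

```python
def avoid_ghosts(pacman, ghosts, moves, radius=5):
    """Return the move that maximises distance to the nearest ghost when one
    is within `radius` squares (Manhattan distance); None means no threat."""
    def nearest(pos):
        return min(abs(pos[0] - g[0]) + abs(pos[1] - g[1]) for g in ghosts)

    if nearest(pacman) > radius:
        return None  # no ghost nearby: defer to the normal (learned) policy
    return max(moves, key=lambda m: nearest((pacman[0] + m[0], pacman[1] + m[1])))

moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
print(avoid_ghosts((5, 5), [(5, 8)], moves))    # flees the ghost at (5, 8)
print(avoid_ghosts((0, 0), [(20, 20)], moves))  # None: the ghost is too far to matter
```

Note that, exactly as observed above, such a rule has no strategy: greedily maximising distance to the nearest ghost is precisely what lets Pacman get boxed into a corner.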
Upvotes: 2 <issue_comment>username_2: There are a lot of approaches you could take for this. Creating a realistic artificial analog for fear as implemented biologically in animals might be possible, but there is quite a lot involved in a real animal's fear response that would not apply in simpler AI bots available now. For instance, an animal entering a state of fear will typically use hormones to signal changes throughout its body, favouring resource expenditure and risk taking ("fight or flight").
In basic reinforcement learning, the neural network would not need to directly decide to switch on a "fear mode". Instead, you can make use of some design in the agent and learning algorithm to help learn from rare but significant events. Here are a few ideas:
* Experience replay. You may already be doing this in the Pacman scenario, if you are using DQN or something similar. Storing the state transition and reward that caused a large positive or negative reward, and repeatedly learning from it should offset your concern
* Prioritised sweeping. You can use larger differences experienced between predicted and actual reward to bias sampling from your replay memory towards significant events and those linked closely to them.
* Planning. With a predictive model - maybe based on sampled transitions (you can re-use the experience replay memory for this), or maybe a trained state transition prediction network - then you can look multiple steps ahead by simulating. There is a strong relation between RL and look-ahead planning too, they are very similar algorithm. The difference is which states and actions are being considered, and whether they are being simulated or experienced. Experience replay blurs the line here - it can be framed as learning from memory, or improving predictions for planning. Planning helps by optimising decisions without needing to repeat experiences as much - a combination of planning and learning can be far more powerful than either in isolation.
* Smarter exploratory action selection. Epsilon-greedy, where you either take a greedy action or take a completely random action, completely ignores how much you may have already learned about alternative actions and their relative merit. You can use something like Upper Confidence Bound with a value-based agent.
* In a deterministic world, increase the batch size for learning and planning, as you can trust that when a transition is learned once, you know everything about it.
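As an illustration of the exploratory action selection point, here is a small UCB1 sketch on a two-armed bandit where one arm (the "spider") is reliably bad; the payout numbers are made up for the demo:

```python
import math
import random

def ucb1_action(counts, values, c=2.0):
    """UCB1: pick the action with the highest upper confidence bound.

    counts[a] is how often action a was tried; values[a] is its mean reward.
    The bonus term shrinks as an action is tried more, so a clearly bad action
    is revisited less and less often, rather than at a fixed epsilon rate.
    """
    for a, n in enumerate(counts):
        if n == 0:
            return a  # try every action at least once
    total = sum(counts)
    return max(range(len(counts)),
               key=lambda a: values[a] + math.sqrt(c * math.log(total) / counts[a]))

# Two-armed bandit demo: arm 0 (the "spider") is reliably bad, arm 1 is good.
random.seed(0)
payout = [lambda: random.gauss(-1.0, 0.1), lambda: random.gauss(0.5, 0.1)]
counts, values = [0, 0], [0.0, 0.0]
for _ in range(200):
    a = ucb1_action(counts, values)
    r = payout[a]()
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]  # incremental mean update
print(counts)  # the good arm dominates; the bad arm is tried only a handful of times
```

Unlike epsilon-greedy, the agent stops wasting trials on the "spider" arm almost immediately, yet would keep probing it occasionally if its estimated value were closer to the alternative's.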
You will need to experiment in each environment. You can make learning agents that are more conservative about exploring near low reward areas. However, if the environment is such that it is necessary to take risks in order to get to the best rewards (which is often the case in games) then it may not be optimal in terms of learning time to have a "timid" agent. For instance in your example of Pacman, sometimes the ghosts should be avoided, sometimes they should be chased. If the agent learned strong aversion initially, it might take a long time to overcome this and learn to chase them after eating a power-up.
For your example of the spider, as the constructor of the experiment then you know that the bite is bad every time and that the agent must avoid it as much as possible. To most RL algorithms, there is no such knowledge, except gained through experience. An MDP world model does not need to match common sense, it may be that a spider bite is bad (-10 reward) 90% of the time and good 10% of the time (+1000 reward). The agent can only discover this by being bitten multiple times . . . RL typically does not start with any system to make assumptions about this sort of thing, and it is impossible to come up with a general rule about all possible MDPs. Instead, for a basic RL system, you can consider modifying hyperparameters or focusing on key events as suggested above. Outside of a basic RL system there could be merit in replicating other things, such as "instinctive" fear.
Upvotes: 3 <issue_comment>username_3: I would suggest having the agent weight its learning from a given event based on the severity of the consequences of that event happening. E.g., have it develop a threat model like those typically drafted up in the Information Security field. High risk but low probability is something that can be accounted for and judged against.
Trying to directly imitate human fear would be silly, you'd likely end up with AIs that have phobias if you succeeded too well.
Upvotes: 1 <issue_comment>username_4: Fear of this kind is an irrational response (large negative incentive in response to a small risk). Modeling fear would need to model a "grossness" factor associated with, for example, spiders so that the normally un-proportional response would occur. The "grossness" factor could be manifested in many other forms to magnify a response to a previously unpleasant, though not particularly dangerous, experience. Such fear can also be inspired by hearsay (think hysteria caused by a sensational news story).
A NN would normally only respond minimally to a minimal risk.
Upvotes: 2 |
2018/08/13 | 2,216 | 9,792 | <issue_start>username_0: I recently heard someone make a statement that when you're designing a self-driving car, you're not building a car but really a computerized driver, so you're trying to model a human mind -- at least the part of the human mind that can drive.
Since humans are unpredictable, or rather since their actions depend on so many factors some of which are going to remain unexplained for a long time, how would a self-driving car reflect that, if they do?
A dose of unpredictability could have its uses. If, say, two self-driving cars are stuck in a right-of-way deadlock, it could be good to inject some randomness, instead of seeing the same action applied at the same time if both cars run the same system.
But, on the other hand, we know that non-determinism isn't a friend of software development, especially in testing. How would engineers be able to control it and reason about it?
Some cars now apply [Swarm Intelligence](https://en.m.wikipedia.org/wiki/Swarm_intelligence), where they effectively learn from interactions among themselves, which can also aid in cases of transfer learning.
Upvotes: 2 <issue_comment>username_2: **Driving Priorities**
When considering the kind of modeling needed to create reliable and safe autonomous vehicles, the following driving safety and efficacy criteria should be considered, listed in priority with the most important first.
* The safety of those inside the vehicle and outside the vehicle
* Reduction of wear on passengers
* The safety of property
* The arrival at the given destination
* Reduction of wear on the vehicle
* Thrift in fuel resources
* Fairness to other vehicles
* The thrift in time
These are ordered in a way that makes civic and global sense, but they are not the priorities exhibited by human drivers.
**Copy Humans or Reevaluate and Design from Scratch?**
Whoever said that the goal of autonomous car design is to model the portions of a human mind that can drive should not be designing autonomous cars for actual manufacture. It is well known that most humans, although they may have heard of the following safety tips, cannot bring them to mind quickly enough to benefit from them in actual driving situations.
* When the tires slip sideways, steer into the skid.
* When a forward skid starts, pump the breaks.
* If someone is headed tangentially into your car's rear, immediately accelerate and then break.
* On an on ramp, accelerate to match the speed of the cars in the lane into which you merge, unless there is no space to merge.
* If you see a patch of ice, steer straight and neither accelerate nor decelerate once you reach it.
Many collisions between locomotives and cars are because a red light causes a line in multiple lanes across the tracks. Frequently, a person will move onto the railroad tracks to gain one car's length on the other cars. When others move to make undoing that choice problematic, a serious risk emerges.
As absurd as this behavior is to anyone watching, many deaths occur as a fast traveling 2,000 ton locomotive hits what feels like a dust speck to the train passengers.
**Predictability and Adaptability**
Humans are unpredictable, as the question indicates, but although adaptability may be unpredictable, unpredictability may not be adaptive. It is adaptability that is needed, and it is needed in five main ways.
* Adaptive in the moment to surprises
* Adaptive through general driving experience
* Adaptive to the specific car
* Adaptive to passenger expression
* Adaptive to particular map regions
In addition, driving a car is
* Highly mechanical,
* Visual,
* Auditory,
* Plan oriented
* Geographical, and
* Preemptive in surprise situations.
**Modelling Driving Complexities**
This requires a model or models comprising several kinds of objects.
* Maps
* The vehicle
* The passenger intentions
* Other vehicles
* Other obstructions
* Pedestrians
* Animals
* Crossings
* Traffic signals
* Road signs
* Road side
**Neither Mystery nor Indeterminacy**
Although these models are cognitively approximated in the human brain, how well they are modeled and how effective those models are at reaching something close to a reasonable balance of the above priorities varies from driver to driver, and varies from trip to trip for the same driver.
However, as complex as driving is, it is not mysterious. Each of the above models are easy to consider at a high level in terms of how they interact and what mechanical and probabilistic properties they have. Detailing these is an enormous task, and making the system work reliably is a significant engineering challenge, in addition to the training question.
**Inevitability of Achievement**
Regardless of the complexity, because of the economics involved and the fact that it is largely a problem of mechanics, probability, and pattern recognition, it will be done, and it will eventually be done well.
When it is, as unlikely as this sounds to the person who accepts our current culture as permanent, human driving may become illegal in this century in some jurisdictions. Any traffic analyst can mount heaps of evidence that most humans are ill equipped to drive a machine that weighs a ton at common speeds. The licensing of unprofessional drivers has only become widely accepted because of public insistence on transportation convenience and comfort and because the workforce economy requires it.
Autonomous cars may reflect the best of human capabilities, but they will likely far surpass them because, although the objects in the model are complex, they are largely predictable, with the notable exception of children playing. AV technology will use the standard solution for this. The entire scenario can be brought into slow motion to adapt for children playing simply by slowing way down. AI components that specifically detect children and dogs are likely to emerge soon, if they do not already exist.
**Randomness**
Randomness is important in training. For instance, a race car driver will deliberately create skids of various types to get used to how to control them. In machine learning, we see pseudo-random perturbations introduced during training to ensure that the gradient descent process does not get caught in a local minimum but rather is more likely to find a global minimum (optimum).
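A minimal sketch of that idea (my own illustration; all constants are invented for demonstration): plain one-dimensional gradient descent whose updates carry small, annealed pseudo-random perturbations, which can help the iterate shake out of a shallow local minimum.

```python
import random

def perturbed_descent(grad, x, lr=0.1, noise=0.5, steps=200, seed=0):
    """Gradient descent whose updates carry annealed Gaussian noise."""
    rng = random.Random(seed)
    for _ in range(steps):
        x = x - lr * grad(x) + rng.gauss(0.0, noise)
        noise *= 0.98                # shrink the perturbations over time
    return x

# On a convex bowl f(x) = x^2 the noise dies out and x settles near 0
x_final = perturbed_descent(lambda v: 2 * v, 5.0)
```

As the noise is annealed toward zero, the process behaves more and more like ordinary gradient descent, so early exploration does not prevent late convergence.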
**Deadlock**
The question is correct in stating that, "A dose of unpredictability could have its uses." The deadlock scenario is an interesting one, but is unlikely to occur as standards develop. When four drivers come to a stop sign at the same time, they really don't. It only seems like they did. The likelihood that none of them arrived more than a millisecond before the others is astronomically small.
People cannot detect these small time differences (or are not always honest enough to acknowledge them), so it usually comes down to who is most gracious in waving the others on, and there can be some deadlock there too, which can become comical, especially since all of them really wish to get moving. Autonomous vehicles will extremely rarely encounter a deadlock that is not covered by the rule book the government licensing entity publishes, which can be programmed as driving rules into the system.
On those rare occasions, the vehicles could digitally draw lots, as suggested, which is one place where unpredictability is adaptive. Doing skid experimentation like a race car driver on Main Street at midnight may be what some drunk teen might do, but that is a form of unpredictability that is not adaptive toward a sensible ordering of the priorities of driving. Neither would be texting or trying to eat and drive.
**Determinism**
Regarding determinism, in the context of the uses discussed, pseudo-random number generation of particular distributions will suffice.
* Deadlock release, or
* Training speed-ups and improved reliability when there are local minima that are not the global minimum during optimization.
Functional tests and unit testing technologies are not only able to handle the testing of components with pseudo-randomness, but they sometimes employ pseudo-randomness to provide better testing coverage. The key to doing this well is understanding of probability and statistics, and some engineers and AI designers understand it well.
**Element of Surprise**
Where randomness is most important in AV technology is not in the decision making but in the surprises. That is the bleeding edge of that engineering work today. How can one drive safely when a completely new scenario appears in the audio or visual channels? This is perhaps the place where the diversity of human thought may be most adept, but at highway speeds, it is usually too slow to react in the way we see in movie chase scenes.
**Correlation Between Risk and Speed**
This brings up an interesting interaction of risk factors. While it is assumed that higher speeds are more dangerous, the actual mechanics and probabilities are not that clear-cut. Low speeds produce temporally longer trips and higher traffic densities. Some forms of accidents are less likely at higher speeds, specifically ones that are related mostly to either traffic density or happenstance. Other forms are more likely at higher speeds, specifically ones that are related to reaction time and tire friction.
With autonomous vehicles, tire slippage may be more accurately modeled and reaction time may be orders of magnitude faster, so minimum speed limits may be imposed more often and upper limits may increase once we get humans out of the driver's seats.
Upvotes: 3 [selected_answer] |
2018/08/13 | 328 | 1,449 | <issue_start>username_0: Can we say that the Turing test aims to develop machines or methods to reach human-level performance in all cognitive tasks and that machine learning is one of these methods that can pass the Turing test?<issue_comment>username_1: Essentially yes.
The Turing Test is essentially a benchmark or challenge *problem*. It is a task that AI researchers would like to be able to solve.
Machine learning is a *technique*. It is a tool developed by AI researchers to solve various problems. Some kinds of machine learning are applicable to the Turing Test, but others are not. Machine learning is also applicable to a wide range of other problems.
Upvotes: 2 <issue_comment>username_2: <NAME> and <NAME>, in the paper [Making AI meaningful again](https://arxiv.org/abs/1901.02918) (2019), argue that machine learning is not sufficient to build an AI that is able to fully (like humans) understand language. They state that current machine learning (or stochastic) approaches might not take enough context into account when understanding language (e.g. in machine translation), because of the possible lack of associated data or because of the large number of possible solutions to a language-understanding problem.
If understanding language is a necessary skill to pass the Turing test, then current machine learning is not sufficient to build an AI that will pass the Turing test (according to them).
Upvotes: 2 |
2018/08/14 | 2,500 | 10,489 | <issue_start>username_0: I was trying to implement the breadth-first search (BFS) algorithm for the [sliding blocks puzzle](https://en.wikipedia.org/wiki/Sliding_puzzle) (number type). Now, the main thing I noticed is that, if you have a $4 \times 4$ board, the number of states can be as large as $16!$, so I cannot enumerate all states beforehand.
How do I keep track of already visited states? I am using a `Board` class; each class instance contains a unique board pattern and is created by enumerating all possible moves from the current state.
I searched on the net and, apparently, implementations avoid going back to the just-completed previous step, **BUT** we can also reach a previously visited state by another route, and then re-enumerate all the steps that have already been explored.
So, how to keep track of visited states when all the states have not been enumerated already? Comparing already present states to the present step will be costly.<issue_comment>username_1: You can use a `set` (in the mathematical sense of the word, i.e. a collection that cannot contain duplicates) to store states that you have already seen. The operations you'll need to be able to perform on this are:
* inserting elements
* testing if elements are already in there
Pretty much every programming language should already have support for a data structure that can perform both of these operations in constant ($O(1)$) time. For example:
* `set` in Python
* `HashSet` in Java
At first glance, it may seem like adding all the states you ever see to a set like this will be expensive memory-wise, but it is not too bad in comparison to the memory you already need for your frontier; if your branching factor is $b$, your frontier will grow by $b - 1$ elements per node that you visit (remove $1$ node from frontier to "visit" it, add $b$ new successors/children), whereas your set will only grow by $1$ extra node per visited node.
In pseudocode, such a set (let's name it `closed_set`, to be consistent with [the pseudocode on wikipedia](https://en.wikipedia.org/wiki/Breadth-first_search#Pseudocode)) could be used in a Breadth-First Search as follows:
```
frontier = First-In-First-Out Queue
frontier.add(initial_state)
closed_set = set()

while frontier not empty:
    current = frontier.remove_next()

    if current == goal_state:
        return something

    for each child in current.generate_children():
        if child not in closed_set:  // This operation should be supported in O(1) time regardless of closed_set's current size
            frontier.add(child)

    closed_set.add(current)  // this should also run in O(1) time
```
(some variations of this pseudocode might work too, and be more or less efficient depending on the situation; for example, you could also take the `closed_set` to contain all nodes of which you have already added children to the frontier, and then entirely avoid the `generate_children()` call if `current` is already in the `closed_set`.)
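To make this concrete, here is a minimal runnable Python version of the same idea (my own sketch, using the 3×3 "8-puzzle" as a small stand-in for the 4×4 board; it marks states as seen when they first enter the frontier, which is one such variation):

```python
from collections import deque

def neighbors(state):
    """Successors of a 3x3 sliding-puzzle state (a 9-tuple, 0 = blank)."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]   # slide the adjacent tile into the blank
            yield tuple(s)

def bfs(initial_state, goal_state):
    """BFS with a closed set; returns the minimum number of moves."""
    frontier = deque([(initial_state, 0)])   # First-In-First-Out queue
    closed_set = {initial_state}             # O(1) membership tests
    while frontier:
        current, depth = frontier.popleft()
        if current == goal_state:
            return depth
        for child in neighbors(current):
            if child not in closed_set:      # skip already-seen states
                closed_set.add(child)
                frontier.append((child, depth + 1))
    return None                              # goal unreachable from start
```

For example, `bfs((1, 2, 3, 4, 5, 6, 7, 0, 8), (1, 2, 3, 4, 5, 6, 7, 8, 0))` returns `1`, since a single slide completes the board.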
---
What I described above would be the standard way to handle this problem. Intuitively, I suspect a different "solution" could be to always randomize the order of a new list of successor states before adding them to the frontier. This way, you do not avoid the problem of occasionally adding states that you've already previously expanded to the frontier, but I do think it should significantly reduce the risk of getting stuck in infinite cycles.
**Be careful**: I do not know of any formal analysis of this solution that proves that it always avoids infinite cycles though. If I try to "run" this through my head, intuitively, I suspect it should kind of work, and it does not require any extra memory. There may be edge cases that I'm not thinking of right now though, so it also simply might not work, the standard solution described above will be a safer bet (at the cost of more memory).
Upvotes: 4 [selected_answer]<issue_comment>username_2: username_1's answer is correct: you should use a HashSet or a similar structure to keep track of visited states in BFS Graph Search.
However, it doesn't quite answer your question. You're right that, in the worst case, BFS will then require you to store 16! nodes. Even though the insertion and check times in the set will be O(1), you'll still need an absurd amount of memory.
To fix this, **don't use BFS**. It's intractable for all but the simplest of problems, because it requires both time *and memory* that are exponential in the distance to the nearest goal state.
A much more memory-efficient algorithm is *[iterative deepening](https://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search)*. It has all the desirable properties of BFS, but uses only O(n) memory, where n is the number of moves to reach the nearest solution. It might still take a while, but you'll hit memory limits long before CPU-related limits.
Better still, develop a domain-specific heuristic, and use [A\* search](https://en.wikipedia.org/wiki/A*_search_algorithm). This should require you to examine only a very small number of nodes, and allow the search to complete in something much closer to linear time.
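To illustrate (my own sketch, not code from any of the linked pages): an A\* search for the 3×3 version of the puzzle with the classic Manhattan-distance heuristic.

```python
import heapq

def neighbors(state):
    """Successors of a 3x3 sliding-puzzle state (a 9-tuple, 0 = blank)."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def manhattan(state, goal):
    """Admissible heuristic: total city-block distance of tiles from home."""
    total = 0
    for tile in range(1, 9):                      # the blank is not counted
        r1, c1 = divmod(state.index(tile), 3)
        r2, c2 = divmod(goal.index(tile), 3)
        total += abs(r1 - r2) + abs(c1 - c2)
    return total

def astar(start, goal):
    """A* search; returns the length of an optimal move sequence."""
    open_heap = [(manhattan(start, goal), 0, start)]   # entries are (f, g, state)
    best_g = {start: 0}
    while open_heap:
        f, g, state = heapq.heappop(open_heap)
        if state == goal:
            return g
        for child in neighbors(state):
            ng = g + 1
            if ng < best_g.get(child, float("inf")):   # found a cheaper path
                best_g[child] = ng
                heapq.heappush(open_heap,
                               (ng + manhattan(child, goal), ng, child))
    return None
```

Because the heuristic never overestimates, the first time the goal is popped its `g` value is optimal, and far fewer states are expanded than in a blind breadth-first search.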
Upvotes: 4 <issue_comment>username_3: Ironically the answer is "use whatever system you want." A hashSet is a good idea. However, it turns out that your concerns over memory usage are unfounded. BFS is so bad at these sorts of problems, that it resolves this issue for you.
Consider that your BFS requires you to keep a queue of unprocessed states. As you progress into the puzzle, the states you deal with become more and more different, so you're likely to see that each ply of your BFS multiplies the number of states to look at by roughly 3.
This means that, when you're processing the last ply of your BFS, you have to have at least 16!/3 states in memory. Whatever approach you used to make sure that fit in memory will be sufficient to ensure your previously-visited list fits in memory as well.
As others have pointed out, this is not the best algorithm to use. Use an algorithm which is a better fit for the problem.
Upvotes: 2 <issue_comment>username_4: While the answers given are *generally* true, a BFS in the 15-puzzle is not only quite feasible, it was done in 2005! The paper that describes the approach can be found here:
<http://www.aaai.org/Papers/AAAI/2005/AAAI05-219.pdf>
A few key points:
* In order to do this, external memory was required - that is the BFS used the hard drive for storage instead of RAM.
* There are actually only 16!/2 states, since the state space has two mutually unreachable components.
* This works in the sliding tile puzzle because the state space grows really slowly from level to level. This means that the total memory required for any level is far smaller than the full size of the state space. (This contrasts with a state space like Rubik's Cube, where the state space grows much more quickly.)
* Because the sliding-tile puzzle is undirected, you only have to worry about duplicates in the current or previous layer. In a directed space you may generate duplicates in any previous layer of the search which makes things much more complicated.
* In the original work by Korf (linked above) they didn't actually store the result of the search - the search just computed how many states were at each level. If you want to store the first results you need something like WMBFS (<http://www.cs.du.edu/~sturtevant/papers/bfs_min_write.pdf>)
* There are three primary approaches to comparing states from the previous layers when states are stored on disk.
+ The first is sorting-based. If you sort two files of successors, you can scan them in linear order to find duplicates.
+ The second is hash-based. If you use a hash function to group successors into files, you can load files which are smaller than the full state space to check for duplicates. (Note that there are two hash functions here -- one to send a state to a file, and one to differentiate states within that file.)
+ The third is structured duplicate detection. This is a form of hash-based detection, but it is done in a way that duplicates can be checked immediately when they are generated instead of after they have all been generated.
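As a toy illustration of the hash-based idea (my own sketch; real implementations write each bucket to a separate file on disk and apply a second hash within a file), successors can be partitioned so that duplicates always land in the same bucket, and each bucket can then be deduplicated independently while it alone is held in memory:

```python
def bucket_of(state, n_buckets=256):
    """First hash: decides which file/bucket a state belongs to."""
    return hash(state) % n_buckets

def partition_and_deduplicate(successors, n_buckets=256):
    """Group successors by bucket; identical states hash to the same
    bucket, so duplicate detection never needs the whole layer at once."""
    buckets = {}
    for s in successors:
        buckets.setdefault(bucket_of(s, n_buckets), set()).add(s)
    return buckets
```

The memory saving comes from processing one bucket at a time; the in-memory `set` here stands in for whatever per-file duplicate check is used.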
There is a lot more to be said here, but the paper(s) above give a lot more details.
Upvotes: 3 <issue_comment>username_5: **Approaches to the Game**
It is true that the board has $16!$ possible states. It is also true that using a hash set is what students learn in first-year algorithms courses to avoid redundancy and endless looping when searching a graph that may contain graph cycles.
However, those trivial facts are not pertinent if the goal is to complete the puzzle in the fewest computing cycles. Breadth-first search isn't a practical way to complete an orthogonal move puzzle. The very high cost of a breadth-first search would only be necessary if the number of moves is of paramount importance for some reason.
**Sub-sequence Descent**
Most of the vertices representing states will never be visited, and each state that is visited can have between two and four outgoing edges. Each block has an initial position and a final position and the board is symmetric. The greatest freedom of choice exists when the open space is one of the four middle positions. The least is when the open space is one of the four corner positions.
A reasonable disparity (error) function is simply the sum of all x disparities plus the sum of all y disparities and a number heuristically representing which of the three levels of freedom of movement exists because of the resulting placement of the open space (middle, edge, corner).
Although blocks may temporarily move away from their destinations to support a strategy toward completion requiring a sequence of moves, there is rarely a case where such a strategy exceeds eight moves, generating, on the average, 5,184 permutations for which the final states can be compared using the disparity function above.
If the positions of the empty space and blocks 1 through 15 are encoded as an array of nibbles, only addition, subtraction, and bit-wise operations are needed, making the algorithm fast. The eight-move brute-force strategy can be repeated until the disparity falls to zero.
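A sketch of that encoding (my own illustration; the exact layout is an assumption, with the open space in the lowest nibble): the 16 positions are packed four bits each into one integer, and the x/y disparity sums need only shifts, masks, addition, and subtraction.

```python
def pack(positions):
    """Pack 16 cell indices (open space first, then blocks 1-15),
    each in 0..15, into one integer, four bits per entry."""
    v = 0
    for slot, cell in enumerate(positions):
        v |= cell << (4 * slot)
    return v

def disparity(state, goal):
    """Sum of |dx| + |dy| over blocks 1-15 on the 4x4 board
    (cell & 3 is the x coordinate, cell >> 2 is the y coordinate)."""
    total = 0
    for slot in range(1, 16):                 # skip the open space
        a = (state >> (4 * slot)) & 0xF
        b = (goal >> (4 * slot)) & 0xF
        total += abs((a & 3) - (b & 3)) + abs((a >> 2) - (b >> 2))
    return total
```

The freedom-of-movement term mentioned above could then be added as a small heuristic constant depending on whether the open space's cell is a middle, edge, or corner position.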
**Summary**
This algorithm cannot cycle because there is always at least one of the permutations of eight moves that decreases disparity, regardless of the initial state, with the exception of a starting state that is already complete.
Upvotes: 1 |
2018/08/14 | 357 | 1,400 | <issue_start>username_0: I just stumbled upon the concept of neuron coverage, which is the ratio of activated neurons to total neurons in a neural network. But what does it mean for a neuron to be "activated"? I know what activation functions are, but what does being activated mean, e.g., in the case of a ReLU or a sigmoid function?<issue_comment>username_1: A neuron is said to be activated when its output is greater than a threshold, generally 0.
For example:
\begin{equation}
y = \text{ReLU}(a) > 0
\end{equation}
when
\begin{equation}
a = w^Tx+b > 0
\end{equation}
Same goes for sigmoid or other activation functions.
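For instance (an illustrative sketch with made-up weights), the neuron coverage mentioned in the question, i.e. the fraction of neurons whose activation exceeds the threshold, can be computed directly from the layer outputs:

```python
import numpy as np

def relu(a):
    return np.maximum(0, a)

def neuron_coverage(layer_outputs, threshold=0.0):
    """Fraction of neurons whose output exceeds the threshold."""
    acts = np.concatenate([out.ravel() for out in layer_outputs])
    return float(np.mean(acts > threshold))

# Toy forward pass: 2 inputs -> 2 hidden ReLU units -> 1 output ReLU unit
x = np.array([1.0, 1.0])
h = relu(np.array([[1.0, 0.5], [-1.0, 0.25]]) @ x)   # h = [1.5, 0.0]
y = relu(np.array([[0.5, -0.5]]) @ h)                # y = [0.75]
coverage = neuron_coverage([h, y])                   # 2 of 3 neurons fire
```

With these invented weights, one hidden unit is inactive, so the coverage is 2/3.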
Upvotes: 5 [selected_answer]<issue_comment>username_2: The term "activated" is mostly used when talking about activation functions that only output a nonzero value when the input to the activation function is greater than a certain threshold.
The term "activated" may be used especially when discussing ReLU. ReLU will be "activated" when its output is greater than 0, which is also when its input is greater than 0.
Other activation functions, like sigmoid, always return a value greater than 0 and don't have any special threshold. Therefore, the term "activated" carries less meaning here.
Even though we know little about them, the neurons in the brain also seem to have something which resembles an activation function with some kind of "activation threshold".
Upvotes: 2 |
2018/08/14 | 3,440 | 13,191 | <issue_start>username_0: How important is consciousness and self-consciousness for making advanced AIs? How far away are we from making such?
When making e.g. a neural network there's (very probably) no consciousness within it, but just mathematics behind, but do we need the AIs to become conscious in order to solve more complex tasks in the future? Furthermore, is there actually any way we can know for sure if something is conscious, or if it's just faking it? It's "easy" to make a computer program that claims it's conscious, but that doesn't mean it is (e.g. Siri).
And if the AIs are only based on predefined rules without consciousness, can we even call it "intelligence"?<issue_comment>username_1: Artificial consciousness is a challenging theoretical and engineering objective. Once that major challenge is met, the computer's conscious awareness of itself would likely be a minor addition, since the conscious computer is just another object of which its consciousness can be aware.
A child can look in the mirror and recognize that moving their hands back and forth or making faces produces corresponding changes in the reflection. They recognize themselves. Later on they realize that exerting physical control over their own movement is much easier than exerting control over another person's hands or face.
Some learn that limited control of the faces and manual operations of others is possible if certain social and economic skills are mastered. They become employers, landlords, investors, activists, writers, directors, public figures, or entrepreneurs.
Anyone who has studied the cognitive sciences or experienced the line between types of thought because they are a professional counselor or just a deep listener knows that the lines around consciousness are blurry. Consider these.
* Listening to speech
* Watching a scene
* Focusing on a game
* Presenting an idea
* Washing up for work
* Driving a car
* Choosing a purchase
Any one of these things can be done with or without certain kinds of consciousness, subconsciousness, impulse, or habit.
Subjectively, people report getting out of the car and not recalling having driven home. One can listen to someone talking, nod in affirmation, respond with, "Yeah, I understand," and even repeat what they said, and yet appear to have no memory of the content of the speech if queried in depth. One can read a paragraph and get to the end without comprehension.
In contrast, a person may mindfully wash up for work, considering the importance of hygiene and paying attention like a surgeon preparing for an operation, noticing the smell of the soap and even the chlorination of the city water.
Between those extremes, partial consciousness is also detectable by experiment and in personal experience. Consciousness most definitely requires attention functionality, which tentatively supervises the coordination of other brain-body sub-systems.
Once a biological or artificial system achieves the capacity to coordinate attentively, the objects and tasks toward which they can be coordinated can be interchanged. Consider these.
* Dialog
* Playing to win
* Detecting honesty or dishonesty
Now consider how similar or different these mental activities are when we compare self-directed or externally directed attention.
* One can talk to one's self or talk to another
* One can play both sides of a chess game or play against another
* One can scrutinize one's own motives or those of another
This is an illustration of why the self- part of self-consciousness is not the challenge in AI. It is the attentive (yet tentative) coordination that is difficult. Early microprocessors, designed to work in real-time control systems, included (and still include) exception signaling that simplistically models this tentativeness. For instance, while playing to win in a game, one might try to initiate dialog with the subject. Attention may shift when the two activities require the same sub-systems.
We tend to consider this switching of attention consciousness too. If we are the person trying to initiate dialog with the person playing to win, we might say, "Hello?" The question mark is because we are wondering if the player is conscious.
If one was to diminish the meaning of consciousness to the most basic criteria, one might say this.
>
> "My neural net is intelligent in some small way because it is conscious of the disparity between my convergence criteria and the current behavior of the network as it is parametrized, so it is truly an example of artificial intelligence, albeit a primitive one."
>
>
>
There is nothing grossly incorrect about that statement. Some have called that, "Narrow Intelligence." That is a slightly inaccurate characterization, since there may be an astronomical number of possible applications of an arbitrarily deep artificial network that uses many of the most effective techniques available in its design.
The other problem with narrowness as a characterization is the inference that there are intelligent systems that are not narrow. Every intelligent system is narrow compared to a more intelligent system. Consider this thought experiment.
>
> Hannah writes a paper on general intelligence with excellence, both in theoretical treatment and in writing skill. Many quote it and reference it. Hannah is now so successful in her AI career that she has the money and time to build a robotic system. She bases its design on her now famous paper and spares no expense.
>
>
> To her surprise, the resulting robot is so adaptive that its adaptability exceeds even her own. She names it Georgia Tech for fun because she lives near the university.
>
>
> Georgia becomes a great friend. She learns at an incredible rate and is a surprisingly great housemate, cleaning better than Hannah thought humanly possible, which may be literally true.
>
>
> Georgia applies to Georgia Tech, just down the bus line from Hannah's house and studies artificial intelligence there. Upon the achievement of a PhD after just three years of study, Georgia sits with Hannah after a well attended Thesis Publication party that Hannah graciously held for her.
>
>
> After the last guest leaves, there is a moment of silence as Hannah realizes the true state of her household. She thinks, "Will Georgia now exceed me in her research?" Hannah finally, sheepishly asks, "In complete honesty, Georgia, do you think you are now a general intelligence like me?"
>
>
> There is a pause. With a forced look of humility, Georgia replies, "By your definition of general intelligence, I am. You are no longer."
>
>
>
Whether this story becomes true in 2018, 3018, or never, the principle is clear. Georgia is just as able to analyze herself comparatively with Hannah as Hannah is similarly able. In the story, Georgia applies the definition created in Hannah's paper because Georgia is now able to conceive of many definitions of intelligence and chooses Hannah's as the most pertinent in the context of the conversation.
Now imagine this alteration to the story.
>
> ... She thinks, at what level is Georgia thinking? Hannah finally, sheepishly asks, "In complete honesty, Georgia, are you now as conscious as me?"
>
>
> Georgia thinks through the memory of all uses of the word conscious in her past studies — a thousand references in cognitive science, literature, law, neurology, genetics, brain surgery, treatment of brain injury, and addiction research. She pauses for a few microseconds to consider it all thoroughly, while at the same time sensing her roommate's body temperature, neuro-chemical balances, facial muscle motor trends, and body language.
>
>
> Respectfully, she waits 3.941701 extra seconds, which she calculated as the delay that would minimize any humiliation to Hannah, whom she loves, and replies, "Conscious of what?"
>
>
>
In Georgia's reply may be a hypothesis of which Hannah may or may not be aware. For any given automatons, $a, b, \ldots$, given consciousness, $C$, of a scenario $s$, we have a definition, $\Phi\_c$, that can be applied to evaluate the aggregate of all aspects of consciousness of any of the automatons, $x$, giving $\Phi\_c(C\_x(s))$. Georgia's (apparently already proven) hypothesis is thus.
>
> $\forall \Phi\_c(C\_a(s)) \;\;\; \exists \;\;\; b, \, \epsilon>0 \;\; \ni \;\; \Phi\_c(C\_b(s)) + \epsilon > \Phi\_c(C\_a(s))$
>
>
>
This is a mathematical way of saying that there can always be someone or some thing more conscious of a given scenario, whether or not she, he, or it is brought into existence. Changing the criteria of evaluation from consciousness to intelligence, we have thus.
>
> $\forall \Phi\_i(C\_a(s)) \;\;\; \exists \;\;\; b, \, \epsilon>0 \;\; \ni \;\; \Phi\_i(C\_b(s)) + \epsilon > \Phi\_i(C\_a(s))$
>
>
>
One can only surmise that Hannah's paper defines general intelligence relative to whatever is the smartest thing around, which was once well-educated human beings. Thus Hannah's definition of intelligence is dynamic. Georgia applies the same formula to the new situation where she is now the standard against which lesser intelligence is narrow.
Regarding the ability to confirm consciousness, it is actually easier to confirm than intelligence. Consider this thought experiment.
>
> Jack is playing chess with Dylan using the new chess set that Jack bought. In spite of the aesthetic beauty of this new set, with its white onyx and black agate pieces, Dylan moves each piece with prowess and checkmates Jack. Jack wonders if Dylan is more intelligent than him and asks what would be a normal question under those conditions.
>
>
> "Dylan, buddy, how long have you been playing chess?"
>
>
>
Regardless of the answer and regardless whether Dylan is a robot with a quantum processor of advanced AI or a human being, the intelligence of Dylan cannot be reliably gauged. However, there is NO DOUBT that Dylan was conscious of the game play.
In the examples in the lists at the top of this answer there is a particular set of requirements to qualify as consciousness. For the case of Jack and Dylan playing, a few things MUST work in concert.
1. Visual recognition of the state of the board
2. Motor control of the arm and hand to move pieces
3. Tactile detection in finger and thumb tips
4. Hand-eye coordination
5. Grasp coordination
6. A model of how to physically move board pieces
7. A model of the rules of chess in memory
8. A model of how to win when playing it (or astronomical computational power to try every possible permutation that makes any sense)
9. An internal representation of the board state
10. Attention execution, visually and in terms of the objective of winning
11. Prioritization that decides, unrelated to survival odds or asset accumulation, whether to beat Jack in chess, do something else, or nothing (non-deterministic if the ancient and commonplace notion of the causal autonomy of the soul is correct)
The topology of connections is as follows, and there may be more.
>
> 1 ⇄ 4 ⇄ 2
>
>
> 3 ⇄ 5 ⇄ 2
>
>
> 4 ⇄ 6 ⇄ 5
>
>
> 7 ⇄ 8 ⇄ 9
>
>
> 6 ⇄ 10 ⇄ 8
>
>
> 10 ⇄ 11
>
>
>
This is one of many integration topologies that support one of many types of things to which consciousness might apply.
Whether looking in the mirror just to prepare for work or whether looking deeply, considering the ontological question, "Who am I?" each mix of consciousness, subconsciousness, impulse, and habit require a specific topology of mental features. Each topology must be coordinated to form its specific embodiment of consciousness.
To address some other sub-questions: it is easy to make a machine that claims itself to be conscious; a digital voice recorder can be programmed to do it in five seconds by recording yourself saying it.
Getting a robot to read this answer or some other conception, consider it thoughtfully, and then construct the sentence from knowledge of the vocabulary and conventions of human speech to tell you its conclusion is an entirely different task. The development of such a robot may take 1,000 more years of AI research. Maybe ten. Maybe never.
The last question, switched from plural to singular is, "If [an artificially intelligent device] is only [operating] on predefined rules, without consciousness, can we even call it intelligent?" The answer is necessarily dependent upon definition $\Phi\_i$ above, and, since neither $\Phi\_c$ nor $\Phi\_i$ have a standard definition within the AI community, one can't determine the cross-entropy or correlation. It is indeterminable.
Perhaps formal definitions of $\Phi\_c$ and $\Phi\_i$ can now be written and submitted to the IEEE or some standards body.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Self-conscious AIs are important because consciousness is what makes us human, and scientists are trying to develop AIs that are closer to humans and human behaviour. The human thought process is very complicated, and developing it in AIs could really mean replacing us with robots that have consciousness. The only reason AIs are not taking up work that involves decision-making is that they do not have consciousness.
Upvotes: 0 |
2018/08/15 | 327 | 1,402 | <issue_start>username_0: I read somewhere that a *multilayer perceptron* is a recursive function in its forward propagation phase. I am not sure what the recursive part is. For me, I would see an MLP as a chained function. So, it would be nice if anyone could relate an MLP to a recursive function.<issue_comment>username_1: Inherently, no. The MLP is just a data structure. It represents a function, but a standard MLP is just representing an input-output mapping, and there's no recursive structure to it.
On the other hand, possibly your source is referring to the common *algorithms* that operate over MLPs, specifically forward propagation for prediction and back propagation for training. Both of these algorithms are easy to think about recursively, with each node performing a sort of recursive call with its children or parents as the target, and some useful information about activations or errors attached. I actually encourage my students to implement it recursively for this reason, even though it's probably not the most efficient solution.
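For instance (a hypothetical sketch; the node names, the `parents` mapping, and the sigmoid choice are all invented for illustration), the forward pass can be written as a recursion in which each node asks its parents for their outputs:

```python
import math

def output(node, parents, inputs, cache=None):
    """Recursively compute a node's activation from its parents.

    parents: maps a node name to a list of (parent_name, weight) pairs.
    inputs:  maps input-node names to their given values.
    """
    if cache is None:
        cache = {}
    if node in inputs:                         # base case: an input node
        return inputs[node]
    if node not in cache:                      # memoize shared sub-results
        z = sum(w * output(p, parents, inputs, cache)
                for p, w in parents[node])
        cache[node] = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
    return cache[node]
```

For a tiny network x1, x2 → h → o with weights `{"h": [("x1", 1.0), ("x2", -1.0)], "o": [("h", 2.0)]}` and inputs `{"x1": 0.5, "x2": 0.5}`, the recursion first evaluates h = σ(0) = 0.5 and then o = σ(1.0). The memoization cache plays the same role as storing per-layer activations in the usual iterative implementation.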
Upvotes: 3 <issue_comment>username_2: Sure, you can define plenty of things we don't generally need to regard as recursive in this way. An MLP is just a series of functions applied to its input. This can be loosely formulated as
$$ o\_n = f(o\_{n-1})$$
Where $o\_n$ is the output of layer $n$.
But this clearly doesn't reveal much, does it?
Upvotes: 2 |
2018/08/16 | 1,055 | 4,515 | <issue_start>username_0: To the best of my understanding, the Monte Carlo tree search (MCTS) algorithm is an alternative to minimax for searching a tree of nodes. It works by choosing a move (generally, the one with the highest chance of being the best), and then performing a random playout on the move to see what the result is. This process continues for the amount of time allotted.
This doesn't sound like machine learning, but rather a way to traverse a tree. However, I've heard that AlphaZero uses MCTS, so I'm confused. If AlphaZero uses MCTS, then why does AlphaZero learn? Or did AlphaZero do some kind of machine learning before it played any matches, and then use the intuition it gained from machine learning to know which moves to spend more time playing out with MCTS?<issue_comment>username_1: Monte Carlo Tree Search is not usually thought of as a machine learning technique, but as a search technique. There are parallels (MCTS does try to learn general patterns from data, in a sense, but the patterns are not very general), but really MCTS is not a suitable algorithm for most learning problems.
AlphaZero was a combination of several algorithms. One was MCTS, but MCTS needs a function to tell it how good different states of the game might be (or else, it needs to simulate entire games). One way to handle this function in a game like chess or Go is to approximate it by training a neural network, which is what the DeepMind researchers did. This is the learning component of AlphaZero.
Upvotes: 4 [selected_answer]<issue_comment>username_2: John's answer is correct in that MCTS is traditionally not viewed as a Machine Learning approach, but as a tree search algorithm, and that AlphaZero combines this with Machine Learning techniques (Deep Neural Networks and Reinforcement Learning).
However, there are some interesting similarities between MCTS itself and Machine Learning. In some sense, MCTS attempts to "learn" the value of nodes from experience generated through those nodes. This is very similar to how Reinforcement Learning (RL) works (which itself is typically described as a subset of Machine Learning).
Some researchers have also experimented with replacements for the traditional *Backpropagation phase* of MCTS (which, from an RL point-of-view, can be described as implementing Monte-Carlo backups) based on other RL methods (e.g., Temporal-Difference backups). A comprehensive paper describing these sorts of similarities between MCTS and RL is: [On Monte Carlo Tree Search and Reinforcement Learning](https://jair.org/index.php/jair/article/view/11099/26289).
Also note that the *Selection phase* of MCTS is typically treated as a sequence of small Multi-Armed Bandit problems, and those problems also have strong connections with RL.
---
**TL;DR**: MCTS is not normally viewed as a Machine Learning technique, but if you inspect it closely, you can find lots of similarities with ML (in particular, Reinforcement Learning).
Upvotes: 3 <issue_comment>username_3: Welcome to the mine-field of semantic definitions within AI! According to Encyclopedia Britannica ML is a “discipline concerned with the implementation of computer software that can learn autonomously.” There are a bunch of other definitions for ML but generally they are all this vague, saying something about “learning”, “experience”, “autonomous”, etc. in varying order. There is no well-known benchmark definition that most people use, so unless one wants to propose one, whatever one posts on this needs to be backed up by references.
According to Encyclopedia Britannica’s definition the case for calling MCTS part of ML is pretty strong (the work of Chaslot, Coulom, et al. from 2006-8 is used for the MCTS reference). There are two policies used in MCTS, a tree-policy and a simulation-policy. At decision time the tree-policy updates action-values by expanding the tree structure and backing up values from whatever it finds from search. There is no hard-coding of which nodes should be selected/expanded; it all comes from maximizing rewards from statistics. The nodes closer to the root appear more and more intelligent as they “learn” to mimic distributions/state and/or action-values from the corresponding ones from reality. Whether this can be called “autonomous” is an equally difficult question because in the end it’s humans who wrote the formulas/theory MCTS uses. 50 years from now it may not be called autonomous, or ML, but today it would probably at least "qualify".
Upvotes: 1
2018/08/16
<issue_start>username_0: The Turing test was created to test machines exhibiting behavior equivalent to, or indistinguishable from, that of a human. Is that a sufficient condition for intelligence?<issue_comment>username_1: We don't know.
However, an important line will have been crossed - it will be impossible to tell the difference between an intelligent agent and the machine by use of a text interface. Which is the main point of the test - "if it quacks like a duck".
It is also an important philosophical point. Whether intelligence is defined purely by behaviour in an environment, or by the mechanisms that arrive at that behaviour. A suitably large database of conversational openers and "correct" responses can in theory mimic a lot of real world conversations. Some chatbots take advantage of this and use modern computer capacity to store a *lot* of responses, and that approach has gained competitive scores in the Loebner prize competition (although not to the stage of actually passing the test). This leads us to the Chinese Room issue, and wondering which part of the system is actually intelligent, or even how much of human conversation is actually intelligent or meaningful (and it what ways).
Upvotes: 3 [selected_answer]<issue_comment>username_2: If intelligence is defined as utility in relation to a task, any algorithm can be said to be intelligent.
If the task is convincing a human that an algorithm is human, and the algorithm achieves this goal, it can be said to be strongly intelligent in relation to the task of fooling the human. (Here the term "strong" is used because the algorithm's performance is stronger than the human's. Strength is relative, and Turing tests are unavoidably subjective.)
However, this does not mean that the algorithm is generally strongly intelligent, because it may not exceed human capability in *all* tasks.
Upvotes: 0
2018/08/16
<issue_start>username_0: Could we teach an AI with sentences such as "ants are small" and "the sky is blue"? Is there any research work that attempts to do this?<issue_comment>username_1: I think that what you're really asking about is the question of [knowledge representation](https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning). Regardless of how you train your AI, one of the most fundamental questions is how you represent "knowledge", especially when it exists at different levels of abstraction, may be mutually recursive, etc. Along with that goes the question of [belief revision](https://en.wikipedia.org/wiki/Belief_revision), which deals with how you update existing beliefs/knowledge in the light of new information.
Both of these areas are still subject to plenty of active research and neither has entirely settled answers to the core questions. But progress has been made in both areas.
Personally I suspect that something like [semantic networks](https://en.wikipedia.org/wiki/Semantic_network) or [conceptual graphs](https://en.wikipedia.org/wiki/Conceptual_graph) will be the best answer to the KR problem. Dealing with belief revision seems even fuzzier to me, although there are known strategies (like the [AGM postulates](https://en.wikipedia.org/wiki/Belief_revision#The_AGM_postulates)) that work to a point. Something like [Bayesian Belief Networks](https://en.wikipedia.org/wiki/Bayesian_network) may also prove useful.
Upvotes: 1 <issue_comment>username_2: I have recently watched [a podcast by <NAME> where he interviews <NAME>](https://www.youtube.com/watch?v=STFcvzoxVw4), who talks about this way of teaching with "predicates" and "invariants".
If you were really able to teach a model with sentences like "a bird is an animal that is able to fly" but "not all birds fly, such as penguins", that would probably represent one of the biggest milestones in machine learning, especially, if the machine was able to learn and apply this learned knowledge as *efficiently* as humans do. However, we are not quite there yet!
To know more about this new learning paradigm **Learning Using Statistical Invariants (LUSI)**, you probably should read the paper [Rethinking statistical learning theory: learning using statistical invariants](https://link.springer.com/article/10.1007/s10994-018-5742-0) (2019), by <NAME> and <NAME>, which will probably be difficult to follow if you have no knowledge of learning theory and your mathematical background is poor. However, there are some sections of this paper that are accessible to everyone. For example, section 6.6
>
> Suppose that the Teacher teaches Student to recognize digits by providing a number of examples and also suggesting the following heuristics: "In order to recognize the digit zero, look at the center of the picture — it is usually light; in order to recognize the digit 2, look at the bottom of the picture - it usually has a dark tail" and so on.
>
>
> From the theory above, the Teacher wants the Student to construct specific predicates $\psi(x)$ to use them for invariants. However, the Student does not necessarily construct exactly the same predicate that the Teacher had in mind (the Student's understanding of concepts "center of the picture" or "bottom of the picture": can be different). Instead of $\psi(x)$, the Student constructs function $\hat{\psi}(x)$. However, this is acceptable, since any function from $L\_2$ can serve as a predicate for an invariant.
>
>
>
Upvotes: 0
2018/08/16
<issue_start>username_0: I have a 100-150 word text and I want to extract particular information like location, product type, dates, specifications, and price.
Suppose I arrange training data which has the text as input and location/product/dates/specs/price as output values. I want to train the model for these specific outputs only.
I have tried spaCy and NLTK for entity extraction, but they don't meet the above requirements.
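For the highly regular fields (like the dimension specs in the sample below, e.g. `65x65x6mm`), I imagine a rule-based pass could at least serve as a baseline; here is a rough sketch (the pattern is only a guess at the formats that appear, not a substitute for a trained NER model):

```python
import re

# Hypothetical pattern for dimension specs such as "65x65x6mm" or "50 x 5mm"
DIM_RE = re.compile(
    r'\b\d+(?:\.\d+)?(?:\s*x\s*\d+(?:\.\d+)?){1,2}\s*mm\b',
    re.IGNORECASE,
)

sample = "Angle Iron 65x65x6mm for fencing post, MS Flat 50 x 5mm of 2.60m height"
matches = DIM_RE.findall(sample)
print(matches)  # → ['65x65x6mm', '50 x 5mm']
```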
Sample text:
>
> Supply of Steel Fabrication Items.
> General Item .
> Construction Material .
> Hardware Stores and Tool .
> Construction of Security Fence. - Angle Iron 65x65x6mm for fencing post of height 3.5, Angle Iron 65x65x6mm for fencing post of height 3.5, MS Flat 50 x 5mm of 2.60m height, Angle Iron 50x50x6mm for Strut post of height 3.10mtr, Angle Iron 50x50x6mm for fencing post of height 1.83, Angle Iron 50x50x6mm for fencing post of height 1.37, Barbed wire made out of GI wire of size 2.24mm dia, Chain link fence dia 4 mm and size of mesh 50mm x, Concertina Coil 600mm extentable up to 6 mtr, Concertina Coil 900mm extentable up to 15 to 20 mtr, Binding wire 0.9mm dia., 12 mm dia 50mm long bolts wih nuts & 02 x washers, Cement in polythene bags 50 kgs each grade 43 OPC, Sand Coarse confiming to IS - 383-970, 2nd revision, Crushed Stone Aggregate 20 mm graded, TMT Bar 12mm dia with 50mm U bend, Lime 1st quality, Commercial plywood 6' x 3' x 12 mm., Nails all Type 1" 2"3" 4" 5" and 6"., Primer Red Oxide, Synthetic enamel paint, colour black/white Ist quality .
> Angle Iron 65x65x6mm for fencing post of height 3.5, Angle Iron 65x65x6mm for fencing post of height 3.5 mtr, MS Flat 50 x 5mm of 2.60m height, Angle Iron 50x50x6mm for Strut post of height 3.10mtr, Barbed wire made out of GI wire of size 2.24mm dia, Chain link fence dia 4 mm and size of mesh 50mm x, Concertina Coil 600mm extentable up to 6 mtr, Binding wire 0.9mm dia., 12 mm dia 50mm long bolts with nuts & 02 x washers, Cement in polythene bags 50 kgs each grade 43 OPC, Sand Coarse confiming to IS - 383-970, 2nd revision, Crushed Stone Aggregate 20 mm graded, TMT Bar 12mm dia with 50mm U bend, Lime 1st quality, Commercial plywood 6' x 3' x 12 mm., Nails all Type 1" 2"3" 4" 5" and 6"., Primer Red Oxide, Synthetic enamel paint, colour black/white Ist quality., Cutting Plier 160mm long, Leather Hand Gloves/Knitted industrial, Ring Spanner of 16mm x 17mm, 14 x 16mm, Crowbar hexagonal 1200mm long x 40mm, Plumb bob steel, Bucket steel 15 ltr capacity (as per, Plastic water tank 500 ltrs Make - Sintex, Water level pipe 30 Mtr, Brick Hammer 250 Gms with handle, Hack saw Blade double side, Welding Rod, Cutting rod for making holes, HDPE Sheet 5' x 8', Plastic Measuring tape 30 Mtr, Steel Measuring tape 5 Mtr, Wooden Gurmala 6"x3", Steel Pan Mortar of 18"dia (As, Showel GS with wooden handle, Phawarah with wooden handle (As per, Digital Vernier Caliper, Digital Weighing Machine cap 500 Kgs, Portable Welding Machine, Concrete mixer machine of 8 CFT .
> Angle Iron 65x65x6mm for fencing post of height 3.5, Angle Iron 65x65x6mm for fencing post of height 3.5, MS Flat 50 x 5mm of 2.60m height, Angle Iron 50x50x6mm for Strut post of height 3.10mtr, Barbed wire made out of GI wire of size 2.24mm dia, Chain link fence dia 4 mm and size of mesh 50mm, Concertina Coil 600mm extentable up to 6 mtr, Binding wire 0.9mm dia., 12 mm dia 50mm long bolts with nuts & 02 x washers, Cement in polythene bags 50 kgs each grade 43, Sand Coarse confiming to IS - 383-970, 2nd revision, Crushed Stone Aggregate 20 mm graded, TMT Bar 12mm dia with 50mm U bend, Lime 1st quality, Commercial plywood 6' x 3' x 12 mm., Nails all Type 1" 2"3" 4" 5" and 6"., Primer Red Oxide, Synthetic enamel paint, colour black/white Ist quality., Cutting Plier 160mm long, Leather Hand Gloves/Knitted industrial, Ring Spanner of 16mm x 17mm, 14 x 16mm, Crowbar hexagonal 1200mm long x 40mm, Plumb bob steel, Bucket steel 15 ltr capacity (as per, Plastic water tank 500 ltrs Make - Sintex, Water level pipe 30 Mtr, Brick Hammer 250 Gms with handle, Hack saw Blade double side, Welding Rod, Cutting rod for making holes, HDPE Sheet 5' x 8', Plastic Measuring tape 30 Mtr, Steel Measuring tape 5 Mtr, Wooden Gurmala 6"x3", Steel Pan Mortar of 18"dia (As per, Showel GS with wooden handle, Phawarah with wooden handle (As per, Digital Vernier Caliper)
>
>
2018/08/16
<issue_start>username_0: So Taleb has two heuristics to generally describe data distributions. One is Mediocristan, which basically means things that are on a Gaussian distribution such as height and/or weight of people.
The other is called Extremistan, which describes a more Pareto-like or fat-tailed distribution. An example is wealth distribution: 1% of people own roughly 50% of the wealth, so predictability from limited data sets is much harder or even impossible. This is because you can add a single sample to your data set whose consequences are so large that it breaks the model, or whose effect is so large that it cancels out any of the benefits from prior accurate predictions. In fact, this is how he claims to have made money in the stock market: everyone else was using bad Gaussian-distribution models to predict the market, which would work for a short period of time, but when things went wrong, they went really wrong, causing net losses overall.
I found this video of Taleb being asked about AI. His claim is that A.I. doesn't work (as well) for things that fall into extremistan.
Is he right? Will some things just be inherently unpredictable even with A.I.?
Here is the video I am referring to <https://youtu.be/B2-QCv-hChY?t=43m08s><issue_comment>username_1: Yes and no!
There's no inherent reason that machine learning systems can't deal with extreme events. As a simple version, you can learn the parameters of a [Weibull distribution](https://en.wikipedia.org/wiki/Weibull_distribution), or another extreme value model, from data.
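As a concrete (hedged) sketch of that idea — my own toy example, not from any particular library — here is the classic Hill estimator recovering the tail index of a synthetic Pareto sample; the constants are arbitrary choices for the illustration:

```python
import math
import random

def hill_estimator(data, k):
    """Hill estimator of the tail index, from the k largest order statistics."""
    xs = sorted(data, reverse=True)
    logs = [math.log(xs[i] / xs[k]) for i in range(k)]
    return k / sum(logs)

random.seed(0)
# Pareto(alpha=2) samples via inverse-CDF sampling: x = u**(-1/alpha)
sample = [random.random() ** (-1 / 2.0) for _ in range(100_000)]
alpha_hat = hill_estimator(sample, k=1_000)  # should come out close to 2
```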
The bigger issue is with known-unknowns vs. unknown-unknowns. If you *know* that rare events are possible (as with, say, earthquake prediction), you can incorporate that knowledge into the models you develop, and you'll get something that works as well or better than humans in that domain. If you *don't* know that rare events are possible (as with, say, a stock market crash produced by correlated housing defaults), then your model will reflect that as well.
I tend to think Taleb is being a bit unfair here: AI can't handle these kinds of events precisely because its creators (us) can't handle them! If we knew they were possible, then we could handle them pretty well, and AI could too.
Upvotes: 3 <issue_comment>username_2: Yes, Taleb is right. There are practical and mathematical limits that even an AI, no matter how powerful, cannot overcome. Taleb's point is that some events are inherently unpredictable. The amount of data that we would need to correctly guess their distribution is so massive that we could never build a machine large enough, running for long enough, to provide accurate predictions:
[](https://i.stack.imgur.com/gAzQ5.png)
Quote (and above image) from Taleb book on [Statistical Consequences of Fat Tails](https://arxiv.org/pdf/2001.10488.pdf) (emphasis mine)
>
> Once we leave the yellow zone, where the law of large numbers (LLN) largely
> works, and the central limit theorem (CLT) eventually ends up working, then we encounter convergence problems.
>
>
>
>
> Further up, in the top segment, there is no mean. We call it the Fuhgetaboudit. **If you see something in that category, you go home and you don’t talk about it.**
>
>
>
>
> The traditional statisticians approach to thick tails has been to claim to assume a
> different distribution but keep doing business as usual, using same metrics, tests,
> and statements of significance. Once we leave the yellow zone, for which statistical
> techniques were designed (even then), things no longer work as planned. The
> next section presents a dozen issues, almost all terminal.
>
>
>
Moreover, consider that not only you need a huge amount of data, but also that the interesting data-points for extremely rare events happen, as their name indicates, extremely rarely. So you might have to wait a literal eternity to ever have enough data to train the AI to make a prediction.
That said, even though AI cannot escape the mathematical boundaries that make prediction of extreme events impossible, it might be able to learn to do what Taleb recommends we do: when dealing with extreme events, avoid prediction, and instead come up with approaches that allow you to survive regardless of any lack of predictive power.
Upvotes: 0
2018/08/16
<issue_start>username_0: I'm making a Connect Four game where my engine uses Minimax with Alpha-Beta pruning to search. Since Alpha-Beta pruning is much more effective when it looks at the best moves first (since then it can prune branches of poor moves), I'm trying to come up with a set of heuristics that can rank moves from best to worst. These heuristics obviously aren't guaranteed to always work, but my goal is that they'll *often* allow my engine to look at the best moves first. An example of such heuristics would be as follows:
* Closeness of a move to the centre column of the board - weight 3.
* How many pieces surround a move - weight 2.
* How close, vertically, a move is to the bottom of the board - weight 1.
* etc
However, I have no idea what the best set of weight values are for each attribute of a move. The weights I listed above are just my estimates, and can obviously be improved. I can think of two ways of improving them:
1) Evolution. I can let my engine think while my heuristics try to guess which move will be chosen as best by the engine, and I'll see the success score of my heuristics (something like x% guessed correctly). Then, I'll make a pseudo-random change/mutation to the heuristics (by randomly adjusting one of the weight values by a certain amount), and see how the heuristics do then. If it guesses better, then that will be my new set of heuristics. Note that when my engine thinks, it considers thousands of different positions in its calculations, so there will be enough data to average out how good my heuristics are at prediction.
2) Generate thousands of different heuristics with different weight values from the start. Then, let them all try to guess which move my engine will favor when it thinks. The set of heuristics that scores best should be kept.
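To make Strategy #1 concrete, the mutate-and-keep loop I have in mind might look like this sketch (the scoring function here is only a stand-in; in reality it would be the percentage of the engine's chosen moves that the heuristic guesses correctly):

```python
import random

def mutate(weights, sigma=0.5):
    """Randomly adjust one of the weight values by a small amount."""
    i = random.randrange(len(weights))
    child = list(weights)
    child[i] += random.gauss(0.0, sigma)
    return child

def hill_climb(weights, score, generations=100):
    """Keep each mutation that predicts the engine's choices at least as well."""
    best, best_score = weights, score(weights)
    for _ in range(generations):
        child = mutate(best)
        child_score = score(child)
        if child_score >= best_score:
            best, best_score = child, child_score
    return best

# Stand-in scorer: pretend the ideal weights are [3, 2, 1] (centre, surround, lowness)
random.seed(0)
ideal = [3.0, 2.0, 1.0]
score = lambda w: -sum((a - b) ** 2 for a, b in zip(w, ideal))
tuned = hill_climb([1.0, 1.0, 1.0], score)
```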
I'm not sure which strategy is better here. Strategy #1 (evolution) seems like it could take a long time to run, since every time I let my engine think it takes about 1 second. This means testing each new pseudo-random mutation will take a second. Meanwhile, Strategy #2 seems faster, but I could be missing out on a great set of heuristics if I myself didn't include them.<issue_comment>username_1: Hmmm, I see some issues that are actually present in both of the approaches you propose.
It is important to note that the depth level that your Minimax search process manages to reach, and therefore also the speed with which it can traverse the tree, is extremely important for the algorithm's performance. Therefore, when evaluating how good or bad a particular heuristic function for move ordering is, **it is not only important to look at how well it ordered moves; it is also important to take into account the runtime overhead of the heuristic function call**. If your heuristic functions manages to sort well, but is so computationally expensive that you can't search as deep in the tree, it's often not really worth it. Neither of the solutions you propose are able to take this into account.
Another issue is that it's not trivial to measure what ordering is the "best". A heuristic that has the highest accuracy for the position of the best move only is not necessarily the best heuristic. For example, a heuristic that always places the best move in the second position ($0\%$ accuracy because it's in the wrong position, should be first position) might be better than a heuristic that places the best move in the first position $50\%$ of the time ($50\%$ accuracy), and places the best move last in the other $50\%$ of cases.
---
I would be more inclined to evaluate the performance of different heuristic functions by setting up tournaments where different versions of your AI (same search algorithm, **same processing time constraints per turn**, different heuristic function) play against each other, and **measuring the win percentage**.
This set up can also be done with two variants analogous to what you proposed; you can exhaustively put all the heuristic functions you can come up with against each other in tournaments, or you can let an evolutionary algorithm sequentially generate populations of hypothesis-heuristic-functions, and run a tournament with each population. Generally, **I would lean towards the evolutionary approach**, since we expect it to search the same search space of hypotheses (heuristic functions), but we expect it to do so in a more clever / efficient manner than an exhaustive search. Of course, if you happen to have a ridiculous amount of hardware available (e.g., if you're Google), you might be able to perform the complete exhaustive search at once in parallel.
---
Note that there are also ways to do fairly decent move ordering without heuristic functions like the ones you suggested.
For example, you likely should be using **iterative deepening**; this is a variant of your search algorithm where you first only perform a search with a depth limit $d = 1$, then repeat the complete search process with a depth limit $d = 2$, then again with a limit $d = 3$, etc., until processing time runs out.
Once you have completed such a search process for a depth limit $d$, and move on to the subsequent search process with a limit of $d + 1$, you can order the moves **in the root node** according to your evaluations from the previous search process (with depth limit $d$). Yes, here you would only have move ordering in the root node, and nowhere else, but this is by far the most influential / important place in the tree to do move ordering. Move ordering becomes less and less important as you move further away from the root.
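Sketched in code, with `alpha_beta` standing in for a real depth-limited search that returns a score for each root move (the names are illustrative, not from any particular engine):

```python
import time

def iterative_deepening(state, moves, alpha_beta, time_limit):
    """Search at depth 1, 2, 3, ... until time runs out, reordering the root
    moves by the previous iteration's scores so the next, deeper pass prunes more."""
    deadline = time.monotonic() + time_limit
    ordered = list(moves)
    depth = 1
    while time.monotonic() < deadline:
        scored = sorted(((alpha_beta(state, m, depth), m) for m in ordered),
                        key=lambda sm: sm[0], reverse=True)
        ordered = [m for _, m in scored]   # best-first order for the next depth
        depth += 1
    return ordered[0]                      # best move of the deepest completed pass

# Toy check with a dummy evaluator that just scores a move by its own value
best = iterative_deepening(None, [0, 1, 2, 3], lambda s, m, d: m, time_limit=0.05)
```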
If you're using a **transposition table** (TT), it is also common to store the "best move" found for every state in your TT. If, later on, you run into a state that already exists in your TT (which will be very often if you're using iterative deepening), and if you cannot directly take the stored value but have to actually do a search (for instance, because your depth limit increased due to iterative deepening), you can search the "best move" stored in the TT first. This is very light move ordering in that you only put one move at the front and don't order the rest, but it can still be effective.
Upvotes: 3 [selected_answer]<issue_comment>username_2: With regards to random vs evolutionary algorithm, an evolutionary algorithm will almost always be superior. Imagine the space of all possible heuristics. An evolutionary algorithm moves through it 'intelligently' i.e. it somewhat follows the gradient of the space and should converge to a local optimum. A random algorithm will not be able to achieve this.
With regards to the time taken, surely it would be the same for each one to evaluate X heuristics?
Upvotes: 0 <issue_comment>username_3: There is a third strategy: study what heuristics a human player uses.
Upvotes: 0
2018/08/17
<issue_start>username_0: A lot of questions on this site seem to be asking "can I use X to solve Y?", where X is usually a deep neural network, and Y is often something already addressed by other, less well-known areas of AI.
I have some ideas about this, but am inspired by questions like [this one](https://bicycles.stackexchange.com/q/1195) where a fairly wide range of views are expressed, and each answer focuses on just one possible problem domain.
There are some related questions on this stack already, but they are not the same. [This question](https://ai.stackexchange.com/q/240/2444) specifically asks what genetic algorithms are good for, whereas I am more interested in having an inventory of problems mapped to possible techniques. [This question](https://ai.stackexchange.com/q/60/2444) asks what possible barriers are to AI with a focus on machine learning approaches, but I am interested in what we *can* do without using deep neural nets, rather than what is difficult in general.
A good answer will be supported with citations to the academic literature, and a brief description of both the problem and the main approaches that are used.
Finally, [this question](https://ai.stackexchange.com/q/2999/2444) asks what AI can do to solve problems related to climate change. I'm not interested in the ability to address specific application domains. Instead, I want to see a catalog of abstract problems (e.g. having an agent learn to navigate in a new environment; reasoning strategically about how others might act; interpreting emotions), mapped to useful techniques for those problems. That is, "solving chess" isn't a problem, but "determining how to optimally play turn-based games without randomness" is.<issue_comment>username_1: I was hoping to see more answers here, but I'll get us started with some examples:
***Combinatorial Search Problems***: If your problem can be phrased as movement through a [combinatorial graph](https://en.wikipedia.org/wiki/Combinatorics#Graph_theory), you don't need a neural network. In particular, your problem should have discrete states, a clear set of actions that are possible in each state, a clear definition of where we start, and a clear definition of what the goal state looks like. The most effective general purpose technique is [iterative deepening search](https://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search). If you have an idea about which moves might be more effective, or better, a function that estimates how far each state is from the goal, you may be able to build a heuristic function and use [A\* search](https://en.wikipedia.org/wiki/A*_search_algorithm) instead. Common applications for these techniques include pathfinding in video games (or directions in other applications), AI planning, and Automated Theorem Proving.
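As a minimal sketch of iterative deepening on an explicit toy graph (real applications usually generate states implicitly, from a successor function over a much larger space):

```python
def iddfs(start, goal, neighbors, max_depth=20):
    """Iterative deepening DFS: repeated depth-limited searches with a growing
    limit; finds a shortest path with the memory footprint of plain DFS."""
    def dls(state, depth, path):
        if state == goal:
            return path
        if depth == 0:
            return None
        for nxt in neighbors(state):
            if nxt not in path:                     # skip cycles on this path
                found = dls(nxt, depth - 1, path + [nxt])
                if found is not None:
                    return found
        return None

    for limit in range(max_depth + 1):
        result = dls(start, limit, [start])
        if result is not None:
            return result
    return None

# Toy graph: iddfs finds the shorter of the two A-to-E paths
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': ['E'], 'E': []}
path = iddfs('A', 'E', lambda s: graph[s])  # → ['A', 'C', 'E']
```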
I'll add some more topics later, but I suspect others have expertise to share here. Let's see some more ideas!
Upvotes: 3 <issue_comment>username_2: A nice example [Markov Decision Processes](https://en.wikipedia.org/wiki/Markov_decision_process), which can be solved by classic [reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning) techniques like [Q learning](https://en.wikipedia.org/wiki/Q-learning).
A Markov Decision Process consists of
1. A set of *discrete states* (or continuous states that have been discretized)
2. A set of possible *actions* that can be taken in each state.
3. A set of *transition probabilities* that describe how an agent stochastically moves from its current state to the next, based on the agent's actions.
4. A *reward function* quantitatively describing how nice it is to be in each state.
5. A *discounting factor* that describes how much worse it is to receive a reward in the future than today.
Very small MDPs can be directly, exactly, solved, using techniques like [value iteration](https://artint.info/html/ArtInt_227.html), but the computational cost for these approaches grows extremely fast.
Reinforcement Learning (RL) was developed as a machine learning approach for MDPs. There is a loop: the agent gets the state of the environment, chooses an action, executes this action on the environment, and gets back a reward and the new state of the environment, and so on... You want the agent to maximize the cumulative reward over time.
The basic concept of Q-learning doesn't use ANNs. In Q-learning, you build a state-action matrix, called the Q matrix. Thus, you must discretize the states of your environment and the actions available to your agent. The coefficient $Q\_{ij}$ is then the expected cumulative reward when you perform action $j$ in state $i$. In basic Q-learning, you explore and build this matrix, and it should converge and give an "optimal rule of action" for your agent.
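Here is a minimal sketch of tabular Q-learning, with no ANN involved, on a toy chain environment (the environment, learning rate, and episode count are illustrative assumptions):

```python
import random

# Toy deterministic chain: states 0..4, actions move left/right, reward 1 at state 4.
n_states, actions = 5, ["left", "right"]

def step(state, action):
    next_state = min(state + 1, n_states - 1) if action == "right" else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

# The Q matrix: one entry per (state, action) pair, initialized to zero.
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
for episode in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy exploration: occasionally try a random action
        a = random.choice(actions) if random.random() < epsilon else max(actions, key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        # Q-learning update: move Q[s, a] toward the bootstrapped target
        target = r + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# The learned greedy policy should always move right, toward the reward.
print([max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)])
```

After training, reading the greedy action out of the Q matrix gives the "rule of action" described above: in every non-terminal state the agent prefers moving right.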
However, the situation is often too complex, and you often want a non-discretized space of states or actions. This is where Deep Q-learning comes in: the Q matrix is replaced by an ANN.
You can find a nice QL tutorial [here](http://adventuresinmachinelearning.com/reinforcement-learning-tutorial-python-keras/) (normal and deep).
And a lecture about QL [here](https://www.youtube.com/watch?v=lvoHnicueoE).
Keep in mind that only ANNs perform well in complex situations, so you'll always see examples with ANNs, even if the basic theory doesn't require ANNs.
Upvotes: 2 <issue_comment>username_2: Image Segmentation with Unsupervised Learning
---------------------------------------------
Deep Learning is now widely used for image classification and segmentation. However, for segmentation, some classical algorithms are still really effective; for example, they can be useful in the development of self-driving cars.
### K-means for image segmentation
When you identify the pixels of an RGB image with vectors in $\mathbb{R}^3$, you can run the [classic k-means algorithm](https://en.wikipedia.org/wiki/K-means_clustering) to distinguish objects. Furthermore, you can do superpixel segmentation by adding to all pixel vectors two components corresponding to their coordinates in the image (so they will be vectors in $\mathbb{R}^5$). You can then run the k-means algorithm again to segment your image into superpixels. You can read more about this in [SLIC Superpixels Compared to State-of-the-Art Superpixel Methods](https://www.researchgate.net/publication/225069465_SLIC_Superpixels_Compared_to_State-of-the-Art_Superpixel_Methods) (2012) by Achanta et al.
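A minimal NumPy sketch of this idea, using a synthetic two-colour "image" instead of a real photograph (the image, the number of clusters, and the fixed initial centroids are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 20x20 RGB "image": left half dark, right half bright, plus noise.
image = np.concatenate(
    [rng.normal(0.2, 0.05, (20, 10, 3)), rng.normal(0.8, 0.05, (20, 10, 3))], axis=1
)
pixels = image.reshape(-1, 3)  # each pixel is a vector in R^3

# Two initial centroids, deliberately placed at opposite corners of the RGB cube.
centroids = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
for _ in range(10):  # Lloyd's algorithm: assign pixels, then recompute centroids
    labels = np.argmin(((pixels[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    centroids = np.array([pixels[labels == c].mean(axis=0) for c in range(2)])

segmentation = labels.reshape(20, 20)  # one cluster id per pixel
# For superpixels, append the (x, y) coordinates to each pixel vector (R^5) first.
```

On this toy input the two clusters recover the dark and bright halves exactly; on a real photo, as in the seagull example below, more clusters are usually needed.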
**Example**
Below is an example of the segmentation of a picture of a seagull on a roof. On the left, we have the original image. In the middle, 3 clusters. On the right, 12 clusters. While the algorithm easily distinguishes the roof from the sky, the seagull is still unclear even with 12 centroids.
[](https://i.stack.imgur.com/8rVjS.jpg?s=128)
[](https://i.stack.imgur.com/v6Iy1.jpg?s=128)
[](https://i.stack.imgur.com/NtsF5.jpg?s=128)
### Similarity graph and normalized cut
The main idea is to build a graph of similarities between pixels and then to cut the graph into subgraphs. First, you need to define a similarity between pixels, for example, one based on colour: $d(p\_1, p\_2) = \exp(-\sum{(p\_{1,i} - p\_{2,i})^2}), i \in (r, g, b)$. Then, build the graph over the whole image, and divide it iteratively, using the [Normalized Cut algorithm](https://en.wikipedia.org/wiki/Segmentation-based_object_categorization).
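A small sketch of how the weight matrix of such a similarity graph could be computed from this formula (the six random "pixels" are an assumption for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
pixels = rng.random((6, 3))  # six pixels, (r, g, b) components in [0, 1]

# W[i, j] = exp(-sum over (r, g, b) of (p_i - p_j)^2): similar colours give weights near 1.
diff = pixels[:, None, :] - pixels[None, :, :]
W = np.exp(-(diff ** 2).sum(axis=-1))
```

Here `W` plays the role of the weighted adjacency matrix of the similarity graph that the Normalized Cut algorithm then partitions; for a full image this matrix is huge, which is why implementations typically keep it sparse by only connecting nearby pixels.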
### Morphological gradients
Here, image segmentation is done by computing the [morphological gradient](https://en.wikipedia.org/wiki/Morphological_gradient) of the image. It is the difference between the dilation and the erosion of the input image. Erosion and dilation are equivalent to passing min() and max() filters over all subwindows of a given size.
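A minimal NumPy sketch of this idea, implementing the 3x3 min/max filters directly and taking their difference (the toy grayscale image, a bright square on a dark background, is an assumption for the example):

```python
import numpy as np

def local_extreme(img, fn):
    """Apply a 3x3 min() or max() filter by stacking shifted copies (edge-padded)."""
    padded = np.pad(img, 1, mode="edge")
    windows = [padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)]
    return fn(np.stack(windows), axis=0)

def morphological_gradient(img):
    # dilation (3x3 max filter) minus erosion (3x3 min filter)
    return local_extreme(img, np.max) - local_extreme(img, np.min)

# Toy grayscale image: a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
grad = morphological_gradient(img)
```

The gradient is zero in flat regions (inside the square and in the background) and high along the square's boundary, which is exactly the outline effect visible in the seagull example below.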
An example, still with the seagull. (Morphological gradient computed on a grayscaled input). The seagull appears more clearly.
[](https://i.stack.imgur.com/I9kte.jpg?s=256)
[](https://i.stack.imgur.com/l5CKx.jpg?s=256)
Upvotes: 2 |
2018/08/17 | 2,823 | 10,979 | <issue_start>username_0: I recently [came across this function](https://ai.stackexchange.com/a/7582/1671):
$$\sum\_{t = 0}^{\infty} \gamma^t R\_t.$$
It's elegant and looks to be useful in the type of deterministic, perfect-information, finite models I'm working with.
However, it occurs to me that using $\gamma^t$ in this manner might be seen as somewhat arbitrary.
Specifically, the objective is to discount per the added uncertainty/variance of "temporal distance" between the present gamestate and any potential gamestate being evaluated, but that variance would seem to be a function of the branching factors present in a given state, and the sum of the branching factors leading up to the evaluated state.
* Are there any defined discount-factors based on the number of branching factors for a given, evaluated node, or the number of branches in the nodes leading to it?
If not, I'd welcome thoughts on how this might be applied.
An initial thought is that I might divide 1 by the number of branches and add that value to the goodness of a given state, which is a technique I'm using for heuristic tie-breaking with no look-ahead, but that's a "value-add" as opposed to a discount.
---
For context, this is for a form of [partisan Sudoku](https://www.reddit.com/r/abstractgames/comments/8khl96/formal_definition_of_mnpgames/), where an expressed position $p\_x$ (value, coordinates) typically removes some number of potential positions $p$ from the gameboard. *(Without the addition of an element displacement mechanic, the number of branches can never increase.)*
On a $(3^2)^2$ Sudoku, the first $p\_x$ removes $30$ out of $729$ potential positions $p$, including itself.
With each $p\_x$, the number of branches diminishes until the game collapses into a tractable state, allowing for perfect play in endgames. [Even there, a discounting function may have some utility because outcomes are sets of ratios. Where the macro metric is territorial (controlled regions at the end of play), the most meaningful metric may ultimately be "efficiency" *(loosely, "points\_expended to regions\_controlled")*, which acknowledges a benefit to expending the least amount of points $p\_x$, even in a tractable endgame where the ratio of controlled regions cannot be altered. Additionally, zugzwangs are possible in the endgame, and in that case reversing the discount to maximize branches may have utility.]
$(3^2)^2 = 3x3(3x3) = "9x9"$ but the exponent is preferred so as not to restrict the number of dimensions.<issue_comment>username_1: First, an important note on any form of discounting: **adding a discount factor can change what the optimal policy is**. The optimal policy when a discount factor is present can be different from the optimal policy in the case where a discount factor is absent. **This means that "artificially" adding a discount factor is harmful if we expect to be capable of learning / finding an optimal policy**. Generally, we don't expect to be capable of doing that though, except for the case where we have an infinite amount of processing time (which is never in practice). In that other answer which you linked to in your question, I describe that it can still be **useful, that it may help to find a good (not optimal, just good) policy more quickly**, but it does not come "for free".
---
I'm not aware of any research evaluating ideas like the one in your question. I am not 100% sure why, I suspect it could be an interesting idea in some situations, but would have to be investigated carefully due to my point above; if not evaluated properly, it could also unexpectedly be harmful.
One thing to note is that the use of discounting factors $\gamma < 1$ is extremely common (basically ubiquitous) in Reinforcement Learning (RL) literature, but rather rare in literature on tree search algorithms like MCTS (though not non-existent; for example, it's used in the [original UCT paper from 2006](http://ggp.stanford.edu/readings/uct.pdf)). For the concept of "branching factors", we have the opposite; in RL literature it is *very common* to consistently have the same action space regardless of states ("constant branching factor"), whereas this is *very uncommon* in literature on tree search algorithms. So, the combination of discount factors + branching factors is actually somewhat rare in existing literature (which of course doesn't mean that the idea couldn't work or be relevant, it just might explain why the idea doesn't appear to have been properly investigated yet).
**One important concern I do have with your idea** is that it seems like it could be somewhat of an "anti-heuristic" in some situations (with which I mean, a heuristic that is detrimental to performance). In many games, it is advantageous to be in game states where you have many available moves (a large branching factor), this can mean that you are in a strong position. Consider, for example, chess, where a player who is in a convincingly winning position likely has more moves available than their opponent. I suspect your idea, when applied to chess, would simply promote an aggressive playing style where both players capture as many pieces as possible in an effort to quickly reduce the branching factors across the entire tree.
---
>
> If not, I'd welcome thoughts on how this might be applied.
>
>
> *(I might divide 1 by the number of branches and add that value to the goodness of a given state, but that's a "value-add" as opposed to a discount.)*
>
>
>
Such an additive change would be more closely related to the idea of **reward shaping**, rather than discounting (which is again a thing that, if not done carefully, can significantly alter the task that you're optimizing for). Intuitively I also suspect it might not do anything at all since you'd always be adding the same constant value regardless of which move you took (your parent state will always have had the same branching factor). I might be missing out on some details here, but I think you'd have to have a multiplicative effect on your regular observed rewards.
I suppose one example that could be worth a try would be something like maximizing the following return;
$$\sum\_{t = 0}^{T} \gamma^t \beta^{b(S\_{t}) - 1} R\_{t + 1},$$
where:
* $0 \leq \gamma \leq 1$ is the normal time-based discount factor (can set it to $1$ if you like)
* $0 \leq \beta \leq 1$ is a new branching-factor-based discount factor
* $b(S\_t)$ is the branching factor (the number of available moves) in state $S\_t$
* $R\_{t + 1}$ is the immediate reward for transitioning from state $S\_t$ to $S\_{t + 1}$
I put the $-1$ in the power of $\beta$ because I suspect you wouldn't want to do any discounting in situations where only one move is available. Intuitively, I do suspect this $\beta$ would require very careful tuning though. It is already quite common to choose $\gamma = 0.9$ or $\gamma = 0.99$, with $\beta$ you may want to stay even closer to $1$.
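A small sketch of how this return could be computed for a finished episode (the trajectory, rewards, and branching factors in the example are made-up numbers):

```python
def branching_discounted_return(rewards, branching_factors, gamma=1.0, beta=0.95):
    """Sum of gamma^t * beta^(b(S_t) - 1) * R_{t+1} over a finished episode.

    rewards[t] is R_{t+1}; branching_factors[t] is b(S_t), the number of
    moves available in the state where the t-th move was chosen.
    """
    return sum(
        (gamma ** t) * (beta ** (b - 1)) * r
        for t, (r, b) in enumerate(zip(rewards, branching_factors))
    )

# A 3-step episode: rewards 1, 0, 2 obtained from states with 4, 2, 1 moves.
g = branching_discounted_return([1.0, 0.0, 2.0], [4, 2, 1], gamma=0.9, beta=0.95)
```

Note that with $\beta = 1$ this reduces to the ordinary time-discounted return, which makes it easy to compare the two objectives in experiments.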
Upvotes: 2 [selected_answer]<issue_comment>username_2: *[Thanks to username_1 for helping to explain and unpack.](https://ai.stackexchange.com/a/7626/1671) I'm still in the early stages of approaching the function, so any thoughts on my current line of thinking would be appreciated.*
-----------------------------------------
-----------------------------------------
There are two conditions that need to be separated: the number of branches leading to a given node (raw probability) and the number of branches leading from a given node (outdegree, which I like to think of as "chaotictivity";)
Raw probability here is the indegree of all nodes leading to the evaluated node.
Aggregate indegree seems appropriate for a basic time-discounting function because it reflects the unqualified probability of any given future state or position.
My thought is that I can use it to "normalize" the reward distributions. An expressed position $p\_{x\_1}$ yields an aggregate of $30R$ over 20 potential positions $p$; $p\_{x\_2}$ yields an aggregate of $15R$ over 10 $p$: $$\frac{30}{20} = \frac{15}{10}$$
This "squeezes" the R sum for a given ply into a single number.
The fractions below represent: $\frac{choice}{choices}$
$$\left \{\frac{1}{4} \right \}$$
$$\left \{ \frac{1}{2} | \frac{1}{2} | \frac{1}{2} |\frac{1}{2} \right \}$$
$$\left \{ \frac{1}{1} | \frac{1}{1} \right \} | \left \{ \frac{1}{1} | \frac{1}{1} \right \} | \left \{ \frac{1}{1} | \frac{1}{1} \right \} | \left \{ \frac{1}{1} | \frac{1}{1} \right \}$$
The $\beta$ for any given node on the final ply would be: $$\frac{1}{4} \* \frac{1}{2} = \frac{1}{8} = .125$$
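The $\beta$ computation above, a product of one-over-choices terms along the path to a node, can be sketched with exact fractions (the branching factors in the example are the ones from the plies shown above):

```python
from fractions import Fraction

def node_beta(branching_along_path):
    """beta of a node: product of 1/(number of choices) at each ancestor ply."""
    beta = Fraction(1, 1)
    for branches in branching_along_path:
        beta *= Fraction(1, branches)
    return beta

beta = node_beta([4, 2])  # the root offers 4 choices, the next ply offers 2
print(beta, float(beta))  # 1/8 0.125
```

Using `fractions.Fraction` keeps the integer factors exact until a decimal value is actually needed.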
Fractions are attractive because integer arithmetic may be used until the $\beta$ needs to be expressed to a given precision, and the distinct cardinalities of the factors are maintained for ancillary evaluation.
This function should allow the automata to do things like make the choice with the greatest certainty of the least maximal potential downside.
(If greater uncertainty is desirable, the automata can pursue more chaotic nodes, where aggregate least maximal downside is equivalent. In that "personalities" can be desirable in automata that play games with humans, there might be a maximax "gambler" persona that seeks chaos;)
The function can also be used to evaluate individual target nodes, and modified by any number of additional functions for "tuning".
I'm thinking about how I might use the degree of variance between the most probable and least probable node for a given ply. If the least probable node is $\frac{1}{20}$, and most probable node for a given ply is $\frac{1}{5} = \frac{4}{20}$, the variance is $\frac{3}{20} = .15$
This is initially designed for partisan sudoku. I'm thinking it is worthwhile to utilize the native $R$, which is an integer value between $\{-m^n, .., 0, .., m^n\}$ for an $(m^n)^n$ sudoku grid, as it segregates gains and losses.
-------------------------------
-------------------------------
EXPERIMENT DESIGN
Partisan sudoku is an attractive model for gauging relative strength of automata because outcomes are a set of ratios.
In the 2-player game where the number of regions in the sudoku is even, the game is perfectly symmetrical and neither player has a material, positional, or turn number advantage. Because the game is intractable on higher order grids, perfectly symmetrical games will not necessarily result in perfect ties because optimality of a given position is only presumed. (The second player can mirror the starting player for a perfect tie, where all outcome ratios are equal, or employ symmetry breaking if it perceives an opponent's position as less optimal than alternate positions.)
In 2-player games with an odd number of regions, matched sets of games may be employed with the automata alternating as starting player.
My initial plan is to evaluate individual heuristics, where one agent employs the branch discount $\beta$, and one agent does not.
Upvotes: 0 |
2018/08/18 | 3,695 | 15,017 | <issue_start>username_0: When we talk about artificial intelligence, human intelligence, or any other form of intelligence, what do we mean by the term **intelligence** in a general sense? What would you call intelligent and what not? In other words, how do we define the term **intelligence** in the most general possible way?<issue_comment>username_1: Intelligence is the ability to weave together various concepts and associations into a meaningful whole; filtering, adding and rejecting appropriately various ideas from personal knowledge and experience. Then effectively reflecting these ideas back to a questioner to affirm understanding and comprehension, allowing a conversation to proceed effectively towards a mutually beneficial conclusion.
Upvotes: 0 <issue_comment>username_2: I'm going to preface this answer by noting that persons much smarter than myself have treated this subject in some detail. That said, as far as I can discern:
>
> When we talk about intelligence we're referring to problem solving strength in relation to a problem, relative to the strength of other intelligences.
>
>
>
This is a somewhat game-theoretic conception, related to [rationality](https://en.wikipedia.org/wiki/Bounded_rationality) and the concept of the [rational agent](https://en.wikipedia.org/wiki/Intelligent_agent). Regarding intelligence in this manner may be unavoidable. Specifically, we could define intelligence as the ability to understand a problem or solution or abstract concepts, but we can't validate that understanding without testing it. *(For instance, I might believe I grasp a mathematical technique, but the only way to determine if that belief is real or illusory is to utilize that technique and evaluate the results.)*
The reason games like Chess and Go have been used as milestones, aside from longstanding human interest in the games, is that they provide models with simple, fully definable parameters, and, in the case of Go at least, have complexity akin to nature, by which I mean [unsolvable](https://en.wikipedia.org/wiki/Solved_game)/[intractable](https://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability). (Compare to strength at Tic-Tac-Toe, which is trivially solved.)
However, we should consider [a point made in this concise answer to a question involving the Turing Test](https://ai.stackexchange.com/a/7595/1671):
>
> "...is [intelligence] defined purely by behaviour in an environment, or by the mechanisms that arrive at that behaviour?"
>
>
>
This is important because [Google just gave control over data center cooling to an AI](https://www.technologyreview.com/s/611902/google-just-gave-control-over-data-center-cooling-to-an-ai/). Here it is clearly the mechanism itself that demonstrates utility, but if we call that mechanism intelligent, for intelligence to have meaning, we still have to contend with "intelligent how?" (In what way is it intelligent?) If we want to know "how intelligent?" (its degree of utility) we still have to evaluate its performance in relation to the performance of other mechanisms.
(In the case of the automata controlling the air conditioning at Google, we can say that it is more intelligent than the prior control system, and by how much.)
Because we're starting to talk about more "generalized intelligence", defined here as mechanisms that can be applied to a set of problems, *(I include minimax as a form of "axiomatic intelligence" and machine learning as a form "adaptive intelligence")*, it may be worthwhile to expand and clarify the definition:
>
> Intelligence is the problem solving strength of a mechanism in relation to a problem or a set of problems, relative to the strength of other mechanisms.
>
>
>
or, if we wanted to be pithy:
>
> [Intelligence is as intelligence does](https://en.wiktionary.org/wiki/stupid_is_as_stupid_does) (and how well.)
>
>
>
Upvotes: 4 [selected_answer]<issue_comment>username_3: This is an important question for AI – maybe the most important of all – for the research field of Artificial *Intelligence*. I mean if AI is science, then its experiments will be empirically testable. There has to be a way to decide pass or fail. So what are the tests for intelligence? Before you even design a test, you need a clear idea of what intelligence amounts to, otherwise how could you design a competent test for it?
Sure, I'm part of the research and development project known as Building Watertight Submarines, and sure, I'm totally confident my submarine is watertight, but I have no idea how to test whether it is or not because I don't know what "watertight" means.
This whole idea is absurd. But ask AI what "intelligence" means. The answers you get, on analysis, are almost the same as the submarine example.
**Base Answer - Behavior**
The word (idea, concept) "Intelligence" is usually defined by AI in terms of behavior. I.e. the Turing test approach. A machine is intelligent if it behaves in a way that, were a human to behave in that same way, the human would be said to be performing an action that required human intelligence.
*Problem 1*: player pianos are intelligent. Playing a Scott Joplin tune obviously requires intelligence in a human.
*Problem 2*. If a machine passes the test, it only shows that the machine is "intelligent" for the tested behaviors. What about untested behaviors? This is actually a life-and-death problem today with self-driving vehicle AI control systems. The AI systems are acceptably good at driving a car (which obviously requires human intelligence) in specific environments, e.g. freeways with well-marked lanes, no tight corners, and a median barrier separating the two directions. But the systems go disastrously wrong in "edge cases" – unusual situations.
*Problem 3*. Who would put their child on a school bus driven by a robot that had passed the Turing test for driving school buses? What about a storm when a live power line falls across the road? Or a twister in the distance is coming this way? What about a thousand other untested possibilities? A responsible parent would want to know (a) what are the principles of the internal processes and structures of human intelligence, and (b) that the digital bus driver had adequately similar internal processes and structures – i.e., not behavior but the right *inner* elements, the right inner causation.
**Desired answer – inner principles**
I would want to know that the machine was running the right inner processes and that it was running these processes (algorithms) on the right inner (memory) structures. Problem is, no one seems to know what the right inner processes and structures of human intelligence are. (A huge problem to be sure – but one that hasn't held AI back – or self-driving system developers - one bit.) The implication of this is that what AI ought to be doing now is working out what are the inner processes and structures of human intelligence. But it's not doing this – rather, it's commercializing its flawed technology.
**Elements of a definition – 1. Generalization**
We do know some things about human intelligence. Some tests really do test whether a machine has certain properties of the human mind. One of these properties is generalization. In [his 1950 paper](https://academic.oup.com/mind/article/LIX/236/433/986238#164226550), Turing, as a sort of joke, gave a really good example of conversational generalization: (The witness is the machine.)
>
> **Interrogator**: In the first line of your sonnet which reads 'Shall I
> compare thee to a summer's day', would not 'a spring day' do as
> well or better?
>
>
> **Witness**: It wouldn't scan.
>
>
> **Interrogator**: How about 'a winter's day' That would scan all right.
>
>
> **Witness**: Yes, but nobody wants to be compared to a winter's day.
>
>
> **Interrogator**: Would you say <NAME> reminded you of Christmas?
>
>
> **Witness**: In a way.
>
>
> **Interrogator**: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.
>
>
> **Witness**: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.
>
>
>
Current AI has nothing that comes even remotely near being able to generalize like this. Failure to generalize is regarded as perhaps the greatest failing of current AI. The ability to generalize would be one part of an adequate definition of "intelligence". But what generalization amounts to would need to be explicated.
The problem of generalization, also, is behind several of the severe philosophical objections to AI theory, including the [frame problem](https://plato.stanford.edu/entries/frame-problem), the [problem of common-sense knowledge](https://plato.stanford.edu/entries/logic-ai/#cs), and the problem of combinatorial explosion.
**Elements of a definition – 2. Perception**
Sensory perception is fairly obviously fundamental to human learning and intelligence. Data (in some form) is emitted by the human senses then processed by the central system. In the computer, binary values exit the digital sensor and travel to the machine. However, nothing in the values themselves indicates what was sensed. Yet the only thing the computer gets is the binary values. How could the machine ever come to know what is sensed? (The [classic Chinese room argument](https://plato.stanford.edu/entries/chinese-room/) problem.)
So another element of human-like intelligence is the ability to perceive in a human-like way. What "human-like way" means here is that the machine processes sensory input using the same principles that apply in human perception. The problem is that no one seems to know how a semantics (knowledge) can be built from the data emitted by digital sensors (or organic senses). But still, human-like perception needs to be an element of an adequate definition of "intelligence".
Once AI gets these two issues sorted out – generalization and perception – then it will probably, *hopefully*, be well on the way to realizing its original goal of almost 70 years past – building a machine with (or that could acquire) a human-like general intelligence. And maybe the principles of generalization and the principles of perception are one and the same. And maybe there is actually only one principle. It shouldn't be assumed that the answers are complex. Sometimes the hardest things to understand are the most simple.
So the question "What do we mean when we say "intelligence"? is really important to AI. And the conclusion is that AI ought to replace its current behavioral definition of "intelligence" with one that includes the human elements of generalization and perception. And then get on and try to work out the operating principles, or principle, of both of these.
Upvotes: 2 <issue_comment>username_4: The most general definition of the term intelligence that is both terse and exact is this.
>
> The collection of behavioral features resident in some entity where the entity sustainably succeeds in specific pursuits while avoiding specific losses in a particular range of environmental conditions.
>
>
>
These are examples of failures in exhibiting intelligence per the above definition, demonstrating the importance of each phrase.
* Some behavioral features that are intelligent but an overall system behavior that is not. For example, a rocket that reaches an altitude but cannot enter orbit or a turtle that can retract its head but cannot catch a bug.
* The intelligence is scattered across entities such that each individual entity does not exhibit intelligence, such as a single bee or ant.
* Intelligence is exhibited momentarily but disintegrates over time, not adapting to changing conditions, or is not sufficiently reliable to fill a practical role.
* The entity can attain a goal but such success is nullified by loss accumulated during their attainment.
* The entity can avert loss, but cannot reliably succeed in the pursuit of its objectives.
* An entity that can adapt to any environmental condition and act intelligently in every one does not exist. Human intelligence is limited to specific scenarios: shock and confusion result when it is overburdened, and super-intelligence is, as of this writing, conjecture without empirical evidence or theoretical proof.
Notice four things in this definition.
* Optimality is not required. Only better than random behavior is required.
* Although the entity tested for intelligence may interface with its environment only through data sets and test metrics, these are its environment.
* Time is necessarily involved. In a simple case, an artificial network exhibits intelligence only in its ability to exhibit behavior that was previously learned to be sufficient. Such can only retain intelligence by adjusting its training or in an environment where adaptation to new patterns is not required.
* Cognition is not required but cognition certainly augments the range of objectives that can be reliably pursued and the ability of the entity to detect danger and more proactively avert loss.
Upvotes: 0 <issue_comment>username_5: Over the years, many people have attempted to define intelligence, so there are many definitions of intelligence, but most of them are not formalized. For a big collection of definitions, see the paper [A Collection of Definitions of Intelligence](https://arxiv.org/pdf/0706.3639.pdf) (2007) by <NAME> and <NAME>.
In an attempt to formally define intelligence, so that it comprises all forms of intelligence, in the paper [Universal Intelligence: A Definition of Machine Intelligence](https://arxiv.org/pdf/0712.3329.pdf) (2007), the same Legg and Hutter, after having researched the previously given definitions of intelligence, define intelligence as follows
>
> **Intelligence measures an [agent](https://ai.stackexchange.com/a/12995/2444)'s ability to achieve goals in a wide range of environments**
>
>
>
This definition apparently favors systems that are able to solve more tasks (i.e. [AGIs](http://www.scholarpedia.org/article/Artificial_General_Intelligence)) than systems that are only able to solve a specific task (i.e. [narrow AIs](https://en.wikipedia.org/wiki/Weak_AI)), but, according to Legg and Hutter, it should summarise the main points of the previously given definitions of intelligence, so it should be a reasonable and quite general definition of intelligence. Moreover, properties associated with intelligence, like the ability to learn, should be emergent, i.e. in order to achieve goals in a wide range of environments you also need the ability to learn.
In my blog post [On the definition of intelligence](https://username_5.github.io/blogging/2020/05/20/on-the-definition-of-intelligence/), I also talk about this definition, but I suggest that you read the mentioned papers if you are interested in all details. [This video](https://www.youtube.com/watch?v=F2bQ5TSB-cE) by <NAME> could also be useful and interesting.
Upvotes: 2 |
2018/08/20 | 766 | 3,638 | <issue_start>username_0: I think I've seen the expressions "stationary data", "stationary dynamics" and "stationary policy", among others, in the context of reinforcement learning. What does it mean? I think stationary policy means that the policy does not depend on time, and only on state. But isn't that a unnecessary distinction? If the policy depends on time and not only on the state, then strictly speaking time should also be part of the state.<issue_comment>username_1: You are right: a stationary policy is independent of time. It is basically a mapping from states to actions (or probability distributions over actions). Regardless of the point in time in which the agent observes the state $s$ it will select an action $a$ (or select a probability $\pi(a \vert s)$ for every action $a$).
Upvotes: 2 <issue_comment>username_2: A *stationary policy* is a policy that does not change. Although strictly that is a time-dependent issue, that is not what the distinction refers to in reinforcement learning. It generally means that the policy is not being updated by a learning algorithm.
If you are working with a stationary policy in reinforcement learning (RL), typically that is because you are trying to learn its value function. Many RL techniques - including Monte Carlo, Temporal Difference, Dynamic Programming - can be used to evaluate a given policy, as well as used to search for a better or optimal policy.
*Stationary dynamics* refers to the environment, and is an assumption that the rules of the environment do not change over time. The rules of the environment are often represented as an MDP model, which consists of all the state transition probabilities and reward distributions. Reinforcement learning algorithms that work *online* can usually cope and adjust policies to match non-stationary environments, provided the changes do not happen too often, or enough learning/exploring time is allowed between more radical changes. Most RL algorithms have at least some online component, it is also important to keep exploring non-optimal actions in environments with this trait (in order to spot when they may become optimal).
*Stationary data* is not a RL-specific term, but also relates to the need for an online algorithm, or at least plans for discarding older data and re-training existing models over time. You might have non-stationary data in any ML, including supervised learning - prediction problems that work with data about people and their behaviour often have this issue as population norms change over timescales of months and years.
Upvotes: 4 [selected_answer]<issue_comment>username_3: **There are two kinds of problem**
Stationary and non-stationary
-----------------------------
Stationary problems are those whose reward values are static and do not change,
while non-stationary problems are those whose reward values change with time.
Upvotes: 0 <issue_comment>username_4: A stationary policy is one that does not depend on time, meaning that the agent will take the same decision whenever certain conditions are met. This stationary policy may be probabilistic, which implies that the probability of choosing each action remains the same. It may take different decisions, but the probabilities remain the same.
A stationary environment refers to a static model of the system. The model comprises a reward function and transition probabilities. So, in a stationary environment, the reward function and transition probabilities remain constant, or the changes are slow enough that the agent has enough training time to learn the changes made in the environment.
Upvotes: 2 |
2018/08/20 | 722 | 3,273 | <issue_start>username_0: I have the following question about You Only Look Once (YOLO) algorithm, for object detection.
I have to develop a neural network to recognize web components in web applications - for example, login forms, text boxes, and so on. In this context, I have to consider that the position of the objects on the page may vary, for example, when you scroll up or down.
The question is, would YOLO be able to detect objects in "different" positions? Would the changes affect the recognition precision? In other words, how to achieve translation invariance? Also, what about partial occlusions?
My guess is that it depends on the relevance of the examples in the dataset: if enough translated / partially occluded examples are present, it should work fine.
If possible, I would appreciate papers or references on this matter.
(PS: if anyone knows about a labeled dataset for this task, I would really be grateful if you let me know.)<issue_comment>username_1: As you said, a CNN would be able to detect objects in different positions if the dataset contains enough examples of such cases, though the network is able to generalize and should be able to detect objects in slightly changed positions and orientations.
The term "translation invariance" does not mean that translating an object in the image would yield the same output for this object, but that translating the whole image would yield the same result. So the relative position of object IS important, modern CNN's takes decisions on the whole image (with strong local cues, of course).
To maximize the ability of your CNN to detect multiple orientations, you can train with data augmentation that rotates the images.
The same reasoning can be applied to partial occlusions: if there are enough samples with occlusions in the training set, the network should be able to detect those cases. The network's ability to generalize should also help a little when occlusions are small, so it may still be able to detect the object.
Some papers tried different experiments to demonstrate robustness to occlusion and translation, for instance by looking at the network activations when artificially occluding a portion of the image with a gray rectangle, though I do not have a paper name in mind.
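Both ideas (augmenting with translations and probing robustness with a gray occluding rectangle) can be sketched on a toy 2D "image". This is illustrative pseudo-data, not a real augmentation pipeline:

```python
def translate(img, dx, dy, fill=0):
    """Shift a 2D image (list of rows) by (dx, dy), padding with `fill`."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = img[y][x]
    return out

def occlude(img, x0, y0, bw, bh, gray=128):
    """Cover a rectangle with a gray block to probe occlusion robustness."""
    out = [row[:] for row in img]
    for y in range(y0, min(y0 + bh, len(img))):
        for x in range(x0, min(x0 + bw, len(img[0]))):
            out[y][x] = gray
    return out

img = [[1, 2], [3, 4]]
print(translate(img, 1, 0))      # [[0, 1], [0, 3]]
print(occlude(img, 0, 0, 1, 1))  # [[128, 2], [3, 4]]
```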
Upvotes: 0 <issue_comment>username_2: As far as I know, the YOLO algorithm splits the whole picture into many small grid cells and performs classification and boundary detection at once for every cell, so the location of the object does not matter.
Upvotes: 1 <issue_comment>username_3: This is a good question, because unlike Faster R-CNN's RPN, YOLO's "proposals" arise from a fully connected layer instead of a convolutional one.
The parameters at each position of the "7x7 feature map" in the 7x7x30 YOLO detection layer are not shared, which implies an image should be translated to present each of its objects to each of the 7x7 positions to ensure translation invariance.
This is related to its extensive requirement for data augmentation, including translations, scaling, exposure and saturation, whereas the R-CNN family of algorithms just uses random horizontal flips.
This was modified in [Darknet-19](https://arxiv.org/pdf/1612.08242.pdf), in which the last layer is convolutional when the network is modified for detection.
Upvotes: 0 |
2018/08/22 | 2,428 | 9,355 | <issue_start>username_0: So suppose that you have a real estate appraisal problem. You have some structured data, and some images exterior of home, bedrooms, kitchen, etc. The number of pictures taken is variable per observational unit, i.e. the house.
I understand the basics of combining an image processing neural net with tabular data for a single image. You chop off the final layer and feed in the embeddings of the image to your final model.
How would one deal with variable number of images? Where your unit of observation can have between zero and infinity images (theoretically no upper bound on number of images in observation)?<issue_comment>username_1: I can think of 4 options:
1. One option is to divide the data so that each data point only has one picture but we have multiple data points per one real estate with structured data duplicated. Then calculate the forecasts and average the resulting predicted price between data points belonging to one real estate. Here we probably assume that the quality of pictures is more important than what is being shot.
2. The other option is to divide the pictures to categories: bedroom1, bathroom, bedroom2, kitchen and so on. Then for the missing pictures use a black square. This way you will be able to have multiple pictures in one data point.
3. The third option would be to have places for the maximum amount of pictures and fill only the first pictures that are available. And the other pictures fill with black squares/Nans.
4. The best option would probably be combining option number two and three. So you have categories and maximum number of pictures per category. So you will have four categories for bathroom1 if the maximum is 4 pictures of the first bathroom:
Bathroom1\_1, Bathroom1\_2, Bathroom1\_3, Bathroom1\_4, Bedroom1\_1, Bedroom1\_2, Bedroom1\_3, Bedroom2\_1, Bedroom2\_2 ... and so on.
Gradient boosting algorithms are very good at working with missing values, so LightGBM and XGBoost as the final stage could give good results.
So in the end you can check all the variants and even do ensembles from the variants that give the best results.
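As an illustration of option 3 above (a fixed number of slots, padded with black placeholders), here is a toy sketch; the embedding vectors are hypothetical outputs of some upstream image network:

```python
MAX_IMAGES = 4
EMB_DIM = 3
BLACK = [0.0] * EMB_DIM  # placeholder embedding standing in for a "black square"

def pad_images(embeddings):
    """Truncate/pad a variable-length list of image embeddings to MAX_IMAGES
    slots, then flatten into one fixed-length feature vector."""
    slots = embeddings[:MAX_IMAGES] + [BLACK] * (MAX_IMAGES - len(embeddings))
    return [v for emb in slots for v in emb]

house_a = [[0.2, 0.5, 0.1]]                   # one photo
house_b = [[0.9, 0.1, 0.3], [0.4, 0.4, 0.2]]  # two photos

fa, fb = pad_images(house_a), pad_images(house_b)
# Every house now yields the same feature-vector length,
# regardless of how many photos it has.
assert len(fa) == len(fb) == MAX_IMAGES * EMB_DIM
```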
Upvotes: 0 <issue_comment>username_2: We can generalize both problem and solution by removing the specifics of housing.
**Representing Forward Propagation**
We have a function $f$ we wish to obtain via the training of an artificial network that produces a scalar result $s$, the sole dependent variable and the generalization of market price.
$s = f(s\_1, s\_2, ..., s\_k, c\_1, c\_2, ..., c\_v)$
The independent scalar variables $s\_1$ through $s\_k$ are the generalization of a constant number $k$ of property features from the tax authority, assessor's office, or inspection document. The question calls this *structured data*, however it is questionable whether $k$ is truly effectively constant. In normal practice, some of $s\_i$ will be unassigned. Since the question overlooks the additional complexity of missing scalars, so will this answer.
The independent cube variables $c\_1$ through $c\_v$ are the generalizations of a variable number $v$ of property images from Google photography, real estate agents, buyers, sellers, and other potential sources. The dimensions of each cube are horizontal and vertical positions and pixel structure element number.
It is unlikely that the resolution values for each cube are uniform between samples in real life, which the question did not mention, so this answer will overlook that complexity and focus on the variability of $v$, the quantity of cubes representing images for a given example. Since $v$ cannot meaningfully be either negative or infinite, we can assume $v$ to be a non-negative integer.
**Terminology**
The observational unit should not be considered the house or the property but an image of it, which may be a member of a camera location and orientation category relative to the elements of the property. Each image capture is an observation, distinct in both Bergsonian and clock time. Each item under evaluation is an *example* from the sample of all items, in this case, all properties in the region for which prediction is attempted.
**Design of a Solution**
Each of the $v$ cubes demonstrate zero or more additional features of the item being evaluated, real properties in the question's specific case.
It may be reasonable to assume that such features may either positively or negatively affect the example's label corresponding to the result $s$, but not both positively and negatively affect it. If that is the case, we can aggregate the features across the set of cubes for each example under the reasonable assumption that there are no points of inflection. Such may be reasonable because, for instance, a feature regarding the uniformity of paint coverage, lawn care, or roofing material may have no inflection point. Such aspects of the property cannot be too uniform. That makes the substitution straightforward.
A reasonably versatile way to generalize the aggregation of directionally consistent functions is to use a substitution, which may be what the question meant by *feed in the embeddings of the image to your final model*.
$s = f = f'\big(s\_1, s\_2, ..., s\_k, v, h\big(v, \sum\_{j = 1}^v g(c\_j)\big)\big)$
Note the elements of this substitution.
* $h$ is a vector function that normalizes the distribution of each feature found in the feature vector before using it as a set of inputs to $f'$.
* $g$ is a generalization of the cube, with the input (independent variable) being a cube representing the image and with the output (dependent variable) being a vector of features extracted.
* $v$, the number of cubes (visual observations) is fed along with extracted features into the function $f'$, which can be realized through the convergence of a multilayer perceptron.
If the images are grouped in terms of the location of the camera, then this principle can be applied iteratively, where $o$ is the number of distinct categories of camera orientation. In this case, the pairs $(v\_z, h\_z)$ represent the cube quantity and feature aggregations for image camera location category $z$.
$s = f = f'(s\_1, s\_2, ..., s\_k, v\_1, h\_1, v\_2, h\_2, ... v\_o, h\_o)$
Given sensible models $h$ and $g$, training for prediction is straightforward.
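As a toy illustration of the substitution above (not a real ConvNet; $g$ is replaced by a trivial stand-in feature extractor), the aggregation over a variable number of images might look like:

```python
def g(image):
    """Stand-in for a ConvNet feature extractor: image -> feature vector."""
    flat = [p for row in image for p in row]
    return [sum(flat) / len(flat), max(flat)]  # 2 toy features

def aggregate_example(scalars, images):
    """Build the input vector for f': structured scalars, the image
    count v, and feature sums aggregated across all v images."""
    v = len(images)
    feats = [0.0, 0.0]
    for img in images:
        for i, x in enumerate(g(img)):
            feats[i] += x
    return scalars + [float(v)] + feats

x = aggregate_example([3.0, 120.5], [[[1, 2], [3, 4]], [[0, 0], [0, 8]]])
print(x)  # [3.0, 120.5, 2.0, 4.5, 12.0]
```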
Image feature extraction can be realized through ConvNet approaches such as OverFeat [1], AlexNet [2], CaffeNet [3], GoogLeNet [4], VGG 16 [5] or PatreoNet [6]. Tuning such models produces $g$.
The nature of the function $h$ may be homogeneous or heterogeneous across dimensions. Each component of the feature vector arising from extraction can have applied to it a function such as any of these or others, where $q$ is the feature index and $p\_{qi}$ is learning parameter $i$.
1. $h\_q(x) = x$
2. $h\_q(x) = \large{x^{p\_{q1} + p\_{q2} v}}$
3. $h\_q(x) = \log (x + p\_{q1} v + p\_{q2})$
4. $h\_q(x) = \large{\epsilon^x}$
5. Others
Scalar function 1 is best when the designer wishes the convergence during the training of $f'$ to occur in such a way that normalization is accomplished within the net. It is a good choice for features whose frequency of occurrence and magnitude across the entire set of images for a given example item are roughly proportional to the resulting value of that item.
Function 2 presents flexibility in normalization curvature with respect to the number of cubes. Function 3 presents attenuation of the frequency of feature occurrence. Function 4 presents compounding of feature effect with recurrence in the images of the same example.
The key is then the selection of how to deal with the substitution in the training in terms of procedure and wiring of corrective signaling. Procedurally, there are three options.
* Train both the ConvNet function $g$ and the multilayer perceptron function $f'$ together, extending the applicable principles of back-propagation and gradient descent.
* Extract features first, tuning the ConvNet corresponding to $g$ prior to training the network corresponding to $f'$. The advantage of this approach is manual control over and interim evaluation of feature extraction.
* Use something similar to the mini-batch approach to find a balance between the above two extremes.
———
**Footnotes**
[1] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Overfeat: Integrated recognition, localization and detection using convolutional networks, arXiv preprint arXiv:1312.6229v4
[2] <NAME>, <NAME>, <NAME>, Imagenet classification with deep convolutional neural networks, in: Neural Information Processing Systems, 2012, pp. 1106–1114
[3] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Caffe: Convolutional architecture for fast feature embedding, arXiv preprint arXiv:1408.5093
[4] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Going deeper with convolutions, arXiv preprint arXiv:1409.4842
[5] <NAME>, <NAME>, Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556
[6] <NAME>, <NAME>, <NAME>, Improving spatial feature representation from aerial scenes by using convolutional networks, in: Graphics, Patterns and Images (SIBGRAPI), 2015 28th SIBGRAPI Conference on, IEEE, 2015, pp. 289–296.
Upvotes: 2 [selected_answer] |
2018/08/22 | 876 | 2,794 | <issue_start>username_0: Is there a machine learning system that is able to "understand" mathematical problems given in a textual description, such as
>
> A big cat needs 4 days to catch all the mice and a small cat needs 12 days. How many days need both, if they catch mice together?
>
>
>
?<issue_comment>username_1: There was a lot of work on this topic at UT Austin, which has now migrated to the [Allen Institute](https://allenai.org/aristo/).
There is no off-the-shelf software that will answer your question (if there was, DARPA would stop funding its development!), but you can read about the latest development in a number of recent papers.
[This paper](http://ai2-website.s3.amazonaws.com/publications/diagram_understanding_in_geometry_questions.pdf) (Seo et al. EMNLP 2015) discusses the techniques that are used to interpret diagrams that accompany geometry problems, while [this one](http://ai2-website.s3.amazonaws.com/publications/Arithmetic_Word_Problems.pdf) (Hosseini et al. EMNLP 2014) talks about how to automatically parse verbs to interpret the meaning of a question. The [2015 TACL paper](http://ai2-website.s3.amazonaws.com/publications/algebra-TACL2015.pdf) (Koncel-Kedziorski et al. 2015) completes this by discussing how to extract the relevant equations from a word problem. Once you have the equations, know what question is being asked, and can interpret any diagrams, you can do most high school math problems.
However, I don't think this is yet a fully reliable system. It is one part of a larger, long-running effort to create a program that can achieve higher education certifications in many subjects. You can see many projects related to this at the Allen Institute's [website](https://allenai.org/).
Upvotes: 2 <issue_comment>username_2: Well, this is a relatively new problem, closely tied to Question Answering. One of the recent systems is EUCLID, which can answer those types of questions on the public Dolphin algebra question set by using a tree transducer cascade approach.
This paper details the proposed model [<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2017). Beyond sentential semantic parsing: Tackling the math sat with a cascade of tree transducers. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 795-804)](https://pdfs.semanticscholar.org/c22a/240d1087603664826e9aab809273ed9bff15.pdf?_ga=2.52187753.1130679049.1530133172-566539276.1446829155&_gac=1.6555398.1527028246.EAIaIQobChMI9YuywK-a2wIVlcBkCh0WKw_kEAAYASAAEgKwgPD_BwE).
In the same sense, SEMEVAL has released a task related to Math QA, you can see the related bibliography and referenced works [semeval 2019 task 10 internal](https://github.com/allenai/semeval-2019-task-10-internal).
Upvotes: 1 |
2018/08/22 | 1,843 | 6,619 | <issue_start>username_0: I was reading the book [Reinforcement Learning: An Introduction by <NAME> and <NAME>](http://incompleteideas.net/book/bookdraft2017nov5.pdf) (complete draft, November 5, 2017).
On page 271, the pseudo-code for the episodic Monte-Carlo Policy-Gradient Method is presented. Looking at this pseudo-code, I can't understand why it seems that the discount rate appears 2 times, once in the update step and a second time inside the return. [See the figure below]
[](https://i.stack.imgur.com/dxDnP.png)
It seems that the returns for the steps after step 1 are just truncations of the return of the first step. Also, if you look just one page above in the book, you find an equation with just 1 discount rate (the one inside the return).
Why then does the pseudo-code seem to be different? My guess is that I am misunderstanding something:
$$
{\mathbf{\theta}}\_{t+1} ~\dot{=}~\mathbf{\theta}\_t + \alpha G\_t \frac{{\nabla}\_{\mathbf{\theta}} \pi \left(A\_t \middle| S\_t, \mathbf{\theta}\_{t} \right)}{\pi \left(A\_t \middle| S\_t, \mathbf{\theta}\_{t} \right)}.
\tag{13.6}
$$<issue_comment>username_1: The discount factor does appear twice, and this is correct.
This is because the function you are trying to maximise in REINFORCE for an episodic problem (by taking the gradient) is the expected return from a given (distribution of) start state:
$$J(\theta) = \mathbb{E}\_{\pi(\theta)}[G\_t|S\_t = s\_0, t=0]$$
Therefore, during the episode, when you sample the returns $G\_1$, $G\_2$ etc, these will be less relevant to the problem you are solving, reduced by the discount factor a second time as you noted. At the extreme with an episodic problem and $\gamma = 0$ then REINFORCE will only find an optimal policy for the first action.
In continuing problems, you would use different formulations for $J(\theta)$, and these do not lead to the extra factor of $\gamma^t$.
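To make the two appearances of the discount concrete, here is a toy sketch of the episodic REINFORCE update; the gradient term is a placeholder scalar standing in for $\nabla_\theta \log \pi(A_t|S_t, \theta_t)$, and the point is only where each $\gamma$ enters:

```python
def reinforce_updates(rewards, grad_log_pi, alpha=0.1, gamma=0.9):
    """Return the per-step parameter increments for one episode.
    gamma appears inside the return G_t AND as the extra gamma^t factor."""
    T = len(rewards)
    updates = []
    for t in range(T):
        # First appearance: gamma discounts future rewards inside the return.
        G = sum(gamma ** (k - t) * rewards[k] for k in range(t, T))
        # Second appearance: the gamma^t weighting in the update itself.
        updates.append(alpha * (gamma ** t) * G * grad_log_pi[t])
    return updates

ups = reinforce_updates([1.0, 1.0, 1.0], [1.0, 1.0, 1.0])
print(ups)  # roughly [0.271, 0.171, 0.081]
```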
Upvotes: 4 [selected_answer]<issue_comment>username_2: Neil's answer already provides some intuition as to why the pseudocode (with the extra $\gamma^t$ term) is correct.
I'd just like to additionally clarify that you do not seem to be misunderstanding anything, **Equation (13.6) in the book is indeed different from the pseudocode**.
Now, I don't have the edition of the book that you mentioned right here, but I do have a later draft from March 22, 2018, and the text on this particular topic seems to be similar. In this edition:
* Near the end of page 326, it is explicitly mentioned that they'll assume $\gamma = 1$ in their proof for the Policy Gradient Theorem.
* That proof eventually leads to the same Equation (13.6) on page 329.
* Immediately below the pseudocode, on page 330, they actually briefly address the difference between the Equation and the pseudocode, saying that that difference is due to the assumption of $\gamma = 1$ in the proof.
* Right below that, in *Exercise 13.2*, they give some hints as for what you should be looking at if you'd like to derive the modified proof for the case where $\gamma < 1$.
Upvotes: 3 <issue_comment>username_3: It's a subtle issue.
If you look at the A3C algorithm in the [original paper](https://arxiv.org/abs/1602.01783) (p.4 and appendix S3 for pseudo-code), their actor-critic algorithm (the same algorithm for both episodic and continuing problems) is off by a factor of gamma relative to the actor-critic pseudo-code for episodic problems in the Sutton and Barto book (p.332 of the January 2019 edition of <http://incompleteideas.net/book/the-book.html>). The Sutton and Barto book has the extra "first" gamma as labeled in your picture. So, either the book or the A3C paper is wrong? Not really.
The key is on p. 199 of the Sutton and Barto book:
>
> If there is discounting (gamma < 1) it
> should be treated as a form of termination, which can be done simply by including
> a factor of gamma in the second term of (9.2).
>
>
>
The subtle issue is that there are two interpretations to the discounting factor gamma:
1. A multiplicative factor that puts less weight on distant future rewards.
2. A probability, 1 - gamma, that a simulated trajectory spuriously terminates at any time step. This interpretation only makes sense for episodic cases, and not continuing cases.
Literal implementations:
1. Just multiply the future rewards and related quantities (V or Q) in the future by gamma.
2. Simulate some trajectories and randomly terminate (1 - gamma) of them at each time step. Terminated trajectories give no immediate or future rewards.
The two interpretations of gamma are valid. But choosing one or the other means you are tackling a different problem. The math is slightly different and you end up with an extra gamma multiplying $G \nabla\ln\pi(a|s)$ with the second interpretation.
For example, if you are at step t=2 and gamma = 0.9, the algorithm for the second interpretation is that the policy gradient is $\gamma^2 G \nabla\ln\pi(a|s)$ or $0.81 G \nabla\ln\pi(a|s)$. This term has 19% less gradient power than the t=0 term for the simple reason that 19% of simulated trajectories have died off by t=2.
With the first interpretation of gamma, there is no such 19% decay. The policy gradient is just $G \nabla\ln\pi(a|s)$ at t=2. But gamma is still present within $G$ to discount the future rewards.
You can choose whichever interpretation of gamma, but you have to be mindful of the consequences to the algorithm. I personally prefer to stick with interpretation 1 just because it's simpler. So I use the algorithm in the A3C paper, not the Sutton and Barto book.
Your question was about the REINFORCE algorithm, but I have been discussing actor-critic. You have the exact same issue related to the two gamma interpretations and the extra gamma in REINFORCE.
Upvotes: 2 <issue_comment>username_4: I would like to follow up on @username_3's answer about the first interpretation of gamma. (I'm a new user and can't post a comment.)
Does it make sense to include $\gamma$ in the return and the state and action value functions without considering it as part of the MDP? (interpretation 2).
If one derives the policy gradient with respect to the discounted return as the objective, then the discounted state distribution arises (the one corresponding to interpretation 2), not the original one. [Proof here.](https://proceedings.neurips.cc/paper/1999/file/464d828b85b0bed98e80ade0a5c43b0f-Paper.pdf)
This seems to mean that you can't have interpretation 1 without interpretation 2.
I would be happy to discuss this further.
Upvotes: 0 |
2018/08/23 | 1,144 | 4,820 | <issue_start>username_0: I am reading [Goodfellow et al Deeplearning Book](http://www.deeplearningbook.org). I found it difficult to understand the difference between the definition of the hypothesis space and representation capacity of a model.
In [Chapter 5](http://www.deeplearningbook.org/contents/ml.html), it is written about hypothesis space:
>
> One way to control the capacity of a learning algorithm is by choosing its hypothesis space, the set of functions that the learning algorithm is allowed to select as being the solution.
>
>
>
And about representational capacity:
>
> The model specifies which family of functions the learning algorithm can choose from when varying the parameters in order to reduce a training objective. This is called the representational capacity of the model.
>
>
>
If we take the linear regression model as an example and allow our output $y$ to take polynomial inputs, I understand the hypothesis space as the ensemble of quadratic functions taking input $x$, i.e. $y = a\_0 + a\_1x + a\_2x^2$.
How is it different from the definition of the representational capacity, where parameters are $a\_0$, $a\_1$ and $a\_2$?<issue_comment>username_1: Consider a target function $f: x \mapsto f(x)$.
A hypothesis refers to an approximation of $f$. A **hypothesis space** refers to the set of possible approximations of $f$ that an algorithm can create, i.e. the set of functions the model is limited to learn. For instance, linear regression can be limited to linear functions as its hypothesis space, or it can be expanded to learn polynomials.
The **representational capacity** of a model determines its flexibility, i.e. its ability to fit a variety of functions (which functions the model is able to learn). It specifies the family of functions the learning algorithm can choose from.
Upvotes: 2 [selected_answer]<issue_comment>username_2: A **hypothesis space/class** is the **set** of functions that the learning algorithm considers when picking one function to minimize some risk/loss functional.
The **capacity** of a hypothesis space is a **number** or **bound** that quantifies the size (or richness) of the hypothesis space, i.e. the number (and type) of functions that can be represented by the hypothesis space. So a hypothesis space **has** a capacity. The two most famous measures of capacity are VC dimension and Rademacher complexity.
In other words, the hypothesis class is the *object* and the capacity is a *property* (that can be measured or quantified) of this object, but there is not a big difference between hypothesis class and its capacity, in the sense that a hypothesis class naturally defines a capacity, but two (different) hypothesis classes could have the same capacity.
Note that *representational capacity* (not *capacity*, which is common!) is not a standard term in computational learning theory, while *hypothesis space/class* is commonly used. For example, [this famous book](https://www.cs.huji.ac.il/%7Eshais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf) on machine learning and learning theory uses the term *hypothesis class* in many places, but it never uses the term *representational capacity*.
Your book's definition of *representational capacity* is *bad*, in my opinion, if *representational capacity* is supposed to be a synonym for *capacity*, given that that definition also coincides with the definition of hypothesis class, so your confusion is understandable.
Upvotes: 2 <issue_comment>username_3: A **hypothesis space** is defined as the set of functions $\mathcal H$ that can be chosen by a learning algorithm to minimize loss (in general).
$$\mathcal H = \{h\_1, h\_2,....h\_n\}$$
The hypothesis class can be finite or infinite. For example, a discrete set of shapes to encircle a certain portion of the input space is a finite hypothesis space, whereas hypothesis spaces of parametrized functions like neural nets and linear regressors are infinite.
Although the term representational capacity is not in vogue, a rough definition would be: the **representational capacity** of a model is the ability of its **hypothesis space** to approximate a complex function with 0 error; such a function can only be approximated by the **infinitely** many hypothesis spaces whose representational capacity is equal to or exceeds the representational capacity required to approximate the complex function.
The most popular measure of representational capacity is the [$\mathcal V$ $\mathcal C$ Dimension](https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_dimension) of a model. The upper bound for VC dimension ($d$) of a model is:
$$d \leq \log\_2| \mathcal H|$$ where $|\mathcal H|$ is the cardinality of the hypothesis space.
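For a finite hypothesis class this bound is easy to evaluate; for example, with a hypothetical class of 16 threshold functions:

```python
import math

# A finite hypothesis class: 16 threshold functions h_t(x) = [x >= t].
H = [lambda x, t=t: x >= t for t in range(16)]

# Upper bound on the VC dimension: d <= log2(|H|).
vc_upper_bound = math.log2(len(H))
print(vc_upper_bound)  # 4.0
```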
Upvotes: 2 |
2018/08/23 | 678 | 2,576 | <issue_start>username_0: Where can I find (more) pre-trained [language models](https://en.wikipedia.org/wiki/Language_model)? I am especially interested in **neural network-based** models for **English and German**.
I am aware only of [Language Model on One Billion Word Benchmark](https://github.com/tensorflow/models/tree/master/research/lm_1b) and [TF-LM: TensorFlow-based Language Modeling Toolkit](https://github.com/lverwimp/tf-lm).
I am surprised not to find a greater wealth of models for different frameworks and languages.<issue_comment>username_1: This will depend to some extent on what you want to do with the language models.
Some possible resources are:
TensorFlow offers 3 pre-trained language models in the [research package](https://github.com/tensorflow/models/tree/master/research).
Caffe's [ModelZoo](https://github.com/BVLC/caffe/wiki/Model-Zoo) has a single pre-trained model that does video -> captions.
Other packages like Caffe2 offer pre-trained models, but the documentation does not suggest any of them are suitable for language.
Failing this, a good approach might be to email the authors of a paper that adopts an approach you like. Some (but far from all) researchers will be happy to share their models, which you can then use as a starting point for your own.
Upvotes: 0 <issue_comment>username_2: Of course now there has been a huge development:
Huggingface published [pytorch-transformers](https://github.com/huggingface/pytorch-transformers), a library for the so successful Transformer models (BERT and its variants, GPT-2, XLNet, etc.), including many pretrained (mostly English or multilingual) models ([docs here](https://huggingface.co/pytorch-transformers/)). It also includes one German BERT model. SpaCy offers a [convenient wrapper](https://github.com/explosion/spacy-pytorch-transformers) ([blog post](https://explosion.ai/blog/spacy-pytorch-transformers)).
Update: Now, Salesforce published the English model [CTRL](https://github.com/salesforce/ctrl), which allows for use of "control codes" that influence the style, genre and content of the generated text.
For completeness, here is the old, now less relevant version of my answer:
---
Since I posed the question, I found this **pretrained German language model**:
<https://lernapparat.de/german-lm/>
It is an instance of a [3-layer "averaged stochastic gradient descent weight-dropped" LSTM](https://github.com/Bachfischer/german2vec), which was based on an implementation by [Salesforce](https://github.com/salesforce/awd-lstm-lm).
Upvotes: 2 [selected_answer] |
2018/08/23 | 1,542 | 4,994 | <issue_start>username_0: In the Trust-Region Policy Optimisation (TRPO) algorithm (and subsequently in PPO also), I do not understand the motivation behind replacing the log probability term from standard policy gradients
$$L^{PG}(\theta) = \hat{\mathbb{E}}\_t[\log \pi\_{\theta}(a\_t | s\_t)\hat{A}\_t],$$
with the importance sampling term of the policy output probability over the old policy output probability
$$L^{IS}\_{\theta\_{old}}(\theta) = \hat{\mathbb{E}}\_t \left[\frac{\pi\_{\theta}(a\_t | s\_t)}{\pi\_{\theta\_{old}}(a\_t | s\_t)}\hat{A}\_t \right]$$
Could someone please explain this step to me?
I understand once we have done this why we then need to constrain the updates within a 'trust region' (to avoid the $\pi\_{\theta\_{old}}$ increasing the gradient updates outwith the bounds in which the approximations of the gradient direction are accurate). I'm just not sure of the reasons behind including this term in the first place.<issue_comment>username_1: I am not 100% sure if the following is the only/complete story, but I'm quite confident it's at least part of the story:
In the [PPO paper](https://arxiv.org/abs/1707.06347), after describing the standard policy gradient objective $L^{PG}$, they mention the following:
>
> While it is appealing to perform multiple steps of optimization on this loss $L^{PG}$ using the same trajectory, doing so is not well-justified, and empirically it often leads to destructively large policy updates
>
>
>
This is because, as soon as you've performed one update using a trajectory generated with the previous policy, you land in an **off-policy** situation; the experience gained in that trajectory is no longer representative of your current policy, and all the estimators (like the advantage estimator) technically become incorrect.
With importance sampling, you can correct for this. This is also commonly used in multi-step off-policy value learning algorithms. Intuitively, the importance sampling term emphasizes estimates of advantage $\hat{A}\_t$ corresponding to actions $a\_t$ that have become **more likely** in the new policy relative to the old policy, and it de-emphasizes advantages corresponding to actions that have already become **less likely** in the new policy relative to the old policy.
If an action $a\_t$ in the old trajectory has already become highly unlikely since that trajectory of experience was generated, we have $\pi\_{\theta} (a\_t \vert s\_t) < \pi\_{\theta\_{\text{old}}} (a\_t \vert s\_t)$, which means that $\frac{\pi\_{\theta} (a\_t \vert s\_t)}{\pi\_{\theta\_{\text{old}}} (a\_t \vert s\_t)}$ becomes close to $0$, which means that we'll reduce the influence of that particular chunk of experience on our subsequent updates. This makes sense because, due to previous updates since the generation of that trajectory, that particular part of the trajectory has already become highly unlikely anyway, and should therefore no longer be relevant for our updates.
The ability to perform multiple updates using the same (old) trajectory anyway is useful because this increases sample-efficiency, we can re-use the same samples of experience more than once rather than using them once and then discarding them again.
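As a toy illustration of this re-weighting (all numbers here are made up, not from the paper), the ratio can be computed from stored log-probabilities:

```python
import numpy as np

# Hypothetical numbers: log-probabilities of the taken actions under the old
# policy (stored when the trajectory was collected) and under the current,
# already-updated policy.
logp_old = np.log(np.array([0.5, 0.2, 0.1]))
logp_new = np.log(np.array([0.25, 0.2, 0.3]))
advantages = np.array([1.0, -0.5, 2.0])

# Importance sampling ratio pi_new / pi_old, computed in log space.
ratios = np.exp(logp_new - logp_old)

# Advantage estimates of actions that became less likely are de-emphasized
# (ratio < 1), those of actions that became more likely are emphasized (> 1).
weighted = ratios * advantages
print(ratios, weighted)
```

Here the first action's probability halved since collection, so its advantage is weighted down by 0.5, while the third action's probability tripled, so its advantage is weighted up by 3.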
Upvotes: 3 <issue_comment>username_2: For everybody getting here from google, like me: the $\log$ might have been replaced in the loss function, but I think it is still there when taking the gradient of both functions (correct me, if I am wrong):
$$\begin{aligned}
\nabla\_{\theta} L^{P G}(\theta) &=\nabla\_{\theta} \hat{E}\_{t}\left[\log \pi\_{\theta}\left(a\_{t} \mid s\_{t}\right) \hat{A}\_{t}\right] \\
&=\hat{E}\_{t}\left[\nabla\_{\theta} \log \pi\_{\theta}\left(a\_{t} \mid s\_{t}\right) \hat{A}\_{t}\right]
\end{aligned}$$
and
$$
\begin{aligned}
\nabla\_{\theta} L^{I S}(\theta)=& \nabla\_{\theta} \hat{E}\_{t} \left[\frac{\pi\_{\theta}\left(a\_{t} \mid s\_{t}\right)}{\pi\_{\theta\_{\text {old}}}\left(a\_{t} \mid s\_{t}\right)} \hat{A}\_{t}\right] \\
&=\hat{E}\_{t} \left[\nabla\_{\theta} \frac{\pi\_{\theta}\left(a\_{t} \mid s\_{t}\right)}{\pi\_{\theta\_{\text {old}}}\left(a\_{t} \mid s\_{t}\right)} \hat{A}\_{t}\right] \\
&=\hat{E}\_{t} \left[\frac{\pi\_{\theta}\left(a\_{t} \mid s\_{t}\right)}{\pi\_{\theta\_{\text {old}}}\left(a\_{t} \mid s\_{t}\right)} \frac{\nabla\_{\theta} \pi\_{\theta}\left(a\_{t} \mid s\_{t}\right)}{\pi\_{\theta}\left(a\_{t} \mid s\_{t}\right)} \hat{A}\_{t}\right] \\
&=\hat{E}\_{t}\left[\frac{\pi\_{\theta}\left(a\_{t} \mid s\_{t}\right)}{\pi\_{\theta\_{\text {old}}}\left(a\_{t} \mid s\_{t}\right)} \nabla\_{\theta} \log \pi\_{\theta}\left(a\_{t} \mid s\_{t}\right) \hat{A}\_{t}\right]
\end{aligned}
$$
So, the $\pi\_{\theta}\left(a\_{t} \mid s\_{t}\right) $ in the PG function was replaced with $\frac{\pi\_{\theta}\left(a\_{t} \mid s\_{t}\right)}{\pi\_{\theta\_{\text {old}}}\left(a\_{t} \mid s\_{t}\right)} $ whose derivative is the same as the log of the PG function (apart from the proportionality factor).
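A quick numerical sanity check of this equality, using a made-up one-parameter "policy" $\pi = \sigma(\theta)$ and central finite differences:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

theta, theta_old, eps = 0.3, 0.1, 1e-6
pi_old = sigmoid(theta_old)  # treated as a fixed constant

# d/dtheta of the ratio pi(theta) / pi_old, via central finite differences.
grad_ratio = (sigmoid(theta + eps) - sigmoid(theta - eps)) / (2 * eps) / pi_old

# ratio * d/dtheta log pi(theta): analytically the same quantity.
grad_log_pi = (np.log(sigmoid(theta + eps)) - np.log(sigmoid(theta - eps))) / (2 * eps)
ratio_times_grad_log = sigmoid(theta) / pi_old * grad_log_pi

print(abs(grad_ratio - ratio_times_grad_log) < 1e-6)  # True
```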
Upvotes: 2 |
2018/08/23 | 426 | 1,860 | <issue_start>username_0: A linear activation function (or none at all) should only be used when the relation between input and output is linear. Why doesn't the same rule apply for other activation functions? For example, why doesn't sigmoid only work when the relation between input and output is "of sigmoid shape"?<issue_comment>username_1: It is correct to say that a sigmoid activation function would only work well as a model if the desired output is close to the sigmoid function applied to the input. This is a trivial fact that applies to a single layer perceptron. This is true for for the single layer case for any activation function, also a trivial fact.
When the layer number is between one and infinity (two or more), the theory bifurcates. The identity function becomes a special case: Any number of layers that conform to a first degree polynomial, $ax + b$, can be replaced with a single layer. The alternative case, where there are multiple layers that functionally do not conform to a first degree polynomial, $ax + b$, cannot be replaced by a single layer of some equally simple function. The complexity increases geometrically, which is the entire point of multilayer perceptrons.
Under particular constraints, the multilayer perceptron can produce a wide variety of functional behaviors that do not resemble the activation functions of the layers.
For instance, a properly trained network using sigmoid activation functions, with sufficient layer depth and sufficient massive allocation of computing resources, could theoretically approximate the topography of the Himalayas.
Upvotes: 1 <issue_comment>username_2: Because any number of linear layers can be represented by a single linear layer:
$A\_2(A\_1 x + b\_1) + b\_2 = A\_2 A\_1 x + A\_2 b\_1 + b\_2$
The same is not true if you have non-linear functions.
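A quick numerical check of this collapse, with arbitrary weights:

```python
import numpy as np

rng = np.random.default_rng(0)
A1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
A2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x = rng.normal(size=3)

# Two stacked linear (affine) layers...
two_layers = A2 @ (A1 @ x + b1) + b2

# ...equal a single linear layer with weights A2 @ A1 and bias A2 @ b1 + b2.
one_layer = (A2 @ A1) @ x + (A2 @ b1 + b2)

print(np.allclose(two_layers, one_layer))  # True
```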
Upvotes: -1 |
2018/08/24 | 761 | 3,099 | <issue_start>username_0: Recent advances in deep learning and dedicated hardware have made it possible to detect images with much better accuracy than ever. Neural networks are the gold standard for computer vision applications and are used widely in industry, for example in internet search engines and autonomous cars. In real-life problems, the image consists of regions with different objects. It is not enough to identify the picture as a whole; the elements of the picture must be identified as well.
A while ago, an alternative to the well-known sliding window algorithm was described in the literature, called Region Proposal Networks. It is basically a convolutional neural network which was extended by a region vector.
**Problem that I am trying to solve:**
In a given video frame, I want to pick some regions of interest (literally), and perform classification on those regions.
**How is it currently implemented**
1. Capture the video frame
2. Split the video frame into multiple images each representing a region of interest
3. Perform image classification (inference) on each of the images (each corresponding to a part of the frame)
4. Aggregate the results of #3
**Problem with the current approach**
Multiple inferences per frame.
**Question**
I am looking for a solution where I specify the locations of interest in a frame, and the inference task, be it object detection (or) image classification, is performed only on those regions. Can you please point me to the references which I need to study (or) use to do this.
* Classification: Given an image, say what is on it (a single thing)
* Classification+Localization: Given an image, say what is on it and draw an axis-aligned bounding box (AABB) around it
* Object detection: Given an image, draw AABBs around every object and classify those objects
* Semantic segmentation: See [Survey of semantic segmentation](https://arxiv.org/pdf/1602.06541.pdf)
* Instance segmentation: Like semantic segmentation, but if there are multiple cats then they should be recognized as different objects.
Your question seems to be about object detection. The relevant papers here are:
* [SSD: Single Shot MultiBox Detector](https://arxiv.org/pdf/1512.02325.pdf): [Code](https://martin-thoma.com/object-detection/)
* [You Only Look Once: Unified, Real-Time Object Detection](https://arxiv.org/abs/1506.02640v5) (YOLO): [Website with video](https://pjreddie.com/darknet/yolo/)
* [YOLOv3](https://arxiv.org/pdf/1804.02767.pdf)
If you actually already have the regions, then you can simply perform classification on them. When you pad / scale / crop them, you can batch-predict them.
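If the regions are given as pixel rectangles, the crop-and-batch step might look like this sketch (frame contents, sizes, and coordinates are all made up):

```python
import numpy as np

frame = np.zeros((100, 100, 3), dtype=np.uint8)  # stand-in for a video frame (H, W, C)

# Regions of interest as (top, left, height, width). Here they share one size
# so they stack directly; otherwise resize/pad each crop to a common shape first.
rois = [(0, 0, 32, 32), (10, 40, 32, 32), (60, 60, 32, 32)]

crops = [frame[t:t + h, l:l + w] for (t, l, h, w) in rois]
batch = np.stack(crops)  # shape (3, 32, 32, 3), ready for one batched inference call

print(batch.shape)  # (3, 32, 32, 3)
```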
Upvotes: 1 <issue_comment>username_2: It might be easier to approach this as an object segmentation problem, which identifies multiple objects in a given image/frame. There are lots of examples if you do a search using “object segmentation” as a keyword.
Upvotes: 0 |
2018/08/24 | 1,768 | 7,862 | <issue_start>username_0: I wrote a simple feed-forward neural network that plays tic-tac-toe:
* 9 neurons in the input layer: 1 - my sign, -1 - opponent's sign, 0 - empty;
* 9 neurons in hidden layer: value calculated using ReLU;
* 9 neurons in output layer: value calculated using softmax;
I am using an evolutionary approach: 100 individuals play against each other (all-play-all). The top 10 best are selected to mutate and reproduce into the next generation. The fitness score is calculated as: +1 for a correct move (it's possible to try to place your sign on an already occupied tile), +9 for a victory, -9 for a defeat.
What I notice is that the network's fitness keeps climbing up and falling down again. It seems that my current approach only evolves certain patterns of placing signs on the board, and once a random mutation interrupts the current pattern, a new one emerges. My network goes in circles without ever evolving an actual strategy. I suspect the solution for this would be to pit the network against a tic-tac-toe AI, but is there any way to evolve an actual strategy just by making it play against itself?
* Moves are in two dimensions
* Only three possible first moves because of symmetry (corner, edge, middle)
* Never more than seven possible moves after that
* No random game play element (such as a roll of dice)
* No secrets (such as a hand of cards)
* No way to lose with perfect play
* Only one two-part rule: (1) Must make your mark (2) in an unmarked cell
* Only one winning condition: Three in a row of your mark
The goal strategy is to get two marks such that a third will require two blocks in one opponent move. A three layer network will be able to learn that strategy. Because the game complexity is low, a mutation is always going to have a radical effect on game play. This means that the changes due to the mutation must be gradual and must stop when the strategy is found. This implies that stateful learning is best.
The problem is not related to human versus self play.
Since there is exactly one mark to make per turn, softmax doesn't make much sense for the output layer. You want binary outputs — threshold activation functions in the last layer. Use two pairs of binary output cells, one pair for each dimension.
Upvotes: 1 <issue_comment>username_2: >
> What I notice is that the network's fitness keeps climbing up and falling down again. It seems that my current approach only evolves certain patterns of placing signs on the board, and once a random mutation interrupts the current pattern, a new one emerges. My network goes in circles without ever evolving an actual strategy. I suspect the solution for this would be to pit the network against a tic-tac-toe AI, but is there any way to evolve an actual strategy just by making it play against itself?
>
>
>
The likely cause of this phenomenon is that your fitness function involves evaluating the fitness of an agent by letting it play lots of other agents (the entire population), many of which are likely very poor agents.
Because Tic-Tac-Toe is such a simple game, we know that optimal play from both sides leads to a draw. Suppose we have a population of the following three strategies:
* $\pi\_1$: an optimal player
* $\pi\_2$: a sub-optimal player
* $\pi\_3$: a different sub-optimal player
There can easily be situations where the optimal player $\pi\_1$ gets a draw against both of the sub-optimal players (if they're not **extremely** bad), because right from the get-go an optimal player will play "safe" enough such that it can still guarantee a draw against an optimal opponent, which may not be the fastest way to win against a sub-optimal player.
In the same situation, the sub-optimal player $\pi\_2$ for example may be able to consistently win against a slightly worse sub-optimal player $\pi\_3$. In this example situation, your fitness function ranks the agents $f(\pi\_2) > f(\pi\_1) > f(\pi\_3)$, which is wrong.
---
As you already suggested yourself, the most straightforward way to address this problem would be to simply evaluate the fitness of strategies by having them play against an optimal minimax-agent, rather than a population of many strategies (including poor ones).
If you really want to use only evolving, no tree search, you'll have to find a way to **fix the fitness function such that the problem described above can no longer occur**. One way you could try to do that (*not 100% sure it would work, but imagine that it might*) would be to set up some larger tournament bracket where agents progress through the brackets if they're able to beat others which they were paired up with. Getting further in the tournament would then increase the fitness of an agent. Very bad sub-optimal players should not be able to progress far in the tournament if they get beaten by other (sub-)optimal agents, but sub-optimal agents which manage to advance still shouldn't be able to get more than a draw against an optimal agent. Some things to keep in mind with this idea:
* You'll likely want to **repeat the entire tournament many times with different, randomized initial pairings in the bracket**, and compute fitness based on average (or maybe median or max) ranking across all repetitions of the tournament. This would be to filter out members of the population getting particularly lucky or unlucky with the opponents they encounter in the brackets.
* You'll have to think about which actions the agents select in such a tournament. Do they deterministically play the action given by the $\arg\max$ of the softmax outputs (in which case you could also just use the linear outputs rather than softmax outputs, since the softmax function does not change the ranking of outputs)? Or do they nondeterministically sample actions according to the softmax distribution? Sampling actions according to the softmax distribution seems attractive because it leads to more variance in game state situations encountered, and it is important to be robust and be capable of handling many different game states. On the other hand, it does introduce noise and may make an optimal agent accidentally lose. **I suppose I would lean towards deterministic play with the $\arg\max$ over the softmax outputs**. In a large tournament, there will still be sufficient variety due to encountering many different opponents.
* You'll have to put thought into handling the situation where two agents may infinitely keep drawing against each other. Which agent advances? If this happens, I think I would gradually shift from deterministically playing $\arg\max$ actions to playing according to the softmax distribution. This guarantees that someone eventually loses. Averaging results over multiple different games like this, plus averaging over multiple different repetitions of the complete tournament, should yield accurate results.
* You'll want to think about how to handle **illegal actions**. You describe including punishments for illegal actions in the fitness function. This introduces extra noise / variance in the fitness function which is already difficult enough to get right to begin with, so I wouldn't do this. I'd recommend having a **manual post-processing step in which you manually set probabilities of illegal actions being played to $0$**, and normalizing to make sure the probabilities over the remaining actions add up to $1$ again. This is also what's done (on a much larger scale) by DeepMind in their state-of-the-art Go/Chess/Shogi agents for example. If you really don't want to do this, if you really want evolved strategies to automatically learn not to ever have high outputs for illegal actions, I would recommend immediately making them lose the game if they do suggest an illegal action. That way you still have a "clean" fitness function based only on wins/draws/losses.
Upvotes: 2 |
2018/08/27 | 1,104 | 4,481 | <issue_start>username_0: I am trying to understand how it works. How do you teach it, say, to add 1 to each number it gets? I am pretty new to the subject, and I learned how it works when you teach it to identify a picture of a number. I can understand how it identifies a number, but I can't get how it would learn to perform addition. I can understand that it can identify a number or picture using the pixels, assigning weights, and then learning to measure whether a picture of a number resembles the weights assigned to each pixel. But I can't logically understand how it would learn the concept of adding one to a number. Suppose I showed it thousands of examples of 7 turning into 8, or 152 turning into 153: would it get that every number in the world has to be incremented by one? How would it get it, having no such operation as +? Since addition does not exist in its repertoire, how can it realize that it has to add one to every number? Even after seeing thousands of examples, but having no such operation as plus, I can't understand it. I could understand identifying pixels and such, but I can't get the theoretical logic behind such an operation. Can you explain the logic in layman's terms?<issue_comment>username_1: Welcome to AI.SE @bilanush.
Here's an example approach that might make things clearer. There are other ways to train a neural network to do this however.
In your earlier example with an image, you probably noticed that the network receives the image as a series of values, representing each pixel in the image. The network then learns which of a series of output neurons should be active in response to a given set of pixel values. Those output neurons, when read in an appropriate way, correspond to the correct label for the image. The difference between the set of outputs that should have been active, and the set that were active, forms the basis of the error signal that allows the network to learn.
You've probably heard that computers represent numbers with *binary digits*. So you could think of the number 16 as being: 00010000 in "8-bit binary". In 16-bit binary, this number would be 0000000000010000, and so on.
So one way of viewing your problem is a function mapping binary inputs to binary outputs (very similar to labelling a black-and-white image). For instance, the input 00010000 (16) should produce the output 00010001 (17). The input 00100011 (35) should produce the output 00100100 (36), and so on.
As before, you will have a set of output neurons. In this case, it should be as wide as the set of input neurons. As before, the error signal is the difference between the expected and actual outputs.
As to the question of how they can learn this function "without plus", in fact the individual neurons in a network perform just two operations: addition of their inputs, and a non-linear transformation of the sum. It has been proven that these are sufficient to learn any function from inputs to outputs, as long as the network contains 3 layers or more, and as long as the middle layer is wide enough, but here it should be easy to see how addition might emerge.
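To make that setup concrete, here is one way the (input bits, target bits) training pairs for the "+1" function could be generated (purely illustrative):

```python
def to_bits(n, width=8):
    """The number n as a list of binary digits, most significant bit first."""
    return [int(b) for b in format(n, f"0{width}b")]

# Each training example maps the 8-bit pattern of n to the 8-bit pattern of n + 1.
pairs = [(to_bits(n), to_bits(n + 1)) for n in range(255)]

inp, out = pairs[16]
print(inp)  # [0, 0, 0, 1, 0, 0, 0, 0]  i.e. 16
print(out)  # [0, 0, 0, 1, 0, 0, 0, 1]  i.e. 17
```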
Upvotes: 1 <issue_comment>username_2: This is what we call a regression problem. Although @John has provided a novel method, I do not think it will work, since you are decomposing the number into its minimal representation, so teaching it will be quite tough due to long-term dependencies (e.g. `0111111` will change to `1000000`), so you have to train on almost all examples, with 0 actual learning.
Let's see your problem from a different viewpoint. Why are you thinking only of integers? Your problem can be generalised to approximating this curve:
[](https://i.stack.imgur.com/95Bkk.jpg)
This is clearly an example of $w^{T} x + b$. A single input node will feed into 2 output nodes with Leaky ReLU activation, and a bias node. The bias node will be your $c$, and the 2-3 Leaky ReLUs will adjust their weights to create a straight line. Training this might be a problem (with co-adaptation between nodes), but mathematically a solution will be achieved by this neural net structure.
It is also better to train on real values, for better and finer weight adjustment (although theoretically, for a single independent variable $x$, you should need just 3-4 values to make this neural net learn, but who knows?)
NOTE: The approximation in the negative region might not be that great.
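As a sanity check of this viewpoint, an ordinary least-squares fit to a few samples of the "add one" function recovers exactly the line with slope 1 and intercept 1 (a trivial illustration, not a neural network):

```python
import numpy as np

x = np.array([-2.0, 0.0, 3.5, 10.0, 152.0])
y = x + 1.0  # "add one" is just the straight line with slope 1 and intercept 1

slope, intercept = np.polyfit(x, y, deg=1)
print(round(slope, 6), round(intercept, 6))  # 1.0 1.0
```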
Upvotes: 0 |
2018/08/27 | 473 | 1,828 | <issue_start>username_0: What are the best machine learning models that have been used to compose music? Are there some good research papers (or books) on this topic out there?
I would say, if I use a neural network, I would opt for a recurrent one, because it needs to have a concept of timing, chord progressions, and so on.
I am also wondering what the loss function would look like, and how I could give the AI as much feedback as it usually needs.
Upvotes: 1 <issue_comment>username_2: I am also new to the neural network architecture game but from what I have learned so far I think you have a few good options to choose from.
A recurrent neural network (RNN) would be a standard approach but if you're looking for something more robust you could look into a [Long Short Term Memory network](https://en.wikipedia.org/wiki/Long_short-term_memory) (LSTM). The neurons have a memory of past events and can recall that later on. It is a subset of RNN.
Perhaps you could go a little further and use a [Convolutional Neural Network](https://en.wikipedia.org/wiki/Convolutional_neural_network) (CNN). So far these type of networks have been highly successful for image recognition. You could abstract a song piece as an image. Each pixel could be a progression in time and the value of the pixel could be the actual note.
Also take a look at [this article](https://towardsdatascience.com/the-mostly-complete-chart-of-neural-networks-explained-3fb6f2367464) for a good overview of several different neural network types.
Upvotes: 0 |
2018/08/28 | 1,236 | 4,235 | <issue_start>username_0: I've seen many [Lorem Ipsum](https://en.wikipedia.org/wiki/Lorem_ipsum) generators on the web, and not only those: there are also "bacon ipsum", "space ipsum", etc. So, how do these generators generate the text? Are they powered by an AI?<issue_comment>username_1: If you wanted to generate more, I guess you could convert the string to a list of words and then randomly select as many words as you want from the list.
Using Python
```
import numpy as np
lorem = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. ".split()
number_of_words_needed = 20
new_text = []
for i in range(number_of_words_needed):
    new_text.append(lorem[np.random.randint(len(lorem))])
print(new_text)
```
>
> ['sit', 'dolor', 'elit.', 'sit', 'sit', 'sit', 'elit.', 'dolor', 'amet,', 'ipsum', 'amet,', 'ipsum', 'dolor', 'Lorem', 'Lorem', 'adipiscing', 'sit', 'elit.', 'consectetur', 'adipiscing']
>
>
>
for reference
[Source](https://www.lipsum.com/): "Contrary to popular belief, Lorem Ipsum is not simply random text. It has roots in a piece of classical Latin literature from 45 BC, making it over 2000 years old. <NAME>, a Latin professor at Hampden-Sydney College in Virginia, looked up one of the more obscure Latin words, consectetur, from a Lorem Ipsum passage, and going through the cites of the word in classical literature, discovered the undoubtable source. Lorem Ipsum comes from sections [1.10.32](http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A2007.01.0036%3Abook%3D1%3Asection%3D32) and [1.10.33](http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A2007.01.0036%3Abook%3D1%3Asection%3D33) of "de Finibus Bonorum et Malorum" (The Extremes of Good and Evil) by Cicero, written in 45 BC. This book is a treatise on the theory of ethics, very popular during the Renaissance. The first line of Lorem Ipsum, "Lorem ipsum dolor sit amet..", comes from a line in section 1.10.32.
The standard chunk of Lorem Ipsum used since the 1500s is reproduced below for those interested. Sections 1.10.32 and 1.10.33 from "de Finibus Bonorum et Malorum" by Cicero are also reproduced in their exact original form, accompanied by English versions from the 1914 translation by <NAME>."
Upvotes: 2 <issue_comment>username_2: Lorem ipsum generators don't typically use anything considered as AI. Usually they just store large pieces of text and select sections from it randomly - they are very simple. The main goal is to produce "nonsense" text that fills space but does not distract from issues of layout and design. The variations of it are usually just for fun, and like the original, are mostly simple generators which select strings of text from a core data source randomly and without using any AI techniques.
It *is* possible to build more sophisticated random text generators that work using data structures from [Natural Language Processing](https://en.wikipedia.org/wiki/Natural_language_processing) (NLP).
One popular and easy-to-code data structure is [N-grams](https://en.wikipedia.org/wiki/N-gram), which store the frequencies/probabilities of the Nth word given words 1 to N-1. E.g. a bigram structure can tell you all the possible words to come after "fish" e.g. `"fish" => ["food" => 0.2, "swims" => 0.3, "and" => 0.4, "scale" => 0.1]` To use that structure to generate text, use a random number generator to select a word based on looking up the Nth word's frequency, then shift the list of words being considered and repeat.
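A toy bigram generator along those lines (corpus and names invented for illustration):

```python
import random
from collections import defaultdict

corpus = "the fish swims and the fish eats and the cat sleeps".split()

# For each word, collect every word observed to follow it; picking uniformly
# from this list reproduces the observed bigram frequencies.
follows = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1].append(w2)

def generate(start, n_words, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(n_words - 1):
        successors = follows.get(words[-1])
        if not successors:  # dead end: this word was never seen with a successor
            break
        words.append(random.choice(successors))
    return " ".join(words)

print(generate("the", 6))
```

Every consecutive word pair in the generated string is, by construction, a bigram that occurred in the corpus, which is why the output reads as locally plausible nonsense.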
A more recent text generating NLP model is recurrent neural networks (RNNs), which have a variety of designs. Popular right now are LSTM networks, and these are capable of some quite sophisticated generation, provided they are trained with enough data for long enough. The blog [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) by <NAME> has quite a few really interesting examples of using RNNs for text generation. In practice this works similarly to n-grams: Use the RNN to suggest probabilities for next word given words so far, choose one randomly, then feed back the generated word into the RNN and repeat.
Upvotes: 4 [selected_answer] |
2018/08/29 | 1,731 | 7,144 | <issue_start>username_0: I'm coding a Proximal Policy Optimization (PPO) agent with the [Tensorforce library](https://github.com/reinforceio/tensorforce) (which is built on top of TensorFlow).
The first environment was very simple. Now, I'm diving into a more complex environment, where not all actions are available at each step.
Let's say there are 5 actions and their availability depends on an internal state (which is defined by the previous action and/or the new state/observation space):
* 2 actions (0 and 1) are always available
* 2 actions (2 and 3) are only available when the internal state is 0
* 1 action (4) is only available when the internal state is 1
Hence, there are 4 actions available when the internal state is 0 and 3 actions available when the internal state is 1.
I'm thinking of a few possibilities to implement that:
1. Change the action space at each step, depending on the internal state. I assume this is nonsense.
2. Do nothing: let the model understand that choosing an unavailable action has no impact.
3. Do *almost* nothing: impact slightly negatively the reward when the model chooses an unavailable action.
4. Help the model: by incorporating an integer into the state/observation space that informs the model what's the internal state value + bullet point 2 or 3
Are there other ways to implement this? From your experience, which one would be the best?<issue_comment>username_1: The most straightforward solution is to simply make every action "legal", but implementing a consistent, deterministic mapping from potentially illegal actions to different legal actions. Whenever the PPO implementation you are using selects an illegal action, you simply replace it with the legal action that it maps to. Your PPO algorithm can then still update itself as if the illegal action were selected (the illegal action simply becomes like... a "nickname" for the legal action instead).
For example, in the situation you describe:
>
> * 2 actions (0 and 1) are always available
> * 2 actions (2 and 3) are only available when the internal\_state == 0
> * 1 action (4) is only available when the internal\_state == 1
>
>
>
In cases where `internal_state == 0`, if action `4` was selected (an illegal action), you can always swap it out for one of the other actions and play that one instead. It doesn't really matter (theoretically) which one you pick, as long as you're consistent about it. The algorithm doesn't have to know that it picked an illegal action, whenever it picks that same illegal action in the future again in similar states it will consistently get mapped to the same legal action instead, so you just reinforce according to that behaviour.
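Sketched in code, for the state/action numbering from the question, such a fixed mapping could be as simple as the following (the particular legal targets chosen here are arbitrary):

```python
# Deterministic remapping of illegal actions to legal ones, keyed by internal state.
REMAP = {
    0: {4: 0},        # internal state 0: action 4 is illegal, play 0 instead
    1: {2: 0, 3: 1},  # internal state 1: actions 2 and 3 are illegal
}

def to_legal(action, internal_state):
    # Legal actions pass through unchanged; illegal ones are consistently remapped.
    return REMAP[internal_state].get(action, action)

print(to_legal(4, 0), to_legal(4, 1), to_legal(3, 1))  # 0 4 1
```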
---
The solution described above is very straightforward, probably the most simple to implement, but of course it... "smells" a bit "hacky". A cleaner solution would involve a step in the Network that sets the probability outputs of illegal actions to $0$, and re-normalizes the rest to sum up to $1$ again. This requires much more care to make sure that your learning updates are still performed correctly though, and is likely a lot more complex to implement on top of an existing framework like Tensorforce (if not already somehow supported in there out of the box).
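The masking step itself is simple; the part that requires care is making the learning update consistent with it. A sketch of just the masking:

```python
import numpy as np

def masked_probs(logits, legal_mask):
    """Softmax restricted to legal actions: illegal actions get probability 0,
    and the remaining probabilities are renormalized to sum to 1."""
    z = np.where(legal_mask, logits, -np.inf)
    z = z - z.max()          # numerical stability; exp(-inf) -> 0
    e = np.exp(z)
    return e / e.sum()

logits = np.array([1.0, 2.0, 0.5, 0.0, 3.0])
legal = np.array([True, True, True, True, False])  # say action 4 is illegal here
p = masked_probs(logits, legal)
print(p[4], round(p.sum(), 6))  # 0.0 1.0
```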
---
For the first "solution", I wrote above that it does not matter "theoretically" how you choose your mapping. I absolutely do expect your choices here will have an impact on learning speed in practice though. This is because, in the initial stages of your learning process, you'll likely have close-to-random action selection. If some actions "appear multiple times" in the outputs, they will have a greater probability of being selected with the initial close-to-random action selection. So, there will be an impact on your initial behaviour, which has an impact on the experience that you collect, which in turn also has an impact on what you learn.
I certainly expect it will be beneficial for performance if you can include input feature(s) for the `internal_state` variable.
If some legal actions can be identified that are somehow "semantically close" to certain illegal actions, it could also be beneficial for performance to specifically connect those "similar" actions in the "mapping" from illegal to legal actions if you choose to go with that solution. For example, if you have a "jump forwards" action that becomes illegal in states where the ceiling is very low (because you'd bump your head), it may be better to map that action to a "move forwards" action (which is still kind of similar, they're both going forwards), than it would be to map it to a "move backwards" action. This idea of "similar" actions will only be applicable to certain domains though, in some domains there may be no such similarities between actions.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Normally, the set of actions that the agent can execute does not change over time, but some actions can become impossible in different states (for example, not every move is possible in any position of the TicTacToe game).
Take a look, as an example, at this piece of code <https://github.com/haje01/gym-tictactoe/blob/master/examples/base_agent.py> :
```
ava_actions = env.available_actions()
action = agent.act(state, ava_actions)
state, reward, done, info = env.step(action)
```
Upvotes: 1 <issue_comment>username_3: >
> Change the action space at each step, depending on the internal\_state. I assume this is nonsense.
>
>
>
>
Yes, this seems like overkill and makes the problem unnecessarily complex; there are other things you can do.
>
> Do nothing : let the model understand that choosing an unavailable action has no impact.
>
>
>
While this will not harm your model in any way, the downside is that the model could take a long time to understand that some actions don't matter for some values of the state.
>
> Do -almost- nothing : impact slightly negatively the reward when the model chooses an unavailable action.
>
>
>
Same response as for the previous point, except that this could harm your model (though I'm not sure whether the effect is significant). Assume you give a reward of -100 for every illegal action it takes. Looking only at the negative rewards, we have:
* -100 when initial\_state == 0
* -200 when initial\_state == 1
By doing this, you might be implicitly favoring situations where state == 0. Plus, I don't see the point of the -100 reward anyway, since once they come to state 0 they will have to choose a value for the illegal actions too (it's not like they can ignore the values for the action and escape the -100 reward).
>
> Help the model : by incorporating an integer into the state/observation space that informs the model what's the internal\_state value + bullet point 2 or 3
>
>
>
To me, this seems like the best option. You could branch the final output layer (of your Actor-Critic agent) into 2 heads instead of 1.
Like: Input layer, fc1 (from input layer), output1(from fc1), output2(from fc1)
Based on initial\_state you can get the output from output1 or output2.
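A shape-level NumPy sketch of that branching idea, with made-up layer sizes — in practice you would build the same structure in your deep learning framework of choice:

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, HIDDEN, N_ACTIONS = 4, 8, 3

# Shared trunk and two separate output heads.
W_fc1 = rng.normal(size=(STATE_DIM, HIDDEN))
W_out = [rng.normal(size=(HIDDEN, N_ACTIONS)),   # head used when internal_state == 0
         rng.normal(size=(HIDDEN, N_ACTIONS))]   # head used when internal_state == 1

def forward(state, internal_state):
    hidden = np.tanh(state @ W_fc1)          # fc1 (from input layer)
    return hidden @ W_out[internal_state]    # output1 or output2, chosen by internal_state

logits = forward(np.ones(STATE_DIM), internal_state=1)
print(logits.shape)  # → (3,)
```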
Upvotes: 2
2018/08/30 <issue_start>username_0: How can I create an artificially intelligent aimbot for a game like [Counter-Strike Global Offensive (CS:GO)](https://en.wikipedia.org/wiki/Counter-Strike:_Global_Offensive)?
I have an initial solution (or approach) in mind. We can train an image recognition model that will recognize the head of the enemy (in the visible area of the player, so excluding the invisible area behind the player, to avoid being easily detected by VAC) and move the cursor to the position of the enemy's head and fire.
It would be far preferable to train the recognition model in real time rather than from demos. Most of the available demos are recorded at 32 tick, while the live game runs at 64 tick.
It is a very fresh idea in my mind, so I haven't actually thought a lot about it. Let's ignore issues like detection by VAC for the moment.
Is there any research work on the topic? What are the common machine learning approaches to tackle such a problem?
Later on, this idea can be expanded to a completely autonomous bot that can play the game by itself, but that is a bit too much initially.<issue_comment>username_1: There are many ways to approach this. One approach to start would be to try to describe the problem using a formalism closer to reinforcement learning.
* Output:
Aiming in any shooter-type game, as I recall, involves moving the mouse. So the output of your aimbot has two dimensions. Depending on the required accuracy, you can consider these two dimensions continuous, with a limited range, or if you consider each pixel as an integer, you might be able to discretize your action space. (I assume mouse XY coordinates should be input to the game, not increments.)
* Input:
You definitely need screen information. You can take the whole screen as input to a CNN, similar to Deep Q-Learning for Atari.
* Reward function
This might be tricky, since your rewards need to be as dense as possible; however, the only feedback you get from the game is that someone was shot.
It might be enough, but this will definitely increase your training time.
* Training data / Environment:
Your environment for training is the game itself. Using a [curriculum learning](https://ronan.collobert.com/pub/matos/2009_curriculum_icml.pdf) approach would probably make the training process more efficient.
You can also try an [imitation learning](https://arxiv.org/abs/1801.06503) approach, since I assume you are happy to provide expert training examples (in this case probably headshots in the game environment).
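As a sketch of the action-space discretization mentioned under "Output" above, absolute mouse XY coordinates can be binned into a coarse grid so that a discrete-action algorithm applies (the screen size and bin counts are made-up values):

```python
SCREEN_W, SCREEN_H = 1920, 1080   # assumed screen resolution
BINS_X, BINS_Y = 32, 18           # coarseness of the discretization

def xy_to_action(x, y):
    """Map absolute mouse coordinates to a single discrete action index."""
    bx = min(int(x / SCREEN_W * BINS_X), BINS_X - 1)
    by = min(int(y / SCREEN_H * BINS_Y), BINS_Y - 1)
    return by * BINS_X + bx

def action_to_xy(action):
    """Map a discrete action index back to the center of its screen bin."""
    by, bx = divmod(action, BINS_X)
    return ((bx + 0.5) * SCREEN_W / BINS_X, (by + 0.5) * SCREEN_H / BINS_Y)

# 32 * 18 = 576 discrete actions instead of a continuous 2D output.
action = xy_to_action(960, 540)   # aiming at the center of the screen
print(action, action_to_xy(action))
```

The trade-off is accuracy versus action-space size: finer bins mean more precise aiming but many more actions to learn about.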
You can read more about how to apply reinforcement learning for games [here](https://github.com/Unity-Technologies/ml-agents). The Unity ML-Agents Library also includes sample tracking problems and their solutions.
Upvotes: 0 <issue_comment>username_2: **Automation of Game-play**
Aimbots are indeed designed to provide assistance to the human game player when the complexity of game play escapes full cybernetic autonomy at the current state of technology. There are five basic components in any game player, DNA based or digital.
* Acquisition of the current state of the game
* Control over execution of move options
* Intercommunication with other players
* Models related to the game
* Execution engine for applying these
The models are as follows for a CS:GO aimbot.
* Model of game players
* Model of the opposing team
* Model of the game player being assisted
* Model of that player's team
* Model of the opposing team
* Model of the game state
* Model of legal game moves that transition state
* Model of objectives (winning or maintaining a top score)
* Models of game-play strategy involving the first three items in the previous list
Learning all of these is not in the scope of current deep learning strategies but not outside the scope of AI if the following problem analysis and system approaches are taken.
* Assumptions are made similar to those of Morgenstern and von Neumann in the later chapters of their *Game Theory* to mathematically treat the decisioning of game players in a minimalistic way.
* DSP, GPU, network realization hardware, cluster computing, or some other artificial network hardware acceleration is available
* Models programmed in Prolog, DRools, or some other production system and then leveraged by the execution engine in conjunction with other components such as deep learning networks, convolution processing, Markov trees, fuzzy logic, and the application of oversight functions or heuristics as needed
The two services, (a) the provision of suggestions and (b) the automation of minor tasks, may indeed represent the low hanging fruit from a software engineering perspective, but the problem analysis and system approach above may provide more.
**Objectives in CS:GO**
The CS:GO (Counter-Strike Global Offensive) game seems to have been written from a Westphalian geopolitical point of view. This is the typical western perspective, somewhat oblivious to the mindset of the true nature of asymmetric warfare [1]. This answer will focus on the creation of an aimbot for the existing models of game-play rather than a realistic simulation of geopolitical balance in this decade.
We have the objective types listed in online resources that provide a game overview, again, narrowed in authenticity by the prevailing western view of asymmetric war [1].
* Terminating players of the opposing team
* Planting a bomb toward that end (terrorists only)
* Defend hostages (terrorists only)
* Prevention of bomb casualties (counter-terrorists only)
* Rescue of hostages (counter-terrorists only)
**Ballistic Control**
The targeting of the body or head of an opponent is within the scope of what image recognition can do in conjunction with a movement model. In military applications, aeronautic devices must be propelled against air friction and the propulsion requires a largely exothermic reaction like combustion. Thus all targets have a heat signature, which can be recognized in an infrared video stream in such a way as to plot an intercept course for the ballistic weapon.
The targeting formulation for CS:GO is not as complex, and aiming and firing may be fully automated with much less software machinery. An LSTM with sufficient speed can be trained to recognize a head in subsequent frames and terminate opponents even if they are moving. A simple web search for LSTM will provide a plethora of resources to the novice intending to learn about image recognition.
**One Ambiguity**
Whether the second objective can be met is dependent on what is meant by the term, "Viewing angles," in the context of image recognition. Can the player see from perspectives other than the location of their eyes? If so, this answer can be adjusted if given a clear picture of what is meant.
**Training and Re-entrant Learning**
Training of an artificial neural net to target a head is unnecessary unless the 3D rendering of the game objects and players is distorted by a wide-angle virtual lens and trajectories and movements are curved. As mentioned, an LSTM can be used to locate a head in multiple frames and extrapolate an opposing player's trajectory.
Where deep learning may be most effective is in the training of how to interact with the player to best assist. Also, if there are other non-targeting techniques that are more discrete, those who play CS:GO well could record their interactions and those recordings can be processed in preparation for use as training data.
Certainly a re-entrant learning strategy such as reinforcement is useful for game-play especially if the make up of teams changes and players exhibit different behaviors, executing differing strategies over different networks with different latencies and through-puts, and communicating with the game clients through different peripheral devices.
[DeepMind Lab Test Bed for Reinforcement Technology](https://github.com/deepmind/lab)
**More than Suggestions**
With proper architecture, more than suggestive strategies can be provided to the player. Statistical dashboards, identification of a bomb before or after planting, and identification of hostages should be among the aimbot services provided, which might suggest a new name, such as obot for objective bot or asbot for assistive bot.
It is not certain that the aimbot interface need be integrated with dashboards or bomb or hostage identifiers. Sometimes independent bots provide a more flexible arrangement for a user. Individual bots can always use the same underlying image recognition components and models.
**Entry Points into Developing Such a System**
Read some of the work on the above concepts and download what code you can find that demonstrates it in Python or Java, install what is necessary, and develop some proficiency with the components discussed above as well as the associated theory. Don't shy away from the math, since success will require some proficiency with feedback signalling and concepts like gradient descent and back-propagation.
[Reinforcement in Games](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C10&q=reinforcement+games&btnG=)
[LSTM Head Locating](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C10&q=lstm+locating+head&btnG=)
[Playing Atari with Deep Reinforcement Learning, Mnih et al., 2013](https://arxiv.org/pdf/1312.5602.pdf)
**Phased Approach**
The following phased research and development approach is suggested.
* Learn the theory
* Practice the theory in code
* Develop the image recognition front end
* Develop the library to control a virtual player
* Develop at least one of the above models
* Create the simplest bot to use it
* Expand automation from there
---
**Footnotes**
[1] In asymmetric power struggles, there are always at least two factions within each side because didactic legitimacy seeks division. Unity is not practically possible. Each real team usually has a more religious and more secular faction, each of which has economic, philosophic, and historical justifications for their position and agenda. Also, terrorists don't seek the public detonation of bombs or retention of hostages as objective but rather as means, with the total elimination of all not fully adhered to their view of legitimacy as the sole endgame objective. Suicide or high risk bombing is considered by most of those that employ it as the poor man's nukes, so without nuclear strike capability for the counter-terrorists and their allies, the terrorism lacks the important dimension of last resort. The last resort aspect of nuclear strike is missing from the counter-terrorist side too. CS:GO may sell better by glossing over these particular characteristics of asymmetric warfare and such was left out deliberately. There may be some benefit to adding these features in from an educational and anti-propaganda point of view.
Upvotes: -1 [selected_answer]
2018/08/30 <issue_start>username_0: I am studying reinforcement learning and the variants of it. I am starting to get an understanding of how the algorithms work and how they apply to an MDP.
What I don't understand is the process of defining the states of the MDP. In most examples and tutorials, they represent something simple like a square in a grid or similar.
For more complex problems, like a robot learning to walk, etc.,
* How do you go about defining those states?
* Can you use learning or classification algorithms to "learn" those states?<issue_comment>username_1: A common early approach to modeling complex problems was *discretization*. At a basic level, this is splitting a complex and continuous space into a grid. Then you can use any of the classic RL techniques that are designed for discrete, linear spaces. However, as you might imagine, if you aren't careful, this can cause a lot of trouble!
Sutton & Barto's classic book *Reinforcement Learning* has some suggestions for other ways to go about this. One is *tile coding*, covered in section 9.5.4 of the [new, second edition](https://drive.google.com/file/d/1opPSz5AZ_kVa1uWOdOiveNiBFiEOHjkG/view). In tile coding, we generate a large number of grids, each with different grid spacing. We then overlay the grids on top of each other. This creates discrete regions of non-uniform shape, and can work well for a variety of problems.
Section 9.5 also covers a variety of other ways to encode a continuous space into a discrete MDP, including radial-basis functions, and coarse codings. Check it out!
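A minimal sketch of tile coding for a 1-D continuous state, with made-up tiling parameters: each tiling is a grid shifted by a different offset, and the state is represented by the set of tiles it activates.

```python
def tile_features(x, n_tilings=4, tile_width=1.0, low=0.0):
    """Return one active (tiling, tile) pair per tiling for a 1-D state x."""
    active = []
    for t in range(n_tilings):
        offset = t * tile_width / n_tilings       # each tiling is shifted slightly
        tile = int((x - low + offset) // tile_width)
        active.append((t, tile))
    return active

# Nearby states share most (but not all) active tiles, giving generalization.
print(tile_features(3.1))  # → [(0, 3), (1, 3), (2, 3), (3, 3)]
print(tile_features(3.3))  # → [(0, 3), (1, 3), (2, 3), (3, 4)]
```

Because nearby states overlap in most tilings, a value learned for one state partially transfers to its neighbours, while distant states share no tiles at all.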
Upvotes: 4 <issue_comment>username_2: The problem of state representation in Reinforcement Learning (RL) is similar to problems of feature representation, feature selection and feature engineering in supervised or unsupervised learning.
Literature that teaches the basics of RL tends to use very simple environments so that all states can be enumerated. This simplifies value estimates into basic rolling averages in a table, which are easier to understand and implement. Tabular learning algorithms also have reasonable theoretical guarantees of convergence, which means if you *can* simplify your problem so that it has, say, less than a few million states, then this is worth trying.
Most interesting control problems will not fit into that number of states, even if you discretise them. This is due to the "[curse of dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality)". For those problems, you will typically represent your state as a vector of different features - e.g. for a robot, various positions, angles, velocities of mechanical parts. As with supervised learning, you may want to treat these for use with a specific learning process. For instance, typically you will want them all to be numeric, and if you want to use a neural network you should also normalise them to a standard range (e.g. -1 to 1).
In addition to the above concerns which apply for other machine learning, for RL, you also need to be concerned with the [Markov Property](https://en.wikipedia.org/wiki/Markov_property) - that the state provides enough information, so that you can accurately predict expected next rewards and next states given an action, without the need for any additional information. This does not need to be perfect, small differences due to e.g. variations in air density or temperature for a wheeled robot will not usually have a large impact on its navigation, and can be ignored. Any factor which is essentially random can also be ignored whilst sticking to RL theory - it may make the agent less optimal overall, but the theory will still work.
If there are consistent unknown factors that influence the result, and could logically be deduced - maybe from the history of states or actions - but you have excluded them from the state representation, then you may have a more serious problem, and the agent may fail to learn.
It is worth noting the difference here between *observation* and *state*. An observation is some data that you can collect. E.g. you may have sensors on your robot that feed back the positions of its joints. Because the state should possess the Markov Property, a single raw observation might not be enough data to make a suitable state. If that is the case, you can either apply your domain knowledge in order to construct a better state from available data, or you can try to use techniques designed for [partially observable MDPs (POMDPs)](https://en.wikipedia.org/wiki/Partially_observable_Markov_decision_process) - these effectively try to build missing parts of state data statistically. You could use a RNN or hidden markov model (also called a "belief state") for this, and in some way this is using a "*learning or classification algorithms to "learn" those states*" as you asked.
Finally, you need to consider the type of approximation model you want to use. A similar approach applies here as for supervised learning:
* A simple linear regression with features engineered based on domain knowledge can do very well. You may need to work hard on trying different state representations so that the linear approximation works. The advantage is that this simpler approach is more robust against stability issues than non-linear approximation
* A more complex non-linear function approximator, such as a multi-layer neural network. You can feed in a more "raw" state vector and hope that the hidden layers will find some structure or representation that leads to good estimates. In some ways, this too is "*learning or classification algorithms to "learn" those states*" , but in a different way to a RNN or HMM. This might be a sensible approach if your state was expressed naturally as a screen image - figuring out the feature engineering for image data by hand is very hard.
The [Atari DQN work by DeepMind](https://deepmind.com/research/dqn/) team used a combination of feature engineering and relying on deep neural network to achieve its results. The feature engineering included downsampling the image, reducing it to grey-scale and - importantly for the Markov Property - using four consecutive frames to represent a single state, so that information about velocity of objects was present in the state representation. The DNN then processed the images into higher-level features that could be used to make predictions about state values.
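A rough sketch of that kind of preprocessing (the frame size below is Atari's 210×160; the original DQN additionally cropped and resized to 84×84):

```python
import numpy as np

def preprocess(frame_rgb):
    """Grayscale and downsample a single RGB frame."""
    gray = frame_rgb.mean(axis=2)           # crude grayscale conversion
    return gray[::2, ::2]                   # downsample by taking every 2nd pixel

def make_state(last_four_frames):
    """Stack 4 consecutive preprocessed frames so velocity is observable."""
    return np.stack([preprocess(f) for f in last_four_frames], axis=0)

frames = [np.random.rand(210, 160, 3) for _ in range(4)]
state = make_state(frames)
print(state.shape)  # → (4, 105, 80)
```

A single frame would violate the Markov Property for moving objects; stacking four restores velocity (and acceleration) information to the state.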
Upvotes: 6 [selected_answer]
2018/08/31 <issue_start>username_0: I'm having a little trouble with the definition of rationality, which goes something like:
>
> An agent is rational if it maximizes its performance measure given its current knowledge.
>
>
>
I've read that a simple reflex agent will not act rationally in a lot of environments. For example, a simple reflex agent can't act rationally when driving a car, as it needs previous perceptions to make correct decisions.
However, if it does its best with the information it's got, wouldn't that be rational behaviour, as the definition contains "given its current knowledge"? Or is it more like: "given the knowledge it could have had at this point if it had stored all the knowledge it has ever received"?
Another question about the definition of rationality: Is a chess engine rational as it picks the best move given the time it's allowed to use, or is it not rational as it doesn't actually (always) find the best solution (would need more time to do so)?<issue_comment>username_1: When we use the term rationality in AI, it tends to conform to the [game theory](https://en.wikipedia.org/wiki/Game_theory)/[decision theory](https://en.wikipedia.org/wiki/Decision_theory) definition of [rational agent](https://en.wikipedia.org/wiki/Rational_agent#Artificial_intelligence).
In a solved or tractable game, an agent can have perfect rationality. If the game is intractable, rationality is necessarily [bounded](https://en.wikipedia.org/wiki/Bounded_rationality). (Here, "game" can be taken to mean any problem.)
There is also the issue of [imperfect information](https://en.wikipedia.org/wiki/Perfect_information) and [incomplete information](https://en.wikipedia.org/wiki/Complete_information).
Rationality isn't restricted to objectively optimal decisions but includes subjectively optimal decisions, where the optimality can only be presumed. (That's why defection is the optimal strategy in 1-shot Prisoner's Dilemma, where the agents don't know the decision-making process of the competitor.)
* Rationality here conforms to Russell & Norvig's definition, where it is related to performance in an environment.
What may be rational in one environment may not be rational in a different environment. Additionally, what may be locally rational for a simple reflex agent will not appear rational from the perspective of an agent with more knowledge, or a learning agent.
Iterated Dilemmas, where there is communication in the form of prior choices, may provide an analogy. An agent that always defects, even where the competitor has shown willingness to cooperate, may not be regarded as rational because defecting vs. a cooperative agent does not maximize utility. A simple reflex agent wouldn't have the capacity to alter its strategy.
However, rationality used in the most general sense might allow that, to the agent making the decision, if the decision is based on achieving an objective, and the decision is reached utilizing the information available to that agent, the decision may be regarded as rational, regardless of actual optimality.
Upvotes: 3 <issue_comment>username_2: >
> I've read that a simple reflex agent will not act rationally in a lot of environments. E.g. a simple reflex agent can't act rationally when driving a car as it needs previous perceptions to make correct decisions.
>
>
>
I wouldn't say that the need for previous perceptions is the reason why a simple reflex agent doesn't act rationally. I'd say the more serious issue with simple reflex agents is that they do not perform **long-term planning**. I think that is the primary issue that causes them to not always act rationally, and that is also consistent with the definition of rationality you provided. A reflex-based agent typically doesn't involve long-term planning, and that's why it in fact does not often do best given the knowledge it has.
>
> Another question about the definition of rationality: Is a chess engine rational as it picks the best move given the time it's allowed to use, or is it not rational as it doesn't actually (always) find the best solution (would need more time to do so)?
>
>
>
An algorithm like minimax in its "purest" formulation (without a limit on search depth) would be rational for games like chess, since it would play optimally. However, that is not feasible in practice, it would take too long to run. In practice, we'll run algorithms with a limit on search depth, to make sure that they stop thinking and pick a move in a reasonable amount of time. Those will not necessarily be rational. This gets back to bounded rationality as described by username_1 in his answer.
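A minimal sketch of depth-limited minimax over an abstract game tree — the tree structure and the heuristic values at the cutoff are made up:

```python
def minimax(state, depth, maximizing, children, heuristic):
    """Depth-limited minimax: only optimal if the depth reaches the true game end."""
    kids = children(state)
    if depth == 0 or not kids:
        return heuristic(state)   # bounded rationality enters here
    values = [minimax(k, depth - 1, not maximizing, children, heuristic) for k in kids]
    return max(values) if maximizing else min(values)

# Tiny made-up tree: 'a' -> 'b','c'; leaves get heuristic values.
tree = {'a': ['b', 'c'], 'b': ['d', 'e'], 'c': ['f', 'g']}
values = {'d': 3, 'e': 5, 'f': 2, 'g': 9}
best = minimax('a', depth=2, maximizing=True,
               children=lambda s: tree.get(s, []),
               heuristic=lambda s: values.get(s, 0))
print(best)  # → 3: max over min(3, 5) = 3 and min(2, 9) = 2
```

With `depth` large enough to reach the end of the game, the heuristic is never consulted and the play is optimal; any smaller depth makes the result only as good as the heuristic.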
The story is not really clear if we try to talk about this in terms of *"picking the best move given the time it's allowed to use"* though, because what is or isn't possible given a certain amount of time depends very much on factors such as:
* algorithm we choose to implement
* speed of our hardware
* efficiency of implementation / programming language used
* etc.
For example, hypothetically I could say that I implement an algorithm that requires a database of pre-computed optimal solutions, and the algorithm just looks up the solutions in the database and instantly plays the optimal moves. Such an algorithm would be able to truly be rational, even given a highly limited amount of time. It would be difficult to implement in practice because we'd have difficulties constructing such a database in the first place, but the algorithm itself is well-defined. **So, you can't really include something like "given the time it's allowed to use" in your definition of rationality.**
Upvotes: 3 [selected_answer]
2018/08/31 <issue_start>username_0: My question relates to but doesn't duplicate a question that has been asked [here](https://ai.stackexchange.com/questions/4282/measuring-object-size-using-deep-neural-network).
I've Googled a lot for an answer to the question: *Can you find the dimensions of an object in a photo if you don't know the distance between the lens and the object, and there are no "scales" in the image?*
The overwhelming answer to this has been "no". This is, from my understanding, due to the fact that, in order to solve this problem with this equation,
$$\text{distance to object (mm)} = \frac{f \text{ (mm)} \times \text{real height (mm)} \times \text{image height (pixels)}}{\text{object height (pixels)} \times \text{sensor height (mm)}}$$
you will need to know either the "real height" or the "distance to object". It's the age-old issue of "two unknowns, one equation". That's unsolvable. A way around this is to place an object with a known dimension in the photo, in the same plane as the unknown object, find the distance to this object, and use that distance to calculate the size of the unknown (this relates to the answer from the question I linked above). This is the equivalent of putting a ruler in the photo, and it's a fine way to solve this problem easily.
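In code, that "ruler in the photo" trick reduces to a simple ratio; no camera parameters are needed as long as both objects lie in the same plane (the numbers below are made up):

```python
def size_from_reference(ref_real_mm, ref_pixels, unknown_pixels):
    """Estimate an object's real size from a reference object in the same plane."""
    mm_per_pixel = ref_real_mm / ref_pixels
    return unknown_pixels * mm_per_pixel

# A US quarter (24.26 mm) spans 120 px; the hail stone spans 180 px → ≈ 36.4 mm.
print(size_from_reference(24.26, 120, 180))
```

The application would still need some way to measure `ref_pixels` and `unknown_pixels` automatically, e.g. via segmentation.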
This is where my question remains unanswered. What if there is no ruler? What if you want to find a way to solve the unsolvable problem? **Can we train an Artificial Neural Network to approximate the value of the real height without the value of the object distance or use of a scale?** Is there a way to leverage the unexpected solutions we can get from AI to solve a problem that is seemingly unsolvable?
Here is an example to solidify the nature of my question:
I would like to make an application where someone can pull out their phone, take a photo of a hail stone against the ground at a distance of ~1-3 ft, and have the application give them the hail stone dimensions. My project leader wants to make the application accessible, which means he doesn't want to force users to carry around a quarter or a special object of known dimensions to use as a scale.
In order to avoid the use of a scale, would it be possible to use all of the [EXIF](https://en.wikipedia.org/wiki/Exif) meta-data from these photos to train a neural network to approximate the size of the hail stone within a reasonable error tolerance? For some reason, I have it in my head that if there are enough relevant variables, we can design an ANN that can pick out some pattern to this problem that we humans are just unable to identify. Does anyone know if this is possible? If so, is there a deep learning model that can best suit this problem? If not, please put me out of my misery and tell me why it's impossible.
> Can one use an Artificial Neural Network to determine the size of an object in a photograph?
>
>
>
Yes: [Learning Depth from Single Monocular Images](https://papers.nips.cc/paper/2921-learning-depth-from-single-monocular-images.pdf)
In the end, depth is just one special form of size.
Of course, you need something partially known, e.g. another car. You don't need to know the exact size of the car, but you know what size cars generally are. If you have an image without any reference, it is impossible.
Upvotes: 2 <issue_comment>username_2: In my thesis I actually solve the problem of depth estimation with a CNN based on a single monocular image, so I can share my experience with this problem.
As you already stated, in general you have the problem that you cannot recover the scale of the scene in an image by geometrical approaches directly. That remains true even if you know the properties of your camera and lens, like focal length, but don't know any absolute sizes in the scene. However, a neural network is still able to solve the task of depth estimation based on monocular images (at least for fixed camera properties) due to known object sizes it learned through training on the dataset. That means it can use the learned size of specific objects and the relative depth relations to give a fairly good approximation of the depth in the scene.
However, in your special case this approach would **not work**, if I understand you correctly. If you just take a photo of a stone that can have an arbitrary size, and no depth cues or unique patterns that relate to depth are present in the image, there is no chance to ever estimate the absolute depth. A CNN would probably just learn some average depth values or recurring depth patterns of your dataset, or memorize the whole training set to minimize the training error, since it simply cannot solve this task. So you would not get a tool that somehow generalizes to new scenes. A neural network is still just a function approximator, not something magical that can solve the unsolvable.
For your use case there could be some (complex) solutions that could give you a more or less accurate depth estimation. For example, you could use a structure-from-motion approach where you somehow measure the absolute camera movement with the accelerometer of the phone. Best would be a stereo-camera based setup where you know the absolute displacement between the camera positions, which could solve this task if you have textures in your images. With that you could find the absolute depth of specific points through a classical stereo depth estimation or by using a CNN that estimates the depth on the stereo image pair. Another approach would be to let the user input the phone's height above the ground, or approximate it through the accelerometer of the smartphone, and then approximate the stone size based on its size in the image and the known absolute height above the ground (probably inaccurate).
Upvotes: 2
2018/09/01 <issue_start>username_0: I am new to the field and I am trying to understand how it is possible to use categorical variables / enums?
Let's say we have a data set and 2 of its features are `home_team` and `away_team`; the possible values of these 2 features are all the NBA teams.
How can we "normalize" these features to be able to use them to create a deep network model (e.g. with tensorflow)?
Any references to read about modeling techniques are also very appreciated.<issue_comment>username_1: Authors use many different approaches.
One approach is to have a different input neuron for each possible category, and then use a "1-hot" encoding. So if you have 10 categories, then you can encode this as 10 binary features.
Another is to use some sort of binary encoding. If you have 10 categories, it is sufficient to use 4 neurons to represent all possible categories by using binary numbers.
A third approach is to convert your categories to cardinal values, and then normalize them. This may be more effective if your categories really are cardinal (i.e. orderable). If there isn't a natural ordering to them though, this might lead to strange results or make the problem difficult to learn (since it ends up embedding non-linear relationships in the learning problem that don't need to exist).
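Minimal sketches of the first two encodings, for 10 categories:

```python
def one_hot(category, n_categories):
    """One input neuron per category: a single 1, the rest 0."""
    vec = [0] * n_categories
    vec[category] = 1
    return vec

def binary_code(category, n_bits):
    """ceil(log2(n)) neurons: the category index written in binary."""
    return [(category >> i) & 1 for i in range(n_bits)]

print(one_hot(3, 10))      # → [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(binary_code(3, 4))   # → [1, 1, 0, 0]  (least significant bit first)
```

Note that the binary code implicitly makes some unrelated categories share input bits, which can make learning harder than with one-hot inputs.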
Upvotes: 3 <issue_comment>username_2: A one-hot encoding, as described in [John's answer](https://ai.stackexchange.com/questions/7781/how-to-model-categorical-variables-enums/7783#7783), is probably the most straightforward / simple solution (maybe even the most common?). It is not without its problems though. For example, if you have a large number of such categorical variables, and each has a large number of possible values, the number of binary inputs you need for one-hot encodings may grow too large.
>
> Lets say we have a data set and 2 of its features are home\_team and away\_team, the possible values of these 2 features are all the NBA teams.
>
>
>
In this specific example, a different possible solution might be not to use the "identity" of a team as a feature itself, but try to find a number of (ideally numeric) features corresponding to that team.
For example, instead of trying to encode "home\_team" in some way in your inputs, you could (if you manage to find the data you need to do this) use the following features (not really familiar with NBA, so not sure if all these make sense):
* Win percentage of home\_team in recent X amount of time
* Historical win percentage of home\_team against away\_team
* Average points scored per match by this team
* In football there's something like how many minutes per game a team is "in control" of the ball, is there something similar in NBA maybe?
* etc.
And then you can try to get a similar list of features for the away\_team.
This kind of solution would work for your example, and maybe also for various other examples. It might not work in all cases of categorical features though, in some cases you'd have to revert to solutions like those in John's answer.
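A hypothetical sketch of how such numeric features could be computed from raw match records (the records, field names and team names below are all invented):

```python
# Hypothetical historical match records
matches = [
    {"home": "A", "away": "B", "winner": "A", "points": {"A": 101, "B": 95}},
    {"home": "B", "away": "A", "winner": "A", "points": {"A": 99, "B": 90}},
    {"home": "A", "away": "C", "winner": "C", "points": {"A": 88, "C": 92}},
]

def win_pct(team, vs=None):
    """Win percentage of `team`, optionally restricted to games against `vs`."""
    played = [m for m in matches
              if team in (m["home"], m["away"])
              and (vs is None or vs in (m["home"], m["away"]))]
    return sum(m["winner"] == team for m in played) / len(played) if played else 0.0

def avg_points(team):
    """Average points scored per match by `team`."""
    pts = [m["points"][team] for m in matches if team in m["points"]]
    return sum(pts) / len(pts) if pts else 0.0

# The numeric inputs that would replace the raw team identity:
features = [win_pct("A"), win_pct("A", vs="B"), avg_points("A")]
```

The same functions would then be applied to the away team to build its feature list.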
Upvotes: 2 |
2018/09/01 | 518 | 2,048 | <issue_start>username_0: I have been researching LSTM neural networks. I have seen [this](https://cdn-images-1.medium.com/max/1600/1*Niu_c_FhGtLuHjrStkB_4Q.png) diagram a lot and I have few questions about it. Firstly, is this diagram used for most LSTM neural networks?
Secondly, if it is, wouldn't only having single layers reduce its usefulness?<issue_comment>username_1: (1) Yes, this is the diagram for a classical LSTM unit. Of course there are some variants, and their diagrams would look slightly different.
(2) It is very common for researchers to use more than one layer of LSTM, which achieves better performance than a single-layer network. A common way to "stack" LSTMs is to use the previous layer's output ($h\_t$ in your diagram) as the input to the next layer ($x\_t$). However, I have seldom seen any successful application of 5+ layers of LSTMs, while for CNNs it is common to use tens or even hundreds of layers.
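To make the stacking concrete, here is a minimal, untrained two-layer LSTM forward pass in plain numpy; in practice you would use a framework's stacked recurrent layers, but the wiring is the same: at each step, layer 2's input is layer 1's output $h\_t$. All sizes and weights below are arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM cell step; W maps [x; h] to the 4 gate pre-activations."""
    n = h.size
    z = W @ np.concatenate([x, h]) + b
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2 * n]), sigmoid(z[2 * n:3 * n])
    g = np.tanh(z[3 * n:])
    c = f * c + i * g             # new cell state
    h = o * np.tanh(c)            # new hidden state (the layer's output)
    return h, c

rng = np.random.default_rng(0)
d_in, n1, n2, T = 4, 8, 8, 5      # input size, widths of layers 1 and 2, seq length
W1, b1 = rng.normal(size=(4 * n1, d_in + n1)), np.zeros(4 * n1)
W2, b2 = rng.normal(size=(4 * n2, n1 + n2)), np.zeros(4 * n2)

h1, c1 = np.zeros(n1), np.zeros(n1)
h2, c2 = np.zeros(n2), np.zeros(n2)
for t in range(T):
    x_t = rng.normal(size=d_in)
    h1, c1 = lstm_step(x_t, h1, c1, W1, b1)   # layer 1 reads the raw input x_t
    h2, c2 = lstm_step(h1, h2, c2, W2, b2)    # layer 2 reads layer 1's output h_t
```

The final `h2` would be what a classifier head sees from the stacked network.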
Upvotes: 1 <issue_comment>username_2: Just to be 100% sure - the diagram you refer to is a diagram of an LSTM CELL, not NETWORK. The operands you see on the diagram are operations within a cell, not separate "neurons". I think it is quite obvious, however reading your questions I just wanted to be 100% sure we are on the same page.
Now, about layers. RNN networks (LSTMs in particular) are just like any other ANN structure. Theoretically, a 1-hidden-layer network can do any computation of a "deeper" network: an ANN is a universal approximator of mathematical functions. Still, multi-layer ANNs typically work better on more complex problems. A multi-layer network typically needs fewer total connections, learns better and is less resource-demanding.
In particular, multi-layer LSTMs are believed to be better at determining complex in-time patterns. I think there is no rigorous proof for this, however.
Also, in practical applications - I did not see much improvement in network capabilities by adding additional LSTM layers. Adding more dense layers before/after the LSTM seemed to have a much better effect.
Upvotes: 1 [selected_answer] |
2018/09/02 | 352 | 1,536 | <issue_start>username_0: What benefits can we get by applying a Graph Convolutional Neural Network instead of an ordinary CNN? I mean, if we can solve a problem with a CNN, for what reason should we switch to a Graph Convolutional Neural Network to solve it? Are there any examples, i.e. papers, showing that replacing an ordinary CNN with a Graph Convolutional Neural Network achieves an accuracy increase, a quality improvement or a performance gain? Can anyone give some examples, such as image classification or image recognition, especially in the medical imaging, bioinformatics or biomedical areas?<issue_comment>username_1: Generally speaking, a graph CNN is applied to data represented by [graphs](https://en.wikipedia.org/wiki/Graph_(abstract_data_type)), not images.
* a **graph** is a collection of nodes and edges connecting them.
* an **image** is a 2D or 3D matrix, in which each element denotes a pixel in space
If your data are just images, or something similar (e.g. some fMRI data), you usually cannot benefit from a graph CNN compared with a usual CNN.
Sometimes, the class labels of your images may be organized in a graph-like (or tree-like) structure. In that case, you may have a chance to benefit from graph CNN.
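For intuition about what a convolution over a graph looks like, here is one widely used propagation rule (from Kipf and Welling's graph convolutional networks) applied to a toy 3-node graph; the graph, features and weight below are all made up:

```python
import numpy as np

# Toy graph: 3 nodes, edges 0-1 and 1-2, given as an adjacency matrix
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_hat = A + np.eye(3)                        # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalization

X = np.array([[1.0], [2.0], [3.0]])          # one feature per node
W = np.array([[0.5]])                        # learnable weight matrix

H = np.maximum(0, A_norm @ X @ W)            # one propagation step with ReLU
```

Each node's new representation mixes its own feature with its neighbors', weighted by the normalized adjacency, which is exactly what a pixel-grid CNN cannot express for irregular neighborhoods.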
Upvotes: 3 [selected_answer]<issue_comment>username_2: Bioinformatics is an area in which Graph Convolutional Neural Networks are useful. Consider protein networks, or gene-gene networks. These biological networks can naturally be represented as graphs, and that is where a GCN becomes useful for bioinformatics.
Upvotes: 2 |
2018/09/02 | 1,888 | 6,392 | <issue_start>username_0: I want to explore and experiment with the ways in which I could use a neural network to identify patterns in text.
examples:
1. Prices of XYZ stock went down at **11:00** am **today**
2. Retrieve a list of items exchanged on **03/04/2018**
3. Show error logs between **3 - 5 am** **yesterday**.
4. Reserve a flight for **3rd October**.
5. Do I have any meetings **this Friday**?
6. Remind to me wake up early **tue**, **4th sept**
This is for a project, so I am not using regular expressions. Papers, projects and ideas are all welcome, but I want to approach feature extraction/pattern detection so as to have a trained model which can identify patterns that it has already seen.<issue_comment>username_1: If you want to use deep learning approaches, you should look at recurrent neural networks (RNNs). Recurrent networks take temporal dependencies into account and could detect that *this* in *this Friday* belongs to a datetime, but not in *this apple*.
As a simple model, you could use a bidirectional LSTM layer (a type of RNN):
* **Input**: the sequences of characters.
* **Output**: whether the character belongs to datetime or not.
The longest part will be gathering many sentences with their corresponding solutions to create a training/testing dataset. [Keras](https://keras.io/) might be a good framework to start playing around with, and it comes with many examples.
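Before any such model can be trained, each sentence has to be turned into a character-level training pair; a minimal sketch of that encoding (the labeling scheme here is just one plausible choice):

```python
def encode_example(text, span_start, span_end, vocab):
    """One training pair: per-character vocabulary indices plus binary
    'is part of a datetime' labels."""
    xs = [vocab.setdefault(ch, len(vocab)) for ch in text]
    ys = [1 if span_start <= i < span_end else 0 for i in range(len(text))]
    return xs, ys

vocab = {}
text = "Do I have any meetings this Friday?"
start = text.index("this Friday")
xs, ys = encode_example(text, start, start + len("this Friday"), vocab)
```

The `xs` sequences would feed the input layer and the `ys` sequences would be the per-character targets.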
Upvotes: 1 <issue_comment>username_2: If you don't want to use machine learning, you may use a date[time parser in Python](https://dateparser.readthedocs.io/en/latest/index.html). A few examples are given below. It will return a formatted datetime from a given string, and it works with many languages.
```
>>> import dateparser
>>> dateparser.parse('12/12/12')
datetime.datetime(2012, 12, 12, 0, 0)
>>> dateparser.parse(u'Fri, 12 Dec 2014 10:55:50')
datetime.datetime(2014, 12, 12, 10, 55, 50)
>>> dateparser.parse(u'Martes 21 de Octubre de 2014') # Spanish (Tuesday 21 October 2014)
datetime.datetime(2014, 10, 21, 0, 0)
>>> dateparser.parse(u'Le 11 Décembre 2014 à 09:00') # French (11 December 2014 at 09:00)
datetime.datetime(2014, 12, 11, 9, 0)
>>> dateparser.parse(u'13 января 2015 г. в 13:34') # Russian (13 January 2015 at 13:34)
datetime.datetime(2015, 1, 13, 13, 34)
>>> dateparser.parse(u'1 เดือนตุลาคม 2005, 1:00 AM') # Thai (1 October 2005, 1:00 AM)
datetime.datetime(2005, 10, 1, 1, 0)
```
Upvotes: 0 <issue_comment>username_3: **Approaches**
There are two main approaches to detecting any human readable representation of a discrete quantity within text.
1. Detect well known and stable patterns in the input stream and by adjacency determine the output stream.
2. Windowing through the text in the input stream and directly detect the quantities.
There are other approaches and there are hybrids of these two or one of these and the other approaches, but these two are the theoretically most straightforward and likely to produce both reliability and accuracy.
**Re-entrant Learning**
Whether the training involves re-entrant learning techniques, such as reinforcement, is a tangential issue that this answer will not address, but know that whether all training is solely a deployment component or whether adaptation and/or convergence occurs in real time is an architectural decision to be made.
**Practical Concerns**
Practically, the outputs of each recognition are as follows.
* Starting index
* Ending index
* Integer year or null
* Integer day of year or null
* Integer hour in military time or null
* Minute or null
* Second or null
* Time zone or null
* Probability the recognition unit was correctly identified
* Probability the recognition produced accurate results
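In code, one recognition unit could be represented as a simple record type; a hypothetical sketch:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DateTimeRecognition:
    start: int                        # starting index in the input stream
    end: int                          # ending index
    year: Optional[int] = None
    day_of_year: Optional[int] = None
    hour: Optional[int] = None        # military (24h) time
    minute: Optional[int] = None
    second: Optional[int] = None
    time_zone: Optional[str] = None
    p_identified: float = 0.0         # probability the unit was correctly identified
    p_accurate: float = 0.0           # probability the extracted values are accurate

r = DateTimeRecognition(start=21, end=32, year=2018, p_identified=0.9)
```

Fields the recognizer cannot determine simply stay `None`, matching the "or null" convention above.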
Also practically, the input must either be from within one particular locale's norms in terms of
* Calendar,
* Time,
* Written language,
* Character encoding, and
* Collation,
... or ...
* The learning must occur using training sets that include the locales that will be encountered during system use
... or ...
* Much of the locale specific syntax must be normalized to a general date and time language such as this:
जनवरी --> **D\_01**
Enero --> **D\_01**
Janúar --> **D\_01**
so that Filipino and Icelandic names for the first month of the year enter the artificial network as the same binary pattern.
**Date and Time Specifically**
In the case of 1. above, which is semi-heuristic in nature, and assuming that the locale is entirely en-US.utf-8, the CASE INSENSITIVE patterns for a PCRE library or equivalent to use as a search orientation heuristic include the following.
```
(^|[^0-9a-z])((19|20|21)[0-9][0-9])([^0-9a-z]|$)
(^|[^0-9a-z])(Mon|Monday|Tue|Tues|Tuesday|Wed|Wednesday|Thu|Thur|Thurs|Thursday|Fri|Friday|Sat|Saturday|Sun)([^0-9a-z]|$)
(^|[^0-9a-z])(Jan|January|Feb|February|Mar|March|Apr|April|May|Jun|June|Jul|July|Aug|August|Sep|Sept|September|Oct|October|Nov|November|Dec|December)([^0-9a-z]|$)
(^|[^0-9a-z])(Today|Yesterday|Tomorrow)([^0-9a-z]|$)
(^|[^0-9])([AP]M|[AP][.]M[.]|Noon|Midnight)([^0-9a-z]|$)
(^|[^a-z])(0?[1-9])(:[0-5][0-9]){1,2}([^a-z]|$)
```
There should be others for time, hyphenated or slash delimited dates, or time zone.
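As a sanity check, the month pattern above can be exercised directly (Python's `re` syntax is close enough to PCRE for these patterns):

```python
import re

MONTH = re.compile(
    r"(^|[^0-9a-z])"
    r"(Jan|January|Feb|February|Mar|March|Apr|April|May|Jun|June|Jul|July|"
    r"Aug|August|Sep|Sept|September|Oct|October|Nov|November|Dec|December)"
    r"([^0-9a-z]|$)",
    re.IGNORECASE,
)

def find_month(text):
    """Return the first month-name artifact in `text`, or None."""
    m = MONTH.search(text)
    return m.group(2) if m else None
```

Note that the engine backtracks past the short alternative `Oct` to match the full `October` because of the trailing boundary class.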
The positions and normalized encoding of these date and time artifacts are then substituted into the artificial network inputs instead of the original text in the stream, reducing redundancy and improving both the speed of training and the resulting accuracy and reliability of recognition.
In the case of 2. above, the entire burden of recognition is left to the artificial network. The advantage is less reliance on date and time conventions. The disadvantage is a much larger burden placed on training data variety and training epochs, meaning a much higher burden on computing resources and on the patience of the project's stakeholders.
**Windowing**
An overlapping windowing strategy is necessary. Unlike FFT spectral analysis in real time, the windowing must be rectangular, because the size of the window is the width of the input layer of the artificial network. Experimenting with how the text and the date and time components are normalized and encoded before entering the input layer can greatly vary the results in terms of training speed, recognition accuracy, reliability, and adaptability to varying statistical distributions of date and time instances and relationships.
Upvotes: 2 [selected_answer] |
2018/09/04 | 814 | 3,288 | <issue_start>username_0: I am not sure if I can use the words *binomial* and *binary* and *boolean* as synonyms to describe a data attribute of a data set which has two values (yes or no). Are there any differences in the meaning on a deeper level?
Moreover, if I have an attribute with three possible values (yes, no, unknown), this would be an attribute of type polynominal. What further names are also available for this type of attribute? Are they termed as "symbolic"?
I am interested in the relation between the following attribute types: binary, boolean, binominal, polynominal (and alternative descriptions) and nominal.<issue_comment>username_1: Binomial is a **distribution** characterised by $p$, the probability of success for an independent trial. Each sample you get from the distribution is a **binary** variable, 0 or 1.
Upvotes: 0 <issue_comment>username_2: @SmallChess's answer is a good start, but there are some additional parts to the question.
***binary*** variables or *binary* data consist of data with the values 0 or 1, and no other values. We usually don't talk about "binary distributions", because it's only data, variables, or outcomes that can be binary. A distribution might produce binary data, but is not itself binary because its parameters typically take on real-values.
A ***binomial*** distribution is a distribution that produces binary data. In particular, it is a random process that produces the value 1 with probability $p$, and the value 0 with probability $1-p$. Notice that although it makes binary data, it is not itself a kind of data, and is in fact characterized by a non-binary number ($p$).
***Boolean*** data takes on the values *true* or *false*. Often, but not always, these are stored as 0's and 1's. The distinction is that boolean data may not be stored numerically. There might also be different expectations about how Boolean data should be processed (for instance, $true + true = true$, but $1 + 1 = 2$).
I am not aware of the term ***polynomial*** being applied to data. However, ***multinomial*** distributions are probability distributions that produce 0 with probability $p\_0$, 1 with probability $p\_1$, 2 with probability $p\_2$, and so on, producing $n$ with probability $1 - \sum\_{i=0}^{n-1} p\_i$, for $n+1$ different numbers in total. Like binomial distributions, multinomial distributions are characterized by a set of real-valued numbers, and are distinct from the kind of data they generate.
***Categorical*** data takes on values from a set of categories. The example you give (yes, no, maybe) is not strictly multinomial data, but could be generated from a multinomial distribution by mapping the values 0, 1 and 2 onto yes, no and maybe. Note again that categorical data might be non-numeric. Operations like adding might be non-sensical.
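The distribution-versus-data distinction above can be seen in a quick numpy sketch (the category labels are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# A binomial process is characterized by the real-valued parameter p ...
p = 0.3
binary_data = rng.binomial(n=1, p=p, size=10_000)   # ... but the data it makes is binary

# A multinomial/categorical process is characterized by a probability vector
probs = [0.5, 0.3, 0.2]
categorical_data = rng.choice(["yes", "no", "maybe"], size=10_000, p=probs)
```

The parameters (`p`, `probs`) are real-valued, while every sample drawn is binary or categorical.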
***Cardinal*** data isn't something you asked about, but arises when data can be nicely ordered. For example, playing cards are easily mapped to the numbers 1-13, and can have reasonable semantic meaning when represented this way (e.g. A + 2 = 3, and 1 + 2 = 3).
***Nominal Data*** is just literal numbers that mean exactly what they purport to mean. For example, if you store the number of cans of beer a customer purchased, that would be nominal data.
Upvotes: 1 |
2018/09/05 | 1,145 | 5,054 | <issue_start>username_0: I am interested in the field of artificial intelligence. I began by learning the various machine learning algorithms. The maths behind some were quite hard. For example, back-propagation in convolutional neural networks.
Then when getting to the implementation part, I learnt about TensorFlow, Keras, PyTorch, etc. If these provide much faster and more robust results, will there be a necessity to code a neural network (say) from scratch using the knowledge of the maths behind back-prop, activation functions, dimensions of layers, etc., or is the role of a data scientist only to tune the hyper-parameters?
Further, as of now the field of AI does not seem to have any way to solve for these hyperparameters, which are instead arrived at through trial and error. This begs the question: can a person with just basic intuition about what the algorithms do make a model just as good as a person who knows the detailed mathematics of these algorithms?<issue_comment>username_1: This is a good question. I tend to think the answer is **yes** it is necessary to know the details, because a person without mathematical understanding of these algorithms cannot *consistently* make a model as good as someone who does have that understanding.
The reason is right at the core of computer science: abstractions are useful, but usually obscure details. When those details matter, someone who only knows the abstraction and not the details that lie beneath can't understand what's going on.
As an example, if you don't understand the math behind optimizing the weights of a neural network, it might not be apparent how parameters like the learning rate are impacted by properties like network depth when some of the inputs have not been properly normalized. If you understand the optimization process mathematically, you can reason through the effects even if you are trying to work on an unfamiliar problem. This ability to reason through the probable effects of parameter decisions in *new* domains is the main thing that you miss by working from intuition.
Upvotes: 3 <issue_comment>username_2: In my answer, I will call people who do not know the mathematics behind ML algorithms data science practitioners, and those who do, data scientists (these terms may not be accurate in real life).
So, with the advent of neural networks, the importance of understanding the maths behind ML algorithms has diminished significantly, since earlier you had to do something called [$feature$ $engineering$](https://en.wikipedia.org/wiki/Feature_engineering) based on the different parameters of the data. This needed some kind of knowledge of statistics and basic coordinate geometry.
Nowadays practitioners can easily apply well known models to problems without any thought process or mathematics involved. Examples of this include CNN architectures like [AlexNet](https://en.wikipedia.org/wiki/AlexNet), [LeNet](http://yann.lecun.com/exdb/lenet/), [ResNet](https://en.wikipedia.org/wiki/Residual_neural_network), etc, RNN architectures like [LSTM's](https://en.wikipedia.org/wiki/Long_short-term_memory) and [GRU's](https://en.wikipedia.org/wiki/Gated_recurrent_unit). We even are copying the weights of pre-trained models.
So what edge does a data scientist hold over practitioners? To me here is a list of points on which data scientists hold an edge:
* Hyper-parameter tuning: In any NN there are a minimum of 3-4 hyper-parameters, which gives rise to $\sum\_{n=0}^{4} {}^{4}C\_{n} = 2^4 = 16$ possible subsets to tune. A data scientist will, from loss curves, accuracy graphs and other score graphs, easily be able to narrow down the tuning required to make the NN perform best. A practitioner, in contrast, may have to try out all 16 combinations (each combination itself requiring many candidate values for each hyper-parameter) to get to a solution, which is time and resource consuming.
* Special Architectures: Some problems require out of the box thinking to come up with the best solution. Like a combination of CNN and RNN is used to predict captions for images, CNN's may be used in field of sequence processing (like genetic sequences). Such uncommon/unconventional solutions to problems can only be applied by a person who knows how a NN works in details.
* Intuition: Although strongly advised against by <NAME> in his course, I believe every ML programmers out there use NN architectures and solutions which they believe will work best entirely due to gut feeling (rather than going through tedious methodical way of using it on small scale problems of the same type). In this scenario data scientists are bound to get a more accurate model working just due to the fact they know the mathematics and the working of a NN. They will have an intuitive understanding of how the model will work on the data to perform the task at hand.
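The combinatorial blow-up facing the practitioner is easy to make concrete with a brute-force grid (the hyper-parameter names and candidate values here are arbitrary):

```python
from itertools import product

grid = {
    "learning_rate": [1e-2, 1e-3],
    "batch_size": [32, 64],
    "hidden_units": [64, 128],
    "dropout": [0.0, 0.5],
}

# Every combination of one candidate value per hyper-parameter: 2*2*2*2 = 16 runs
combinations = [dict(zip(grid, values)) for values in product(*grid.values())]
```

With only two candidate values per hyper-parameter this is already 16 full training runs, and the count grows multiplicatively with each extra candidate value.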
These I feel are some of the places where a scientist holds an edge over practitioners. I may have missed some other edges that a Data scientist may hold over practitioners, you are free to edit it in.
Upvotes: 0 |
2018/09/05 | 829 | 3,049 | <issue_start>username_0: In [Proximal Policy Optimization Algorithms](https://arxiv.org/abs/1707.06347) (2017), Schulman et al. write
>
> With this scheme, we only ignore the change in probability ratio when it would make the objective improve, and we include it when it makes the objective worse.
>
>
>
I don't understand why the clipped surrogate objective works. How can it work if it doesn't take into account the objective improvements?<issue_comment>username_1: Ok, so I think I have a better understanding of this now.
First, let's recall the main idea of PPO: staying close to the previous policy. It's the same idea as in TRPO, but the objective function $L$ is improved.
So, you want to make "small but safe steps". With the clipped surrogate objective, you don't give too much importance to promising actions. You learn that bad actions are bad, so you decrease their probability according to "how bad" they are. But for good actions, you only learn that they are "a little bit good", and their probability will be just slightly increased.
This mechanism allows you to perform small but relevant updates of your policy.
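Numerically, the clipped surrogate from the paper is just a `min` of the unclipped and clipped terms; a minimal sketch:

```python
import numpy as np

def clipped_surrogate(ratio, advantage, eps=0.2):
    """L^CLIP from the PPO paper: min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped)

# Good action (A > 0): the objective improvement is capped at (1+eps)*A ...
gain = clipped_surrogate(1.5, 1.0)    # 1.2, not 1.5
# ... but a bad action (A < 0) that became more likely is penalized in full
loss = clipped_surrogate(1.5, -1.0)   # -1.5, not -1.2
```

This asymmetry is exactly the quoted sentence from the paper: the ratio change is ignored only when it would improve the objective.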
hope this will help someone :)
Upvotes: 3 [selected_answer]<issue_comment>username_2: I think [@username_1](https://ai.stackexchange.com/users/17759/16aghnar) explains the concept quite well. However, clipping the surrogate objective alone doesn't ensure the trust region, as stated in the paper:
>
> [Engstrom et al., 2020](https://openreview.net/attachment?id=r1etN1rtPB&name=original_pdf), Implementation Matters in Deep RL: A Case Study on PPO and TRPO.
>
>
>
The authors inspected OpenAI's implementation of PPO and found many code-level optimizations; I'll list the most important ones below:
1. Clipped surrogate objective
2. Value function clipping
3. Reward scaling
4. Orthogonal initialization and layer scaling
5. Adam learning rate and annealing
They find that:
* PPO-M (`2.`-`5.`) alone can maintain the trust region.
(PPO-M: PPO without Clipped surrogate objective, but with code-level optimizations)
* PPO-Clip (`1.` only) cannot maintain the trust region.
(PPO-Clip: PPO without code-level optimizations, but with Clipped surrogate objective)
* TRPO+ has better performance compared to TRPO, and similar performance compared to PPO
(TRPO+: TRPO with code-level optimizations used in PPO OpenAI implementation)
An intuitive thought on why the clipped surrogate objective alone does not work: the first step we take is unclipped.
>
> As a result, since we initialize $\pi\_\theta$ as $\pi$ (and thus the ratios start all equal to one) the first step we take is identical to a maximization step over the unclipped surrogate reward. Therefore, the size of step we take is determined solely by the steepness of the surrogate landscape (i.e. the Lipschitz constant of the optimization problem we solve), and we can end up moving arbitrarily far from the trust region. -- [Engstrom et al., 2020](https://openreview.net/attachment?id=r1etN1rtPB&name=original_pdf)
>
>
>
Upvotes: 1 |
2018/09/06 | 4,055 | 15,554 | <issue_start>username_0: What is the definition of artificial intelligence?<issue_comment>username_1: Over the years, many people attempted to define artificial intelligence. A lot of those definitions are summed up by <NAME> and <NAME> in their book [Artificial Intelligence - A Modern Approach](http://aima.cs.berkeley.edu/)
The definitions of AI can be summarised as falling into the following categories:
>
> 1. Those that address thought process and reasoning (how an AI thinks/reasons)
> 2. Those that address behaviour (how an AI acts given what it knows)
>
>
>
Furthermore, the above 2 categories are further divided into definitions that:
>
> I. assess the success of an AI (to do the above) based on its ability
> to replicate human performance
>
>
> II. or an ability to replicate an ideal performance measure called
> 'rationality' (does it do the 'right' thing based on what it knows?)
>
>
>
I will cite you definitions that fit into each of the above categories:
>
> * 1.I. "The [automation of] activities that we associate with human thinking, activities such as decision making, problem solving, learning.." - Bellman 1978
> * 1.II. "The study of the computations that make it possible to perceive, reason, and act." - Winston, 1992
> * 2.I. "The study of how to make computers do things at which, at the moment, people do better" - Rich and Knight, 1991
> * 2.II. "The study of the design of intelligent agents" - Poole et al., 1998
>
>
>
In summary, AI is devoted to the creation of intelligent and rational machines that can make rational decisions and take rational actions.
I would suggest you read up on the Turing test, which Alan Turing proposed to test if a computer was intelligent. However, the Turing test has a few issues, because it is anthropomorphic.
When Aeronautical engineers created the airplane, they didn't set their goal that planes should fly exactly like birds, but rather, they started learning how lift forces were generated, based on the study of aerodynamics. Using this knowledge, they created planes.
Similarly, people in the AI world shouldn't put, IMHO, human intelligence as the standard to strive for, but, rather, we could use, say, rationality as a standard (amongst others).
Upvotes: 3 <issue_comment>username_2: **Intelligence**
A measure of the strength of a decision-making agent relative to other decision-making agents, in regard to a given task or set of tasks. The medium is irrelevant—intelligence is exhibited by both organic and intentionally created mechanisms. May also be the capability to solve a problem, as in the case of a [solved game](https://en.wikipedia.org/wiki/Solved_game).
**Artificial**
Relates to the term [artifact](https://www.etymonline.com/word/artifact), a thing which is intentionally created. Typically this term has been used to connote physical objects, but algorithms created by humans are also regarded as artifacts.
The etymology is derived from the Latin words *[ars](http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.04.0060%3Aentry%3Dars)* and *[faciō](http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.04.0060%3Aentry%3Dfacio)*: "To skillfully construct", or, "the art of making".
**Artificial Intelligence**
* Any decision-making agent that is skillfully (intentionally) constructed.
---
APPENDIX: The meaning of "intelligence"
The original meaning of "intelligence" seems to be "to acquire", back to the Indo-European. See: [intelligence (etymology)](https://www.etymonline.com/word/intelligence); [\*leg/\*leh₂w-](https://en.wiktionary.org/wiki/Appendix:List_of_Proto-Indo-European_roots#l)
[The OED 1st definition of intelligence](https://en.oxforddictionaries.com/definition/intelligence) is not incorrect, extending the meaning to acquisition of capability (demonstrable utility), just that the second definition is the older and fundamental: "The collection of information of [strategic] value; 2.3 (archaic) Information in general; news."
[You can regard the universe as being comprised of information](https://bigthink.com/philip-perry/the-basis-of-the-universe-may-not-be-energy-or-matter-but-information), whatever form that information takes (matter, energy, states, relative positions, etc.) From the standpoint of an algorithm, this makes sense since the only means they have to gauge the universe are [percepts](https://en.wikipedia.org/wiki/Percept_(artificial_intelligence)).
Take a flat text file. It may just be data, but you could try and execute. If it actually runs, it might demonstrate utility at some task. (For instance, if it is a minimax algorithm.)
"Intelligence as a measure of utility" is itself "intelligence" in the sense of information, specifically that information by which we measure intelligence, as a degree, relative to a task or to other intelligences.
Upvotes: 2 <issue_comment>username_3: The shortest answer I can come up with is as follows; take it with a grain of salt though, since we still do not know a lot about natural intelligence:
Natural intelligence could be seen as the process of learning abstract concepts from limited observations, with the intention of using them to solve a [new] task. This process involves using those concepts to imagine new, hypothetically correct scenarios/theories and combining them in a meaningful way to cut down the enormous hypothesis space of possibilities and enable generalization to new situations without observing any data beforehand. **Artificial intelligence** is the attempt to bring what natural intelligence does into machines.
Upvotes: -1 <issue_comment>username_4: There is no formal definition that most people agree on. Hence here is what I, as a data science / machine learning consultant, think:
Artificial intelligence *as a research field* is the study of agents which sense and act autonomously in an environment and improve their situation according to some metric with their actions.
I don't like the term, because it is too broad / vague. Instead, look at the definition of machine learning by <NAME>:
>
> A computer program is said to learn from experience 'E', with respect to some class of tasks 'T' and performance measure 'P' if its performance at tasks in 'T' as measured by 'P' improves with experience E
>
>
>
Machine learning is an important part of AI, but not the only one. Search algorithms, SLAM, constrained optimization, knowledge bases and automatic inference are also certainly part of AI.
Upvotes: 0 <issue_comment>username_5: >
> What is artificial intelligence?
>
>
>
This question is ambiguous. I will address the two less ambiguous but related questions.
1. What is the goal of the AI field?
2. What is **an** artificial intelligence?
What is the goal of the AI field?
---------------------------------
In the article [What is artificial intelligence?](http://www-formal.stanford.edu/jmc/whatisai/) (2007), <NAME>, one of the founders of artificial intelligence and who also coined the expression *artificial intelligence*, writes
>
> **Artificial intelligence** is the science and engineering of making intelligent machines, especially intelligent computer programs.
>
>
> It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
>
>
>
Therefore, the goal of the AI field is to create *intelligent* programs (or machines). So, he defines the goal of the field based on the concept of *intelligence*, which he defines as follows.
>
> **Intelligence** is the *computational* part of the ability to achieve *goals* in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.
>
>
>
So, we could conclude that the goal of the AI field is to create programs (or machines) that achieve **goals** in the world to different extents.
This definition of intelligence is reasonable and consistent with [reinforcement learning](http://incompleteideas.net/book/RLbook2020.pdf) (which could be [the path to AGI](https://ai.stackexchange.com/q/17084/2444)), but maybe not formal and rigorous enough. In [this answer](https://ai.stackexchange.com/a/13100/2444), I report a possibly more sound definition of intelligence given by Hutter and Legg, so I suggest that you read it, but the definitions are roughly consistent with each other (because the concepts of "goal" and "goal-seeking behavior" are present in both definitions), although they emphasize different aspects (e.g. computation or generality).
What is an artificial intelligence?
-----------------------------------
Nowadays, most people distinguish two types of artificially intelligent systems:
* [Narrow AI](https://en.wikipedia.org/wiki/Weak_AI) (aka weak AI, although this term may not exactly be a synonym for narrow AI, but it's just the opposite of strong AI: see [the Chinese-Room argument](https://plato.stanford.edu/entries/chinese-room/)): a system that solves a very specific problem (e.g. playing go)
* [Artificial general intelligence](http://www.scholarpedia.org/article/Artificial_General_Intelligence) (aka strong AI, although this term may not always be used as a synonym for AGI): a system that can solve multiple problems
This distinction started with philosophical arguments, such as [the Chinese room argument](https://plato.stanford.edu/entries/chinese-room/), where the ability of a computer to "understand" the actual problem was questioned. Nowadays, there are multiple successful cases of narrow AIs (e.g. AlphaGo), but there isn't yet a "truly" AGI system. This is mainly due to the fact that more people have been (probably wisely) focusing on solving specific problems rather than solving the "holy grail" problem of the AI field, i.e. create an AGI, which seems to be a lot more difficult than creating narrow AI systems. (Anyway, the creation of an AGI could actually arise from solutions to these specific problems, so maybe we are already creating the tools needed to build an AGI, without realizing it). See [What is the difference between strong-AI and weak-AI?](https://ai.stackexchange.com/q/74/2444) for more details about the difference between narrow AI and strong AI.
Upvotes: 2 <issue_comment>username_6: According to the book [Artificial Intelligence: A Modern Approach](https://www.cin.ufpe.br/%7Etfl2/artificial-intelligence-modern-approach.9780131038059.25368.pdf#page=27) (section 1.1), artificial intelligence (AI) has been defined in *multiple* ways, which can be organized into 4 categories.
1. **Thinking Humanly**
2. **Thinking Rationally**
3. **Acting Humanly**
4. **Acting Rationally**
The following picture (from the same book) provides 8 definitions of AI, where each box contains 2 definitions that fall into the same category. For example, the definitions in the top-left corner fall into the category **thinking humanly**.
[](https://i.stack.imgur.com/J9B2y.png)
There is also the [**AI effect**](https://en.wikipedia.org/wiki/AI_effect), which <NAME> describes (in her book [Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence](https://pdfs.semanticscholar.org/965b/39ad83c545849d473ce30cfc3d569f6e3828.pdf?_ga=2.48167655.1174938146.1610754221-826121516.1600962951#page=234), p. 204) as follows
>
> it's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something — play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, but *that's not thinking*
>
>
>
Upvotes: 2 <issue_comment>username_7: The [facts and anomalies paper](https://www.techrxiv.org/articles/preprint/Facts_and_Anomalies_to_Keep_in_Perspective_When_Designing_an_Artificial_Intelligence/12299945) (mine) has attempted to narrow down what intelligence is. Only once you identify it will you be on the right path to replicating it. For now, nobody knows exactly what constitutes intelligence. From the perspective of just what intelligence is: ***The ability to search for, parse and bring the right information into a context in order to deduce past, present and future (temporal) occurrences that enable moving toward favourable outcomes***. So a program or machine that operates based on some if-then-style conditions would be considered to have a basic level of intelligence.
However, when we say "artificial intelligence", we usually mean an intelligence that's approximately as good as, or better than, that of living creatures. From the facts and anomalies paper, the decision-making process was observed to be similar across almost all living creatures. What differs is the amount of memory stored, the length of the attention span, the questioning capability, and the level of complexity at which the creature can mix and match memories in the world model it creates in its mind.
The paper also brings forth an important point that an intelligence created via any tech can be specific to that tech. So an AI created via microprocessors will inherently be different from an organic intelligence (and there is nothing wrong in building it as such), but we will recognize it as being intelligent in the same way as an artificial sweetener is accepted as a replacement for sugar.
To think of it another way: if you had to say which of two people is more intelligent, how would you evaluate them? The person we'd consider more intelligent would be the one who understands and analyzes situations better, taking decisions that have better outcomes than the other's. Even a person lacking vast knowledge will be considered intelligent if their creativity and depth of thought are greater than others'.
A crucial factor that enables this is the attention span of the mind. A machine that is programmed to access vast stores of memories for even trivial decision making tasks (which helps it evaluate consequences of various actions: commonsense) and is capable of asking questions and can simulate situations in memory by loading and modifying stored memories in the simulation (imagination) will be a lot more "intelligent" than us.
There is a second paper ([cognitive memory constructs](https://www.techrxiv.org/articles/preprint/Context_and_event-based_cognitive_memory_constructs_for_embodied_intelligence_machines/12630248/1)) that describes a bit about the theory of implementation.
All this being said, this is just our perspective on intelligence. The fact that the universe exists in such a complex form, is probably evidence of much higher forms of intelligence (like how we are much more intelligent than the Age of Empires AI characters we created). Intelligence may exist in far more dimensions than we are currently capable of imagining.
Upvotes: 0 <issue_comment>username_8: Following the observation in the comment, the "artificial" part can in principle be assigned to artifacts (or at least to a combination of natural beings and artifacts), so let us focus on the "intelligence" knot.
Assuming the shortest definition cannot be less than three words long, intelligence looks like a measure of an operation on something.
Here are some tentative points around a sensible concise definition, depending on the concrete word we choose for each piece of the abstract "measure of operation on something".
Data organization simplicity.
Information processing efficiency.
Model adaptation adequacy.
Upvotes: 0 |
2018/09/06 | 711 | 2,892 | <issue_start>username_0: I developed a CNN for image analysis. I have around 100K labeled images. I'm getting an accuracy around 85% and a validation accuracy around 82%, so it looks like the model generalizes rather than overfits. So, I'm playing with different hyper-parameters: number of filters, number of layers, number of neurons in the dense layers, etc.
For every test, I'm using all the training data, and it is very slow and time consuming.
Is there a way to get an early idea of whether one model will perform better than another?<issue_comment>username_1: I would like you to try the following changes in the model.
* Introduce a batch normalization layer into the model.
* Try a batch size of 32-64.
* Use a different architecture, like VGG, ResNet, etc.
There is no foolproof answer to this question, but you can get the best results by trying some known strategies.
Upvotes: 0 <issue_comment>username_2: Your description sounds like something similar to the [Imagenet dataset](http://www.image-net.org/). According to [this website](https://paperswithcode.com/sota/image-classification-on-imagenet), the state-of-the-art top-1 accuracy is just 86%, not much higher than yours. There are plenty of methods to improve accuracy. I would suggest you read the papers or GitHub repositories listed in [SOTA](https://paperswithcode.com/sota/image-classification-on-imagenet) to find the ideas that best fit your situation.
Upvotes: 0 <issue_comment>username_3: I can't comment but here are a few suggestions:
* Play with the learning rate and a learning-rate finder
* Use a pretrained model
* Use architecture search, or something like EfficientNet-B6
* Use Swish over ReLU
* Try different optimizers
* Try Bayesian optimization
Upvotes: 0 <issue_comment>username_4: The simple answer to your question is "No" with a caveat.
The caveat is that there are signs that your network is never going to perform well. For example, the epoch accuracy fails to improve or even consistently declines over the first several epochs, or the validation accuracy is flat or declining. It could be that the validation loss starts high and just keeps increasing from the beginning. These are all bad signs.
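For what it's worth, these bad signs can be turned into a crude automatic stopping rule. The sketch below is illustrative only — the `patience` and `min_delta` values are arbitrary assumptions, not recommendations:

```python
def should_abort(val_losses, patience=5, min_delta=0.0):
    """Abort if validation loss hasn't improved for `patience` epochs."""
    if len(val_losses) <= patience:
        return False                         # too early to judge
    best_before = min(val_losses[:-patience])
    recent_best = min(val_losses[-patience:])
    return recent_best >= best_before - min_delta
```

A rule like this only catches clearly hopeless runs — and, as the CAPTCHA story below shows, even then it can kill a model that would eventually converge.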
Outside of this, however, it's very tough to know the model won't work well in the long run. For example, we have a model we built for solving a set of CAPTCHAs. The regression portions of that converged very quickly, but the portions that solved the rest of the CAPTCHA took something like 18 hours before they converged. Honestly, we only ran it that long because it was the end of the day and the regression piece looked so promising; there was nothing in the training behavior of the CAPTCHA solver that looked like it would work (even though our intuition was that it should.)
In the end, we have a 96%+ accuracy CAPTCHA solver that we likely would have killed if we had watched it train for more than 10 or 15 minutes.
Upvotes: 1 |
2018/09/06 | 1,088 | 2,952 | <issue_start>username_0: In the Berkeley RL class [CS294-112 Fa18 9/5/18](https://www.youtube.com/watch?v=XGmd3wcyDg8&feature=youtu.be&t=1071), they mention the following gradient would be 0 if the policy is deterministic.
$$
\nabla\_{\theta} J(\theta)=E\_{\tau \sim \pi\_{\theta}(\tau)}\left[\left(\sum\_{t=1}^{T} \nabla\_{\theta} \log \pi\_{\theta}\left(\mathbf{a}\_{t} \mid \mathbf{s}\_{t}\right)\right)\left(\sum\_{t=1}^{T} r\left(\mathbf{s}\_{t}, \mathbf{a}\_{t}\right)\right)\right]
$$
Why is that?<issue_comment>username_1: Well, I'd rather comment, but I don't yet have that privilege, so here are some comments.
First, having a deterministic policy inside the log would create trivial terms.
Secondly, in policy gradient methods it makes no sense, in my view, to have a deterministic policy during the optimization, because you want to explore the space of weights. In my experience, you only set the policy to deterministic (in a PG method) when you're done with the optimization and want to test your network.
Upvotes: 2 <issue_comment>username_2: Here is the gradient that they are discussing in the video:
$$\nabla\_{\theta} J(\theta) \approx \frac{1}{N} \sum\_{i=1}^N \left( \sum\_{t=1}^T \nabla\_{\theta} \log \pi\_{\theta} (\mathbf{a}\_{i, t} \vert \mathbf{s}\_{i, t}) \right) \left( \sum\_{t = 1}^T r(\mathbf{s}\_{i,t}, \mathbf{a}\_{i, t}) \right)$$
In this equation, $\pi\_{\theta} (\mathbf{a}\_{i, t} \vert \mathbf{s}\_{i, t})$ denotes the probability of our policy $\pi\_{\theta}$ selecting the actions $\mathbf{a}\_{i, t}$ that it actually ended up selecting in practice, given the states $\mathbf{s}\_{i, t}$ that it encountered during the episode that we're looking at.
In the case of a deterministic policy $\pi\_{\theta}$, we know for sure that the probability of it selecting the actions that it did select must be $1$ (and the probability of it selecting any other actions would be $0$, but such a term does not show up in the equation). So, we have $\pi\_{\theta} (\mathbf{a}\_{i, t} \vert \mathbf{s}\_{i, t}) = 1$ for every instance of that term in the above equation. Because $\log 1 = 0$, this leads to:
\begin{aligned}
\nabla\_{\theta} J(\theta) &\approx \frac{1}{N} \sum\_{i=1}^N \left( \sum\_{t=1}^T \nabla\_{\theta} \log \pi\_{\theta} (\mathbf{a}\_{i, t} \vert \mathbf{s}\_{i, t}) \right) \left( \sum\_{t = 1}^T r(\mathbf{s}\_{i,t}, \mathbf{a}\_{i, t}) \right) \\
%
&= \frac{1}{N} \sum\_{i=1}^N \left( \sum\_{t=1}^T \nabla\_{\theta} \log 1 \right) \left( \sum\_{t = 1}^T r(\mathbf{s}\_{i,t}, \mathbf{a}\_{i, t}) \right) \\
%
&= \frac{1}{N} \sum\_{i=1}^N \left( \sum\_{t=1}^T \nabla\_{\theta} 0 \right) \left( \sum\_{t = 1}^T r(\mathbf{s}\_{i,t}, \mathbf{a}\_{i, t}) \right) \\
%
&= \frac{1}{N} \sum\_{i=1}^N 0 \left( \sum\_{t = 1}^T r(\mathbf{s}\_{i,t}, \mathbf{a}\_{i, t}) \right) \\
%
&= 0 \\
\end{aligned}
(i.e. you end up with a sum of terms that are all multiplied by $0$).
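As a quick numerical sanity check (a sketch, not part of the original lecture): since $\log \pi\_{\theta} = \log 1 = 0$ identically in $\theta$ for a deterministic policy, the per-trajectory weight is zero before any gradient is even taken, and the whole estimate collapses:

```python
import math

# Probabilities a deterministic policy assigns to the actions it actually took:
taken_probs = [1.0, 1.0, 1.0]     # always 1, by definition of "deterministic"
rewards = [2.0, -1.0, 3.0]

# log 1 = 0 for every term, independently of theta, so the gradient of this
# weight is zero too; here we just show the weight itself vanishing.
log_prob_sum = sum(math.log(p) for p in taken_probs)
grad_estimate = log_prob_sum * sum(rewards)   # 0 * return = 0
print(grad_estimate)
```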
Upvotes: 5 [selected_answer] |
2018/09/06 | 1,201 | 4,280 | <issue_start>username_0: In a convolutional neural network, which layer consumes more training time: convolution layers or fully connected layers?
We can take the AlexNet architecture to understand this. I want to see the time breakup of the training process. I want a relative time comparison, so we can take any constant GPU configuration.<issue_comment>username_1: **NOTE:** I did these calculations speculatively, so some errors might have crept in. Please point out any such errors so I can correct them.
In general, in any CNN, the maximum training time goes into the back-propagation of errors in the fully connected layers (this depends on the image size). They also occupy the maximum memory. Here is a slide from Stanford about the VGG Net parameters:
[](https://i.stack.imgur.com/Dmvrt.png)
[](https://i.stack.imgur.com/9Rl1G.png)
Clearly you can see the fully connected layers contribute to about 90% of the parameters. So the maximum memory is occupied by them.
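That ~90% figure is easy to verify with a back-of-the-envelope count over the published VGG-16 layer sizes (a sketch counting kernel weights plus biases):

```python
def conv_params(c_in, c_out, k=3):
    return k * k * c_in * c_out + c_out        # kernel weights + biases

def fc_params(n_in, n_out):
    return n_in * n_out + n_out

conv = (conv_params(3, 64) + conv_params(64, 64)             # block 1
        + conv_params(64, 128) + conv_params(128, 128)       # block 2
        + conv_params(128, 256) + 2 * conv_params(256, 256)  # block 3
        + conv_params(256, 512) + 2 * conv_params(512, 512)  # block 4
        + 3 * conv_params(512, 512))                         # block 5
fc = (fc_params(7 * 7 * 512, 4096) + fc_params(4096, 4096)
      + fc_params(4096, 1000))

print(conv + fc, fc / (conv + fc))  # ~138M parameters, ~89% of them in FC layers
```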
As far as training time goes, it somewhat depends on the size (pixels\*pixels) of the image being used. In FC layers, the number of derivatives you have to calculate is simply equal to the number of parameters. As far as the convolutional layers go, let's see an example: take the case of the 2nd layer.
It has 64 filters of $(3\*3\*3)$ dimensions to be updated. The error is propagated from the 3rd layer. Each channel in the 3rd layer propagates its error to its corresponding $(3\*3\*3)$ filter. Thus $224\*224$ pixels will contribute to about $224\*224\*(3\*3\*3)$ weight updates. And since there are $64$ such $224\*224$ channels, we get the total number of calculations to be performed as $64\*224\*224\*(3\*3\*3) \approx 87\*10^6 $ calculations.
Now let us take the last layer of $56\*56\*256$. It will pass its gradients to the previous layer. Each $56\*56$ pixel will update a $(3\*3\*256)$ filter. And since there are $256$ such $56\*56$ channels, the total calculations required would be $256 \* 56 \* 56 \* (3\*3\*256) \approx 1850 \*10^6$ calculations.
So the number of calculations in a convolutional layer really depends on the number of filters and the size of the picture. In general, I have used the following formula to calculate the number of updates required for the filters in a layer; I have also assumed $stride = 1$, since it is the worst case:
$channels\_{output} \* (pixelOutput\_{height} \* pixelOutput\_{width}) \* (filter\_{height} \* filter\_{width} \* channels\_{input})$
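In code, that formula and the two worked examples above look like this (a sketch; stride 1, as assumed):

```python
def filter_updates(c_out, h_out, w_out, k_h, k_w, c_in):
    """Weight-update count for one conv layer, per the formula above."""
    return c_out * h_out * w_out * (k_h * k_w * c_in)

print(filter_updates(64, 224, 224, 3, 3, 3))      # ~87 * 10**6
print(filter_updates(256, 56, 56, 3, 3, 256))     # ~1850 * 10**6
```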
Thanks to fast GPUs, we can easily handle these huge numbers of calculations. But in FC layers, the entire weight matrix needs to be loaded at once, which causes memory problems; this is generally not the case for convolutional layers, so training convolutional layers remains comparatively easy. Also, all of this has to be loaded into the GPU's own memory, not the RAM of the CPU.
Also here is the parameter chart of AlexNet:
[](https://i.stack.imgur.com/zSUzd.jpg)
And here is a performance comparison of various CNN architectures:
[](https://i.stack.imgur.com/0ruCU.png)
I suggest you check out the [CS231n Lecture 9](https://www.youtube.com/watch?v=DAOcjicFr1Y&index=9&list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv) by Stanford University for better understanding of the nooks and crannies of CNN architectures.
Upvotes: 5 [selected_answer]<issue_comment>username_2: A CNN contains convolution operations, whereas a DBN is trained with contrastive divergence; a CNN is more complex in terms of big-O notation.
For reference:
1. See [Convolutional Neural Networks at Constrained Time Cost](https://arxiv.org/pdf/1412.1710.pdf) for more details about the time complexity of CNNs
2. See [What is the time complexity of the forward pass algorithm of a neural network?](https://ai.stackexchange.com/q/13612/2444) and [What is the time complexity for training a neural network using back-propagation?](https://ai.stackexchange.com/q/5728/2444) for more details about the time complexity of the forward and backward passes of an MLP
Upvotes: 3 |
2018/09/07 | 733 | 3,136 | <issue_start>username_0: I have an application where I want to find the locations of objects on a simple, relatively constant background (fixed camera angle, etc). For investigative purposes, I've created a test dataset that displays many characteristics of the actual problem.
Here's a sample from my test dataset.
[](https://i.stack.imgur.com/AKV7f.png)
Our problem description is to **find the bounding box of the single circle in the image**. If there is more than one circle or no circles, we don't care about the bounding box (but we at least need to know that there is no valid single bounding box).
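(For anyone wanting to reproduce the setup, here is a deterministic, dependency-free sketch of one such test image — the actual dataset was generated differently, and the sizes and positions here are arbitrary assumptions:)

```python
def make_circle_image(size=64, cx=30, cy=20, r=8):
    """Return (image, (min_x, min_y, max_x, max_y)) for one filled circle.

    image[y][x] is 1 on the circle and 0 on the plain background.
    """
    img = [[1 if (x - cx) ** 2 + (y - cy) ** 2 <= r * r else 0
            for x in range(size)]
           for y in range(size)]
    return img, (cx - r, cy - r, cx + r, cy + r)

img, box = make_circle_image()
```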
For my attempt to solve this, I built a CNN that would regress `(min_x, min_y, max_x, max_y)`, as well as one more value that could indicate how many circles were in the image.
I played with different architecture variations, but, in general, the architecture was a very standard CNN (3-4 ReLU convolutional layers with max-pooling in between, followed by a dense layer and an output layer with linear activation for the bounding box outputs, trained to minimise the mean squared error between the outputs and the ground-truth bounding boxes).
Regardless of the architecture, hyperparameters, optimizers, etc, the result was always the same - **the CNN could not even get close to building a model** that was able to regress an accurate bounding box, even with over 50000 training examples to work with.
What gives? Do I need to look at using another type of network as CNNs are more suited to classification rather than localisation tasks?
Obviously, there are computer vision techniques that could solve this easily, but due to the fact that the actual application is more involved, I want to know strictly about NN/AI approaches to this problem.<issue_comment>username_1: There are some special architectures of CNNs which are designed exactly for the task you mention. The [Detectron](https://github.com/facebookresearch/Detectron) library includes a collection of these architectures, and [this](https://arxiv.org/abs/1703.06870) paper describes in detail the Mask R-CNN network, which is designed for image segmentation tasks.
Upvotes: 1 <issue_comment>username_2: In the R-CNN family of approaches, the bounding-box regression is performed by a specific module providing **region proposals**: initially [Selective Search](https://www.learnopencv.com/selective-search-for-object-detection-cpp-python/) was used, then the [Region Proposal Network](https://www.quora.com/How-does-the-region-proposal-network-RPN-in-Faster-R-CNN-work) was introduced, which, being a neural network, has the advantage of being trainable.
This is not to be confused with the bounding-box regressor, which comes later in the architecture and is focused on refining the proposal bboxes for the “non-background” proposal objects.
Please consider that to perform object detection properly, at least using a region-proposal-based architecture, you need to perform the basic “background / non-background” classification (which is where CNN features help) so as to filter out irrelevant proposals.
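Proposal refinement is usually scored with intersection-over-union (IoU) against the ground truth; a minimal self-contained sketch, with boxes as `(min_x, min_y, max_x, max_y)`:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```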
Upvotes: 0 |
2018/09/07 | 1,304 | 4,972 | <issue_start>username_0: The term Singularity is often used in mainstream media for describing visionary technology. It was introduced by <NAME> in a popular book [The Singularity Is Near: When Humans Transcend Biology](https://en.wikipedia.org/wiki/The_Singularity_Is_Near) (2005).
In his book, Kurzweil gives an outlook on a potential future of mankind which includes nanotechnology, computers, genetic modification and artificial intelligence. He argues that Moore's law will allow computers to grow exponentially, resulting in a superintelligence.
Is the *technological singularity* something that is taken seriously by A.I. developers or is this theory just a load of popular hype?<issue_comment>username_1: In order to be on the same page, you should give references about "technological singularity", as it comprises multiple fields (mathematics, statistics, philosophy of science, epistemology, sociology, politics, economics, to mention a few).
Generally, when you consider concepts related to **adj + ai** (where adj = {weak, strong, full, narrow, ...}), the breath of speculation is quite large and *in fieri*, so as a developer (where for developer I assume you work on coding-related problems, not the project manager at google x and the like) I would not be worried, unless you are enjoying a cup of tea with your colleagues during a break.
Upvotes: 1 <issue_comment>username_2: I can say that among AI researchers I interact with, it **far** more common to view it as wild speculation than as settled fact.
This is borne out by [surveys of AI researchers](https://nickbostrom.com/papers/survey.pdf), with 80% thinking strong forms of AI will emerge in "more than 50 years" or "never", and just a few percent thinking that such forms of AI are "near".
Software Developers are not the same as AI researchers, and I have found the Singularity myth to be much more widespread among developers. It has a nice ring to it: Computers keep getting faster, at some point they'll be faster than brains, at that point we just simulate brains. Soon after, we simulate something better than brains.
I suspect that the reasons AI researchers are less optimistic are rooted in the fact that we still don't have a good understanding of human intelligence, or even enough of an understanding of the brain to simulate it. For example, in the last two weeks we have discovered [previously unknown types of brain cells](http://www.sciencemag.org/news/2018/08/mysterious-new-brain-cell-found-people). This gives the (correct) impression that even if we had a fast enough computer, we are not at all close to being able to accurately simulate a human brain. We don't really know what a human brain is.
Even if we did know that, simulations are necessarily lossy. We may not have good simulation techniques. Even if we did have good techniques, we may simulate the brain and discover our simulation does not behave as expected for reasons that we don't understand. This is very probable when simulating new systems. In some sense, proponents of the Singularity resemble people predicting that [weather control](https://en.wikipedia.org/wiki/History_of_numerical_weather_prediction) was near in the 1940s. After all, we could simulate simple weather patterns already then, and generate forecasts that sort of worked. How much more complex could it really be to generate perfect forecasts?
Upvotes: 3 <issue_comment>username_3: There is at least one very important and *serious* AI scientist that apparently believes in the creation of true artificial general intelligence and possibly superintelligence: <NAME>, who is the co-author of the [LSTM](https://www.bioinf.jku.at/publications/older/2604.pdf), among many other important contributions. In fact, he recently founded [NNAISENSE](https://nnaisense.com) for this ultimate purpose, that is, to build a general-purpose artificial general intelligence. In his talk [When creative machines overtake man](https://www.youtube.com/watch?v=KQ35zNlyG-o), at TEDxLausanne, Schmidhuber talks about the *singularity* (also known as *omega*). See also his web article [Is History Converging? Again?](http://people.idsia.ch/~juergen/history.html) (2012) or the paper [2006: Celebrating 75 years of AI - History and Outlook: the Next 25 Years](https://arxiv.org/abs/0708.4311) (2007).
Upvotes: 2 <issue_comment>username_4: Yes, it must be taken seriously. There are two main reasons:
1. There is no sharp argument or no-go theorem against the existence of a singularity. It's unclear how fast the singularity could develop, but many authors given a non zero probability to this event (see this [reference](https://www.springer.com/gp/book/9783642325595), it contains different points of view on the singularity by leading experts).
2. The consequences of a singularity would be dramatic. So even if the perceived probability in point 1 is very small, it's worth studying possible mitigations routes.
Upvotes: 1 |
2018/09/07 | 262 | 1,204 | <issue_start>username_0: Object tracking is finding the trajectory of each object in consecutive frames. Human tracking is a subset of object tracking which considers only humans.
I've seen many papers that divide tracking methods into two parts:
1. Online tracking: Tracker just uses current and previous frames.
2. Offline tracking: Tracker uses all frames.
All of them mention that online tracking is suitable for autonomous driving and robotics, but I don't understand this part. What are the applications of object/human tracking in autonomous driving?
Do you know some related papers?<issue_comment>username_1: There are two important parts:
* When you track, you have a trajectory you can expect, so it becomes easier to "look into the future" if you understand the past
* When you lose an object you previously had a trajectory for, you might be able to come up with an explanation that can influence your behavior
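The simplest version of "looking into the future" is a constant-velocity extrapolation from the last two observed positions — real trackers typically use a Kalman filter, but this sketch captures the idea:

```python
def predict_next(track):
    """track: list of (x, y) positions, most recent last."""
    (x1, y1), (x2, y2) = track[-2], track[-1]
    return (2 * x2 - x1, 2 * y2 - y1)   # last position + last velocity step

# e.g. an object seen at (0, 0) then (1, 2) is expected near (2, 4) next frame
```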
Upvotes: 2 [selected_answer]<issue_comment>username_2: Offline tracking requires knowledge of the future of an object.
For real-time applications such as the ones you cited, offline tracking might be prohibitive, given that the method needs to wait for future frames.
Upvotes: 0 |
2018/09/09 | 1,116 | 4,357 | <issue_start>username_0: I want to generate images of children's drawings consistent with the developmental state of children of a given age. The training data set will include drawings made by real children in a school setting. The generated images will be used for developmental analysis.
I have heard that Generative Adversarial Networks are a good tool for this kind of problem. If this is true, how would I go about applying a GAN to this challenge?<issue_comment>username_1: I think your task is as follows.
Let's assume 5-year-old children.
You have a number of pictures drawn by them.
(Let these pictures be your training set.)
And you want to synthesize pictures similar to the training set, because you need more pictures for your study.
Am I right?

From the pictures, you want to extract some meaningful information about the real children who drew them, right?
Then I think a GAN is not suitable for your study.
Of course, a GAN can make pictures very similar to your training set.
However, that does not mean the synthesized images contain the things you want!
A GAN just synthesizes "fake pictures" that cannot be distinguished from your training set.
The synthesized pictures may not contain anything meaningful, because they were not drawn by a real child.

But it is worth trying.
A GAN may capture some features of "children-likeness". (But I think it is too hard.)
You can find lots of GAN variants for your research, especially DCGAN.
Upvotes: -1 <issue_comment>username_2: A generative adversarial network is probably not the best approach for generating the images desired. We can assume from the comments that the data is not collected. That's a good thing, because a set of rasterized images, labeled with student age or grade is an inferior input form.
It appears that access to a student population is planned or already negotiated, which is also good.
Although the drawing, as it is being drawn, is seen through each student's eyes, the primary features correlated with drawing skill development are motor control, shape formation, and color choice. If the sheet of paper is placed over a drawing tablet, the tablet's incoming USB stream events are captured to a file, and the color selection is somehow recorded or automatically determined by having students hold the pencil or crayon up to the computer's camera before using it, a much better *in natura* input stream can be developed.
Pre-processing can lead to an expression of each drawing experience as a sequence of events arranged in temporal order with the following dimensions for each event.
* Relative time from the instruction to draw in seconds
* Color
* Nearest x grid
* Nearest y grid
* Pressure
Determining color from camera input may be developed using LSTM approaches.
The dimensions of the label for each of these sequences would be those demographics and rankings that would most closely correlated with developmental stages.
* Student age
* Student gender
* Curriculum grade (-1, 0, 1, 2, ... 12, where -1 is preschool and 0 is kindergarten)
* Identifier of the drawing instructions given to the class
* Grade ranking of the student in the class
The micro-analysis attached to each ELEMENT in the sequence includes these additional dimensions.
* Drawing rate of the utensil given by $r = \frac {\sqrt{(x - x\_p)^2 + (y - y\_p)^2}} {t - t\_p}$ where the subscript p indicates the values are drawn from the previous event in the sequence.
* Drawing direction given by $\theta = \arctan (x - x\_p, \; y - y\_p)$
* Curvature $\kappa$ calculated using cubic splines or some other data fitting approach
* FFT spectrum $\vec{a}$ and Lyapunov exponent $\lambda$ applied to auto-correlation results
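The first two of those per-event quantities reduce to a couple of lines (a sketch; `atan2` is used with the conventional `(Δy, Δx)` argument order, and $\kappa$, the spectrum, and $\lambda$ would require a fitting library):

```python
import math

def rate_and_direction(prev, cur):
    """prev, cur: events as (t, x, y); returns (drawing rate, direction)."""
    t_p, x_p, y_p = prev
    t, x, y = cur
    r = math.hypot(x - x_p, y - y_p) / (t - t_p)   # distance / elapsed time
    theta = math.atan2(y - y_p, x - x_p)           # direction of motion
    return r, theta
```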
This is a modification of the system Google uses to synthesize speech, based on the WaveNet design. In the diagram, the residual function is defined as follows.
$z = \tanh \, (W\_{f,k} x + V\_{f,k} y) \, \odot \, \sigma \, (W\_{g,k} x + V\_{g,k} y)$
The development required is that the $\vec{a}$ must now be accompanied by the scalars $r$, $\theta$, $\kappa$, and $\lambda$, but the resulting drawings are likely to have many of the hand-eye developmental features of the examples.
[](https://i.stack.imgur.com/eA3i4.png)
Upvotes: 2 [selected_answer] |
2018/09/09 | 1,098 | 3,197 | <issue_start>username_0: In the [homework for the Berkeley RL class](http://rail.eecs.berkeley.edu/deeprlcourse/static/homeworks/hw2.pdf), problem 1, it asks you to show that the policy gradient is still unbiased if the baseline subtracted is a function of the state at time step $t$.
$$ \triangledown \_\theta \sum\_{t=1}^T \mathbb{E}\_{(s\_t,a\_t) \sim p(s\_t,a\_t)} [b(s\_t)] = 0 $$
I am struggling through what the first step of such a proof might be.
Can someone point me in the right direction? My initial thought was to somehow use the [law of total expectation](https://en.wikipedia.org/wiki/Law_of_total_expectation) to make the expectation of $b(s\_t)$ conditional on $T$, but I am not sure.<issue_comment>username_1: It appears that the homework was due two days prior to this answer's writing, but in case it is still relevant in some way, the relevant class notes (which would have been useful if provided in the question along with the homework) [are here](http://rail.eecs.berkeley.edu/deeprlcourse-fa17/f17docs/lecture_3_rl_intro.pdf).
The first instance of expectation placed on the student is, "Please show equation 12 by using the law of iterated expectations, breaking $\mathbb{E}\_{\tau \sim p \theta(\tau)}$ by decoupling the state-action marginal from the rest of the trajectory." Equation 12 is this.
$\sum\_{t = 1}^{T} E\_{\tau \sim p \theta(\tau)} [\nabla\_\theta \log \pi\_\theta(a\_t|s\_t)(b(s\_t))] = 0$
The class notes identify $\pi\_\theta(a\_t|s\_t)$ as the state-action marginal. What is sought is not a proof but a sequence of algebraic steps to perform the decoupling and show the degree to which independence of the state-action marginal can be achieved.
This exercise is a preparation for the next step in the homework and draws only on the review of CS189, Berkeley's Introduction to Machine Learning course, which does not contain the law of total expectation in its syllabus or class notes.
All the relevant information is in the above link for class notes and requires only intermediate algebra.
Upvotes: 1 <issue_comment>username_2: Using the [law of iterated expectations](https://en.wikipedia.org/wiki/Law_of_total_expectation) one has:
$\triangledown \_\theta \sum\_{t=1}^T \mathbb{E}\_{(s\_t,a\_t) \sim p(s\_t,a\_t)} [b(s\_t)] = \nabla\_\theta \sum\_{t=1}^T \mathbb{E}\_{s\_t \sim p(s\_t)} \left[ \mathbb{E}\_{a\_t \sim \pi\_\theta(a\_t | s\_t)}
\left[ b(s\_t) \right]\right] =$
written with integrals and moving the gradient inside (linearity) you get
$= \sum\_{t=1}^T \int\_{s\_t} p(s\_t) \left(\int\_{a\_t} \nabla\_\theta b(s\_t) \pi\_\theta(a\_t | s\_t) da\_t \right)ds\_t =$
you can now move $\nabla\_\theta$ (due to linearity) and $b(s\_t)$ (does not depend on $a\_t$) from the inner integral to the outer one:
$=
\sum\_{t=1}^T \int\_{s\_t} p(s\_t) b(s\_t) \nabla\_\theta \left(\int\_{a\_t} \pi\_\theta(a\_t | s\_t) da\_t \right)ds\_t= $
$\pi\_\theta(a\_t | s\_t)$ is a (conditional) probability density function, so integrating over all $a\_t$ for a given fixed state $s\_t$ equals $1$:
$=
\sum\_{t=1}^T \int\_{s\_t} p(s\_t) b(s\_t) \nabla\_\theta 1 ds\_t = $
Now $\nabla\_\theta 1 = 0$, which concludes the proof.
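The cancellation can also be checked numerically for a discrete policy (a small sketch, not part of the homework; a 3-action softmax with logits $\theta$, whose log-gradient is $\partial\_{\theta\_j} \log \pi(a) = \mathbb{1}[a = j] - \pi(j)$):

```python
import math

theta = [0.3, -1.2, 0.7]                 # arbitrary logits
z = sum(math.exp(t) for t in theta)
pi = [math.exp(t) / z for t in theta]    # softmax policy pi(a | s)
b = 4.2                                  # arbitrary baseline value b(s)

# E_a[ d/d theta_j log pi(a) * b ] for each component j of theta:
expectations = [
    sum(pi[a] * ((1.0 if a == j else 0.0) - pi[j]) * b for a in range(3))
    for j in range(3)
]
print(expectations)   # numerically zero for every j
```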
Upvotes: 3 |
2018/09/09 | 1,563 | 6,598 | <issue_start>username_0: Is there research that employs realistic models of neurons? Usually, the model of a neuron for a neural network is quite simple as opposed to the realistic neuron, which involves hundreds of proteins and millions of molecules (or even greater numbers). Is there research that draws implications from this reality and tries to design realistic models of neurons?
Particularly, the [Rosehip neuron](https://en.wikipedia.org/wiki/Rosehip_neuron) was recently discovered. Such neurons have so far been found only in the human brain (and in no other species). Are there some implications for neural network design and operation that can be drawn by realistically modelling this Rosehip neuron?<issue_comment>username_1: It looks like you really have two questions here. I'll try to answer the first one, and you should think about making a separate question for the second.
There is research into using simulated models of biologically realistic neurons. While there are large projects like the [Human Brain Project](https://www.humanbrainproject.eu/en/) aimed at simulating human brains, there is also a lot of lower-level AI research. [SPAWN](http://compneuro.uwaterloo.ca/publications/choo2018.html) is an interesting system that got a lot of press a few years ago, and has continued to be developed since then. It uses realistic neurons to simulate several brain-regions at once, creating a surprisingly general AI system that could perform many types of motor and vision tasks using the same basic design.
Upvotes: 2 <issue_comment>username_2: It is true that current machine learning is based on treating neurons as simple components in a larger whole, the mesh of neurons. The focus is more on the architecture than on understanding or imitating its basic building block, the neuron, more closely.
Anirban Bandhopadhyay is a biologist and neurologist who has studied how harmony changes the memory element and decision-making power in microtubules inside neurons.
Here is a snippet of him explaining, and trying to see, what exactly computation is and how the brain does computation.
[How does the Brain Act?](https://www.closertotruth.com/interviews/48850)
Upvotes: 2 <issue_comment>username_3: **State of Rosehip Research**
The Rosehip neuron is an important discovery, with vast implications to AI and its relationship to the dominant intelligence on earth for at least the last 50,000 years. The paper that has spawned other articles is [*Transcriptomic and morphophysiological evidence for a specialized human cortical GABAergic cell type*, Buldog et. al., September 2018, Nature Neuroscience](https://www.nature.com/articles/s41593-018-0205-2.epdf).
Research into the relationship between this neuron type and its DNA expression is just beginning. No data is yet available regarding the impact of the Rosehip distinctions on neural activity during learning or during the leveraging of what has been learned. Surely, research along those lines is indicated, but the discovery was only just published.
**Benefit of the Interdisciplinary Approach to AI**
That those who reference papers like this can see value in the unification or at least alignment of knowledge across disciplines is most likely beneficial to AI progress and progress in the other fields of cognitive science, bioinformatics, business automation, manufacturing and consumer robotics, psychology, and even law, ethics and philosophy.
That such interest in aligning understanding along interdisciplinary lines is present in AI Stack Exchange is certainly beneficial to the community growth in both professional and social dimensions.
**Disparity Between What Works**
In the human brain, neurons work. Whether Rosehip neurons are a prerequisite to language, the building of and leveraging of complex models, or transcendent emotions such as love in homo sapiens is unknown and will remain so in the near future. However, we have a fifty millennia long proof of concept.
We also know that artificial networks work. We use them in business, finance, industry, consumer products, and a variety of web services today. When a pop-up asks whether the answer given was helpful, our answer becomes a label in a set of real data from which samples are extracted for machine learning.
Nonetheless, the cells that are working are offspring of the 1957 perceptron with the addition of the application of gradient descent using an efficient corrective signal distribution strategy we call back propagation. The comprehension of neuron function in 1957 was grossly short of what we now know to be functional features of mammalian brain neurons. The Rosehip discovery may widen that gap.
**Spiking Networks**
Spiking network research models neurons more realistically, and neuromorphic research and development has been placing improved models into VLSI chips. The joint venture between IBM and MIT is another example of such work.
**Correlating Neural Function to Brain Function**
The relationship between intelligence and the number of proteins or molecules may not be the most telling. More telling are likely the relationships between the following metrics and features and the intelligence of the system.
* Genetic features that have been identified (22 of them) that directly affect intelligence testing results — For instance the correlation between polymorphisms of the oxytocin receptor genes OXTR rs53576, rs2254298, and rs2228485 and intelligence is known — See the [question containing references to discovery of 22 genes that affect intelligence test results significantly](https://ai.stackexchange.com/questions/7337/is-the-singularity-concept-mathematically-flawed)
* Neurochemical expression resulting from environmental factors varying the levels of oxytocin, dopamine, serotonin, neuropeptide Y, and cannabinoids, which are involved in global and regional functional behavior in the human brain
* Signal topology (distinct from sizes and counts, and distinct from the topology created by packaging neural nets in the cranial region) — Signal topology is now being identified. Scanning technology has developed to the point where signal paths can be identified by tracking pulses in temporal space and determining causality.
* Synaptic plasticity, a type of neural plasticity
* Total number of neurons applied to a particular brain function
* Impact of axon and cell body thermodynamics on signal transmission, a key element in modelling a brain neuron
None of these are yet modelled in such a way that simulation accuracy has been confirmed, but the need to research along these lines is clearly indicated as this question implies.
Upvotes: 3 [selected_answer] |
2018/09/10 | 1,588 | 6,762 | <issue_start>username_0: Would AlphaGo Zero become theoretically perfect with enough training time? If not, what would be the limiting factor?
(By perfect, I mean it always wins the game if possible, even against another perfect opponent.)<issue_comment>username_1: Yes [AlphaGo Zero](https://en.wikipedia.org/wiki/AlphaGo_Zero) could become undeniably perfect.
It has won 100:0 against [AlphaGo Lee](https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol) (which won 4:1 against 18-time world champion (human) [Lee Sedol](https://en.wikipedia.org/wiki/Lee_Sedol)) and 89:11 against [AlphaGo Master](https://en.wikipedia.org/wiki/Master_(software)) (which won 60 straight online games against human professional Go players from 29 December 2016 to 4 January 2017).
From the [official AlphaGo website](https://deepmind.com/research/alphago/):
>
> "AlphaGo's 4-1 victory in Seoul, South Korea, in March 2016 was watched by over 200 million people worldwide. It was a landmark achievement that experts agreed was a decade ahead of its time, and earned AlphaGo a 9 dan professional ranking (the highest certification) - the first time a computer Go player had ever received the accolade.".
>
>
>
From AlphaGo's webpage: "[AlphaGo's next move](https://deepmind.com/blog/alphagos-next-move/)":
>
> "We plan to publish one final academic paper later this year that will detail the extensive set of improvements we made to the algorithms’ efficiency and potential to be generalised across a broader set of problems. Just like our [first AlphaGo paper](https://www.nature.com/articles/nature24270), we hope that other developers will pick up the baton, and use these new advances to build their own set of strong Go programs.
>
>
> ...
>
>
> Since our match with <NAME>, AlphaGo has become its own teacher, playing millions of high level training games against itself to continually improve.".
>
>
>
Upvotes: 2 <issue_comment>username_2: We cannot tell with certainty whether AlphaGo Zero would become perfect with enough **training time**. This is because none of the parts (Neural Network) that would benefit from **infinite training time** (= a nice approximation of "enough" training time) are guaranteed to ever converge to a perfect solution.
The main limiting factor is that we do not know whether the Neural Network used is big enough. Sure, it's pretty big, it has a lot of capacity... but is that enough? Imagine if they had used a tiny Neural Network (for example just a single hidden layer with a very low number of nodes, like 2 hidden nodes). Such a network certainly wouldn't have enough capacity to ever learn a truly, perfectly optimal policy. With a bigger network it becomes more plausible that it may have sufficient capacity, but we still cannot tell for sure.
---
Note that AlphaGo Zero does not just involve a trained part; it also has a Monte-Carlo Tree Search component. After running through the Neural Network to generate an initial policy (which in practice turns out to already often be extremely good, but we cannot tell for certain if it's perfect), it does run some MCTS simulations during its "thinking time" to refine that policy.
MCTS doesn't benefit\* from increased **training time**, but it does benefit from increased **thinking time** (i.e. processing time per turn during the actual game being played, rather than offline training time / self-play time before the evaluation game). In the most common implementation of MCTS (UCT, using the UCB1 equation in the Selection Phase), we can prove that it does in fact learn to play truly perfectly if it is given an infinite amount of **thinking time**. Now, AlphaGo Zero does use a slightly different implementation of the Selection Phase (which involves the policy generated by the trained Neural Network as prior probabilities), so without a formal analysis I can't tell for sure whether that proof still holds up here. Intuitively it looks like it still should hold up fine though.
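For reference, the UCB1 rule used in the Selection Phase of standard UCT can be sketched as follows. This is a simplified illustration with made-up visit statistics, not AlphaGo Zero's actual selection rule (which additionally weights the exploration term by the Neural Network's prior probability for each move):

```python
import math

def ucb1_select(children, c=math.sqrt(2)):
    """Pick the child maximizing average value plus an exploration bonus.

    children: list of dicts with 'visits' (int) and 'total_value' (float).
    Unvisited children are tried first (their bonus is effectively infinite).
    """
    parent_visits = sum(ch["visits"] for ch in children)
    best, best_score = None, -float("inf")
    for ch in children:
        if ch["visits"] == 0:
            return ch  # always expand unvisited moves first
        score = (ch["total_value"] / ch["visits"]
                 + c * math.sqrt(math.log(parent_visits) / ch["visits"]))
        if score > best_score:
            best, best_score = ch, score
    return best

children = [{"visits": 10, "total_value": 6.0},
            {"visits": 2, "total_value": 1.5},
            {"visits": 0, "total_value": 0.0}]
print(ucb1_select(children)["visits"])  # → 0: the unvisited child is chosen
```

The constant `c` trades off exploitation against exploration; $\sqrt{2}$ is the classic choice from the UCB1 analysis.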
---
**\*Note**: I wrote above that "MCTS doesn't *benefit* from increased training time". In practice, of course it does tend to benefit from increased training time because that tends to result in a better Network, which tends to result in better decisions during the Selection Phase and better evaluations of later game states in the tree. What I mean is that MCTS is not **theoretically guaranteed** to always keep benefitting from increases in training time as we tend to infinity, precisely because that's also where we don't have theoretical guarantees that the Neural Network itself will forever keep improving.
Upvotes: 3 [selected_answer]<issue_comment>username_3: Assuming you mean a mathematically perfect player, similar to what we can achieve trivially in Tic Tac Toe, then the answer is "maybe". The underlying reinforcement learning algorithms that it uses do have some convergence guarantees, but there are some caveats:
* Theories of convergence that apply to value and policy functions learned by RL assume unrealistic timescales and decays of learning parameters. If you have actual infinite time and resources then it is possible to explore all board states and learn their values accurately. But then, if you have those resources at your disposal, a brute-force search would work too.
* Using neural network approximations to true value functions can put bounds on how well value functions are learned, as they rely on generalisation, and are characterised by an error metric. The values that they calculate are in fact guaranteed to *not* be mathematically perfect, and in part that is by design (because you want learning from *similar* states to apply to new unseen states, and as part of that need to accept the compromise that most state values will be slightly incorrect). This is especially true of the fast policy network used to drive Monte Carlo Tree Search (MCTS).
* Running longer MCTS during play improves performance at the cost of spending more time per move decision. Given infinite resources, MCTS can play perfectly, even from very crude heuristics.
The difference between AlphaGo Zero and a more traditional game search algorithm is to do with optimal use of available computing resources, as set by available hardware, training time and decision time when playing. It is orders of magnitude more effective to use the RL self-play approach combined with the focused MCTS as in AlphaGo Zero, than any basic search. At least for similar puzzles and games, it is more efficient than any other game playing technique that has been explored. We are likely still more orders of magnitude of effort away from a perfect Go player, and there is no reason yet to suspect that this will ever become practical, and Go become "solved".
Upvotes: 2 |
2018/09/11 | 3,910 | 16,550 | <issue_start>username_0: I don’t believe in free will, but most people do. Although I’m not sure how an act of free will could even be described (let alone replicated), is [libertarian freewill](https://en.wikipedia.org/wiki/Libertarianism_(metaphysics)) something that is considered for AI? Or is AI understood to be deterministic?<issue_comment>username_1: AI is algorithmic not free willed in a sense that humans have free will. So in that sense it is deterministic. Give it the same data each time you would expect the same result. Change something (ie feed it new data to learn from) and then it will give a different result. Hence the determinism.
EDIT: Some algorithms do use some randomising - e.g. some versions of hill climbing - but if we want to get technical there's no such thing as true random numbers anyway. (Unless you're using one of those supercomputers that use the radiation from the sun as a seeding factor.)
Upvotes: 0 <issue_comment>username_2: AI is "deterministic" in the sense that it follows its algorithm exactly. "Deterministic" means different things to a data scientist/programmer, but let's not go into details here.
There is no "free will" in AI; it's all about mathematics and algorithms. Don't watch too many science-fiction movies!
Upvotes: 2 <issue_comment>username_3: I'm going to assume that by free will, you mean something like the philosophical concept of [libertarian free will](https://en.wikipedia.org/wiki/Libertarianism_(metaphysics)), which is defended by philosophers like [<NAME>](https://en.wikipedia.org/wiki/Robert_Kane_(philosopher)). In Libertarian Free Will, individuals have some capability to make choices about their actions. The classic way to argue this is by assuming some kind of spirit-stuff (e.g. a soul) that exists outside the material world, and that this spirit-stuff constitutes the consciousness of a person. Kane tries some mental gymnastics to avoid this, but then concedes something like it in a footnote. I'm not aware of any serious work that doesn't make some kind of non-physical assumption to justify this view. If someone can point at one, I'll update the answer.
By determinism, I'm going to assume you mean the usual notion of [philosophical determinism](https://en.wikipedia.org/wiki/Determinism): since people's decisions depend on what happened in the past, and where they are in the present, they don't really have a choice in any meaningful sense. Philosophers like Dennett adopt a slightly softer view (*compatibilism*, essentially: you don't get to make big choices, but you do get to make small ones). Appeals to Quantum Mechanics are common to justify that view. In this context, free action means something more like "did something we couldn't predict exactly". An example might be: you are pre-destined to put a can of Campbell's brand tomato soup in your shopping cart, but "make a choice" about exactly which of the dozens of cans you will put in. Since small choices can have large impacts (maybe that can will give you food poisoning, and the others wouldn't), this can make all sorts of things impossible to predict exactly.
I think most AI researchers don't worry too much about these issues, but Turing actually addresses them in his paper right at the start of the field, [Computing Machinery and Intelligence](https://www.csee.umbc.edu/courses/471/papers/turing.pdf). The deterministic/compatibilist viewpoint is introduced as Lady Lovelace's objection: Computers only know how to do what we tell them to, so they can't be called intelligent.
Turing's counterargument uses two prongs. First, Turing notes that computers can probably be made to learn (he was right!). If they can learn, they can do things that we don't expect, just like children do. Second, Turing notes that computers already do all sorts of things we don't expect: they have bugs. Anytime a computer exhibits a bug, it did something that was unexpected. Since we cannot generally rule out bugs in programs, computers will always do surprising things. Therefore, computers satisfy the deterministic notion of free will.
Turing also addresses the libertarian notion of free will, which is part of what he calls the "Theological Objection". The objection is that intelligence requires some kind of divine spark (like free will). Turing argues that we can't detect sparks like this right now (he actually thought we would be able to one day, and spent a lot of time looking at supernatural phenomena too). However, there's no reason to suppose that computers with the right programs won't be endowed with them. A divine creator *could* decide that anytime you build something brain-like, it gets a spark. If we build a program that's brain-like, maybe it gets a spark too. In the absence of some way to detect souls, it seems like we ought to just agree to treat things that *seem* intelligent as though they had these souls, since otherwise we don't have a really clear way to decide who is and isn't intelligent. The only remaining way is to say "only things made of human meat have souls and are intelligent". While a lot of people do actually say things like this (e.g. animals have no souls), this is a pretty arbitrary view, and I think there's no hope arguing against it. Turing suggests the "Anthropic Principle": we shouldn't assume that we're special, because the earth isn't in a special place in the galaxy, or in the universe, and we have pretty compelling evidence that we're an evolved version of other animals around us, but some groups (e.g. biblical literalists) find this unconvincing.
Upvotes: 5 [selected_answer]<issue_comment>username_4: >
> Although I’m not sure how an act of freewill could even be described (let alone replicated),
>
>
>
Well, one popular definition goes like this:
>
> *[Free will is] the freedom to act according to one's motives without arbitrary hindrance from other individuals or institutions*
>
>
>
[Source - Wikipedia entry on Compatibilism](https://en.wikipedia.org/wiki/Compatibilism)
Note that this definition is perfectly compatible with determinism (hence the name "Compatibilism"). Actually, proponents usually argue that free will *requires* determinism, because if your choices were ultimately random, like rolling a die, how could they be *your* free choices?
Now, if you assume that an AI can be said to have "motives", then according to this view, it would have free will - if no one hinders it.
The contrary view, Incompatibilism, has been described in another answer by username_3. I agree with him that most AI researchers probably don't worry about philosophical questions like that. All the proponents of indeterminism (sometimes called metaphysical free will or libertarianism) I'm aware of assume that there exist "causes" that are neither deterministic / physical, nor purely random. (e.g. [Agent causation](https://en.wikipedia.org/wiki/Agent_causation)). Since AIs are more or less by definition based on physical processes, deterministic algorithms and possibly some random number generator, I don't see how they could possess this kind of freedom.
Upvotes: 2 <issue_comment>username_5: Free will in the spiritual world would be for you to have the right to follow, or not follow, the main path (God's way), taking into consideration that whatever path you follow will have its consequences (good or bad).
However, an artificial intelligence created to solve problems does not need these distractions. Determinism considers the feeling of freedom a subjective illusion. Trying to apply free will to an AI is to create an inner space within the "mind" .. but creating that is already impossible because we cannot understand exactly what is going on in our own minds. The big question is, how to program something that we cannot explain accurately? How to induce an AI to learn something that does not make sense?
I believe that it is not just code and a database that will lead the AI to have faith in its existence or to believe in free will. But to construct an AI that reads and understands every human mind, every thought, every illusion, every paranoia, every confusion, every sadness, every lie ... may be a good source of studies, but who would qualify for this kind of experiment? What would be the conclusion of an AI upon understanding the whole confused and troubled human mind?
It's a very complex question ... let's continue studying and raising this kind of subject. I spent my last years trying to recreate human thoughts and actions in an AI .. it's a study for a lifetime .. my fear is the disappointment at the end of everything :(
Upvotes: 0 <issue_comment>username_6: I love this question. Firstly, as pointed out in other answers, Narrow AI today is mostly algorithms following their procedures given some inputs. No need for philosophy here, as they are following reproducible steps.
However, if you're referring to general AI, or AI akin to human-level intelligence or better, then maybe the question holds some weight. But again, as pointed out, it would come back to whether you believe you or I indeed have free will.
For me, I believe free will can be modelled as a sort of entropy. If you look on the macro level, things are blurry: agents are making decisions and moving around in an unpredictable way. On the micro level, however, given all the data in one state, one could predict the next state, shattering the idea of free will. I guess it's up to you to decide whether this fits with your definition of free will or not.
Upvotes: 0 <issue_comment>username_7: No, because there is no utility in building a "libertarian free AI" as far as I know of.
AI is another tool. What is the purpose in building an AI with such a distinction?
The reason for that question is this. Let's say you want an AI to accomplish some kind of task you want machine assistance in. That's what tools do- assisting with tasks. *What exactly would this task be that a "non-libertarian free AI" couldn't achieve but a "libertarian free AI" could?*
Upvotes: 0 <issue_comment>username_8: "There's no such thing as true random numbers anyway." That's all you really need to deter any idea of AI on any computer. Any software or set of functions on a computer is, right now at least, entirely code set by humans.
Also, the execution of actions based on variables not listed is not artificial intelligence; it's just simpler code.
Any REAL Artificial Intelligence will not be made on a board of 1's and 0's; that defeats the purpose. All of the actions are predetermined (even if they are extended intricately to cover many possibilities), so they have no chance to create something non-deterministic.
real independent intellect is most likely (in my eyes) found, not made.
Upvotes: 1 <issue_comment>username_9: *I was very bored.*
The short answer is ***yes*** and the long answer is basically ***yes***.
I'm going to gloss over what you could possibly mean by *AI* and instead focus on what *AI programs* pretty much are:
A mish-mash of algorithms and methods borrowed from mathematics, perhaps more specifically statistics, inserted into a good old-fashioned program.
Let's start from the very beginning. The **very beginning**.
For the sake of this argument, **everything** is a language and there exists an **omnipotent alphabet** which contains all possible symbols you can come up with to convey a message. Such an alphabet would contain everything you've ever known, all their permutations, and then some. It would contain your clothes as well. Forget countable infinity or all those concepts, those come way later.
It is important to realise that right now I'm communicating with you not just through the power of a string of characters. You're also experiencing other stimuli that you, or whatever part of "you", is/are capable of **interpreting** into another language. To be crude, everything you decide to classify as a 'standalone thing' is a compiler that runs in perpetual execution, translating all the 'things' it 'receives' into other 'things' it 'spits out' for other 'things' to 'receive'.
Think about a modern day computer. You write a program in your fancy little language. You hit compile. A compiler goes through your code, and spits out more code. Except this time this code is written in another language, sometimes "closer to the metal", sometimes "just about the same abstract level" and sometimes "even more abstract", and this process repeats itself until somewhere along the line that code you started with, has been interpreted to mere electrical signals, which then themselves are being 'compiled' by an entity we will call '**the universe**' and that's up to empirical observation to determine what and what's not going on. (Except "the universe" was always responsible. But we will partition things for the sake of ..partitioning things. You get what I mean, I know you do.)
Now let's jump back to languages. Mathematics is a language in the sense that:
* It is built of symbols contained in an alphabet we will call **X**
* The several fields specify their own grammatical structure through which we can decide whether or not a statement is *well-formed*. This encompasses everything from where you can put the **+** operator in high school algebra, to how you can write a **proof** in formal logic. There needs to be no justification for how you build a grammar. You can always just make a new grammar and use it instead. Of course, it might not be capable of forming statements which are compatible with other grammars.
What's interesting about **X** is that its definition is not fixed. Throughout time, we've introduced new symbols into the mathematical alphabet to be able to express more concepts while keeping things separate. (Or rather some people have had the sense to keep it this way.) For example, whenever you see Leibniz's integration symbol, you know you're probably dealing with some kind of integration and not something novel that you've never heard of before.
Now here's where I actually answer your question:
* I assume that by "program" you are referring to the mathematical construct as defined in theoretical computer science: A **string** of **characters** from an **alphabet**.
* This **string** is then fed to a **compiler** (lexer|parser|**semantic analyser**) which spits out another **string** (mainly the job of the semantic analyser). This string usually is built from characters of a different alphabet. That is to say, the **compiler is a function which maps a well formed string of language A to a string of language B**
* The end goal of compiling a program is to execute it, which basically means a succession of **compilers** will take the output of the previous compiler and spit back yet another string, until the string is essentially **electrons moving about the circuitry of the computer in your bedroom and producing fancy lights on your monitor**
So whenever you write an "AI program", you're just writing a "program" that contains some "AI algorithms" which are really just applications of things we've known in mathematics for 100s of years, which again, are really just a string of characters that are about to be translated by a compiler.
In other words, nothing you can ever write is *not* deterministic, provided you look at the bigger picture.
A common argument I see is that since AI programs usually "adapt" and "self-optimise" when solving the problem, they're not quite deterministic, in the sense that feeding the program the same input twice will (hopefully) yield better results the second time.
Except what really happened is that you had an input string that you partitioned into inputs A and B, and fed them to the algorithm in succession. Had you fed AB initially, you would've obtained the same results.
Upvotes: 0 <issue_comment>username_10: You guys are missing the point completely.
Everything I read assumes that being is based on science. You are trying to fit the reality of being into a brain, that is, into a machine. You know that you exist. A billion sensors of every imaginable type cannot equate to reality. Even with a billion more neurons than what's in the brain.
It's like the mathematics you use. You believe you can be accurate when the truth is, you cannot be. The position between 2 numbers can never be accurate, they are infinite.
It's like time. You believe it to be organic when the truth is, there is only the now.
Try as you may, you will never fit being into an algorithm. The real illusion is that you believe you can.
Upvotes: 0 |
2018/09/12 | 1,037 | 4,132 | <issue_start>username_0: This is a Q-learning snake using a neural network as a Q-function approximator, and I'm losing my mind here: the current model is worse than the initial one.
The current model uses a 32x32x32 MLPRegressor from scikit-learn, with ReLU as the activation function and the adam solver.
The reward function is as follows:
* death reward = -100.0
* alive reward = -10.0
* apple reward = 100.0
The features extracted from each state are the following:
1. what is in front of the snake's head (apple, empty, snake)
2. what is to the left of the snake's head
3. what is to the right of the snake's head
4. Euclidean distance between the head and the apple
5. the direction from head to the apple measured in radians
6. length of the snake
One episode consists of the snake playing until it dies. In training I also use a probability epsilon, which represents the probability that the snake will take a random action; otherwise the snake takes the action for which the neural network gives the biggest score. This epsilon probability is gradually decremented after each iteration.
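In pseudocode, the action selection I described looks roughly like this (the starting epsilon, decay rate, and Q-values below are illustrative placeholders, not the exact values from my code):

```python
import random

def select_action(q_values, epsilon):
    """Epsilon-greedy: random action with probability epsilon,
    otherwise the action with the highest predicted Q-value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

epsilon, decay, min_epsilon = 1.0, 0.995, 0.05
for episode in range(3):
    q_values = [0.1, 0.7, 0.2]  # would come from the MLPRegressor in practice
    action = select_action(q_values, epsilon)
    epsilon = max(min_epsilon, epsilon * decay)  # decrement after each iteration

print(round(epsilon, 4))  # → 0.9851
```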
The episode is learned by the regressor in reverse order, one state-action pair at a time.
However, the neural network fails to approximate the Q-function: no matter how many iterations, the snake takes the same action in every state.
Things I tried:
* changing the structure of the neural network
* changing the reward function
* changing the features extracted, I even tried passing the whole map to the network
Code (python): <https://pastebin.com/57qLbjQZ><issue_comment>username_1: There are two problems here.
1. The code you posted doesn't incrementally train your multilayer perceptron. Instead, it effectively re-randomizes the weights and then re-fits the model each time you call .fit() at lines 35 & 54. Using scikit-learn's partial\_fit() method might solve this, or you can package up the data into a larger batch, and train on that offline instead after several episodes.
2. Your reward function makes it painful to be alive, and doesn't give enough benefits through the Apples to make up for this. There are 100 squares that could contain the apple. On average, the apple will spawn about 5 squares from the snake in each direction. Since the snake can't move diagonally, that's 10 moves (5 left/right, 5 up/down). That means that if the snake plays *perfectly*, then on average, it might be able to get zero reward total. In practice, the snake will not play perfectly. This means living gives negative expected reward.
In contrast, if the snake can kill itself, it will stop getting negative rewards. The reward function you've used is maximized by growing just big enough to run into your own tail as fast as possible, which I think the snake should manage after eating 3 apples. There is some incentive to hunt for food well, but not much compared with hitting your own tail as soon as possible.
If you want the snake to learn to hunt for the food, reduce the penalty for being alive to -1, or even -0.1. The snake will be much more responsive to signals from the food.
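As a concrete illustration, a reshaped reward function along those lines might look like this (the exact numbers are hypothetical and worth tuning):

```python
def reward(event):
    """Hypothetical shaping: tiny living cost, big apple bonus, death penalty,
    so that hunting food dominates suicide as a strategy."""
    return {"ate_apple": 10.0, "died": -100.0}.get(event, -0.1)
```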
Upvotes: 3 [selected_answer]<issue_comment>username_2: Assume you are the snake.
In front of you is empty. Left of you is empty. Right of you is empty. The distance to the apple is 4. The apple is straight in front of you. Your length is 20.
Can you make a good decision with this input? In which direction would you go to achieve maximum score?
From the given input, you could go straight forward to the apple. But that might be a failure and lead to death.
IMHO, the input state is simply not enough to make a good decision, because
a) the snake doesn't even know in which direction it's currently moving.
b) the snake has no idea where its body is.
The situation could look like this:
[](https://i.stack.imgur.com/AZFmT.png)
The only way for the snake to move out of this trap is as indicated by the arrow, so that the tail frees the way out just in time. Your neural network does not have the necessary input to make that decision.
Upvotes: 2 |
2018/09/13 | 872 | 3,573 | <issue_start>username_0: The Alpha Zero (as well as AlphaGo Zero) papers say they trained the value head of the network by "minimizing the error between the predicted winner and the game winner" throughout its many self-play games. As far as I could tell, further information was not given.
To my understanding, this is basically a supervised learning problem, where, from the self-play, we have games associated with their winners, and the network is being trained to map game states to the likelihood of winning. My understanding leads me to the following question:
What part of the game is the network trained to predict a winner on?
Obviously, after only five moves, the winner is not yet clear, and trying to predict a winner after five moves based on the game's eventual winner would learn a meaningless function. As a game progresses, it goes from tied in the initial position to won at the end.
How is the network trained to understand that, if all it is told is who eventually won?
> What part of the game is the network trained to predict a winner on
>
>
>
The positional evaluation. How to give a static score to a chess position.
Upvotes: 0 <issue_comment>username_2: >
> To my understanding, this is basically a supervised learning problem, where from the self play we have games associated with their winners, and the network is being trained to map game states to likelihood of winning.
>
>
>
Yes, although the data for this supervised learning problem was provided by self-play. As AlphaZero learned, the board evaluations of the same positions would need to change, so this is a non-stationary problem, requiring that the ML forget older training examples over time.
>
> What part of the game is the network trained to predict a winner on?
>
>
>
Potentially all of it, including the starting empty board. I am not sure if the empty board was evaluated in this way, but it is feasible, and can even be done accurately in practice for simpler games (Tic Tac Toe and Connect 4, for example), given known player policies.
>
> Obviously after only five moves, the winner is not yet clear, and trying to predict a winner after five moves based on the game's eventual winner would learn a meaningless function.
>
>
>
Not at all. This is purely a matter of complexity and difficulty. In practice, at such an early stage the value network will output something non-committal, such as a $p=0.51$ win chance for player 1. And it will have learned to do this because, in its experience during self-play, similar positions at the start of the game led to *almost* equal numbers of wins for player 1 and player 2.
The function is not meaningless either, it can be used to assess results of look-ahead searches without needing to play to the end of the game. It *completely replaces* position-evaluation heuristics as used in more traditional game tree searches. In practice, very early position data in something as complex as chess or go is not going to be as useful as later position evaluations, due to ambivalent predictions. However, for consistency it can still be learned and used in the game algorithms.
>
> How is the network trained to understand that, if all it is told is who eventually won?
>
>
>
If a supervised learning technique is given the same input data $X$ that on different examples predicts the labels $A, B, B, B, A, A, B, B$, then it should learn $p(B|X) = 0.625$. That would minimise a cross-entropy loss function, and is what is going on here.
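This can be checked numerically: if the same constant prediction $p$ is applied to all eight examples, the mean cross-entropy loss is minimised exactly at the empirical frequency of $B$:

```python
import numpy as np

labels = np.array([0, 1, 1, 1, 0, 0, 1, 1])      # A=0, B=1 for the same input X
freq_B = labels.mean()                           # empirical frequency 5/8

ps = np.linspace(0.001, 0.999, 999)              # candidate constant predictions
loss = -(freq_B * np.log(ps) + (1 - freq_B) * np.log(1 - ps))
print(round(float(ps[np.argmin(loss)]), 3))      # 0.625: the argmin is p(B|X)
```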
Upvotes: 3 [selected_answer] |
2018/09/13 | 1,280 | 4,426 | <issue_start>username_0: [In a CNN, does each new filter have different weights for each input channel, or are the same weights of each filter used across input channels?](https://ai.stackexchange.com/questions/5769/in-a-cnn-does-each-new-filter-have-different-weights-for-each-input-channel-or)
This question helps me a lot.
Say I have an RGB input image (3 channels).
Then each filter has n×n weights per channel.
That means the filter actually has 3×n×n weights in total.
For channel R, it has own n×n filter.
For channel G, it has own n×n filter.
For channel B, it has own n×n filter.
After inner product, add them all to make one feature map.
Am I right?
And then, my question starts here.
For some purpose, I will only use greyscale images as input.
So the input images always have the same values for each RGB channel.
Then, can I reduce the number of weights in the filters?
Because in this case, using three different n×n filters and adding them is same with using one n×n filter that is the summation of three filters.
Does this logic hold on a trained network?
I have a trained network for RGB image input, but it is too heavy to run in real time.
But I only use greyscale images as input, so it seems I can make the network lighter (theoretically, almost 1/3 of the original).
I'm quite new in this field, so detailed explanations will be really appreciated.
Thank you.<issue_comment>username_1: >
> After inner product, add them all to make one feature map. Am I right?
>
>
>
**yes**, you are right.
>
> Then, can I reduce the number of weights in the filters? Because in this case, using three different n×n filters and adding them is same with using one n×n filter that is the summation of three filters.
>
>
>
If you have transformed the image into greyscale then you no longer need 3 filters. You should retrain your model on greyscale images. In a greyscale image the value of each pixel is a single sample representing only an amount of light (the light intensity).
The network will run faster if that is the only architectural change you make, but keep in mind that by converting the image to greyscale you will lose information and probably some of the predictive power of your network.
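The summation argument from the question can also be verified numerically. A small sketch with a naive "valid" convolution and random weights (the shapes are hypothetical, not the poster's actual network):

```python
import numpy as np

rng = np.random.default_rng(0)
grey = rng.random((8, 8))
img = np.stack([grey, grey, grey], axis=-1)   # greyscale copied into 3 channels

w = rng.random((3, 3, 3))                     # one filter: n x n weights per channel

def conv_valid(image, kernel):
    """Naive 'valid' cross-correlation, summing over all channels."""
    kh, kw = kernel.shape[0], kernel.shape[1]
    H, W = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

full = conv_valid(img, w)                                       # 3-channel filter
merged = conv_valid(grey[..., None], w.sum(-1, keepdims=True))  # summed filter
print(np.allclose(full, merged))  # True: identical feature maps
```

So with equal channels the three per-channel filters really do collapse into their sum, which is exactly why retraining on single-channel input loses nothing here.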
Upvotes: 2 <issue_comment>username_2: >
> After inner product, add them all to make one feature map. Am I right?
>
>
>
Yes, you are right. Now I will try to derive a transformation that preserves accuracy; I cannot say much about the efficiency of the method.
**Note:** I have not worked on this type of problem, but knowing the maths behind CNNs, I will try to solve it theoretically.
First, you have to know which RGB-to-greyscale conversion formula was used. [Here](https://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/) are some common schemes.
So let us say each pixel had values $r, g, b$ and you converted it to $x_1 r + x_2 g + x_3 b$. For simplicity, consider the corner pixel under a $valid$ convolution scheme: its $RGB$ channels get multiplied by the weights $w_r, w_g, w_b$ during convolution and summed up.
But now you only have one pixel value, namely $x_1 r + x_2 g + x_3 b$. Multiplying this by $\frac{w_r}{x_1} + \frac{w_g}{x_2} + \frac{w_b}{x_3}$ results in: $(w_r r + w_g g + w_b b) + \left(\frac{w_r(x_2 g + x_3 b)}{x_1} + \frac{w_g(x_1 r + x_3 b)}{x_2} + \frac{w_b(x_2 g + x_1 r)}{x_3}\right)$.
Now we have to try to remove the second term of this expression. The parameters $w_r, w_g, w_b, x_1, x_2, x_3$ are already predetermined. Taking just the $r$ part of the second term gives $r\left(\frac{w_g x_1}{x_2} + \frac{w_b x_1}{x_3}\right)$. Its coefficient is predetermined, and for $r$ itself I think modern image-analysis techniques could somehow find an approximate value. Do this for $g$ and $b$ as well, subtract the result from the expression above, and you are finally left with $(w_r r + w_g g + w_b b)$, which is the term obtained by convolving the filters with the $RGB$ image.
I have done all this hypothetically (such image-analysis techniques might not exist), but it is still worth a try. Better methods to reduce the second term probably exist in the mathematical literature. I will leave it up to mathematicians to point in the right direction.
Upvotes: 0 |
2018/09/14 | 912 | 4,061 | <issue_start>username_0: Is it possible for a genetic algorithm + Neural Network that is used to learn to play one game such as a platform game able to be applied to another different game of the same genre.
So for example, could an AI that learns to play Mario also learn to play another similar platform game.
Also, could anyone point me in the direction of material I should familiarise myself with in order to complete my project?<issue_comment>username_1: It definitely depends on the design of your algorithm. To my knowledge, almost all ML algorithms target specific problems, which makes general use difficult: you have to retrain for any new problem. It is also difficult to understand the internal working mechanism of these algorithms, due to the general statistical methods they build on (and yes, we call this AI). I would recommend a regional/encapsulated method applied on top of general methods, so that algorithms are not specific and micro-structured only for single purposes. If games are similar, we can certainly apply one appropriately designed method to both. Hinton has started his Capsule network, which I think is a good direction. Just beware: training shouldn't be tied to specific objects. Instead, it should be related to micro-structure or features (though it's hard to distinguish current AI features from human-insight features). For example, humans can easily spot differences in things they have never seen before, and they do not have to retrain their nerve units except for better understanding or accuracy. Genetic algorithms should have the same ability to survive in different but similar environments. Unfortunately, we are at the beginning of the AI era, but luckily that also means we have a lot to do. In fact, almost all current techniques imitate nature. If nature can do it, surely we can one day.
Upvotes: 0 <issue_comment>username_2: Genetic algorithms can learn multiple games, yes. In fact, "genetic algorithms" is a slightly misleading term for this family: there is really one generic genetic algorithm, with many variations depending on the problem at hand. I recommend this PDF for an introduction to how they work and how to build them:
<http://www.boente.eti.br/fuzzy/ebook-fuzzy-mitchell.pdf>
Upvotes: 0 <issue_comment>username_3: Genetic algorithms and Neural Networks both are "general" methods, in the sense that they are not "domain-specific", they do not rely specifically on any domain knowledge of the game of Mario. So yes, if they can be used to successfully learn how to play Mario, it is likely that they can also be applied with similar success to other Platformers (or even completely different games). Of course, some games may be more complex than others. Learning Tic Tac Toe will likely be easier than Mario, and learning Mario will likely be easier than StarCraft. But in principle the techniques should be similarly applicable.
If you only want to learn in one environment (e.g., Mario), and then immediately play a different game without separately training again, that's much more complicated. For research in that area you'll want to look for Transfer Learning and/or Multi-Task learning. There has definitely been research there, with the latest developments that I'm aware of having been published [yesterday](https://deepmind.com/blog/preserving-outputs-precisely-while-adaptively-rescaling-targets/) (this is Deep Reinforcement Learning though, no GAs I think).
The most "famous" recent work on training Neural Networks to play games using Genetic Algorithms that I'm aware of is [this work by Uber](https://eng.uber.com/deep-neuroevolution/) (blog post links to multiple papers). I'm not 100% sure if that really is the state of the art anymore, if it's the best work, etc... I didn't follow all the work on GAs in sufficient detail to tell for sure. It'll be relevant at least though.
I know there's also been quite a lot of work on AI in general for Mario / other platformers (for instance in venues such as the IEEE Conference on Computational Intelligence and Games, and the TCIAIG journal).
Upvotes: 4 [selected_answer] |
2018/09/14 | 1,868 | 8,254 | <issue_start>username_0: I started teaching myself about reinforcement learning a week ago and I have this confusion about the learning experience. Let's say we have the game Go. And we have an agent that we want to be able to play the game and win against anyone. But let's say this agent learn from playing against one opponent, my questions then are:
1. Wouldn't the agent (after learning) be able to play only with that opponent and win? It estimated the value function of this specific behaviour only.
2. Would it be able to play as good with weaker players?
3. How do you develop an agent that can estimate a value function that generalizes against any behaviour and win? Self-play? If yes, how does that work?<issue_comment>username_1: Reinforcement Learning (RL) at its core does not have anything directly to say about adversarial environments, such as board games. That means in a purely RL set up, it is not really possible to talk about the "strength" of a player.
Instead, RL is about solving consistent environments, and that consistency requirement extends to any opponents or adversarial components. Note that consistency is not the same as determinism - RL theory copes well with opponents that effectively make random decisions, provided the *distribution* of those decisions does not change based on something the RL agent cannot know.
Provided an opponent plays *consistently*, RL can learn to optimise against that opponent. This does not directly relate to the "strength" of an opponent, although usually strong opponents present a more challenging environment to learn overall.
>
> 1. Wouldn't the agent (after learning) be able to play only with that opponent and win? since it estimated the value function of this specific behavior only.
>
>
>
If the RL agent has enough practice and time to optimise against the opponent, then yes, the value function (and any policy based on it) would be specific to that opponent. Assuming the opponent did not play flawlessly, the RL agent would learn to play such that it wins as often as possible against that opponent.
When playing against other opponents, the success of the RL agent will depend on how similar the new opponent was to the original that it trained against.
>
> 2. would it be able to play as good with weaker players?
>
>
>
As stated above, there is not really a concept of "stronger" or "weaker" in RL. It depends on the game, and how general the knowledge is that strong players require in order to win.
In theory you could construct a game, or deliberately play strongly, but with certain flaws, so that RL would play very much to counter one play style, and would fail against another player that did not have the same flaws.
It is difficult to measure this effect, because human players learn from their mistakes too, and are unlikely to repeat the exact same game time after time, but with small variations at key stages. Humans do not make consistent enough opponents, and individual humans do not play enough games at each stage of their ability to study fine-grained statistics of their effective policies.
In practice it seems likely that the effect of weakening against new players would be there in RL, due to sampling error if nothing else. However, it seems that the "strength" of players as we measure them in any game of skill such as chess or go, does correlate with a generalised ability. In part this is backed up by consistent results with human players and Elo ratings.
Any game where you can form "rings" of winning players:
* Player B consistently beats Player A
* Player C consistently beats Player B
* Player A consistently beats Player C
Could cause issues of the type you are concerned about when applying RL to optimise an artificial agent.
>
> 3. How do you develop an agent that can estimate a value function that generalizes against any behavior and win?
>
>
>
If it is possible to play perfectly, then a value function estimated for perfect play would work. No player could beat it. Think of Tic Tac Toe - it is relatively easy to construct perfect-play value functions for it.
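To make the Tic Tac Toe point concrete, a perfect-play value function for it can be computed exhaustively with a tiny negamax search - a minimal sketch, not AlphaGo's method:

```python
from functools import lru_cache

# Board: tuple of 9 cells, 1 = X, -1 = O, 0 = empty; X moves first.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if abs(board[a] + board[b] + board[c]) == 3:
            return board[a]
    return 0

@lru_cache(maxsize=None)
def value(board, player):
    """Exact value under perfect play, from the mover's perspective (+1/0/-1)."""
    w = winner(board)
    if w != 0:
        return w * player          # the previous move already ended the game
    moves = [i for i, c in enumerate(board) if c == 0]
    if not moves:
        return 0                   # draw
    return max(-value(board[:m] + (player,) + board[m + 1:], -player)
               for m in moves)

print(value((0,) * 9, 1))  # 0: Tic Tac Toe is a draw under perfect play
```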
This is not achievable in practice in more complex games. To address this, and improve the quality of its decisions, what AlphaGo does is common to many game-playing systems, using RL or not. It performs a look-ahead search of positions. The value function is used to guide this. The end result of the search is essentially a more accurate value function, but only for the current set of choices - the search *focuses* lots of computation on a tiny subset of all possible game states.
One important detail here is that this focus applies at run time whilst playing against any new opponent. This does not 100% address your concerns about differing opponents (it could still miss a future move by a different enough opponent when searching). But it does help mitigate smaller statistical differences between different opponents.
This search tree is such a powerful technique that for many successful game playing algorithms, it is possible to start with an inaccurate value function, or expert heuristics instead, which are fixed and general against all players equally. IBM's Deep Blue is an example of using heuristics.
>
> self-play? if yes, how does that work?
>
>
>
Self-play appears to help. Especially in games which have theoretical optimal play, value functions will progress towards assessing this optimal policy, forming better estimates of state value with enough training. This can give a better starting point than expert heuristics when searching.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Most of your questions are already addressed very well by Neil's answer, so I won't address those again. I'd just like to clarify additionally on the following point:
>
> But let's say this agent learn from playing against one opponent
>
>
>
Precisely that assumption of learning from against a single opponent causes many issues. In fact, even if that "single opponent" is changing/improving, you can still have an unstable learning process. For instance, two agents that are simultaneously learning (or a single agent and a copy of itself) can keep infinitely going around in circles as Neil also already hinted at in a game like Rock-Paper-Scissors.
In the original [AlphaGo publication (2016)](https://www.nature.com/articles/nature16961), learning from self-play was done by randomly selecting one of a set of (relatively recent) copies of the learning agent every game, rather than always playing against an exact copy of the single most recent version of the agent. Adding more diversity to the "training partners" in that way can help to learn a more robust policy that can handle different opponents. Of course, we shouldn't go overboard with this kind of randomization; you still want to make sure to train against strong training partners (or opponents that have a roughly similar level of strength as the learning agent), an agent that already is quite strong won't be able to learn a lot from playing against an extremely weak agent anymore.
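A minimal sketch of such a pool of past opponents (the class name, pool size, and parameter format are hypothetical, not AlphaGo's actual implementation):

```python
import random

class SelfPlayPool:
    """Keep recent parameter snapshots of the learner; sample one as opponent."""
    def __init__(self, max_size=20):
        self.snapshots = []
        self.max_size = max_size

    def add(self, params):
        self.snapshots.append(params)
        if len(self.snapshots) > self.max_size:
            self.snapshots.pop(0)          # drop the oldest copy

    def sample_opponent(self):
        return random.choice(self.snapshots)

pool = SelfPlayPool(max_size=3)
for version in range(5):
    pool.add({"version": version})         # stand-in for network weights
opponent = pool.sample_opponent()          # next games are played against this copy
```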
In [2017, a new paper appeared on AlphaGo Zero](https://www.nature.com/articles/nature24270). In this paper, they no longer used such randomization as described above, but still had a stable learning process from self-play. As far as I'm aware, the most likely hypothesis to explain this stability is the fact that Monte-Carlo Tree Search was used **during the self-play training process** to improve the update targets. **This is different from the use of lookahead search that Neil already described during gameplay, after training.** By also incorporating lookahead search during the training process, and using it to improve update targets, the hypothesis is that you can reduce the risk of "overfitting" against the training partner. The lookahead search actually "thinks" a bit about other moves that the training partner could have selected other than what it actually did, and incorporates that in the update targets. A similar combination of MCTS and self-play reinforcement learning was also [independently published (by different authors) to result in stable learning in different games](https://arxiv.org/abs/1705.08439).
Upvotes: 2 |
2018/09/14 | 669 | 2,720 | <issue_start>username_0: Are there neural networks that can decide to add/delete neurons (or change the neuron models/activation functions or change the assigned meaning for neurons), links or even complete layers during execution time?
I guess that such neural networks overcome the usual separation of learning/inference phases and they continuously live their lives in which learning and self-improving occurs alongside performing inference and actual decision making for which these neural networks were built. Effectively, it could be a neural network that acts as a [Gödel machine](http://people.idsia.ch/~juergen/goedelmachine.html).
I have found the term *dynamic neural network* but it is connected to adding some delay functions and nothing more.
Of course, such self-improving networks completely redefine the learning strategy; possibly, single-shot gradient methods are not applicable to them.
My question is connected to the neural-symbolic integration, e.g. [*Neural-Symbolic Cognitive Reasoning* by <NAME>, 2009](https://www.amazon.co.uk/Neural-Symbolic-Cognitive-Reasoning-Technologies/dp/3642092292/). Usually this approach assigns individual neurons to the variables (or groups of neurons to the formula/rule) in the set of formulas in some knowledge base. Of course, if knowledge base expands (e.g. from sensor readings or from inner nonmonotonic inference) then new variables should be added and hence the neural network should be expanded (or contracted) as well.<issue_comment>username_1: I mostly studied HMMs and such models are called Infinite HMMs in that specific domain.
I believe that what you are looking for is called Infinite Neural Networks. Not having access to scientific publications, I cannot really refer any work here. However, I found this GitHub repository: <https://github.com/kutoga/going_deeper> that provides some implementation and a document with multiple references.
Upvotes: 2 <issue_comment>username_2: This article on [Dynamically Expandable Neural Networks](https://hackernoon.com/dynamically-expandable-neural-networks-ce75ff2b69cf) (DEN) (by <NAME>) is based on this paper [Lifelong Learning with Dynamically Expandable Networks](https://arxiv.org/pdf/1708.01547v2.pdf) (by <NAME>, <NAME>, <NAME>, <NAME>)
This presents 3 solutions to increase the capacity of the network if needed retaining whatever useful information from the old model and train the new model:
* Selective retraining
* Dynamic Network Expansion
* Network Split/Duplication
To me, it seems that such neural network is dynamic and improving. As such, they answer partially your question. If they don't sorry about that.
Upvotes: 2 |
2018/09/14 | 615 | 2,474 | <issue_start>username_0: Problem:
We have a fairly big database that is built up by our own users.
The way this data is entered is by asking the users 30ish questions that all have around 12 answers (x, a, A, B, C, ..., H). The letters stand for values that we can later interpret.
I have already tried and implemented some very basic predictors, like random forest, a small NN, a simple decision tree etc.
But all these models use the full dataset to do one final prediction. (fairly well already).
**What I want to create is a system that will eliminate 7 to 10 of the possible answers a user can give at any question.** This will reduce the amount of data we need to collect, store, or use to re-train future models.
I have already found several methods to decide what are the most discriminative variables in the full dataset. Except, when a user starts filling the questions I start to get lost on what to do. None of the models I have calculate the next question given some previous information.
It feels like I should use a Naive Bayes Classifier, but I'm not sure. Other approaches include recalculating the Gini or entropy value at every step. But as far as my knowledge goes, we can't take into account the answers given before the recalculating. |
2018/09/15 | 1,658 | 7,146 | <issue_start>username_0: At the time when the basic building blocks of machine learning (the perceptron layer and the convolution kernel) were invented, the model of the neuron in the brain taught at the university level was simplistic.
>
> Back when neurons were still just simple computers that electrically beeped untold bits to each other over cold axon wires, spikes were not seen as the hierarchical synthesis of every activity in the cell down to the molecular scale that we might say they are today. In other words, spikes were just a **summary report of inputs to be integrated** with the current state, and passed on. In comprehending the intimate relationships of mitochondria to spikes (and other molecular dignitaries like calcium) we might now more broadly interpret them as **synced messages that a neuron sends to itself**, and by implication its spatially extended inhabitants. Synapses weigh this information heavily but ultimately, but like the electoral college, fold in a heavy dose of local administration to their output. The sizes and positions within the cell to which mitochondria are deployed can not be idealized or anthropomorphized to be those metrics that the neuron decides are best for itself, but rather **what is thermodynamically demanded**.1
>
>
>
Notice the reference to summing in the first bolded phrase above. This is the astronomically oversimplified model of biology upon which contemporary machine learning was built. Of course ML has made progress and produced results. This question does not dismiss or criticize that, but rather aims to widen the ideology of what ML can become via a wider field of thought.
Notice the second two bolded phrases, both of which denote statefulness in the neurons. We see this in ML first as the parameters that attenuate the signals between arrays of artificial neurons in perceptrons and then, with back-propagation into deeper networks. We see this again as the trend in ML pushes toward embedded statefulness by integrating with object oriented models, the success of LSTM designs, the interrelationships of GAN designs, and the newer experimental attention based network strategies.
But does the achievement of higher level thought in machines, such as is needed to ...
* Fly a passenger jet safely under varying conditions,
* Drive a car in the city,
* Understand complex verbal instructions,
* Study and learn a topic,
* Provide thoughtful (not mechanical) responses, or
* Write a program to a given specification
... require from us a much more radical transition in thinking about what an artificial neuron should do?
Scientific research into brain structure, its complex chemistry, and the organelles inside brain neurons have revealed significant complexity. Performing a vector-matrix multiplication to apply learning parameters to the attenuation of signals between layers of activations is not nearly a simulation of a neuron. Artificial neurons are not very neuron-like, and the distinction is extreme.
A little study on the current state of the science of brain neuron structure and function reveals the likelihood that it would require a massive cluster of GPUs training for a month just to learn what a single neuron does.
>
> Are artificial networks based on the perceptron design inherently limiting?
>
>
>
**References**
[1] Fast spiking axons take mitochondria for a ride,
by <NAME>, Medical Xpress, January 13, 2014,
<https://medicalxpress.com/news/2014-01-fast-spiking-axons-mitochondria.html><issue_comment>username_1: In my opinion, there are many functions in our brain, surely many more than today's artificial neural networks have. I guess this is the field of brain science or cognitive psychology.
Some brain structures may help for certain applications, but not all. A neural network, though, is the simplest form of our brain, and has the most general usage. In other words, if you want to improve neural networks, different fields or different functions may need totally different structures. You can see this in the many types of neural networks that exist today for different applications.
Upvotes: 2 [selected_answer]<issue_comment>username_2: In the perceptron design generally used in Artificial Neural Networks, we know precisely what a single neuron is capable of computing. It can compute a function
$$f(x) = g(w^{\top} x),$$
where $x$ is a vector of inputs (may also be vector of activation levels in previous layer), $w$ is a vector of learned parameters, and $g$ is an activation function. We know that a single node in such an ANN can compute precisely that, and nothing else. This observation could be interpreted as "of course it is limited; it can do precisely his and nothing else".
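For instance, such a unit fits in a couple of lines (the weights here are arbitrary, with $\tanh$ standing in for $g$):

```python
import numpy as np

def neuron(x, w, g=np.tanh):
    """Everything a single perceptron-style unit computes: g(w . x)."""
    return g(np.dot(w, x))

# the weighted sum is 1.0*0.5 + (-2.0)*0.25 = 0, so tanh gives 0.0
print(neuron(np.array([1.0, -2.0]), np.array([0.5, 0.25])))  # 0.0
```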
The universal function approximation theorem tells us (very informally here) that if a Neural Network is "big enough", has at least 1 hidden layer, and has non-linear activation functions, it may in theory learn to approximate any function reasonably well. If we add recurrence (i.e. an RNN), we also get, in theory, Turing completeness. Based on this, we could say that they are not particularly limited in theory... but of course there are many complications in practice:
* How big is "big enough"?
* How do we effectively learn our parameters? (SGD is the most common approach, but can get stuck in local minima; global optimization methods like evolutionary algorithms wouldn't get stuck... but I don't believe that they're famous for being fast either).
* etc.
Just the observation that they may not be highly limited in theory of course doesn't mean that there wouldn't be anything else that works better in practice either. I can very well imagine that a more complex model (trying to simulate additional functionality that we also observe in the brain) may be more capable of learning more complex functions more easily.
An important caveat is that more complex function approximators tend to be more difficult to train in practice. We understand very well how to effectively train a linear function approximator, and such approximators typically aren't very data-hungry. The downside is that they can only approximate linear functions.
We also understand quite well how to train, for example, Decision Trees. They're still quite easy models to understand intuitively, and they can learn more complicated functions than linear ones. I'd say we have a worse understanding of how to train them well than we do for linear functions, but it's still a good understanding.
ANNs as they are used now... it looks like they are more powerful in practice than the two mentioned above, but there's also still more "mystery" surrounding them (in particular the deep variants). We can train them quite well, but we don't understand everything about them as well as we'd like.
Intuitively, I'd expect that trend to continue if we try to imitate the brain more faithfully. I wouldn't be surprised if there exist more powerful things out there, but they'll be more complex to understand, more difficult to train, maybe also more data-hungry (current ANNs already tend to be very data-hungry).
Upvotes: 2 |
2018/09/19 | 746 | 3,041 | <issue_start>username_0: Usually, using the [Manhattan distance](https://en.wikipedia.org/wiki/Taxicab_geometry) as a heuristic function is enough when we do an A\* search with one target. However, it seems like for multiple goals, this is not the most useful way. Which heuristic do we have to use when we have multiple targets?<issue_comment>username_1: If by "visit multiple targets", you mean "visit several points in the fastest order", you are no longer in a simple path-finding-style search problem, but instead in an optimization problem. This is roughly the difference between chapters 3 & 6 of [Russell & Norvig's](http://aima.cs.berkeley.edu/) section on search.
To do this, you can't just change your heuristic; instead, you need to reframe your problem:
* Instead of states in your search being locations, they should be *tours*. Each state is a list of all the places you need to visit, *in a specific order*.
* Instead of actions being movements from one location to another, they need to be transformations from one tour to another. For example, if you swap the order that you visit two adjacent locations, then you'll get a different tour. This gives you a way to "move" between tours.
* A solution means visiting all the locations as fast as possible. If you know how to get between two locations, just store the distances, and then sum all the distances together to get the cost of a tour. If you don't know, you can just run A\* from each place to each other place once, and then cache the distances afterwards.
* A heuristic will depend on your domain. A reasonable start might be to assume that you can visit each location from the nearest other location you've already visited. Generally, heuristics based on the idea of [minimum spanning trees](https://en.wikipedia.org/wiki/Minimum_spanning_tree) are effective for this domain.
The real answer, though, is to try a technique that is meant for this kind of problem, like a [local search algorithm](https://en.wikipedia.org/wiki/Local_search_(optimization)). Notice that if we know the cost of moving between any two points, we can just adopt a greedy approach: make the move that improves things the most each time. This is often faster in practice than waiting for A\* if you just want a good solution, but it doesn't need to be the very best one.
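The ideas above (tours as states, swaps between visit positions as moves, greedy construction) can be sketched as follows; the small distance matrix and the particular operator choices here are illustrative only:

```python
import itertools

def tour_length(tour, dist):
    """Total cost of visiting the targets in the given order."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def greedy_tour(start, targets, dist):
    """Nearest-neighbour construction: always move to the closest unvisited target."""
    tour, remaining = [start], set(targets)
    while remaining:
        nxt = min(remaining, key=lambda t: dist[tour[-1]][t])
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

def local_search(tour, dist):
    """Repeatedly swap two visit positions while that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(1, len(tour)), 2):
            cand = tour[:]
            cand[i], cand[j] = cand[j], cand[i]
            if tour_length(cand, dist) < tour_length(tour, dist):
                tour, improved = cand, True
    return tour
```

With a precomputed distance matrix (e.g. from running A\* between every pair of locations once), `local_search(greedy_tour(...))` already gives a decent, if not provably optimal, visiting order.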
Upvotes: 2 <issue_comment>username_2: Genetic algorithms give promising results for problems with multiple objectives (goals).
<http://www.iitk.ac.in/kangal/Deb_NSGA-II.pdf>
The paper above (NSGA-II) describes a well-known algorithm for multi-objective optimization.
Upvotes: 0 <issue_comment>username_3: I see that the objective is to achieve multiple goals using the A\* algorithm.
So if your problem is more like a Traveling Salesman Problem, which is what it kind of sounds like, you can refer to this post:
<https://stackoverflow.com/questions/4453477/using-a-to-solve-travelling-salesman>
The problem can be converted to a graph search problem that utilizes a minimum spanning tree, and A\* can be used to compute the edge weights.
Upvotes: 0 |
2018/09/19 | 1,519 | 6,853 | <issue_start>username_0: I'm finding it hard to understand the relationship between chaotic behavior, the human brain, and artificial networks. There are a number of explanations on the web, but it would be very helpful if I get a very simple explanation or any references providing such simplifications.<issue_comment>username_1: It looks like you have some common misconceptions about AI and neural networks.
First, AI programs generally do not try to imitate the inner workings of a human brain. Instead, they try to imitate some higher-level behaviour. For example, they might imitate the reasoning process that you go through when you make a plan. In this context, the building-blocks (silicon or flesh) don't matter too much.
Second, artificial neural networks are also (mostly) not intended to imitate the human brain. Although they are *inspired* by the arrangement of neurons in a human brain, the networks used in most ANN systems have very little to do with real brains. The main similarity is that both systems have a lot of simple little computational units connected in patterns such that signals passed from one to another lead to interesting computations. However, real neurons produce lots of different kinds of signals, are connected in arbitrary ways, and randomization and transmission times play a significant role. Artificial neurons generally are deterministic, produce only one kind of signal (or sometimes a couple different kinds), are connected in extremely regular ways, and usually simulate instantaneous transmissions between neurons.
Upvotes: 2 <issue_comment>username_2: Regression for models more complex than $y = a x + b$ requires a convergence strategy. Surface fitting algorithms, such as Levenberg–Marquardt, are often successful at achieving regression using a damped version of least squares as an optimization criterion. The marriage of regression and the multilayer perceptron, an early model artificial network, led to the use of a back propagation strategy to distribute corrective signals that drive regression.
Back propagation using gradient descent is now used as a convergence strategy in artificial networks with a variety of cell and connection designs, such as LSTM and CNN networks. Both surface fitting and artificial network convergence share the method of successive approximation. With each successive application of some test, the result is used to attempt to improve the next iteration. Proofs have been developed around convergence for many algorithms. Actual successive approximation runs have five possible outcomes.
* Convergence within the time allotted and within the accuracy required
* Convergence within the time allotted but not within the accuracy required
* Convergence appears that it would have occurred but time allotted was exceeded
* Oscillation appeared by the end of time allotted
* Chaos appeared by the end of time allotted
The following illustration from Chaos Theory Tamed (<NAME>, 1997, p 164) modified slightly for easy viewing can explain how chaos arises when the learning rate or some other factor is set too aggressively. The graphs are of the behavior of the logistic equation $x\_{i+1} = k x\_i (1 - x\_i)$ which plots as an inverted parabola in phase space. The one dimensional maps on the right of each of five cases show the relationship between adjacent values in the time series on the left of each of the five. Although the logistic equation is quite simple compared to regression algorithms and artificial nets, the principles involved are the same.
[](https://i.stack.imgur.com/4BbZN.png)
The right hand cases, with $k = 3.4$ and $k = 3.75$ correspond to the last two possible outcomes in the list above, oscillation and chaos respectively.
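The convergence, oscillation, and chaos cases above are easy to reproduce by iterating the logistic equation directly (a minimal sketch; the starting point and iteration counts are arbitrary choices):

```python
def logistic_orbit(k, x0=0.2, warmup=500, keep=4):
    """Iterate x_{i+1} = k * x_i * (1 - x_i) and return the last few values."""
    x = x0
    for _ in range(warmup):
        x = k * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = k * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(logistic_orbit(2.8))   # converges to the fixed point 1 - 1/k, about 0.6429
print(logistic_orbit(3.4))   # settles into a period-2 oscillation
print(logistic_orbit(3.75))  # no settling down: chaotic wandering
```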
**Care in Drawing Parallels**
Care should be taken in drawing parallels between distinct things.
* Surface fitting algorithms, like Levenberg–Marquardt
* Algorithms that realize back propagation with gradient descent
* Logical inference AI, such as production systems and fuzzy logic
* Real time learning, such as Q-learning
* Devotion of the human brain to a problem
Regression and artificial networks can be compared meaningfully because the math for each is fully defined and easy for those with the mathematical skill to analyze them for the comparison.
Comparing known mathematical systems with unknown biological ones is interesting, but to a large degree, grossly premature. The perceptron, on which MLPs (multilayer perceptrons) and their deep learning derivatives are based, are simplified and flattened models of what was once thought to be how neurons in the brain work. By flattened is meant that they are placed in the time domain where they are convenient for looping in software and do not take into consideration these complexities.
* Neuron behavior is sensitive to the timing of incoming signals — Incoming signals may overlap but not precisely align in time.
* Neuron behavior is sensitive to the history of incoming signals (because of cell body and axon thermodynamics, synaptic chemistry, and other neuro-chemical and structural functions not yet understood)
* Neuron structure changes in terms of its connectivity
* New neurons appear
* Neurons may die due to cell apoptosis
In summary, multilayer perceptrons are not a model of neural networks in the human brain. They are merely roughly inspired by obsolete knowledge of them.
**Chaos in the Human Brain**
That there is chaotic behavior in the brain is known; it has been observed in real time. How coupled it is with human intelligence is a matter of conjecture, but it is already clear that it may contribute to function in some cases and to dysfunction in others. This is also true in artificial systems.
* When used to deliberately interfere with a stable condition that may not be the optimum state of stability to find a better one, chaos may be a source of noise that benefits learning. This is related to the difference between local minima and global minimum. The good is sometimes the enemy of the best. Improved learning speed has been documented for artificial network algorithms with deliberate injection of pseudo-random noise into the back propagation.
* When chaos arises not from deliberate noise injection into a part of the system that can benefit from the noise, but from a basic instability in the system, it can be detrimental to overall system function. Chaotic behavior in the human brain is a likely cause of various disorders. There is much supporting data but not yet a proof.
In summary, chaos in a system is neither productive nor counterproductive in every case. It depends on where it is in the system (in detail) and on what the system is designed to perform.
Upvotes: 1 [selected_answer] |
2018/09/23 | 585 | 2,044 | <issue_start>username_0: I want to understand what the `gamma` parameter does in an SVM. According to [this page](http://scikit-learn.org/stable/auto_examples/svm/plot_rbf_parameters.html).
>
> Intuitively, the `gamma` parameter defines how far the influence of a single training example reaches, with low values meaning ‘far’ and high values meaning ‘close’. The `gamma` parameters can be seen as the inverse of the radius of influence of samples selected by the model as support vectors.
>
>
>
I don't understand the part "**of a single training example reaches**". Does it refer to the training dataset?<issue_comment>username_1: Roughly speaking, the higher the gamma, the more complex the model, and the higher the risk of overfitting.
In fact, as you can read on the page you linked:
>
> If **gamma is too large**, the radius of the area of influence of the
> support vectors **only includes the support vector itself**[...]
>
>
> When **gamma is very small**, the model is too constrained and **cannot
> capture the complexity or “shape” of the data**. The **region of influence
> of any selected support vector would include the whole training set**.
>
>
>
Upvotes: 1 <issue_comment>username_2: I've summarized the [key ideas of SVMs](https://martin-thoma.com/svm-with-sklearn/). So this is how $\gamma$ is used with a gaussian Kernel:
$$K\_{\text{Gauss}}(\mathbf{x}\_i, \mathbf{x}\_j) = e^{\frac{-\gamma\|\mathbf{x}\_i - \mathbf{x}\_j\|^2}{2 \sigma^2}}$$
The smaller the $\gamma$, the smoother and more nearly linear the decision boundary will be; the bigger the $\gamma$, the more support vectors you have and the more non-linear the boundary becomes (see [**interactive example**](https://cs.stanford.edu/people/karpathy/svmjs/demo/))
You can also find it on udacity: [SVM Gamma Parameter](https://www.youtube.com/watch?v=m2a2K4lprQw)
In practice, you can use a [grid search](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV) or random search to get good values.
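To make the quoted intuition concrete, here is a minimal sketch of the RBF kernel itself (using the $e^{-\gamma\|x - z\|^2}$ parameterization); notice how the same pair of points looks "similar" under a small $\gamma$ and "dissimilar" under a large one:

```python
import math

def rbf_kernel(x, z, gamma):
    """K(x, z) = exp(-gamma * ||x - z||^2): similarity between two points."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

# Two training points a fixed distance apart (squared distance = 4):
x, z = (0.0, 0.0), (2.0, 0.0)
print(rbf_kernel(x, z, gamma=0.01))  # close to 1: low gamma, influence reaches "far"
print(rbf_kernel(x, z, gamma=10.0))  # close to 0: high gamma, influence is "close"
```

So "how far the influence of a single training example reaches" refers to how quickly this similarity decays with distance from that training point.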
Upvotes: 2 |
2018/09/25 | 2,999 | 11,054 | <issue_start>username_0: I have difficulty understanding the following paragraph in the below excerpts from page 4 to page 5 from the paper [*Dueling Network Architectures for Deep Reinforcement Learning*](http://proceedings.mlr.press/v48/wangf16.pdf).
The author said "we can force the advantage function estimator to have zero advantage at the chosen action."
For the equation $(8)$ below, is it correct that $A - \max A$ is at most zero?
>
> ... lack of identifiability is mirrored by poor practical performance when this equation is used directly.
>
>
>
>
> To address this issue of identifiability, we can force the advantage
> function estimator to have zero advantage at the chosen action. That is, we let the last module of the network implement the forward mapping
>
>
>
>
> $$Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \left( A(s, a; \theta, \alpha) - \max\_{a' \in | \mathcal{A} |} A(s, a'; \theta, \alpha) \right). \tag{8}$$
>
>
>
>
> Now, for $a^∗ = \text{arg max}\_{a' \in \mathcal{A}} Q(s, a'; \theta, \alpha, \beta) = \text{arg max}\_{a' \in \mathcal{A}} A(s, a'; \theta, \alpha)$, we obtain $Q(s, a^∗; \theta, \alpha, \beta) = V (s; \theta, \beta)$. Hence, the stream $V(s; \theta, \beta)$ provides an estimate of the value function, while the other stream produces an estimate of the advantage function.
>
>
>
I would like to request further explanation on Equation 9, when the author wrote what is bracketed between the red parentheses below.
>
> An alternative module replaces the max operator with an average:
>
>
>
>
> $$Q(s, a; \theta, \alpha, \beta) = V (s; \theta, \beta) + \left( A(s, a; \theta, \alpha) − \frac {1} {|A|} \sum\_{a' \in \mathcal{A}} A(s, a'; \theta, \alpha) \right). \tag{9}$$
>
>
>
>
> On the one hand this loses the original semantics of $V$ and $A$ because they are now off-target by a constant,
> but on the other hand it increases the stability of the optimization: with (9) the advantages only need to change as fast as the mean, instead of having to compensate any change to the optimal action’s advantage in (8).
>
>
>
In the paper, to address the identifiability issue, there are two equations used. My understanding is both equations are trying to fix the advantage part - the last module.
For equation $(8)$, are we trying to make $V(s) = Q^\*(s)$, as the last module is zero?
For equation $(9)$, the resulting $V(s)$ = true $V(s)$ + mean$(A)$? As the author said "On the one hand this loses the original semantics of $V$ and $A$ because they are now off-target by a constant". And the constant refers to mean$(A)$? Is my understanding correct?<issue_comment>username_1: I believe that is explained on the prior page:
>
> "Intuitively, the value function $V$ measures how good it is to be in a particular state $s$. The $Q$ function, however, measures the value of choosing a particular action when in this state. The advantage function subtracts the value of the state from the $Q$
> function to obtain a relative measure of the importance of
> each action."
>
>
>
Then, two paragraphs above where you started your quote:
>
> "However, we need to keep in mind that $Q(s, a; \theta, \alpha, \beta)$ is only a parameterized estimate of the true $Q$-function. Moreover, it would be wrong to conclude that $V (s; \theta, \beta)$ is a good estimator of the state-value function, or likewise that $A(s, a; \theta, \alpha)$ provides a reasonable estimate of the advantage function.
>
>
> Equation (7) is unidentifiable in the sense that given $Q$ we cannot recover $V$ and $A$ uniquely. To see this, add a constant to $V (s; \theta, \beta)$ and subtract the same constant from $A(s, a; \theta, \alpha)$. This constant cancels out resulting in the same $Q$ value. This lack of identifiability is mirrored by poor practical performance when this equation is used directly."
>
>
>
Another way of looking at it would be:
* You receive answers to your question
* Answers receive votes
* Answerers have reputation
In a *perfect* world people could vote based on reputation, with a weighting based upon the correctness of the answer.
You could simply look at which answer received the most votes and choose it as correct.
In the *real* world things don't work that way, things are correct or incorrect whether they are measured or not (think quantum mechanics) and measurement doesn't always reveal the true answer.
See: [Parameter Estimation](https://en.wikipedia.org/wiki/Parameter#Statistics_and_econometrics).
The estimate of the advantage is only so good; sometimes it's useful to consider it and in other instances it's useful to reject it - intelligently doing both maximizes its usefulness.
Upvotes: 0 <issue_comment>username_2: Yes, you're correct, if Equation 8 is used it will only be possible to get estimates $\leq 0$ out of the term
$$\left( A(s, a; \theta, \alpha) - \max\_{a' \in \vert \mathcal{A} \vert} A(s, a'; \theta, \alpha) \right).$$
This matches the meaning that we intuitively assign to the $Q(s, a)$, $V(s)$, and $A(s, a)$ estimators (I'm leaving the parameters $\theta$, $\alpha$, and $\beta$ out of those parentheses for the sake of notational brevity). Intuitively, we want:
* $Q(s, a)$ to estimate the value of being in state $s$ and executing action $a$ for the policy that we are learning about.
* $V(s)$ to estimate the value of being in state $s$ for the policy that we are learning about.
* $A(s, a)$ to estimate the *advantage* of executing action $a$ in state $s$ for the policy that we are learning about.
In the above three points, "the policy that we are learning about" is the greedy policy, the "optimal" policy given what we have learned so far (ideally this would be truly the optimal policy after a long period of training).
In the last point of the three points above, *advantage* can intuitively be understood as the gain in estimated value if we choose action $a$ over whatever the expected value would be if we were following the policy that we are learning about.
Since we are trying to learn about the greedy policy, we'll ideally (according to our intuition) want the maximum advantage $A(s, a)$ to be equal to $0$; intuitively, the best action is precisely the one we want to execute in our greedy policy, so that best action should not have any relative "advantage". Similarly, *all non-optimal* actions should have a negative advantage, because they are estimated to be worse than what we estimate to be the optimal action(s).
This intuition is mathematically enforced by using Equation 8 from the paper for training:
$$Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \left( A(s, a; \theta, \alpha) - \max\_{a' \in \vert \mathcal{A} \vert} A(s, a'; \theta, \alpha) \right).$$
We can consider two cases to explain what this is doing:
1. Suppose that action $a$ is the best action we could have selected in state $s$ according to our current estimates, i.e. $a = \arg \max\_{a' \in \vert \mathcal{A} \vert} A(s, a'; \theta, \alpha)$. Then, the two terms in the large brackets are equal to each other, so the subtraction yields $0$, and the state-action value estimate $Q(s, a)$ equals the state value estimate $V(s)$. This is exactly what we want **because we are trying to learn about the greedy policy**.
2. Suppose that action $a$ is worse than the best action we could have selected in state $s$ according to our current estimates, i.e. $A(s, a; \theta, \alpha) < \max\_{a' \in \vert \mathcal{A} \vert} A(s, a'; \theta, \alpha)$. Clearly, I've just stated here that the first term in our subtraction is less than the second term in our subtraction... so the subtraction yields a negative number. This means that the state-action value estimate $Q(s, a)$ becomes **less than** the estimated state value $V(s)$. This is also what we want intuitively, **because we started with the assumption that action $a$ was a suboptimal action**. Clearly, if we assume that the action $a$ is suboptimal, that should lead to a reduction in the estimated value.
---
Note that afterwards, when they start explaining Equation 9, they actually intentionally deviate from these standard, intuitive understandings that we have of what the three estimators should represent.
---
*Concerning the additional question about Equation 9:*
A major problem in the stability of training processes for Deep Reinforcement Learning algorithms (such as these DQN-based algorithms) is that **the update targets contain components that are predictions made by the NN that is being trained**. For example, the Dueling DQN architecture in this paper generates $V(s)$ and $A(s, a)$ predictions, which are combined into $Q(s, a)$ predictions, and those $Q(s, a)$ predictions of the network itself are also used (combined with some non-prediction reward observations $r$) in the loss function defined to train the Neural Network.
In other words, the Neural Network's own predictions are a part of its training signal. When these are used to update the Network, this will likely change its future predictions in similar situations, which means that its update target will also actually change when it reaches a similar situation again; this is a **moving target** problem. We do not have a consistent set of update targets as we would in a traditional supervised learning setting for example (where we have a dataset collected offline with fixed labels as prediction targets). Our targets are moving around during the training process, and this can destabilize learning.
Now, in that explanation following Equation 9, they essentially argue that this "moving target" problem is less bad with Equation 9 than it is with Equation 8, which can result in more stable training. I'm not sure if there is a formal proof of this, but intuitively it does make sense that this would happen in practice.
Suppose that you update your network once based on **Equation 8**. If your learning step changes the prediction of the advantage $A(s, a)$ of the best action $a$ by a magnitude of $1$ (kind of informal here, hopefully it makes sense what I'm trying to say), this will in turn move future targets for updates also roughly by a magnitude of $1$ (again, quite informal here).
Now, suppose that you update your network once based on **Equation 9**. It is unlikely that all of the different actions $a$ have their advantage $A(s, a)$ move by the same magnitude and in the same direction as a result of this update. It is more likely that some will move up, some will move down, etc. And even if they all move in the same direction, some will likely move by a smaller magnitude than others. In some sense, Equation 9 "averages out" the movements triggered by the learning update in all of these different advantage estimates, which causes the network's prediction targets overall to simply move more slowly, reducing the moving target problem. At least, that's the intuitive idea. Again, I don't think there is a formal proof that this happens, but it does turn out to often help in practice.
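To make the two aggregation modules concrete, here is a toy sketch of just the arithmetic of Equations 8 and 9 (plain numbers standing in for the network's value and advantage outputs, not an actual implementation of the architecture):

```python
def q_values_max(v, advantages):
    """Eq. (8): Q(s, a) = V(s) + (A(s, a) - max_a' A(s, a'))."""
    a_max = max(advantages)
    return [v + (a - a_max) for a in advantages]

def q_values_mean(v, advantages):
    """Eq. (9): Q(s, a) = V(s) + (A(s, a) - mean_a' A(s, a'))."""
    a_mean = sum(advantages) / len(advantages)
    return [v + (a - a_mean) for a in advantages]

v, adv = 5.0, [1.0, 3.0, 2.0]
print(q_values_max(v, adv))   # [3.0, 5.0, 4.0]: the best action's Q equals V(s)
print(q_values_mean(v, adv))  # [4.0, 6.0, 5.0]: every Q is off-target by mean(A)
```

This is exactly the "off-target by a constant" the paper mentions: with the mean version, the ordering of actions is preserved, but the best action's Q value no longer equals V(s).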
Upvotes: 1 |
2018/09/29 | 1,425 | 5,610 | <issue_start>username_0: I recently learned about genetic algorithms and I solved the 8 queens problem using a genetic algorithm, but I don't know how to optimize any functions using a genetic algorithm.
$$
\begin{array}{r}
\text { maximize } f(x)=\frac{-x^{2}}{10}+3 x \\
0 \leq x \leq 32
\end{array}
$$
I want a guide on how to choose the chromosomes and the fitness function for such a problem. And I don't want code.<issue_comment>username_1: Yours is a really nice and easy question: you've seen how to use GAs in a complex problem, but you're missing how to apply them to the most basic one of all. I'll show you:
In a real-world implementation we have to structure our problem to be solved with GAs; we need to modify it, finding an equivalent representation in terms of individuals, where each individual is built from a genome and must be evaluated using a fitness function.
If you look at this graphically you'll discover that the fitness function is just describing an area (or line, or volume, depending on the dimensionality) in our space, and we are randomly dropping individuals on it, rewarding the ones that fell higher than the others. We then try to modify the genomes of these individuals to move them towards peaks of this fitness function.
[](https://i.stack.imgur.com/kaXXd.png)
In practice the fitness function is our world, our ground truth and we are exploring it.
Now to the function optimization part: well, we do not need any abstraction here, any strange individual representation or transformation of the problem; we just want to find a maximum, and this is exactly what GAs are for!
So, let's have a look at the elements you need to solve your problem with GAs:
* Fitness function => you have it already! It's the function you want to find the max of.
* Individual => since the maximum is all we need, an individual will just be a point in space.
* Genome => like every respective point in space, the genome will just be a collection of real numbers (one for each dimension).
Now, like in any GA, you will randomly instantiate an initial population (random points in the domain). The evaluation, of course, is the easy part: just put them into the function and you'll see the values of the individuals.
What about crossover and mutation though? You'll have to use techniques that work with real-number genes, like BLXalpha, BLXalphabeta.
These are just defining ranges between values and picking random values inside these ranges, I wrote a pretty detailed answer about this, you can check it out at: <https://ai.stackexchange.com/a/6323/15530>
Upvotes: 1 <issue_comment>username_2: People usually say that genetic algorithms are used to solve optimization problems, but when it comes to optimizing a *specific* function given in an analytic form (i.e. when it comes to finding a maximum or minimum of such a function), it may not be clear how to proceed. I have created a complete but simple implementation and explanation of how to solve this problem [here](https://github.com/username_2/function_max_with_ga), but let me also describe here the main idea behind the approach. Before that, let's briefly review genetic algorithms (GAs).
### Genetic algorithms
Genetic algorithms are composed of
* a **population** (i.e. a set) of individuals (also known as chromosomes or genotypes), which represent the solutions to some problem
* a **fitness function** that evaluates each individual (i.e. how "good" it is, maybe compared to other individuals, where "good" depends on the problem)
* **genetic operations** to stochastically change the individuals in the population: typically, these operations are the **mutation** and **cross-over**
* a method to **select individuals** for the cross-over (where you combine 2 or more individuals to produce other individuals); the selection is also a *genetic operation*
### How to solve your problem?
To solve *any* problem with genetic algorithms, you first need to address all the four points above, i.e. define what your individuals (i.e. solutions) are, how to compute the fitness of a solution (i.e. how good it is), and define the specific evolutionary operations (specifically, mutation, cross-over and selection).
In your specific problem, the solutions are $\hat{x} \in \mathbb{R}$, such that
$$f(x)=\frac{-x^{2}}{10}+3 x$$
is a (local or global) maximum, i.e. $f(\hat{x}) \geq f(x)$, for all $x \in \mathbb{R}$ in a neighbourhood of $\hat{x}$.
(Note that this is just the definition of the problem of *function maximization*: if you are not familiar with it, you should probably get familiar with it before trying to understand this answer or even trying to solve this problem with GAs).
Therefore, in this case, the **individuals** are real numbers (which are the inputs to $f$).
The **fitness function** can be a function that computes $f(x\_i)$, for all $x\_i$ in your population, then compares $f(x\_i)$ to $f(x\_j)$ for all $i \neq j$. The higher the $f(x\_i)$, the closer it is to a maximum.
The **genetic operations** can be implemented in different ways. You should think about it. If you are familiar with GAs and you know now that solutions are real numbers, at least one way of implementing these genetic operations should come to your mind at this point. Keep in mind that your solutions should be in the range $[0, 32]$, i.e. this is a *constrained* optimization problem. If you do not have any idea on how to implement them, take a look at [my implementation/explanation](https://github.com/username_2/function_max_with_ga).
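For instance, one possible (deliberately minimal) sketch of such a GA, using blend-style crossover and Gaussian mutation as the genetic operations; every choice here (population size, operators, elitist selection) is just one option among many:

```python
import random

def fitness(x):
    """The function we want to maximize: f(x) = -x^2/10 + 3x."""
    return -x ** 2 / 10 + 3 * x

def evolve(pop_size=20, generations=100, lo=0.0, hi=32.0):
    random.seed(0)                                   # reproducibility only
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # selection: the fitter half survives
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                      # blend-style crossover
            child += random.gauss(0, 1)              # Gaussian mutation
            children.append(min(max(child, lo), hi)) # respect the constraint [0, 32]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # should land near x = 15, where f attains its maximum of 22.5
```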
Upvotes: 0 |
2018/09/30 | 841 | 3,323 | <issue_start>username_0: I made my first neural net in C++ without any libraries. It was a net to recognize numbers from the MNIST dataset. In a 784 - 784 - 10 net with sigmoid function and 5 epochs with every 60000 samples, it took about 2 hours to train. It was probably slow anyways, because I trained it on a laptop and I used classes for Neurons and Layers.
To be honest, I've never used TensorFlow, so I wanted to know how the performance of my net would compare to the same net in TensorFlow. Not too specific, but just a rough approximation.<issue_comment>username_1: A lot. There are all these optimizations that we might not have thought of, like combining layers, fusing functions, etc. I am a PyTorch guy though; it's clean and doesn't get in your way like TensorFlow does.
Upvotes: 2 <issue_comment>username_2: >
> I wanted to know how the performance of my net would be compared to the same in Tensor Flow. Not to specific but just a rough aproximation.
>
>
>
This is very hard to answer in specific terms because [benchmarking is very hard and is often wrong](http://dtrace.org/blogs/bmc/2018/09/28/the-relative-performance-of-c-and-rust/).
The main point of TensorFlow as I see it is to make it easier for you to use a GPU and further allows you to use a large supply of programs written in Python/JavaScript that still give C++ level performance.
>
> How fast is TensorFlow compared to self written neural nets?
>
>
>
This is answering the general question of using TensorFlow/PyTorch vs a custom solution, rather than your specific question of how much of a speed up you'd get.
There was a relatively recent MIT paper, [Differentiable Programming for Image Processing and Deep Learning in Halide](https://people.csail.mit.edu/tzumao/gradient_halide/) trying to discuss the performance vs flexibility vs time spent coding a solution of 3 different languages.
Specifically they compared a solution in their language Halide vs PyTorch vs CUDA.
>
> Consider the following example. A recent neural network-based
> image processing approximation algorithm was built around a new
> “bilateral slicing” layer based on the bilateral grid [Chen et al. 2007; Gharbi et al. 2017]. At the time it was published, neither PyTorch
> nor TensorFlow was even capable of practically expressing this
> computation. As a result, the authors had to define an entirely
> new operator, written by hand in about 100 lines of CUDA for the
> forward pass and 200 lines more for its manually-derived gradient
> (Fig. 2, right). This was a sizeable programming task which took
> significant time and expertise. While new operations—added in just
> the last six months before the submission of this paper—now make
> it possible to implement this operation in 42 lines of PyTorch, this
> yields less than 1/3rd the performance on small inputs and runs
> out of memory on realistically-sized images (Fig. 2, middle). The
> challenge of efficiently deriving and computing gradients for custom
> nodes remains a serious obstacle to deep learning.
>
>
>
So in general you'll probably get faster performance with TensorFlow/PyTorch than a custom C++ implementation, but for specific cases if you have CUDA knowledge on top of C++ then you will be able to write more performant programs.
Upvotes: 4 [selected_answer] |
2018/09/30 | 1,279 | 4,010 | <issue_start>username_0: I was able to find [the original paper on LSTM](https://www.bioinf.jku.at/publications/older/2604.pdf), but I was not able to find the paper that introduced "vanilla" RNNs. Where can I find it?<issue_comment>username_1: The two tech reports below both call RNNs explicitly "recurrent net(work)s".
1. Rumelhart, <NAME>; Hinton, <NAME>, and Williams, <NAME> (Sept. 1985). [Learning internal representations by error propagation](https://apps.dtic.mil/dtic/tr/fulltext/u2/a164453.pdf). Tech. rep. ICS 8504. San Diego, California: Institute for Cognitive Science, University of California.
2. Jordan, <NAME>. (May 1986). [Serial order: a parallel distributed processing approach](https://www.osti.gov/biblio/6910294). Tech. rep. ICS 8604. San Diego, California: Institute for Cognitive Science, University of California.
[Jordan was a student of Rumelhart](https://grad.berkeley.edu/news/headlines/jordan-wins-rumelhart-prize/), so I would lean toward identifying [1](https://apps.dtic.mil/dtic/tr/fulltext/u2/a164453.pdf) as the paper introducing RNNs, with the caveat that the first sentence in the section "Recurrent Nets" of [1](https://apps.dtic.mil/dtic/tr/fulltext/u2/a164453.pdf) reads:
>
> We have thus far restricted ourselves to *feedforward* nets. This may
> seem like a substantial restriction, but as Minsky and Papert point
> out, there is, for every recurrent network, a feedforward network with
> identical behavior (over a finite period of time).
>
>
>
This is interesting for two reasons:
1. After this sentence, he then goes on to show how RNNs can be unrolled and the error propagated back. Not a full-fledged BPTT yet, though.
2. The sentence shows that the idea of recurrence (and unrolling) has been around since at least 1969.
Unfortunately, I don't have access to Minsky and Papert (1969), so I cannot follow this line any further.
Upvotes: 4 <issue_comment>username_2: Hopfield networks, a special case of RNNs, were first proposed in 1982: <https://www.pnas.org/content/79/8/2554>
Otherwise (shameless plug, I am the author) a non-technical timeline for NLP can be found here: <https://blog.exxcellent.de/ki-machine-learning>
[](https://i.stack.imgur.com/aBMl0.jpg)
Upvotes: 3 <issue_comment>username_3: <NAME> and <NAME> talk about recurrent neural nets in their paper McCulloch, W.S., <NAME>. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5, 115–133 (1943). <https://doi.org/10.1007/BF02478259>.
They finish their introduction with the paragraph:
>
> The nervous system contains many circular paths, whose activity so regenerates the excitation of any participant neuron that reference to time past becomes indefinite, although it still implies that afferent activity has realized one of a certain class of configurations over time. Precise specification of these implications by means of recursive functions, and determination of those that can be embodied in the activity of nervous nets, completes the theory.
>
>
>
Their paper contains a section titled:
>
> 2. The Theory: Nets Without Circles.
>
>
>
in which they introduce feed-forward (nets without cycles) and recurrent (nets with cycles) networks, and the next section, titled
>
> 3. The Theory: Nets with Circles.
>
>
>
in which they prove a few theorems about recurrent neural networks.
<NAME> quotes them, and discusses recurrent neural networks extensively throughout his book, Computation: Finite and Infinite Machines (1967). Prentice Hall, ISBN: 0131655639,9780131655638
I am not sure whether there are earlier references.
Upvotes: 2 <issue_comment>username_4: According to [this meta paper](https://arxiv.org/abs/1708.02709), "vanilla" RNN of today are based on Elman's work on networks with dynamic memory: [Finding structure in time](https://onlinelibrary.wiley.com/doi/abs/10.1207/s15516709cog1402_1)
Upvotes: 2 |
2018/09/30 | 888 | 3,904 | <issue_start>username_0: In the circumstances of two perfect AIs playing each other, will white have an inherent advantage? Or can black always play for a stalemate by countering every white strategy?<issue_comment>username_1: I'm no expert chess player.
Some specialist chess forums have also discussed this issue, and there's no clear answer.
Due to the huge number of possible moves, I would suggest that the most likely outcome is a draw.
White would have the advantage of moving first, but black, as a perfect AI, would know which strategies are in play, so it could block every attempt at a clear winning strategy while always trying to avoid a loss, and therefore probably reach just a draw.
The basis for this thought is that black, in reaction, can always make the choice with the lowest probability of loss, so most of white's strategies would probably stall.
But now comes the interesting point: I am not quite sure what these AIs would look like. They would obviously have to rely on a strategy to choose between all the possible moves.
Even then, a purely statistical best-outcome algorithm would leave too many options open, especially in the beginning. So you would be forced to prefer certain strategy choices, and with this you would clearly be implementing a bias (what to do if two moves have the same win/loss outcome?). This would make the experiment more random or human, and thus no outcome could be predicted.
So if both AIs are perfect (and thus identical), the outcome is most probably a draw, but I assume that statistically some games would be won/lost...
I think...
EDIT: I just read about Google's AlphaZero (an AI which has taught itself chess, Go and shogi), which excels in most games against other AIs, apparently without precalculating that many possible outcomes... So it could be that there still is no "absolute perfect AI" for these games.
Upvotes: -1 <issue_comment>username_2: This relates to the concept of "[solved games](https://en.wikipedia.org/wiki/Solved_game)". In general, two player turn-based games with perfect information - of which chess is an example - [can result in all three possible outcomes](https://en.wikipedia.org/wiki/First-player_and_second-player_win): a forced win for white, a forced win for black, or a forced draw.
The short, although unsatisfactory answer is that [chess is not solved, and it is not clear whether it can be](https://en.wikipedia.org/wiki/Solving_chess). There is generally thought to be an advantage to white for the first move, so *likely* results are considered to be a forced win for white, or a forced draw.
No current AI attempts to "solve" chess, although some of the techniques such as MCTS might be adapted *theoretically* to find a solution, the available computing power to run that search to completion from the start positions is too low by a few orders of magnitude.
Upvotes: 3 <issue_comment>username_3: Since both AIs know the best possible moves at each step, black would never win, as white already knows all the games that lead to black's victory and would easily avoid them; but since black is also a perfect AI, the optimal responses to white's moves are fixed. So the logical reasoning is that these two perfect AIs would not make any moves at all if they know their opponent is a perfect AI as well. Both of them know the outcome of the match even before it begins, which my gut feeling says is always a draw. The correct question is: does the nth move matter?
Upvotes: -1 <issue_comment>username_4: No, that doesn't have to be the case, because from each machine's point of view the game is different, so they will have different possibilities, that is, different paths to follow. Something different would be if two machines with the same brain played against the same opponent with the same parameters; then we can say the outcome will probably be the same, if they are "programmed in a deterministic way".
Upvotes: 0 |
2018/10/02 | 1,378 | 5,694 | <issue_start>username_0: **The dialog context**
Turing proposed at the end of the description of his famous test, "Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?'"1
Turing effectively challenged the 1641 statement of <NAME> in his *Discourse on Method and Meditations on First Philosophy*:
>
> "It never happens that [an automaton] arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do."
>
>
>
Descartes and Turing, when discussing automatons achieving human abilities, shared a single context through which they perceived intelligence. Those that have been either the actor or the administrator in actual Turing Tests understand the context: ***Dialog***.
**Other contexts2**
The context of the dialog is distinct from other contexts such as writing a textbook, running a business, or raising children. If you apply the principle of comparing machine and human intelligence to automated vehicles (e.g. self-driving cars), an entirely different context becomes immediately apparent.
**Question**
Can a brain be intelligent without a body? More generally, does intelligence require a context?
---
**References**
[1] Chapter 1 ("Imitation Game") of *Computing Machinery and Intelligence*, 1950.
[2] [Multiple Intelligences Theory](https://www.intelltheory.com/mitheory.shtml)<issue_comment>username_1: >
> Can a brain be intelligent without a body?
>
>
>
In my opinion, yes, if you give it the right inputs. The brain is like a machine and its behavior depends on its architecture and the interaction with the environment, whether it is the internet or anything else, so it all boils down to the actual architecture of the system.
Intelligence is just an information processing system. A human gets info from his/her eyes, ears, or other sensors, then the brain does info processing and storage. One can potentially replace our sensors with other sensors that acquire the info from the world and send it to the brain.
Upvotes: 1 <issue_comment>username_2: It depends what you mean by intelligence. A robot that acts has a different sort of intelligence than a neural net that merely maps inputs to outputs. Bit patterns within a robot brain have meaning, whereas the inputs and outputs of a bare network gain meaning only through the larger system in which humans steer input data to it, and act on the basis of the outputs.
In particular, a system that can act needs a causal model of the world that, at least in part, includes itself.
So a non-embodied system may be intelligent in some useful ways, but its intelligence will be radically different than a human intelligence. That's not necessarily a bad thing: we already have lots of humans, and can produce more fairly cheaply. The most cost-effective AIs are surely not human-like ones.
Upvotes: 2 <issue_comment>username_3: >
> Can a brain be intelligent without a body?
>
>
>
If you define "intelligence" as "doing the right thing at the right time", then the statement itself implies some sort of embodied context, whether humanoid, networked or otherwise.
If you have a more existential definition, whereby there are internal workings or goings-on that aren't apparent in any embodied output, then one could argue either way. Akin to the simulated universe theory: either way the outcome is only how we think about it, rather than having an experimental truth.
If one can refine the question so that the outcome can be used practically then I believe that may be more useful.
Upvotes: 1 <issue_comment>username_4: >
> Can a brain be intelligent without a body?
>
>
>
No. Don't forget that the main function of the brain is to provide homeostasis between the body and the environment. Without the body, the utility of the brain is no longer relevant.
Alternatively, why consider intelligence only in the brain? How far does our body extend? [Embodied cognitive science](https://en.wikipedia.org/wiki/Embodied_cognitive_science) asks us to consider the utility of our entire body and its
surrounding extensions to the intelligent faculty; the fact that our thumbs stick out in a direction different from our other four fingers, allowing us to grab things in an effortless way, is itself intelligent.
From this understanding, then, the segregation between the intelligent faculty and the "non-intelligent" seems murky at best. We might as well not consider it unless there is better motivation.
>
> Does intelligence require a context?
>
>
>
Pragmatically speaking, yes. Intelligent behaviour is typically understood as being goal-directed and intentional. Intentionality implies some sort of agency. Being an agent implies some sort of agent/environment relationship, which implies an environment to act as context.
On the other hand, <NAME> notes [in his review](https://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20principle%20A%20unified%20brain%20theory.pdf) that a general principle that the brain (and thus one would imagine, intelligent behaviour) entails is reduction of thermodynamic free energy while maintaining homeostasis with its environment.
This hints at a promise to describe intelligent behaviour purely in terms of thermodynamic processes, but interpretively what this means in a general language is still unclear.
Also, keep in mind that there is no guarantee that we'd recognize a agent as intelligent if its construction is radically different from ours.
Upvotes: 3 [selected_answer] |
2018/10/03 | 575 | 2,537 | <issue_start>username_0: Imagine a system that controls dampers in a complex vent system that has an objective to perfectly equalize the output from each vent. The system has sensors for damper position, flow at various locations and at each vent. The system is initially implemented using a rather small data set or even a formulaic algorithm to control the dampers. What if that algorithm were programmed to "try" different configurations of dampers to optimize the air flows, guided broadly by either the initial (weak) training or the formula? The system would try different configurations and learn what improved results, and what worsened results, in an effort to reduce error (differential outflow).
What is that kind of AI system called? What is that system of learning called? Are there systems that do that currently?<issue_comment>username_1: A close match to your problem definition is reinforcement learning. You can define a reward using the objective function, define a state space for the machine, and then solve the problem with reinforcement learning techniques (close to trial and error, learning preferences from feedback).
Upvotes: 2 <issue_comment>username_2: I think any learning algorithm probably uses trial and error and analysis of the results with the ultimate goal of maximizing utility.
It seems that the recent milestones in AI fall under the general umbrella of [machine learning](https://en.wikipedia.org/wiki/Machine_learning), which includes all forms of [reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning). Essentially, any learning algorithm is using some form of statistical analysis.
* For an umbrella term, I've been using "learning algorithm"
However, there is also a venerable history of less capable adaptive systems such as [self-organizing networks](https://en.wikipedia.org/wiki/Self-organizing_network). (See also [optimal control](https://en.wikipedia.org/wiki/Optimal_control).)
Upvotes: 1 <issue_comment>username_3: I believe "Reinforcement Learning" is the term you are looking for (as mentioned by others as well) *but* keep in mind that the scope of your problem falls under the section of AI that is called **Search**.
[Search algorithms](https://hackernoon.com/search-algorithms-in-artificial-intelligence-8d32c12f6bea) are based upon experimenting with different **actions** (decisions) and selecting the one that minimizes an arbitrary **cost function** (reward), given the current and past problem **states**.
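As a hedged toy illustration (entirely made-up physics, just to show the shape of such a search): a stochastic hill climber that perturbs one damper at a time and keeps any change that does not worsen the flow imbalance.

```python
import random

# Made-up model: each vent's flow is its damper opening divided by a
# fixed duct resistance; the "error" is the spread between the largest
# and smallest vent flow, which is zero when outputs are equalized.
RESISTANCES = [1.0, 1.3, 0.8, 1.1]

def flows(dampers):
    return [d / r for d, r in zip(dampers, RESISTANCES)]

def cost(dampers):
    f = flows(dampers)
    return max(f) - min(f)

random.seed(0)
dampers = [0.5, 0.5, 0.5, 0.5]           # 0 = closed, 1 = fully open
best = cost(dampers)

for _ in range(5000):
    i = random.randrange(len(dampers))   # trial: nudge one damper
    trial = list(dampers)
    trial[i] = min(1.0, max(0.0, trial[i] + random.uniform(-0.05, 0.05)))
    c = cost(trial)
    if c <= best:                        # error: keep only non-worsening moves
        dampers, best = trial, c
```

A real controller would replace `flows()` with sensor readings; reinforcement learning adds states, delayed rewards, and a learned policy on top of this bare trial-and-error loop.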
Upvotes: 2 [selected_answer] |
2018/10/04 | 852 | 3,115 | <issue_start>username_0: I'm looking for **annotated** dataset of traffic signs. I was able to find Belgium, German and many more traffic signs datasets. The only problem is these datasets contain only cropped images, like this:
[](https://i.stack.imgur.com/6SJxW.png)
While I need uncropped, full-scene images (for the YOLO -- You Only Look Once -- network architecture), like this:
[](https://i.stack.imgur.com/X6jI3.png)
I've been looking for hours but didn't find a dataset like this. Does anybody know of such an annotated dataset?
**EDIT:**
I prefer European datasets.<issue_comment>username_1: I searched the web, and no such dataset seems to be published, but [check this out](https://www.mapillary.com/dataset/vistas?pKey=<KEY>&lat=-6.7203390321265175&lng=-76.83556631804362&z=0.3424751357705809)
Upvotes: 0 <issue_comment>username_2: Check this one by [UCSD](http://cvrr.ucsd.edu/LISA/lisa-traffic-sign-dataset.html).
It contains both videos and images related to traffic signs. The annotations are provided in CSV format.
Upvotes: 1 <issue_comment>username_3: **Direct Answer**
The [Belgium TS Dataset](https://btsd.ethz.ch/shareddata/) may be helpful, as well as [The German Traffic Sign Detection Benchmark](http://benchmark.ini.rub.de/?section=gtsdb&subsection=dataset).
**Additional Notes Based on Question Author's Idea**
The idea in the question author's addendum of placing signs onto street sides and corners is a good one, but to do it repeatably and in a way that doesn't bias the training is its own research project. However, it is a good research direction. What would be of benefit to AV researchers worldwide is a multi-network topology and equilibrium strategy with the objective to create the following data generation features.
* Street sign symbol inputs in image form, with or without cropping, as movie frame sequences or single still shots, or from SVG files.
* Annotation generation using partially human-labelled data.
* 3D analysis of sign angle and perspective setting so that the images appear exactly as they would from a vehicle's imaging system.
* Matching of lighting between the superimposed sign and the background scene.
* Automatic blue-screening for the sign image.
This is obviously not a basic data hygiene problem. It is its own AI project, but the return on this research project in terms of furthering the AV technology is immense and may have significant data set statistical advantages over collecting data from the vendors that supply images to Google maps and other Big Data aggregators.
Upvotes: 3 [selected_answer]<issue_comment>username_4: Look at Google's Open Image Dataset @ <https://storage.googleapis.com/openimages/web/index.html>
They provide image-level labels, object bounding boxes, object segmentation masks, and visual relationships.
Here is the [link](https://storage.googleapis.com/openimages/web/visualizer/index.html?set=train&type=detection&c=%2Fm%2F01mqdt) for the traffic signs dataset.
Upvotes: 2 |
2018/10/05 | 920 | 3,709 | <issue_start>username_0: I'd like to implement a partially connected neural network with ~3 to 4 hidden layers (a sparse deep neural network?) where I can specify which node connects to which node from the previous/next layer. So I want the architecture to be highly specified/customized from the get-go and I want the neural network to optimize the weights of the specified connections, while keeping everything else 0 during the forward pass AND the backpropagation (connection does not ever exist).
I am a complete beginner in neural networks. I have been recently working with tensorflow & keras to construct fully connected deep networks. Is there anything in tensorflow (or something else) that I should look into that might allow me to do this? I think with tf, I should be able to specify the computational graph such that only certain connections exist but I really have no idea yet where to start from to do this...
I came across papers/posts on network pruning, but it doesn't seem really relevant to me. I don't want to go back and prune my network to make it less over-parameterized or eliminate insignificant connections.
I want the connections to be specified and the network to be relatively sparse from the initialisation and stay that way during the back-propagation.
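For future readers, here is one hedged sketch of the core idea in plain NumPy (my own illustration, not a library recipe): a dense layer whose weight matrix is multiplied elementwise by a fixed binary mask, so that masked-out connections contribute nothing in the forward pass and receive no gradient updates. In TensorFlow/Keras, I believe the same effect can be obtained by multiplying the layer's kernel by a constant mask inside a custom layer (or via a custom kernel constraint).

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 3
mask = np.array([[1, 0, 0],
                 [1, 1, 0],
                 [0, 1, 1],
                 [0, 0, 1]], dtype=float)   # 1 = connection exists
W = rng.normal(size=(n_in, n_out)) * mask   # start out sparse

x = rng.normal(size=n_in)
y = x @ (W * mask)                  # forward: dead links contribute nothing

grad_out = np.ones(n_out)                # pretend upstream gradient
grad_W = np.outer(x, grad_out) * mask    # backward: dead links get no update
W -= 0.1 * grad_W

assert np.all(W[mask == 0] == 0.0)       # dead links are still exactly zero
```

Because the mask is applied in both passes, the specified connectivity is preserved through training, which is exactly the "sparse from initialization" behaviour asked for.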
2018/10/05 | 325 | 1,168 | <issue_start>username_0: I implemented an image segmentation pipeline and I trained it on the [DICOM dataset](https://en.wikipedia.org/wiki/DICOM). I compared the results of the model with manual segmentation to find the accuracy. Are there other methods for evaluation?<issue_comment>username_1: See:
>
> username_1: [A Survey of Semantic Segmentation](https://arxiv.org/pdf/1602.06541.pdf), Section III
>
>
>
Subsection A is about metrics and B is about datasets.
Metrics include: accuracy, IoU, frequency weighted IoU, F-beta score, speed, ...
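As a small hedged sketch (my own illustration, not taken from the survey), per-class IoU can be computed directly from two label maps, prediction vs. manual segmentation:

```python
import numpy as np

def iou_per_class(pred, target, num_classes):
    # IoU for class c = |pred==c AND target==c| / |pred==c OR target==c|
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append(float(inter) / float(union) if union else float("nan"))
    return ious

pred   = np.array([[0, 0, 1], [1, 1, 2]])
target = np.array([[0, 1, 1], [1, 1, 2]])
print(iou_per_class(pred, target, 3))   # → [0.5, 0.75, 1.0]
```

Mean IoU is then just the average over the (non-NaN) classes; frequency-weighted IoU instead weights each class by its pixel count in the target.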
Upvotes: 2 <issue_comment>username_2: The paper referenced by username_1 is the go-to for semantic segmentation. However, I would also like to add the Panoptic Quality metric as an aggregated way to measure both the detection task and the segmentation task of the model.
It is a very well-known and widely used metric, since it is the standard metric for the COCO dataset (segmentation).
[This is the paper](https://arxiv.org/abs/1801.00868) where the metric is proposed.
And here is the metric:
[](https://i.stack.imgur.com/DEJa8.png)
Upvotes: 1 |
2018/10/06 | 446 | 1,521 | <issue_start>username_0: With reference to the research paper entitled [Sentiment Embeddings with Applications to Sentiment Analysis](https://ieeexplore.ieee.org/document/7296633), I am trying to implement its sentiment ranking model in Python, for which I am required to optimize the following hinge loss function:
$$\operatorname{loss}\_{\text {sRank}}=\sum\_{t}^{T} \max \left(0,1-\delta\_{s}(t) f\_{0}^{\text {rank}}(t)+\delta\_{s}(t) f\_{1}^{\text {rank}}(t)\right)$$
Unlike with the usual mean squared error, I cannot work out its gradient to perform backpropagation.
How do I calculate the gradient of this loss function?
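For what it's worth, here is a hedged sketch of the (sub)gradient (my own derivation, not taken from the paper): the hinge $\max(0, z)$ is differentiable everywhere except at $z = 0$, with derivative $0$ for $z < 0$ and $1$ for $z > 0$, so each sample contributes:

```latex
% Per-sample term, with z(t) = 1 - \delta_s(t) f_0^{\text{rank}}(t) + \delta_s(t) f_1^{\text{rank}}(t)
\frac{\partial \operatorname{loss}_{\text{sRank}}}{\partial f_0^{\text{rank}}(t)} =
\begin{cases}
-\delta_s(t) & \text{if } z(t) > 0,\\
0 & \text{otherwise,}
\end{cases}
\qquad
\frac{\partial \operatorname{loss}_{\text{sRank}}}{\partial f_1^{\text{rank}}(t)} =
\begin{cases}
+\delta_s(t) & \text{if } z(t) > 0,\\
0 & \text{otherwise.}
\end{cases}
```

At the kink $z(t) = 0$ one conventionally uses the subgradient $0$; these per-sample values are then backpropagated through the networks computing $f_0^{\text{rank}}$ and $f_1^{\text{rank}}$ exactly as with any other loss (the same trick used for ReLU).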
2018/10/06 | 1,702 | 7,224 | <issue_start>username_0: I have a neural network with the following structure:
[](https://i.stack.imgur.com/eHMGH.jpg)
I am expecting specific outputs from the neural network which are the target values for my training. Let's say the target values are 0.8 for the upper output node and -0.3 for the lower output node.
The activation function used for the first 2 layers is ReLU or LeakyReLU, while the last layer uses atan as its activation function.
For backpropagation, instead of adjusting values to make the network's output approach (0.8, -0.3), is it suitable to use the inverse function of atan (which is tan itself) to get "the ideal pre-activation value of the output layer, i.e. the inputs multiplied by weights and adjusted by biases"?
The tan of 0.8 and -0.3 is 0.01396 and -0.00524 approximately.
My algorithm would then adjust the weights and biases of the network so that the "pre-activated output" of the output layer (basically sum(output\_layer\_weights \* output\_layer\_inputs) + output\_layer\_biases) approaches 0.01396 and -0.00524.
Is this suitable?
Anyway, what you are essentially doing with your idea is "cutting off" the last layer of the Neural Network from the perspective of your learning algorithm (typically backpropagation). Whatever weights you have between the last hidden layer and the output layer will be fixed to their initial values. The last hidden layer can actually be viewed as an "output" layer, since you also have fixed targets that you want to converge towards for them.
Whether this makes your learning process better/faster/easier, or worse/slower/harder seems to be very much dependent on how the weights between your last hidden layer and your output layer are initialized. For example:
* If those weights are initialized to all-zero, your "real" output layer is doomed to always predict zeros, so your problem becomes impossible to solve.
* If those weights are initialized to implement the identity function, this becomes 100% equivalent to the case you would have if you'd simply cut off the last layer and train that in the traditional sense (i.e. you effectively have one layer less than you really do).
* If those weights are initialized randomly, it looks to me like you have a post-processing step consisting of a random projection. Such a random projection may be beneficial for training (random projections can be useful for dimensionality reduction, or for, in combination with the subsequent non-linear function, turning an otherwise linear function into a non-linear function).
I don't think it'd very often be better than actually have an extra "real", trainable layer with a non-linear activation function though. I suspect such a "non-trainable" extra layer can sometimes be better than not having anything there, but I don't think it'd often be better than having a real, trainable layer.
Upvotes: 1 <issue_comment>username_2: The main difference your change would have is to allow you to apply a loss function to a different part of the network. This may affect training.
If you keep the same loss function (e.g. MSE), but apply it to the pre-transformed values, then you will have changed the objective of the network, perhaps significantly. Whether or not this is a good thing depends on how much you needed the original loss function. However, the fact that it would result in a different training target is usually going to be a bad thing if your original training target was correct. This will also be true if you pick a new arbitrary loss function that seems to fit the pre-transform representation better.
If you engineer a "correct" loss function such that the objective of the network remains unchanged, then the behaviour of the network will not change much - probably not at all. However, in some cases this can lead to more stable and/or faster training - it is often used for classifiers to avoid need to use exponentiation, see [tf.nn.softmax\_cross\_entropy\_with\_logits](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits) in TensorFlow, which does exactly this.
Upvotes: 2 <issue_comment>username_3: Be careful to specify exactly what you mean by `adjust weights and biases of the network so that the "pre-activated output" ... approaches ...`.
When training a neural network, one minimizes a loss function. This loss function is determines how important the deviation from `0.01396` is compared to the deviation of the other node from `-0.00524`. By transforming the target labels backwards you should also express the original loss function in terms of the back-transformed labels.
---
What one can do in some cases is to combine the input to the last layer's activation function with the loss function and algebraically simplify the resulting expression.
This concept is for example implemented in Tensorflow's [tf.nn.sigmoid\_cross\_entropy\_with\_logits](https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits). This function can be used for the case of a single output with sigmoid activation function and binary cross-entropy loss (a similar function also exists for the case of multiple output nodes with a softmax activation function).
Instead of first passing the values through a sigmoid activation and then calculating the binary cross-entropy with respect to the target label, it combines the two expressions and uses an equivalent but simpler expression.
If you look at the documentation of this function, you'll see that the number of transcendental functions (which are computationally expensive to calculate) can be reduced from
`loss(x,z) = z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))`
to
`loss(x,z) = max(x, 0) - x * z + log(1 + exp(-abs(x)))`
where `x` is the input to the (sigmoid) nonlinearity (corresponding to the output of the green nodes in your diagram), also called 'logits' and `z` is the target label.
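The saving can be checked numerically. A small NumPy sketch (my own illustration, not code from TensorFlow) comparing the direct expression with the simplified one:

```python
import numpy as np

def naive_loss(x, z):
    """Binary cross-entropy computed the direct way: sigmoid, then logs.
    Overflows to inf for large |x| because 1/(1+exp(-x)) saturates."""
    p = 1.0 / (1.0 + np.exp(-x))
    return -z * np.log(p) - (1.0 - z) * np.log(1.0 - p)

def stable_loss(x, z):
    """The algebraically simplified form from the documentation:
    max(x, 0) - x * z + log(1 + exp(-|x|))."""
    return np.maximum(x, 0.0) - x * z + np.log1p(np.exp(-np.abs(x)))

print(float(naive_loss(2.0, 1.0)), float(stable_loss(2.0, 1.0)))
```

For moderate `x` the two agree to machine precision, while for large `|x|` only the simplified form stays finite.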
Upvotes: 1 <issue_comment>username_4: For the above stated artificial network, these two training scenarios are similar.
* Training to converge to the ideal output vector at the point after the last layer's activation functions are applied
* Training to converge to the vector formed by applying $\tan$ functions to each component of the ideal output vector, when convergence occurs at the point just after the vector-matrix multiplication with the last layer's parameters, prior to the last layer's $\arctan$ activation functions
The distinctions between them include the following.
* The application of gradients and the associated code must be adapted to the new starting point of backpropagation, which moves to before the final $\arctan$ activation functions.
* The slope and curvature of the loss function will differ if the same loss function is used for the two scenarios, so the accuracy, speed, and reliability of convergence will also be different.
Upvotes: 2 [selected_answer]
2018/10/07 <issue_start>username_0: I want to implement a neural network on a big dataset. But the training time is long (~1h30 per epoch). I'm still in the development process, so I don't want to wait such a long time just to end up with poor results.
[This](https://blog.slavv.com/37-reasons-why-your-neural-network-is-not-working-4020854bd607) and [this](https://www.reddit.com/r/MachineLearning/comments/5pidk2/d_is_overfitting_on_a_very_small_data_set_a/) suggest that overfitting the network on a very small dataset (1 ~ 20 samples) and reaching a loss near 0 is a good start.
I did it and it works great. However, I am looking for the next step to validate my architecture. I tried to overfit my network on 100 samples, but I can't reach a loss near 0 in a reasonable time.
How can I ensure the results given by my NN will be good (or not), without having to train it on the whole dataset ?<issue_comment>username_1: >
> How can I ensure the results given by my NN will be good (or not), without having to train it on the whole dataset ?
>
>
>
You can't.
If you're interested in diagnostic techniques for neural networks, read [section 2.5 of my Master's thesis](https://arxiv.org/pdf/1707.09725)
Upvotes: 1 <issue_comment>username_2: Don't know anything about your dataset, but maybe by using clustering\* on it, you can get the N "most distinct" examples and train only on them. This obviously will not give you the same performance as if the network had seen all the examples, but at least this way you will show it "diverse" examples.
\*That is, of course, if you have time for that.
Upvotes: 1 <issue_comment>username_3: You can try to train it on 1% of the data, then on 2%, 3%, etc. Then plot the results and see whether, and how, increasing the data increases the performance. Not sure if that's the correct answer, but at least you can iterate on this method pretty fast.
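A sketch of that iteration (the two-blob dataset and the nearest-centroid "model" below are toy stand-ins for your real data and network, just to show the plumbing):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in dataset: two interleaved Gaussian blobs.
n = 1000
X = np.empty((n, 2))
X[0::2] = rng.normal(0.0, 1.0, (n // 2, 2))
X[1::2] = rng.normal(3.0, 1.0, (n // 2, 2))
y = np.tile([0, 1], n // 2)
X_train, y_train, X_test, y_test = X[:800], y[:800], X[800:], y[800:]

def accuracy_when_trained_on(k):
    """Fit a trivial nearest-centroid 'model' on the first k training
    samples and score it on the fixed held-out split."""
    Xk, yk = X_train[:k], y_train[:k]
    c0, c1 = Xk[yk == 0].mean(axis=0), Xk[yk == 1].mean(axis=0)
    pred = (np.linalg.norm(X_test - c1, axis=1)
            < np.linalg.norm(X_test - c0, axis=1)).astype(int)
    return float((pred == y_test).mean())

# Train on growing fractions of the training split, then plot the curve.
fractions = [0.01, 0.02, 0.05, 0.1, 0.25, 0.5, 1.0]
curve = [accuracy_when_trained_on(int(800 * f)) for f in fractions]
print([round(a, 2) for a in curve])
```

If the curve is still climbing at 100% of the subset, more data (or more epochs) is likely to help; if it has flattened early, the bottleneck is probably elsewhere.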
Upvotes: 3 [selected_answer]
2018/10/09 <issue_start>username_0: Almost all the convolutional neural network architectures I have come across have a square input size, like $32 \times 32$, $64 \times 64$ or $128 \times 128$. In practice, however, we might not have a square image in every scenario. For example, we could have an image of size $384 \times 256$.
My question is how do we handle such images during
1. training,
2. development, and
3. testing
of a neural network?
Do we force the image to resize to the input of the neural network or just crop the image to the required input size?<issue_comment>username_1: I think the squared image is more a choice for simplicity.
There are two types of convolutional neural networks
* **Traditional CNNs**: CNNs that have fully connected layers at the end, and
* **fully convolutional networks (FCNs)**: they are only made of convolutional layers (and subsampling and upsampling layers), so they do not contain fully connected layers
With traditional CNNs, the inputs always need to have the same shape, because the last convolutional layer is flattened into a fixed-size vector. Since the flatten layer has a fixed size, the feature map of the layer before it must have a fixed shape, and so the inputs (images) must too.
However, in FCN, you don't flatten the last convolutional layer, so you don't need a fixed feature map shape, and so you don't need an input with a fixed size.
In both cases, you don't need a squared image. You just have to be careful in the case you use CNN with a fully connected layer, to have the right shape for the flatten layer.
For instance, if you have an input of size $320 \times 160$ and 3 pooling layers (each halving the resolution), the output of the last convolutional layer is $40 \times 20 \times c$ (with $c$ the number of filters/channels), so the flatten layer just needs $40 \times 20 \times c$ neurons.
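This arithmetic is easy to check with a tiny helper (assuming, as in the example, that each pooling layer halves both spatial dimensions):

```python
def feature_map_size(height, width, num_pool_layers, pool=2):
    """Spatial size of the feature map after stacked pooling layers."""
    for _ in range(num_pool_layers):
        height //= pool
        width //= pool
    return height, width

print(feature_map_size(320, 160, 3))  # (40, 20): flatten needs 40*20*c units
```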
If you create a new network, just design it to handle a rectangular image.
If you want to use an already pre-trained one, I think the better choice is to resize the image.
If the information in the cropped parts is important, your prediction may be wrong (it depends on whether the object of interest is in the parts of the image that are cropped). Actually, in YOLO (an object recognition network), images are resized if they don't fit the input requirements. [See figure 1 of the YOLO paper](https://pjreddie.com/media/files/papers/yolo.pdf). This is because you don't need a high resolution to detect an object (for example, the CIFAR dataset has images of shape $32 \times 32$, but the network can still predict the correct label). So, I think that resizing your image may not affect the prediction much (unless the new size is very different from the original).
Upvotes: 3 <issue_comment>username_2: If you have a rectangular image and you are using existing models (or existing code), then you have to add an input pre-processing pipeline which transforms the image to standard dimensions. This is very common in computer vision, and both PyTorch and TensorFlow have support for easily adding an input pre-processing pipeline for such a transformation.
Also, if you have a fixed size rectangular image data, then you can design your own network architecture (or initial module) which takes image features into account by using asymmetric pooling and convolutions.
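As an illustration of such a pre-processing step, here is a minimal nearest-neighbour resize in plain NumPy. In practice you would use the framework helpers (e.g. `torchvision.transforms.Resize` or `tf.image.resize`); this sketch only shows the idea:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of an (H, W, C) image array."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows][:, cols]

rectangular = np.zeros((384, 256, 3), dtype=np.uint8)
print(resize_nearest(rectangular, 224, 224).shape)  # (224, 224, 3)
```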
Upvotes: 1
2018/10/14 <issue_start>username_0: So let's say you had a really nice day in a flight simulator and you are getting videos of this type of quality:
[](https://i.stack.imgur.com/ftsYV.png)
This is Full HD (1080p), but heavily compressed. You can literally see the pixels. Now I tried to use something like RAISR, and this [python implementation](https://github.com/hugopilot/raisr), but it only scales the image up and does not 'fix the thicc pixels'.
So is there a type of AI that can fix this kind of video/photo into reasonable quality? I just want to get rid of those pixels and the image artefacts that were generated during compression.<issue_comment>username_1: I've answered part of the question below, i.e., whether the photo/video can be decompressed in theory.
If the RGB values are continuous (a well-behaved function), interpolation should generate the pixels in between in a predictable pattern. This works for common cases, like skin tones in old pictures of models, where plenty of data is available today and the examples resemble each other. In outlier cases like this picture, however, where the design of a circular cockpit meter may differ from plane to plane, or where too much data is lost, say, of a distant object, no neural network should be able to fix it, because an orange blur could be an orange, the sun, or many other things.
One of the solutions to such things can be recognizing context, but reconstructing the pixels exactly enough should be difficult (I think impossible), because the sun will be very bright while an orange might have some spots.
In short, average things can be done, not outliers.
Upvotes: 0 <issue_comment>username_2: Check out the [source code of Deep Image Prior](https://github.com/DmitryUlyanov/deep-image-prior); it does a remarkable job of guessing what's missing to repair images with a variety of damage.
Upvotes: 1
2018/10/15 <issue_start>username_0: In Keras, when we use an LSTM/RNN model, we need to specify the number of units [e.g., LSTM(128)]. I have a doubt regarding how it actually works. From the LSTM/RNN unfolding image or description, I found that each RNN cell takes one time step at a time. What if my sequence is longer than 128? How should I interpret this? Can anyone please explain? Thanks in advance.<issue_comment>username_1: In Keras, what you specify is the hidden layer size. So :
```
LSTM(128)
```
gives you a Keras layer representing an LSTM with a hidden layer size of 128.
As you said :
>
> From the LSTM/RNN unfolding image or description, I found that each RNN cell take one time step at a time
>
>
>
So if you picture your RNN for one time step, it will look like this :
[](https://i.stack.imgur.com/zbbpu.png)
And if you unfold it **in time**, it look like this :
[](https://i.stack.imgur.com/0yUiz.png)
You are not limited in your sequence size, this is one of the feature of RNN : since you input your sequence element by element, the size of the sequence can be variable.
That number, 128, represents just the size of the hidden layer of your LSTM. You can see the hidden layer of the LSTM as the memory of the RNN.
Of course, the goal is not for the LSTM to remember everything in the sequence, just the links between elements. That's why the size of the hidden layer can be smaller than the size of your sequence.
Sources :
* [Keras documentation](https://keras.io/layers/recurrent/#lstm)
* [This blog](http://adventuresinmachinelearning.com/keras-lstm-tutorial/)
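To make the point concrete that the hidden size is independent of the sequence length, here is a pure-NumPy sketch of a simplified recurrence (vanilla-RNN-style, not a full LSTM, with random untrained weights): the same fixed-size state is simply updated once per element, so sequences longer than 128 pose no problem.

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 10, 128  # 128 is the state size, not a length limit

# Fixed-size parameters: their shapes do not depend on sequence length.
W_x = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))

def run_rnn(sequence):
    """Feed the sequence in element by element; return the final state."""
    h = np.zeros(hidden_dim)
    for x_t in sequence:
        h = np.tanh(W_x @ x_t + W_h @ h)
    return h

for length in (5, 128, 500):
    final_h = run_rnn(rng.normal(size=(length, input_dim)))
    print(length, final_h.shape)  # the state is always (128,)
```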
Edit
----
From [this blog](https://skymind.ai/wiki/lstm#long) :
>
> The larger the network, the more powerful, but it’s also easier to overfit. Don’t want to try to learn a million parameters from 10,000 examples – parameters > examples = trouble.
>
>
>
So the consequence of reducing the size of the LSTM's hidden state is a simpler model: it might not be able to capture the links between the elements of the sequence. But if you make the size too big, your network will overfit, and you absolutely don't want that.
Another really good blog on LSTM : [this link](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)
Upvotes: 2 [selected_answer]<issue_comment>username_2: Although this question has been answered, I'd add a couple of remarks on general neural network design.
As you know, every NN has three types of layers: input, hidden, and output. Once the network is initialized, you can iteratively tune the configuration during training.
To optimize the network configuration we can use pruning.
Pruning describes a set of techniques to trim network size (by nodes, not layers) to improve computational performance and sometimes resolution performance. The gist of these techniques is removing nodes from the network during training by identifying those nodes which, if removed from the network, would not noticeably affect network performance (i.e., resolution of the data). (Even without using a formal pruning technique, you can get a rough idea of which nodes are not important by looking at your weight matrix after training; look for weights very close to zero; it's the nodes on either end of those weights that are often removed during pruning.)
You can find more here: <https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw>
Upvotes: 0
2018/10/15 <issue_start>username_0: *Following up on my [question](https://ai.stackexchange.com/questions/8338/interpretation-of-a-good-overfitting-score) about my overfitting network*
My deep neural network is over-fitting :
[](https://i.stack.imgur.com/6xKGF.png)
I have tried several things :
* Simplify the architecture
* Apply more (and more !) Dropout
* Data augmentation
But I always reach similar results: training accuracy eventually goes up, while validation accuracy never exceeds ~70%.
I think I simplified the architecture / applied dropout enough, because my network becomes even too dumb to learn anything and returns random results (3-class classifier => 33% is random accuracy), even on the training dataset:
[](https://i.stack.imgur.com/nh4Bs.png)
My question is: **Is this accuracy of 70% the best my model can reach?**
If yes :
* Why does the training accuracy reach such high scores, and so fast, given that this architecture seems incapable of it?
* My only option to improve the accuracy is then to change my model, right ?
If no :
* What are my options to improve this accuracy ?
*I've tried a bunch of hyperparameters, and a lot of the time, depending on these parameters, the accuracy does not change much, always reaching ~70%. However, I can't exceed this limit, even though my network seems to reach it easily (short convergence time).*
Edit
----
Here is the Confusion matrix :
[](https://i.stack.imgur.com/S2Hro.png)
I don't think the data or the balance of the class is the problem here, because I used a well-known / explored dataset : [SNLI Dataset](https://nlp.stanford.edu/projects/snli/)
And here is the learning curve :
[](https://i.stack.imgur.com/MjFPu.png)
*Note : I used accuracy instead of error rate as pointed by the resource of Martin Thoma*
It's a really ugly one. I guess there is some problem here.
Maybe the problem is that I used the result after 25 epochs for every value. So with little data, the training accuracy doesn't really have time to converge to 100%. And with more training data, as shown in the earlier graphs, the model overfits, so the accuracy is not the best one.<issue_comment>username_1: I think sometimes it can also help to examine your test and training sets.
Fundamentally, your data was produced by an underlying process/system that has certain properties. The system can have many "states", and all the possible states form the state space. If you have really tried things like dropout and regularization, my guess would be that the test set is somehow different from your train set. It is possible that your training set only takes samples from one part of the state space (i.e., the samples in your training set might all be similar while the test set contains different samples; imagine you are classifying humans, all of your training samples have a class label of 1, meaning they all contain humans, and none of your test samples contain humans. Good luck with that!). Some questions to ask:
1. Are you combining datasets from different sources? If so:
If you have "n" sources of data, you need to make sure that your training set has many samples from each of the "n" sources of data and your test set has samples from each of the "n" sources.
2. Are you shuffling your data enough and randomly putting samples in both the training and test sets? This relates to the human example I gave, make sure your training set has a little bit of everything (different combinations of inputs and/or outputs) and your testing set has a little bit of everything (different combinations of inputs and/or outputs).
Upvotes: 0 <issue_comment>username_2: I identified the origin of this overfitting..
Origins
-------
I tried a lot of models, putting more and more dropout, simplifying as much as I could.
No matter what I did, after a few epochs of good learning, invariably my loss function was going up. I tried simpler and simpler models, always with the same overfitting behavior.
What bugged me at that moment is that no matter what kind of model I used, however deep or complex, the accuracy was always fine, stabilized at some nice level.
So I tried the simplest model I could imagine : Input => Dense with 3 hidden units => Output. Finally I got random results, with a 33% accuracy !
From here, I built my network again, layer by layer, to see which one was causing the overfitting.
And it was the **Embedding layer**.
Even with a simple network like Input => Embeddings => Dense with 3 hidden units => Output, the model was overfitting.
How to solve it
---------------
In Keras, simply instantiate the Embeddings layer with `trainable=False`. After doing this, no more overfit.
In my opinion, this is quite counter-intuitive: I want my embeddings to evolve with the data I show to the network. But it looks like I can't...
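A rough parameter count shows why the embedding layer is the likely culprit: with SNLI-like numbers (the vocabulary size and embedding dimension below are hypothetical placeholders, and the sequence dimension is ignored for simplicity), the embedding matrix dwarfs the rest of this tiny model, so leaving it trainable adds millions of free parameters to fit:

```python
def trainable_parameters(vocab, embed_dim, hidden, classes, freeze_embeddings):
    """Rough count for Input => Embedding => Dense(hidden) => Output,
    ignoring the sequence dimension for simplicity."""
    embedding = vocab * embed_dim
    dense = embed_dim * hidden + hidden          # weights + biases
    output = hidden * classes + classes
    return dense + output + (0 if freeze_embeddings else embedding)

# Hypothetical 40k-word vocabulary with 300-d embeddings:
print(trainable_parameters(40_000, 300, 3, 3, freeze_embeddings=False))  # 12000915
print(trainable_parameters(40_000, 300, 3, 3, freeze_embeddings=True))   # 915
```

With `trainable=False` on the Embedding layer, the huge matrix is excluded from the trainable count, which matches the observation above that freezing it stopped the overfitting.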
Upvotes: 3 [selected_answer]
2018/10/16 <issue_start>username_0: This is a question about pattern recognition and feature extraction.
I am familiar with Hough transforms, the Fast Radial Transform and variants (e.g., GFRS), but these highlight circles, spheres, etc.
I need an image filter that will highlight the centroid of a series of spokes radiating from it, such as the center of an asterisk or the spokes of a bicycle wheel (even if the round wheel is obscured). Does such a filter exist?<issue_comment>username_1: The first step would be getting the object out of the scene. This bit is not trivial in your case; however, there are many methods to choose from. I suggest reading about the watershed segmentation algorithm.
The second part is easier. Once you have a single segmented object at hand, perform noise removal. The next step is to extract the contours. Find the center of gravity, transform the coordinates to polar, and represent these contours as a function with degrees on the x-axis and distance from the center on the y-axis. Take the Fourier transform of this function. If the shape is symmetrical, there will be few non-zero entries and a large spike in the spectrum.
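A sketch of that polar-contour idea in NumPy (the 6-fold-symmetric contour below is synthetic, purely for illustration): the dominant non-DC harmonic of $r(\theta)$ reveals the symmetry order.

```python
import numpy as np

def symmetry_order(xs, ys):
    """Represent a closed contour as r(theta) around its centroid and
    return the dominant non-DC harmonic of that 1-D function."""
    cx, cy = np.mean(xs), np.mean(ys)
    theta = np.arctan2(ys - cy, xs - cx)
    r = np.hypot(xs - cx, ys - cy)
    spectrum = np.abs(np.fft.rfft(r[np.argsort(theta)]))
    return int(np.argmax(spectrum[1:])) + 1

# Synthetic 6-fold-symmetric contour, r(t) = 10 + 2*cos(6t).
t = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
radius = 10.0 + 2.0 * np.cos(6.0 * t)
print(symmetry_order(radius * np.cos(t), radius * np.sin(t)))  # 6
```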
Upvotes: 0 <issue_comment>username_2: The Hough Transform extended to orthogonal ellipses uses this model, accumulating on $\theta$ for all $\{x, y\}$ with parameter matrix
\begin{Bmatrix}
c_x & c_y \\
r_x & r_y
\end{Bmatrix}
where
$$1 = \dfrac {(x - c_x) \, \cos \theta} {r_x} + \dfrac {(y - c_y) \, \sin \theta} {r_y}$$
The question is looking to detect the normal lines, so any of the several algorithms for the above model can be modified to accumulate on $r$ for all $\{x, y\}$ with parameter matrix
\begin{Bmatrix}
c_x & c_y \\
r_x & r_y
\end{Bmatrix}
where
$$0 = \dfrac {x - c_x} {r_x} + \dfrac {y - c_y} {r_y}$$
Lines that intersect $(c_x, c_y)$ don't rely on $r_x$ or $r_y$. However, it may be useful to recognize that, if radially equally spaced, viewing the lines from a position other than one that projects into the plane of the lines at $(c_x, c_y)$ will present a line density that is a function of $\arctan(r_x, r_y)$.
Upvotes: 3 [selected_answer]
2018/10/17 <issue_start>username_0: I have a collection of scanned documents (which come from newspapers, books, and magazines) with complex text alignments, i.e. the text could be at any angle w.r.t. the page. I can do a lot of processing for different kinds of feature extraction. However, I want to know some robust methods that do not need many features.
Can machine learning be helpful for this purpose? How could I use machine learning to detect text and non-text regions in these scanned documents?<issue_comment>username_1: TextDetector, Tesseract and other open source packages implement text detection (object detection for text). There's also a pretrained Tensorflow model that does text detection. A text detector will give you the bounding boxes in your image for any text that it recognizes. In the case of Tesseract, it will also output the text (OCR is built in). So you can read the code in these packages to get ideas for your own machine learning pipeline. Basically you need both a regressor (for the bounding boxes) and a classifier (to detect whether the box contains text or not).
Upvotes: 2 <issue_comment>username_2: Since the document is scanned, it will not be in an open document format so no associated API can be used.
**Approach 1**
Evaluate TextBridge Pro, FreeOCR, and other alternatives that purport page layout detection. If any of them work, drive them programmatically (preferably headless) to read the scanned document, detect page layout and OCR the text, export to a document with an open format, and then use the API
With this approach, the object recognition AI is in the product and development time and resources are saved.
**Approach 2**
Do a 2D FFT, windowing through the page in both directions. See the cosine, trapezoidal, Hamming, and Hanning windows, and apply them in the horizontal and vertical directions. Use Approach 1, assuming those products work with the scanned documents, to label the examples, and then train a DCNN (deep convolutional NN) to recognize, from the 2D FFT output spectra, where the pictures are. By interpolation, a close-to-perfect crop of the images and the text regions can be obtained, with some hyper-parameters on the resulting model.
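A toy version of this windowed-FFT idea (my own sketch, not taken from any of the products mentioned): window a zero-mean grayscale patch with a 2D Hanning window, take its FFT, and score the periodic vertical component that rows of text produce.

```python
import numpy as np

def text_likelihood(patch):
    """Window a zero-mean grayscale patch, take the 2D FFT, and score the
    peak-to-mean ratio of the vertical frequencies: periodic text rows
    produce a strong peak, untextured regions do not."""
    patch = patch - patch.mean()
    h, w = patch.shape
    window = np.outer(np.hanning(h), np.hanning(w))
    spectrum = np.abs(np.fft.fft2(patch * window))
    vertical = spectrum[1:h // 2, 0]
    return float(vertical.max() / (vertical.mean() + 1e-9))

rng = np.random.default_rng(0)
# Synthetic "text": horizontal stripes of ink rows; versus random texture.
text_like = np.tile(np.r_[np.ones(4), np.zeros(4)], 8)[:, None] * np.ones(64)
noise = rng.random((64, 64))
print(text_likelihood(text_like), text_likelihood(noise))
```

Sliding such a score over the page (at several window rotations, since the text may be at any angle) gives a crude text/non-text map that could then label training examples for the DCNN.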
**Approach 3**
This approach is just Approach 2 but preparing the labeled example data set by hand, which may be necessary because the existing software products may not handle the images being laid out at angles other than 0, 90, 180, or 270 degrees.
**Approach 4**
Create an architecture based on feature extraction, and use font rendering libraries to build the back half of an auto-encoder, allowing portions of the image that do not auto-encode to be preserved as x-y coordinate pairs, which will allow the process to skip over the pictures if convergence is set up correctly.
**Final Note**
One can offload some processing to a learning process so that the actual document processing runs faster, but sometimes the preparation of the example data set and the learning consume more resources. That's why those who can assess which approach will cost less, and can recommend the best approach with some reliability, are highly paid.
Upvotes: 1
2018/10/18 <issue_start>username_0: In the book *Reinforcement Learning: An Introduction* (Sutton and Barto, 2018), the authors ask
>
> *Exercise 3.2:* Is the MDP framework adequate to usefully represent all goal-directed learning tasks? Can you think of any clear
> exceptions?
>
>
>
I thought maybe a card game would be an example if the state does not contain any information on previously played cards. But that would mean that the chosen state leads to a system that is not fully observable. Hence, if I track all cards and append them to the state (a state vector with changing dimension), the problem should have the Markov property (no information on past states is needed). This would not be possible if the state is postulated as invariant in an MDP.
If the previous procedure is allowed, then it seems to me that there are no examples where the MDP is not appropriate.
I would be glad if someone could say if my reasoning is right or wrong. What would be an appropriate answer to this question?<issue_comment>username_1: **Background**
The Markov Decision Process is an extension of <NAME>'s action sequence that visualizes action-result sequence possibilities as a directed acyclic graph. One path through the acyclic graph, if it satisfies the Markov Property, is called a Markov Chain.
The Markov Property requires that the probability distribution of future states at any point within the acyclic graph be evaluated solely on the basis of the present state.
Markov Chains are thus a stochastic model, theoretically representing one of the set of possible paths. The action-result sequence is a list of state transitions corresponding to actions chosen solely from each action's previous state and the expectation that the subsequent state will most probably lead to the desired outcome.
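The Markov property can be made concrete in a few lines (a toy 3-state chain, not tied to any particular example):

```python
import numpy as np

# A toy 3-state Markov chain: row s of P is the probability distribution
# of the NEXT state given only the CURRENT state s (the Markov property).
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

rng = np.random.default_rng(0)

def sample_path(start, steps):
    """Sample one path through the chain; each step consults only the
    present state, never the history."""
    state, path = start, [start]
    for _ in range(steps):
        state = int(rng.choice(3, p=P[state]))
        path.append(state)
    return path

print(sample_path(0, 10))
```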
<NAME> based his work on <NAME>'s work on spanning trees, which is based on Euler's initial directed-graph work.
**The Exercise**
Exercise 3.2 was given with two parts.
>
> Is the MDP framework adequate to usefully represent all goal-directed learning tasks?
>
>
> Can you think of any clear exceptions?
>
>
>
The first question is subjective in that it inquires about usefulness but does not define what it means. If "useful" means the MDP will improve the chances of achieving a goal over a random selection of action at each state, then, except in no-win scenarios or the most contrived case where all actions have an equal distribution of probable results, the MDP is useful.
If "useful" means optimal, then there are other approaches, with additional complexity and requiring additional computing resources that will improve odds of goal achievement. These other approaches overcome one or more of the limitations of pure MDP.
**Advancements and Alternatives**
Advancements made to MDP and alternatives to MDP, which number in the hundreds, include these.
* Logical detection of the infeasibility of goal achievement (no win scenario)
* Calculation of probabilities when only partial information is available about the current state
* Invocation of the decision at any point (continuous MDP used in real time systems)
* Probabilities are not known and must be learned from past experience where simple Q-learning is employed
* Past experience is used by statistically relating action-state details to generalizations derived from past action-result sequences or such information acquired or shared
* The action-state decisions, made within the context of an unknown system of changing or not reliably applied rules, can be used to tune a set of fuzzy rules in a fuzzy logic container and utilize fuzzy inference in the decisions
* Bluff and fraud detection
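One of the items above, learning the probabilities from experience with simple Q-learning, can be sketched in a few lines (the one-dimensional corridor environment below is a hypothetical toy, not from the text):

```python
import numpy as np

n_states, n_actions = 5, 2          # corridor 0..4; actions: 0=left, 1=right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.5
rng = np.random.default_rng(0)

def step(s, a):
    """Deterministic toy dynamics: reward 1 on reaching the goal state 4."""
    s2 = min(n_states - 1, max(0, s + (1 if a == 1 else -1)))
    return s2, float(s2 == n_states - 1)

for _ in range(200):                 # episodes
    s = 0
    while s != n_states - 1:
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # The agent never sees the transition probabilities; it learns
        # values implicitly from sampled experience (Q-learning update).
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print([int(np.argmax(Q[s])) for s in range(n_states - 1)])
```

After training, the greedy policy read off the learned Q-table walks right toward the goal, even though no model of the environment was ever given.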
**Card Games**
Game play for a typical card game could make use of MDP, so MDP would be strictly useful, however not optimal. Some of the above decisioning features would be more optimal, particularly those that deal with unknowns and employ rules, since the card game has them.
**Random or Decoupled**
Two obvious cases are (a) a truly randomized action-result world where goal achievement has equal probability no matter the sequence of moves or (b) a scenario where goal achievement is entirely decoupled from actions the actor can take. In those cases, nothing will be useful with regard to the particular objective chosen.
**Challenge**
The way to best learn from the exercise, though, is to find a scenario where MDP would be useless and one of the above listed **Advancements and Alternatives** would be required rather than simply preferred. If you look at the list, there will be some cases that will eventually come to mind. I suggest you think it through, since the goal is to learn from the book.
Upvotes: 3 [selected_answer]<issue_comment>username_2: According to <http://tianlinliu.com/files/notes_exercise_RL.pdf>, the MDP framework may not be feasible for multi-objective tasks.
In contrast, EA-based methods like NSGA-II and NSGA-III can solve multi-objective tasks.
Also, tasks that need more than one past state to predict the next action are not well suited to an MDP. For example, when we predict the next action of a stranger who has just met you at a party, we need to consider all the behaviors he exhibited in the past minutes. That is more suitable than the MDP way, which would simply label the stranger as a "good" guy or a "bad" guy (or a continuous number between good and bad).
Upvotes: 0