2019/09/11
<issue_start>username_0: A single neuron can do linear separation. For example, an XOR simulator network: ``` x1 --- n1.1 \ / \ \/ \ n2.1 /\ / / \ / x2 --- n1.2 ``` Where `x1`, `x2` are the 2 inputs, `n1.1` and `n1.2` are the 2 neurons in the hidden layer, and `n2.1` is the output neuron. The output neuron `n2.1` does a **linear separation**. What about the 2 neurons in the hidden layer? **Is it still called linear separation** (two linear separations joined at the 2 nodes)? Or **polynomial separation of degree 2**? I'm confused about what it's called because there are curvy lines in this wiki article: <https://en.wikipedia.org/wiki/Overfitting> [![Some curvy separation?](https://i.stack.imgur.com/o8idU.png)](https://i.stack.imgur.com/o8idU.png) [![Some other curvy separation?](https://i.stack.imgur.com/UByN2.png)](https://i.stack.imgur.com/UByN2.png)<issue_comment>username_1: KL-divergence is a measure on probability distributions. It essentially captures the information loss between the ground-truth distribution and the predicted one. L2-norm/MSE/RMSE doesn't work well with probabilities because of the power operations involved in calculating the loss. Probabilities, being fractions below 1, are strongly affected by any power operation (square or root), and since we are summing squared differences of probabilities, the summed values are abnormally small; the network barely learns anything, because the random initialization itself starts with an abnormally small loss that stays almost constant. The L1 norm, on the other hand, involves no power operations, making it relatively acceptable. Loss functions such as Kullback-Leibler divergence or Jensen-Shannon divergence are preferred for probability distributions because of the statistical meaning they hold. 
KL-Divergence, as mentioned before, is a statistical measure of information loss between distributions; in other words, assuming $Q$ is the ground truth distribution, KL-Divergence is a measure of how much $P$ deviates from $Q$. Also, for probability distributions, convergence is much stronger in measures of information loss such as KL-Divergence. More clarity on the motivation behind Kullback-Leibler can be found [here](https://math.stackexchange.com/q/90537). Upvotes: 2 <issue_comment>username_2: In the context of Variational Inference (VI): the KL allows you to move from the unknown posterior $p(z \mid x)$, to the known joint $p(z,x)=p(x|z)p(z)$ and optimize only the ELBO. You cannot do this with L2. $p(z|x)$ is the desired posterior, for which you cannot calculate the evidence (i.e., using Bayes' formula we can set: $p(z|x) = \frac{p(x|z)p(z)}{\int\_z p(x|z)p(z)dz}$, and you can't calculate the integral in the denominator [also denoted by $p(x)$] due to its intractability). Now suppose $q$ is a variational distribution (e.g. a family of Gaussians which you can control); VI tries to approximate $p(z|x)$ by $q$ by minimizing their KL divergence. $$KL(q(z)||p(z|x)) = \int\_z q(z) \log \frac{q(z)}{p(z|x)}dz = \mathbb E\_q[\log q(z)]-\mathbb E\_q[\log \frac{p(x|z)p(z)}{p(x)}] =$$ $$ -\mathbb E\_q[\log p(x|z)p(z)] + \mathbb E\_q[\log q(z)] + \mathbb E\_q[\log p(x)] = -ELBO(q) + \log p(x) $$ Since you're only optimizing $q$ (it's the only thing you can control), you can discard the unknown and difficult-to-compute normalizing constant $p(x)$. If you were to use the (squared) L2 norm you would get: $$\int\_z [q(z)-p(z|x)]^2dz = \int\_z [q^2(z)-2q(z)p(z|x)+p^2(z|x)]dz $$ While the 3rd term doesn't depend on $q$, the 2nd term does, and it also requires $p(x)$ to compute. Upvotes: 2 [selected_answer]
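The contrast the first answer draws between MSE and KL-divergence on probability distributions can be illustrated with a small sketch (the distributions below are made up purely for illustration):

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions, assuming q > 0 wherever p > 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # terms with p = 0 contribute 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def mse(p, q):
    return float(np.mean((np.asarray(p, float) - np.asarray(q, float)) ** 2))

p = [0.8, 0.1, 0.1]        # "ground truth" distribution
q_close = [0.7, 0.2, 0.1]  # mildly wrong prediction
q_far = [0.1, 0.1, 0.8]    # badly wrong prediction

# Squaring differences of probabilities shrinks them, so MSE values stay small;
# KL, built from log-ratios, penalizes putting low mass on likely outcomes
# much more sharply.
print("MSE:", mse(p, q_close), mse(p, q_far))
print("KL :", kl_divergence(p, q_close), kl_divergence(p, q_far))
```

Note also that KL is asymmetric: `kl_divergence(p, q)` and `kl_divergence(q, p)` generally differ, which is exactly why the VI derivation in the second answer fixes one particular direction.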
2019/09/12
<issue_start>username_0: When we are working on an AI project, does the domain/context (academia, industry, or competition) make the process different? For example, in competitions I see most participants, even winners, use stacking models, but I have not found anyone implementing them in industry. How about the cross-validation process? I think there is a slight difference between industry and academia. So, does the context/domain of an AI project make the process different? If so, what are the things I need to pay attention to when creating an AI project based on its domain?<issue_comment>username_1: I cannot comment about the process for AI for academia. I can compare AI for competitions and AI for business. To clarify, whatever I say is about ML, not any other AI techniques. The process might be different for other techniques, but most of what I say is general enough that I assume it should still apply. The main difference that I saw while doing ML for a competition vs. for a business was that of focus. When doing it for a competition on Kaggle the focus was mainly on creating the model * the machine learning metrics were specified for you * some data was given to you * the business problem was given to you When doing it for business, what is different is: * given a business problem, finding the parts that can actually benefit from ML. You have to define the ML problem in it and define how it actually benefits the business. This may involve significant discussions with business stakeholders, weighing the pros and cons of doing it versus doing something else, communicating the benefits to the business stakeholders, and taking them into confidence for the process to start * finding the right data for the problem from scratch, ensuring it is collected by the rest of the system or bought from 3rd parties * defining business metrics over and above machine learning metrics. At the end of the day nobody really cares whether the ML model's recall or accuracy is good or bad. 
What is important is the relevant business effect. * making the model, deploying it, and integrating it with the rest of the system. This is important because if your goal is just to make the model, you would not care about the factors associated with actually using it, i.e. latency of predictions, cost of machines needed to run it, etc. * A/B testing for the models, running multiple models in parallel, and dynamically being able to adjust which models to use Hope this gives some idea about the differences between AI for competitions and AI for business. Upvotes: 3 [selected_answer]<issue_comment>username_2: Not very sure about AI in competitions, as I have not taken part in any. On comparing AI in academia and industry, the biggest difference is probably freedom. In academia, considering a research project or so, a large number of experiments and trying new things are encouraged. New learnings are valued, and the work usually involves a rigorous literature survey and studies of previous works. Even if a model performed badly, if there were new learnings one could take from it, it wouldn't be deemed a failure. There is also a lot of data available that can be used for research purposes, and open-source projects used or learned from are always thanked and appreciated. In industry the scene is quite different. There is more of a focus on using pre-trained models or transfer learning. Quite frequently, open-source projects are just cloned, mildly developed, and deployed under the company's name without releasing the code - basically requiring bare minimum effort towards literature. More of a focus was given (*in my case at least*) to reading blog posts and READMEs over the papers themselves, in order to save time. And compute efficiency is key. In industry, the effort is more directed towards scaling these models, building the data pipelines, and satisfying the client's needs. 
Data is another concern in industry, with it being common practice to outsource data collection and preparation to third parties (*usually other companies that specialize in this area*). The key difference, I would say, is the amount of freedom one has in academia, as compared to a strong sense of direction towards a singular goal in industry. AI in industry is mostly in the solutions-and-services sector, making it quite similar to software engineering, broadly speaking. So, summarizing, the domain of the AI project makes a big difference, with the main difference being what part of the project most effort and focus is put into. Upvotes: 1
2019/09/12
<issue_start>username_0: I am confused about maximum likelihood as a classifier. I know what a Bayesian network is, and I know that ML is used for estimating the parameters of models. I also read that there are two methods to learn the parameters of a Bayesian network: MLE and the Bayesian estimator. The questions which confuse me are the following. 1. Can we use ML as a classifier? For example, can we use ML to model users' behaviors to identify their activities? If yes, how? What is the likelihood function that should be optimized? Should I assume a normal distribution over users and optimize it? 2. If ML can be used as a classifier, what is the difference between ML and BN for classifying activities? What are the advantages and disadvantages of each model?<issue_comment>username_1: If you read nothing else, **maximum likelihood estimate => chance that the data predicted is the data observed.** If you have a range of points (2, 3, 4, 5, 71) your MLE is going to favour ~4.5 because of means and standard deviations. MLE speeds up finding good input parameters, usually for a different classifier. To answer your questions: 1) [Columbia University have a great example of using MLE classifiers](https://towardsdatascience.com/bayes-classifier-with-maximum-likelihood-estimation-4b754b641488), where everything is broken down into bitesize (or bytesize) chunks. Read this. Seriously. 2) In short, **MLE is best used for simple, univariable distributions.** It doesn't scale well to big problems but is waaay faster than a Bayesian network for simple tasks like predicting your height based on the heights of your immediate relatives. If you want to get technical, the conditional probability network of the Bayesian model reveals insights faster than the chain multiplication of the more primitive MLE. Hope it helps! Upvotes: 0 <issue_comment>username_2: Yes, it's called hypothesis testing, but normally you need a little bit more than pure MLE. Upvotes: -1
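To make question 1 concrete, here is a sketch of MLE used inside a classifier, on made-up 1-D "activity" data (the class names, means, and spreads are all hypothetical). For a Gaussian likelihood, the maximum-likelihood parameters are just the sample mean and standard deviation; classification then picks the class under whose fitted distribution the observation is most likely:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D feature values for two user activities.
walking = rng.normal(loc=2.0, scale=0.5, size=200)
running = rng.normal(loc=5.0, scale=0.8, size=200)

# MLE step: for a Gaussian, the maximum-likelihood parameters are simply
# the sample mean and the (biased) sample standard deviation.
params = {name: (x.mean(), x.std()) for name, x in
          [("walking", walking), ("running", running)]}

def log_likelihood(x, mu, sigma):
    # Log of the Gaussian density N(x; mu, sigma^2).
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def classify(x):
    # Pick the class whose fitted Gaussian assigns x the highest likelihood.
    return max(params, key=lambda name: log_likelihood(x, *params[name]))

print(classify(2.2))  # a point near the "walking" cluster
print(classify(4.8))  # a point near the "running" cluster
```

Adding class priors to the score turns this into the naive-Bayes-style classifier described in the linked Columbia example.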
2019/09/12
<issue_start>username_0: Say I have x,y data connected by a function with some additional parameters (a,b,c): $$ y = f(x ; a, b, c) $$ Now given a set of data points (x and y) I want to determine a,b,c. If I know the model for $f$, this is a simple curve fitting problem. What if I don't have $f$ but I do have lots of examples of y with corresponding a,b,c values? (Or alternatively $f$ is expensive to compute, and I want a better way of guessing the right parameters without a brute force curve fit.) Would simple machine-learning techniques (e.g. from sklearn) work on this problem, or would this require something more like deep learning? Here's an example generating the kind of data I'm talking about: ```
import numpy as np
import matplotlib.pyplot as plt

Nr = 2000
Nx = 100
x = np.linspace(0, 1, Nx)

f1 = lambda x, a, b, c: a*np.exp(-(x-b)**2/c**2)  # An example function
f2 = lambda x, a, b, c: a*np.sin(x*b + c)         # Another example function

prange1 = np.array([[0, 1], [0, 1], [0, .5]])
prange2 = np.array([[0, 1], [0, Nx/2.0], [0, np.pi*2]])

#f, prange = f1, prange1
f, prange = f2, prange2

data = np.zeros((Nr, Nx))
parms = np.zeros((Nr, 3))
for i in range(Nr):
    a, b, c = np.random.rand(3)*(prange[:, 1] - prange[:, 0]) + prange[:, 0]
    parms[i] = a, b, c
    data[i] = f(x, a, b, c) + (np.random.rand(Nx) - .5)*.2*a

plt.figure(1)
plt.clf()
plt.title('First few rows in dataset')
for i in range(3):
    plt.plot(x, data[i], '.')
    plt.plot(x, f(x, *parms[i]))
``` [![enter image description here](https://i.stack.imgur.com/msnzH.png)](https://i.stack.imgur.com/msnzH.png) Given `data`, could you train a model on half the data set, and then determine the a,b,c values from the other half? I've been going through some sklearn tutorials, but I'm not sure any of the models I've seen apply well to this type of problem. For the Gaussian example I could do it by extracting features related to the parameters (e.g. 
first and 2nd moments, 5% and 95% percentiles, etc.), and feed those into an ML model that would give good results, but I want something that would work more generally without assuming anything about $f$ or its parameters.
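For what it's worth, a generic multi-output regressor can often recover (a, b, c) for the Gaussian example without hand-crafted features. A minimal sketch (the parameter ranges are narrowed slightly so c stays away from zero, and `RandomForestRegressor` is just one arbitrary choice from sklearn, not a recommendation):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
Nr, Nx = 1000, 50
x = np.linspace(0, 1, Nx)
f = lambda x, a, b, c: a * np.exp(-(x - b)**2 / c**2)  # the Gaussian example
lo = np.array([0.2, 0.0, 0.1])  # lower parameter bounds (c kept away from 0)
hi = np.array([1.0, 1.0, 0.5])  # upper parameter bounds

parms = rng.uniform(lo, hi, size=(Nr, 3))
data = np.array([f(x, *p) for p in parms])
data += (rng.random(data.shape) - 0.5) * 0.05  # mild additive noise

# Train on half the dataset, predict (a, b, c) on the other half.
X_train, X_test, y_train, y_test = train_test_split(data, parms, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
mae = np.abs(pred - y_test).mean(axis=0)
print("mean absolute error per parameter (a, b, c):", mae)
```

The predicted parameters can then seed a local curve fit when $f$ is known but expensive, which is usually much cheaper than a brute-force search.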
2019/09/13
<issue_start>username_0: I often read *"the performance of the system is satisfactory"* or *"when your model is satisfactory"*. But what does it mean in the context of Machine Learning? Are there any clear and/or generic criteria for a Machine Learning model to be satisfactory for commercial use? Is the decision of which model to choose, or whether additional model adjustments or improvements are needed, based on data scientist experience, customer satisfaction, or benchmarking against academic or market competition results?<issue_comment>username_1: The answer is "when it works well enough to perform the task that you have set it". It is a good idea to set your performance criteria in advance so that you can clearly identify the goal that you are trying to achieve, and also so that you will know if the model is likely to be successful or not. Upvotes: 3 [selected_answer]<issue_comment>username_2: From what I have observed, the ability to scale an ML model is key. Real-time inference must be quick and cause no delays on the provider side. Being able to deploy the model also carries enormous weight - that is, how easy it would be to build the data pipelines and how easy it would be to integrate it into a web application from the server perspective. Apart from the obvious achievement of set metrics and performance criteria, speed and ease of deployment also play a very important role. There have been scenarios of brilliant solutions being denied (*from what I have seen*) because they exceeded the limits set for time and compute in an application scenario. Upvotes: 1
2019/09/13
<issue_start>username_0: I have a regression MLP network with all input values between 0 and 1, and am using MSE for the loss function. The minimum MSE over the validation sample set comes to 0.019. So how do I express the 'accuracy' of this network in 'lay' terms? If RMSE is 'in the units of the quantity being estimated', does this mean we can say: "The network is on average (1-SQRT(0.019))\*100 = 86.2% accurate"? Also, in the validation data set, there are three 'extreme' expected values. The lowest MSE results in predicted values closer to these three values, but not as close to all the other values, whereas a slightly higher MSE results in the opposite - predicted values further from the 'extreme' values but more accurate relative to all other expected values (and this outcome is actually preferred in the case I'm dealing with). I assume this can be explained by RMSE's sensitivity to outliers?<issue_comment>username_1: You cannot use error to reliably measure accuracy. Error is best used as a measure of how fast the model is currently learning. As an example, using different loss functions (cross entropy vs MSE) results in massively different values for the error at similar accuracy. Also, an error of 0.0000000001 quite often has lower validation-set accuracy than an error of 0.1, as the former is likely overtrained. As for your second question: yes, this is because MSE has a huge bias towards outliers. I have personally found regression networks to struggle in most circumstances, so if it is at all possible to turn the network into a classifier, you may see an improvement. Upvotes: 2 <issue_comment>username_2: Just as a general remark, notice that technically we don't use the term "accuracy" for regression settings, such as yours - only for classification ones. > > If RMSE is 'in the units of the quantity being estimated', does this mean we can say: "The network is on average (1-SQRT(0.019))\*100 = 86.2% accurate"? > > > No. 
The advantage of the RMSE, as you have correctly quoted, is that it is in the same units as your predicted quantity; so, if this quantity is, say, USD, you can say (to the *business* user) that the error of the model is $\sqrt{0.019} \approx 0.138$ USD, and this can be perfectly fine by itself. But you cannot convert it to a percentage - it would be meaningless. If required to give the performance of a *regression* model as a percentage, your best option would be the Mean Absolute Percentage Error ([MAPE](https://en.wikipedia.org/wiki/Mean_absolute_percentage_error)). Upvotes: 1
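The units-vs-percentage distinction in this answer can be shown with made-up numbers (the target and predicted values below are arbitrary):

```python
import numpy as np

y_true = np.array([1.00, 0.80, 1.20, 0.90, 1.10])  # made-up targets
y_pred = np.array([0.95, 0.85, 1.30, 0.80, 1.15])  # made-up predictions

mse = np.mean((y_true - y_pred) ** 2)       # squared units of y
rmse = np.sqrt(mse)                         # same units as y
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100  # a percentage

print(f"MSE  = {mse:.4f} (squared units)")
print(f"RMSE = {rmse:.4f} (units of y)")
print(f"MAPE = {mape:.1f}%")
```

Note that MAPE divides by `y_true`, so it blows up when targets are at or near zero; in that case a percentage-style metric is also a poor fit.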
2019/09/16
<issue_start>username_0: We often hear that artificial intelligence may harm or even kill humans, so it might prove dangerous. How could artificial intelligence harm us?<issue_comment>username_1: tl;dr ----- There are many **valid** reasons why people might fear (or better *be concerned about*) AI, and not all involve robots and apocalyptic scenarios. To better illustrate these concerns, I'll try to split them into three categories. Conscious AI ------------ This is the type of AI that your question is referring to: a super-intelligent conscious AI that will destroy/enslave humanity. This is mostly brought to us by science fiction. Some notable Hollywood examples are *"The Terminator"*, *"The Matrix"*, *"Age of Ultron"*. The most influential novels were written by <NAME> and are referred to as the *"Robot series"* (which includes *"I, Robot"*, which was also adapted as a movie). The basic premise of most of these works is that AI will evolve to a point where it becomes conscious and will surpass humans in intelligence. While Hollywood movies mainly focus on the robots and the battle between them and humans, not enough emphasis is given to the actual AI (i.e. the "brain" controlling them). As a side note, because of the narrative, this AI is usually portrayed as a supercomputer controlling everything (so that the protagonists have a specific target). Not enough exploration has been made of "ambiguous intelligence" (which I think is more realistic). In the real world, AI is focused on solving specific tasks! An AI agent that is capable of solving problems from different domains (e.g. understanding speech and processing images and driving and ... - like humans are) is referred to as General Artificial Intelligence and is required for AI to be able to "think" and become conscious. Realistically, we are a **loooooooong way** from General Artificial Intelligence! That being said, there is **no evidence** that this **can't** be achieved in the future. 
So currently, even if we are still in the infancy of AI, we have no reason to believe that AI won't evolve to a point where it is more intelligent than humans. Using AI with malicious intent ------------------------------ Even though an AI conquering the world is a long way from happening, there are **several reasons to be concerned with AI today** that don't involve robots! The second category I want to focus a bit more on is the several malicious uses of today's AI. I'll focus only on **AI applications that are available today**. Some examples of AI that can be used for malicious intent: * [DeepFake](https://en.wikipedia.org/wiki/Deepfake): a technique for superimposing someone's face onto an image or video of another person. This has gained popularity recently with celebrity porn and can be used to generate **fake news** and hoaxes. Sources: [1](https://www.youtube.com/watch?v=gLoI9hAX9dw), [2](https://theoutline.com/post/3179/deepfake-videos-are-freaking-experts-out?zd=1&zi=zw6lhzz2), [3](https://www.nytimes.com/2018/03/04/technology/fake-videos-deepfakes.html) * With the use of **mass surveillance systems** and **facial recognition** software capable of recognizing [millions of faces per second](https://www.digitaltrends.com/cool-tech/goodbye-anonymity-latest-surveillance-tech-can-search-up-to-36-million-faces-per-second/), AI can be used for mass surveillance. Even though we mostly think of China when we think of mass surveillance, many western cities like [London](https://www.cctv.co.uk/how-many-cctv-cameras-are-there-in-london/), [Atlanta](https://cloudtweaks.com/2019/09/mass-surveillance-adversarial-ai-in-atlanta/) and Berlin are among the [most-surveilled cities in the world](https://www.comparitech.com/vpn-privacy/the-worlds-most-surveilled-cities/). 
China has taken things a step further by adopting the [social credit system](https://futurism.com/china-social-credit-system-rate-human-value), an evaluation system for civilians which seems to be taken straight out of the pages of George Orwell's 1984. * **Influencing** people through **social media**. Aside from recognizing users' tastes with the goal of targeted marketing and ad placements (a common practice by many internet companies), AI can be used maliciously to influence people's voting (among other things). Sources: [1](https://theintercept.com/2018/04/13/facebook-advertising-data-artificial-intelligence-ai/), [2](https://www.martechadvisor.com/articles/machine-learning-ai/the-impact-of-artificial-intelligence-on-social-media/), [3](https://venturebeat.com/2018/04/13/ai-weekly-facebook-fiasco-shows-we-need-a-new-scheme-for-personal-data/). * [Hacking](https://gizmodo.com/hackers-have-already-started-to-weaponize-artificial-in-1797688425). * Military applications, e.g. drone attacks, missile targeting systems. Adverse effects of AI --------------------- This category is pretty subjective, but the development of AI might carry some adverse side-effects. The distinction between this category and the previous one is that these effects, while harmful, aren't caused intentionally; rather, they occur with the development of AI. Some examples are: * **Jobs becoming redundant**. As AI becomes better, many jobs will be replaced by AI. Unfortunately there are not many things that can be done about this, as most technological developments have this side-effect (e.g. agricultural machinery caused many farmers to lose their jobs, automation replaced many factory workers, computers did the same). * Reinforcing the **bias in our data**. This is a very interesting category, as AI (and especially Neural Networks) are only as good as the data they are trained on, and have a tendency to perpetuate and even enhance different forms of social biases already existing in the data. 
There are many examples of networks exhibiting racist and sexist behavior. Sources: [1](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing), [2](https://arxiv.org/abs/1607.06520), [3](https://cacm.acm.org/magazines/2018/6/228035-bias-on-the-web/fulltext), [4](https://read.dukeupress.edu/world-policy-journal/article-abstract/33/4/111/30942/Racist-in-the-MachineThe-Disturbing-Implications?redirectedFrom=fulltext). Upvotes: 7 [selected_answer]<issue_comment>username_2: In addition to the other answers, I would like to add a nuking-cookie-factory example: Machine learning AIs basically try to fulfill a goal described by humans. For example, humans create an AI running a cookie factory. The goal they implement is to sell as many cookies as possible at the highest profit margin. Now, imagine an AI which is sufficiently powerful. This AI will notice that if it nukes all other cookie factories, everybody has to buy cookies from its factory, making sales rise and profits grow. So, the human error here is putting no penalty for using violence into the algorithm. This is easily overlooked because humans didn't expect the algorithm to come to this conclusion. Upvotes: 3 <issue_comment>username_3: I would say the biggest real threat would be the unbalancing/disruption we are already seeing. The chances of putting 90% of the country out of work are real, and the results (which will be an even more uneven distribution of wealth) are terrifying if you think them through. Upvotes: 3 <issue_comment>username_4: My favorite scenario for harm by AI involves not high intelligence, but low intelligence. Specifically, the [grey goo](https://en.wikipedia.org/wiki/Gray_goo) hypothesis. This is where a self-replicating, automated process runs amok and converts all resources into copies of itself. 
The point here is that the AI is not "smart" in the sense of having high intelligence or general intelligence--it is merely very good at a single thing and has the ability to replicate exponentially. Upvotes: 3 <issue_comment>username_5: Short term ========== * **Physical accidents**, e.g. due to industrial machinery, aircraft autopilot, self-driving cars. Especially in the case of *unusual situations* such as extreme weather or sensor failure. Typically an AI will function poorly under conditions where it has not been extensively tested. * **Social impacts** such as reducing job availability, barriers for the underprivileged wrt. loans, insurance, parole. * **Recommendation engines** are manipulating us more and more to change our behaviours (as well as reinforce our own "small world" bubbles). Recommendation engines routinely serve up inappropriate content of various sorts to young children, often because content creators (e.g. on YouTube) use the right keyword stuffing to appear to be child-friendly. * **Political manipulation...** Enough said, I think. * **Plausible deniability of privacy invasion**. Now that AI can read your email and even make phone calls for you, it's easy for someone to have humans act on your personal information and claim that they got a computer to do it. * **Turning war into a video game**, that is, replacing soldiers with machines being operated remotely by someone who is not in any danger and is far removed from his/her casualties. * **Lack of transparency**. We are trusting machines to make decisions with very little means of getting the justification behind a decision. * **Resource consumption and pollution.** This is not just an AI problem, however every improvement in AI is creating more demand for Big Data, and together these ramp up the need for storage, processing, and networking. On top of the electricity and rare-mineral consumption, the infrastructure needs to be disposed of after its several-year lifespan. 
* **Surveillance** — with the ubiquity of smartphones and listening devices, there is a gold mine of data but too much to sift through every piece. Get an AI to sift through it, of course! * **Cybersecurity** — cybercriminals are increasingly leveraging AI to attack their targets. Did I mention that *all* of these are in full swing already? Long Term ========= Although there is no clear line between AI and AGI, this section is more about what happens when we go further towards AGI. I see two alternatives: * Either we develop AGI as a result of our improved understanding of the nature of intelligence, * or we slap together something that seems to work but we don't understand very well, much like a lot of machine learning right now. In the first case, if an AI "goes rogue" we can build other AIs to outwit and neutralise it. In the second case, we can't, and we're doomed. AIs will be a new life form and we may go extinct. Here are some potential problems: * **Copy and paste.** One problem with AGI is that it could quite conceivably run on a desktop computer, which creates a number of problems: + **Script Kiddies** — people could download an AI and set up the parameters in a destructive way. Relatedly, + **Criminal or terrorist groups** would be able to configure an AI to their liking. You don't need to find an expert on bomb making or bioweapons if you can download an AI, tell it to do some research and then give you step-by-step instructions. + **Self-replicating AI** — there are plenty of computer games about this. AI breaks loose and spreads like a virus. The more processing power, the better able it is to protect itself and spread further. * **Invasion of computing resources**. It is likely that more computing power is beneficial to an AI. An AI might buy or steal server resources, or the resources of desktops and mobile devices. Taken to an extreme, this could mean that all our devices simply became unusable, which would wreak havoc on the world immediately. 
It could also mean massive electricity consumption (and it would be hard to "pull the plug" because power plants are computer controlled!) * **Automated factories.** An AGI wishing to gain more of a physical presence in the world could take over factories to produce robots which could build new factories and essentially create bodies for itself. * These are rather philosophical considerations, but some would argue that AI would destroy what makes us human: + **Inferiority.** What if plenty of AI entities were smarter, faster, more reliable and more creative than the best humans? + **Pointlessness.** With robots replacing the need for physical labour and AIs replacing the need for intellectual labour, we will really have nothing to do. Nobody's going to get the Nobel Prize again because the AI will already be ahead. Why even get educated in the first place? + **Monoculture/stagnation** — in various scenarios (such as a single "benevolent dictator" AGI) society could become fixed in a perpetual pattern without new ideas or any sort of change (pleasant though it may be). Basically, *Brave New World.* I think AGI is coming and we need to be mindful of these problems so that we can minimise them. Upvotes: 4 <issue_comment>username_6: I have an example which goes in kinda the opposite direction of the public's fears, but is a very real thing, which I already see happening. It is not AI-specific, but I think it will get worse through AI. It is the problem of **humans trusting the AI conclusions blindly** in critical applications. We have many areas in which human experts are supposed to make a decision. Take for example medicine - should we give medication X or medication Y? The situations I have in mind are frequently complex problems (in the Cynefin sense) where it is a really good thing to have somebody pay attention very closely and use lots of expertise, and the outcome really matters. 
There is a demand for medical informaticians to write decision support systems for this kind of problem in medicine (and I suppose for the same type of problem in other domains). They do their best, but the expectation is that a human expert will always consider the system's suggestion as just one more opinion when making the decision. In many cases, it would be irresponsible to promise anything else, given the state of knowledge and the resources available to the developers. A typical example would be the use of computer vision in radiomics: a patient gets a CT scan and the AI has to process the image and decide whether the patient has a tumor. Of course, the AI is not perfect. Even when measured against the gold standard, it never achieves 100% accuracy. And then there are all the cases where it performs well against its own goal metrics, but the problem was so complex that the goal metric doesn't capture it well - I can't think of an example in the CT context, but I guess we see it even here on SE, where the algorithms favor popularity in posts, which is an imperfect proxy for factual correctness. You were probably reading that last paragraph and nodding along, "Yeah, I learned that in the first introductory ML course I took". Guess what? Physicians never took an introductory ML course. They rarely have enough statistical literacy to understand the conclusions of papers appearing in medical journals. When they are talking to their 27th patient, 7 hours into their 16-hour shift, hungry and emotionally drained, and the CT doesn't look all that clear-cut, but the computer says "it's not a malignancy", they don't take ten more minutes to concentrate on the image more, or look up a textbook, or consult with a colleague. They just go with what the computer says, grateful that their cognitive load is not skyrocketing yet again. So they turn from being experts to being people who read something off a screen. 
Worse, in some hospitals the administration not only trusts computers, it has also found out that they are convenient scapegoats. So, when a physician has a bad hunch which goes against the computer's output, it becomes difficult for them to act on that hunch and defend the choice to override the AI's opinion. AIs are powerful and useful tools, but there will always be tasks where they can't replace the tool-wielder. Upvotes: 3 <issue_comment>username_7: This is only intended to be a complement to other answers, so I will not discuss the possibility of AI trying to willingly enslave humanity. But a different risk is already here. I would call it *unmastered technology*. I have been taught science and technology, and IMHO, AI has *by itself* no notion of good and evil, nor freedom. But it is built and used by human beings and because of that non-rational behaviour can be involved. I would start with a real-life example more related to general IT than to AI. I will speak of viruses and other malware. Computers are rather stupid machines that are good at quickly processing data. So most people rely on them. And some (bad) people develop malware that will disrupt the correct behaviour of computers. And we all know that it can have terrible effects on small to medium organizations that are not well prepared for a computer loss. AI is computer-based, so it is vulnerable to computer-type attacks. Here my example would be an AI-driven car. The technology is almost ready to work. But imagine the effect of a malware making the car try to attack other people on the road. Even without direct access to the code of the AI, it can be attacked by *side channels*. For example, it uses cameras to read traffic signs. But because of the way machine learning is implemented, AI generally does not analyse a scene the same way a human being does. 
Researchers have shown that it is possible to alter a sign in a way that a normal human will still see the original sign, but an AI will see a different one. Imagine now that the sign is the road priority sign... What I mean is that even if the AI has no evil intent, bad guys can try to make it behave badly. And the more important the actions delegated to AI (medicine, cars, planes, not to mention bombs), the higher the risk. Said differently, I do not really fear the AI for itself, but for the way it can be used by humans. Upvotes: 3 <issue_comment>username_8: I think one of the most real (i.e. related to current, existing AIs) risks is blindly relying on unsupervised AIs, for two reasons. ### 1. AI systems may degrade Physical errors in AI systems may start producing wildly wrong results in regions for which they were not tested, because the physical system starts providing wrong values. This is sometimes redeemed by self-testing and redundancy, but still requires occasional human supervision. Self-learning AIs also have a software weakness - their weight networks or statistical representations may approach local minima where they are stuck with one wrong result. ### 2. AI systems are biased This is fortunately frequently discussed, but worth mentioning: AI systems' classification of inputs is often biased because the training/testing datasets were biased as well. This results in AIs not recognizing people of certain ethnicities, as a more obvious example. However, there are less obvious cases that may only be discovered after some bad accident, such as an AI not recognizing certain data and accidentally starting a fire in a factory, breaking equipment or hurting people. Upvotes: 2 <issue_comment>username_9: Human beings currently exist in an ecological-economic niche of "the thing that thinks". AI is also a thing that thinks, so it will be invading our ecological-economic niche. 
In both ecology and economics, having something else occupy your niche is not a great plan for continued survival. How exactly human survival is compromised by this is going to be pretty chaotic. There are going to be a bunch of plausible ways that AI could endanger human survival as a species, or even as a dominant life form. --- Suppose there is a strong AI without "super ethics" which is cheaper to manufacture than a human (including manufacturing a "body" or way of manipulating the world), and as smart or smarter than a human. This is a case where we start competing with that AI for resources. It will happen on microeconomic scales (do we hire a human, or buy/build/rent/hire an AI to solve this problem?). Depending on the rate at which AIs become cheap and/or smarter than people, this can happen slowly (maybe an industry at a time) or extremely fast. In a capitalist competition, those that don't move over to the cheaper AIs end up out-competed. Now, in the short term, if the AI's advantages are only marginal, the high cost of educating humans for 20-odd years before they become productive could make this process slower. In this case, it might be worth paying a Doctor above starvation wages to diagnose disease instead of an AI, but it probably isn't worth paying off their student loans. So new human Doctors would rapidly stop being trained, and existing Doctors would be impoverished. Then over 20-30 years AI would completely replace Doctors for diagnostic purposes. If the AI's advantages are large, then it would be rapid. Doctors wouldn't even be worth paying poverty-level wages to do human diagnostics. You can see something like that happening with muscle-based farming when gasoline-based farming took over. During past industrial revolutions, the fact that humans were able to think meant that you could repurpose surplus human workers to do other things: manufacturing lines, service economy jobs, computer programming, etc. 
But in this model, AI is cheaper to train and build and as smart or smarter than humans at that kind of job. As evidenced by the ethanol-induced Arab spring, crops and cropland can be used to fuel both machines and humans. When machines are more efficient in terms of turning cropland into useful work, you'll start seeing the price of food climb. This typically leads to riots, as people really don't like starving to death and are willing to risk their own lives to overthrow the government in order to prevent this. You can mollify the people by providing subsidized food and the like. So long as this isn't economically crippling (ie, if expensive enough, it could result in you being out-competed by other places that don't do this), this is merely politically unstable. As an alternative, in the short term, the ownership caste who is receiving profits from the increasingly efficient AI-run economy can pay for a police or military caste to put down said riots. This requires that the police/military castes be upper lower to middle class in standards of living, in order to ensure continued loyalty -- you don't want them joining the rioters. So one of the profit centers you can put AI towards is AI based military and policing. Drones that deliver lethal and non-lethal ordnance based off of processing visual and other data feeds can reduce the number of middle-class police/military needed to put down food-price triggered riots or other instability. As we have already assumed said AIs can have bodies and training cheaper than a biological human, this can also increase the amount of force you can deploy per dollar spent. At this point, we are talking about a mostly AI run police and military being used to keep starving humans from overthrowing the AI run economy and seizing the means of production from the more efficient use it is currently being put to. 
The vestigial humans who "own" the system at the top are making locally rational decisions to optimize their wealth and power. They may or may not persist for long; so long as they drain a relatively small amount of resources and don't mess up the AI run economy, there won't be much selection pressure to get rid of them. On the other hand, as they are contributing nothing of value, their position "at the top" is politically unstable. This process assumed a "strong" general AI. Narrower AIs can pull this off in pieces. A cheap, effective diagnostic computer could reduce most Doctors into poverty in a surprisingly short period of time, for example. Self-driving cars could swallow 5%-10% of the economy. Information technology is already swallowing the retail sector with modest AI. It is said that every technological advancement leads to more and better jobs for humans. And this has been true for the last 300+ years. But prior to 1900, it was also true that every technological advancement led to more and better jobs for horses. Then the ICE and automobile arrived, and now there are far fewer working horses; the remaining horses are basically the equivalent of human personal servants: kept for the novelty of "wow, cool, horse" and the fun of riding and controlling a huge animal. Upvotes: 2 <issue_comment>username_10: AI that is used to solve a real-world problem could pose a risk to humanity and doesn't exactly require sentience; it also requires a degree of human stupidity. Unlike humans, an AI would find the most logical answer without the constraint of emotion, ethics, or even greed... Only logic. Ask this AI how to solve a problem that humans created (for example, Climate Change) and its solution might be to eliminate the entirety of the human race to protect the planet. Obviously this would require giving the AI the ability to act upon its outcome, which brings me to my earlier point, human stupidity. 
Upvotes: 1 <issue_comment>username_11: In addition to the many answers already provided, I would bring up the issue of [***adversarial examples***](https://arxiv.org/pdf/1607.02533.pdf) in the area of image models. Adversarial examples are images that have been perturbed with specifically designed noise that is often imperceptible to a human observer, but strongly alters the prediction of a model. Examples include: * Affecting the predicted diagnosis in a [chest x-ray](https://arxiv.org/pdf/1804.05296.pdf) * Affecting the [detection of road signs](https://arxiv.org/pdf/1807.07769.pdf) necessary for autonomous vehicles. Upvotes: 1 <issue_comment>username_12: Artificial intelligence can harm us in any of the ways of natural intelligence (of humans). The distinction between natural and artificial intelligence will vanish when humans start augmenting themselves more intimately. Intelligence may no longer characterize the identity and will become a limitless possession. The harm caused will be as much as the humans can endure for preserving their evolving self-identity. Upvotes: 0 <issue_comment>username_13: Few people realize that our global economy should be considered an AI: - The money transactions are the signals over a neural net. The nodes in the neural net would be the different corporations or private persons paying or receiving money. - It is man-made, so it qualifies as artificial. This neural network is better at its task than humans: capitalism has always won against an economy planned by humans (planned economy). Is this neural net dangerous? That might differ depending on whether you are the CEO earning big or a fisherman in a river polluted by corporate waste. How did this AI become dangerous? You could answer that it is because of human greed. Our creation reflects ourselves. In other words: we did not train our neural net to behave well. Instead of training the neural net to improve living quality for all humans, we trained it to make rich folks richer. 
Would it be easy to train this AI to be no longer dangerous? Maybe not; maybe some AIs are just larger than life. It is just survival of the fittest. Upvotes: 0
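The adversarial examples brought up earlier in this thread rest on a simple piece of arithmetic: in high dimensions, a perturbation that is tiny per feature can shift a linear score by a large amount (roughly, by the perturbation size times the L1 norm of the weights). The following toy numpy sketch illustrates only that mechanism - the classifier, input, and margin are all invented for illustration, and this is not real attack code:

```python
import numpy as np

d = 10_000                              # e.g. a 100x100 "image", flattened
rng = np.random.default_rng(0)
w = rng.normal(size=d)                  # toy linear classifier: label = sign(w @ x)

x = rng.normal(size=d)                  # a random "natural" input...
x = x + (20.0 - w @ x) / (w @ w) * w    # ...shifted so its score is exactly 20
score = w @ x                           # confidently classified as +1

# FGSM-style step: move every feature by a tiny eps against the label.
# The gradient of the score with respect to x is just w.
eps = 0.01                              # ~1% of a typical feature's magnitude
x_adv = x - eps * np.sign(w)
score_adv = w @ x_adv                   # drops by eps * sum(|w|), roughly 0.01 * 8000
```

Although no single feature moves by more than 0.01, the score swings by about eps times the L1 norm of w, which grows with the dimensionality - enough here to flip the predicted class.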
2019/09/16
1,463
5,246
<issue_start>username_0: Why does estimation error increase with $|H|$ and decrease with $m$ in PAC learning? I came across [this statement](https://i.stack.imgur.com/JLnLi.png) in section 5.2 of the book ["understanding machine learning: from theory to algorithms"](https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf). Just search for "**increases (logarithmically)**" in your browser and you can find the sentence. I just can't understand the statement. And there is no proof in the book either. What I would like to do is prove that the estimation error $\epsilon\_{est}$ increases (logarithmically) with $|H|$ and decreases with $m$. Hope you can help me out. A rigorous proof can't be better!<issue_comment>username_1: Definitely, you can find the proof in different resources (for example, in [these notes](http://karlstratos.com/notes/pac.pdf) or in the paper that originally proposed PAC learnability, [A Theory of the Learnable](http://web.mit.edu/6.435/www/Valiant84.pdf)). However, the intuition behind your question is that when the size of the hypothesis class increases, if you do not change anything else, you cannot see a larger part of the space. Hence, the estimation error will increase. Moreover, when you increase the number of samples, you have a better chance of seeing a larger part of the hypothesis space; hence, the estimation error decreases. Also, you can see some lemmas about the relation of PAC learnability to other similar concepts in the Wikipedia article [Probably approximately correct learning](https://en.wikipedia.org/wiki/Probably_approximately_correct_learning): > > Under some regularity conditions these three conditions are equivalent: > > > 1. The concept class $C$ is PAC learnable. > 2. The VC dimension of $C$ is finite. > 3. $C$ is a uniform Glivenko-Cantelli class. > > > Upvotes: 2 <issue_comment>username_2: The book has actually proven the theorem rigorously in Chapter 2. 
I don't want to prove it here, but you can look it up. I will try to explain parts which are non-obvious (and somewhat confusing given the book's notation). So for PAC learning (with or without the realizability assumption) the theory is that given a data-set of size: $$m \geq [\frac{\log(|H|/\delta)}{\epsilon}]$$ where $|H|$ is the size of the finite hypothesis class, which when simplified is nothing but: $$|H|e^{-\epsilon m} \leq \delta$$ where $\delta$ is the probability that your sample is not representative of the underlying distribution (according to the book, hence the term Probably in PAC Learning) and $\epsilon$ is the maximum probability that your learned hypothesis $h$ predicts new unseen samples wrong (basically the accuracy of your hypothesis, and hence the term Approximately Correct in PAC Learning). This equation/bound comes from the last step of the proof, which states: $$D^m [ {S|\_x : L\_{(D,f)}(h\_S)\gt \epsilon}] \leq |H\_B|e^{-\epsilon m} \leq |H|e^{-\epsilon m}$$ where $H\_B$ is the set of all bad (over-fitting) hypotheses, which is your answer to the **question**: > > Estimation error increases linearly with $|H|$ and decreases exponentially with $m$ in PAC learning > > > Now here comes the tricky part: following this equation, the proof directly jumps to: $$|H|e^{-\epsilon m} \leq \delta$$ The justification for this is given in a previous part of the proof (I am not entirely sure if they meant this justification, but it seems the only one): > > Since > the realizability assumption implies that $L\_S (h\_S ) = 0$, it follows that the event > $L\_{(D,f )} (h\_S ) > \epsilon$ can only happen if for some $h ∈ H\_B$ we have $L\_S (h) = 0$. In other words, this event will only happen if our sample is in the set of **misleading** samples. 
> > > We should not mistakenly conflate **misleading** $\rightarrow$ **non-representative**; otherwise we will not be able to justify the aforementioned jump ($\epsilon$ and $\delta$ would become dependent on each other). The actual interpretation of $\delta$ is that it is our confidence parameter, i.e. we want to ensure: $$D^m [ {S|\_x : L\_{(D,f)}(h\_S)\gt \epsilon}] \leq \delta$$ which means we are $1-\delta$ confident that our learned $h\_s$ will have $L\_{(D,f)}(h\_s) \leq \epsilon$ (complementary expression). **NOTE:** This idea is skipped in most resources I read; I found its explanation [here](https://www.youtube.com/watch?v=qOMOYM0WCzU&t=924s). Now, coming to the statement: $$m\_H \leq [\frac{\log(|H|/\delta)}{\epsilon}]$$ $m\_H$, unlike $m$, is defined as: > > If $H$ is PAC learnable, there are many functions $m\_H$ that satisfy the > requirements given in the definition of PAC learnability. Therefore, to be precise, we will define the sample complexity of learning $H$ to be the “minimal function,” in the sense that for any $\epsilon, \delta$ $m\_H (\epsilon, \delta)$ is the minimal integer that satisfies the requirements of PAC learning with accuracy $\epsilon$ and confidence $\delta$. > > > And hence the inequality sign is reversed, since many good samples will result in a good hypothesis being generated from a smaller number of samples. Side note: all conventions are from Understanding Machine Learning: From Theory to Algorithms. Upvotes: 0
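The shape of the bound discussed above is easy to check numerically. Below is a quick plain-Python sketch (the function names are mine) of the relation $m \geq \frac{\log(|H|/\delta)}{\epsilon}$ and its rearrangement $\epsilon = \frac{\log(|H|/\delta)}{m}$, showing that the required $m$ grows only logarithmically with $|H|$, while the guaranteed error shrinks as $m$ grows:

```python
import math

def sample_complexity(h_size: int, epsilon: float, delta: float) -> int:
    """Smallest m with |H| * exp(-epsilon * m) <= delta,
    i.e. m >= log(|H| / delta) / epsilon."""
    return math.ceil(math.log(h_size / delta) / epsilon)

def error_bound(h_size: int, m: int, delta: float) -> float:
    """Epsilon guaranteed with confidence 1 - delta after m samples:
    epsilon = log(|H| / delta) / m."""
    return math.log(h_size / delta) / m

# Multiplying |H| by 1000 each time only adds a constant number of samples...
ms = [sample_complexity(h, epsilon=0.05, delta=0.01) for h in (10**3, 10**6, 10**9)]
# ...while taking more samples shrinks the guaranteed error.
eps = [error_bound(10**6, m, delta=0.01) for m in (100, 1000, 10000)]
print(ms)
print(eps)
```

Each thousand-fold jump in $|H|$ costs the same additive increase in $m$ (that is the logarithmic growth), and $\epsilon$ falls off as $1/m$ for a fixed hypothesis class.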
2019/09/16
481
2,045
<issue_start>username_0: I'm dealing with a "ticket similarity task". Every time new tickets arrive at the help desk (customer service), I need to compare them and find out about similar ones. This way, once an operator responds to a ticket, they can at the same time solve the others similar to the one solved. Given an input ticket, I expect as output all the other tickets with their similarity scores. I thought about using **DOC2VEC**, but it requires training every time a new ticket enters. What do you recommend?<issue_comment>username_1: You need to create an active learning loop over the learning process. Start from a history of tickets and use doc2vec to get the similarity. When you find a bad result from your classifier, report it and then retrain the classifier. Alternatively, you can wait to retrain the model until you have collected a predefined batch size of new data which is not in the training set. Also, to get a better result in the active learning loop, you can assess incoming data by measuring the classifier's uncertainty over it. If the entropy of the classifier over the data is too high, you can have the operator label the data (acting as an oracle) and then, once you reach the predefined batch size, retrain the classifier. Moreover, to learn more about the active learning process and query strategies, follow [this link](https://en.wikipedia.org/wiki/Active_learning_(machine_learning)) (and other articles in that link like [this article](http://burrsettles.com/pub/settles.activelearning.pdf)). Upvotes: 0 <issue_comment>username_2: Not sure if you are bent on using DOC2VEC, but why not use OpenAI embeddings or any Hugging Face model, and store them as a sparse-dense vector in a vector database (note: the concept of sparse-dense vectors was designed by Pinecone, and they have publicly made it transparent how to do it yourself). 
Using sparse-dense vectors makes sure your similarities are a blend of semantic as well as lexical matching, which will work perfectly for your case. Upvotes: 1
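To make the similarity pipeline discussed in this thread concrete, here is a dependency-free Python sketch of TF-IDF plus cosine similarity. In practice you would use doc2vec, Gensim, or a proper embedding model instead; the whitespace tokenizer and weighting below are deliberately minimal and only meant to show the shape of the computation:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Turn a corpus into sparse TF-IDF vectors (dicts of term -> weight)."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(term for tokens in tokenized for term in set(tokens))
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vectors.append({t: (c / len(tokens)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

tickets = [
    "printer not responding after driver update",
    "printer offline since the last driver update",
    "cannot reset my email password",
]
vecs = tfidf_vectors(tickets)
# Similarity of every other ticket to the first one.
scores = [cosine(vecs[0], v) for v in vecs[1:]]
```

For an incoming ticket, you would vectorize it the same way and rank the historical tickets by these scores; the highest-scoring ones are the candidates to solve together.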
2019/09/17
1,374
5,636
<issue_start>username_0: AI experts like <NAME> and <NAME> say that AGI will be developed in the coming decade. Are they credible?<issue_comment>username_1: As a riff on my answer to [this question](https://ai.stackexchange.com/questions/7875/is-the-singularity-something-to-be-taken-seriously/7888#7888), which is about the broader concern of the development of the singularity, rather than the narrower concern of the development of AGI: I can say that among AI researchers I interact with, it far more common to view the development of AGI in the next decade as speculation (or even wild speculation) than as settled fact. This is borne out by [surveys of AI researchers](https://nickbostrom.com/papers/survey.pdf), with 80% thinking "The earliest that machines will be able to simulate learning and every other aspect of human intelligence" is in "more than 50 years" or "never", and just a few percent thinking that such forms of AI are "near". It's possible to quibble over what exactly is meant by AGI, but it seems likely that for us to reach AGI, we'd need to simulate human-level intelligence in at least most of its aspects. The fact that AI researchers think this is very far off suggests that they also think AGI is not right around the corner. I suspect that the reasons AI researchers are less optimistic about AGI than Kurzweil or others in tech (but not in AI), are rooted in the fact that we still don't have a good understanding of what human intelligence *is*. It's difficult to simulate something that we can't pin down. Another factor is that most AI researchers have been working in AI for a long time. There are countless past proposals for AGI frameworks, and *all* of them have been not just wrong, but in the end, more or less hopelessly wrong. I think this creates an innate skepticism of AGI, which may perhaps be unfair. Nonetheless, expert opinion on this one is pretty well settled: no AGI this decade, and maybe not ever! 
Upvotes: 4 [selected_answer]<issue_comment>username_2: I wouldn't take anything Kurzweil says especially seriously. Actual AI experts spend large quantities of time reading the existing scientific literature, and working to expand it. Because Kurzweil doesn't spend much of his time actually *learning* about AI, he has plenty of time in which to talk about it. Loudly. This is harmful to research, because 1) a lot of the uninformed predictions he and others make resemble doomsday scenarios, and 2) the predictions of *good* things have insanely optimistic time frames attached, and when they don't come true, research funding may be lost because AI hasn't lived up to what people thought it promised. AI research has been progressing very rapidly in the last decade, but if we're being honest, a lot of the credit for that has to go to the people who develop research-grade graphics cards. The ability to perform *massive* amounts of linear algebra in parallel has allowed us to use techniques that we've known about for a couple decades, but that were too computationally expensive to be practical at the time. And because those techniques are now practical, a lot of current research is applying those techniques to new problems, and modifying and improving them based on what we've learned. (I don't want to understate the contributions here; there have been a lot of *really* clever ideas developed in the last ten years. But it's mostly consistent iterative improvement of techniques that already existed, rather than completely revolutionary ideas.) To make human-equivalent AIs, we'll probably need to make a few of those giant conceptual leaps. And each of those leaps will then need to be followed up by a decade or two of iterative improvement, because that's how the process works. Case in point, the revolutionary idea that eventually led to all the Deep Learning models out there today was [this one](https://www.nature.com/articles/323533a0), dated 1986. 
First, there was the revolutionary idea. It was followed up by a bunch of work that built on it and expanded it in new directions. The work eventually stagnated because of hardware constraints. Then hardware scientists and engineers made some advances that let us continue work, and only then did we finally start getting the major applications that we're seeing today. We know human-level intelligence is possible, since humans manage it. I have little doubt that we'll figure out how to do it with AI eventually (maybe in my lifetime, maybe not). But if you want Kurzweil's predictions to be even remotely plausible, you might want to add a zero to the end of most of his time frames. Upvotes: 2 <issue_comment>username_3: My simple answer is **NO**. Let me elaborate. If you closely observe nature, you see that nothing changes drastically all of a sudden. Even when it does, it doesn't stay for long. The field of AI has just started, and it needs a lot more evolution to achieve AGI. Though AI is solving many directed problems like Face Recognition, Speech Recognition and many more (applications are innumerable), all these can be considered Narrow AI. They solve a particular task. For AI to reach the state where it can be better than humans in all aspects, not only do we need breakthroughs in algorithms, we also need many more breakthroughs in electronics and physics. Please read the article below. The summary is that experts (around 350 of them) estimate there's a 50% chance that AGI will occur by 2060. So, there is a very bleak chance that AGI will become a reality in the next decade. <https://blog.aimultiple.com/artificial-general-intelligence-singularity-timing/> Upvotes: 0
2019/09/18
393
1,591
<issue_start>username_0: I would like to develop a platform in which people will write text and upload images. I am going to use the Google API to classify the text and to extract all kinds of metadata from the images. In the end, I am going to have a lot of text which describes the content (text and images). Later, I would like to show my users related posts (that is, similar posts, from the content point of view). What is the most appropriate way of doing this? I am not an AI expert, and from my perspective the best approach is to have some tools, like the Google API or the Apache Lucene search engine, which can hide the details of how this is done.<issue_comment>username_1: Google has introduced [Universal Sentence Encoder](https://ai.googleblog.com/2019/07/multilingual-universal-sentence-encoder.html), which converts sentences into vector representations while preserving the semantic details. The pre-trained models are available on [Tensorflow Hub](https://tfhub.dev/google/universal-sentence-encoder/3). The [Colab notebook](https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb) would help you get started as well. Upvotes: 0 <issue_comment>username_2: I would suggest converting the documents into **TF-IDF** vectors (using **Gensim**) and then comparing them using various similarity-calculating techniques like **cosine similarity**. You should read this amazing article on the topic; I once used it while working on my project. <https://medium.com/@adriensieg/text-similarities-da019229c894> Upvotes: 1
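Whichever encoder is used, the "related posts" step then reduces to a nearest-neighbour search over the resulting vectors. A minimal numpy sketch with random stand-ins for real encoder output (the 512-dimensional size matches the Universal Sentence Encoder, but the vectors themselves are fabricated here):

```python
import numpy as np

# Pretend row i is the encoder's embedding of post i.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(4, 512))
embeddings[2] = embeddings[0] + 0.1 * rng.normal(size=512)  # post 2 ~ post 0

# Cosine similarity of every post against post 0.
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
sims = normed @ normed[0]

# Related posts for post 0: everyone else, most similar first.
related = [int(i) for i in np.argsort(-sims) if i != 0]
```

At platform scale you would not scan every vector like this; an approximate-nearest-neighbour index (or a managed search service) performs the same ranking efficiently.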
2019/09/18
815
2,836
<issue_start>username_0: I want to give some examples of AI via movies to my students. There are [many movies that include AI](https://en.wikipedia.org/wiki/List_of_artificial_intelligence_films), whether being the main character or extras. Which movies have the most realistic (the most possible or at least close to being made in this era) artificial intelligence?<issue_comment>username_1: [Just A Rather Very Intelligent System (J.A.R.V.I.S.)](https://ironman.fandom.com/wiki/J.A.R.V.I.S.) in [Iron Man](https://ironman.fandom.com/wiki/Iron_Man_(film)) (and related films, such as The Avengers) is something (a personal assistant) that people are already trying to develop, so JARVIS is a quite realistic artificial intelligence. Examples of existing [personal assistants](https://en.wikipedia.org/wiki/Virtual_assistant) are [Google Assistant](https://en.wikipedia.org/wiki/Google_Assistant) (integrated into [Google Home](https://en.wikipedia.org/wiki/Google_Home) devices), [Cortana](https://en.wikipedia.org/wiki/Cortana), [Siri](https://en.wikipedia.org/wiki/Siri) and [Alexa](https://en.wikipedia.org/wiki/Amazon_Alexa). There are [other virtual assistants](https://en.wikipedia.org/wiki/Virtual_assistant#Comparison_of_notable_assistants), but, unfortunately, there aren't many reliable open-source ones. Note that JARVIS is way more intelligent and capable than the other mentioned personal assistants. Similarly, [HAL 9000](https://2001.fandom.com/wiki/HAL_9000), in [2001: A Space Odyssey](https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey_(film)), is a sentient artificial intelligence which can be considered a personal assistant. Upvotes: 3 [selected_answer]<issue_comment>username_2: I would like to mention WOPR from [War Games](https://en.wikipedia.org/wiki/WarGames), maybe is an old movie for your students, but it is a more realistic IA centered around the problem of playing board games (if you exclude the part about deciding that a game is not worth the time). 
Also, I remember an artificial assistant in ["The Time Machine"](https://www.imdb.com/title/tt0268695/) that was more convincing than J.A.R.V.I.S. because it is not so intelligent; I remember it more as an agent that can find and read you Wikipedia articles, but without reasoning about them a lot, but I could be wrong. The robot companion in [Moon](https://www.imdb.com/title/tt1182345/) is also interesting and comical, as it is like a small child that has been told to cheat but can't disobey direct orders. Other films revolve around the dilemma of creating AGI, like "Blade Runner", "Bicentennial Man", Spielberg's "A.I.", "Her", or "Ex Machina"; they are more interesting from a philosophical point of view (they are all very similar to Mary Shelley's Frankenstein) because the actual implementation is inconceivable right now. Upvotes: 2
2019/09/18
2,770
8,819
<issue_start>username_0: The Transformer model introduced in ["Attention is all you need"](https://arxiv.org/abs/1706.03762) by Vaswani et al. incorporates a so-called position-wise feed-forward network (FFN): > > In addition to attention sub-layers, each of the layers in our encoder > and decoder contains a fully connected feed-forward network, which is > applied to each position separately and identically. This consists of > two linear transformations with a ReLU activation in between. > > > $$\text{FFN}(x) = \max(0, x \times {W}\_{1} + {b}\_{1}) \times {W}\_{2} + {b}\_{2}$$ > > > While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is ${d}\_{\text{model}} = 512$, and the inner-layer has dimensionality ${d}\_{ff} = 2048$. > > > I have seen at least one implementation in Keras that directly follows the convolution analogy. Here is an excerpt from [attention-is-all-you-need-keras](https://github.com/Lsdefine/attention-is-all-you-need-keras/blob/master/transformer.py). ``` class PositionwiseFeedForward(): def __init__(self, d_hid, d_inner_hid, dropout=0.1): self.w_1 = Conv1D(d_inner_hid, 1, activation='relu') self.w_2 = Conv1D(d_hid, 1) self.layer_norm = LayerNormalization() self.dropout = Dropout(dropout) def __call__(self, x): output = self.w_1(x) output = self.w_2(output) output = self.dropout(output) output = Add()([output, x]) return self.layer_norm(output) ``` Yet, in Keras you can apply a single `Dense` layer across all time-steps using the `TimeDistributed` wrapper (moreover, a simple `Dense` layer applied to a 2D input [implicitly behaves](https://stackoverflow.com/a/44616780/3846213) like a `TimeDistributed` layer). 
Therefore, in Keras a stack of two Dense layers (one with a ReLU and the other one without an activation) is exactly the same thing as the aforementioned position-wise FFN. So, why would you implement it using convolutions? **Update** Adding benchmarks in response to the answer by @mshlis: ``` import os import typing as t os.environ['CUDA_VISIBLE_DEVICES'] = '0' import numpy as np from keras import layers, models from keras import backend as K from tensorflow import Tensor # Generate random data n = 128000 # n samples seq_l = 32 # sequence length emb_dim = 512 # embedding size x = np.random.normal(0, 1, size=(n, seq_l, emb_dim)).astype(np.float32) y = np.random.binomial(1, 0.5, size=n).astype(np.int32) ``` --- ``` # Define constructors def ffn_dense(hid_dim: int, input_: Tensor) -> Tensor: output_dim = K.int_shape(input_)[-1] hidden = layers.Dense(hid_dim, activation='relu')(input_) return layers.Dense(output_dim, activation=None)(hidden) def ffn_cnn(hid_dim: int, input_: Tensor) -> Tensor: output_dim = K.int_shape(input_)[-1] hidden = layers.Conv1D(hid_dim, 1, activation='relu')(input_) return layers.Conv1D(output_dim, 1, activation=None)(hidden) def build_model(ffn_implementation: t.Callable[[int, Tensor], Tensor], ffn_hid_dim: int, input_shape: t.Tuple[int, int]) -> models.Model: input_ = layers.Input(shape=(seq_l, emb_dim)) ffn = ffn_implementation(ffn_hid_dim, input_) flattened = layers.Flatten()(ffn) output = layers.Dense(1, activation='sigmoid')(flattened) model = models.Model(inputs=input_, outputs=output) model.compile(optimizer='Adam', loss='binary_crossentropy') return model ``` --- ``` # Build the models ffn_hid_dim = emb_dim * 4 # this rule is taken from the original paper bath_size = 512 # the batchsize was selected to maximise GPU load, i.e. 
reduce PCI IO overhead model_dense = build_model(ffn_dense, ffn_hid_dim, (seq_l, emb_dim)) model_cnn = build_model(ffn_cnn, ffn_hid_dim, (seq_l, emb_dim)) ``` --- ``` # Pre-heat the GPU and let TF apply memory stream optimisations model_dense.fit(x=x, y=y[:, None], batch_size=bath_size, epochs=1) %timeit model_dense.fit(x=x, y=y[:, None], batch_size=bath_size, epochs=1) model_cnn.fit(x=x, y=y[:, None], batch_size=bath_size, epochs=1) %timeit model_cnn.fit(x=x, y=y[:, None], batch_size=bath_size, epochs=1) ``` I am getting 14.8 seconds per epoch with the Dense implementation: ``` Epoch 1/1 128000/128000 [==============================] - 15s 116us/step - loss: 0.6332 Epoch 1/1 128000/128000 [==============================] - 15s 115us/step - loss: 0.5327 Epoch 1/1 128000/128000 [==============================] - 15s 117us/step - loss: 0.3828 Epoch 1/1 128000/128000 [==============================] - 14s 113us/step - loss: 0.2543 Epoch 1/1 128000/128000 [==============================] - 15s 116us/step - loss: 0.1908 Epoch 1/1 128000/128000 [==============================] - 15s 116us/step - loss: 0.1533 Epoch 1/1 128000/128000 [==============================] - 15s 117us/step - loss: 0.1475 Epoch 1/1 128000/128000 [==============================] - 15s 117us/step - loss: 0.1406 14.8 s ± 170 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ``` and 18.2 seconds for the CNN implementation. I am running this test on a standard Nvidia RTX 2080. So, from a performance perspective there seems to be no point in actually implementing an FFN block as a CNN in Keras. Considering that the maths are the same, the choice boils down to pure aesthetics.<issue_comment>username_1: 1) The math is the exact same, so from an optimization or mathematical perspective there is no difference 2) Here are my **guesses** to a possible answer. 
* Habit: People may just call one over the other out of habit * Generality: Across frameworks a 1d convolution op would work, while a Dense or FC layer may need adjustments to work on the temporal axis * Parallel Workers: Convolution and Dense call different subroutines in the backend, and the one used by convolution may have better gains on sequential input for this purpose **Edit** Regarding bench-marking the two, your experiment was shallow. I didn't have time to wait to do a full grid search, so I held 3 parameters constant and fluctuated one. Here are the results (note the model was just a simple feed-forward ReLU residual model) [![enter image description here](https://i.stack.imgur.com/Khm6S.png)](https://i.stack.imgur.com/Khm6S.png) [![enter image description here](https://i.stack.imgur.com/xl5b0.png)](https://i.stack.imgur.com/xl5b0.png) [![enter image description here](https://i.stack.imgur.com/O7XXV.png)](https://i.stack.imgur.com/O7XXV.png) [![enter image description here](https://i.stack.imgur.com/VK07u.png)](https://i.stack.imgur.com/VK07u.png) Note that in a couple of cases Dense does outperform Conv, but it isn't consistent and there are scenarios where it is not true. This is only for a small grid that I chose, but you can extend this yourself to check. So it is not as straightforward to say that one is clearly better than the other. Upvotes: 2 <issue_comment>username_2: I'm going to post another guess to this question - it won't be a complete answer, but hopefully it'll provide some direction towards finding a more legitimate answer. The feed-forward networks as suggested by Vaswani are very reminiscent of [sparse autoencoders](https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf), where the hidden dimension is much greater than the input / output dimensions.
[![enter image description here](https://i.stack.imgur.com/jDlzn.png)](https://i.stack.imgur.com/jDlzn.png) If you aren't familiar with sparse autoencoders, this is a little counterintuitive - WTF would you have a larger hidden dimension? The intuition borrows from infinitely wide neural networks. If you have an infinitely wide neural network, you basically have a Gaussian process and can sample any function you'd like. So the wider the network, the more approximation power you have. In the case of inputs, this is a matter of learning a dictionary. If you have only discrete inputs, this hidden layer will be capped at $O(2^N)$ width, where $N$ is the maximum number of bits it takes to represent the input (which would boil down to approximating a lookup table). Of course, these aren't trivial to implement in practice. These layers are bound to be bloated with identifiability issues. Common approaches include $L\_1$ regularization. I'm guessing that the convolutional layers + dropout are just another attempt to deal with these sorts of identifiability issues. Furthermore, the FFN is an attempt to learn an arbitrary mapping for individual words (you can think of mapping words to synonyms, for instance). These are all guesses though - more intuition is welcome. Upvotes: 2
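Both answers take it as given that the two formulations are mathematically identical. As a sanity check, here is a minimal NumPy sketch (our own illustration, not taken from either implementation) showing that a kernel-size-1 convolution and a position-wise Dense layer with shared weights produce exactly the same output:

```python
import numpy as np

rng = np.random.default_rng(0)

batch, seq_len, d_model, d_ff = 2, 4, 8, 32
x = rng.normal(size=(batch, seq_len, d_model))

# Shared parameters for both formulations.
W1 = rng.normal(size=(d_model, d_ff))
b1 = rng.normal(size=(d_ff,))
W2 = rng.normal(size=(d_ff, d_model))
b2 = rng.normal(size=(d_model,))

def ffn_dense(x):
    # Dense applied position-wise: matmul broadcasts over the time axis.
    hidden = np.maximum(0, x @ W1 + b1)
    return hidden @ W2 + b2

def ffn_conv1x1(x):
    # A kernel-size-1 convolution: for each position t, contract only the
    # channel axis with the kernel -- no mixing across positions.
    k1 = W1[np.newaxis, :, :]  # kernel shape (1, d_in, d_out)
    k2 = W2[np.newaxis, :, :]
    hidden = np.maximum(0, np.einsum('bti,kio->bto', x, k1) + b1)
    return np.einsum('bti,kio->bto', hidden, k2) + b2

print(np.allclose(ffn_dense(x), ffn_conv1x1(x)))  # True
```

The einsum keeps the time index `t` fixed on both sides, which is exactly what "applied to each position separately and identically" means in the paper.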
2019/09/19
1,759
4,358
<issue_start>username_0: I'm testing out TensorFlow LSTM layer text generation task, not classification task; but something is wrong with my code, it doesn't converge. What changes should be done? Source code: ``` import tensorflow as tf; # t=0 t=1 t=2 t=3 #[the, brown, fox, is, quick] # 0 1 2 3 4 #[the, red, fox, jumps, high] # 0 5 2 6 7 #t0 x=[[the], [the]] # y=[[brown],[red]] #t1 ... #t2 #t3 bsize = 2; times = 4; #data x = []; y = []; #t0 the: the: x.append([[0/6], [0/6]]); #normalise: x divided by 6 (max x) # brown: red: y.append([[1/7], [5/7]]); #normalise: y divided by 7 (max y) #t1 x.append([[1/6], [5/6]]); y.append([[2/7], [2/7]]); #t2 x.append([[2/6], [2/6]]); y.append([[3/7], [6/7]]); #t3 x.append([[3/6], [6/6]]); y.append([[4/7], [7/7]]); #model inputs = tf.placeholder(tf.float32,[times,bsize,1]) #4,2,1 exps = tf.placeholder(tf.float32,[times,bsize,1]); layer1 = tf.keras.layers.LSTMCell(20) hids1,_ = tf.nn.static_rnn(layer1,tf.split(inputs,times),dtype=tf.float32); w2 = tf.Variable(tf.random_uniform([20,1],-1,1)); b2 = tf.Variable(tf.random_uniform([ 1],-1,1)); outs = tf.sigmoid(tf.matmul(hids1,w2) + b2); loss = tf.reduce_sum(tf.square(exps-outs)) optim = tf.train.GradientDescentOptimizer(1e-1) train = optim.minimize(loss) #train s = tf.Session(); init = tf.global_variables_initializer(); s.run(init) feed = {inputs:x, exps:y} for i in range(10000): if i%1000==0: lossval = s.run(loss,feed) print("loss:",lossval) #end if s.run(train,feed) #end for lastloss = s.run(loss,feed) print("loss:",lastloss,"(last)"); #eof ``` Output showing loss values (a little different every run): ``` loss: 3.020703 loss: 1.8259083 loss: 1.812584 loss: 1.8101325 loss: 1.8081319 loss: 1.8070083 loss: 1.8065354 loss: 1.8063282 loss: 1.8062303 loss: 1.8061805 loss: 1.8061543 (last) ``` Colab link: <https://colab.research.google.com/drive/1TsHjmucuynCPOgKuo4a0hiM8B8UaOWQo><issue_comment>username_1: writing here my suggestion, because i haven't earned the right to comment yet. 
Your main "problem" could be your loss function. It is converging, which is why your loss value is decreasing. So I suggest letting it train longer. Alternatively, you could change the loss function to fit your need. For example you could use: ``` loss = tf.reduce_mean(tf.square(exps-outs)) ``` You will get a smaller loss value which clearly decreases after every output. I hope this helps :) Upvotes: 3 [selected_answer]<issue_comment>username_2: I'm still working on how to make the code work for text generation, but the following converges and works for text classification: ``` import tensorflow as tf; tf.reset_default_graph(); #data ''' t0 t1 t2 british gray is => cat (y=0) 0 1 2 white samoyed is => dog (y=1) 3 4 2 ''' Bsize = 2; Times = 3; Max_X = 4; Max_Y = 1; X = [[[0],[1],[2]], [[3],[4],[2]]]; Y = [[0], [1] ]; #normalise for I in range(len(X)): for J in range(len(X[I])): X[I][J][0] /= Max_X; for I in range(len(Y)): Y[I][0] /= Max_Y; #model Inputs = tf.placeholder(tf.float32, [Bsize,Times,1]); Expected = tf.placeholder(tf.float32, [Bsize, 1]); #single LSTM layer #''' Layer1 = tf.keras.layers.LSTM(20); Hidden1 = Layer1(Inputs); #''' #multi LSTM layers ''' Layers = tf.keras.layers.RNN([ tf.keras.layers.LSTMCell(30), #hidden 1 tf.keras.layers.LSTMCell(20) #hidden 2 ]); Hidden2 = Layers(Inputs); ''' Weight3 = tf.Variable(tf.random_uniform([20,1], -1,1)); Bias3 = tf.Variable(tf.random_uniform([ 1], -1,1)); Output = tf.sigmoid(tf.matmul(Hidden1,Weight3) + Bias3); Loss = tf.reduce_sum(tf.square(Expected-Output)); Optim = tf.train.GradientDescentOptimizer(1e-1); Training = Optim.minimize(Loss); #train Sess = tf.Session(); Init = tf.global_variables_initializer(); Sess.run(Init); Feed = {Inputs:X, Expected:Y}; for I in range(1000): #number of feeds, 1 feed = 1 batch if I%100==0: Lossvalue = Sess.run(Loss,Feed); print("Loss:",Lossvalue); #end if Sess.run(Training,Feed); #end for Lastloss = Sess.run(Loss,Feed); print("Loss:",Lastloss,"(Last)"); #eval Results = Sess.run(Output,Feed); print("\nEval:"); print(Results); print("\nDone."); #eof ``` Upvotes: 0
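Note that the accepted answer's switch from `reduce_sum` to `reduce_mean` only rescales the loss by a constant: the mean is the sum divided by the number of elements, so the gradients shrink by the same factor, which is equivalent to lowering the learning rate. A small NumPy sketch of that relationship (purely illustrative, not the asker's network):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression: predictions depend linearly on a single weight w.
x = rng.normal(size=100)
y = 2.0 * x
w = 0.5

def grad_sum(w):
    # d/dw sum((y - w*x)^2) = sum(-2*x*(y - w*x))
    return np.sum(-2 * x * (y - w * x))

def grad_mean(w):
    # d/dw mean((y - w*x)^2) -- the sum gradient divided by N
    return np.mean(-2 * x * (y - w * x))

# The two gradients differ only by the factor N = len(x): switching the
# reduction is equivalent to rescaling the learning rate by N.
print(np.isclose(grad_sum(w), len(x) * grad_mean(w)))  # True
```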
2019/09/19
1,046
2,973
<issue_start>username_0: Let's assume we have an ANN which takes a vector $x\in R^D$, representing an image, and classifies it over two classes. The output is a vector of probabilities $N(x)=(p(x\in C\_1), p(x\in C\_2))^T$ and we pick $C\_1$ iff $p(x\in C\_1) \geq 0.5$. Let the two classes be $C\_1= \texttt{cat}$ and $C\_2= \texttt{dog}$. Now imagine we want to extract this ANN's idea of ideal cat by finding $x^\* = argmax\_x N(x)\_1$. How would we proceed? I was thinking about solving $\nabla\_xN(x)\_1=0$, but I don't know if this makes sense or if it is solvable. **In short, how do I compute the input which maximizes a class-probability?**
2019/09/22
1,057
3,320
<issue_start>username_0: I'm not sure what this type of data is called, so I will give an example of the type of data I am working with: * A city records its inflow and outflow of different types of vehicles every hour. More specifically, it records the engine size. The output would be the pollution level X hours after the recorded hourly interval. It's worth noting that the data consists of individual vehicle engine sizes, so they can't be aggregated. This means the 2 input vectors (inflow and outflow) will be of variable length (different numbers of vehicles would be entering and leaving every hour) and I'm not sure how to handle this. I could aggregate and simply sum the number of vehicles, but I want to preserve any patterns in the data. E.g. perhaps there is a quick succession of several heavy motorbike engines, denoting that a biker gang has just entered the city; they are known to ride recklessly, contributing more to pollution than the sum of their parts. Any insight is appreciated.
2019/09/23
760
3,362
<issue_start>username_0: Is randomness (either true randomness or simulated randomness) necessary for AI? If true, does it mean "intelligence comes from randomness"? If not, can a robot lacking the ability to generate random numbers be called an artificial general intelligence?<issue_comment>username_1: Yes, randomness is necessary to achieve generality in theory. Right now, the AIs we have are based on finding patterns and using them to predict future moves or outcomes. If we don't include randomness in the data, the machine might treat it as a pattern and behave accordingly (which would be a bias for us). Generating random numbers is a different story in itself and is not a sufficient criterion on its own, though it might well be one of the conditions. Upvotes: 2 <issue_comment>username_2: 1. It might be too philosophical an answer, but maybe we first need to answer the question of whether the human way of thinking, or human creativity, includes random elements. For example, an author writing a book may use some randomness in developing a side thread or an episodic character, so I would say that yes - sometimes we do think up something random. 2. Some algorithms use randomness at their basis, for example evolutionary algorithms when generating the first population. Upvotes: 1 <issue_comment>username_3: > > Is randomness (either true randomness or simulated randomness) necessary for AI > > > It depends on how you define Artificial Intelligence. If you regard it strictly as an intentionally created construct which demonstrates utility, then no. (For instance, [Nimatron](https://en.wikipedia.org/wiki/Nimatron), potentially the first functioning AI, beat most human competitors at NIM. But Nimatron was classical AI, entirely rules based with no learning.)
That said: * **Randomness has proved a useful component in machine learning, and any feasible AGI would likely require ML.** Given sufficient computing power, aka time and space, it would absolutely be possible to brute force anything, including AGI, but the resulting algorithm would be "brittle", unable to "compute" anything not previously defined. A learning algorithm, presented with a problem outside of its domain of knowledge, may initially degrade in performance, but it can learn from those outcomes, and gradually improve performance. *IBM brute forced Chess with [Deep Blue](https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)#Design), but Chess is a strictly narrow problem that turned out not to require general intelligence. AGI requires human level performance in all tasks engaged in by humans, which, even if they could be broken down into a set of individual narrow problems, is an ever-expanding set of problems.* > > Does it mean "intelligence comes from randomness"? > > > Not if the definition of intelligence is rooted in utility, because deterministic processes can demonstrate utility. * **In statistical AI, the intelligence arises from the analysis of random search or the fitness of the genetic algorithm, not the randomness per se.** In other words, if you have the randomness without the analysis, every decision is an unqualified guess. *My sense is that it is free will that would arise from randomness—effects unrelated to causes—because without true randomness, the universe and everything in it is purely deterministic.* Upvotes: 2
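As a toy illustration of the "randomness plus analysis" point above (our own sketch, not from any answer): a bare random guess is unqualified, while random search keeps whichever candidate scores best and, by construction, can never do worse than its own first guess:

```python
import random

random.seed(0)

def f(x):
    # Objective to maximise (peak at x = 3).
    return -(x - 3) ** 2

def random_search(n):
    # Randomness plus analysis: draw n candidates, keep the best seen so far.
    first = random.uniform(-10, 10)   # what a bare guess would give us
    best = first
    for _ in range(n - 1):
        x = random.uniform(-10, 10)
        if f(x) > f(best):
            best = x
    return first, best

first, best = random_search(1000)
# By construction the selected candidate is never worse than the bare guess.
print(f(best) >= f(first))  # True
```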
2019/09/23
1,613
4,830
<issue_start>username_0: My weights go from being between 0 and 1 at initialization to exploding into the tens of thousands in the next iteration. In the 3rd iteration, they become so large that only arrays of nan values are displayed. How can I go about fixing this? Is it to do with the unstable nature of the sigmoid function, or is one of my equations incorrect during backpropagation which makes my gradients explode? ``` import numpy as np from numpy import exp import matplotlib.pyplot as plt import h5py # LOAD DATASET MNIST_data = h5py.File('data/MNISTdata.hdf5', 'r') x_train = np.float32(MNIST_data['x_train'][:]) y_train = np.int32(np.array(MNIST_data['y_train'][:,0])) x_test = np.float32(MNIST_data['x_test'][:]) y_test = np.int32(np.array(MNIST_data['y_test'][:,0])) MNIST_data.close() ############################################################################## # PARAMETERS number_of_digits = 10 # number of outputs nx = x_test.shape[1] # number of inputs ... 784 --> 28*28 ny = number_of_digits m_train = x_train.shape[0] m_test = x_test.shape[0] Nh = 30 # number of hidden layer nodes alpha = 0.001 iterations = 3 ############################################################################## # ONE HOT ENCODER - encoding y data into 'one hot encoded' lr = np.arange(number_of_digits) y_train_one_hot = np.zeros((m_train, number_of_digits)) y_test_one_hot = np.zeros((m_test, number_of_digits)) for i in range(len(y_train_one_hot)): y_train_one_hot[i,:] = (lr==y_train[i].astype(np.int)) for i in range(len(y_test_one_hot)): y_test_one_hot[i,:] = (lr==y_test[i].astype(np.int)) # VISUALISE SOME DATA for i in range(5): img = x_train[i].reshape((28,28)) plt.imshow(img, cmap='Greys') plt.show() y_train = np.array([y_train]).T y_test = np.array([y_test]).T ############################################################################## # INITIALISE WEIGHTS & BIASES params = { "W1": np.random.rand(nx, Nh), "b1": np.zeros((1, Nh)), "W2": np.random.rand(Nh, ny), "b2": 
np.zeros((1, ny)) } # TRAINING # activation function def sigmoid(z): return 1/(1+exp(-z)) # derivative of activation function def sigmoid_der(z): return z*(1-z) # softmax function def softmax(z): return 1/sum(exp(z)) * exp(z) # softmax derivative is similar to sigmoid's def softmax_der(z): return sigmoid_der(z) def cross_entropy_error(v,y): return -np.log(v[y]) # forward propagation def forward_prop(X, y, params): outs = {} outs['A0'] = X outs['Z1'] = np.matmul(outs['A0'], params['W1']) + params['b1'] outs['A1'] = sigmoid(outs['Z1']) outs['Z2'] = np.matmul(outs['A1'], params['W2']) + params['b2'] outs['A2'] = softmax(outs['Z2']) outs['error'] = cross_entropy_error(outs['A2'], y) return outs # back propagation def back_prop(X, y, params, outs): grads = {} Eo = (y - outs['A2']) * softmax_der(outs['Z2']) Eh = np.matmul(Eo, params['W2'].T) * sigmoid_der(outs['Z1']) dW2 = np.matmul(Eo.T, outs['A1']).T dW1 = np.matmul(Eh.T, X).T db2 = np.sum(Eo,0) db1 = np.sum(Eh,0) grads['dW2'] = dW2 grads['dW1'] = dW1 grads['db2'] = db2 grads['db1'] = db1 # print('dW2:',grads['dW2']) return grads # optimise weights and biases def optimise(X,y,params,grads): params['W2'] -= alpha * grads['dW2'] params['W1'] -= alpha * grads['dW1'] params['b2'] -= alpha * grads['db2'] params['b1'] -= alpha * grads['db1'] return # main for epoch in range(iterations): print(epoch) outs = forward_prop(x_train, y_train, params) grads = back_prop(x_train, y_train, params, outs) optimise(x_train,y_train,params,grads) loss = 1/ny * np.sum(outs['error']) print(loss) ```<issue_comment>username_1: This problem is called exploding gradients, resulting in an unstable network that at best cannot learn from the training data and at worst results in NaN weight values that can no longer be updated. One way to confirm that it is exploding gradients is if the loss is unstable and not improving, or shows NaN values during training. Apart from the usual gradient clipping and weight regularization that are recommended, I think the problem with your network is the architecture. 30 is an abnormally high number of nodes for a 2-layer perceptron model. Try increasing the number of layers and reducing the nodes per layer. - This is under the assumption that you're experimenting with MLPs, because for the problem above, convolutional neural networks seem like an obvious way to go. If unexplored - definitely check out CNNs for digit recognition; two-layer models will surely work better there. Hope this helped! Upvotes: 4 [selected_answer]<issue_comment>username_2: Try using float64 instead of float32, and int64 instead of int32; increasing the number of bits increases the range of weight values that can be stored. Upvotes: 1
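For reference, the "usual gradient clipping" mentioned in the accepted answer can be sketched in a few lines of NumPy. This is a generic illustration of clipping by global norm (the function name and constants are our own; frameworks ship equivalents of this operation):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Scale a list of gradient arrays so that their combined L2 norm
    is at most max_norm. Hypothetical helper, mirroring the usual
    clip-by-global-norm trick."""
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm <= max_norm:
        return grads
    scale = max_norm / global_norm
    return [g * scale for g in grads]

# Exploding gradients: huge values that would destroy the weights.
grads = [np.array([1e4, -2e4]), np.array([[3e4]])]
clipped = clip_by_global_norm(grads, max_norm=5.0)
total = np.sqrt(sum(np.sum(g ** 2) for g in clipped))
print(round(total, 6))  # 5.0
```

The update direction is preserved; only the step size is capped, which stops a single bad batch from blowing up the weights.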
2019/09/25
928
3,405
<issue_start>username_0: I was following some examples to get familiar with TensorFlow's LSTM API, but noticed that all LSTM initialization functions require only the `num_units` parameter, which denotes the number of hidden units in a cell. According to what I have learned from the famous [colah's blog](https://colah.github.io/posts/2015-08-Understanding-LSTMs/), the cell state has nothing to do with the hidden layer, thus they could be represented in different dimensions (I think), and then we should pass at least 2 parameters denoting both `#hidden` and `#cell_state`. So, this confuses me a lot when trying to figure out what TensorFlow's cells do. Under the hood, are they implemented like this just for the sake of convenience or did I misunderstand something in the blog mentioned? [![dimensions illustration](https://i.stack.imgur.com/pb4NN.png)](https://i.stack.imgur.com/pb4NN.png)<issue_comment>username_1: I had a very similar issue with the dimensions. Here's the rundown: Every node you see inside the LSTM cell has the exact same output dimensions, *including the cell state*. Otherwise (as you'll see with the forget gate and output gate), how could you possibly do an element-wise multiplication with the cell state? They have to have the same dimensions in order for that to work. Using an example where `n_hiddenunits = 256`: ``` Output of forget gate: 256 Input gate: 256 Activation gate: 256 Output gate: 256 Cell state: 256 Hidden state: 256 ``` Now this can obviously be problematic if you want the LSTM to output, say, a one hot vector of size 5. So to do this, a softmax layer is slapped onto the end of the hidden state, to convert it to the correct dimension. So just a standard FFNN with normal weights (no biases, because of the softmax).
Now, also imagining that we input a one hot vector of size 5: ``` input size: 5 total input size to all gates: 256+5 = 261 (the hidden state and input are appended) Output of forget gate: 256 Input gate: 256 Activation gate: 256 Output gate: 256 Cell state: 256 Hidden state: 256 Final output size: 5 ``` That is the final dimensions of the cell. Upvotes: 2 <issue_comment>username_2: What I understand with a layer of LSTM composed of 4 cells is depicted in the following picture: [![LSTM layer with 4 cells](https://i.stack.imgur.com/eOoET.png)](https://i.stack.imgur.com/eOoET.png) This would explain the fact that the hidden state of the whole layer has exactly the same dimension of the hidden states (or cells). However, what I still don't fully understand is the 'return sequence' between LSTM layers, which changes the shape from [hidden\_states] to [x\_dimension, hidden\_states]. This is explained because usually we only care about the state of the last cell, and when connecting multiple layers, all the states of the cells are passed into the next layer. Nevertheless, I still cannot make sense of it graphically. e.g. `model = keras.models.Sequential([ keras.layers.LSTM(20, return_sequences=True, input_shape=[None, 1]), keras.layers.LSTM(20, return_sequences=True), keras.layers.TimeDistributed(keras.layers.Dense(10)) ])` Upvotes: 1 <issue_comment>username_3: Look at the equation for computing the hidden state as a function of the cell state and output gate: $$ h\_t = \tanh(C\_t)\circ o\_t $$ This equation implies that the hidden state and cell state have the same dimensionality. Upvotes: 1
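The dimension bookkeeping in the answers above can be checked with a minimal NumPy sketch of a single LSTM step, using the standard gate equations (the weights and sizes here are illustrative, not TensorFlow's internals):

```python
import numpy as np

rng = np.random.default_rng(2)

n_units, n_input = 256, 5
concat = n_units + n_input  # hidden state and input are appended -> 261

# One weight matrix and bias per gate: forget, input, candidate, output.
W = {k: rng.normal(scale=0.1, size=(concat, n_units)) for k in 'figo'}
b = {k: np.zeros(n_units) for k in 'figo'}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    z = np.concatenate([h, x])          # shape (261,)
    f = sigmoid(z @ W['f'] + b['f'])    # forget gate
    i = sigmoid(z @ W['i'] + b['i'])    # input gate
    g = np.tanh(z @ W['g'] + b['g'])    # candidate ("activation") gate
    o = sigmoid(z @ W['o'] + b['o'])    # output gate
    c_new = f * c + i * g               # element-wise, so dims must match
    h_new = np.tanh(c_new) * o          # h_t = tanh(C_t) * o_t
    return h_new, c_new

x = rng.normal(size=n_input)
h, c = np.zeros(n_units), np.zeros(n_units)
h, c = lstm_step(x, h, c)
print(h.shape, c.shape)  # (256,) (256,)
```

Every gate output, the cell state, and the hidden state come out with shape `(256,)`, which is why `num_units` is the only size parameter the cell needs.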
2019/09/28
389
1,821
<issue_start>username_0: My understanding of the vanishing gradient problem in deep networks is that as backprop progresses through the layers the gradients become small, and thus training progresses slower. I'm having a hard time reconciling this understanding with images such as below, where the losses for a deeper network are higher than for a shallower one. Should it not just take longer to complete each iteration, but still reach the same if not a higher level of accuracy? [![enter image description here](https://i.stack.imgur.com/y9HV7.png)](https://i.stack.imgur.com/y9HV7.png)<issue_comment>username_1: Those graphs do not disprove your 'vanishing gradient' theory. The deeper network may eventually do better than the shallower one, but it might take much longer to do it. Incidentally, the ReLU activation function was designed to mitigate the vanishing gradient problem. Upvotes: 0 <issue_comment>username_2: In theory, deeper architectures can encode more information than shallower ones because they can perform more transformations of the input, which leads to better results at the output. The training is slower because back propagation is quite expensive: as you increase the depth, you increase the number of parameters and gradients that need to be computed. Another issue that you need to take into account is the effect of the activation function. Saturating functions like sigmoid and hyperbolic tangent result in very small gradients at their tails; other activation functions are just flat, e.g. ReLU is flat on the negatives. Therefore, there is no error to propagate, because the gradient is either very small (as in saturating functions) or zero. Batch Norm greatly assists in this operation because it collapses values into better ranges where the gradients aren't close to zero. Upvotes: 1
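The saturating-gradient point above can be made numeric: the sigmoid derivative never exceeds 0.25, so by the chain rule a gradient routed through $L$ sigmoid nonlinearities is damped by at most $0.25^L$. A small sketch (our own illustration, using the standard sigmoid derivative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_der(z):
    s = sigmoid(z)
    return s * (1.0 - s)

# The sigmoid derivative peaks at z = 0.
print(sigmoid_der(0.0))  # 0.25

# By the chain rule, a gradient passing through L sigmoid layers is
# damped by at most 0.25**L -- a geometric decay in depth.
for L in [1, 5, 10, 20]:
    print(L, 0.25 ** L)

# Even in the best case, 20 layers shrink the signal below 1e-12.
print(0.25 ** 20 < 1e-12)  # True
```

This is the upper bound with ideal inputs; in a saturated regime (large |z|) the per-layer factor is far smaller still, which is why early layers in deep sigmoid networks learn so slowly.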
2019/09/29
385
1,823
<issue_start>username_0: Intuitively, I understand that having an unbiased estimate of a policy is important because being biased just means that our estimate is distant from the truth value. However, I don't understand clearly why having lower variance is important. Is that because, in offline policy evaluation, we can have only 'one' estimate with a stream of data, and we don't know if it is because of variance or bias when our estimate is far from the truth value? Basically, variance acts like bias. Also, if that is the case, why is having variance preferable to having bias?
2019/10/01
464
2,044
<issue_start>username_0: People say **embeddings** are necessary in NLP because using raw word indices is **inefficient**, since **similar words are supposed to be related to each other**. However, I still don't truly get why. **Subword-based embedding** (aka syllable-based embedding) is understandable, for example: ``` biology --> bio-lo-gy biologist --> bio-lo-gist ``` For the 2 words above, turning them into syllable-based embeddings works well because the 2 words end up related to each other through the shared syllables `bio` and `lo`. However, it's hard to understand the `autoencoder` approach: it turns an index value into a vector, then feeds these vectors to a DNN. The autoencoder can turn vectors back into words too. **How does an autoencoder make words related to each other?**
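To make the embedding question above concrete, here is a toy sketch (the vectors are invented purely for illustration and are not produced by any real model): with one-hot/index codes every pair of distinct words is equally unrelated, while dense embedding vectors can place related words like "biology" and "biologist" close together.

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# One-hot codes: every pair of distinct words is orthogonal (similarity 0).
one_hot = {
    "biology":   [1, 0, 0],
    "biologist": [0, 1, 0],
    "car":       [0, 0, 1],
}

# Hypothetical learned embeddings: related words share a direction.
embed = {
    "biology":   [0.9, 0.8, 0.1],
    "biologist": [0.8, 0.9, 0.1],
    "car":       [0.1, 0.0, 0.9],
}

print(cosine(one_hot["biology"], one_hot["biologist"]))
print(cosine(embed["biology"], embed["biologist"]))
print(cosine(embed["biology"], embed["car"]))
```

This is the sense in which raw indices are "inefficient": nothing about index 17 says it is any closer to index 18 than to index 3000, whereas learned dense vectors can encode exactly that kind of relatedness.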
2019/10/01
778
3,158
<issue_start>username_0: I'm interested in learning about Neural Networks and implementing them. I'm particularly interested in GANs and LSTM networks. I understand perceptrons and basic Neural Network configuration (sigmoid activation, weights, hidden layers etc). But what topics do I need to learn, in order, to get to the point where I can implement a GAN or an LSTM? I intend to make an implementation of each in C++ to prove to myself that I understand. I haven't got a particularly good math background, but I understand most math-things when they are explained. For example, I understand backpropagation, but I don't really understand it. I understand how reinforcement learning is used with backpropagation, but not fully how you can have things like training without datasets (like TD-backgammon). I don't quite understand CNNs, especially why you might choose a particular architecture. If there was a book or website or something for each "topic", it would be great.<issue_comment>username_1: I think, once you are covered with the common stuff, you can probably go on and study all kinds of neural network variants. --- The common stuff: a) An undergraduate level Linear Algebra course -- covering matrix calculus. You might find [this](https://stats.stackexchange.com/questions/21346/reference-book-for-linear-algebra-applied-to-statistics) useful. b) An undergraduate level study in statistical inference. Concepts from this topic will come up most of the time, and you might have a hard time getting around even though you understand the rest of the math. I would recommend [this](https://fsalamri.files.wordpress.com/2015/02/casella_berger_statistical_inference1.pdf). c) A starter book on neural networks. Ex- [Neural networks](http://www.inf.fu-berlin.de/inst/ag-ki/rojas_home/documents/1996/NeuralNetworks/neuron.pdf) by <NAME>. --- After all these are covered you will certainly be ready for learning the variants of neural networks with ease. For LSTM I would recommend [<NAME>](https://www.cs.toronto.edu/~graves/preprint.pdf). Upvotes: 1 <issue_comment>username_2: Would personally recommend deeplearning.ai's [course](https://www.coursera.org/specializations/deep-learning) to begin with. There may be more comprehensive or better MOOCs covering basic MLPs, CNNs, RNNs, and the tuning and training of neural networks, but this is probably the most common one and the one that I can personally vouch for. After this I'd recommend you get a physical or pdf copy of [Deep Learning by Goodfellow et al.](http://www.deeplearningbook.org/) and use it as reference material for any new idea you'd want to learn. Personally would not recommend reading the whole book; it's better as reference material, as it is quite comprehensive. This should essentially give you enough knowledge to be able to cover almost any paper/material on deep learning. The course mentioned (like most courses) would cover LSTMs, as they are quite an old idea (~1997 I think), and GANs are well covered in the book mentioned (the author invented them), since they are a more recent advancement (2014). Hope this was helpful! Upvotes: 0
2019/10/04
946
3,630
<issue_start>username_0: It was noted today that automated text generation is advancing at a rapid pace, potentially accelerating. As bots become more and more capable of passing Turing tests, especially in single iterations, such as social media posts or news blurbs, I have to ask: * Does it matter where a text originates, if the content is strong? Strength here is used in the sense of meaning. To elucidate my argument I'll present an example. (It helps to know the [Library of Babel](https://en.wikipedia.org/wiki/The_Library_of_Babel), an infinite memory array where every possible combination of characters exists.) > > An algorithm is set up to produce [aphorisms](https://en.wikipedia.org/wiki/Aphorism). The overwhelming majority of the output is gibberish, but among the junk an incredibly profound observation emerges that changes the way people think about a subject or issue. > > > Where the bot just spams social media, the aphorism in question is identified because it receives a high number of reposts by humans, who, in this scenario, provide the mechanism for finding the needle (the profound aphorism) in the haystack (the junk output). Does the value of the insight depend on the cognitive quality of the generator, in the sense of having to understand the statement? A real-world example would be [Game 2, Move 37](https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol#Game_2) in the AlphaGo vs. Lee Sedol match.
2019/10/05
12,045
38,947
<issue_start>username_0: As human beings, we can think about infinity. In principle, if we have enough resources (time, etc.), we can count infinitely many things (whether abstract, like numbers, or real). For example, we can at least consider the integers. We can, in principle, think about and "understand" infinitely many numbers displayed on a screen. Nowadays, we are trying to design artificial intelligence that is at least as capable as a human being. However, I am stuck on infinity. I am trying to find a way to teach a model (deep or not) to understand infinity. I define "understanding" in a functional way. For example, if a computer can differentiate 10 different numbers or things, it means that it really understands these different things somehow. This is the basic, straightforward approach to "understanding". As I mentioned before, humans understand infinity because they are capable, in principle, of counting infinitely many integers. From this point of view, if I want to create a model, the model is actually a function in an abstract sense, and this model must differentiate infinitely many numbers. Since computers are digital machines with limited capacity to model such an infinite function, how can I create a model that differentiates infinitely many integers? For example, take a deep learning vision model that recognizes numbers on cards. This model must assign a number to each different card to differentiate each integer. Since there are infinitely many integers, how can the model assign a different number to each integer, like a human being, on a digital computer? If it cannot differentiate infinitely many things, how does it understand infinity? If I take real numbers into account, the problem becomes much harder. What is the point that I am missing? 
Are there any resources that focus on the subject?<issue_comment>username_1: By adding some rules for infinity in arithmetic (such as infinity minus a large finite number is infinity, etc.), the digital computer can appear to understand the notion of infinity. Alternatively, the computer can simply replace the number n with its [log-star](https://en.wikipedia.org/wiki/Iterated_logarithm) value. Then, it can differentiate the numbers at a different scale, and can learn that any number with log-star value > 10 is practically equivalent to infinity. Upvotes: 2 <issue_comment>username_2: I think this is a fairly common misconception about AI and computers, especially among laypeople. There are several things to unpack here. Let's suppose that there's something special about infinity (or about continuous concepts) that makes them especially difficult for AI. For this to be true, it must *both* be the case that humans can understand these concepts while they remain alien to machines, *and* that there exist other concepts that are not like infinity that both humans *and* machines can understand. What I'm going to show in this answer is that wanting both of these things leads to a contradiction. The root of this misunderstanding is the problem of what it means to *understand*. Understanding is a vague term in everyday life, and that vague nature contributes to this misconception. If by understanding, we mean that a computer has the conscious experience of a concept, then we quickly become trapped in metaphysics. There is a [long running](https://plato.stanford.edu/entries/chinese-room/), and essentially open debate about whether computers can "understand" anything in this sense, and even at times, about whether humans can! You might as well ask whether a computer can "understand" that 2+2=4. Therefore, if there's something *special* about understanding infinity, it cannot be related to "understanding" in the sense of subjective experience. 
So, let's suppose that by "understand", we have some more specific definition in mind. Something that would make a concept like infinity more complicated for a computer to "understand" than a concept like arithmetic. Our more concrete definition for "understanding" must relate to some objectively measurable capacity or ability related to the concept (otherwise, we're back in the land of subjective experience). Let's consider what capacity or ability might we pick that would make infinity a special concept, understood by humans and not machines, unlike say, arithmetic. We might say that a computer (or a person) understands a concept if it can provide a correct definition of that concept. However, if even one human understands infinity by this definition, then it should be easy for them to write down the definition. Once the definition is written down, a computer program can output it. Now the computer "understands" infinity too. This definition doesn't work for our purposes. We might say that an entity understands a concept if it can *apply* the concept correctly. Again, if even the one person understands how to apply the concept of infinity correctly, then we only need to record the rules they are using to reason about the concept, and we can write a program that reproduces the behavior of this system of rules. Infinity is actually very well characterized as a concept, captured in ideas like [Aleph Numbers](https://en.wikipedia.org/wiki/Aleph_number). It is not impractical to encode these systems of rules in a computer, at least up to the level that any human understands them. Therefore, computers can "understand" infinity up to the same level of understanding as humans by this definition as well. So this definition doesn't work for our purposes. We might say that an entity "understands" a concept if it can logically relate that concept to arbitrary new ideas. 
This is probably the strongest definition, but we would need to be pretty careful here: very few humans (proportionately) have a deep understanding of a concept like infinity. Even fewer can readily relate it to arbitrary new concepts. Further, algorithms like the [General Problem Solver](https://en.wikipedia.org/wiki/General_Problem_Solver) can, in principle, derive any logical consequences from a given body of facts, given enough time. Perhaps under this definition computers understand infinity *better* than most humans, and there is certainly no reason to suppose that our existing algorithms will not further improve this capability over time. This definition does not seem to meet our requirements either. Finally, we might say that an entity "understands" a concept if it can generate examples of it. For example, I can generate examples of problems in arithmetic, and their solutions. Under this definition, I probably do not "understand" infinity, because I cannot actually point to or create any concrete thing in the real world that is definitely infinite. I cannot, for instance, actually write down an infinitely long list of numbers, merely formulas that express ways to create ever longer lists by investing ever more effort in writing them out. A computer ought to be at least as good as me at this. This definition also does not work. This is not an exhaustive list of possible definitions of "understands", but we have covered "understands" as I understand it pretty well. Under every definition of understanding, there isn't anything special about infinity that separates it from other mathematical concepts. So the upshot is that, either you decide a computer doesn't "understand" anything at all, or there's no particularly good reason to suppose that infinity is harder to understand than other logical concepts. 
If you disagree, you need to provide a concrete definition of "understanding" that *does* separate understanding of infinity from other concepts, and that doesn't depend on subjective experiences (unless you want to claim your particular metaphysical views are universally correct, but that's a **hard** argument to make). Infinity has a sort of semi-mystical status among the lay public, but it's really just like any other mathematical system of rules: if we can write down the rules by which infinity operates, a computer can do them as well as a human can (or better). Upvotes: 7 [selected_answer]<issue_comment>username_3: **TL;DR**: The subtleties of infinity are made apparent in the notion of unboundedness. Unboundedness is finitely definable. "Infinite things" are really things with unbounded natures. Infinity is best understood not as a thing but as a concept. Humans theoretically possess unbounded abilities, *not* infinite abilities (e.g. to count to any arbitrary number, as opposed to "counting to infinity"). A machine can be made to recognize unboundedness. **Down the rabbit hole again** How to proceed? Let's start with "limits." **Limitations** Our brains are not infinite (unless you believe in some metaphysics). So, we do not "think infinity". Thus, what we purport to be infinity is best understood as some **finite** mental *concept* against which we can "compare" other concepts. Additionally, we cannot "count infinite integers." There is a subtlety here that is *very* important to point out: our concept of quantity/number is ***unbounded***. That is, for any finite value we have a finite/concrete way of producing another value which is strictly larger/smaller. That is, provided *finite* time we could only count *finite* amounts. You cannot be "given infinite time" to "count all the numbers"; this would imply a "finishing", which directly contradicts the notion of infinity. 
Unless you believe humans have metaphysical properties which allow them to "consistently" embody a paradox. Additionally, how would you answer: what was the last number you counted? With no "last number" there is never a "finish" and hence never an "end" to your counting. That is, you can never "have enough" time/resources to "count to infinity." I think what you mean is that we can fathom the notion of [bijection](https://en.wikipedia.org/wiki/Bijection) between infinite sets. But this notion is a logical *construction* (ie it's a finite way of wrangling what we understand to be infinite). However, what we are really doing is: within our bounds we are talking about our bounds and, whenever we need to, we can expand our bounds (by a finite amount). And we can even talk about the nature *of* expanding our bounds. Thus: **Unboundedness** A process/thing/idea/object is deemed unbounded if, given some *measure* of its quantity/volume/existence, we can in a finite way produce an "extension" of that object which has a measure we deem "larger" (or "smaller" in the case of infinitesimals) than the previous measure, and this extension process can be applied to the nascent object (ie the process is recursive). *Canonical case number one: The Natural Numbers* Additionally, our notion of infinity prevents any "at-ness" or "upon-ness" unto infinity. That is, one never "arrives" at infinity nor does one ever "have" infinity. Rather, one proceeds unboundedly. Thus, how do we conceptualize infinity? **Infinity** It seems that "infinity" as a word is misconstrued to mean that there is a *thing that exists* called "infinity" as opposed to a *concept* called "infinity". Let's smash atoms with the word: > > Infinite: limitless or endless in space, extent, or size; impossible to measure or calculate. 
> > > in- :a prefix of Latin origin, corresponding to English un-, having a negative or privative force, freely used as an English formative, especially of adjectives and their derivatives and of nouns (inattention; indefensible; inexpensive; inorganic; invariable). > ([source](https://www.dictionary.com/browse/in-)) > > > Finite: having limits or bounds. > > > So in-finity is really un-finity, which is *not having limits or bounds*. But we can be more precise here, because we can all agree the natural numbers are infinite but *any* given natural number is finite. So what gives? Simple: *the* natural numbers satisfy our unboundedness criterion and thus we say "the natural numbers are infinite." That is, "infinity" is a concept. An object/thing/idea is deemed infinite if it possesses a property/facet that is unbounded. As before, we saw that unboundedness is finitely definable. Thus, if the agent you speak of was programmed well enough to spot the pattern in the numbers on the cards, and to recognize that the numbers are all coming from the same set, it could deduce the unbounded nature of the sequence and hence define the set of all numbers as infinite - purely because the set *has no upper bound*. That is, the progression of the natural numbers is unbounded and hence definably infinite. Thus, to me, infinity is best understood as a general concept for identifying when processes/things/ideas/objects possess an unbounded nature. That is, infinity is not independent of unboundedness. Try defining infinity without comparing it to finite things or the bounds of those finite things. **Conclusion** It seems feasible that a machine could be programmed to represent and detect instances of unboundedness, or when it might be admissible to assume unboundedness. Upvotes: 4 <issue_comment>username_4: I think your premise is flawed. 
You seem to assume that to "understand"(\*) infinities requires infinite processing capacity, and imply that humans have just that, since you present them as the opposite of limited, finite computers. But humans *also* have finite processing capacity. We are beings built of a finite number of elementary particles, forming a finite number of atoms, forming a finite number of nerve cells. If we can, in one way or another, "understand" infinities, then surely finite computers can also be built that can. (\* I used "understand" in quotes because I don't want to go into e.g. the definition of sentience; I also don't think it matters for this question.) > > As a human being, we can think infinity. In principle, if we have enough resources (time etc.), we can count infinitely many things (including abstract, like numbers, or real). > > > Here, you actually say it out loud: "with enough resources." Would the same not apply to computers? While humans *can*, e.g., use infinities when calculating limits, and can think of the idea of something getting arbitrarily larger, we can only do it in the abstract, not in the sense of being able to process arbitrarily large numbers. The same rules we use for mathematics could also be taught to a computer. 
Upvotes: 4 <issue_comment>username_5: In Haskell, you can type: `print [1..]` and it will print out the infinite sequence of numbers, starting with: ``` [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,... ``` It will do this until your console runs out of memory. Let's try something more interesting. 
``` double x = x * 2 print (map double [1..]) ``` And here's the start of the output: ``` [2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,... ``` These examples show infinite computation. In fact, you can keep infinite data structures in Haskell, because Haskell has the notion of *non-strictness*: you can do computation on entities that haven't been fully computed yet. In other words, you don't have to fully compute an infinite entity to manipulate that entity in Haskell. Reductio ad absurdum. Upvotes: 3 <issue_comment>username_6: I believe humans can be said to understand infinity since at least [Georg Cantor](https://en.wikipedia.org/wiki/Georg_Cantor) because we can recognize different [types of infinities](https://en.wikipedia.org/wiki/Aleph_number#Aleph-one) (chiefly countable vs. uncountable) via the concept of [cardinality](https://en.wikipedia.org/wiki/Cardinal_number). Specifically, a set is countably infinite if it can be mapped to the [natural numbers](https://en.wikipedia.org/wiki/Natural_number), which is to say there is a 1-to-1 correspondence between its elements and the natural numbers. 
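As a tiny, finite sketch of such a 1-to-1 correspondence (illustrative code only): the rule n ↦ 2n pairs every natural number with exactly one even number and is exactly invertible, which is the bijection that makes the even numbers countably infinite. Only a finite prefix can ever be printed, but the pairing rule itself applies to every n.

```python
def to_even(n):
    # Pair the natural number n with the even number 2n.
    return 2 * n

def from_even(m):
    # Inverse direction: every even number comes from exactly one n.
    assert m % 2 == 0
    return m // 2

# A finite prefix of the (infinite) pairing; the rule works for all n,
# which is what "countably infinite" means for the even numbers.
pairs = [(n, to_even(n)) for n in range(10)]
print(pairs)
```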
The set of all reals is uncountable, as is the set of all combinations (subsets) of natural numbers, because there will always be more combinations than natural numbers (for n > 2), resulting in a set with a greater cardinality. *(The first formal proofs of uncountability can be found in Cantor, and this is a subject of [Philosophy of Math](https://en.wikipedia.org/wiki/Philosophy_of_mathematics).)* Understanding of infinity involves logic as opposed to arithmetic, because we can't express, for instance, all of the decimals of a [transcendental number](https://en.wikipedia.org/wiki/Transcendental_number), only use approximations. Logic is a fundamental capability of what we think of as computers. * An analytic process (AI) that can recognize a function that produces an infinite loop, such as using $\pi$ to draw a circle, might be said to understand infinity... "Never ending" is a definition of infinity, with the set of natural numbers as an example (there is a least number, 1, but no greatest number.) **Intractability vs. Infinity** Outside of the special case of infinite loops, I have to wonder if an AI is more oriented on [computational intractability](https://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability) as opposed to infinity. A problem is said to be intractable if there is not enough time and space to completely represent it, and this can be extended to many real numbers. $\pi$ may be understood to be infinite because it arises from/produces a circle, but I'm not sure this is the case with all real numbers with an intractable number of decimals. Would the AI assume such a number were infinite or merely intractable? The latter case is concrete as opposed to abstract: either it can finish the computation or not. This leads to the **[halting problem](https://en.wikipedia.org/wiki/Halting_problem)**. 
* Turing's proof that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist could be taken as an indication that an algorithm based on the [Turing-Church model of computation](https://en.wikipedia.org/wiki/Turing_machine) cannot have a perfect understanding of infinity. If an alternate computational model arose that could solve the halting problem, it might be argued that an algorithm could have a perfect understanding, or at least demonstrate an understanding comparable to humans. Upvotes: 3 <issue_comment>username_7: (There's a summary at the bottom for those who are too lazy or pressed for time to read the whole thing.) Unfortunately to answer this question I will mainly be deconstructing the various premises. > > As I mentioned before, humans understand infinity because they are capable, at least, counting infinite integers, in principle. > > > I disagree with the premise that humans would actually be able to count to infinity. To do so, said human would need an infinite amount of time, an infinite amount of memory (like a Turing machine) and most importantly an infinite amount of patience - in my experience most humans get bored before they even count to 1,000. Part of the problem with this premise is that infinity is actually not a number, it's a concept that expresses an unlimited amount of 'things'. Said 'things' can be anything: integers, seconds, lolcats, the important point is the fact that those things are not finite. See this relevant SE question for more details: <https://math.stackexchange.com/questions/260876/what-exactly-is-infinity> To put it another way: if I asked you "what number comes before infinity?" what would your answer be? This hypothetical super-human would have to count to that number before they could count infinity. And they'd need to know the number before that first, and the one before that, and the one before that... 
Hopefully this demonstrates why the human would not be able to actually count to infinity - because infinity does not exist at the end of the number line, it is the concept that explains the number line has no end. Neither man nor machine can actually count up to it, even with infinite time and infinite memory. > > For example, If a computer can differentiate 10 different numbers or things, it means that it really understand these different things somehow. > > > Being able to 'differentiate' between 10 different things doesn't imply the understanding of those 10 things. A well-known thought experiment that questions the idea of what it means to 'understand' is John Searle's [Chinese Room](https://plato.stanford.edu/entries/chinese-room/) experiment: > > Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese. > > > The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have. > > > The thing to take away from this experiment is that the ability to process symbols does not imply that one actually understands those symbols. 
Many computers process natural languages every day in the form of text (characters encoded as integers, typically in a unicode-based encoding like UTF-8), but they do not necessarily understand those languages. On a simpler level, effectively all computers are able to add two numbers together, but they do not necessarily understand what they are doing. In other words, even in the 'deep learning vision model' the computer arguably does not understand the numbers (or 'symbols') it is being shown; it is merely the algorithm's ability to simulate intelligence that allows it to be classed as artificial intelligence. > > For example, we can take a deep learning vision model that recognizes numbers on the card. This model must assign a number to each different card to differentiate each integer. Since there exist infinite numbers of integer, how can the model assign different number to each integer, like a human being, on the digital computers? If it cannot differentiate infinite things, how does it understand infinity? > > > If you were to perform the same card test on a human, and continually increased the number of cards used, eventually a human wouldn't be able to keep track of them all due to lack of memory. A computer would experience the same problem, but could theoretically outperform the human. So now I ask you, can a human really differentiate infinite things? Personally I suspect the answer is no, because all humans have limited memory, and yet I would agree that humans most likely can understand infinity to some degree (some can do so better than others). As such, I think the question "If it cannot differentiate infinite things, how does it understand infinity?" has a flawed premise - being able to differentiate infinite things is not a prerequisite for understanding the concept of infinity. --- **Summary:** Essentially your question hinges on what it means to 'understand' something.
Computers can certainly *represent* infinity: the IEEE floating-point specification defines both positive and negative infinity, and all modern processors are capable of processing floating points (either in hardware or through software). If AIs are ever capable of actually understanding things then theoretically they might be able to understand the concept of infinity, but we're a long way off being able to definitively prove this either way, and we'd have to come to a consensus about what it means to 'understand' something first. Upvotes: 3 <issue_comment>username_8: Computers don't understand "infinity" or even "zero", just like a screwdriver does not understand screws. A computer is a tool made for processing binary signals. In fact, a computer's equivalent in wetware is not a person but a brain. Brains don't think, persons do. The brain is just the platform persons are implemented with. It's a somewhat common mistake to conflate the two since their connection tends to be rather inseparable. If you wanted to assign understanding, you'd at least have to move to actual programs instead of computers. Programs may or may not have representations for zero or infinity, and may or may not be able to do skillful manipulations of either. Most symbolic math programs fare mostly better here than someone required to work with math as part of their job. Upvotes: 2 <issue_comment>username_9: I would think that a computer couldn't understand infinity, primarily because the systems, and the parts of those systems, that are driving the computer are finite themselves. Upvotes: 1 <issue_comment>username_10: The premise assumes that humans "understand" infinity. Do we? I think you'd need to tell me what criterion you would use, if you wanted to know whether I "understand" infinity, first.
In the OP, the idea is given that I could "prove" I "understand" infinity, because *"In principle, if we have enough resources (time etc.), we can count infinitely many things (including abstract, like numbers, or real)."* Well, that's simply not true. Worse, if it were true (which it isn't), then it would be equally true for a computer. Here's why: 1. Yes, you can in principle count integers, and see that counting never ends. 2. But even if you had enough resources, you could never "count infinitely many things". There would always be more. That's what "infinite" means. 3. Worse, there are multiple orders ("cardinalities") of infinity. Most of them, you can't count, even with infinite time, and perhaps not even with infinite other resources. They are actually uncountable. They literally cannot be mapped to a number line, or to the set of integers. You cannot order them in such a way that they can be counted, even in principle. 4. Even worse, how do you do that bit where you decide "in principle" what I can do, when I clearly can't ever do it, or even the tiniest part of it? That step feels like a layman-style assumption, not one that actually sees the issues in doing it rigorously. It may not be trivial. 5. Last, suppose this was your actual test, like in the OP. So if I could "in principle with enough resources (time etc) count infinitely many things", it would be enough for you to decide I "understood" infinity (whatever that means). Then so could a computer with sufficient resources (RAM, time, algorithm). So the test itself would be satisfied trivially by a computer if you gave the computer the same criteria. I think maybe a more realistic line of logic is that what this question actually shows is that most (probably all?) humans actually do **not** understand infinity. So understanding infinity is probably **not** a good choice of test/requirement for AI. If you doubt this, ask yourself.
Do you honestly, truly, and seriously, "understand" a hundred trillion years (the possible life of a red dwarf star)? Like, can you really comprehend what it's like, experiencing a hundred trillion years, or is it just a 1 with lots of zeros? What about a femtosecond? Or a time interval of about 10^-42 seconds? Can you truly "understand" that? A timescale compared to which, one of your heartbeats, compares like one of your heartbeats compares to a billion billion times the present life of this universe? Can you *really* "understand infinity", yourself? Worth thinking about... Upvotes: 2 <issue_comment>username_11: The "concept" of infinity is 1 thing to understand. I can represent it with 1 symbol (∞). > > As I mentioned before, humans understand infinity because they are > capable, at least, counting infinite integers, in principle. > > > By this definition humans do not understand infinity. Humans are not capable of counting infinite integers. They will die (run out of compute resources / power) at some time. It would probably be easier, in fact, to get a computer to count towards infinity than it would be to get a human to do so. Upvotes: 1 <issue_comment>username_12: Just food for thought: how about if we try to program infinity not in theoretical, but in practical terms? Thus, if we deem anything that a computer cannot calculate, given its resources, to be infinity, it would fulfill the purpose. Programmatically, it can be implemented as follows: if the input is less than available memory, it's not infinity. Subsequently, infinity can be defined as something that returns an out-of-memory error on an evaluation attempt. Upvotes: 1 <issue_comment>username_13: I think the concept that is missing in the discussion, so far, is symbolic representation. We humans represent and ***understand*** many concepts symbolically. The concept of Infinity is a great example of this. Pi is another, along with some other well-known irrational numbers. There are many, many others.
As it is, we can easily represent and present these values and concepts, both to other humans and to computers, using symbols. Both computers and humans can manipulate and reason with these symbols. For example, computers have been performing mathematical proofs for a few decades now. Likewise, commercial and/or open source programs are available that can manipulate equations symbolically to solve real world problems. So, as @JohnDoucette has reasoned, there isn't anything that special about Infinity vs many other concepts in math and arithmetic. When we hit that representational brick wall, we just define a symbol that represents "that" and move forward. Note, the concept of infinity has many practical uses. Any time you have a ratio and the denominator "goes to" zero, the value of the expression "approaches" infinity. This isn't a rare thing, really. So, while your average person on the street isn't conversant with these ideas, lots and lots of scientists, engineers, mathematicians and programmers are. It's common enough that software has been dealing with Infinity symbolically for a couple of decades now, at least. E.g. Mathematica: <http://mathworld.wolfram.com/Infinity.html> Upvotes: 2 <issue_comment>username_14: [The Questions That Computers Can Never Answer - Wired (magazine)](https://www.wired.com/2014/02/halting-problem/) --- Computers might not be able to reach infinity at all: <https://www.nature.com/articles/35023282>, never mind actually understand it. Computation and computers do have implications for "hard limits of systems." (<https://en.wikipedia.org/wiki/Limits_of_computation>) Upvotes: 2 <issue_comment>username_15: It's arguable whether we humans understand infinity. We just create new concepts to replace old mathematics when we meet this problem.
In division by infinity, a machine can understand it the same way as we do:

```
double* xd = new double;
*xd = ...;
if (*xd / y < 0.00...1) {
    int* xi = new int;
    *xi = (int) (*xd);
    delete xd;
}
```

If a human thinks of infinity, he/she imagines just a huge number in his/her current context. So the key to writing the algorithm is just finding the scale that the AI is currently working with. And BTW this problem must've been solved years ago. People designing float/double must've been conscious of what they were doing. Moving the exponent sign is a linear operation in double. Upvotes: 1 <issue_comment>username_16: I think the property humans have which computers do not is some sort of parallel process that runs alongside every other thing they are thinking and tries to assign an importance weighting evaluation to everything you are doing. If you ask a computer to run the program: A = 1; DO UNTIL(A<0) A=A+1; END; The computer will. If you ask a human, another process interjects with "I'm bored now... this is taking ages... I'm going to start a new parallel process to *examine the problem, project where the answer lies and look for a faster route to the answer* ... Then we discover that we are stuck in an infinite loop that will never be "solved"... and interject with an interrupt that flags the issue, kills the boring process and goes to get a cup of tea :-) Sorry if that is unhelpful. Upvotes: 0 <issue_comment>username_17: Well -- just to touch on the question of people and infinity -- my father has been a mathematician for 60 years. Throughout this time, he's been the kind of geek who prefers to talk and think about his subject over pretty much anything else. He loves infinity and taught me about it from a young age. I was first introduced to the calculus in 5th grade (not that it made much of an impression). He loves to teach, and at the drop of a hat, he'll launch into a lecture about any kind of math. Just ask.
In fact, I would say that there are few things he is more familiar with than infinity...my mother's face, perhaps? I wouldn't count on it. If a human can understand anything, my father understands infinity. Upvotes: 1 <issue_comment>username_18: Humans certainly don't understand infinity. Currently computers cannot understand things that humans cannot, because computers are programmed by humans. In a dystopian future that may not be the case. Here are some thoughts about infinity. The set of natural numbers is infinite. It has also been proved that the set of prime numbers, which is a subset of the natural numbers, is also infinite. So we have an infinite set within an infinite set. It gets worse: between any 2 real numbers there is an infinite number of real numbers. Have a look at the link to Hilbert's paradox of the Grand Hotel to see how confusing infinity can get - <https://en.wikipedia.org/wiki/Hilbert%27s_paradox_of_the_Grand_Hotel> Upvotes: 1 <issue_comment>username_19: username_2's [answer](https://ai.stackexchange.com/a/15744) covers my thoughts on this pretty well, but I thought a concrete example might be interesting. I work on a symbolic AI called Cyc, which represents concepts as a web of logical predicates. We often like to brag that Cyc "understands" things because it can elucidate logical relationships between them. It knows, for example, that people don't like paying their taxes, because paying taxes involves losing money and people are generally averse to that. In reality, I think most philosophers would agree that this is an incomplete "understanding" of the world at best. Cyc might know all of the rules that describe people, taxes, and displeasure, but it has no real experience of any of them. In the case of infinity, though, what more is there to understand? I would argue that as a mathematical concept, infinity has no reality beyond its logical description. If you can correctly apply every rule that describes infinity, you've grokked infinity.
If there's anything that an AI like Cyc can't represent, maybe it's the emotional reaction that such concepts tend to evoke for us. Because we live actual lives, we can relate abstract concepts like infinity to concrete ones like mortality. Maybe it's that emotional contextualization that makes it seem like there's something more to "get" about the concept. Upvotes: 2
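On the summary's point above that computers can *represent* infinity via the IEEE floating-point specification, here is a short demonstration (Python, whose `float` is an IEEE 754 double):

```python
import math

inf = float("inf")            # IEEE 754 positive infinity

assert inf > 1.7e308          # bigger than the largest finite double
assert -inf < -1.7e308        # negative infinity is below every finite value
assert inf + 1 == inf         # infinity absorbs finite arithmetic
assert math.isinf(inf)
assert math.isnan(inf - inf)  # indeterminate forms produce NaN, not an answer
```

As the answers above argue, though, representing and manipulating the symbol is not the same thing as understanding the concept.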
2019/10/05
<issue_start>username_0: As I understand, ResNet has some identity mapping layers, whose task is to create the output as the same as the input of the layer. The ResNet solved the problem of accuracy degrading. But what is the benefit of adding identity mapping layers in intermediate layers? What's the effect of these identity layers on the feature vectors that will be produced in the last layers of the network? Is it helpful for the network to produce a better representation for the input? If this expression is correct, what is the reason?<issue_comment>username_1: **TL;DR**: Deep networks have some issues that skip connections fix. To address this statement: > > As I understand Resnet has some identity mapping layers that their task is to create the output as the same as the input of the layer > > > The residual blocks don't strictly learn the identity mapping. They are simply *capable* of learning such a mapping. That is, the residual block makes learning the identity function easy. So, at the very least, skip connections will not hurt performance (this is explained formally in the paper). From the paper: [![enter image description here](https://i.stack.imgur.com/5TPjb.png)](https://i.stack.imgur.com/5TPjb.png) Observe: it's taking some of the layer outputs from earlier layers, passing them further down, and element-wise summing these with the outputs from the skipped layers. These blocks may learn mappings that are not the identity map. From the paper (some benefits): > > $$\boldsymbol{y} = \mathcal{F}(\boldsymbol{x},\{W\_i\})+\boldsymbol{x}\quad\text{(1)}$$The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).
> > > An example of a residual mapping from the paper is $$\mathcal{F} = W\_2\sigma\_2(W\_1\boldsymbol{x})$$ That is, $\{W\_i\}$ represents a set of $i$ weight matrices ($W\_1,W\_2$ in the example) occurring in the residual (skipped) layers. The "identity shortcuts" refer to performing the element-wise addition of $\boldsymbol{x}$ with the output of the residual layers. So, using the residual mapping from the example, (1) becomes: $$\boldsymbol{y} = W\_2\sigma\_2(W\_1\boldsymbol{x})+\boldsymbol{x}$$ In short, you take the output $\boldsymbol{x}$ of a layer, skip it forward, and element-wise sum it with the output of the residual mapping, and thus produce a residual block. **Limitations of deep networks expressed in paper**: > > When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [11, 42] and thoroughly verified by our experiments. Fig. 1 shows a typical example. > > > The skip connections and hence the residual blocks allow for stacking deeper networks while avoiding this degradation issue. [Link to paper](https://arxiv.org/pdf/1512.03385.pdf) I hope this helps. Upvotes: 4 [selected_answer]<issue_comment>username_2: As explained in this [paper](https://arxiv.org/pdf/1603.05027.pdf), the major benefit of identity mapping is that it enables the backpropagation signal to reach from the output (last) layers to the input (first) layers. You can see in Section 2 of the paper that it resolves the vanishing gradient problem which arises in deeper networks. Upvotes: 2 <issue_comment>username_3: > > As I understand Resnet has some identity mapping layers that their task is to create the output as the same as the input of the layer.
the resnet solved the problem of accuracy degrading. But what is the benefit of adding identity mapping layers in intermediate layers? > > > This is applicable to deep/very deep networks. We decide to add layers when the model output is not converging to the expected output (due to very slow convergence). By this mapping, the author has suggested that some portion of the model's complexity can be adjusted directly with the input value, leaving just the residual value for adjustment. The output is mapped to the input by the identity function - so it is an identity mapping. So the shortcut identity mapping is doing the task of some layers in a plain neural network. The identity mapping is applicable only if the output and input are of the same shape; otherwise a linear projection is required. Upvotes: 0 <issue_comment>username_4: As explained [here](https://d2l.ai/chapter_convolutional-modern/resnet.html) > > **only if larger function classes contain the smaller ones** are we guaranteed that increasing them strictly increases the expressive power of the network. For deep neural networks, if we can train the newly-added layer into an identity function $f(x)=x$, the new model will be as effective as the original model. As the new model may get a better solution to fit the training dataset, the added layer might make it easier to reduce training errors. > > > Upvotes: 0
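To make the residual mapping $\boldsymbol{y} = W\_2\sigma(W\_1\boldsymbol{x})+\boldsymbol{x}$ from the accepted answer concrete, here is a minimal plain-Python sketch. The sizes, weights, and function names are made up for illustration; a real ResNet uses learned convolutional weights:

```python
def relu(v):
    """Element-wise non-linearity inside the residual branch."""
    return [max(0.0, a) for a in v]

def matvec(W, x):
    """Plain matrix-vector product."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def residual_block(x, W1, W2):
    """y = W2 * relu(W1 * x) + x  -- the identity shortcut is the '+ x'."""
    f = matvec(W2, relu(matvec(W1, x)))       # residual mapping F(x, {W_i})
    return [fi + xi for fi, xi in zip(f, x)]  # element-wise sum with the shortcut

# If the residual branch has all-zero weights, the block is exactly the
# identity map -- which is why adding such blocks cannot hurt expressivity.
n = 3
zeros = [[0.0] * n for _ in range(n)]
x = [1.0, -2.0, 0.5]
assert residual_block(x, zeros, zeros) == x
```

With non-zero weights the same block computes an arbitrary learned correction on top of the shortcut, which is the sense in which it learns only the *residual*.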
2019/10/08
<issue_start>username_0: I am recording the vibrations of an AC Motor (50Hz Europe) and I am trying to find out whether it is powered on or not. When I record these vibrations, I basically get the vibration values ($-1$ to $+1$) over time. I would like to develop a program to **detect the presence** of a 50Hz sine wave on a steady stream of input data. I will have $X$ and $Y$ measurements, where $X$ represents amplitude and $Y$ the time (sampled at 100Hz - it is possible to increase the sample rate to 200Hz or 400Hz at max). *Is this a task suited for a neural network, and if so, would it be less efficient than other means of detection?*<issue_comment>username_1: You can implement an autoencoder network. An autoencoder is an unsupervised artificial neural network that learns how to efficiently compress and encode data, then learns how to reconstruct the data back from the reduced encoded representation to a representation that is as close to the original input as possible. When you train the autoencoder with 50Hz sine wave data, your model can reconstruct correctly if it gets 50Hz sine wave data as an input. When an input's "reconstruction loss" is less than your threshold value, you can say the given input is a 50Hz sine wave. Upvotes: 0 <issue_comment>username_2: > > *Is this a task suited for a neural network* > > > Yes. You have choices in fact: * A fully-connected network would be the simplest architecture, and would work if you gave it some time window of samples (e.g. every 0.5 seconds or every 50 samples) and supervised training data - sets of samples with sensor readings and the ground truth value of whether the motor was on or not. * A 1D convolutional neural network would likely be most efficient and robust to train, and would take the same inputs and outputs as the fully-connected network. * A recurrent neural network would be trickier to train, but a nicer design as you could feed it samples one at a time.
The input would be the current sample, and output the probability that the motor was on. When training this, you would also want to provide it transitions between the motor being on and off. The nice feature about this is that it should give you quick feedback about whether the motor was on or off - with the caveat that it may be more likely to trigger intermittent false positives, so a little extra post-processing might be required. All of the above require you to collect training data, ideally in situations identical to planned use of the detector. So if the motor is mounted somewhere that could experience other vibrations, a few of those kind of scenarios should be simulated with motor both on and off. > > *and if so, would it be less efficient than other means of detection?* > > > In terms of computing power and effort on your part, you may find that an off-the-shelf Fast Fourier Transform (FFT) library function with a simple threshold at your target frequency will make a robust and simple detector, with no need for a neural network. Typically for specific frequency detection you would take a window of samples, adjust them (using e.g. [Hamming window](https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.signal.hamming.html)) to reduce edge effects which would appear as frequencies in the conversion, and then run FFT. This combination is so common that you may find it already combined in the FFT library. For more on this, you would want to ask in [Signal Processing Stack Exchange](https://dsp.stackexchange.com/), where use of FFT is well understood. If the environment is noisy or the target frequency can drift (making it hard to set a simple threshold) then you could also combine FFT with a neural network. This combination can solve much more complicated signal detection, and is used in speech processing for instance. 
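As an illustration of the window-then-transform approach described above, here is a dependency-free sketch that measures the energy at one target frequency (a single-bin DFT with a Hann window instead of a full FFT library; the threshold of 0.05 is a made-up value you would calibrate on real recordings):

```python
import math

def tone_magnitude(samples, sample_rate, target_hz):
    """Windowed single-bin DFT: normalised magnitude at target_hz."""
    n = len(samples)
    re = im = 0.0
    for k, x in enumerate(samples):
        w = 0.5 - 0.5 * math.cos(2 * math.pi * k / (n - 1))  # Hann window
        angle = 2 * math.pi * target_hz * k / sample_rate
        re += w * x * math.cos(angle)
        im -= w * x * math.sin(angle)
    return math.hypot(re, im) / n

def motor_on(samples, sample_rate, target_hz=50.0, threshold=0.05):
    return tone_magnitude(samples, sample_rate, target_hz) > threshold

# One second of a 50 Hz tone sampled at 200 Hz is detected; silence is not.
fs = 200
tone = [math.sin(2 * math.pi * 50 * k / fs) for k in range(fs)]
assert motor_on(tone, fs)
assert not motor_on([0.0] * fs, fs)
```

For a streaming input you would run this over a sliding window, which is essentially what the FFT-plus-threshold detector does in practice.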
> > sampled at 100Hz - it is possible to increase the sample rate to 200Hz or 400Hz at max > > > For reliably detecting a 50Hz signal, I would say that a 200Hz sample rate is the minimum. The theoretical minimum is 100Hz (i.e. twice the signal frequency), but it may give you problems with noise and the possibility that your sample points just happen to fall on low-amplitude parts of the oscillations, making it look like the motor is off even when it is on. Upvotes: 2 [selected_answer]
2019/10/09
<issue_start>username_0: I've just started to learn deep learning and I have a question about this neural network: [![enter image description here](https://i.stack.imgur.com/W6FuR.png)](https://i.stack.imgur.com/W6FuR.png) I think $h\_1$, $h\_j$ and $h\_n$ are perceptrons. So, if they are perceptrons, all of them will have an activation function. I'm wondering if it is possible to have only one activation function, sum the outputs of all of the perceptrons, and pass that sum to that activation function. And the output of this activation function will be $y$. I will have this network, where $H1$, $Hj$ and $Hn$ don't have an activation function: [![enter image description here](https://i.stack.imgur.com/2OuoU.png)](https://i.stack.imgur.com/2OuoU.png) The input for the activation function will be the sum of the outputs of $H1$, $Hj$ and $Hn$ without being processed by an activation function. Is that possible (or is it a good idea)?<issue_comment>username_1: Of course, it is possible, but why do you want to do this? Let's think about it. Imagine the weights for that layer form a matrix full of ones; then, if you have no bias, the output of that layer would be the sum of all the values in the $h\_1$, $h\_j$, $h\_n$ neurons, right? So, it is possible to sum all the values together, give the output to an activation function, and then you'll have your output. Upvotes: 0 <issue_comment>username_2: **TL;DR**: This is possible but removing the activations will decrease the expressivity of the network because it will become mathematically equivalent to a single neuron.
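This collapse can be checked numerically: summing $n$ bias-included linear units gives exactly the same output as one neuron with weights $\tilde w\_j=\sum\_i w\_{ij}$ and bias $\tilde b=\sum\_i b\_i$ (plain Python, with made-up random weights):

```python
import random

random.seed(0)
n = 4                                                              # inputs and hidden units
W = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]  # w_ij
b = [random.uniform(-1, 1) for _ in range(n)]                      # b_i
x = [random.uniform(-1, 1) for _ in range(n)]

# Sum of the n linear (activation-free) hidden units H_i(x)
summed = sum(sum(W[i][j] * x[j] for j in range(n)) + b[i] for i in range(n))

# One collapsed neuron with w~_j = sum_i w_ij and b~ = sum_i b_i
w_tilde = [sum(W[i][j] for i in range(n)) for j in range(n)]
b_tilde = sum(b)
single = sum(wj * xj for wj, xj in zip(w_tilde, x)) + b_tilde

assert abs(summed - single) < 1e-12  # identical up to float rounding
```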
**Mathematical Explanation** The outputs of your intermediate neurons (in the absence of activation functions) are now: $\text{(1)}\quad H\_i(x) = \sum\_{j=1}^nw\_{ij}\cdot x\_j+b\_i$ You are then summing each $H\_i$: $\text{(2)}\quad\mathcal{H(x)}=\sum\_{i=1}^nH\_i(x)$ You then pass $\mathcal{H(x)}$ to some activation, say $g$: $\text{(3)}\quad\hat y = g(\mathcal{H(x)})$ The trouble is that the inner term $\mathcal{H(x)}$ mathematically reduces to a single linear operation on $x$. Proof: 1. $\mathcal{H(x)}=\sum\_{i=1}^nH\_i(x)$. Substituting in (1): 2. $\mathcal{H(x)}=\sum\_{i=1}^n((\sum\_{j=1}^nw\_{ij}\cdot x\_j)+b\_i)$. This can be rearranged: 3. $\mathcal{H(x)}=(\sum\_{j=1}^n(\sum\_{i=1}^nw\_{ij})\cdot x\_j)+\sum\_{i=1}^nb\_i$. But this reduces to: 4. $\mathcal{H(x)}=\sum\_{j=1}^n\tilde w\_{j}\cdot x\_j+\tilde b$. Where $\tilde w\_{j},\tilde b$ are scalars. Thus, without the non-linear activations, (3) mathematically reduces to a single neuron. Upvotes: 3 [selected_answer]
2019/10/09
<issue_start>username_0: Is there any research on the development of attacks against artificial intelligence systems? For example, is there a way to generate a letter "A", which every human being in this world can recognize but, if it is shown to the state-of-the-art character recognition system, this system will fail to recognize it? Or spoken audio which can be easily recognized by everyone but will fail on the state-of-the-art speech recognition system. If there exists such a thing, is this technology a theory-based science (mathematically proved) or an experimental science (randomly add different types of noise and feed into the AI system and see how it works)? Where can I find such material?<issue_comment>username_1: Sometimes if the rules used by an AI to identify characters are discovered, and if the rules used by a human being to identify the same characters are different, it is possible to design characters that are recognized by a human being but not recognized by an AI. However, if the human being and AI both use the same rules, they will recognize the same characters equally well. A student I advised once trained a neural network to recognize a set of numerals, then used a genetic algorithm to alter the shapes and connectivity of the numerals so that a human could still recognize them but the neural network could not. Of course, if he had then re-trained the neural network using the expanded set of numerals, it probably would have been able to recognize the new ones. Upvotes: 4 <issue_comment>username_2: Yes, there is some research on this topic, which can be called [adversarial machine learning](https://arxiv.org/abs/1611.01236); it is more of an experimental field. An [*adversarial example*](https://arxiv.org/pdf/1312.6199.pdf) is an input similar to the ones used to train the model, but that leads the model to produce an unexpected outcome. For example, consider an artificial neural network (ANN) trained to distinguish between oranges and apples.
You are then given an image of an apple similar to another image used to train the ANN, but that is slightly blurred. Then you pass it to the ANN, which unexpectedly predicts the object to be an orange. Several machine learning and optimization methods have been used to detect the boundary behaviour of machine learning models, that is, the unexpected behaviour of the model that produces different outcomes given two slightly different inputs (but that correspond to the same object). For example, evolutionary algorithms have been used to develop tests for self-driving cars. See, for example, [Automatically testing self-driving cars with search-based procedural content generation](https://dl.acm.org/citation.cfm?id=3330566) (2019) by <NAME> et al. Upvotes: 6 [selected_answer]<issue_comment>username_3: Isn't that essentially what chess does? For example, a human can recognize that a Ruy exchange offers white great winning chances (because of pawn structure) by move 4, while an engine would take several hours of brute-force calculation to understand the same idea. Upvotes: 2 <issue_comment>username_4: There are many insightful comments and answers so far. I want to illustrate my idea of a "color blindness test" more. Maybe it's a hint to lead us to the truth. Imagine there are two people here. One is colorblind (AI) and the other one is non-colorblind (human). If we show them a normal number "6", both of them can easily recognize it as the number 6. Now, if we show them a delicately designed colorful number "6", only the human can recognize it as the number 6 while the AI will recognize it as the number 8. The interesting part of this analogy is that we cannot teach/train colorblind people to recognize this delicately designed colorful number "6" because of a natural difference, which I believe is also the case between AI and humans. AI gets results from computation while humans get results from "mind".
Therefore, like @username_1's answer, if we can find the fundamental difference between AI and humans in how we read things, then this question is answered. Upvotes: 2 <issue_comment>username_5: Yes there are, for instance the one-pixel attacks described in > > <NAME>.; <NAME>.; <NAME>. One pixel attack for fooling deep > neural networks. [arXiv:1710.08864](https://arxiv.org/pdf/1710.08864.pdf) > > > One-pixel attacks are attacks in which changing one pixel in the input image can strongly affect the results. Upvotes: 4 <issue_comment>username_6: Here's an example: * [How to hack your face to dodge the rise of facial recognition tech](https://www.wired.co.uk/article/avoid-facial-recognition-software) In his recent book *The Fall*, Stephenson wrote about smartglasses that project a pattern over the facial features to foil recognition algorithms (which seems not only feasible but likely;) Here's an article from our sponsors, *[Adversarial AI: As New Attack Vector Opens, Researchers Aim to Defend Against It](https://securityintelligence.com/adversarial-ai-as-new-attack-vector-opens-researchers-aim-to-defend-against-it/)* which includes this graphic of "[Five ways AI hacks can lead to real world problems](https://securityintelligence.com/wp-content/uploads/2018/04/2018-ibm-security-ai-hacks-infographic-1.2.jpg)". The article references the conference on [The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation](https://maliciousaireport.com/), where you can download the full report. *I'm assuming many such examples exist in the real world, and will amend this link-based answer as I find them.
Good question!* Upvotes: 3 <issue_comment>username_7: Here's a live demo: <https://www.labsix.org/physical-objects-that-fool-neural-nets/> Recall that neural nets are trained by feeding in the training data, evaluating the net, and using the error between the observed and the intended output to adjust the weights and bring the observed output closer to the intended. Most attacks are based on the observation that, instead of updating the weights, you can update the input neurons. That is, permute the image. However, this attack is very finicky. It falls apart when the permuted image is scaled, rotated, blurred, or otherwise altered. That's clearly a cat to us, but guacamole to the neural net. But a slight rotation and the net starts classifying it correctly again. However, recent breakthroughs allow actual objects presented to a real camera to be reliably misclassified. That's clearly a turtle, albeit with a wonky pattern on its shell. But that net is convinced it's a rifle from practically every angle. Upvotes: 2 <issue_comment>username_8: There is some research at least on the "foolability" of neural networks, which gives insight into the potential high risk of neural nets even when they "seem" 99.99% accurate. A very good paper on this is in Nature: <https://www.nature.com/articles/d41586-019-03013-5> In a nutshell: it shows diverse examples of fooling neural networks/AIs, for example one where a few bits of scotch tape placed on a "Stop" sign change it, for the neural net, into a "limited to 40" sign... (whereas a human would still see a "Stop" sign!). And also 2 striking examples of turning an animal into another by just adding invisible (for humans!) colored dots (turning, in the example, a Panda into a Gibbon, where a human hardly sees anything different so still sees a Panda). Then they elaborate on diverse research avenues, involving for example ways to try to prevent such attacks.
The whole page is a good read for any AI researcher and shows lots of troubling problems (especially for automated systems such as cars, and soon maybe armaments). --- An excerpt relevant to the question: *Hendrycks and his colleagues have suggested quantifying a DNN’s robustness against making errors by testing how it performs against a large range of adversarial examples. However, training a network to withstand one kind of attack could weaken it against others, they say. And researchers led by <NAME> at Google DeepMind in London are trying to inoculate DNNs against making mistakes. Many adversarial attacks work by making tiny tweaks to the component parts of an input — such as subtly altering the colour of pixels in an image — until this tips a DNN over into a misclassification. Kohli’s team has suggested that a robust DNN should not change its output as a result of small changes in its input, and that this property might be mathematically incorporated into the network, constraining how it learns.* *For the moment, however, no one has a fix on the overall problem of brittle AIs. The root of the issue, says Bengio, is that DNNs don’t have a good model of how to pick out what matters. When an AI sees a doctored image of a lion as a library, a person still sees a lion because they have a mental model of the animal that rests on a set of high-level features — ears, a tail, a mane and so on — that lets them abstract away from low-level arbitrary or incidental details. “We know from prior experience which features are the salient ones,” says Bengio. “And that comes from a deep understanding of the structure of the world.”* --- Another excerpt, near the end: *"Researchers in the field say they are making progress in fixing deep learning’s flaws, but acknowledge that they’re still groping for new techniques to make the process less brittle. There is not much theory behind deep learning, says Song. “If something doesn’t work, it’s difficult to figure out why,” she says.
“The whole field is still very empirical. You just have to try things.”"* Upvotes: 2
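The mechanism username_7 describes in this thread, holding the trained weights fixed and taking a gradient step on the input instead, can be sketched on a toy model. Everything below is illustrative: a hand-built logistic unit with made-up weights, stepped along the sign of the loss gradient (an FGSM-style step), not the method of any paper cited above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed, already-"trained" logistic classifier p(class 1 | x).
# The weights are made up for illustration.
w = np.array([2.0, -1.0, 0.5])
b = 0.1

def predict_proba(x):
    return sigmoid(w @ x + b)

def fgsm_step(x, y, eps=0.3):
    """One gradient-sign step on the INPUT that increases the loss.

    For cross-entropy on a single logistic unit, dL/dx = (p - y) * w,
    so the gradient is available analytically (no autodiff needed)."""
    grad_x = (predict_proba(x) - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 1.0, 1.0])   # p(class 1) is high for this input
x_adv = fgsm_step(x, y=1.0)     # nudge x so that p(class 1) drops
```

For a real network one would obtain `grad_x` from an autodiff framework instead of the closed form used here; the point is only that the same gradient machinery used to fit weights can be turned on the input.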
2019/10/11
2,356
9,238
<issue_start>username_0: Is the gradient at a layer (of a feed-forward neural network) independent of the activations of the previous layers? I read this in a paper titled [Mean Field Residual Networks: On the Edge of Chaos](https://arxiv.org/abs/1712.08969) (2017). I am not sure how far this is true, because the error depends on those activations.<issue_comment>username_1: Sometimes if the rules used by an AI to identify characters are discovered, and if the rules used by a human being to identify the same characters are different, it is possible to design characters that are recognized by a human being but not recognized by an AI. However, if the human being and AI both use the same rules, they will recognize the same characters equally well. A student I advised once trained a neural network to recognize a set of numerals, then used a genetic algorithm to alter the shapes and connectivity of the numerals so that a human could still recognize them but the neural network could not. Of course, if he had then re-trained the neural network using the expanded set of numerals, it probably would have been able to recognize the new ones. Upvotes: 4 <issue_comment>username_2: Yes, there is some research on this topic, which can be called [adversarial machine learning](https://arxiv.org/abs/1611.01236), which is more an experimental field. An [*adversarial example*](https://arxiv.org/pdf/1312.6199.pdf) is an input similar to the ones used to train the model, but that leads the model to produce an unexpected outcome. For example, consider an artificial neural network (ANN) trained to distinguish between oranges and apples. You are then given an image of an apple similar to another image used to train the ANN, but that is slightly blurred. Then you pass it to the ANN, which unexpectedly predicts the object to be an orange. 
Several machine learning and optimization methods have been used to detect the boundary behaviour of machine learning models, that is, the unexpected behaviour of the model that produces different outcomes given two slightly different inputs (but that correspond to the same object). For example, evolutionary algorithms have been used to develop tests for self-driving cars. See, for example, [Automatically testing self-driving cars with search-based procedural content generation](https://dl.acm.org/citation.cfm?id=3330566) (2019) by <NAME> et al. Upvotes: 6 [selected_answer]<issue_comment>username_3: Isn't that essentially what chess does? For example, a human can recognize that a Ruy exchange offers white great winning chances (because of pawn structure) by move 4, while an engine would take several hours of brute-force calculation to understand the same idea. Upvotes: 2 <issue_comment>username_4: There are many insightful comments and answers so far. I want to illustrate my idea of a "color blindness test" more. Maybe it's a hint to lead us to the truth. Imagine there are two people here. One is colorblind (AI) and the other is non-colorblind (human). If we show them a normal number "6", both of them can easily recognize it as number 6. Now, if we show them a delicately designed colorful number "6", only the human can recognize it as number 6, while the AI will recognize it as number 8. The interesting part of this analogy is that we cannot teach/train colorblind people to recognize this delicately designed colorful number "6" because of a natural difference, which I believe is also the case between AI and humans. AI gets results from computation while humans get results from "mind". Therefore, like @username_1's answer, if we can find the fundamental difference between how AI and humans read things, then this question is answered. Upvotes: 2 <issue_comment>username_5: Yes, there are: for instance, the one-pixel attacks described in > > <NAME>.; <NAME>.; <NAME>.
One pixel attack for fooling deep > neural networks. [arXiv:1710.08864](https://arxiv.org/pdf/1710.08864.pdf) > > > One-pixel attacks are attacks in which changing a single pixel of the input image can strongly affect the result. Upvotes: 4
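The one-pixel attack cited in this thread searches (in the paper, with differential evolution) for a single pixel whose change flips the prediction. A much cruder random-search sketch of the same idea is below, against a hypothetical stand-in classifier rather than a real network; nothing here reproduces the paper's method.

```python
import numpy as np

def predict(img):
    # Hypothetical stand-in for a trained classifier: class 1 iff the
    # total intensity exceeds a threshold. A real attack would query
    # an actual network here.
    return int(img.sum() > 10.0)

def one_pixel_attack(img, true_label, tries=500, seed=0):
    """Randomly try single-pixel edits until the predicted class flips."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    for _ in range(tries):
        candidate = img.copy()
        candidate[rng.integers(0, h), rng.integers(0, w)] = rng.uniform(0.0, 5.0)
        if predict(candidate) != true_label:
            return candidate      # adversarial image found
    return None                   # no flip found within the budget

img = np.full((4, 4), 0.6)        # predicted class 0 here (sum = 9.6)
adv = one_pixel_attack(img, predict(img))
```

The paper's differential-evolution search is far more sample-efficient; the sketch only illustrates the shape of the problem: query the model, change one pixel, check whether the label flips.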
2019/10/11
964
4,385
<issue_start>username_0: The answers to [this Quora question](https://qr.ae/TWYRNu) say it's OK to ignore machine learning and start right away with deep learning. Is machine learning required or useful for understanding (theoretically and practically) deep learning? Can I start right away with deep learning or should I cover machine learning first? In what way is machine learning useful for deep learning? (Leave aside the mathematics part - I'm OK with it.)<issue_comment>username_1: Deep learning is part of machine learning. * You will miss out on useful information if you ignore machine learning. * You are OK to start your work in machine learning with deep learning and neural networks. You have to start somewhere, and starting with a strong and successful method is reasonable, especially if you need to be able to produce good results quickly. * You will learn essential machine learning stuff while reading about deep learning. * The deep learning tutorials and other learning materials you will be reading may not tell you that what you are learning also applies to other machine learning methods, but you will be learning lots of stuff that applies more generally. You will be studying some machine learning whether you want to or not. * If you have plenty of time, a broader view will help understanding. Still, there is no need to put off deep learning until after mastering some other methods. * Broader knowledge helps you to relate and memorise concepts and be more aware of potential issues, especially issues that are rarely discussed in the deep learning community. Such knowledge and experience will be most useful when trying to apply deep learning to new problems or when trying to make substantial changes. Upvotes: 2 <issue_comment>username_2: [![enter image description here](https://i.stack.imgur.com/FyHLS.png)](https://i.stack.imgur.com/FyHLS.png) > > 1. Is machine learning required or useful for understanding (theoretically and practically) deep learning?
> > > No. Deep learning is itself a huge subject area with serious applications in NLP, computer vision, speech and robotics. You should learn deep learning from scratch, understanding forward propagation, backpropagation, how weights are updated, etc., instead of starting with high-level frameworks like Keras or PyTorch. It's OK to use them once you understand the basics, to save time and code complexity, but remember: you surely don't need machine learning for that. Since you are familiar with the mathematics part, I would suggest you jump straight into deep learning. Note that deep learning is inspired by how the brain works. > > 2. Can I start right away with deep learning or should I cover machine learning first? > > > Yes, you may start right away. Start with the "hello world" problem, MNIST digit classification, if you know a little image processing. Start with a simple neural network model from scratch, then use Keras (very easy) and then proceed to CNNs ... You may start with simple problems in other fields too (NLP, speech). I suggest <NAME>'s course in machine learning (within this he explains a neural network model for MNIST, I think). > > 3. In what way is machine learning useful for deep learning? > > > You will understand that in machine learning you sit down and find useful features in the dataset yourself, but in deep learning it happens automatically. (Learn deep learning in detail and come back and read this; you will understand exactly what I mean!) If you learn machine learning and then go to deep learning, you will realise that it was unnecessary. If you are interested in this field of AI, jump into deep learning right now! Upvotes: 1 [selected_answer]<issue_comment>username_3: I would argue yes, definitely, since it is a bit sequential. (i) Start off applying basic machine learning concepts, such as regression, classification and generalization techniques, to real-world problems.
(ii) You will soon realize the limitations of those techniques. (iii) Take your learning to the next level by learning and applying deep learning concepts, especially if the problems are around image classification or NLP. As mentioned by @username_1, you will not only miss out on useful info, but there will be a huge gap in your learning. Hence, I would suggest learning concurrently, or ML first; otherwise DL will become a black box of a black box. Upvotes: 0
2019/10/12
981
4,484
<issue_start>username_0: I am new to neural networks. I would like to use them as a fitting or forecasting method. A simple NN model that does not contain hidden layers, that is, where the input nodes are directly connected to the output nodes, represents a linear model. Nonlinearity begins to appear in an ANN model when we have hidden nodes, where a nonlinear function is assigned to the hidden nodes and their weights are determined by minimization. How do we choose the non-linear activation function that should be assigned to each hidden neuron?
2019/10/12
1,486
5,750
<issue_start>username_0: I'm certain that this is a very naive question, but I am just beginning to look more deeply at neural networks, having only used decision tree approaches in the past. Also, my formal mathematics training is more than 30 years in the past, so please be kind. :) As I'm reading <NAME>ollet's book on [Deep Learning with Python](https://rads.stackoverflow.com/amzn/click/com/1617294438), I'm struck that it appears that we are effectively treating the weights (kernel and biases) as terms in the standard linear equation ($y=mx+b$). On page 72 of the book, the author writes ``` output = dot(W, input) + b output = (output < 0 ? 0 : output) ``` Am I reading too much into this, or is this correct (and so fundamental I shouldn't be asking about it)?<issue_comment>username_1: In a neural network (NN), a neuron can act as a linear operator, but it usually acts as a non-linear one. The usual equation of a neuron $i$ in layer $l$ of an NN is $$o\_i^l = \sigma(\mathbf{x}\_i^l \cdot \mathbf{w}\_i^l + b\_i^l),$$ where $\sigma$ is a so-called [*activation function*](https://en.wikipedia.org/wiki/Activation_function), which is usually a non-linearity, but it can also be the [identity function](https://en.wikipedia.org/wiki/Identity_function), $\mathbf{x}\_i^l$ and $\mathbf{w}\_i^l$ are the vectors that respectively contain the inputs and the weights for neuron $i$ in layer $l$, and $b\_i^l \in \mathbb{R}$ is a [bias](https://en.wikipedia.org/wiki/Inductive_bias). Similarly, the output of a layer of a feed-forward neural network (FFNN) is computed as $$\mathbf{o}^l = \sigma(\mathbf{X}^l \mathbf{W}^l + \mathbf{b}^l).$$ In your specific example, you set the output to $0$ if the output of the linear combination is less than $0$; otherwise, you use the output of the linear combination. This is the definition of the [ReLU activation function](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)), which is a non-linear function.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Taking the question from [comments](https://ai.stackexchange.com/questions/15877/do-neurons-of-a-neural-network-model-a-linear-relationship/15886#comment24003_15879) on username_1's answer. --- > > Am I wrong to see a clear relationship between how we are currently training networks and the classic function that defines a line? > > > You are right about it. This is an intuitive way to understand neural networks. You can create a neural network that only does simple linear regression, by using linear activation functions in all the layers, such that the neural network (model) output is a linear combination of the inputs. And, this seems like a great way to introduce neural networks to students. But, one must also look at the fact that neural networks provide the flexibility to model many kinds of non-linear relationships. --- A [list of activation functions](https://stats.stackexchange.com/questions/115258/comprehensive-list-of-activation-functions-in-neural-networks-with-pros-cons/). Upvotes: 2 <issue_comment>username_3: Almost never. The composition of linear functions is another linear function, so if neurons were only linear transformations there would be basically no point in having more than one layer. Instead, every neuron applies some kind of *nonlinear* function to its input. There are lots of different variations, but in the end the combination of the nonlinear activation function at each layer with the linear matrix multiplication connecting the outputs of each layer to the inputs of the next, creates something that has much more intricate behavior while still being reasonably efficient to compute. Upvotes: 3 <issue_comment>username_4: You're quite right in your interpretation, but I'll answer in two parts in order to avoid confusion with respect to activation functions. Part 1. (TLDR: a neuron's weights are the normal vector of a hyperplane that divides input space in two parts.
The neuron's preactivation is proportional to the distance of the input point to the plane.) Every artificial neuron learns a linear function of its inputs. The best-known equation of a line is $y=m \cdot x+b$, but that's actually a very specific form that lets us go through values of X and see which values of Y they correspond to. A more general form would be $0=n \cdot y + m \cdot x + b$. This tells us that the line is formed by the points (X,Y) whose values make that expression zero. We can explore different values of (X,Y) and see that most of them give non-zero values, and that they give positive values at one side of the line and negative values at the other side. Only if you land exactly on the line will it give you zero. This is a very important interpretation, because it's what allows neurons to find divisions of the input space (into a positive side and a negative side). Of course it probably won't be a 2d space, so it will be a hyperplane instead of a line, but I hope you get the idea. Part 2. However, if we only used linear transformations we could not learn non-linear functions. Here's where the activation function plays a very important role: it distorts the neuron's preactivation value (which is linear) in a non-linear way (which is what makes the neuron a non-linear function). Activation functions have lots of bells and whistles, which are too much to write here, but you can start thinking about them as distortions applied to that distance of the input point to the neuron's hyperplane. The one you saw is called ReLU, and it basically truncates the negative values, thus focusing only on the positive side of the hyperplane (it may be interpreted as measuring how far the point has crossed a frontier). Upvotes: 1
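A quick numerical check of the point made in the answers above: stacking layers without an activation collapses into a single linear map, while inserting the book's ReLU between them breaks that collapse. The weights below are arbitrary random values, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
W2, b2 = rng.normal(size=(2, 3)), rng.normal(size=2)

def relu(z):
    # the book's "(output < 0 ? 0 : output)"
    return np.maximum(0.0, z)

def two_linear_layers(x):
    return W2 @ (W1 @ x + b1) + b2       # no activation in between

def two_relu_layers(x):
    return W2 @ relu(W1 @ x + b1) + b2   # ReLU in between

# Without the activation, the stack equals ONE linear layer with
# composed weights, so the extra layer buys nothing:
W, b = W2 @ W1, W2 @ b1 + b2
x = rng.normal(size=4)
assert np.allclose(two_linear_layers(x), W @ x + b)
```

Algebraically, `W2 @ (W1 @ x + b1) + b2 = (W2 W1) x + (W2 b1 + b2)`, which is why the assertion holds for every input.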
2019/10/14
478
1,694
<issue_start>username_0: What are the standard (or baseline) problems (or at least common ones) for CNNs and LSTMs? As an example, for a feed-forward neural net, a common problem is the XOR problem. Is there a standard problem like this for CNNs and LSTMs? I think for a CNN the standard test is to try it on MNIST, but I'm not sure about LSTMs.<issue_comment>username_1: For LSTMs, I have not come across a standard test, but when I started, I tried the following. Generate sequences of numbers like [0,1,2,3,4], [1,2,3,4,5], ... as the dataset; the labels would then be [5, 6, ...]. Train this using an LSTM network. This is a good way to understand the parameters involved; by changing the number of layers and other parameters, you can easily check how it works. Of course, MNIST is the test for CNNs. Upvotes: 1 <issue_comment>username_2: It's more domain- or task-specific. There is no obvious baseline anymore because these models and this field have evolved into too large an ecosystem. Nonetheless, I'll list a couple of well-known examples below. **Image classification:** * MNIST * CIFAR * ImageNet **Detection/segmentation:** * PascalVOC * COCO * CityScapes **Pose estimation:** * MPII * LEEDS **Text classification:** * IMDB * yelp **Question answering:** * SQuAD **Translation:** * WMT * IWSLT This is just a taste. There are tons more, both in each category and in the number of categories; a good source is the [Papers with Code](https://paperswithcode.com/sota) website. Therefore, there is no single *standard* problem, given that there are too many that all in one shape or form use CNNs or RNNs (or others). Upvotes: 3 [selected_answer]
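As a concrete starting point, username_1's toy LSTM task is easy to set up. The sketch below only builds the windowed dataset (the function name and shapes are my own); training on it would then use an LSTM layer from a framework such as Keras.

```python
import numpy as np

def make_sequence_dataset(n_samples, window=5):
    """Sliding windows over 0, 1, 2, ... with the next value as the label."""
    X = np.array([np.arange(i, i + window) for i in range(n_samples)])
    y = np.arange(window, window + n_samples)
    return X, y

X, y = make_sequence_dataset(3)
# X is [[0 1 2 3 4], [1 2 3 4 5], [2 3 4 5 6]] and y is [5 6 7]
```

For an actual recurrent model, the inputs would typically be reshaped to `(n_samples, window, 1)` so each window is a length-`window` sequence of scalar features.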
2019/10/17
918
3,952
<issue_start>username_0: Although I have a decent background in math, I'm trying to understand which courses from CS and logic to look into. My aim is to get into a Machine Learning PhD program.<issue_comment>username_1: In several projects, I found data analysis and data structures to be critical. Machine Learning requires huge amounts of data and, most likely, the data will come from multiple sources. Prior to use, data requires analysis, cleaning, interpretation, feature engineering (subject matter expertise), and structure. Upvotes: 1 <issue_comment>username_2: Off the top of my head, roughly in order of priority (excl. math): **Practical/applied CS**: machine learning, artificial intelligence (incl. symbolic AI), data mining, algorithms, data structures **Theoretical CS**: complexity theory **Programming**: Python Logic is highly relevant for symbolic AI but not so much for sub-symbolic approaches like ML. For all topics mentioned in the first two categories you can find free online lectures from different universities. Upvotes: 2 <issue_comment>username_3: I worked as a professor for a time, and often advised students on this. For a PhD in machine learning, I think the ideal background is: 1. **Core CS Courses** * Programming (typically 3-4 courses). Language choice is not highly important, but Python, C++, Java, and perhaps JavaScript, are reasonable picks, if only because of their prevalence. * Core topics: data structures, algorithms, operating systems, databases * numerical linear algebra or numerical methods * advanced algorithm design and analysis Together these will allow you to read and write the code that even highly optimized versions of ML algorithms are written in, and to understand what might be going wrong within them. **2. AI & ML courses, usually offered through a CS department** * a broad survey course in AI (like AI:AMA by Russell & Norvig), usually offered to senior undergraduates. * A course in applied machine learning or data mining.
You may also take other AI courses, but they are not as commonly offered to undergraduates, so many students wait until graduate school: * reinforcement learning * soft computing * computational learning theory * Bayesian Methods * Deep Learning * Multiagent Systems * Information Retrieval * Natural Language Processing * Computer Vision * Robotics Together, these will give you the broadest possible background in AI & ML. These can allow you to find new applications of ML, or to pull AI techniques from one area into another as you need. **3. Statistics courses** * a 1 or 2 term course in probability theory, ideally a version that requires and uses calculus. * at *minimum* a course in statistical hypothesis testing. Much stronger would be to also take courses in: * regression * generalized linear models * experiment design * causal inference * Bayesian methods These courses allow you to reason formally and comfortably about uncertainty. They also give you the correct framework for answering questions about whether your ML algorithm is working, and what patterns an ML algorithm uncovers mean. **4. Mathematics courses** * 3 semesters of calculus, going at least as far as multi-variate/vector calculus. * optionally, a more advanced course that builds on calculus, like real analysis, but only to reinforce calculus concepts. * at least 1, and preferably 2, courses in linear algebra * at least 1, and preferably 2, courses in discrete mathematics. * ideally, something like Graham, Knuth, and Patashnik's *Concrete Mathematics* * ideally a course in advanced optimization techniques * optionally, courses in logic, but be aware that this is almost a fringe area in AI now, and essentially irrelevant to a PhD in machine learning. The parts you need are usually covered in a broad survey AI course. These courses give you the basic mathematical fluency to understand most machine learning algorithms well. Upvotes: 4 [selected_answer]
2019/10/17
1,039
4,498
<issue_start>username_0: Is artificial intelligence and, in particular, neural networks being used in real-world critical applications and devices? I had a discussion with my colleague who states that nobody would use artificial intelligence, especially neural nets, for critical stuff, like technical devices or sensors. I'm only aware of the problem of neural nets being so-called black-boxes, but, nevertheless, I think it is possible to make an NN robust so that it matches the demands of daily processes, also in sensitive fields like health care, energy market, self-driving cars, and so on. Yet I cannot underline this. Does somebody have more insights or other information, opinions and so on? I appreciate any meaningful answer.<issue_comment>username_1: In several projects, I found data analysis and data structures to be critical. Machine Learning requires huge amounts of data and, most likely, the data will come from multiple sources. Prior to use, data requires analysis, cleaning, interpretation, feature engineering (subject matter expertise), and structure. Upvotes: 1 <issue_comment>username_2: From the top of my mind roughly in order of priority excl. math: **Practical/applied CS**: machine learning, artificial intelligence (incl. symbolic AI), data mining, algorithms, data structures **Theoretical CS**: complexity theory **Programming**: Python Logic is highly relevant for symbolic AI but not so much for sub-symbolic approaches like ML. For all topics mentioned in the first two categories you can find free online lectures from different universities. Upvotes: 2 <issue_comment>username_3: I worked as a professor for a time, and often advised students on this. For a PhD in machine learning, I think the ideal background is: 1. **Core CS Courses** * Programming (typically 3-4 courses). Language choice is not highly important, but Python, C++, Java, and perhaps JavaScript, are reasonable picks, if only because of their prevalence. 
* Core topics: data structures, algorithms, operating systems, databases * numerical linear algebra or numerical methods * advanced algorithm design and analysis Together these will allow you to read and write the code that even highly optimized versions of ML algorithms are written in, and to understand what might be going wrong within them. **2. AI & ML courses, usually offered through a CS department** * a broad survey course in AI (like AI:AMA by Russell & Norvig), usually offered to senior undergraduates. * A course in applied machine learning or data mining. You may also take other AI courses, but they are not as commonly offered to undergraduates, so many students wait until graduate school: * reinforcement learning * soft computing * computational learning theory * Bayesian Methods * Deep Learning * Multiagent Systems * Information Retrieval * Natural Language Processing * Computer Vision * Robotics Together, these will give you the broadest possible background in AI & ML. These can allow you to find new applications of ML, or to pull AI techniques from one area into another as you need. **3. Statistics courses** * a 1 or 2 term course in probability theory, ideally a version that requires and uses calculus. * at *minimum* a course in statistical hypothesis testing. Much stronger would be to also take courses in: * regression * generalized linear models * experiment design * causal inference * Bayesian methods These courses allow you to reason formally and comfortably about uncertainty. They also give you the correct framework for answering questions about whether your ML algorithm is working, and what patterns an ML algorithm uncovers mean. **4. Mathematics courses** * 3 semesters of calculus, going at least as far as multi-variate/vector calculus. * optionally, a more advanced course that builds on calculus, like real analysis, but only to reinforce calculus concepts. 
* at least 1, and preferably 2, courses in linear algebra * at least 1, and preferably 2, courses in discrete mathematics. * ideally, something like Knuth et al.'s *Concrete Mathematics* * ideally a course in advanced optimization techniques * optionally, courses in logic, but be aware that this is almost a fringe area in AI now, and essentially irrelevant to a PhD in machine learning. The parts you need are usually covered in a broad survey AI course. These courses give you the basic mathematical fluency to understand most machine learning algorithms well. Upvotes: 4 [selected_answer]
2019/10/17
1,230
5,078
<issue_start>username_0: In my experience with Neural Nets, I have only used them to take input vectors and return binary output. But, here in a video, <https://youtu.be/ajGgd9Ld-Wc?t=214>, <NAME>, a renowned AI expert, shows a deep net which takes thousands of samples of Trump's speeches and **generates output in the Chinese Language.** In short, how can deep nets/neural nets be used to **generate** output rather than giving an answer of **yes or no**? Additionally, how are these nets being trained? Can anyone here provide me a simple design for nets that are capable of doing that?<issue_comment>username_1: If the output can either be *yes* or *no*, then you have a discrete and binary output, so this problem is called [binary classification](https://en.wikipedia.org/wiki/Binary_classification), that is, it is the task of classifying (or categorizing) the input into one of two categories (or classes). You can also have a neural network with an output that can take more than two possible discrete values, which can be used to solve a [multi-class classification problem](https://stats.stackexchange.com/q/11859/82135). For example, a neural network that outputs a sentence, which is composed of $n$ words, where $n>1$. In general, the output does not necessarily need to take a value from a set of discrete values (or classes), but it can also take a numeric value (e.g. a floating-point number). In that case, the problem is called *regression*. For example, the task of predicting the height of a person (a numeric value) given a picture of the same. There are different types of neural networks. The simplest neural network is either a perceptron (if you consider it a neural network) or a multi-layer feed-forward neural network, that is, a neural network with only forward connections, with possibly multiple layers. 
There are also [convolutional neural networks (CNNs) and recurrent neural networks (RNNs)](https://ai.stackexchange.com/a/12290/2444), which are more sophisticated neural networks that are more suited to processing imagery or sequences, respectively. There are also [generative](https://en.wikipedia.org/wiki/Generative_model) neural networks (e.g. [variational auto-encoders](https://arxiv.org/abs/1312.6114)), which are trained to learn a [distribution](https://en.wikipedia.org/wiki/Probability_distribution), from which you can then [sample](https://en.wikipedia.org/wiki/Sampling_(statistics)). In your specific example, the sentence could have been generated with a recurrent neural network or a generative model (or a combination of both). More precisely, a recurrent generative model could have been trained to learn the rules of either the English or Chinese language. Then you sample from this distribution to generate sentences. In principle, a sentence could also be generated with simpler neural networks (such as a multi-layer perceptron), but, in practice, this may be more inefficient. Upvotes: 1 <issue_comment>username_2: Think of a neural network as a universal function approximator (with infinite width, under a set of constraints, this is actually provable). Now when discussing generation in the context you have provided, you essentially want to draw from some distribution $p(y|c)$ where $y$ is your output and $c$ is your context or input. **Theorem:** For any distribution $\Omega$, if we take $z \sim \mathcal{N}(0,I)$, there exists a function $f$ where $f(z) \sim \Omega$. Given the above theorem (for the purposes of this post I don't need to prove it, but it's very similar to the universal approximation theorem proof) and if we take neural networks as a pseudo-universal function approximator, if we have a valid objective or training procedure that can learn the parameters of $f$, sampling is as easy as sampling $\mathcal{N}(0,I)$ and then applying $f$. 
So the trick really is finding a good training procedure, and this is where you see GANs, VAEs and other models/schemes come into play. Everything I've said above works really well when there isn't autocorrelation like in text, but when there is, the above methodology would result in a combinatorially large output space, which isn't realistic with a vocabulary size usually spanning somewhere between a couple thousand and a couple hundred thousand. So to handle this, they model the joint probability by taking advantage of that autocorrelation, via its Bayesian decomposition: $$p(\vec w) = p(w\_0)\prod\_{i=1}^{N-1}p(w\_i \mid w\_{<i})$$ Now that there is a framework to efficiently model this type of output, we're back in the same position as before, where we are looking for clever training schemes. In this case you'll commonly see RNNs or other sequential models trained with teacher forcing (@username_1 described this in his answer too), or GAN-like compositions using either reinforcement learning to handle the lack of differentiability in sampling, or approximations like Gumbel-Softmax or Intermediate Loss Sampling (a method I actually developed). I hope this answered your question. Upvotes: 3 [selected_answer]
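The factorisation $p(\vec w) = p(w\_0)\prod\_{i=1}^{N-1}p(w\_i \mid w\_{<i})$ can be turned into a sampler directly: draw the first token, then repeatedly draw the next token conditioned on the prefix. A minimal sketch of this ancestral sampling, using a hand-written bigram table as a stand-in for a learned conditional (the vocabulary and probabilities are made up for illustration):

```python
import random

random.seed(0)

# p(w_i | w_{i-1}): a toy bigram table standing in for a learned p(w_i | w_{<i})
bigram = {
    "<s>": {"the": 1.0},
    "the": {"cat": 0.8, "sat": 0.2},
    "cat": {"sat": 0.7, "</s>": 0.3},
    "sat": {"</s>": 1.0},
}

def sample_sentence(max_len=10):
    """Ancestral sampling: draw w_0, then each w_i from p(w_i | prefix)."""
    words = ["<s>"]
    while words[-1] != "</s>" and len(words) < max_len:
        dist = bigram[words[-1]]
        r, acc = random.random(), 0.0
        for w, p in dist.items():      # inverse-CDF sampling over the row
            acc += p
            if r <= acc:
                words.append(w)
                break
        else:                          # guard against floating-point round-off
            words.append(w)
    return words
```

An RNN or Transformer replaces the table with a network that maps the whole prefix to a distribution over the next token, but the sampling loop is the same.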
2019/10/18
172
752
<issue_start>username_0: Is it possible and how trivial (or not) might it be (if possible) to retrain GPT-2 on time-series data instead of text?<issue_comment>username_1: Definitely! But at that point it would be training a transformer decoder (GPT-2's architecture) and not GPT-2, because GPT-2 is defined by the weights / training procedure / data it was trained on, not the architecture, and I don't think it would transfer properly to time series. Upvotes: 1 <issue_comment>username_2: In TS tasks, Transformers can capture long-range dependencies effectively through their self-attention mechanism, which can potentially lead to better forecasting performance compared to LSTMs. However, they are computationally more expensive than LSTMs. Upvotes: 0
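The self-attention mechanism mentioned above can be illustrated in a few lines. This is a deliberately stripped-down sketch (queries, keys, and values are all the raw inputs; no learned projections or multiple heads): every position attends to every other position in a single step, which is what lets the model pick up long-range dependencies.

```python
import math

def self_attention(seq):
    """One attention pass where queries, keys and values are the raw inputs."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    d = len(seq[0])
    out = []
    for q in seq:
        # similarity of this position to every position, scaled by sqrt(d)
        scores = [dot(q, k) / math.sqrt(d) for k in seq]
        m = max(scores)                          # stabilised softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # output is a convex combination of all positions' values
        out.append([sum(wt * v[i] for wt, v in zip(weights, seq))
                    for i in range(d)])
    return out
```

In a real Transformer, `seq` would first be multiplied by learned query/key/value matrices and several such heads would run in parallel; the sketch only shows why distance in the sequence costs nothing.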
2019/10/19
471
2,154
<issue_start>username_0: I have a neural network that should be able to classify documents to target label A. The problem is that the network is actually classifying label B, which is an easier task. To make the problem more clear: I need to classify documents from different sources. In the training data each source occurs repeatedly, but the network should be able to work on unknown sources. All documents from a single source have the same class. In this case, it is easier to identify sources than the target label so in practice the network is not really identifying the target label, but the source. The solution to this problem is making sure that the model is bad at identifying the sources in the training data, while still attaching the right target labels. I think the first step is to get two output layers, one for the target label and one for identifying which source it is from. My approach fails however at the training procedure: I want to minimize the loss on the target output, but maximize the loss on the non-target output. But if I maximize the loss on that non-target output, that does not mean that the network 'unlearns' the non-target labels. So the main question for the non-target output is: TLDR; How do I define a training procedure that minimizes the loss on a non-target output layer, and then maximizes that loss on all layers before it. My goal is to have a network that is good at classifying label A, but bad at a related label B. If anyone wants to give a code example, my preferred framework is PyTorch.
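A common way to implement the procedure asked for in the question above is the gradient-reversal trick from domain-adversarial training: the source head keeps minimising its own loss (so it remains a competent adversary), while the shared layers receive that head's gradient with its sign flipped, pushing the shared features to become source-uninformative. A dependency-free numeric sketch of the idea with a shared scalar feature and two logistic heads (the toy data, `lam`, and the learning rate are all illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# (input, target label A, source label B); B is independent of A here
data = [(1.0, 1, 0), (1.2, 1, 1), (-1.0, 0, 0), (-1.2, 0, 1)]

w, a, b = 0.1, 0.1, 0.1   # shared feature weight, target head, source head
lam, lr = 0.5, 0.1        # reversal strength, learning rate

for _ in range(200):
    for x, yt, ys in data:
        h = w * x                  # shared feature
        gt = sigmoid(a * h) - yt   # BCE gradient w.r.t. target pre-activation
        gs = sigmoid(b * h) - ys   # BCE gradient w.r.t. source pre-activation
        a -= lr * gt * h           # target head minimises its loss
        b -= lr * gs * h           # source head ALSO minimises its own loss
        # shared weight: descend on the target loss but ASCEND on the
        # source loss (the sign flip IS the gradient reversal)
        w -= lr * (gt * a - lam * gs * b) * x
```

In PyTorch the same effect is usually achieved with a custom `torch.autograd.Function` whose backward returns the negated (and scaled) gradient, inserted between the shared encoder and the source head.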
2019/10/19
1,188
5,106
<issue_start>username_0: Is there a way to understand, for instance, a multi-layered perceptron without hand-waving about them being similar to brains, etc? For example, it is obvious that what a perceptron does is approximating a function; there might be many other ways, given a labelled dataset, to find the separation of the input area into smaller areas that correspond to the labels; however, these ways would probably be rather computationally inefficient, which is why they cannot be practically used. However, it seems that the iterative approach of finding such areas of separation may give a huge speed-up in many cases; then, natural questions arise why this speed-up may be possible, how it happens and in which cases. One can be sure that this question has been investigated. If anyone could shed any light on the history of this question, I would be very grateful. So, why are neural networks useful and what do they do? I mean, from the practical and mathematical standpoint, without relying on the concept of "brain" or "neurons" which can explain nothing at all.<issue_comment>username_1: **tl;dr** I always like to think of Neural Networks as a generalization of **logistic regression**. I too don't like that, traditionally, when introducing Neural Networks, books start with biological neurons and synapses, etc. I think it's more beneficial to start from statistics and linear regression, then logistic regression and then neural networks. A perceptron is essentially a simple binary logistic regressor (if you threshold the output). If you have many perceptrons that share the same input (i.e. a layer in a neural network), you can think of it as a multi-class logistic regressor. Now, by stacking one such layer after another, you create a Multi-Layer Perceptron (MLP), which is a Neural Network with two layers. This is equivalent to two multi-class logistic regressors stacked one after the other. One notable thing that changes is the training technique here, i.e. 
backpropagation (because you don't have direct access to the targets from the hidden layer). Another thing that can change is the activation function (it's not always sigmoid in Neural Networks). Introduce sparse connectivity and weight sharing and you get a Convolutional Neural Network. Add a connection from a layer to itself (for the next timestep) and you get a Recurrent Neural Network. Likewise, you can reproduce any Neural Network through this reasoning. I know this is an over-simplified way of presenting them, but I think you get the point. Upvotes: 4 <issue_comment>username_2: One way to view a neural network is as a series of linear transformations. You take a bunch of data points and look at them from a different perspective, from a different space. You apply some non-linear function on the data points, like ReLU, sigmoid, etc. Now you repeat the same process of looking from a different space. Our goal is to look at it from a point where things start looking right for our tasks. These linear transformations are what the network has to optimise. Upvotes: 2 <issue_comment>username_3: A good way of looking at it would be understanding neural networks mathematically, i.e. purely on the basis of the fact that you're just trying to fit a function and solve an optimisation problem (*apart from looking at it as multiple units of logistic regression*). Say we want to approximate a function $y = f\_w(x)$ with $x \in D$, where $D$ is our domain-space. We want this function to map to $C$, our co-domain, with all the values the function ends up taking being the set $y \in R$, our range. Essentially we frame $f(x)$ as a sequence of operations (*which operations to use where comes from common practice, intuition, and insight mostly gained from experience*) assuming that when the right parameters are used for these operations we will arrive at a very reasonable approximation of the function. 
We initialise the parameters with whatever values we want (*usually random*), calling this parameter-space $W$. The essential idea would be to frame another function $L(f\_w(x), \hat{y})$ called the loss function which we want to minimise. This acts as a test of how good our function is - since our function parameters were initially random, the error between the function approximations and the actual range values for known points (training set) are estimated. These estimated error values and their gradients are then used by back-propagation, where $w\_{init}\in W$ is updated to another $w\_{1}\in W$, where $w\_1$ is calculated by moving on $L$ in the direction of decreasing gradient, in hopes of reaching the loss function's minima. Simplifying, essentially all you want to do is find a $y=f\_w(x)$ where parameters $w$ are to be chosen such that $L(f\_w(x), \hat{y})$ is minimised for the training set. Even though this is a very rough idea of neural networks, such a direction in thinking can especially be useful when studying generative networks and other problems where the problem has to be formulated mathematically before being able to approach it. Upvotes: 0
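The "stacked logistic regressors" view in the first answer can be made concrete in a few lines: a layer is just several logistic regressors sharing the same inputs, and an MLP is two such layers composed. A dependency-free sketch with random, untrained weights (purely illustrative; no learning step):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logistic_layer(inputs, weights, biases):
    # each output unit is one logistic regressor over the same inputs
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)
# 2 inputs -> 3 hidden units -> 1 output; weights are random and untrained
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.0, 0.0, 0.0]
W2 = [[random.uniform(-1, 1) for _ in range(3)]]
b2 = [0.0]

hidden = logistic_layer([0.5, -1.0], W1, b1)   # first "logistic layer"
output = logistic_layer(hidden, W2, b2)        # second one, stacked on top
```

Training would then adjust `W1`, `b1`, `W2`, `b2` by gradient descent on a loss, with backpropagation supplying the hidden layer's gradients, exactly as the answers above describe.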
2019/10/20
4,342
16,409
<issue_start>username_0: To produce tangible results in the field of AI/ML, one must view theoretical results through the lens of computational complexity. Indeed, minimax effectively solves any two-person "board game" with win/loss conditions, but the algorithm quickly becomes untenable for games of large enough size, so it's practically useless aside from toy problems. In fact, this issue seems to cut at the heart of intelligence itself: the [*Frame Problem*](https://plato.stanford.edu/entries/frame-problem/) highlights this by observing that any "intelligent" agent that operates under logical axioms must somehow deal with the explosive growth of computational complexity. So, we need to deal with *computational complexity*: but that doesn't mean researchers must limit themselves to practical concerns. In the past, multilayered perceptrons were thought to be intractable (I think), and thus we couldn't evaluate their utility until recently. I've heard that Bayesian techniques are conceptually elegant, but they become computationally intractable once your dataset becomes large, and thus we usually use variational methods to compute the posterior, instead of naively using the exact solution. I'm looking for more examples like this: What are examples of promising (or neat/interesting) AI/ML techniques that are computationally intractable (or uncomputable)?<issue_comment>username_1: *Exact* Bayesian inference is (often) [*intractable*](https://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability) (i.e. there is no closed-form solution, or numerical approximations are also computationally expensive) because it involves the computation of an integral over a range of real (or even floating-point) numbers, which can be intractable. 
More precisely, for example, if you want to find the parameters $\mathbf{\theta} \in \Theta$ of a model given some data $D$, then [Bayesian inference is just the application of the Bayes' theorem](https://en.wikipedia.org/wiki/Bayesian_inference) \begin{align} p(\mathbf{\theta} \mid D) &= \frac{p(D \mid \mathbf{\theta}) p(\mathbf{\theta})}{p(D)} \\ &= \frac{p(D \mid \mathbf{\theta}) p(\mathbf{\theta})}{\int\_{\Theta} p(D \mid \mathbf{\theta}^\prime) p(\mathbf{\theta}^\prime) d \mathbf{\theta}^\prime} \\ &= \frac{p(D \mid \mathbf{\theta}) p(\mathbf{\theta})}{\int\_{\Theta} p(D, \mathbf{\theta}^\prime) d \mathbf{\theta}^\prime } \tag{1}\label{1} \end{align} where $p(\mathbf{\theta} \mid D)$ is the posterior (which is what you want to find or compute), $p(D \mid \mathbf{\theta})$ is the likelihood of your data given the (fixed) parameters $\mathbf{\theta}$, $p(\mathbf{\theta})$ is the prior and $p(D) = \int\_{\Theta} p(D \mid \mathbf{\theta}^\prime) p(\mathbf{\theta}^\prime) d \mathbf{\theta}^\prime$ is the evidence of the data (which is an integral given that $\mathbf{\theta}$ is assumed to be a continuous random variable), which is intractable because the integral is over all possible values of $\mathbf{\theta}$, that is, ${\Theta}$. If all terms in \ref{1} were tractable (polynomially computable), then, given more data $D$, you could iteratively keep on updating your posterior (which becomes your prior on the next iteration), and exact Bayesian inference would become tractable. The [variational Bayesian approach](https://en.wikipedia.org/wiki/Variational_Bayesian_methods) casts the problem of inferring $p(\mathbf{\theta} \mid D)$ (which requires the computation of the intractable *evidence* term) as an optimization problem, which *approximately* finds the posterior, more precisely, it approximates the intractable posterior, $p(\mathbf{\theta} \mid D)$, with a tractable one, $q(\mathbf{\theta} \mid D)$ (the *variational distribution*). 
For example, the important [variational auto-encoder (VAE)](https://arxiv.org/abs/1312.6114) paper (which did not introduce the variational Bayesian approach) uses the variational Bayesian approach to approximate a posterior in the context of neural networks (that represent distributions), so that existing machine (or deep) learning techniques (that is, gradient descent with back-propagation) can be used to learn the parameters of a model. The variational Bayesian approach (VBA) is becoming ever more appealing in machine learning. For example, [Bayesian neural networks](https://arxiv.org/abs/1505.05424) (which can partially solve some of the inherent problems of non-Bayesian neural networks) are usually inspired by the results reported in [the VAE paper](https://arxiv.org/abs/1312.6114), which shows the feasibility of the VBA in the context of deep learning. Upvotes: 4 <issue_comment>username_1: [AIXI](https://ai.stackexchange.com/a/10377/2444) is a Bayesian, non-Markov, reinforcement learning and artificial general intelligence agent that is [incomputable](https://en.wikipedia.org/wiki/Computable_function#Uncomputable_functions_and_unsolvable_problems), given the involved [incomputable Kolmogorov complexity](https://en.wikipedia.org/wiki/Kolmogorov_complexity#Uncomputability_of_Kolmogorov_complexity). However, there are approximations of AIXI, such as AIXItl, described in [Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability](http://hutter1.net/ai/uaibook.htm) (2005), by <NAME> (the original author of AIXI), and [MC-AIXI-CTW](https://arxiv.org/abs/0909.0801) (which stands for Monte Carlo AIXI Context-Tree Weighting). Here is a Python implementation of MC-AIXI-CTW: <https://github.com/gkassel/pyaixi>. 
Upvotes: 4 <issue_comment>username_1: In general, [partially-observable Markov decision processes (POMDPs) are also computationally intractable to solve exactly](https://arxiv.org/pdf/1301.2308.pdf). However, there are [several approximation methods](https://en.wikipedia.org/wiki/Partially_observable_Markov_decision_process#Approximate_POMDP_solutions). See, for example, [Value-Function Approximations for Partially Observable Markov Decision Processes](https://arxiv.org/abs/1106.0234) (2000) by <NAME>. Upvotes: 2 <issue_comment>username_2: The [logical induction algorithm](https://intelligence.org/2016/09/12/new-paper-logical-induction) can make predictions about whether mathematical statements are true or false, which are eventually consistent; e.g. if `A` is true, its probability will eventually reach 1; if `B` implies `C` then `C`'s probability will eventually reach or exceed `B`'s; the probability of `D` will eventually be the inverse of `not(D)`; the probabilities of `E` and `F` will eventually reach or exceed that of `E AND F`; etc. It can also give consistent predictions about itself, e.g. "the logical induction algorithm will predict the probability of X to be Y at timestep T", whilst avoiding paradoxes like the liar's paradox. Upvotes: 3 <issue_comment>username_3: This question gets at a really interesting fact about AI research in general: **AI is hard**. In fact, almost every AI problem is computationally hard (typically NP-Hard, or #P-Hard). This means that most new areas of AI research start out by characterizing some problem that is intractable, and proposing an algorithm that technically works, but is too slow to be useful. However, that's **not the whole story**. Usually AI researchers then proceed to develop *tractable* techniques according to one of two schools: * Algorithms that usually work in practice, and are always fast, but are not completely correct. 
* Algorithms that are always correct, and are usually fast, but are sometimes very slow, or only work on specific kinds of sub-problem. Taken together, these let AI address most problems. For example: * [Search](https://en.wikipedia.org/wiki/Graph_traversal) was developed as a general purpose AI technique for solving [planning](https://en.wikipedia.org/wiki/Automated_planning_and_scheduling) and [logic](https://en.wikipedia.org/wiki/Automated_theorem_proving) problems. The first algorithm, called the [general problem solver](https://en.wikipedia.org/wiki/General_Problem_Solver), always worked, but was extremely slow. Eventually, we developed heuristic guided search techniques like [A\*](https://en.wikipedia.org/wiki/A*_search_algorithm), domain specific tricks like [GraphPlan](https://en.wikipedia.org/wiki/Graphplan), and stochastic search techniques like [Monte-Carlo Tree Search](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search). * Bayesian Learning (or [Bayesian Inference](https://en.wikipedia.org/wiki/Bayesian_inference)) has been known since the 1800's, but it is known to involve either the computation of intractable integrals, or the creation of exponentially sized discrete tables, making it [NP-Hard](https://en.wikipedia.org/wiki/NP-hardness). A very simple algorithm involves applying brute force and enumerating all of the options, but this is too slow. Eventually, we developed techniques like [Gibbs Sampling](https://en.wikipedia.org/wiki/Gibbs_sampling) (that is always fast, and usually right), or [Variable Elimination](https://en.wikipedia.org/wiki/Variable_elimination) (that is always right, and usually fast). Today we can solve most problems of this kind very well. * Reasoning about language was thought to be very hard (see the [Frame Problem](https://en.wikipedia.org/wiki/Frame_problem)), because there are an infinite number of possible sentences, and an infinite number of possible contexts they could be used in. 
Exact approaches based on rules did not work. Eventually we developed probabilistic approaches like [Hidden Markov Models](https://en.wikipedia.org/wiki/Hidden_Markov_model) and [Deep Neural Networks](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.248.4448&rep=rep1&type=pdf), that aren't certain to work, but work so well in practice that language problems are, if not completely solved, getting [very close](https://translate.google.com/). * Games of chance, like Poker, were thought to be impossible, because they are [#P-Hard](https://en.wikipedia.org/wiki/%E2%99%AFP-complete) to solve exactly (this is *harder* than NP-Hard). There will probably never be an exact algorithm for these. In spite of this, techniques like [CFR+](https://poker.cs.ualberta.ca/publications/2015-ijcai-cfrplus.pdf) can derive solutions that are so close to exactly perfect that you would need to play for decades against them to tell the difference. So, what's **still** hard? * [Inferring](https://www.hindawi.com/journals/complexity/2018/1591878/) the *structure* of a Bayesian network. This is closely related to the problem of *causality*. It's #P-Hard, but we don't currently have any good algorithms to even do this approximately very well. This is an active area of research. * Picking a machine learning algorithm to use for an arbitrary problem. The [No Free Lunch theorem](https://ai.stackexchange.com/questions/15650/what-are-the-implications-of-the-no-free-lunch-theorem-for-machine-learning/15651#15651) tells us this is not possible in general, but it seems like we ought to be able to do it pretty well in practice. * More to come...? Upvotes: 4 <issue_comment>username_2: Hutter's ["fastest and shortest algorithm for all well-defined problems"](https://www.semanticscholar.org/paper/The-Fastest-and-Shortest-Algorithm-for-All-Problems-Hutter/acf58b7dd2b902fa519db8f7cf8af2e3294d56ee) is the ultimate [just-in-time compiler](https://en.wikipedia.org/wiki/Just_in_time_compilation). 
It runs a given program and, in parallel, searches for proofs that some other program is equivalent but faster. The running program is restarted at exponentially-spaced intervals; if a faster program has been found, that is started instead. The running time of this algorithm is of the same order as the fastest provably-equivalent algorithm, plus a constant $O(1)$ term (the time taken to find the proof, which doesn't depend on the input size). For example, it will run [Bubble Sort](https://en.wikipedia.org/wiki/Bubble_sort) in at most $O(n \log n)$ time, by finding a proof that it's equivalent to such a fast algorithm (like [Merge Sort](https://en.wikipedia.org/wiki/Merge_sort)) then switching to that algorithm. Hutter's algorithm is similar to the best [ahead-of-time compilers](https://en.wikipedia.org/wiki/Compiler), known as [super-optimisers](https://en.wikipedia.org/wiki/Superoptimization). They search through all possible programs, starting with the smallest/fastest, until they find one equivalent to the given code. These are actually in use right now, but are only practical for programs that are a few (machine code) instructions long. The LLVM compiler contains some "peephole optimisations" (i.e. find/replace templates) that were found by a super-optimiser a few years ago. Note that super-optimisation should not be confused with [super-compilation](https://stackoverflow.com/questions/9067545/what-is-supercompilation) (a rather general optimisation, which is not optimal and involves no search). Upvotes: 2 <issue_comment>username_2: [Levin's search algorithm](http://www.scholarpedia.org/article/Universal_search) is a general method of function inversion. Many AI tasks are of this sort, e.g. given a cost or reward function (`object -> cost` or `object -> reward`), its inverse (`cost -> object` or `reward -> object`) would find an object with the given cost/reward; we could ask this inverse function for an object with low cost or high reward. 
Levin's algorithm is optimal iff the given function is a "black box" with no known pattern in its output. For example, if a small change in the input produces a small change in the output, Levin search wouldn't be optimal; instead we could use hill climbing or some other gradient method. Levin's algorithm looks for the function's inverse by running all possible programs in parallel, assigning exponentially more time to shorter programs. Whenever a program halts, we check whether its output is the desired inverse (i.e. whether `givenProgram(outputOfHaltedProgram) = desiredOutput`, e.g. whether `cost(outputOfHaltedProgram) = low`). This way "simpler" guesses at the inverse are made first; where we define the simplicity (AKA "Levin complexity") of a value by looking through all programs $p$ which generate that value, and minimising the sum of: $p$'s length (in bits) plus the logarithm of $p$'s running time (in steps). If we ignored running time we would get [Kolmogorov complexity](https://en.wikipedia.org/wiki/Kolmogorov_complexity), which is theoretically nicer but is incomputable (we don't know when to give up waiting for short non-halting programs, due to the [Halting Problem](https://en.wikipedia.org/wiki/Halting_problem)). Levin complexity is computable, since we can give up waiting for those loops once they've taken exponentially-many steps as a longer solution (e.g. once we've spent $T$ steps waiting for a possible loop of length $N$, we can start trying programs that are $N+1$ bits long for $T/2$ steps). The running time of Levin Search is of the same order as the simplest such inverse-value-generating program. However, this is misleading, since the fraction of steps allocated to running any particular program $p$ is $1/2^{complexity(p)}$, so this constant factor will be slowing down the computation of the inverse too. There is also overhead associated with context-switching between all of these programs. 
The [FAST algorithm](http://people.idsia.ch/~juergen/toesv2/node28.html) does the same job as Levin Search, in the same time, but avoids the overhead of context-switching between an infinite number of parallel programs. Instead it runs one program at a time, cuts it off if it hasn't halted within an appropriate number of steps, then retries for twice as many steps later on. The [GUESS algorithm](http://people.idsia.ch/~juergen/toesv2/node31.html) is also equivalent, but chooses programs at random; the expected runtime is the same, but there's no need to keep track of loop counters like in FAST, plus it can be run on parallel hardware without having to coordinate anything (whilst still avoiding the infinite parallelism of the original). Levin search is currently impractical in its original setting of searching through general-purpose, Turing-complete programs. It can be useful in less general domains, e.g. searching through the space of hyper-parameters or other domain-specific, configuration-like "programs". Upvotes: 2
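Stripped of the runtime-weighting and dovetailing details, the "simplest guesses first" core of Levin search can be sketched as a brute-force inverter that tries candidate inputs in order of description length. This is only a toy illustration of the schedule (the real algorithm enumerates *programs* that generate candidates, not the candidates themselves):

```python
from itertools import product

def levin_style_invert(f, target, max_len=12):
    # enumerate candidate bitstrings shortest-first, mimicking Levin
    # search's preference for short descriptions
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            x = "".join(bits)
            if f(x) == target:   # treat f as a black box and test each guess
                return x
    return None                  # gave up: no short-enough inverse found

# toy "black box" cost function: parity of a bitstring
parity = lambda s: sum(map(int, s)) % 2
```

For instance, `levin_style_invert(parity, 1)` returns `"1"`, the shortest bitstring with odd parity; FAST/GUESS refine this scheme by also bounding how long each candidate-generating program may run.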
2019/10/21
1,727
6,983
<issue_start>username_0: As part of a research project for college, I would like to understand what many of you consider to be the risks associated with regulating Artificial Intelligence, such as whether regulation is too risky in regards to limiting progress, or too risky in regards to uninformed regulation.<issue_comment>username_1: > > Risks of regulation? > > > As you mention in your survey, it is generally understood that the primary concern with regulating AI research is that other parties risk falling behind. > > Should we regulate it? Can it be done? > > > You can't really "regulate" technological development in the same way you can regulate some other things in general. Aside from the fact that there is no global governance that can implement this regulation on nations, you can't really regulate someone's research any more than you can control how people think: you just need a pen / paper / computer to do any research in math/AI. The NSA tried to regulate encryption citing national security reasons during a saga known as the [Crypto Wars](https://en.wikipedia.org/wiki/Crypto_Wars). They failed. > > What is AI anyways? How will we get there? What will it be like? > > > Honestly, from the phrasing of your questions in your survey, I get the impression that you don't really understand the hypothetical existential risk due to AI. Personally I don't really buy into that thesis, but in any case, if such a super-intelligent agent emerges, the problem isn't so much "oh no my city is destroyed" or "oh no so many people are killed", but more so "all of humanity is enslaved without being aware" or "everything is dead". We think this might happen because we assume AI is all-powerful and we project our own negative qualities onto this unknown agent with unknown power. It's mostly fear really. This is all speculation, and by definition you cannot predict the behavior of an agent smarter than you, so literally every single comment on this topic is pure unfounded speculation. 
The only thing that is true is that we don't know. There is another aspect of AI which is dangerous, which concerns how humans use it: i.e. facial recognition, automated weapon systems, automated hacking. These are more pressing issues. > > What should we do? We are forced to research AI because no party can afford to fall behind, but at the same time we are pushing ourselves towards a dangerous future: it's a catch-22.... > > > Consensus and current practice suggest that researchers publicize their results. Compared to other areas of academia, whose research is often locked behind paywalls, ML/AI research is quite publicly accessible. Of course, this doesn't prevent the possibility of a rogue agent.... Upvotes: 0 <issue_comment>username_2: I think there is a very strong argument for regulating AI. Chiefly, unintentional (or intentional) bias in statistically driven algorithms, and the idea that responsibility can be offloaded to processes that cannot be meaningfully punished where they transgress. Additionally, the history of technology, especially since the industrial revolution, strongly validates neo-luddism in the sense that the problems arising from implementation of new technology are not always predictable. In this sense, there are both ethical reasons to consider regulation, and minimax reasons (here in the sense of erring on the side of caution to minimize the maximum potential downside.) * Risk of falling behind A risk is that not all participants will hew to the regulations, giving those who don't a significant advantage. That, in and of itself, is not a reason to forgo sensible regulation, in that penalties at least serve as a potential deterrent. * Opportunity cost Not a risk, but a driver. The idea of "leaving money on the table" in that not implementing a given technology forgoes greater utility, sacrificing potential benefit. 
This is not invalid, but shouldn't ignore hidden costs. For instance, the wide-scale deployment of even primitive bots has had a profound social impact. Upvotes: 0 <issue_comment>username_3: My thoughts =========== AI is already indirectly regulated. This is important to acknowledge, and this acknowledgement is missing, in my opinion, in the discourse about law and AI. I'm assuming that your question is about law that directly aims at AI technologies, and this exemplifies one of the risks of regulating AI: that the law will focus on the technology rather than outcomes. Another concern is that law that is inadequate or quickly outdated creates a false sense of security, and this could create a situation which is even more dangerous than if the laws were not there. Law and innovation ------------------ When it comes to the view that law stifles innovation, it is paramount to acknowledge that some regulation can have a very positive effect. There is no general rule that there is an inverse relation between law and innovation. Pacing problem and Collingridge dilemma ======================================= The following is basically what Wendell Wallach says in an episode of Future of Life Institute's [AI Alignment Podcast](https://futureoflife.org/ai-alignment-podcast/) entitled [*Machine Ethics and AI Governance with Wendell Wallach*](https://futureoflife.org/2019/11/15/machine-ethics-and-ai-governance-with-wendell-wallach/). > > The pacing problem refers to the fact that scientific discovery and technological innovation are far outpacing our ability to put in place appropriate ethical and legal oversight. 
> > > Wendell Wallach continues to say that the pacing problem converges with what is now called the Collingridge Dilemma, a problem that has 'bedevilled' people in technology and governance since 1980, and he defines it the following way: > > While it was easiest to regulate a technology early in its development, early in its development we have little idea of what its societal impact would be. By the time we did understand the challenges, the societal impact of the technology would be so deeply entrenched in our society that it would be very difficult to change its trajectory. > > > See also: * [Collingridge dilemma](https://en.wikipedia.org/wiki/Collingridge_dilemma) on Wikipedia; and * [The social control of technology](https://openlibrary.org/books/OL14443859M/The_social_control_of_technology) by <NAME>, published 1980 by <NAME>. Upvotes: 0 <issue_comment>username_4: I don't think regulating something necessarily causes that regulation to de facto become a "risk". Regulation - including overregulation - may, in fact, aid in the dialogue between practitioners, which may end up educating the regulators, the public and the practitioners themselves. My answers to your survey would most likely be "it depends...", or "no risk", which isn't to say it's not an impediment, but just not a "risk", per se. Upvotes: 1
2019/10/21
1,054
3,165
<issue_start>username_0: [![enter image description here](https://i.stack.imgur.com/bMmxd.png)](https://i.stack.imgur.com/bMmxd.png) I don't really understand what this equation is saying or what the purpose of the ELBO is. How does it help us find the true posterior distribution?<issue_comment>username_1: From [this document](http://users.umiacs.umd.edu/~xyang35/files/understanding-variational-lower.pdf), as you found here, $X$ is an observed variable and $Z$ is a hidden variable; $p(X)$ is the density function of $X$. The posterior distribution of the hidden variables can then be written as follows using Bayes' Theorem: $$p(Z|X) = \frac{p(X|Z)p(Z)}{p(X)} = \frac{p(X|Z)p(Z)}{\int\_Z p(X,Z)\,dZ}$$ Now, based on what you posted, if we denote $L = \mathbb{E}\_q [\log p(X, Z)] + H[Z]$ (where $q(Z)$ is a distribution we use to approximate the true posterior distribution $p(Z|X)$ in VB, and $H[Z] = -\mathbb{E}\_q [\log q(Z)]$), then $L$ is a lower bound of the log probability of the observations: since $\log p(X) = L + KL(q(Z)\parallel p(Z|X))$ and the KL divergence is non-negative, $L \le \log p(X)$. As a result, if in some cases we want to maximize the marginal probability (the log probability of the observations), we can instead maximize its variational lower bound $L$. As a real example, you can follow the "Multiple Object Recognition with Visual Attention" example in the referenced document. Moreover, the term $L$ can be expressed via the KL divergence, which is used to measure the similarity of two distributions. Be aware that there is progress on the bound in [this paper](http://proceedings.mlr.press/v80/alemi18a/alemi18a.pdf) (Fixing a Broken ELBO). Upvotes: 2 <issue_comment>username_2: The use of the KL divergence provides a more intuitive view of what the ELBO is attempting to maximize. 
Basically, we want to find a posterior approximation such that $p(z\mid x) \approx q(z)\in\mathcal{Q}$: $$KL(q(z)\parallel p(z\mid x)) \rightarrow \min\_{q(z)\in\mathcal{Q}}$$ As a result, while finding this optimal posterior approximation, we maximize the probability of all the observed data $x$. Note that the evidence is usually intractable. Thus, we can express the $KL$ as follows: \begin{align\*} \log p(x) &= \int q(z) \log p(x)dz \\ &= \int q(z) \log\frac{p(x,z)}{p(z\mid x)}dz \\ &= \int q(z) \log\frac{p(x,z)q(z)}{p(z\mid x)q(z)}dz\\ &= \int q(z) \log\frac{p(x,z)}{q(z)}dz + \int q(z) \log\frac{q(z)}{p(z\mid x)}dz \\ &= \mathcal{L}(q(z)) + KL(q(z)\parallel p(z\mid x)) \end{align\*} In this case, the KL just gives us the difference between $q$ and $p$. We want to make this difference close to zero, meaning that $q=p$. So, minimizing the KL is the same as maximizing the ELBO, and as a result, we obtain the lower bound in your expression. If you expand your bound, you can find a nice interpretation: $$ \begin{align\*} \mathcal{L}(q(z)) &= \int q(z) \log\frac{p(x,z)}{q(z)}dz \\ &= \mathbb{E}\_{q(z)} \log p(x\mid z) - KL(q(z)\parallel p(z)) \end{align\*} $$ When we optimize this expression, we want to find a $q$ that fits our data properly and also is really close to the true posterior. Thus, $\mathbb{E}\_{q(z)} \log p(x\mid z)$ acts as a data term and $KL(q(z)\parallel p(z))$ as a regularizer. Upvotes: 2
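The decomposition $\log p(x) = \mathcal{L}(q) + KL(q \parallel p(z\mid x))$ can be checked numerically on a toy conjugate model. The model below ($z \sim \mathcal{N}(0,1)$, $x\mid z \sim \mathcal{N}(z,1)$, so $p(z\mid x) = \mathcal{N}(x/2, 1/2)$ and $p(x) = \mathcal{N}(0,2)$) is an illustrative assumption, not something taken from the question:

```python
import math

def gaussian_kl(m, s2, mu, sigma2):
    # Closed form for KL( N(m, s2) || N(mu, sigma2) ).
    return 0.5 * (math.log(sigma2 / s2) + (s2 + (m - mu) ** 2) / sigma2 - 1.0)

def elbo(x, m, s2):
    # ELBO for q(z) = N(m, s2) under the toy model z ~ N(0,1), x|z ~ N(z,1):
    # E_q[log p(x|z)] + E_q[log p(z)] + H[q], all in closed form.
    e_log_lik = -0.5 * math.log(2 * math.pi) - 0.5 * ((x - m) ** 2 + s2)
    e_log_prior = -0.5 * math.log(2 * math.pi) - 0.5 * (m ** 2 + s2)
    entropy_q = 0.5 * math.log(2 * math.pi * s2) + 0.5
    return e_log_lik + e_log_prior + entropy_q

x = 1.7
# log p(x) = log N(x; 0, 2), computable here because the model is conjugate
log_evidence = -0.5 * math.log(2 * math.pi * 2.0) - x ** 2 / 4.0
```

For any choice of `m, s2`, `elbo(x, m, s2) + gaussian_kl(m, s2, x/2, 0.5)` equals `log_evidence`, and the gap closes exactly when $q$ is the true posterior $\mathcal{N}(x/2, 1/2)$.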
2019/10/21
714
3,116
<issue_start>username_0: I am working on a project that takes signals from the brain, preprocesses them, and then makes the machine learn what the human is thinking about. I am stuck on preprocessing the signal (incoming from the EEG). I am having a problem when I attempt to remove noise. I used SVM but to no avail. I need some other suggestions from experts who have worked on a project similar to this. What can I do to preprocess the signal?<issue_comment>username_1: This might be more of a signal-processing question, rather than an artificial intelligence question, but I will try my best to be of help. Do you know what the noise you are trying to remove is? How it behaves/where it stems from? Or, do you know how your output signal should look, post processing? If you know these things and you are familiar with MATLAB or any other matrix multiplication software, they come with great prebuilt toolboxes for traditional approaches to remove noise from signals. If you are not exactly sure what patterns you are looking for, I suggest perhaps looking into Autoencoders to discover the hidden patterns. Though it is important to note that the origin of the noise may greatly affect their abilities. If you plan on using such a technique, it is important that you have a sufficiently large dataset of the signals available. Without the clarifications to these questions, along with @nbro's questions, it is hard to be more specific. Upvotes: 1 <issue_comment>username_2: By SVM, do you possibly mean singular value decomposition (SVD, a known noise-reduction technique)? If so, the next method I would try is the wavelet transform for noise reduction. If neither of these techniques works on its own, it is not uncommon to use them together, as is done [here](https://www.sciencedirect.com/science/article/pii/S1877050915014234). 
Upvotes: 0 <issue_comment>username_3: If the noise is confined to a particular spectral band, a Fourier transform, followed by filtering, followed by an inverse Fourier transform will work. If it is multiplicative noise, filtering the Fourier transform of the logarithm of the signal might work. Really, the nature of the noise determines what's possible and the best way to remove it. Upvotes: 1 <issue_comment>username_4: There is a commonly used method, also used in machine learning: Independent Component Analysis (ICA). This is commonly used to find specific noises in the data; however, you need to have some EEG knowledge to do this, because automatic rejection is not completely solved at this time. Software like EEGLab is available (as a standalone and as a Matlab toolbox). Doing this in real time is also not impossible, once you have collected initial data for a while and don't have too many channels. You can isolate relatively constant noises with ICA, like the heartbeat; other temporal noises can be rejected globally (on all channels), because EEG normally does not exceed certain levels. Useful documentation is EEGLab's artifact-rejection wiki page: <https://sccn.ucsd.edu/wiki/Chapter_01:_Rejecting_Artifacts> Upvotes: 0
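username_3's Fourier approach (forward transform, filter, inverse transform) can be sketched in a few lines of NumPy. The sampling rate, band edges, and synthetic signal below are all illustrative assumptions, not EEG standards:

```python
import numpy as np

def remove_band(signal, fs, band=(45.0, 55.0)):
    # Forward FFT -> zero the bins inside the unwanted band
    # (e.g. 50 Hz mains hum) -> inverse FFT back to the time domain.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs >= band[0]) & (freqs <= band[1])] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 250.0                                        # a common EEG sampling rate
t = np.arange(0, 4.0, 1.0 / fs)
brain = np.sin(2 * np.pi * 10 * t)                # a 10 Hz "alpha" rhythm
noisy = brain + 0.8 * np.sin(2 * np.pi * 50 * t)  # plus 50 Hz mains noise
cleaned = remove_band(noisy, fs)
```

This only works cleanly when the noise is confined to a known band; broadband or non-stationary artifacts need ICA or wavelet methods, as discussed above.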
2019/10/21
420
1,658
<issue_start>username_0: I have read this post: [How to choose an activation function?](https://ai.stackexchange.com/q/7088/2444). There is enough literature about activation functions, but when should I use a linear activation instead of ReLU? What does the author mean with *ReLU when I'm dealing with positive values, and a linear function when I'm dealing with general values.*? Is there a more detailed answer to this?<issue_comment>username_1: The activation function you choose depends on the application you are building/the data that you have to work with. It is hard to recommend one over the other without taking this into account. Here is a short summary of the advantages and disadvantages of some common activation functions: <https://missinglink.ai/guides/neural-network-concepts/7-types-neural-network-activation-functions-right/> > > What does the author mean with ReLU when I'm dealing with positive values, and a linear function when I'm dealing with general values. > > > ReLU is good for inputs > 0, since ReLU = 0 if input < 0 (which can kill the neuron, since its gradient is then 0). To remedy this, you could look into using a Leaky ReLU instead (which avoids killing the neuron by returning a small non-zero value when input <= 0). Upvotes: 3 [selected_answer]<issue_comment>username_2: Nothing is written in stone here, but as a rule of thumb, linear activations are not very common. A linear activation function in the hidden layers causes consecutive layers to collapse into a single linear transformation. A linear activation can be used in the last layer when the outputs are not meant to be bounded or scaled. (This is the most common use I have seen.) Upvotes: 1
2019/10/22
543
2,438
<issue_start>username_0: I'm using a neural network to solve a multi-regression problem because I'm trying to predict continuous values. To be more specific, I'm making a tracking algorithm to track the position of an object; I'm trying to predict two values, the latitude and longitude of an object. Now, to calculate the loss of the model, there are some common functions, like mean squared error or mean absolute error, etc., but I'm wondering if I can use some custom function, like [this](https://en.wikipedia.org/wiki/Haversine_formula), to calculate the distance between the two longitude and latitude values, and then the loss would be the difference between the real distance (calculated from the real longitude and latitude) and the predicted distance (calculated from the predicted longitude and latitude). These are some thoughts from me, so I'm wondering if such an idea would make sense? Would this work in my case better than using the mean squared error as a loss function? I had another question in mind. In my case, I'm predicting two values (longitude and latitude), but is there a way to transform these two target values to only one value so that my neural network can learn better and faster? If yes, which method should I use? Should I calculate the summation of the two and make that a new target? Does this make sense?<issue_comment>username_1: Using two values with MSE is probably the better approach. If you combine the two values into one, as in the case of summation, the network may fit by outputting 0 on one axis and the full value on the other. The method you propose has the same issue: many (latitude, longitude) combinations give the same distance, but only one is correct. Collapsing the targets to a single value will not help the network learn faster. Instead, accuracy is often increased if the predicted value is a one-hot vector of labels instead of a single value. Hope this can help you. 
Upvotes: 1 <issue_comment>username_2: Alternatively, you might measure the angle between the two vectors (assuming they are points on a sphere), perhaps using their scalar product, and use that as the loss function: $a \cdot b = \|a\|\|b\|\cos\theta$ (or just use polar co-ordinates). An important question is whether the direction of the errors is likely to be uniform, or whether errors in particular directions happen more often than others (in which case that needs to be built into the loss function). Upvotes: 0
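If you do want a distance-aware objective, one common variant is to penalize the great-circle (haversine) distance between the predicted and true points directly, rather than comparing two separately computed distances. The sketch below is plain Python for illustration only; for gradient-based training you would re-express it with your framework's differentiable ops (e.g. the torch or tf equivalents of sin/cos/asin):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; an approximation

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points given in degrees.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def haversine_loss(y_true, y_pred):
    # Mean great-circle distance between true and predicted positions;
    # each element of y_true / y_pred is a (lat, lon) pair in degrees.
    dists = [haversine_km(t[0], t[1], p[0], p[1])
             for t, p in zip(y_true, y_pred)]
    return sum(dists) / len(dists)
```

A loss of 0 means every predicted point coincides with its true point, which avoids the many-to-one ambiguity mentioned in the first answer.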
2019/10/23
1,890
7,774
<issue_start>username_0: Is it possible to calculate the best possible placements for settlements in [Catan](https://en.wikipedia.org/wiki/Catan) without using an ML algorithm? While it is trivial to simply add up the numbers surrounding the settlement (highest point location), I'm looking to build a deeper analysis of the settlement locations. For example, if the highest point location is around a sheep-sheep-sheep, it might be better to go to a lower point location for better resource access. It could also weight for complementary resources, blocking other players from resources, and being closer to ports. It seems feasible to program arithmetically, yet some friends said this is an ML problem. If it is ML, how would one go about training, as the gameboard changes every game?<issue_comment>username_1: Catan is actually a much more complicated game than the simple rules would suggest, and an *exact* solution is probably beyond the scope of current AI techniques. Monte Carlo Tree Search or Expectiminimax techniques *seem* like they could help, but are intended for games of perfect information. Catan is not a game of perfect information (the development cards are hidden), and also has a phase that occurs *without* a regular turn sequence (trading). To solve Catan properly, I think you're going to need both algorithms for solving POMDPs (like CFR+), *and* algorithms for negotiation (like [Kraus' Diplomat](http://u.cs.biu.ac.il/~sarit/data/articles/An%20Automated%20Diplomacy%20Paper.pdf)). I'm not certain that these have been combined before in formal analysis, so this might actually be a good PhD thesis for someone. That said, you can probably get a good player using self-play techniques, because Catan has randomization, and a relatively small set of moves, like [Backgammon](https://en.wikipedia.org/wiki/TD-Gammon). These may or may not offer simple rules about how-best to play the game. Your friends are right to think about this as, at root, an ML problem. 
Upvotes: 2 <issue_comment>username_2: Historically, the non-ML approach would be an [expert system](https://en.wikipedia.org/wiki/Expert_system). This is typically a rules-based decision system, falling under the umbrella of [symbolic AI](https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence). These systems can have strong utility in limited contexts, but are generally "brittle" in that parameters not previously defined or accounted for will produce no-compute or weak utility. Because the rules of a game are fully definable, the main concern is utility, which relates to the degree to which the game has been solved. Informing a [heuristic](https://en.wikipedia.org/wiki/Heuristic) system in this case requires analysis of the game in the sense of game theory and combinatorial game theory, since Catan involves both imperfect information and combinatorial elements. The complexity is high indeed, not only due to imperfect information, branching factors, stochasticity, and more than two players, but, as you note, because the game board itself has a very high number of potential configurations, so solving the game is presumed to be extremely difficult to impossible. (Possibly [NEXPTIME](https://en.wikipedia.org/wiki/NEXPTIME) if finite and undecidable otherwise.) The paper [Game strategies for The Settlers of Catan](http://homepages.inf.ed.ac.uk/alex/papers/cig2014_gs.pdf) suggests that the game tree for Catan is not surveyable because the options for trade negotiation in natural language aren't bounded: > > One response to this is to develop a symbolic model consisting of heuristic strategies for playing the game. Developing > such models potentially has two advantages. First, a symbolic > model can in principle lead to an interpretable model of human > expert play ... Second, a symbolic model can provide > a prior distribution over which next move is likely to be > optimal... 
> > > The paper mentions this second part in relation to machine learning, where "the posterior distribution over optimal actions acquired through training improves on the baseline prior distribution." Especially where the game is unsolved and intractable, machine learning has demonstrated strong utility for an increasing number of games, so it is unlikely not to be an optimal component for truly strong play. However, such a system can be a combination of ML and [domain specific knowledge](https://en.wikipedia.org/wiki/Domain_knowledge), such as in [informed search](https://en.wikipedia.org/wiki/Search_algorithm#For_virtual_search_spaces). The paper [Optimizing UCT for Settlers of Catan](https://pdfs.semanticscholar.org/0ef6/7eac64e0d7ee2d68abf3976751940616aab9.pdf) goes into this in detail, and also provides references to prior work. If your primary requirement is strong utility, some form of machine learning is likely optimal. But it can be fun to attempt to solve games and cobble together sets of heuristics. Upvotes: 2 <issue_comment>username_3: From the way you have phrased your question, one can derive a couple of strong assumptions which simplify the problem tremendously and make it feasible: 1. We do not look for an agent being able to play the game, but only an evaluation of settlement options (no other agents to be considered) 2. The evaluation of settlement options is static (i.e. does not change over time) and is independent of other settlements From that, two simple ideas come to my mind: **1. the ML approach** Look at historical game data and see which settlement options led to a win. So basically look at tuples of (X,y), with X being something like (W8, C2, O6), meaning that the settlement gives access to wood with an 8, clay with a 2 and ore with a 6, and y indicating a win or loss. To make it a bit dynamic, you could differentiate between initial settlements (being placed at the beginning) and the ones during the game. 
So for each of these two categories you would basically derive a score for the possible settlements. If you can compute all the possible combinations, you will not even need ML, since you can simply run the math once and then look it up. This might be doable in this case, as the assumptions mentioned above simplify the problem a lot (compared to fully "solving" the game). Thinking through the possible combinations for a given settlement location (selecting 3 fields with A possible resources and B possible numbers) will quickly give you an idea about that. **2. The classy symbolic approach** What comes to my mind right away is linear programming, as it offers a convenient way to model the strategic aspects you have mentioned. You could develop a target function to maximize using scores for different resources and numbers (e.g. you could give clay higher importance than wool). Besides that, constraints can capture additional aspects of game strategies, like "always make sure to have access to clay" or "do not settle where the 3 resources are the same", etc. My very first idea to model this is using decision variables like X\_(i,j), with X being 0 or 1, i representing the resources out of {clay, wood, ..., desert} (side note: do not forget the water and the different ports here) and j modelling the numbers out of {2,...,12}. The constraints would need to model the fact that you need to select 3 of those X\_(i,j) for every settlement. If you want to calculate this for a given game, you would need to feed the model the possible settlement options based on the layout of that specific game. Then run the optimization and it gives you the best settlement option (i.e. the 3 feasible X\_(i,j) maximizing your goal function). By definition, you need to bring in game knowledge for this approach, and probably talking to someone who is really good at the game would help to understand what matters. Upvotes: 0
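The arithmetic baseline the question mentions (adding up the numbers around a spot) extends naturally to weighted scoring. A minimal Python sketch follows, where the pip counts are the true two-dice frequencies, but the resource weights and diversity bonus are invented for illustration and would need tuning with real game knowledge:

```python
# Ways (out of 36) to roll each total with two dice -- the "pip" counts.
PIPS = {2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1}

# Relative resource values: purely illustrative, not derived from data.
WEIGHTS = {"wood": 1.0, "clay": 1.0, "wheat": 1.0, "sheep": 0.7, "ore": 1.1}

def settlement_score(hexes, diversity_bonus=0.05):
    # hexes: adjacent tiles as (resource, number) pairs; deserts and water
    # contribute nothing. The score is the expected weighted yield per roll,
    # plus a small bonus per distinct resource, so sheep-sheep-sheep scores
    # lower than an equally probable mixed spot.
    expected = sum(WEIGHTS.get(res, 0.0) * PIPS.get(num, 0) / 36.0
                   for res, num in hexes)
    distinct = len({res for res, _ in hexes if res in WEIGHTS})
    return expected + diversity_bonus * distinct
```

With these (hypothetical) weights, a mixed wood/clay/wheat spot on 6-5-4 outranks a triple-sheep spot on 6-8-5, matching the intuition in the question.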
2019/10/23
614
2,648
<issue_start>username_0: (I apologize for the title being too broad and the question being not 'technical') Suppose that my task is to label news articles. This means that, given a news article, I am supposed to classify which category that news belongs to. E.g., 'Ronaldo scores a fantastic goal' should be classified under 'Sports'. After much experimentation, I came up with a model that does this labeling for me. It has, say, 50% validation accuracy. (Assume that it is the best.) And so I deployed this model for my task (on unseen data, obviously). Of course, from a probabilistic perspective, I should get roughly 50% of the articles labelled correctly. But how do I know which labels are actually correct and which labels need to be corrected? If I were to manually check (say, by hiring people to do so), how is deploying such a model better than just hiring people to do the classification directly? (Do not forget that the manpower cost of developing the model could have been saved.)<issue_comment>username_1: First of all, to be realistic, you usually expect more than 50% validation accuracy on article predictions. Back to your question: you should definitely try to automate this process if you are looking for a long-term solution for labeling articles. Deploying such a model should not cost more than hiring employees to do this manually, at least from a long-term perspective. Upvotes: 0 <issue_comment>username_2: There are several advantages: 1. Some text classification systems are *much* more accurate than 50%. For example, most spam classification systems are 99.9% accurate, or more. There will be little value to having employees review these labels. 2. Many text classification systems can output a *confidence* as well as a label. You can selectively have employees review only the examples the model is not confident about. Often these will be small in number. 3. 
You can usually *test* a text classification model by having it classify some unseen data, and then asking people to check the work. If you do this for a small number of examples, you can make sure the system is working. You can then confidently use the system on a much larger set of unlabeled examples, and be reasonably sure about how accurate it is. 4. For text, it is also important to measure how much different *people* agree on the ratings. You are unlikely to do better than this, because this gives you a notion of the subjectivity of the specific problem you are working on. If people disagree 50% of the time anyway, maybe you can accept a 50% failure rate from the automated system, and not bother checking its work. Upvotes: 4 [selected_answer]
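Point 2 above, reviewing only the low-confidence predictions, is straightforward to wire up. In the sketch below, `classify` is a stub standing in for any model that can report a confidence (e.g. the maximum of `predict_proba` in scikit-learn); the names and the 0.9 threshold are illustrative assumptions:

```python
def route_predictions(texts, classify, threshold=0.9):
    # Auto-accept labels the model is confident about; queue the rest
    # for human review. `classify` returns a (label, confidence) pair.
    accepted, needs_review = [], []
    for text in texts:
        label, confidence = classify(text)
        if confidence >= threshold:
            accepted.append((text, label))
        else:
            needs_review.append((text, label))
    return accepted, needs_review

def stub_classifier(text):
    # Stand-in for a real model: confident only on obvious sports text.
    if "goal" in text.lower():
        return "Sports", 0.97
    return "Politics", 0.55
```

Raising the threshold trades reviewer workload for fewer auto-accepted mistakes, which is exactly the lever points 1 and 2 describe.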
2019/10/23
682
2,415
<issue_start>username_0: [![enter image description here](https://i.stack.imgur.com/hKGIu.png)](https://i.stack.imgur.com/hKGIu.png) I got this slide from CMU's lecture notes. The $x\_i$s on the right are inputs and the $w\_i$s are weights that get multiplied together then summed up at each hidden layer node. So I'm assuming this is a node in the hidden layer. What is the mathematical reason for taking the sum of the weights and inputs and inputting that into a sigmoid function? Is there something the sigmoid function provides mathematically, or does it provide some sort of intuition useful for the next layer?<issue_comment>username_1: By itself, I'm not sure it's possible to know. It's possible the slides were old. Or, the intended purpose was to mention how sigmoid ranges from 0 to 1. Mostly, it looks like it was intended to bring up gradient descent. But it could also be an entry point to the discussion of other methods such as ReLU. Either that, or perhaps some sort of normalizing function. Upvotes: 2 <issue_comment>username_2: Let us suppose we have a network without any activation functions in between. Each layer then computes a linear function, i.e.

```
layer_output = W * layer_input + bias
```

Consider a 2-layer neural network. The output of layer one is

```
x2 = W1*x1 + b1
```

Now we pass this to the second layer:

```
x3 = W2*x2 + b2
   = W2*(W1*x1 + b1) + b2
   = (W2*W1)*x1 + (W2*b1 + b2)
   = W*x1 + b
```

Oh no! We still got a linear function. No matter how many layers we add, we will still get a linear function. In that case, our network will never be able to approximate any non-linear functions. So what is the solution? We simply add some non-linear functions in between. These functions are called activation functions. Some of these functions include: * ReLU * Sigmoid * tanh * Softmax and there are a lot more of them. Yay! Our network is no longer linear! 
We have a lot of different non-linear functions, and each of them serves a different purpose. For example:

* ReLU is simple and computationally cheap: ReLU(x) = max(0, x)
* Sigmoid outputs are between 0 and 1
* tanh is similar to sigmoid, but zero-centered, with outputs from -1 to 1
* Softmax is usually used if you want to represent a vector as a discrete probability distribution

Hope you are having a great day! Upvotes: 4 [selected_answer]
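The collapse argument above can be verified numerically. The matrices below are arbitrary illustrative values, chosen so that one pre-activation is negative and the ReLU actually changes something:

```python
import numpy as np

W1, b1 = np.array([[1.0, -1.0], [2.0, 0.5]]), np.array([0.5, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])
x = np.array([1.0, 2.0])

# Two stacked linear layers...
two_linear = W2 @ (W1 @ x + b1) + b2

# ...collapse into a single linear layer with W = W2*W1, b = W2*b1 + b2.
W, b = W2 @ W1, W2 @ b1 + b2
assert np.allclose(two_linear, W @ x + b)

# Inserting a ReLU between the layers breaks the collapse.
relu = lambda v: np.maximum(0.0, v)
two_nonlinear = W2 @ relu(W1 @ x + b1) + b2
```

The two-linear-layer output equals the single-layer output exactly, while the ReLU version differs, which is the whole reason for the activation function.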
2019/10/24
1,152
4,846
<issue_start>username_0: What are the actual risks to society associated with the widespread use of AI? Outside of the use of AI in a military context. I am not talking about accidental risks or unintentional behaviour - eg, a driver-less car accidentally crashing. And I am not talking about any transitional effects when we see the use of AI being widespread and popular. For instance I have heard that the widespread use of AI will make many existing jobs redundant, putting many people out of work. However this is true of any major leap forward in technology (for example the motor car killed off the stable/farrier industries). The leaps forward in technology almost always end up creating more jobs than were lost in the long run. I am interested in long term risks and adverse effects stemming directly from the widespread use of AI in a non-military sense. Has anybody speculated on the social or psychological impacts that AI will produce once it has become popular?<issue_comment>username_1: The biggest risk is algorithmic bias. As more and more decision-making processes are taken on by AI systems, there will be an abdication of responsibility to the computer; people in charge will simply claim the computer did it, and they cannot change it. The real problem is that training data for machine learning often contains bias, which is usually ignored or not recognised. There was a story on BBC Radio about someone whose passport photo was rejected by an algorithm because he supposedly had his mouth open. However, he belonged to an ethnic group which has larger lips than Caucasian whites, but the machine could not cope with that. There is a whole raft of examples where similar things happen: if you belong to a minority group, machine learning can lead to you being excluded, just because the algorithms will have been trained on training data that was too restricted. 
Update: Here is a link to [a BBC News story about the example](https://www.bbc.co.uk/news/technology-49993647) I mentioned. Upvotes: 3 [selected_answer]<issue_comment>username_2: One risk that’s already realized: large online vendors think they have implemented artificial intelligence in their “help” pages and therefore they can (try to) make it impossible to get to someone who can actually think. And since the artificial **stupidity** (AS) usually feeds the customer articles completely unrelated to the issue, anyone sufficiently persistent to pursue it is extremely pissed off at the company before (if ever) the issue is resolved. And because far too many people passively accept this abuse, the companies have no incentive to be more reasonable. In other words, “AS” is reducing our expectations for customer service. Another is the JavaScript intended to prevent invalid names, phone numbers, and email addresses in web forms, which due to bugs or obsolescence rejects legitimate inputs. Upvotes: 2 <issue_comment>username_3: IMHO the greatest risk is that AI can make people lazy. If you can ask an AI for an answer to any problem, what's your motivation to figure out how to figure out the answer for yourself? I have run into a lot of young people who can't add or multiply two three-digit numbers without using a calculator. When it's possible to dump a huge mass of data into an AI, and the AI tells you the structures it finds in the data without explaining *how* it finds the structures so you can do it yourself, the AI wins and you lose. Upvotes: 1 <issue_comment>username_4: * Offloading of responsibility may be the single greatest danger. Where algorithmic bias may be the core issue of Machine Learning, it can be identified and mitigated. Transferring responsibility to a robot or algorithm requires an intentional choice with a moral dimension. 
As the scholar <NAME> put it: > > In humans consciousness and ethics are associated with our morality, but that is because of our evolutionary and cultural history. In artefacts, moral obligation is not tied by either logical or mechanical necessity to awareness or feelings. This is one of the reasons we shouldn't make AI responsible: we can't punish it in a meaningful way. > > Source: [AI Ethics: Artificial Intelligence, Robots, and Society](http://www.cs.bath.ac.uk/~jjb/web/ai.html) > > > In a malicious sense, transferring agency to an automaton that may do something harmful which benefits me allows me to say "I didn't make the decision and have no responsibility for the outcome." (It seems to me that companies are doing this more and more.) There was a very good short story on the subject, *[Unchained: A story of love, loss, and blockchain](https://www.technologyreview.com/s/610831/unchained-a-story-of-love-loss-and-blockchain/)*, in which automated taxis develop novel strategies that have an unintended moral dimension in regard to humans. Upvotes: 1
2019/10/25
575
2,107
<issue_start>username_0: I recently started looking for networks that focus on image segmentation tasks related to biomedical applications. I could not miss the publication [U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597) (2015) by Ronneberger, Fischer, and Brox. However, as deep learning is a fast-growing field and the article was published more than 4 years ago, I was wondering if anyone knows other algorithms that yield better results for image segmentation tasks? And if so, do they also use a U-shape architecture (i.e. contraction path then expansion path with up-conv)?<issue_comment>username_1: U-Net and U-Net inspired architectures have been quite popular in the medical image-related tasks ever since it was first introduced. There have been several improved versions of U-Net designed for specific tasks that followed. One such example is [Attention U-Net](https://arxiv.org/abs/1804.03999v3), extremely popular for Pancreas Segmentation. Other examples of architectures that have achieved state-of-the-art results in image segmentation tasks in recent years include [Multi-Scale 3DCNN + CRF](https://arxiv.org/abs/1603.05959v3), popular for Brain and Lesion images, [Multi-Scale Attention](https://arxiv.org/abs/1906.02849) for MRIs, etc. A recent paper that describes an interesting 3D FCNN architecture is [HyperDense-Net](https://arxiv.org/abs/1804.02967v2), widely used for multi-modal tasks in medical image segmentation. Upvotes: 2 <issue_comment>username_2: You can find leaderboards as well as code at [this address](https://paperswithcode.com/task/semantic-segmentation). For now, [HRNetV2](https://arxiv.org/pdf/1908.07919.pdf) leads the game. The U-Net architecture is part of a broad family of network architectures that aggregate multi-scale features to extract finer details useful for semantic segmentation. Examples are Feature Pyramidal Networks (FPN), Hourglass, Encoder-Decoder, MatrixNet, etc... 
[![enter image description here](https://i.stack.imgur.com/oKEhJ.png)](https://i.stack.imgur.com/oKEhJ.png) Upvotes: 1
2019/10/26
701
2,330
<issue_start>username_0: According to [SBEED: Convergent Reinforcement Learning with Nonlinear Function Approximation](https://arxiv.org/pdf/1712.10285.pdf), the Smoothed Bellman operator is a way to dodge the double sampling problem. Can someone explain to me what the double sampling problem is and how SBEED solves it?<issue_comment>username_1: The double sampling problem is referenced in Chapter 11.5 *Gradient Descent in the Bellman Error* in [Reinforcement Learning: An Introduction (2nd edition)](http://incompleteideas.net/book/the-book.html). From the book, this is the full gradient descent (as opposed to semi-gradient descent) update rule for weights of an estimator that should converge to a minimal distance from the Bellman error: > > $$w\_{t+1} = w\_t + \alpha[\mathbb{E}\_b[\rho\_t[R\_{t+1} + \gamma\hat{v}(S\_{t+1},\mathbf{w})]] - \hat{v}(S\_{t},\mathbf{w})][\nabla\hat{v}(S\_{t},\mathbf{w})- \gamma\mathbb{E}\_b[\rho\_t\nabla\hat{v}(S\_{t+1},\mathbf{w})]]$$ > > > [...] But this is naive, because the equation above involves the next state, $S\_{t+1}$, appearing in two > expectations that are multiplied together. To get an unbiased sample of the product, > two independent samples of the next state are required, but during normal interaction > with an external environment only one is obtained. One expectation or the other can be > sampled, but not both. > > > Basically, unless you have an environment that you can routinely rewind and re-sample to get two independent estimates (for $\hat{v}(S\_{t+1},\mathbf{w})$ and $\nabla\hat{v}(S\_{t+1},\mathbf{w})$), the update rule that naturally arises from gradient descent on the Bellman error will not work any better than other approaches, such as semi-gradient methods. If you can do this rewind process on every step, then it may be worth it because of the guarantees of convergence, even off-policy with non-linear approximators. 
The paper proposes a workaround for this issue, keeping the robust convergence guarantees, but dropping the need to collect two independent samples of the same estimate on each step. Upvotes: 4 [selected_answer]<issue_comment>username_2: We cannot obtain an unbiased estimate of $E^2(x)$ from $\frac{1}{N} \sum\_i x\_i \cdot \frac{1}{N} \sum\_j x\_j$ when both averages are computed over the same sample, because $\mathrm{Var}(x)=E(x^2)-E^2(x)$ is non-zero in general. Upvotes: 0
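The double-sample issue is easy to illustrate numerically (a toy sketch of my own, not from the paper): squaring an estimate built from a single draw is biased towards $E[X^2]$, while multiplying two *independent* draws is unbiased for $E[X]^2$.

```python
import random

random.seed(0)

# X is 0 or 2 with equal probability, so E[X] = 1, E[X]^2 = 1, but E[X^2] = 2
def draw():
    return random.choice([0.0, 2.0])

N = 100_000

# One draw per estimate, then square it: converges to E[X^2], not E[X]^2
single = sum(draw() ** 2 for _ in range(N)) / N

# Two independent draws per estimate: unbiased for E[X]^2
double = sum(draw() * draw() for _ in range(N)) / N

print(single)  # close to 2.0 (biased, if E[X]^2 was the target)
print(double)  # close to 1.0 (unbiased for E[X]^2)
```

This is exactly the situation in the Bellman-error gradient: the environment only hands you one sample of $S_{t+1}$, so the "two independent draws" version is not available without a rewind.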
2019/10/28
415
1,692
<issue_start>username_0: I am trying to predict crime. I have data with factors: location, keyword description of the crime, time the crime occurred, and so on. This is for crimes that occurred in the past. I would like to treat the prediction of crimes as a binary classification problem. In this model, the data I have collected would form the "positive" examples: they are all examples of a crime happening. However, I am unsure what to use for the negative examples. Obviously, most of the time there is no crime at the location, but can I use this as negative data? For example, if I know there was a crime at 7pm at location X, and no other crimes there, should I generate new negative data points for every hour *except* 7pm? Ideally, I want to create probabilities of crime based on a set of factors.<issue_comment>username_1: It might be more informative to: 1. Label each combination of location, type, and time of crime with a crime *rate*. For example, theft, in Crystal City, at 11pm, occurs 20 times per year, or 0.4 times per resident per year. 2. Predict the crime *rate*, rather than individual events. This avoids the need to have explicit examples of "non-crime", and lets you instead directly learn something related to the probabilities of crimes being committed (the rate). Upvotes: 3 [selected_answer]<issue_comment>username_2: I would go so far as to say that unless the training examples include predicate data (that is, data about conditions leading up to a crime or non-crime), you cannot have enough information to predict the occurrence of a crime from conditions or events that happen in advance of a potential crime not yet committed. Upvotes: 0
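The rate-labelling idea from the accepted answer amounts to a simple aggregation step; here is a minimal sketch (the records and the observation window are made up for illustration):

```python
from collections import Counter

# Hypothetical past-crime records: (location, crime_type, hour_of_day)
events = [
    ("crystal_city", "theft", 23),
    ("crystal_city", "theft", 23),
    ("crystal_city", "theft", 22),
    ("old_town", "burglary", 2),
]

years_observed = 2.0  # assumed length of the data-collection window

# Count events per (location, type, hour) bucket and turn counts into yearly rates.
# Buckets with no events implicitly have rate 0, so no explicit "negative"
# examples need to be generated.
rates = {bucket: n / years_observed for bucket, n in Counter(events).items()}

print(rates[("crystal_city", "theft", 23)])  # 1.0 event per year in this bucket
```

A regression model can then be trained on these `(bucket features -> rate)` pairs instead of on binary crime/no-crime labels.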
2019/10/28
400
1,648
<issue_start>username_0: I am reading [Goodfellow's](http://www.deeplearningbook.org/) book about neural networks, but I am stuck on the mathematical calculus of the back-propagation algorithm. I understood the principle, thanks to some YouTube videos explaining this algorithm step by step, but now I would like to understand the **matrix calculus** (so not basic calculus!), that is, calculus with matrices and vectors, and especially everything related to derivatives with respect to a matrix or a vector, and so on. Which math book could you advise me to read? For context, I studied two years of mathematics after the baccalauréat (in French: mathématiques supérieures et spéciales), but have not practiced for years.<issue_comment>username_1: Linear Algebra Done Right by Axler seems to be the best book on linear algebra, with a brisk and modern approach. Upvotes: 1 <issue_comment>username_2: If you already have two years of a bachelor's of mathematics, I recommend part I of the book that you're mentioning. That part reviews the main mathematics used in the optimization of neural nets, and the later parts then go through the various models in detail. The review is done at a level that is suitable for someone who has already studied these topics, but needs a refresher. The book [Matrix Differential Calculus with Applications in Statistics and Econometrics](https://www.janmagnus.nl/misc/mdc-ch18.pdf) covers more advanced topics, which might also be what you are looking for. There is also [the related Wikipedia article](https://en.wikipedia.org/wiki/Matrix_calculus). Upvotes: 3 [selected_answer]
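As a small taste of the kind of identity these references cover (standard results, stated here only for illustration):

```latex
% Gradient of a quadratic form and of a linear map, for x, a in R^n and A in R^{n x n}:
\frac{\partial}{\partial \mathbf{x}} \left( \mathbf{x}^\top A \mathbf{x} \right)
  = (A + A^\top)\,\mathbf{x},
\qquad
\frac{\partial}{\partial \mathbf{x}} \left( \mathbf{a}^\top \mathbf{x} \right)
  = \mathbf{a}.
% When A is symmetric, the first reduces to 2 A x -- the matrix analogue of d/dx (a x^2) = 2 a x.
```

Back-propagation through a dense layer is essentially repeated application of identities of this shape, combined via the chain rule.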
2019/10/29
1,412
5,264
<issue_start>username_0: I'm working on a project, where we use an encoder-decoder architecture. We decided to use an LSTM for both the encoder and decoder due to its **hidden states**. In my specific case, the hidden state of the encoder is passed to the decoder, and this would allow the model to learn better latent representations. Does this make sense? I am a bit confused about this because I really don't know what the hidden state is. Moreover, we're using separate LSTMs for the encoder and decoder, so I can't see how the hidden state from the encoder LSTM can be useful to the decoder LSTM because only the encoder LSTM really understands it.<issue_comment>username_1: As you said, one way to look at it is definitely that the LSTM encoder's encoding can only be understood by itself; that's why the decoder exists. An optimisation process encoded it, so why couldn't an optimisation process decode it? The hidden state is essentially just an encoding of the information you gave the network, keeping the time dependencies in check. Most encoder-decoder networks are trained end to end, meaning that, as the encoding is learned, a corresponding decoding is learned simultaneously to decode the encoded latent into your desired format. I'd recommend you read [this blog](http://jalammar.github.io/illustrated-transformer/) on how transformer models are used to convert French to English, as it will give you better intuition and understanding of what happens with encoder-decoder sequence models. Upvotes: 2 <issue_comment>username_2: This is my own understanding of the hidden state in a recurrent network. If it's wrong, please feel free to let me know. Let's consider the following two input and output sequences \begin{align} X &= [a, b, c, d, \dots,y , z]\\ Y &= [b, c, d, e, \dots,z , a] \end{align} We will first try to train a multi-layer perceptron (MLP) with one input and one output from $X$ and $Y$. Here, the details of the hidden layers don't matter. 
We can write this relationship in maths as $$f(x)\rightarrow y$$ where $x$ is an element of $X$ and $y$ is an element of $Y$ and $f(\cdot)$ is our MLP. After training, if given the input $a = x$, our neural network will give an output $b = y$ because $f(\cdot)$ learned the mapping between the sequence $X$ and $Y$. Now, instead of the above sequences, try to teach the following sequences to the same MLP. \begin{align} X &= [a,a,b,b,c,c,\cdots, y,z,z]\\ Y &= [a,b,c,\cdots, z,a,b,c, \cdots, y,z] \end{align} More than likely, this MLP will not be able to learn the relationship between $X$ and $Y$. This is because a simple MLP **can't learn and understand the relationship** between the previous and current characters. Now, we use the same sequences to train an RNN. In an RNN, we take **two inputs**, one for our input and the previous hidden values, and **two outputs**, one for the output and the next hidden values. $$f(x, h\_t)\rightarrow (y, h\_{t+1})$$ **Important**: here $h\_{t+1}$ represents the next hidden value. We will execute some sequences of this RNN model. We initialize the hidden value to zero. ``` x = a and h = 0 f(x,h) = (a,next_hidden) prev_hidden = next_hidden x = a and h = prev_hidden f(x,h) = (b,next_hidden) prev_hidden = next_hidden x = b and h = prev_hidden f(x,h) = (c,next_hidden) prev_hidden = next_hidden and so on ``` If we look at the above process we can see that we are taking the previous hidden state values to compute the next hidden state. What happens is while we iterate through this process `prev_hidden = next_hidden` it also **encodes some information** about our sequence which will help in predicting our next character. Upvotes: 5 [selected_answer]<issue_comment>username_3: I like to think of hidden states as intermediate representations of input within a neural system. The overall goal of the system is to re-represent an input in some specific way so that the system can produce some target output. 
Each layer within a neural network can only really "see" an input according to the specifics of its nodes, so each layer produces unique "snapshots" of whatever it is processing. Hidden states are sort of intermediate snapshots of the original input data, transformed in whatever way the given layer's nodes and neural weighting require. The snapshots are just vectors so they can theoretically be processed by any other layer - by either an encoding layer or a decoding layer in your example. Upvotes: 2 <issue_comment>username_4: The hidden state in a RNN is basically just like a hidden layer in a regular feed-forward network - it just happens to also be used as an additional input to the RNN at the next time step. A simple RNN then might have an input $x\_t$, a hidden layer $h\_t$, and an output $y\_t$ at each time step $t$. The values of the hidden layer $h\_t$ are often computed as: $h\_t = f(W\_{xh}x\_t + W\_{hh}h\_{t-1})$ Where $f$ is some non-linear function, $W\_{xh}$ is a weight matrix of size $h\times x$, and $W\_{hh}$ is a weight matrix of size $h\times h$. I've left out the bias terms for simplicity. Thus, the values of the hidden layer $h\_t$ depend on the input $x\_t$ as well as on the previous hidden state $h\_{t-1}$ (literally, the previous values of the hidden layer). Upvotes: 2
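The update rule above can be reduced to a scalar toy in plain Python (the weight values here are arbitrary, chosen only for illustration):

```python
import math

def rnn_step(x, h, w_xh=0.5, w_hh=0.8):
    # scalar version of h_t = f(W_xh * x_t + W_hh * h_{t-1}), with f = tanh
    # (bias omitted, as in the answer above)
    return math.tanh(w_xh * x + w_hh * h)

h = 0.0
for x in [1.0, 0.0, 0.0]:
    h = rnn_step(x, h)

# h is still nonzero after two zero inputs: the hidden state has carried
# information about the first input forward through time
print(h)
```

This is the sense in which the hidden state "remembers": each step mixes the new input with a decayed copy of everything seen so far, and it is exactly this vector that an encoder can hand to a decoder.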
2019/10/30
1,311
4,985
<issue_start>username_0: Natural gradient aims to do a steepest descent on the "function" space, a manifold that is independent from how the function is parameterized. It argues that the steepest descent on this function space is not the same as steepest descent on the parameter space. We should favor the former. Since, for example in a regression task, a neural net could be interpreted as a probability function (Gaussian with the output as mean and some constant variance), it is "natural" to form a distance on the manifold under the KL-divergence (and a Fisher information matrix as its metric). Now, if I want to be creative, I could use the same argument to use "square distance" between the outputs of the neural nets (distance of the means), which I think is not the same as the KL. Am I wrong, or is it just another legitimate way? Perhaps not as good?<issue_comment>username_1: The KL divergence has slightly different interpretations depending on the context. The [related Wikipedia article](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence#Interpretations) contains a section dedicated to these interpretations. Independently of the interpretation, the KL divergence is always defined as a *specific* function of the [cross-entropy](https://en.wikipedia.org/wiki/Cross_entropy) (which you should be familiar with before attempting to understand the KL divergence) between two distributions (in this case, probability mass functions) \begin{align} D\_\text{KL}(P\parallel Q) &= -\sum\_{x\in\mathcal{X}} p(x) \log q(x) + \sum\_{x\in\mathcal{X}} p(x) \log p(x) \\ &= H(P, Q) - H(P) \end{align} where $H(P, Q)$ is the cross-entropy of the distributions $P$ and $Q$, and $H(P) = H(P, P)$. The KL is not a metric: it is not symmetric (in general, $D\_\text{KL}(P\parallel Q) \neq D\_\text{KL}(Q\parallel P)$), and it does not satisfy the triangle inequality. 
*Given that a neural network is trained to output the mean (which can be a scalar or a vector) and the variance (which can be a scalar, a vector or a matrix), why don't we use a metric like the MSE to compare means and variances?* When you use the KL divergence, you don't want to compare just numbers (or matrices), but probability distributions (more precisely, probability densities or mass functions), so you will not compare just the mean and the variance of two different distributions, but you will actually compare the distributions. See [the example of the application of the KL divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence#Basic_example) in the related Wikipedia article. Upvotes: 2 <issue_comment>username_2: Yes, squared distances and KL divergence are not the same. Squared distance between means is not a useful metric, as it doesn't gauge the amount of similarity between 2 distributions. When we compute \begin{align} D\_\text{KL}(P\parallel Q) \end{align} we are computing the amount of information that is lost when we approximate **P** as **Q**. Ideally, we would want the KL divergence to be as low as possible. Here is an interesting article <https://www.countbayesie.com/blog/2017/5/9/kullback-leibler-divergence-explained> where the author has explained KL divergence with a toy example. I hope it helps :) Upvotes: 1 <issue_comment>username_3: I have been reading a lot about natural gradient and its use to find a descent direction. I found that [this post](https://wiseodd.github.io/techblog/2018/03/14/natural-gradient/) was the clearest. Consider a model $p$ parameterized by some parameters $\theta$: we want to maximize the likelihood of observing our data $x$ under this model, $p(x|\theta)$. To optimise this likelihood we can take steps in the **distribution space**. When updating the parameters $\theta$, we need to measure how our likelihood changes, and this is measured using the KL divergence. 
Even though the KL divergence is not a "proper" distance metric, as it is not symmetric, it is still quite informative about the similarity between distributions. It's practical because it can capture differences between distributions that the (parameter-dependent) Euclidean metric could not (see [the same post](https://wiseodd.github.io/techblog/2018/03/14/natural-gradient/) for a simple example). So answering your question is essentially deciding which is better: natural gradient descent, or "normal" gradient descent in the Euclidean space where your loss is measured with an L2 norm. You can train the same model using both methods; you will just find different descent directions. Hopefully, both will converge, but in my opinion natural gradient descent should be superior in nature. It is just very expensive to actually compute, because to find the direction in distribution space you need to compute the inverse Fisher matrix $F^{-1}$, or approximate it, and that's quite costly as it is of size $n\times n$, where $n$ is the size of $\theta$, which is typically large in neural networks. Upvotes: 0
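The discrete KL formula quoted above is easy to check numerically; here is a minimal sketch (the helper is my own, not from any post referenced in this thread):

```python
import math

def kl_divergence(p, q):
    # D_KL(P || Q) = sum_x p(x) * log(p(x) / q(x)), for discrete distributions
    # given as lists of probabilities; terms with p(x) = 0 contribute nothing
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]

print(kl_divergence(p, q))  # positive: information lost approximating P by Q
print(kl_divergence(q, p))  # differs from the above: KL is not symmetric
print(kl_divergence(p, p))  # 0.0: no loss when the distributions match
```

Note how swapping the arguments changes the value, which is exactly the asymmetry discussed above.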
2019/10/30
2,210
8,278
<issue_start>username_0: A general AI *x* creates another AI *y* which is better than *x*. *y* creates an AI better than itself. And so on, with each generation's primary goal to create a better AI. Is there a name for this. By better, I mean survivability, ability to solve new problems, enhance human life physically and mentally, and advance our civilization to an intergalactic civilization to name a few.<issue_comment>username_1: I don't think there is a single standard word or phrase that covers just this concept. Perhaps *recursive self-improvement* matches the idea concisely - but that is not specific AI jargon. Very little is understood about what strength this effect can have or what the limits are. Will 10 generations of self-improvement lead to a machine that is 10% better, 10 times better, or $2^{10}$ times better? And by what measure? Some futurologists suggest this might be a very strong effect, and use the term *Singularity* to capture the idea that intelligence growth through recursive self-improvement will be strong, exceed human intelligence, and lead to some form of super-intelligent machine - the point at which this goal is reached is called *The Singularity*. [<NAME>](https://en.wikipedia.org/wiki/The_Singularity_Is_Near) is a well-known proponent of this idea. Specifically, use of the term *Singularity* implies more than just the basic recursion that you suggest, and includes assumptions of a very large effect. Plus technically, it refers to a stage that results from the recursion, not the recursion itself. However, despite the popularity of it as a concept, whether or not such self-improving system will have a large impact on the generation of intelligent machines is completely unknown at this stage. Related research about general intelligence is still in its infancy, so it is not even clear what would count as being the first example system x. 
Upvotes: 3 <issue_comment>username_2: The first thing that comes to mind when reading your question is *[Genetic algorithms](https://en.wikipedia.org/wiki/Genetic_algorithm)*. They create alternate versions of themselves and measure each version's performance on a specific task, before discarding those that work poorly, while keeping the best ones for their next generation. The mutations here are often random, and for large/complex problems, these simulations can take an incredibly long time. This group of algorithms is heavily inspired by evolution and biology, as you can see. I realize, as I read the last part of your question, that this might be on a much smaller scope than you had envisioned. But, in essence, genetic algorithms do what you describe in your first part. For the more grand-scale question, see @username_1's answer. Upvotes: 1 <issue_comment>username_3: ### **Direct Answer to Your Question**:-- Google uses the term: **Automated Machine Learning**. --- ### **What this Answer is About**:-- > > " ... A general AI x creates another AI y which is better than x. ... " ~ <NAME> (Stack Exchange user, Opening Poster) > > > What is the term for this: "A.I. creating A.I."? - What is some theory behind this:-- > > "The AutoML procedure has so far been applied to image recognition and language modeling. Using AI alone, the team have observed it creating programs that are on par with state-of-the-art models designed by the world’s foremost experts on machine learning." – Google's AI Is Now Creating Its Own AI. (2017, May 22). 
Retrieved from < <https://www.iflscience.com/technology/google-ai-creating-own-ai/> > > > > --- ### **Layperson Explanation**:-- * [Google's AI Can Make Its Own AI Now](https://www.mentalfloss.com/article/508019/googles-ai-can-make-its-own-ai-now) * [Google’s AI builds its own AI child and it’s better than anything humans have made](https://www.alphr.com/artificial-intelligence/1007850/google-s-ai-builds-its-own-ai-child-and-it-s-better-than-anything) * [Google Researchers Are Teaching Their AI to Build Its Own, More Powerful AI](https://www.sciencealert.com/google-is-improving-its-artificial-intelligence-with-artificial-intelligence) > > " ... Unfortunately, even people who have plenty of coding knowledge might not know how to create the kind of algorithm that can perform these tasks. Google wants to bring the ability to harness artificial intelligence to more people, though, and according to WIRED, it's doing that by teaching machine-learning software to make more machine-learning software. > > > The project is called AutoML, and it's designed to come up with better machine-learning software than humans can. As algorithms become more important in scientific research, healthcare, and other fields outside the direct scope of robotics and math, the number of people who could benefit from using AI has outstripped the number of people who actually know how to set up a useful machine-learning program. Though computers can do a lot, according to Google, human experts are still needed to do things like preprocess the data, set parameters, and analyze the results. These are tasks that even developers may not have experience in. ... " > > > – Google's AI Can Make Its Own AI Now. (2017, October 19). Retrieved from < <https://www.mentalfloss.com/article/508019/googles-ai-can-make-its-own-ai-now> > > > > We use programs to write programs. Researchers often need tools to solve complicated problems and algorithms are often needed. 
They don't always have the technical experience to do this. This is an artificial intelligence-based solution to the ever-growing challenge of applying machine learning to this problem. This allows non-experts to engage in predictive performance of their final machine learning models. There is the potential of "feed-back between systems" when A.I. feeds into A.I., which continues to feed into itself, ad infinitum. --- ### Business Applications and Practical Uses:-- Defer to the book: *Automated Machine Learning for Business*. * [Accelerate your Data Value Journey with Automated Machine Learning](https://www.bigsquid.com/accelerate-your-data-value-journey-with-automated-machine-learning) * [Leveraging the Power of Machine Learning to Save Lives](https://www.bigsquid.com/medical-tech-machine-learning) [This is not medical advice.] --- ### **Technical Mirror**:-- * (<http://www.ml4aad.org/automl/>) > > "What is AutoML? > Automated Machine Learning provides methods and processes to make Machine Learning available for non-Machine Learning experts, to improve efficiency of Machine Learning and to accelerate research on Machine Learning. > > > Machine learning (ML) has achieved considerable successes in recent years and an ever-growing number of disciplines rely on it. However, this success crucially relies on human machine learning experts to perform the following tasks: > > > * Preprocess and clean the data. > * Select and construct appropriate features. > * Select an appropriate model family. > * Optimize model hyperparameters. > * Postprocess machine learning models. > * Critically analyze the results obtained. > > > As the complexity of these tasks is often beyond non-ML-experts, the rapid growth of machine learning applications has created a demand for off-the-shelf machine learning methods that can be used easily and without expert knowledge. We call the resulting research area that targets progressive automation of machine learning AutoML." > > > – AutoML. (n.d.). 
Retrieved from < <http://www.ml4aad.org/automl/> > > > > * (<https://en.wikipedia.org/wiki/Automated_machine_learning>) * (<https://www.researchgate.net/publication/237136437_The_Nature_of_Self-Improving_Artificial_Intelligence>) --- ### Sources and References; and Further Reading:-- * (<https://ai.googleblog.com/2017/05/using-machine-learning-to-explore.html>) * (<http://www.primaryobjects.com/2013/01/27/using-artificial-intelligence-to-write-self-modifying-improving-programs/>) * (<https://skymind.ai/wiki/automl-automated-machine-learning-ai>) * (<https://www.datarobot.com/wiki/automated-machine-learning/>) * (<https://blog.aimultiple.com/auto-ml/>) * (<https://powerbi.microsoft.com/en-us/blog/announcing-automated-machine-learning-in-power-bi-general-availability/>) Upvotes: 2
2019/10/31
1,639
5,603
<issue_start>username_0: I have an image classification task to solve, but based on quite simple/good terms: * There are only two classes (either good or not good) * The images always show the same kind of piece (either with or w/o fault) * That piece is always filmed from the same angle & distance * I have at least 1000 sample images for both classes So I thought it should be easy to come up with a good CNN solution - and it was. I created a VGG16-based model with a custom classifier (Keras/TF). Via transfer learning I was able to achieve up to 100% validation accuracy during model training, so all is fine on that end. Out of curiosity and because the VGG-based approach seems a bit "slow", I also wanted to try it with a more modern model architecture as the base, so I did with *ResNet50v2* and *Xception*. I trained both similarly to the VGG-based model, tried several hyperparameter modifications, etc. However, I was not able to achieve a better validation accuracy than 95% - so much worse than with the "old" VGG architecture. Hence my question is: > > Given these "simple" (always the same) images and only two classes, is the VGG model probably a better base than a modern network like ResNet or Xception? Or is it more likely that I messed something up with my model or simply got the training/hyperparameters not right? > > ><issue_comment>username_1: VGG is a more basic architecture which uses no residual blocks. ResNet usually performs better than VGG due to its greater depth and its residual connections. Given that ResNet-50 can get 99% accuracy on MNIST and 98.7% accuracy on CIFAR-10, it should probably achieve better accuracy than a VGG network here too. Also, the validation accuracy should not be 100%; you could try increasing the size of your validation set to get a more reliable estimate. A VGG network should perform worse than ResNet in most scenarios, but experimenting is the way to go. Try and experiment more to get a method that works for your data. 
Hope that I can help you and have a nice day! Upvotes: 1 [selected_answer]<issue_comment>username_2: Below is a listing of Keras application models that can be used easily in transfer learning. Note that VGG has on the order of 140 million parameters, which is why it is slow.

```
Model              Size    Top-1 Accuracy  Top-5 Accuracy  Parameters   Depth
Xception           88 MB   0.790           0.945           22,910,480   126
VGG16              528 MB  0.713           0.901           138,357,544  23
VGG19              549 MB  0.713           0.900           143,667,240  26
ResNet50           98 MB   0.749           0.921           25,636,712   -
ResNet101          171 MB  0.764           0.928           44,707,176   -
ResNet152          232 MB  0.766           0.931           60,419,944   -
ResNet50V2         98 MB   0.760           0.930           25,613,800   -
ResNet101V2        171 MB  0.772           0.938           44,675,560   -
ResNet152V2        232 MB  0.780           0.942           60,380,648   -
InceptionV3        92 MB   0.779           0.937           23,851,784   159
InceptionResNetV2  215 MB  0.803           0.953           55,873,736   572
MobileNet          16 MB   0.704           0.895           4,253,864    88
MobileNetV2        14 MB   0.713           0.901           3,538,984    88
DenseNet121        33 MB   0.750           0.923           8,062,504    121
DenseNet169        57 MB   0.762           0.932           14,307,880   169
DenseNet201        80 MB   0.773           0.936           20,242,984   201
NASNetMobile       23 MB   0.744           0.919           5,326,716    -
NASNetLarge        343 MB  0.825           0.960           88,949,818   -
```

I tend to use the MobileNet model for transfer learning because it has about 4 million parameters, so it is much faster than most models. It should perform as well as VGG on your data set; if it does not, tuning the hyperparameters may be required. I find that using an adjustable learning rate, such as the Keras ReduceLROnPlateau callback, along with the ModelCheckpoint callback, both monitoring validation loss, works very well. Documentation is [here](https://keras.io/callbacks/). You might also try the EfficientNet model, which comes in various sizes and has high accuracy. Documentation is [here](https://github.com/Tony607/efficientnet_keras_transfer_learning). Upvotes: 1 <issue_comment>username_3: The newer models generally outperform older ones on the ImageNet challenge in their accuracy scores\*. 
This does not necessarily mean that this difference in performance will be reflected in your particular classification problem. The closer your problem is to the ImageNet one, the more likely it is that the relative model performances will be similar. However, when you perform transfer learning, you will often have to fine-tune the model to achieve strong performance; how well you tune the model will affect performance, and there will often be a difference in which model performs best on a given task. You can see papers in various classification tasks where VGG may be performing best, or Inception, or even AlexNet. I believe the simplest models (AlexNet has only 8 layers) may be the easiest to fine-tune, and may also require the smallest amount of data for good performance. \*There are exceptions: MobileNet is more recent, but the innovation is that it is a smaller model rather than the strongest model, i.e. it is designed to be usable on mobile devices rather than running on the latest GPU. Upvotes: 0
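The size/accuracy trade-off discussed in these answers can be turned into a simple selection rule; a small illustrative sketch (parameter counts and top-1 accuracies are the published Keras figures for a subset of the models):

```python
# Subset of the Keras applications table: (parameters, ImageNet top-1 accuracy).
MODELS = {
    "VGG16":       (138_357_544, 0.713),
    "MobileNet":   (4_253_864,   0.704),
    "MobileNetV2": (3_538_984,   0.713),
    "ResNet50V2":  (25_613_800,  0.760),
    "Xception":    (22_910_480,  0.790),
}

def smallest_model(min_top1):
    """Return the model with the fewest parameters whose top-1 accuracy
    meets the threshold, or None if no model qualifies."""
    candidates = [(params, name)
                  for name, (params, acc) in MODELS.items() if acc >= min_top1]
    return min(candidates)[1] if candidates else None
```

For example, `smallest_model(0.71)` returns `"MobileNetV2"`, which has roughly 1/39th the parameters of VGG16 at the same top-1 accuracy.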
2019/11/01
1,462
4,966
<issue_start>username_0: The thing about machine learning (ML) that worries me is that "knowledge" acquired in ML is hidden: we usually can't explain the criteria or methods used by the machine to provide an answer when we ask it a question. It's as if we asked an expert financial analyst for advice and he/she replied, "Invest in X"; then when we asked "Why?", the analyst answered, "Because I have a feeling that's the right thing for you to do." It makes us dependent on the analyst. Surely there are some researchers trying to find ways for ML systems to encapsulate and refine their "knowledge" into a form that can then be taught to a human or encoded into a much simpler machine. Who, if any, are working on that?
2019/11/02
849
3,576
<issue_start>username_0: There are mainly two different areas of AI at the moment. There is the "learning from experience" based approach of neural networks. And there is the "higher logical reasoning" approach, with languages like LISP and PROLOG. Has there been much overlap between these? I can't find much! As a simple example, one could express some games in PROLOG and then use neural networks to try to play the game. As a more complicated example, one would perhaps have a set of PROLOG rules which could be combined in various ways, and a neural network to evaluate the usefulness of the rules (by simulation). Or even create new PROLOG rules. (Neural networks have been used for language generation of a sort, so why not the generation of PROLOG rules, which could then be evaluated for usefulness by another neural network?) As another example, a machine with PROLOG rules might be able to use a neural network to be able to encode these rules into some language that could be in turn decoded by another machine. And so express instructions to another machine. I think, such a combined system that could use PROLOG rules, combine them, generate new ones, and evaluate them, could be highly intelligent. As it would have access to higher-order logic. And have some similarity to "thinking".<issue_comment>username_1: In reference to your exact question, there is published research that attempts to bring these two areas together. For example, [HolStep: A Machine Learning Dataset for Higher-order Logic Theorem Proving](https://arxiv.org/abs/1703.00426) (2017) by <NAME>, <NAME>, <NAME>. This group also has other published work related to the subject. Regardless of their results, they list several areas of logical systems that are highly suited to machine learning methods (section 3.1, p. 
4): > > * Predicting whether a statement is useful in the proof of a given conjecture > * Predicting the dependencies of a proof statement (premise selection) > * Predicting whether a statement is an important one (human named) > * Predicting which conjecture a particular intermediate statement originates from > * Predicting the name given to a statement > * Generating intermediate statements useful in the proof of a given conjecture > * Generating the conjecture the current proof will lead to > > > It's tough to know whether or not you can combine Higher Order Logic and Machine Learning in an effective way without needing to create a general AI. This is equivalent to wondering if an effective merging of the two areas is an AI-complete / AI-hard problem. There are active attempts at general AI by researchers such as <NAME> (many others as well but just to give a popular name for googling). Research into general AI would give you an idea of whether or not other pieces of the puzzle are needed in order to create something "highly intelligent". Upvotes: 2 <issue_comment>username_2: Another example where machine learning has been combined with symbolic AI is in the context of knowledge graphs (which [can be viewed as a graphical/visual representation of a knowledge base](https://ai.stackexchange.com/a/32666/2444)), where people have been proposing ways to learn embeddings of the entities and relations of the graphs (known as [**knowledge graph embeddings**](https://www.youtube.com/watch?v=gX_KHaU8ChI)), in order to be able to perform tasks like **triple classification** (i.e. given a triple $\langle s, r, o\rangle$ with a subject $s$, relation $r$ and object $o$, is this a real fact?). Upvotes: 2
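The triple-classification task mentioned in the second answer is often approached with translation-based embeddings such as TransE, where a triple $\langle s, r, o\rangle$ scores well when $s + r$ lands near $o$. A toy sketch (the 2-D embeddings are hand-picked for illustration, not learned):

```python
import math

def transe_score(s, r, o):
    """TransE-style plausibility score: negative L2 distance ||s + r - o||.
    Higher (closer to 0) means the triple <s, r, o> is more plausible."""
    return -math.sqrt(sum((si + ri - oi) ** 2 for si, ri, oi in zip(s, r, o)))

# Hypothetical toy embeddings, chosen so that paris + capital_of = france
# holds exactly, while the corrupted triple with berlin does not.
E = {"paris": (1.0, 2.0), "france": (3.0, 3.0), "berlin": (0.0, 0.0)}
R = {"capital_of": (2.0, 1.0)}

good = transe_score(E["paris"], R["capital_of"], E["france"])   # distance 0
bad = transe_score(E["berlin"], R["capital_of"], E["france"])   # distance > 0
```

In a real system the embeddings are learned from known facts, and a threshold on the score decides whether an unseen triple is classified as a fact.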
2019/11/02
927
4,077
<issue_start>username_0: If recurrent neural networks (RNNs) are used to capture prior information, couldn't the same thing be achieved by a feedforward neural network (FFNN) or multi-layer perceptron (MLP) where the inputs are ordered sequentially? Here's an example I saw where the top line of each section represents letters typed and the next row represents the predicted next character (red letters in the next row means a confident prediction). [![enter image description here](https://i.stack.imgur.com/4LUjg.jpg)](https://i.stack.imgur.com/4LUjg.jpg) Wouldn't it be simpler to just pass the $X$ number of letters leading up to the last letter into an FFNN? For example, if $X$ equaled 4, the following might be the input to the FFNN ``` S, T, A, C => Prediction: K ```<issue_comment>username_1: An RNN or LSTM has the advantage of "remembering" past inputs, which improves performance on the prediction of time-series data. If you use a plain neural network over, say, the past 500 characters, this may work, but the network just treats the input as a bunch of data without any specific indication of time. The network can learn a representation of time only through gradient descent. An RNN or LSTM, however, has "time" as a mechanism built into the model. The model processes the data sequentially and has a real "sense of time" even before it is trained. The model also has a "memory" of previous data points to help the prediction. The architecture is based on the progression of time, and the gradients are propagated through time as well. This is a much more intuitive way to process time-series data. A 1D CNN will also work for the task. An example of a CNN on time-series data is WaveNet, which generates incredibly lifelike speech using a dilated convolutional neural network. As for whether an LSTM or a CNN works better, it depends on the data. You should try experimenting with both networks to see which works best. Suppose you need to classify a video's genre.
It is much simpler to watch it in sequence than to see frames of it playing randomly in front of your eyes. This is why an RNN or an LSTM works better on time-series data. Upvotes: 3 <issue_comment>username_2: Assumptions ----------- Different model structures encode different assumptions - while we often make simplifying assumptions that aren't strictly correct, some assumptions are more wrong than others. For example, your proposed structure of "just pass the $X$ number of letters leading up to the last letter into an FFNN" makes the assumption that all the information relevant for the decision is fully obtainable from the $X$ previous letters, and that the $(X+1)$st and earlier input letters are not relevant - in some sense, an extension of the Markov property. Obviously, that's not true in many cases; there are all kinds of structures where long-term relationships matter, and assuming that they don't leads to a model that intentionally doesn't take such relationships into account. Furthermore, it would make an independence assumption that the effects of the $X$th, $(X-1)$st and $(X-2)$nd elements on the current output are entirely distinct and separate; you don't assume that those features are related, while in most real problems they are. The classic RNN structures also make some implicit assumptions, namely, that only the preceding elements are relevant for the decision (which is wrong for some problems, where information from the following items is also required), and that the transformative relationship between the input, output and the passed-on state is *the same* for all elements in the chain, and that it doesn't change over time. That's also not necessarily true in all cases - this is quite a strong restriction - but it's generally less wrong than the assumption that the last $X$ elements are sufficient, and powerful true (or mostly true) restrictions are useful (e.g. the No Free Lunch Theorem applies) for models that generalize better; just like e.g.
enforcing translational invariance for image analysis models, etc. Upvotes: 3 [selected_answer]
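The fixed-window FFNN framing from the question (feed the last $X$ characters, predict the next) amounts to simple data preparation; a minimal sketch of just that input pipeline, leaving the network itself out:

```python
def make_windows(text, x):
    """Turn a character stream into (context, next_char) training pairs for
    a feed-forward model with a fixed context of x characters. Anything
    earlier than the x-character window is invisible to the model - exactly
    the Markov-style assumption the accepted answer discusses."""
    return [(text[i:i + x], text[i + x]) for i in range(len(text) - x)]

pairs = make_windows("STACK", 4)   # [("STAC", "K")]
```

Each context string would then be one-hot encoded into a flat input vector; the window length $X$ is a hard cap on how far back the model can ever look.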
2019/11/02
627
2,249
<issue_start>username_0: I am looking for a dataset, which I could train a model to detect people/boats/surfboards, etc., from a drone view. Has anyone seen a dataset that could be useful for this purpose? I have some photos made by me (like this one below), but I need more data. Of course, the best will be if data will be labeled, but, if someone has seen an unlabeled dataset with videos/photos like that below, please share the link to it. Sample photos I am looking for: [![enter image description here](https://i.stack.imgur.com/icdGt.jpg)](https://i.stack.imgur.com/icdGt.jpg) [![enter image description here](https://i.stack.imgur.com/Es5er.jpg)](https://i.stack.imgur.com/Es5er.jpg)<issue_comment>username_1: Perhaps you can check this dataset out: <http://www.aiskyeye.com/> > > The VisDrone2019 dataset is collected by the AISKYEYE team at Lab of > Machine Learning and Data Mining , Tianjin University, China. The > benchmark dataset consists of 288 video clips formed by 261,908 frames > and 10,209 static images, captured by various drone-mounted cameras, > covering a wide range of aspects including location (taken from 14 > different cities separated by thousands of kilometers in China), > environment (urban and country), objects (pedestrian, vehicles, > bicycles, etc.), and density (sparse and crowded scenes). Note that, > the dataset was collected using various drone platforms (i.e., drones > with different models), in different scenarios, and under various > weather and lighting conditions. These frames are manually annotated > with more than 2.6 million bounding boxes of targets of frequent > interests, such as pedestrians, cars, bicycles, and tricycles. Some > important attributes including scene visibility, object class and > occlusion, are also provided for better data utilization. > > > It provides many drone view images with bounding boxes. Hope it can help you and have a nice day! 
Upvotes: 2 <issue_comment>username_2: If someone is looking for a dataset for maritime SAR (Search and Rescue) purposes in the future, we have created **the first such dataset that is publicly free to use for academic research**: <http://afo-dataset.pl/en/download/> Upvotes: 1 [selected_answer]
2019/11/02
441
1,941
<issue_start>username_0: In the book [Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning](https://rads.stackoverflow.com/amzn/click/com/0956372813), <NAME> says > > With supervised learning, the response to each input vector is an output vector that **receives immediate vector-valued feedback specifying the correct output**, and this feedback refers uniquely to the input vector just received; in contrast, each reinforcement learning output vector (action) receives scalar-valued feedback often sometime after the action, and this feedback signal depends on actions taken before and after the current action. > > > I fail to understand the part formatted in bold. Once we have a set of labeled examples (feature vector and label pairs), where is the "feedback" coming from? Testing and validation results of our calibrated model (say a neural network based one)?<issue_comment>username_1: By "immediate vector-valued feedback", they probably mean exactly the label in the "labeled examples" you mentioned. Upvotes: 3 [selected_answer]<issue_comment>username_2: Sorry for the delay. The term "vector-valued feedback" is compared to scalar-valued feedback. The implication (which I should have made explicit) is that, because vector-valued feedback tells the network the correct answer, the changes in weights required to improve performance are reasonably easy to calculate (e.g. using backprop). In contrast, if a scalar-valued feedback is given (as in reinforcement learning) then the network knows only how bad its previous output was, but not how to change weights in order to improve the output. A rough analogy would be that vector-valued feedback tells you that you got the wrong answer to a question, and provides the correct answer. In contrast, scalar-valued feedback just tells you 'how wrong' your answer was, but does not tell you how to improve your answer. Upvotes: 1
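Stone's contrast can be illustrated numerically: the label provides a per-element error vector (a direction for every output element), whereas a reinforcement signal collapses everything into one scalar. A minimal stdlib sketch (the output/target values are made up):

```python
output = [0.9, 0.2, 0.4]
target = [1.0, 0.0, 0.5]   # the label: vector-valued feedback

# Supervised learning: an error *vector* - each element says how to move
# the corresponding output (and hence which way to push the weights).
error = [t - o for t, o in zip(target, output)]   # approx [0.1, -0.2, 0.1]

# Reinforcement learning: a single scalar - "how wrong", not "how to fix".
# Here we use negative squared error as a stand-in reward signal.
reward = -sum(e * e for e in error)
```

The vector tells the learner that the second output was too high and the others too low; the scalar only says the whole attempt scored about $-0.06$.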
2019/11/03
423
1,841
<issue_start>username_0: I have a structured dataset of around 100 gigs, and I am using DNN for classification in TF 2.0. Because of this huge dataset, I cannot load entire data in memory for training. So, I'll be reading data in batches to train the model. Now, the input to the network should be normalized and for that, I need training dataset mean and SD. I have been reading TensorFlow docs to get info on how to normalize features when reading data in batches. But, couldn't find one. though I found this [article](https://www.tensorflow.org/tutorials/load_data/csv), it is only for the case where entire data can be loaded in memory. So, If any of you have worked on creating such a TensorFlow data pipeline for normalizing input features while loading data in batches and training model, It would be helpful.
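One way to obtain the training-set mean and SD without loading the data in memory is to accumulate running sums batch by batch and compute the statistics at the end. A framework-agnostic sketch (the toy batches below stand in for a real batched reader such as a `tf.data` pipeline):

```python
import math

def batch_stats(batches):
    """Accumulate per-feature mean and (population) standard deviation over
    an iterable of batches, holding at most one batch in memory at a time."""
    n, s, ss = 0, None, None
    for batch in batches:
        for row in batch:
            if s is None:
                s = [0.0] * len(row)   # running sums per feature
                ss = [0.0] * len(row)  # running sums of squares per feature
            n += 1
            for j, v in enumerate(row):
                s[j] += v
                ss[j] += v * v
    mean = [si / n for si in s]
    # SD from E[x^2] - E[x]^2
    sd = [math.sqrt(ssj / n - m * m) for ssj, m in zip(ss, mean)]
    return mean, sd

# Stand-in for a real batched reader; two batches, two features per row.
batches = [[[1.0, 10.0], [2.0, 20.0]], [[3.0, 30.0]]]
mean, sd = batch_stats(batches)   # mean = [2.0, 20.0]
```

The computed `mean`/`sd` can then be baked into the input pipeline (e.g. in a `Dataset.map` step) so every batch is normalized on the fly. For features with very large magnitudes, Welford's online algorithm is a more numerically stable alternative to the sum-of-squares formula used here.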
2019/11/04
604
2,629
<issue_start>username_0: I am interested in exploring whether AI techniques can derive hidden patterns of relationships in a data set. For example, from among house size, lot size, age of house and asking price, what formula best predicts selling price? In explorations around how this might be done, I tried to use a neural network to solve for a predictable relationship between two variables to predict a third, so I trained my neural network with inputs consisting of the length of two sides of a triangle, and the result being the length of the hypotenuse. I couldn't get it to work. I was told by somebody who understands all this better than me that the reason it failed is because conventional neural networks are not good at modeling non-linear relationships. If that is true, I wonder if there is some other AI technique that could 'derive' a network modeling the Pythagorean theorem from a training data set with better results than a normal neural network?<issue_comment>username_1: > > For example, from among house size, lot size, age of house and asking price, what formula best predicts selling price? > > > There is no general formula for this. Search for neural network regression and you can get started. The AI technique, or any prediction algorithm in general, will learn a function that maps from the input feature vector $(x\_1, ...,x\_n)$, where each element in the vector is a measurement on the $\text{predictors/independent variables/regressors}$, to the $\text{variable of interest/dependent variable}$, i.e. the $\text{selling price}$. > > I was told by somebody who understands all this better than me that the reason it failed is because conventional neural networks are not good at modeling non-linear relationships. > > > The statement is incorrect. In fact, the opposite is true. Neural networks are well known for modeling non-linear relationships. Examples are the highly successful image classification CNN architectures like Inception, ResNet, etc.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You are mixing up lots of things here. Specifically, you seem to be lacking a basic understanding of artificial neural networks and what they can do (e.g. which types of artificial neural networks are linear classifiers/regressors and which can model non-linear relationships). Therefore, I'd take a step back and start with understanding the basics of AI. The go-to book for that is 'Artificial Intelligence: A Modern Approach' by Russell and Norvig. It might be a slower (and more theoretical) start but IMO that is the right approach to actually understand what you are doing. Upvotes: 2
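The Pythagorean experiment in the question becomes tractable once the non-linearity is made explicit: on squared features, $h^2 = a^2 + b^2$ is linear, so even two weights trained by plain gradient descent recover it. A stdlib-only sketch (training triangles, learning rate and iteration count are arbitrary choices):

```python
import math
import random

random.seed(0)
triangles = [(a, b) for a in range(1, 6) for b in range(1, 6)]  # leg lengths

# Model: h^2 = w1 * a^2 + w2 * b^2, fitted by stochastic gradient descent.
# The "hidden pattern" is recovered when w1 and w2 both converge to 1.
w1, w2, lr = 0.0, 0.0, 1e-3
for _ in range(5000):
    a, b = random.choice(triangles)
    pred = w1 * a * a + w2 * b * b
    err = pred - (a * a + b * b)      # label: the true squared hypotenuse
    w1 -= lr * err * a * a            # gradient step on each weight
    w2 -= lr * err * b * b

hyp = math.sqrt(w1 * 3 ** 2 + w2 * 4 ** 2)   # predict for the (3, 4) triangle
```

The same trick (engineering features so the target becomes linear in them) is what a hidden layer with a suitable non-linearity has to learn implicitly, which is why the raw-input experiment is much harder.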
2019/11/04
480
2,062
<issue_start>username_0: I would like to develop a neural network to measure the distance between two opposite sides of an object in an image (in a similar way that the fractional caliper tool measures an object). So, given an image of an object, the neural network should produce the depth or height of the object. Which computer vision techniques and neural networks could I use to solve this problem?
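A common non-neural baseline for this measurement task: segment the object (with any segmentation model), measure its pixel extent, and convert using a fixed camera calibration (valid when angle and distance are fixed). A sketch assuming a binary mask is already available (the mask and scale below are invented for illustration):

```python
def object_height(mask, mm_per_pixel):
    """Distance between the topmost and bottommost object pixels in a
    binary mask (rows of 0/1), converted with a fixed camera scale."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    if not rows:
        return 0.0
    return (max(rows) - min(rows) + 1) * mm_per_pixel

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
height = object_height(mask, mm_per_pixel=0.5)   # 3 object rows -> 1.5 mm
```

A neural alternative would be to regress the measurement directly from the image, but the mask-based route makes the geometry explicit and is easier to validate.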
2019/11/05
970
3,494
<issue_start>username_0: I believe I saw an article about an AI that was able to decode human vision 'brain-waves' in real-time, which would create a blurry image of what the human was seeing. [This AI Decodes Your Brainwaves and Draws What You're Looking at](https://futurism.com/the-byte/ai-draws-decodes-brainwaves) Is anyone aware where I can find this?<issue_comment>username_1: ### Direct Answer to the OP's Question > > "Have any AI's been able to decode human vision 'thoughts'" ~ Albert (Stack Exchange user, OP) > > > [This is technology that can produce pictures of what the user is thinking about through scanning a brain](https://futurism.com/the-byte/ai-draws-decodes-brainwaves). > > "Is anyone aware where I can find this?" ~ Albert (Stack Exchange user, OP) > > > [Emotiv](https://www.emotiv.com/) is the most accessible commercial model (circa late 2019). --- The OP is probably interested in consumer brain–computer interfaces (also known as BCIs). These are varied technologies which range from: 1. [Simple "yes-no" brain-interfaces (e.g. for people in a coma)](https://kids.frontiersin.org/article/10.3389/frym.2018.00024) 2. [Advanced programs that control video games through thought, such as a high fantasy wizard duel](https://www.technologyreview.com/s/407447/connecting-your-brain-to-the-game/) (Do a search about this on YouTube!) 3. [Technology that can produce pictures of what the user is thinking about through scanning a brain](https://futurism.com/the-byte/ai-draws-decodes-brainwaves). (This was mentioned by the OP.) --- This Wikipedia page, <https://en.wikipedia.org/wiki/Consumer_brain%E2%80%93computer_interfaces>, compares different models of BCIs. * (<https://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface>) * (<https://en.wikipedia.org/wiki/Electroencephalography>) * (<https://en.wikipedia.org/wiki/Emotiv_Systems>) --- There are also some very serious ethical issues regarding being able to "read brains."
I mentioned the medical use, and I hope it goes in this direction. Deep philosophical discussions can be had on whether it is appropriate to read the brain of a supposed criminal. (I would personally say no.) (<https://plato.stanford.edu/entries/neuroethics/>) --- [I can't comment on the technical details of this. It is outside of my purview. For example, if you need information about the Python-BCI interface, you will need an expert.] --- [This is not medical and/or legal advice. This is theoretical discussion.] Upvotes: 1 [selected_answer]<issue_comment>username_2: There have been studies at the University of Oregon and Kyoto University on visualising thoughts and dreams on a screen, using the voxel values of an fMRI scan as input and an estimated image of the thoughts as the output. Instead of linking you to these studies and papers - you could just watch [this episode](https://youtu.be/AgbeGFYluEA) of Mind Field, where both these studies are demonstrated and linked. The idea behind this is easier to understand if you have a good understanding of generative networks such as generative adversarial networks (GANs). Essentially, in GANs you'd map a known latent distribution to images in pixel-space. You would be doing the same thing here, just that the latent distribution would now be the fMRI scan input, and the mapping would be made in a supervised setting where subjects are initially shown images. A very rough understanding of the idea can be drawn along these lines. Upvotes: 1
2019/11/06
3,013
11,607
<issue_start>username_0: Robot technology is usually thought of from an engineering perspective. A human programmer writes software that is executed in a robot, which then performs a task. But what would happen if the project were started with the opposite goal? The idea is that the human becomes the robot himself. That means the human uses makeup to make his face look more mechanical, buys special futuristic clothing which reflects the light, and imitates, in a roleplay, the working of a kitchen robot. *What are methods human actors use to imitate robots?*<issue_comment>username_1: The great acting teacher [Stella Adler](https://en.wikipedia.org/wiki/Stella_Adler) wrote about [mannerisms](https://en.wikipedia.org/wiki/Mannerism) being a powerful tool for actors. [Method acting](https://en.wikipedia.org/wiki/Method_acting) in general focuses on natural performances based roughly on understanding the mindset of the character portrayed. It's possible actors who have portrayed androids have observed industrial robots to inform their physicality, and many performances convey the idea, via movement, of a mechanical inner structure. (It is often said that an "actor's body is their instrument".) What is more interesting is actors trying to convey the cognitive structure of the androids. With Arnold, and [Terminator](https://en.wikipedia.org/wiki/The_Terminator) robots in general, the baseline performance is decidedly robotic, to convey their inhumanity. But the more advanced Terminators are able to mimic naturalistic human mannerisms, and even established human characters, to trick humans. [Lieutenant Data](https://en.wikipedia.org/wiki/Data_(Star_Trek)) often used head motions, such as cocking his head slightly, to convey computation. Here the character arc involved working to become more human, as this character draws heavily on [Pinocchio](https://en.wikipedia.org/wiki/Pinocchio), the wooden puppet that became a boy.
Overall, Data's performance conveyed a lack of emotion, a definite reference to the logic-oriented [Mr. Spock](https://en.wikipedia.org/wiki/Spock), although I recall episodes where Data experimented with "emotional circuits" and "humor circuits", where the output was intentionally inconsistent with natural human behavior. [*Blade Runner*](https://en.wikipedia.org/wiki/Blade_Runner#Plot), where the [Tyrell Corporation](https://bladerunner.fandom.com/wiki/Tyrell_Corporation)'s motto was "More Human than Human", presented the cutting-edge Nexus-6 androids as having emotions; but, due to their artificially short life-spans, they were portrayed as childlike in trying to reconcile extremely powerful feelings. The [Voight-Kampff Test](https://bladerunner.fandom.com/wiki/Voight-Kampff_test), a form of [Turing Test](https://en.wikipedia.org/wiki/Turing_test), used in the film to identify androids, relied on the emotional response to questions. The key plot point of [*Do Androids Dream of Electric Sheep*](https://en.wikipedia.org/wiki/Do_Androids_Dream_of_Electric_Sheep%3F#Plot_summary), the novel the film was based on, utilized what would be formalized as [evolutionary game theory](https://en.wikipedia.org/wiki/Evolutionary_game_theory#Routes_to_altruism) to hypothesize that empathy is a natural function of sufficiently advanced intelligence. Deckard, who may or may not have been an android, and Rachel, who definitely was, are both capable of love. This capacity informed their performances, to the extent that the androids came off as more human than the actual humans, due to the depth of their emotion. This is also reflected in [*Blade Runner 2049*](https://en.wikipedia.org/wiki/Blade_Runner_2049#Plot) via the girlfriend-bot Joi, who is the most limited android, but the most human character in the film per her capacity to love (or at least simulate it.)
In the recent [HBO Westworld](https://en.wikipedia.org/wiki/Westworld_(TV_series)) reboot, the androids replicate natural human mannerisms when playing their designated roles, but reset to more mechanical mannerisms when acting under their own agency. This is reflected in [*Ex Machina*](https://en.wikipedia.org/wiki/Ex_Machina_(film)), where the android mimics human emotions to pass a Turing Test and trick the human subject, only to revert to purely alien mannerisms after the android is free. ("Alien" here used in the sense of non-human--it's possible the android is sentient as it seems to convey some degree of emotion in regarding the simulated human skin it will wear.) The most interesting recent android performance may come from [*Alien: Covenant*](https://en.wikipedia.org/wiki/David_8#Character_development), where **Michael Fassbender plays two identical androids, David and Walter, which have two distinct neural structures**. (David has the capacity to be creative, where Walter cannot. In the film it is mentioned that David made people uncomfortable, so the creative functions were removed from subsequent models.) The key difference in the performance seems to be that David demonstrates passion, and even emotions, where Walter is more clearly "robotic". * In general, the underlying approach of actors seems to have been to show the androids being distinct from humans, drawing a clear, though sometimes subtle, contrast. * Actors portraying androids have typically utilized robotic mannerisms to convey an artificial entity. Upvotes: 3 [selected_answer]<issue_comment>username_1: *Disclaimer: The intent of this answer is to suggest a parallel between methods of acting and machine learning, both in intent and application, and theory.
A large number of links are included for the convenience of readers new to the field, and there is not an exact correspondence of AI concepts to acting preparation techniques.* In my [prior answer](https://ai.stackexchange.com/a/16342/1671), I mentioned the [method acting](https://en.wikipedia.org/wiki/Method_acting) technique, and [Stella Adler](https://en.wikipedia.org/wiki/Stella_Adler)'s interpretation of [Stanislavski's method](https://en.wikipedia.org/wiki/Konstantin_Stanislavski). Bear in mind that the method is a post-empiricism approach, an attempt, in some sense, to create a science of acting in the sense of analysis, and an approach that is fundamentally algorithmic in the sense of process. (The original manual is titled *[An Actor Prepares](https://en.wikipedia.org/wiki/An_Actor_Prepares#Summary)*.) Note that areas covered include action, imagination (creativity), units and objectives, emotional memory (accessing memory), and adaptation. (See also: [Classical Acting](https://en.wikipedia.org/wiki/Classical_acting).) Note also that [plays](https://www.etymonline.com/word/play) are aptly named. Drama and comedy arise out of interplay of individuals, and the process of refining performance is the process of play: searching within a rule-space for an optimal outcome. * Strong actors will rigorously research the character to create a [mental model](https://en.wikipedia.org/wiki/Mental_model) of the character's experience of the world, similar to a [model-based agent](https://en.wikipedia.org/wiki/Intelligent_agent). * Modern actors seek objectives, sometimes referred to as motivations, similar to [goal-based agents](https://en.wikipedia.org/wiki/Intelligent_agent#/media/File:Model_based_goal_based_agent.png). The model has many dimensions, and there may be multiple layers of objectives in the sense of the [subconscious](https://en.wikipedia.org/wiki/Subconscious). (What does the character want? What does it really want? What does it really really want?) 
This also applies to the contexts for any choice, which are multiple (personal, societal, economic, etc.) * Actors observe human behavior for the purpose of [imitating](https://en.wikipedia.org/wiki/Turing_test#Imitation_Game) it, commonly referred to as "people watching". As you note, actors preparing for the role of a robot may observe machinery, with the purpose of indicating that quality for an audience. Actors may also observe other actors, although novelty in performance is typically understood to be optimal. * Actors will access emotional memory, alternately referred to as "sense memory" and "emotional recall" ([affective memory](https://en.wikipedia.org/wiki/Affective_memory)), either to produce a physical effect or for analysis. The outputs are [signifiers](https://en.wikipedia.org/wiki/Semiotics#Terminology). They create a [state space](https://en.wikipedia.org/wiki/State_(computer_science)) which they can return to and access on command. Essentially it's a form of "memory palace" ([method of loci](https://en.wikipedia.org/wiki/Method_of_loci)) where events take the place of locations. * Actors will improvise in preparation, to identify and test [choices](https://en.wikipedia.org/wiki/Rational_choice_theory) (actions and mannerisms), which involves [decision theory](https://en.wikipedia.org/wiki/Decision_theory). The choices of the other actors ([rational agents](https://en.wikipedia.org/wiki/Rational_agent)) are a factor, and influence each other. * Choices are selected in a [genetic process](https://en.wikipedia.org/wiki/Genetic_algorithm), for fitness in an environment, here defined as audience response. The improvisation that leads to the performance is [evolutionary](https://en.wikipedia.org/wiki/Evolutionary_game_theory), in that it optimizes via the rehearsal process, with the director as audience, and later, in the case of live theater, in response to live audiences. (See also the [Actor-critic](http://incompleteideas.net/book/first/ebook/node66.html) model.) 
It's not quite a [Monte Carlo tree search](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search), being more of an [informed search](https://en.wikipedia.org/wiki/Search_algorithm#Informed_search), but it does not exclude randomness. Essentially, it's a process of analysis, trial-and-error, more analysis, repeat, similar to [machine learning](https://en.wikipedia.org/wiki/Machine_learning) with [heuristics](https://en.wikipedia.org/wiki/Heuristic). It wouldn't be far off to say that there is a [convergence](https://softwareengineering.stackexchange.com/questions/288777/what-does-it-mean-for-an-algorithm-to-converge), leading to what is perceived to be the [optimal](https://en.wikipedia.org/wiki/Mathematical_optimization) set of choices (although it is more typical to say a performance "gels" or "comes together"). * It can be said that modern acting methods are themselves algorithmic processes, where the intent is [maximizing utility](https://en.wikipedia.org/wiki/Utility), here audience response, which can carry significant economic consequences. **Modern actors are using methods similar to [modern AI methods](https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach) to imitate intelligent [androids](https://en.wikipedia.org/wiki/Android_(robot))!** In the sense of Adler specifically, the technique involves *simulating* natural emotions to "trick" the observer, a form of [Affective computing](https://en.wikipedia.org/wiki/Affective_computing). In other words, via training, the actor is doing what AIs are being trained to do in the context of interacting with humans. The underlying method can be understood as a form of applied [psychology](https://en.wikipedia.org/wiki/Psychology#Major_schools_of_thought) and [neuroscience](https://en.wikipedia.org/wiki/Neuroscience), where the actor is accessing emotion for the purpose of analysis, and accessing specific parts of the brain on command to create observable [signs](https://en.wikipedia.org/wiki/Sign_(semiotics)). 
Upvotes: 1
2019/11/08
1,418
5,487
<issue_start>username_0: I came across this [answer on Quora](https://www.quora.com/What-is-convergence-in-neural-network), but it was pretty sparse. I'm looking for specific meanings in the context of machine learning, but also mathematical and economic notions of the term in general.<issue_comment>username_1: When formulating a problem in deep learning, we need to come up with a loss function, which uses model weights as parameters. Back-propagation starts at an arbitrary point on the error manifold defined by the loss function and with every iteration intends to move closer to a point that minimises the error value by updating the weights. Essentially, for every possible set of weights the model can have, there is an associated loss for a given loss function, with our goal being to find the minimum point on this manifold. Convergence is a term mathematically most common in the study of series and sequences. A model is said to converge when the series $s(n) = loss\_{w\_n}(\hat y, y)$ (where $w\_n$ is the set of weights after the $n$'th iteration of back-propagation and $s(n)$ is the $n$'th term of the series) is a converging series. The series is of course an infinite series only if you assume that loss = 0 is never actually achieved, and that the learning rate keeps getting smaller. Essentially, a model converges when its loss actually moves towards a minimum (local or global) with a decreasing trend. It's quite rare to actually come across a strictly converging model, but convergence is commonly used in a similar manner as convexity is: strict convergence rarely exists in practice, but the term tells us how close the model is to the ideal scenario of convexity, or in this case convergence. Upvotes: 3 <issue_comment>username_2: This is actually a highly technical term, which has been kind of misused and overgeneralized in many places. What does 'convergence' mean in a literal sense? 
It simply means that a sequence of terms indexed by $\mathbb{N}$ ($X\_1, X\_2, X\_3, \dots$) tends to a certain fixed value, say $X$, as $N \rightarrow \infty$, but may not achieve the fixed value. (there are a few technical details associated with this definition but I won't go into it as it requires some analysis) When it comes to ML we are looking at probabilistic or stochastic models. When we talk about convergence in ML, we generally mean 4 types of convergence: * **[Convergence in Probability](https://www.probabilitycourse.com/chapter7/7_2_5_convergence_in_probability.php)**: This means that as $N \rightarrow \infty$ the likelihood of $X\_N$ (a sequence of random variables) being very close to $X$ increases, i.e. $P(\omega:|X\_N(\omega)-X(\omega)| >\epsilon) \rightarrow 0$ as $N\rightarrow \infty$. This type of convergence is mostly used in Statistical Learning Theory. * **[Almost Sure Convergence](https://www.probabilitycourse.com/chapter7/7_2_7_almost_sure_convergence.php)**: This means that the probability of $X\_N$ (a sequence of random variables) eventually converging to $X$ is $1$ (**NOTE**: Here there is no likelihood of being close to $X$, we straight up say it must converge to $X$), i.e. $P(\omega: \lim\_{N\to\infty} X\_N(\omega) = X(\omega)) = 1$. This is a stronger version of the previous convergence, and this is the type of convergence I have seen being used in RL. * [**Convergence in Distribution**](https://www.probabilitycourse.com/chapter7/7_2_4_convergence_in_distribution.php): This means that the distribution of a sequence of random variables tends to a certain distribution, i.e. $$\lim\_{N\to \infty} F\_{X\_N} = F\_X.$$ 
* **Convergence in $r$'th moment**: This means that a sequence of random variables will converge to a certain mean as the sequence goes to infinity, or simply put: $$\lim\_{N\to \infty} \mathbb E[|X\_N - \mu|^r] = 0$$ where $\mu$ is the value to which the random variables converge in the $r$th moment. A simple useful [reference](http://web.mit.edu/14.381/www/ho6.pdf) for all the aforementioned modes of convergence. As a side note, this is meant as an informal reference; there is a lot of mathematical analysis involved in getting conditions for when these hold for a sequence of random variables. In the context of ML, one can think of $L(w\_N,y\_N,x\_N)$ in place of $X\_N$, where $w\_N, x\_N, y\_N$ determine the next term of the sequence, and one can check whether it satisfies any of the aforementioned modes of convergence using some sufficient conditions. Note that when we do convex optimization we are talking about almost sure convergence (if the method used works), while for SGD, due to stochasticity, one might formulate it in the convergence in probability setting. As a concrete example, the PAC learning paradigm uses the convergence in probability framework (without going into details, the idea of PAC learning is that with increasing size of the dataset your confidence about your classifier increases, which can be interpreted as some sort of convergence in probability with the actual loss as the random variable; check the PAC learning framework [here](http://mi.eng.cam.ac.uk/%7Ecz277/doc/Slides-PAC.pdf)), while the Q-learning convergence ([proof](http://users.isr.ist.utl.pt/%7Emtjspaan/readingGroup/ProofQlearning.pdf) as suggested in the comments) is an almost sure convergence under some assumptions (probably proved by Bertsekas and Tsitsiklis), and CLT is an example of convergence in distribution. Upvotes: 2
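To make the first mode concrete, here is a small simulation (a sketch of my own, not from the answers above) of convergence in probability via the weak law of large numbers: the sample mean of $n$ fair coin flips plays the role of $X\_N$, the constant $0.5$ plays the role of $X$, and we estimate $P(|X\_N - X| > \epsilon)$ empirically for growing $n$:

```python
import random

def tail_probability(n, eps=0.05, trials=500, p=0.5):
    """Monte Carlo estimate of P(|sample mean of n coin flips - p| > eps)."""
    exceed = 0
    for _ in range(trials):
        mean = sum(random.random() < p for _ in range(n)) / n
        if abs(mean - p) > eps:
            exceed += 1
    return exceed / trials

random.seed(0)
for n in (10, 100, 1000):
    # The estimated tail probability shrinks as n grows.
    print(n, tail_probability(n))
```

The printed probabilities decrease toward zero as `n` grows, which is exactly the statement $P(|X\_N - X| > \epsilon) \rightarrow 0$.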
2019/11/08
913
3,910
<issue_start>username_0: Facebook has [just pushed out](https://arxiv.org/pdf/1911.02116.pdf) a bigger version of their multi-lingual language model XLM, called XLM-R. My question is: do these kinds of multi-lingual models imply, or even ensure, that their embeddings are comparable between languages? That is, are semantically related words close together in the vector space across languages? Perhaps the most interesting citation from the paper that is relevant to my question (p. 3): > > Unlike Lample and Conneau (2019), we do not use language embeddings, > which allows our model to better deal with code-switching. > > > Because they do not seem to make a distinction between languages, and there's just one vocabulary for all trained data, I fail to see how this can be truly representative of semantics anymore. The move away from semantics is increased further by the use of BPE, since morphological features (or just plain, statistical *word chunks*) of one language might often not be semantically related to the same chunk in another language - this can be true for tokens themselves, but especially so for subword information. So, in short: how well can the embeddings in multi-lingual language models be used for semantically comparing input (e.g. a word or sentence) of two different languages?<issue_comment>username_1: Embeddings generated by transformers like Bert or XLM-R are fundamentally different from embeddings learned through language models like GloVe or Word2Vec. The latter are static, i.e. they are just dictionaries containing a vocabulary with n-dimensional vectors associated with each word. Because of this they can be plotted through PCA, and the distance between them can be easily calculated with whatever metric you prefer. When training Bert or XLM-R you are instead not learning vectors, but the parameters of a transformer. The embedding for each token is then generated once the token is fed into the transformer. 
This implies several things, the most important being that the hidden representation (the embedding) for a token changes depending on the context (recall that XLM-R also uses as input the hidden states generated by the previous tokens). This means that there are no static vectors to compare by plotting them or by calculating the cosine similarity. Nevertheless, there are ways to analyse and visualise the syntax and semantics encoded in the parameters; this paper shows some strategies: <https://arxiv.org/pdf/1906.02715.pdf> On a more linguistic side, I would also ask why vectors of the same words should show the same semantic properties across languages. Surely there are similarities for a lot of words translated literally, but the use of some expressions is inherently different across languages. To make a quick example: in English the clock 'works', in Dutch the clock 'lopen' (it walks) and in Italian the clock 'funziona' (it functions). Same expression, three different words in different languages that do not necessarily share the same neighbours in their monolingual latent spaces. The point of transformers is exactly to move from static representations to dynamic ones that are able to learn that those three verbs (in their specific language) can appear early in a sentence and close to the word clock. Upvotes: 2 <issue_comment>username_2: There is a general idea in the field of NLP that there is a mapping between embeddings in different languages. Figure 1 explains this. [![Figure 1](https://i.stack.imgur.com/Dg7xH.png)](https://i.stack.imgur.com/Dg7xH.png) In Figure 1 we have the embeddings of English words and Spanish words, and we see that there exists a mapping between the manifolds associated with these two languages, i.e. the Spanish manifold is a distorted image of the English manifold. This idea was used to create an unsupervised translator in the [MUSE Project](https://github.com/facebookresearch/MUSE). Upvotes: -1
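The mapping between embedding spaces shown in Figure 1 is commonly fit, MUSE-style, as an orthogonal matrix via the Procrustes solution: given paired source/target vectors $X, Y$, take the SVD $U S V^T = X^T Y$ and set $W = U V^T$. A minimal sketch with synthetic vectors standing in for real word embeddings (the data here is made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "English" vectors and a hidden rotation producing "Spanish" vectors.
X = rng.normal(size=(50, 8))                   # source-language embeddings
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))   # unknown true orthogonal map
Y = X @ Q                                      # target-language embeddings

# Procrustes: W = argmin ||X W - Y||_F over orthogonal W,
# solved in closed form by the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

print(np.allclose(X @ W, Y))  # the learned map recovers the hidden rotation
```

With real embeddings the fit is only approximate (the manifolds are distorted, not rotated copies of each other), which is why MUSE combines this with adversarial initialization and iterative refinement.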
2019/11/09
790
3,468
<issue_start>username_0: A recent question on AI and acting recalled me to the idea that in drama, there are not only conflicting motives between agents (characters), but a character may themselves have objectives that are in conflict. The result of this in performance is typically nuance, but also carries the benefit of combinatorial expansion, which supports greater novelty, and it occurs to me that this would be a factor in [affective computing](https://en.wikipedia.org/wiki/Affective_computing). (The actress <NAME> is a good example, where her performances typically involve indicating two or more conflicting emotions at once.) It occurs to me that this can even arise in the context of a formal game where achieving the most optimal outcome requires managing competing concerns. * Is there literature or examples of AI with internal conflicting objectives?
2019/11/09
602
2,636
<issue_start>username_0: In perfect information games, the agent can see all the moves performed in the past. Besides, it can observe the next action that will be put into practice by the opponent. In this case, can we say that perfect information games are actually a fully observable environment? If we reach this conclusion, I guess that imperfect information becomes a partially observable environment?<issue_comment>username_1: Not exactly, at least traditionally: in Game Theory, "imperfect information" is most often defined as agents having only partial information about the history of agents' actions, as you correctly noted. But also note that this doesn't refer to the general world facts or state. But "partial observability" is typically used in terms of *systems*, e.g. in Markov Decision Processes, where it explicitly refers to world state, which might or might not include the history of other actors' actions. But of course in the end it depends which exact definitions are used in the context you're looking at - every author is free to define their own concepts, using traditional names or new ones. Upvotes: 1 <issue_comment>username_2: There is indeed a close parallel here, but the concepts *are* distinct. Every perfect information game is fully observable, but not every fully observable game is a game of perfect information. A game of [imperfect information](https://en.wikipedia.org/wiki/Perfect_information) is one in which you lack knowledge of *any* of the following: 1. The state of the game (e.g. current market prices). 2. The rewards you will receive from various states (i.e. utility and cost functions). In contrast, in partially observable process (specifically, a [POMDP](https://en.wikipedia.org/wiki/Partially_observable_Markov_decision_process)), the requirement is that you must not know *which* state you are in. This is a subtle distinction, so here are some examples: * A multi-armed bandit game with stationary distributions. 
Here, you know which state you are in (in fact, if the distributions are stationary, you know that the state doesn't change, except for the value of your winnings). You are not in a POMDP (the game is fully observable), but you *are* operating with imperfect information, because you don't know the utility function associated with different actions. You are operating in a regular MDP. * The game of chess has perfect information, and is thus also fully observable. * The game of poker has imperfect information because you cannot observe the current state of the game (you can't see the cards in your opponent's hand). It is thus a POMDP. Upvotes: 2
2019/11/09
588
2,542
<issue_start>username_0: I am solving a problem in which, according to the given values, the heuristic is not admissible. According to my calculation from other similar problems, it should be consistent, as well as keeping in mind the values, but the solution says it's not consistent either. Can someone tell why?
2019/11/09
688
3,069
<issue_start>username_0: I am working on a problem in which I am attempting to find a stable region in a spiral galaxy. The PI I'm working with asked me to use machine learning as a tool to solve the problem. I have created some visualizations of my data, as below. [![data](https://i.stack.imgur.com/P6FVS.png)](https://i.stack.imgur.com/P6FVS.png) In this image, you can see there is a flat region between 0 and roughly 30 pixels, and between 90 pixels and 110 pixels. I have received suggestions to use an RNN LSTM model that can identify flat regions, but I wanted to hear suggestions of other neural network models as well. The PI I'm working with suggests feeding my data visualization images into a neural network and having the neural network identify said stable regions. Can this be done using a neural network, and what resources would I have to look at? Moreover, can this problem be solved with an RNN LSTM? I think the premise of this was to treat the radius as some temporal dimension. I've been extensively looking for answers online, and I cannot quite seem to find any similar examples.<issue_comment>username_1: In image processing, CNNs are usually used to create weighted filters that focus on the image features most important for making predictions. Keras is one of the libraries used to examine images in this way. With this type of analysis you will need labeled data: you want to create a network that takes a photo as input, extracts the flat black-line regions, and outputs those. The model will be generative, generating guesses of regions where the function is flat. This is all possible to do, but in order to label the data you either need to label it by hand or you need to create a function that labels it automatically, which would not be very difficult. The input nodes will take in the pixels of the picture, and the output layer will be guesses, at locations along the graph, of whether the section is flat or not. 
It seems overkill to do this with a neural network when it is possible to avoid one entirely; creating a labeling method will most likely be your first step anyway. If you have any questions please ask. Upvotes: 0 <issue_comment>username_2: If you're really just trying to find long contiguous flat regions in a sequence, **you do not need machine learning**. Your PI is mistaken. You would be better off simply writing a short data processing program. Your program could find the finite differences between adjacent datapoints, and then count whether a long string of them are below some threshold to identify long flat regions. This will be faster, simpler, and perhaps more accurate than using ML on data visualizations for this task. If you are trying to find something more complex than these long flat regions, you could instead train an [LSTM](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) on the raw sequential data that you are using to generate the images. Again, that will probably be more accurate than trying to train a CNN, or any non-sequential model, on the image data itself. Upvotes: 1
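The finite-difference approach from the last answer fits in a few lines; the threshold and minimum-length values below are made-up parameters to tune against your actual data:

```python
import numpy as np

def flat_regions(y, threshold=0.01, min_length=10):
    """Return (start, end) index pairs where |dy| stays below threshold
    for at least min_length consecutive points."""
    flat = np.abs(np.diff(y)) < threshold
    regions, start = [], None
    for i, f in enumerate(flat):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_length:
                regions.append((start, i))
            start = None
    if start is not None and len(flat) - start >= min_length:
        regions.append((start, len(flat)))
    return regions

# A toy signal: flat, then a ramp, then flat again.
y = np.concatenate([np.full(30, 1.0), np.linspace(1, 5, 40), np.full(30, 5.0)])
print(flat_regions(y, threshold=0.01, min_length=10))  # → [(0, 30), (69, 99)]
```

For noisy data you would smooth first (e.g. a moving average) and raise the threshold accordingly.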
2019/11/09
856
3,087
<issue_start>username_0: My vague understanding of reinforcement learning (RL) is that it's very similar to supervised learning except that it updates on a continuous feed of data/activity; this to me sounds very similar to AutoML (which I've started to notice being used). Do they use different algorithms? What is the fundamental difference between RL and AutoML? I'm after an explanation for somebody who understands technology but does not work with machine learning tools regularly.<issue_comment>username_1: [Automated machine learning](https://arxiv.org/pdf/1908.05557.pdf) ([AutoML](https://arxiv.org/pdf/1810.13306.pdf)) is an [umbrella term](https://en.wiktionary.org/wiki/umbrella_term) that encompasses a collection of techniques (such as [hyper-parameter optimization](https://en.wikipedia.org/wiki/Hyperparameter_optimization) or [automated](https://en.wikipedia.org/wiki/Feature_learning) [feature engineering](https://en.wikipedia.org/wiki/Feature_engineering)) to **automate** the design and application of machine learning algorithms and models. [Reinforcement learning (RL)](https://en.wikipedia.org/wiki/Reinforcement_learning) is a sub-field of machine learning concerned with the task of making decisions and taking actions in an environment so as to maximize (long-term) **reward** (which is the goal of the so-called RL agent). RL is (at least partially) based on the way animals (including humans) learn. For example, the usual way of [training a dog](https://en.wikipedia.org/wiki/Dog_training) to perform a certain task is to reward it with food whenever it takes the correct action (for example, jumping, if you want the dog to jump whenever you make a certain gesture with your hand). In this case, the RL agent is the dog, the task the dog needs to perform (e.g. jumping) is the environment, food is the reward and the goal is to get food. 
Given that reinforcement learning (RL) is a sub-field of machine learning, in principle, AutoML can also be used to automate the design of RL algorithms, models or agents. For example, if you use a neural network to represent the policy (the function that determines which action to take in the environment), then you can potentially use AutoML to find the most appropriate architecture (for example, the most appropriate number of layers) for this neural network. Upvotes: 3 [selected_answer]<issue_comment>username_2: RL can be used in the context of Neural Architecture Search (NAS), which is a form of automated ML. A model searches for an architecture that performs a given task. How well this task is performed guides how the architecture will be modified (improved) on the next pass. It works but is very computation-intensive (think hundreds of GPUs). See for instance: * <NAME> and <NAME> (2016), [arxiv: 1611.01578](https://arxiv.org/abs/1611.01578). * <NAME>, <NAME>, <NAME>, and <NAME> (2017), [arxiv: 1707.07012](https://arxiv.org/abs/1707.07012). * <NAME>, <NAME>, <NAME>, <NAME>, and <NAME> (2018), [arxiv: 1807.11626](https://arxiv.org/abs/1807.11626). Upvotes: 2
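As a toy illustration of the AutoML side described above, here is a random search over a small architecture space. The `evaluate` function is a hypothetical stand-in for actually training and validating a model per candidate, which is what a real AutoML or NAS system would do:

```python
import random

def evaluate(config):
    """Stand-in for training a model and returning validation accuracy.
    (Hypothetical: pretends accuracy peaks at 3 layers and 64 units.)"""
    return 1.0 - 0.05 * abs(config["layers"] - 3) - 0.001 * abs(config["units"] - 64)

search_space = {"layers": [1, 2, 3, 4, 5], "units": [16, 32, 64, 128]}

random.seed(0)
best, best_score = None, float("-inf")
for _ in range(20):
    # Sample a random candidate architecture and keep the best one seen.
    candidate = {k: random.choice(v) for k, v in search_space.items()}
    score = evaluate(candidate)
    if score > best_score:
        best, best_score = candidate, score
print(best, best_score)
```

Replacing the random sampler with an RL policy that proposes candidates (and treats the validation score as reward) is essentially the NAS setup in the papers cited above.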
2019/11/10
1,383
4,402
<issue_start>username_0: In a feed-forward neural network, in order to efficiently do backpropagation, what kind of data structure is needed? I know the weights can just be stored in an array, and you need pointers of some kind to represent connections from one layer to the next (or just a default scheme/pattern), but is anything else needed for backpropagation to work?<issue_comment>username_1: **TL;DR**: You'll need to store a little bit more to perform backward passes. You'll need to store data from the forward pass. This stored information is used for calculating the gradient. **Overview** (warning: not trivial) > > I know the weights can just be stored in an array > > > You'll need a little more: To update the weights you need to keep a "cache" of the forward pass intermediate terms. That is, forward propagation can be seen as a series of transformations on your input $X$: $$X\xrightarrow{\Theta^{[1]}+b^{[1]}} [ Z^{[1]} \xrightarrow{\alpha^{[1]}} A^{[1]}] \xrightarrow{\Theta^{[2]}+b^{[2]}} \dots \xrightarrow{\Theta^{[L]}+b^{[L]}} [ Z^{[L]} \xrightarrow{\alpha^{[L]}} A^{[L]}]\xrightarrow{\frac{1}{m}\sum\limits\_m\sum\limits\_{n\_L} loss\{A^{[L]},y\}} J $$ where: $Z^{[1]}=\Theta^{[1]}X+b^{[1]}$ (ie the linear part) $A^{[l]}=\alpha^{[l]}(Z^{[l]})$ (ie element wise activation over linear part) You need to store the $Z^{[l]}$ & $A^{[l]}$ terms in said "cache." You could store these in an array or some other similar data structure. You need these for calculating the gradient during the backwards pass. **Syntax** $A^{[k]}$ - this means we are indexing by layer (eg $\alpha^{[k]}$ is the activation for k-th layer) $m$ - is the number of examples in the batch $n\_k$ - denotes the number of neurons in the k-th layer $L$ - the number of layers (so $n\_L$ is the number of neurons in last layer) $\Theta$ - The set of *all* weights (notice no superscript) **Backprop** In the case of neural networks the cost is a scalar function of inputs and parameters. 
To get backprop started, calculate the [scalar-by-matrix](https://en.wikipedia.org/wiki/Matrix_calculus#Scalar-by-matrix) derivative of the cost with respect to the activations of the last layer; call this matrix $dA^{[L]}$. Observe: $dA^{[L]} = \frac{\partial J(\Theta,X)}{\partial A^{[L]}}$ Next, we calculate the scalar-by-matrix derivative of the cost with respect to $Z^{[L]}$. Doing this one realizes: $dZ^{[L]} = \frac{\partial J(\Theta,X)}{\partial Z^{[L]}} = dA^{[L]}\odot\alpha'^{[L]}(Z^{[L]})$ where $\odot$ denotes the element-wise ([Hadamard](https://en.wikipedia.org/wiki/Hadamard_product_(matrices))) product. With the above one can make use of the matrix definitions for back propagation: $\text{(A)}\quad d\Theta^{[l]} = \frac{1}{m}dZ^{[l]}\times (A^{[l-1]})^T$ $\text{(B)}\quad db^{[l]} = \frac{1}{m}\sum\_{c=1}^m dZ^{[l](c)}$ (where the superscript in $dZ^{[l](c)}$ denotes summing along the *batch* dimension) $\text{(C)} \quad dZ^{[l]}= dA^{[l]}\odot \alpha^{'[l]}(Z^{[l]})$ $\text{(D)}\quad dA^{[k]} = (\Theta^{[k+1]})^T\times dZ^{[k+1]}$ And of course the weight updates (note the $\frac{1}{m}$ factor is already included in (A) and (B)) are: $\Theta^{[l]} \leftarrow \Theta^{[l]} - \eta\, d\Theta^{[l]}$ $b^{[l]} \leftarrow b^{[l]} - \eta\, db^{[l]}$ (where $\eta$ is the learning rate) Observe how the forward-pass terms are used during the backprop calculations. **A recommendation** Take the A. Ng deep learning specialization. He does a good job explaining the intuition and even has a project to implement this. Though he does not derive the back-propagation equations, you can find a (not so easy) derivation [here](https://rojasinate.com/documents/backprop.pdf). Upvotes: 3 [selected_answer]<issue_comment>username_2: When I checked [the TensorFlow source code](https://github.com/tensorflow/tensorflow), it looks like they implemented their own data structure for neural network operations, including backpropagation. The custom data structure is [EagerTensor](https://github.com/tensorflow/tensorflow/blob/v2.9.0/tensorflow/python/framework/ops.py#L1049). 
I think that, to speed up operations, TensorFlow uses [the TFRecord file](https://www.tensorflow.org/tutorials/load_data/tfrecord) format. Additionally, for the neural network structure (which they call a Graph), they use [the Protocol Buffer format](https://chromium.googlesource.com/external/github.com/tensorflow/tensorflow/+/r0.10/tensorflow/g3doc/how_tos/tool_developers/index.md). Upvotes: 0
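The cached $Z^{[l]}$/$A^{[l]}$ terms and equations (A)–(D) from the accepted answer can be sketched in plain NumPy. This is a minimal two-layer illustration (sigmoid hidden layer, linear output, MSE cost) with layer sizes and names of my own choosing; the $1/m$ factor is folded into `dZ2` rather than into each gradient, which is algebraically equivalent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, params):
    """Forward pass; returns the output and the cache needed for backprop."""
    W1, b1, W2, b2 = params
    Z1 = W1 @ X + b1              # linear part of layer 1
    A1 = sigmoid(Z1)              # activation of layer 1
    Z2 = W2 @ A1 + b2             # linear output layer
    return Z2, (X, Z1, A1, Z2)

def backward(y, params, cache):
    """Backward pass using the cached Z and A terms from the forward pass."""
    W1, b1, W2, b2 = params
    X, Z1, A1, Z2 = cache
    m = X.shape[1]
    dZ2 = 2.0 * (Z2 - y) / m                  # d(MSE)/dZ2; the 1/m lives here
    dW2 = dZ2 @ A1.T                          # eq. (A)
    db2 = dZ2.sum(axis=1, keepdims=True)      # eq. (B)
    dA1 = W2.T @ dZ2                          # eq. (D)
    dZ1 = dA1 * A1 * (1.0 - A1)               # eq. (C), sigmoid derivative
    dW1 = dZ1 @ X.T
    db1 = dZ1.sum(axis=1, keepdims=True)
    return dW1, db1, dW2, db2

# Quick numerical gradient check on one weight.
rng = np.random.default_rng(0)
params = [rng.normal(size=s) for s in [(4, 3), (4, 1), (1, 4), (1, 1)]]
X, y = rng.normal(size=(3, 8)), rng.normal(size=(1, 8))
out, cache = forward(X, params)
dW1 = backward(y, params, cache)[0]
eps = 1e-6
params[0][0, 0] += eps
loss_plus = np.mean((forward(X, params)[0] - y) ** 2)
params[0][0, 0] -= 2 * eps
loss_minus = np.mean((forward(X, params)[0] - y) ** 2)
numeric = (loss_plus - loss_minus) / (2 * eps)
print(abs(numeric - dW1[0, 0]))  # should be ~0
```

The gradient check at the end confirms the analytic gradient computed from the cache agrees with a finite-difference estimate.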
2019/11/11
1,073
4,760
<issue_start>username_0: [![enter image description here](https://i.stack.imgur.com/qKVPw.jpg)](https://i.stack.imgur.com/qKVPw.jpg) The above model is what really helped me understand the implementation of convolutional neural networks, so based on that, I've got a tricky hypothesis that I want to find out more about, since actually testing it would involve developing an entirely new training model if the concept hasn't already been tried elsewhere. I've been building a machine learning project for image recognition and thought about how at certain stages we flatten the input after convoluting and max pooling, but it occurred to me that by flattening the data, we're fundamentally losing positional information. If you think about how real neurons process information in clusters, it seems obvious that the proximity of biological neurons is of great significance, rather than thinking of them as flat layers. By designing a training model that takes neuron proximity into account when deciding the structure by which connections between neurons are formed, positional information could be utilized and kept relevant, and it seems that this would improve network effectiveness. Edit: for clarification, I made an image representing the concept I'm asking about: [![enter image description here](https://i.stack.imgur.com/wWPO2.png)](https://i.stack.imgur.com/wWPO2.png) Basically: Pixels 1 and 4 are related to each other and that's very important information. Yes, we can train our neural network to know those relationships, but that's 12 unique relationships in just a 3x3 pixel grid that our training process needs to successfully teach the network to value, whereas a model that takes the proximity of neurons into consideration, like the real-world brain, would maintain the importance of those relationships, since neurons connect more readily to others in proximity. 
My question is: Does anyone know of white papers / experiments closely related to the concept I'm hypothesizing? Why would or wouldn't that be a fundamentally better model?<issue_comment>username_1: I have had similar thoughts about neural networks before. Convolution layers are layers of two-dimensional nodes effectively passing along the spatial data, so why don't we use two-dimensional hidden layers to receive information out of them? I'm sure someone has used this type of implementation before; I believe the papers below are using this. Part of the point of neural networks is that the weights are trained to optimize finding the best solution, so regardless of the spatial information it learns to 'focus'/increase weight on locations that are associated with deciding the solution. Think of the problem where your neural network examines an image and outputs true or false. Training images are True if the center is red and one of the corners is blue, or if the center is blue and one of the corners is red. Flattening the layers or not should have basically no effect on this model. In other circumstances, like object detection or labeling outlines, yes, I believe not flattening will benefit the model. With that said, flattening the data does not erase spatial relationships; each layer will still be trained to detect the spatial information that gives a correct answer. The flattened layers just won't have the benefit of neighbors when the layers are one-dimensional instead of two. In a CNN with multi-class detection as the task, you could allow each class to have its own CNN-like hidden layers that narrow to a decision node and decide whether they match that class or not. Imagine a palm-tree shape where the trunk is the image convolutions and each leaf on top is a set of two-dimensional hidden layers that narrow to an output layer. 
[Multi-dimensional NN](https://link.springer.com/chapter/10.1007/978-3-540-74690-4_56) and [Three dimensional Neural Network](https://ieeexplore.ieee.org/abstract/document/80212) I know I spoke in a lot of abstraction, so if any part doesn't make sense, I'll make an edit to clarify. Upvotes: 2 <issue_comment>username_2: Read up on Fully Convolutional Networks (FCNs). There are a lot of papers on the subject; the first was "Fully Convolutional Networks for Semantic Segmentation" by Long. The idea is quite close to what you describe - preserve spatial locality in the layers. In an FCN there is no fully connected layer. Instead, there is average pooling on top of the last low-resolution/high-channel layer. The effect is as if you had several fully connected layers centered on different locations, with the end result produced by their weighted voting. A pleasant side effect of FCNs is that they work on any spatial image size (bigger than the receptive field) - the image size is not coded into the network. Upvotes: 2
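username_2's point that FCNs work on any spatial size, because average pooling rather than flattening sits before the classifier, can be illustrated with a toy NumPy sketch (all shapes and weights here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W_cls = rng.normal(size=(8, 3))         # hypothetical classifier: 8 channels -> 3 classes
W_fc = rng.normal(size=(4 * 4 * 8, 3))  # dense weights tied to a flattened 4x4x8 map

def head_flatten(feat):
    # Flattening hard-codes one spatial size into the dense layer.
    return feat.reshape(-1) @ W_fc

def head_gap(feat):
    # Global average pooling collapses H and W, keeping only channels,
    # so the same classifier weights work for any spatial size.
    return feat.mean(axis=(0, 1)) @ W_cls

small = rng.normal(size=(4, 4, 8))      # feature map from a small input image
large = rng.normal(size=(9, 9, 8))      # feature map from a larger input image

print(head_gap(small).shape, head_gap(large).shape)  # (3,) (3,)
print(head_flatten(small).shape)                     # (3,)
try:
    head_flatten(large)                              # shape mismatch: 648 vs 128
except ValueError:
    print("flatten head fails on the larger feature map")
```

The pooled head produces class scores for both feature-map sizes with the same weights, while the flatten-based head only works at the one size it was built for.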
2019/11/12
1,726
7,878
<issue_start>username_0: What does AI software look like? What is the major difference between AI software and other software?<issue_comment>username_1: Code in AI is not in principle different from any other computer code. After all, you encode algorithms in a way that computers can process them. Having said that, there are a few points where your typical "AI Code" might be different: * A lot of (especially early) AI code was more research based and exploratory, so certain programming languages were favoured that were not mainstream for, say, business applications. For example, much work in early AI has been coded in Lisp, and probably not much in Fortran or Cobol, which were more suited to engineering or business. Special languages were developed to make it easy to program with symbols and logic (eg Prolog). * The emphasis was more on algorithms than clever/complex programming. If you look at the source code for [ELIZA](https://en.wikipedia.org/wiki/ELIZA) (there are multiple implementations in many different languages), it's really very simple. * Before the advent of neural networks and (statistical) machine learning, most AI programming was symbolic, so there hasn't been much emphasis on numerical computing. This changed as probabilities and fuzziness were increasingly used, but even if using general purpose languages there would be fewer numerical calculations. * Self-modifying code is inherently complex; while eg Lisp made no difference between code and data (at least not in the same way as eg C or Pascal), this would just complicate development without much gain. Perhaps in the early days this was necessary when computers had precious little memory and power and you had to work around those constraints. But these days I don't think anybody would use such techniques anymore. 
* As modern programming languages evolved, Lisp and Prolog (which were the dominant AI languages until probably 20 to 30 years ago) have been slowly replaced by eg Python; probably because it is easier to find programmers comfortable in an imperative paradigm rather than a functional one. In general, interpreted languages would be preferred over compiled ones due to speed of development, unless performance is important. The move to deep learning has of course shifted this a lot. Now the core processing is all numeric, so you would want languages that are better with calculations than symbol handling. Interpreted languages would now mainly make up the 'glue' code to interface between compiled modules, and be used for data pre-processing. So current AI code is probably not really that different from code used in scientific computing these days. There is of course still a difference between R&D and production code. You might explore a subject using an interpreted language, and then re-code your algorithm for production in a compiled language to gain better performance. This depends on how established the area is; there will for example be ready-made libraries available for neural networks or genetic algorithms which are well-established algorithms (where performance matters). In conclusion: I don't think AI code is any more complex than any other code. Of course, that's not very exciting to portray in a film, so artistic licence is used to make it more interesting. I guess self-modifying code also enables the machines to develop their own conscience and take over the world, which is even more gripping as a story element. However, given that a lot of behaviour is nowadays in the (training/model/configuration) data rather than the algorithm, this might even be more straight forward to modify. Note: this is a fairly simplified summary based on my own experience of working in AI; other people's views might vary, without either being 'wrong'. 
*Update 2021*: I now work at a company that extracts business information/events from news data on a large scale using NLP methods. And we're using Lisp... so it's still in active, commercial use in AI. Upvotes: 5 [selected_answer]<issue_comment>username_2: username_1's answer is quite good, but I think it can be expanded upon a bit. I think there are extra factors that could be popularly interpreted as making AI code difficult to read (as compared to other code): 1. AI code actually *is* more complex than most code that is written. When we work in AI, we often lose sight of this, but most code ever written does one of two things: turn data in one standard format into data in another standard format; display something to a user. Both of those are conceptually easy to understand. Neither of them is likely to require knowledge of mathematics. This is very unlike most code written in AI, where understanding what was written, and why, requires extensive knowledge *beyond* the knowledge needed to read and write computer programs. So, **reading AI code requires more knowledge of mathematics or of complicated AI-focused Algorithms**. 2. The "programs written by AIs" are really our *models* in the modern context. Our algorithms "program" a template model to make it work for a specific application. This is especially true if you think of it in the senses in which programming is also used in "linear programming", "quadratic programming", and even "dynamic programming". Our models really *are* hard to understand. Often even their creators cannot explain or characterize the model's behavior on specific inputs without running the model. The reason for this is that our **models do not represent simple enough concepts that humans can easily understand or simplify them**. 3. Self-modifying code is rare, [but does exist](https://en.wikipedia.org/wiki/Genetic_programming) within AI. 
However, as with other AI-generated models, AI-generated code tends to be comparatively difficult for humans to interpret, because (unlike most human-generated code), **it is not written with the intention that humans are going to try to read and understand it**. There actually are some efforts to generate code that [conforms to human](https://blogs.oracle.com/meena/code-generation-using-lstm-long-short-term-memory-rnn-network) styles, but usually the code that is generated does not work well. Upvotes: 3 <issue_comment>username_3: This may be a much simpler explanation than you're looking for, but in [Machine Learning Zero to Hero](https://www.youtube.com/watch?v=VwVg9jCtqaU), Google engineer <NAME> summarized it in a way that I thought was brilliant. Paraphrasing from a presentation slide: > > In traditional programming, you input rules and data and the program outputs answers. In machine learning, you input data and answers and the program outputs rules. > > > There's an algebra-like symmetry to this. And the program doesn't even know what it's coming up with rules for. It just randomly evolves the rules until the data produces the correct answers. You can then take those rules, apply them to different data, and hopefully get correct answers. Upvotes: 3 <issue_comment>username_4: AI has been redefined recently to machine learning. All programming except machine learning (and we'll come back to this) is embodying human knowledge in terms a computer can follow. EG A text editor has user interface rules, user expectations, a contract with the OS that it has to follow. A programmer puts it all together. This applies to text editors, expert medical systems, banking software, accounting software (and the programmer needs to know accounting to program it). Machine learning is training software with data and outputs allowing it to determine the link between them. No human knowledge. Nor can it explain what it is doing. 
Of course, they actually work far better when human knowledge surrounds them as part of their data. An AI that routes incoming invoices, etc., works better when told where things should actually go (accounts payable). Upvotes: 2
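The "data and answers in, rules out" framing quoted above can be made concrete with a toy example (the temperature-conversion task is my own invention, not from the talk): hand-code the rule once, then recover the same rule from data with a least-squares fit.

```python
import numpy as np

# Traditional programming: rules + data -> answers.
def fahrenheit_rule(c):
    return c * 9.0 / 5.0 + 32.0

# Machine learning: data + answers -> rules.
celsius = np.array([-40.0, 0.0, 20.0, 37.0, 100.0])
answers = fahrenheit_rule(celsius)                # the known outputs
slope, intercept = np.polyfit(celsius, answers, 1)
print(slope, intercept)  # recovers ~1.8 and ~32.0
```

The fit "discovers" the hand-written rule from examples alone; the program never saw the formula, only data and answers.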
2019/11/13
564
2,372
<issue_start>username_0: In the field of adversarial machine learning, machine learning models are vulnerable to attacks on both the test and training data sets. However, how does the attacker get access to these datasets? How do these datasets get manipulated/tampered with?<issue_comment>username_1: They don't have access to the original training or test dataset. Machine learning environments are built on the premise of a benign environment. The models are trained on real data (real inputs). When someone sends a made-up input (a fake input), it is very easy to fool the model. This is used, for example, in image recognition. Imagine a photograph of a panda. The model may correctly identify this photograph as a panda. With knowledge of the model, you can now alter some pixels in the photograph. To the human eye, the photographs will appear exactly the same, but the model can be fooled into believing the photograph is actually of a gibbon. This is all done after the training of the model and doesn't require the original datasets. For more info, visit this site: <https://medium.com/@ml.at.berkeley/tricking-neural-networks-create-your-own-adversarial-examples-a61eb7620fd8> Upvotes: 1 <issue_comment>username_2: In adversarial machine learning, someone (a program or a human) attempts to fool an existing model with a malicious input. The best human example would be an optical illusion. The human brain's model for image processing starts outputting wrong information when looking at an optical illusion. So in the end we see the wrong colour, shape, etc. In this case, the optical illusion would be considered the malicious input. We can trick the human brain’s model through images created with trial and error. So, if you just have the trained model at hand, you don’t have to know the data it has been trained with. You just need to be able to input a value to the model and get the output. 
Upvotes: 0 <issue_comment>username_3: We can manipulate a model's test data set if the machine learning model takes user input and uses it to resample the test data set. The actual training dataset of the ML model does not get manipulated, but if we figure out the ML model through an exploratory attack (sending a lot of queries to the ML model to find out its nature), we can generate a training dataset like the one that was built into the original ML model. Upvotes: 1 [selected_answer]
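The panda/gibbon effect described above can be sketched with a toy linear model: nudge every input feature slightly against the sign of the corresponding weight (the idea behind FGSM-style attacks), and the prediction flips even though no single feature changes much. The model and numbers are made up for illustration; note that no access to training data is needed, only the model's weights.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)       # weights of a (hypothetical) trained linear classifier
x = rng.normal(size=100)       # a "real" input
if w @ x < 0:                  # make sure it starts out classified positive
    x = -x

score = w @ x                               # > 0: classified positive
eps = 1.1 * score / np.abs(w).sum()         # just enough per-feature budget to flip
x_adv = x - eps * np.sign(w)                # move every feature slightly against w

print(score, w @ x_adv, eps)  # positive score, negative score, small per-feature eps
```

Each feature moves by only `eps`, yet the accumulated effect across all features flips the classification; this is the linear-model intuition behind why tiny pixel changes can turn a "panda" into a "gibbon".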
2019/11/14
602
2,535
<issue_start>username_0: I am solving many sequence-to-sequence prediction problems using RNN/LSTM. What type of evaluation metrics can be used for sequence prediction problems? One metric is the mean squared error (MSE) that we can give as a parameter during the training model. Currently, the accuracy of my sequence-to-sequence problems is very low. What are other ways through which we can compare the performance of our models?
2019/11/16
1,816
7,283
<issue_start>username_0: In the following paragraph from the book [Automated Machine Learning: Methods, Systems, Challenges](https://link.springer.com/book/10.1007%2F978-3-030-05318-5) (by <NAME> et al.) > > In this section we first give a brief introduction to Bayesian optimization, present alternative surrogate models used in it, describe extensions to conditional and constrained configuration spaces, and then discuss several important applications to hyperparameter optimization. > > > What is an "alternative surrogate model"? What exactly does "alternative" mean?<issue_comment>username_1: A surrogate model is a simplified model. It is a mapping $y\_S=f\_S(x)$ that approximates the original model $y=f(x)$, in a given domain, reasonably well. Source: [Engineering Design via Surrogate Modelling: A Practical Guide](https://onlinelibrary.wiley.com/doi/book/10.1002/9780470770801) In the context of Bayesian optimization, one wants to optimize a function $y=f(x)$ which is expensive (very time-consuming) to evaluate; therefore, one optimizes the surrogate model $y\_S=f\_S(x)$, which is cheaper (faster) to evaluate. Upvotes: 3 <issue_comment>username_2: What is [Bayesian optimization](http://krasserm.github.io/2018/03/21/bayesian-optimization/)? --------------------------------------------------------------------------------------------- ### Introduction [Bayesian optimization (BO)](https://en.wikipedia.org/wiki/Bayesian_optimization) is an optimization technique used to model an **unknown** ([usually continuous](https://arxiv.org/pdf/1807.02811.pdf)) function $f: \mathbb{R}^d \rightarrow Y$, where [typically $d \leq 20$](https://arxiv.org/pdf/1807.02811.pdf), so it can be used to solve regression and classification problems, where you want to find an approximation of $f$. In this sense, BO is similar to the usual approach of training a neural network with gradient descent combined with the back-propagation algorithm, in order to optimize an objective function. 
However, BO is particularly suited for regression or classification problems where **the unknown function $f$ is expensive to evaluate** (that is, given the input $\mathbf{x} \in \mathbb{R}^d$, the computation of $f(x) \in Y$ takes a lot of time or, in general, resources). For example, when doing hyper-parameter tuning, we usually need to first train the model with the new hyper-parameters before evaluating the specific configuration of hyper-parameters, but this usually takes a lot of time (hours, days or even months), especially when you are training deep neural networks with big datasets. Moreover, BO does not involve the computation of gradients and [it usually assumes that $f$ lacks properties such as concavity or linearity](https://arxiv.org/pdf/1807.02811.pdf). ### How does Bayesian optimization work? There are three main concepts in BO * the **surrogate model**, which models an unknown function, * a method for **statistical inference**, which is used to update the surrogate model, and * the **acquisition function**, which is used to guide the statistical inference and thus it is used to update the surrogate model The surrogate model is usually a [Gaussian process](https://distill.pub/2019/visual-exploration-gaussian-processes), which is just a fancy name to denote a [collection of random variables such that the joint distribution of those random variables is a multivariate *Gaussian probability distribution*](https://en.wikipedia.org/wiki/Gaussian_process) (hence the name *Gaussian* process). Therefore, in BO, we often use a Gaussian probability distribution (the surrogate model) to model the possible functions that are consistent with the data. In other words, given that we do not know $f$, rather than finding the usual *point estimate* (or maximum likelihood estimate), like in the usual case of supervised learning mentioned above, we maintain a Gaussian probability distribution that describes our uncertainty about the unknown $f$. 
The method of statistical inference is often just an iterative application of the Bayes rule (hence the name *Bayesian* optimization), where you want to find the posterior, given a prior, a likelihood and the evidence. In BO, you usually place a prior on $f$, which is a multivariate Gaussian distribution, then you use the Bayes rule to find the posterior distribution of $f$ given the data. What is the data in this case? In BO, the data are the outputs of $f$ evaluated at certain points of the domain of $f$. The acquisition function is used to choose these points of the domain of $f$, based on the computed posterior distribution. In other words, based on the current uncertainty about $f$ (the posterior), the acquisition function attempts to cleverly choose points of the domain of $f$, $\mathbf{x} \in \mathbb{R}^d$, which will be used to find an updated posterior. Why do we need the acquisition function? Why can't we simply evaluate $f$ at random domain points? Given that $f$ is expensive to evaluate, we need a clever way to choose the points where we want to evaluate $f$. More specifically, we want to evaluate $f$ where we are more uncertain about it. There are several acquisition functions, such as [*expected improvement, knowledge-gradient, entropy search*, and *predictive entropy search*](https://arxiv.org/pdf/1807.02811.pdf), so there are different ways of choosing the points of the domain of $f$ where we want to evaluate it to update the posterior, each of which deals with the exploration-exploitation dilemma differently. ### What can Bayesian optimization be used for? BO can be used for tuning hyper-parameters (also called hyper-parameter optimisation) of machine learning models, such as neural networks, but [it has also been used to solve other problems](https://arxiv.org/pdf/1807.02811.pdf). What is an alternative surrogate model? 
--------------------------------------- In the book [Automated Machine Learning: Methods, Systems, Challenges](https://link.springer.com/content/pdf/10.1007%2F978-3-030-05318-5.pdf) (by <NAME> et al.) that you are quoting, the authors say that the commonly used surrogate model *Gaussian process* scales cubically in the number of data points, so sparse Gaussian processes are often used. Moreover, Gaussian processes also scale badly with the number of dimensions. In [section 1.3.2.2.](https://link.springer.com/content/pdf/10.1007%2F978-3-030-05318-5.pdf), the authors describe some [**alternative**](https://www.merriam-webster.com/dictionary/alternative) surrogate models to the Gaussian processes, for example, alternatives that use neural networks or random forests. Upvotes: 4 [selected_answer]<issue_comment>username_3: Recently, I've been thinking about this question as well. After reading several papers, I finally came up with some thoughts about the surrogate model. In FEM (the finite element method), we try to find a weak form to approximate the strong form so that we can solve the weak form analytically. (Weak form: approximation equation; strong form: the PDE in the real world.) In my opinion, the surrogate model can be regarded as a 'weak form'. There are many methods that can form a surrogate model. And if we use an NN model as the surrogate model, the training process is equivalent to 'solving analytically'. Upvotes: 0
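The pieces described above — a GP surrogate, Bayesian updating, and an acquisition function — can be assembled into a minimal 1-D sketch. This toy implementation uses an RBF-kernel GP and an upper-confidence-bound (UCB) acquisition, a simpler alternative to the expected-improvement rule mentioned earlier; the kernel length-scale, candidate grid, and "expensive" objective are all arbitrary choices for illustration.

```python
import numpy as np

def rbf(a, b, ls=0.15):
    """RBF kernel matrix between 1-D point sets a and b."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_cand, X, y, noise=1e-6):
    """GP posterior mean and std at candidate points, given observations (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(x_cand, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)  # rbf(x, x) = 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

def f(x):                       # stand-in for the "expensive" objective
    return -(x - 0.6) ** 2

x_cand = np.linspace(0.0, 1.0, 201)   # candidate domain points
X = np.array([0.05, 0.5, 0.95])       # a few initial evaluations
y = f(X)

for _ in range(10):
    mu, sigma = gp_posterior(x_cand, X, y)
    ucb = mu + 2.0 * sigma            # acquisition: mean + exploration bonus
    x_next = x_cand[np.argmax(ucb)]   # evaluate f where UCB is highest
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

print(X[np.argmax(y)], y.max())  # best point found, near the true maximum x = 0.6
```

Each loop iteration is one round of "update the surrogate's posterior, then let the acquisition function pick where to spend the next expensive evaluation" — the core of the BO procedure described above.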
2019/11/16
599
2,512
<issue_start>username_0: I have a large set of data points describing mappings of binary vectors to real-valued outputs. I am using TensorFlow, and would like to train a model to predict these relationships. I used four hidden layers with 500 neurons in each layer, and sigmoidal activation functions in each layer. The network appears to be unable to learn, and has high loss even on the training data. What might cause this to happen? Is there something wrong with the design of my network?<issue_comment>username_1: When training a neural network, you need to scale your dataset in order to avoid slowing down or preventing effective learning. Try normalizing your output. This [tutorial](https://machinelearningmastery.com/how-to-scale-data-for-long-short-term-memory-networks-in-python/) might help. Upvotes: 1 [selected_answer]<issue_comment>username_2: Your description suggests a likely problem here: it looks like you are training a very deep neural network with sigmoidal activation functions at every layer. The sigmoid has the property that its derivative (S\*(1-S)) will be extremely small when the activation function's value is close to 0 or close to 1. In fact, the largest it can be is 0.25. The backpropagation algorithm, which is used to train a neural network, will propagate an error signal backwards. At each layer, the error signal will be multiplied by, among other things, the derivative of the activation function. It is therefore the case that by the 4th layer your signal is *at most* $0.25^4 = \frac{1}{256}$ the size that it was at the start of the network. In fact, it is likely much smaller than this. With a smaller signal, your learning rates at the bottom of the network will effectively be much smaller than the learning rates at the top, which will make it very difficult to pick a learning rate that is effective overall. 
This problem is known as [the vanishing gradient](https://adventuresinmachinelearning.com/vanishing-gradient-problem-tensorflow/). To fix this, if you want to use a deep architecture, consider using an activation function that does not suffer from a vanishing gradient. The [Rectified Linear](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) activation function, used in so-called "ReLU" units, is a non-linear activation that does not have a vanishing gradient. It is common to use ReLUs for the earlier layers in a network, and a sigmoid at the output layer, if you need outputs to be bounded between 0 and 1. Upvotes: 2
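The $0.25^4 = \frac{1}{256}$ bound from the answer is easy to check numerically, along with the contrast to ReLU's derivative:

```python
import numpy as np

z = np.linspace(-10.0, 10.0, 10001)
s = 1.0 / (1.0 + np.exp(-z))
ds = s * (1.0 - s)                 # sigmoid derivative, maximal at z = 0

print(ds.max())                    # 0.25
print(ds.max() ** 4)               # 0.00390625 = 1/256: best-case 4-layer attenuation
relu_grad = (z > 0).astype(float)  # ReLU derivative: exactly 1 for z > 0
print(relu_grad.max() ** 4)        # 1.0 -- no attenuation through depth
```

Even in the best case the sigmoid shrinks the backpropagated signal by a factor of 4 per layer, while ReLU passes it through unchanged wherever the unit is active.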
2019/11/17
850
3,569
<issue_start>username_0: I want to prevent my model from overfitting. I think that k-fold cross-validation (because it trains and validates each time with different subsets of the data) may be more effective than splitting the dataset into training and test datasets to prevent overfitting, but a colleague (who has little experience in ML) says that, to prevent overfitting, the 70/30% split performs better than k-fold cross-validation. In my opinion, k-fold cross-validation provides a reliable method to test the model performance. Is k-fold cross-validation more effective than splitting the dataset into training and test datasets to prevent overfitting? I am not concerned with computational resources.<issue_comment>username_1: **Purely in terms of overfitting**, and assuming you train both for equal amounts of time, 70/30 is probably better, but performance is not going to be very good. Not training on 30% of the data will make both training and test results equally bad (in my opinion). But it won't overfit, that is for sure. Cross-validation (you have in mind 90/10, I assume) will take a long time, so each fold won't have enough time to train, and it might overfit more compared to 70/30; but as it is going to see all training samples, 90% at a time, there is a good chance it will train better. So, at the end of the day, it will overfit more but perform better. If you are asking which is better overall, performance and overfitting combined, I say it depends on the size of your dataset. If you have millions of samples in it, you can even use a 98/1/1 split for training, testing and validation and [still be OK](https://www.coursera.org/learn/deep-neural-network/lecture/cxG1s/train-dev-test-sets). Edit: Thinking a little more about it, even if time is not an issue the situation will roughly be the same. But you will know the performance of the model on new data with higher certainty with cross-validation. 
As a rule of thumb, when training time is not an issue, use split method if you have more data than you can use in your model and cross-validation if not. I would suggest handling overfitting by some other means. Upvotes: 0 <issue_comment>username_3: K-fold cross-validation is probably preferred in terms of completeness and generalization: you ensure that the system has seen the complete dataset for training. However, in deep learning this is often not feasible due to time and power constraints. They can both be used, and there is not one *better* than the other. It really depends on the specific case, the size of the dataset and the time and hardware available. Note that overfitting can be (partially) remedied by things such as dropout. To be fair: it is fine to have a discussion about this with your colleagues, but as so often there is no one correct answer. If you *really* want proof, you can test it out and compare them. But performance-wise (i.e. the model's predictive power), the difference will be small. Upvotes: 3 [selected_answer]<issue_comment>username_4: I think this decision will be different from case to case. For example, when deep learning networks are used in new architectures such as using a pre-trained network to extract features and using machine learning classifiers to classify data. You can no longer use the fold method here!! Because the network is no longer fine-tuning! In the 15-15-70 method, we can use more number of runs to reduce the variance. In this way, the problem can be solved according to the requested example. Upvotes: 0
2019/11/17
596
2,461
<issue_start>username_0: In simple words, what does end-to-end training mean, in the context of deep learning?<issue_comment>username_1: This is relevant when you have two or more neural networks serving as components of a larger architecture. Training this architecture in an end-to-end manner means simultaneously training all components (i.e. training it as a single network). The best example I can think of is image captioning architectures. These usually comprise two networks: a CNN whose role is to extract features from the input images and an RNN that accepts the CNN's features and generates the output captions. ![](https://shenxiaohai.me/2018/10/22/pytorch-tutorial-advanced-04/model.png) You have two options for training: 1. First, train the CNN for some arbitrary task (e.g. image classification) in the hope that it learns how to extract features. Then use the CNN to extract features from the input images and use those as inputs to train the RNN. This procedure trains the two components in two **completely separate phases**. 2. Treat the whole architecture as a single network and backpropagate the gradients to the CNN so that it can also be trained. This procedure trains the two components **simultaneously**. This is what we call **end-to-end** training. Upvotes: 2 <issue_comment>username_2: Another explanation of deep learning as an end-to-end framework is that, in deep learning, pre-processing or feature-extraction steps are not necessary. So it uses only a single processing step, which is to train the deep learning model. In traditional machine learning methods, separate feature-extraction steps are usually required. [![enter image description here](https://i.stack.imgur.com/RtqsD.jpg)](https://i.stack.imgur.com/RtqsD.jpg) For example, in image classification, a deep learning framework like a CNN can receive a raw image and be trained to classify it directly.
If we didn't use deep learning, we would need to extract features in several separate steps, like edge detection, corner detection, color histograms, etc. You can also watch [<NAME>'s explanation here](https://www.coursera.org/lecture/machine-learning-projects/what-is-end-to-end-deep-learning-k0Klk). Upvotes: 3 [selected_answer]<issue_comment>username_3: End-to-end means deep learning is the only thing that is used. Many people have doubts about its viability, though; I certainly do. I wouldn't trust an end-to-end DL-based self-driving car. Upvotes: 0
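The difference between option 1 (separate phases) and option 2 (end-to-end) in the first answer comes down to whether the loss gradient flows back through the first component. Below is a deliberately tiny NumPy sketch of the end-to-end case, where a single backward pass updates both components; the linear/ReLU "feature extractor" and "head" are toy stand-ins for the CNN and RNN, and the data is made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two components: a "feature extractor" (the CNN's
# role) and a "head" (the RNN's role), trained jointly by letting the
# gradient flow from the head's loss back into the extractor.
W1 = rng.normal(size=(4, 8)) * 0.5   # extractor weights
W2 = rng.normal(size=(8, 1)) * 0.5   # head weights

X = rng.normal(size=(64, 4))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

def forward(X):
    h = np.maximum(X @ W1, 0.0)            # extractor: ReLU features
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))    # head: sigmoid prediction
    return h, p

losses = []
lr = 0.5
for _ in range(200):
    h, p = forward(X)
    losses.append(float(np.mean((p - y) ** 2)))
    # Backpropagate end-to-end: the head's error updates W2 *and* W1.
    dp = 2 * (p - y) * p * (1 - p) / len(X)
    dW2 = h.T @ dp
    dh = dp @ W2.T
    dh[h <= 0] = 0.0                       # ReLU gradient mask
    dW1 = X.T @ dh
    W2 -= lr * dW2
    W1 -= lr * dW1
```

In the two-phase alternative, `W1` would be frozen (trained earlier on some pretext task) and only the `dW2` update would be applied.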
2019/11/17
1,133
4,390
<issue_start>username_0: I'm studying for a master's degree and my final project is going to be about convolutional neural networks. I have read a lot of books and took Stanford's convolutional networks course, but I need more. Are there books or papers on the details of convolutional neural networks (in particular, the convolutional layer)?<issue_comment>username_1: <NAME>'s work is always inspiring, and not as technical as one would expect. He has several papers on CNNs on his [website](https://colah.github.io/). In particular, check the series titled "Convolutional Neural Networks" with four papers on the topic. Upvotes: 1 <issue_comment>username_2: I'm not sure if this is what you are looking for, but I find Goodfellow's book a pretty good resource; specifically, Part 2, Chapter 9 deals with convolutional neural networks: <https://www.deeplearningbook.org/> 'Pattern Recognition and Machine Learning' by Bishop might also be useful: it contains a section (5.5.5, pg. 267 onwards) as well as an exercise, and a general discussion of neural networks in image recognition. If you edit your question to add a bit more detail, we can offer better answers; for example, what is it about the convolutional layer that you want to know? How it's implemented? If you are looking for a more basic introduction to convolutional layers, I would also suggest: [A Comprehensive Guide to Convolutional Neural Networks — the ELI5 way](https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53) gives a pretty general overview, starting with the difference between CNNs and ANNs, and explains why CNNs are superior to ANNs (for certain problems). It also gives some details about how the convolution actually works.
[Demystifying the transpose convolution](https://towardsdatascience.com/transpose-convolution-77818e55a123) explains the transpose convolution operation in the context of how a traditional convolution works; this may not be relevant if you are strictly using CNNs and not transpose-CNNs. [Understanding of Convolutional Neural Network (CNN) — Deep Learning](https://medium.com/@RaghavPrabhu/understanding-of-convolutional-neural-network-cnn-deep-learning-99760835f148) is quite similar to the "A Comprehensive..." link above, but it also includes information about filtering and shows the effect that different filters have on an image, which is certainly very important to an understanding of why we use CNNs. [Building a Convolutional Neural Network (CNN) in Keras](https://towardsdatascience.com/building-a-convolutional-neural-network-cnn-in-keras-329fbbadc5f5) (or one of the thousand other similar pages) is pretty good for just starting out and building your own CNN classifier. You can also check out examples from Keras, e.g. [CIFAR10 CNN](https://keras.io/examples/cifar10_cnn/), but these tend to give you very little information about *why* they designed the network the way that they did. If, on the other hand, you are looking for some more advanced resources, here is one that springs to mind: [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by He et al. deals with a major advance in image recognition, using Residual Networks (ResNet). This type of network has become pretty popular, so I highly recommend giving it a read. Upvotes: 0 <issue_comment>username_3: [Chapter 9](https://www.deeplearningbook.org/contents/convnets.html) of the book [Deep Learning](https://www.deeplearningbook.org) (2016), by Goodfellow et al., describes the convolutional (neural) network (CNN), its main operations (namely, convolution and pooling) and properties (such as parameter sharing).
There's also the article [From Convolution to Neural Network](http://gregorygundersen.com/blog/2017/02/24/cnns/), which first introduces the mathematical operation *convolution* and then describes its connection with signal processing (where images can be viewed as 2D signals) and, finally, describes the CNN. Upvotes: 1 [selected_answer]<issue_comment>username_4: You can look at the paper [Gradient-Based Learning Applied to Document Recognition](http://vision.stanford.edu/cs598_spring07/papers/Lecun98.pdf) (1998) by <NAME> et al., which reviews and compares various methods applied to handwritten character recognition and shows that CNNs outperform all other methods. Also, I suggest Andrew Ng's CNN videos. Upvotes: 0
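As a complement to the references above, the core operation of the convolutional layer is small enough to write out directly. Here is a minimal NumPy sketch of a "valid" 2D convolution (implemented as cross-correlation, the convention most deep learning libraries use), with an illustrative edge-detecting kernel:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2D convolution (cross-correlation, as in most CNN libraries):
    slide the kernel over the image and take elementwise-product sums."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
vertical_edge = np.array([[1.0, -1.0],
                          [1.0, -1.0]])  # responds to left-right intensity changes
feature_map = conv2d_valid(image, vertical_edge)
```

Each output cell is the sum of an elementwise product between the kernel and one image patch; a convolutional layer learns the kernel values and applies many such kernels in parallel, sharing the same weights across all patch positions.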
2019/11/18
612
2,197
<issue_start>username_0: Kaggle is limited to only supervised learning problems. There used to be www.rl-competition.org but they've stopped. Is there anything else I can do other than locally trying out different algorithms for various RL problems?<issue_comment>username_1: OpenAI has [leaderboards](https://github.com/openai/gym/wiki/Leaderboard) for their gym-environments, if you want to compete with other people on runtime and efficiency. Upvotes: 2 <issue_comment>username_2: AICrowd has numerous challenges in the domain, with some very interesting challenges running currently. Here is a short list: * <https://www.aicrowd.com/challenges?&categories=reinforcement-learning> (Complete list with RL tag) * <https://www.aicrowd.com/challenges/neurips-2020-procgen-competition> * <https://www.aicrowd.com/challenges/neurips-2020-flatland-challenge> * <https://www.aicrowd.com/challenges/neurips-2019-minerl-competition> Hope this helps! Upvotes: 3 <issue_comment>username_3: There's a list of ongoing and past RL competitions [here](https://github.com/seungjaeryanlee/awesome-rl-competitions). The ongoing competitions according to that list are * [AWS DeepRacer League](https://aws.amazon.com/deepracer/league/) * [Connect X](https://www.kaggle.com/c/connectx) * [GOSEEK Challenge](https://github.com/MIT-TESSE/goseek-challenge) Upvotes: 3 <issue_comment>username_4: Kaggle recently started adding 'Simulation' competitions, which are well-suited for reinforcement learning. The first competition that's live (no prizes) is [ConnectX](https://www.kaggle.com/c/connectx), like a generalised Connect Four. The first competition with prize money is likely to be the next iteration of TwoSigma's Halite. 
There's a page for it here, but it hasn't been launched yet: <https://www.kaggle.com/c/halite/overview> I created a site that lists [ongoing machine learning competitions including Reinforcement Learning competitions](https://mlcontests.com/) - you can also sign up to the email list in case you want to get emails (roughly monthly) when new competitions launch. As of right now (May 2020) there are a few live RL competitions on there - the KDD cup, and AWS DeepRacer. Upvotes: 2
2019/11/18
1,170
4,582
<issue_start>username_0: From many blogs and this one <https://web.archive.org/web/20160308070346/http://mcts.ai/about/index.html> we know that the MCTS algorithm has 4 steps. > > 1. Selection: Starting at root node R, recursively select optimal child nodes until a leaf node L is reached. > > > What does leaf node L mean here? I thought it should be a node representing the terminal state of the game, or, in other words, one which ends the game. If L is not a terminal node (one end state of the game), how do we decide that the selection step stops on node L? In general algorithmic terms, a leaf node is one that does not have any children. > > 2. Expansion: If L is not a terminal node (i.e. it does not end the game) then create one or more child nodes and select one C. > > > From this description I realise that my previous thought was obviously incorrect. Then, if L is not a terminal node, it implies that L should have children, so why not continue finding a child of L in the "Selection" step? Do we have the list of L's children at this step? From the description of this step itself, when do we create one child node, and when do we need to create more than one child node? Based on what rule/policy do we select node C? > > 3. Simulation: Run a simulated playout from C until a result is achieved. > > > Because of the confusion from the 1st question, I totally cannot understand why we need to simulate the game. I thought that, through the selection step, we reach a terminal node, and the game should end at node L on this path. We would not even need to do "Expansion", because node L is the terminal node. 4. Backpropagation: Update the current move sequence with the simulation result. Fine. Last question: where did you get the answers to these questions? Thank you<issue_comment>username_1: Imagine a game with a very clear first move, such as a game where choosing to go first if you win a coin toss brings a clear and obvious advantage.
In this situation, standard MCTS does little exploration down the side of the tree that branches at the "win toss > let opponent start" step, as the basic simulations of the rest of the game at this split quickly show the large gain you get when always starting when you win the coin toss. As a result, you would end up with a tree with very little expansion on the side of "win the toss > put your opponent in", as every simulation step you do from even the most senior nodes ends with much worse expected outcome values than the alternatives on the other side of the tree, where you make the correct move of always playing first. These nodes on the side of letting your opponent start have *huge* potential subtrees (as the whole game would still need to be played out if your opponent started), but would have very little searching down them. As a result, on this side of the tree, you would have many leaf nodes with large (but as yet unexplored, outside of the basic, early simulations down that side) subtrees that you *could* search if you modified the exploration vs exploitation balance of the algorithm. As a basic example, take the 0/3 node at the far right on level one of the wiki example below, which would get much less attention than the much more promising 7/10 and 3/8 nodes, despite having potentially many subsequent children it *could* explore. If you took this node as your L node, you would expand its children that you have not yet searched, and thus find out more about why this side of the tree is bad and update your now more granular probabilities accordingly, just as it does for the 3/3 node here: [![enter image description here](https://i.stack.imgur.com/A5vw0.png)](https://i.stack.imgur.com/A5vw0.png) Upvotes: 3 [selected_answer]<issue_comment>username_2: I also want to answer my own question after watching the video that @username_1 posted: youtube.com/watch?v=UXW2yZndl7U . Still, thanks for @username_1's answer, which is pretty helpful. > > The definition of "leaf" node.
> > > The key point for this question is `which tree is the host/owner of a "leaf" node`. Through the "Expansion" step, we are actually creating a tree with MCTS. The tree that owns a "leaf" node should be the one that we are building, not the tree of game states in our head (which is perhaps too big to fit in our head; the tree of game states does not actually exist). Then we can understand that a "leaf" node is one which does not have any children in the tree that we are building. Once we have the answer to this question, the other questions can be answered automatically. Upvotes: 0
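To make the four steps and the "leaf = node of the tree we are building" point concrete, here is a small self-contained Python sketch of UCT-style MCTS for a toy game ("take 1 or 2 counters; whoever takes the last counter wins"). The game and all names are illustrative, not the exact code behind the linked article:

```python
import math
import random

class Node:
    def __init__(self, counters, player_just_moved, parent=None, move=None):
        self.counters = counters                  # counters left on the table
        self.player_just_moved = player_just_moved
        self.parent, self.move = parent, move
        self.children = []
        self.untried = [m for m in (1, 2) if m <= counters]
        self.visits, self.wins = 0, 0.0           # wins for player_just_moved

def uct_select(node, c=1.4):
    # Pick the child maximising win rate + exploration bonus (UCB1).
    return max(node.children, key=lambda ch:
               ch.wins / ch.visits + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts_best_move(counters, iterations=3000, seed=1):
    rng = random.Random(seed)
    root = Node(counters, player_just_moved=1)    # player 0 moves first
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal.
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: if this leaf of *our* tree has untried moves, add one child.
        if node.untried:
            m = rng.choice(node.untried)
            node.untried.remove(m)
            child = Node(node.counters - m, 1 - node.player_just_moved, node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout; whoever takes the last counter wins.
        left = node.counters
        player = 1 - node.player_just_moved
        winner = node.player_just_moved           # correct if node is terminal
        while left > 0:
            left -= rng.choice([m for m in (1, 2) if m <= left])
            winner, player = player, 1 - player
        # 4. Backpropagation: update statistics on the path back to the root.
        while node is not None:
            node.visits += 1
            node.wins += 1.0 if winner == node.player_just_moved else 0.0
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

From 4 counters, taking 1 leaves the opponent the losing position 3, and the search concentrates its visits on that move. Note that expansion creates exactly one child per iteration, and simulation is needed precisely because the selected leaf is usually *not* a terminal game state.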
2019/11/19
1,153
4,959
<issue_start>username_0: In Sutton and Barto's RL book, the reward hypothesis is stated as > > that all of what we mean by goals and purposes can be well thought of as the maximization of the expected value of the cumulative sum of a received scalar signal (called reward) > > > Are there examples of tasks where the goals and purposes cannot be well thought of as the maximization of the expected value of the cumulative sum of a received scalar signal? All I can think of are tasks with subjective rewards, like "writing good music", but I am not convinced because maybe this is actually definable (perhaps by some super-intelligent alien) and we just aren't smart enough yet. Thus, I'm especially interested in counterexamples that **logically** or **provably** fail the hypothesis.<issue_comment>username_1: The book sets this hypothesis up by laying out a few assumptions: > > In reinforcement learning, the purpose or goal of the agent is formulated in terms of a special signal called the reward, passing from the environment to the agent. At each time step, the reward is a simple number. > > > We could think about what counterexamples to those assumptions might be: 1. The reward signal originates internally, instead of originating from the environment. (e.g. meditation, or abstract introspection) 2. The signal is not received every time step, or isn't necessarily expected to be received at all. (e.g. seeking of transcendent experiences) What might be common for these counterexamples is that the reinforcement learning mechanism itself undergoes spontaneous change. A signal that would have been positive before the spontaneous change might now be negative. The reward landscape itself might be completely different. From the agent's perspective, it might be impossible to evaluate what changed. The agent might have a 'subconscious' secondary algorithm that introduces changes in the learning algorithm itself, in a way that's decoupled from any reward-defined behavior.
Upvotes: -1 <issue_comment>username_2: The closest counterexamples I can think of are cases where reward shaping is required to learn a good policy but ends up having unintended consequences. Reward shaping is usually used in cases where we want to encourage a particular behavior, or when the reward is sparse, or when capturing exactly what you want is not straightforward or is infeasible. But it is not good practice to rely too much on it, as it can have unintended consequences. A simple example of this is described here: <https://openai.com/blog/faulty-reward-functions/>. Upvotes: 0 <issue_comment>username_3: What if a scalar reward is insufficient, or it's unclear how to collapse a multi-dimensional reward to a single dimension? For example, for someone eating a burger, both taste and cost are important. Agents may prioritize taste and cost differently, so it's not clear how to aggregate the two. It is also not clear how a subjective categorical taste value can be combined with a numerical cost. Upvotes: 2 <issue_comment>username_4: I believe that there is no clear answer to your question. It essentially boils down to whether [you are a reductionist](https://en.wikipedia.org/wiki/Mathematical_universe_hypothesis) – whether you believe that quantitative measurements can truly do justice to the complexity of the real world, and that a framework such as expectation maximization can losslessly capture what we care about as humans in the performing of tasks. From a non-reductionist perspective, one would be aware that almost any mathematical representation of complex real-world goals will necessarily be a proxy rather than the true goal (as many goals are not mathematically formalizable, such as what we perceive as "good music" or "meaning"), and thus the reward hypothesis is at best an approximation.
Based on this, a non-reductionist's reward hypothesis could be rephrased as: > > that all of what we mean by goals and purposes can be ~~well thought of~~ **approximately operationalized (albeit at a certain domain-dependent loss)** as the maximization of the expected value of the cumulative sum of a received scalar signal (called reward) > > > Clearly the original (stricter) version of the reward hypothesis does apply to some cases, such as purely quantitative domains (e.g. maximizing $ earned on the stock market, or maximizing score in a video game), but as soon as the problem involves enough "complexity" (e.g. humans, or wherever you think the boundary should be), a non-reductionist would say that mathematics is clearly not fit for the task of truly capturing the intended goal. More info on the reward hypothesis (as presented by [<NAME>](https://www.littmania.com/) himself) is [here](https://www.coursera.org/lecture/fundamentals-of-reinforcement-learning/michael-littman-the-reward-hypothesis-q6x0e). I would have added it as a comment to the question, but I do not have enough reputation. Upvotes: 2
2019/11/19
431
1,634
<issue_start>username_0: I have stereo pairs (left, right) of images of concrete cracks. I want to measure the length of the crack from those image pairs. Which neural network is appropriate for measuring object dimensions from stereo images? Note: I am required to use NN-based techniques only.<issue_comment>username_1: If you have stereo pairs, and you can identify the objects in the scene, you do not need a neural network: you can just use [triangulation](https://en.wikipedia.org/wiki/Triangulation). If you need to identify which objects in the scene are the same, you have an [image segmentation](https://www.analyticsvidhya.com/blog/2019/04/introduction-image-segmentation-techniques-python/) problem. Depending on your problem and the amount of data you have access to, you may be able to use simple techniques like clustering-based segmentation, or you may be able to use NN-based techniques, like [Mask R-CNN](https://arxiv.org/abs/1703.06870). Upvotes: 2 <issue_comment>username_2: Are the images taken from a constant distance? If yes, you'd first of all need to scale the images to the same dimensions. For a few images, say 100-500 (the more the better), you'd need to label the dataset with proper scaling. Once labeled, use it to train a [**CNN**](https://github.com/gsurma/image_classifier) (although training a [**ResNet**](https://github.com/ry/tensorflow-resnet) would be best). Once trained with decent accuracy, test it on the rest of your dataset. I did something similar for one of my projects; check it out if you want to [here](https://github.com/Ar9av/Feature-Extraction-from-Facial-Images). Upvotes: 2
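To expand on the triangulation route mentioned in the first answer: for a rectified stereo pair with focal length $f$ (in pixels), baseline $B$, and disparity $d$ (in pixels), depth is $Z = fB/d$, and back-projecting the two endpoints of a detected crack gives its metric length. A minimal sketch with made-up calibration numbers (in practice, an NN such as a segmentation model would supply the endpoint pixels and a stereo matcher the disparities):

```python
import math

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified stereo triangulation: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def back_project(f_px, baseline_m, u, v, disparity_px, cx, cy):
    """Turn a pixel (u, v) with known disparity into 3D camera coordinates."""
    z = depth_from_disparity(f_px, baseline_m, disparity_px)
    return ((u - cx) * z / f_px, (v - cy) * z / f_px, z)

# Hypothetical calibration and crack endpoints detected in the left image.
f, B, cx, cy = 700.0, 0.12, 320.0, 240.0
p1 = back_project(f, B, 340.0, 250.0, 7.0, cx, cy)
p2 = back_project(f, B, 420.0, 300.0, 7.0, cx, cy)
crack_length_m = math.dist(p1, p2)
```

Measuring the length then reduces to the Euclidean distance between the two back-projected endpoints (or a sum of such distances along a sampled crack polyline for curved cracks).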
2019/11/19
1,006
3,145
<issue_start>username_0: In [Sutton & Barto's "Reinforcement Learning: An Introduction", 2nd edition](http://incompleteideas.net/book/the-book-2nd.html), page 199, they describe the on-policy distribution for episodic tasks in the following box: [![enter image description here](https://i.stack.imgur.com/lf1Q3.png)](https://i.stack.imgur.com/lf1Q3.png) I don't understand how this can be done without taking the length of the episode into account. Suppose a task has 10 states, has probability 1 of starting at the first state, then moves to any state uniformly until the episode terminates. If the episode has 100 time steps, then the probability of the first state is proportional to $1 + 100\times 1/10$; if it has $1000$ time steps, it will be proportional to $1 + 1000\times 1/10$. However, the formula given would make it proportional to $1 + 1/10$ in both cases. What am I missing?<issue_comment>username_1: Let's first assume that there is only one action, so that $\pi(a|s) = 1$ for every state-action pair, which simplifies the discussion. Now let's consider a case with 100 time steps, 10 states, a uniform transition distribution, and the starting distribution concentrated on $s\_0$, i.e. $h(s\_0) = 1$. The result would be \begin{align} \eta(s\_0) &= 1 + \sum\_{i = 0}^9 \eta(s\_i) \cdot p(s\_0|s\_i) =\\ &= 1 + \sum\_{i = 0}^9 10 \cdot \frac{1}{10} = 11 \end{align} Now let's consider a case with 1000 time steps, where the other settings are the same as in the first case. \begin{align} \eta(s\_0) &= 1 + \sum\_{i = 0}^{9} \eta(s\_i) \cdot p(s\_0|s\_i) =\\ &= 1 + \sum\_{i = 0}^{9} 100 \cdot \frac{1}{10} = 101 \end{align} In the first case \begin{equation} \mu(s\_0) = \frac{11}{9\cdot 10 + 11} = 0.1089 \end{equation} and in the second case you have \begin{equation} \mu(s\_0) = \frac{101}{9\cdot 100 + 101} = 0.1009 \end{equation} so it looks like you are correct that $\mu(s)$ depends on the length of the episode, but they didn't really say that it doesn't.
Obviously, as the length of the episode increases, so will the number of times a certain state is visited, so you could say that the formula implicitly depends on the number of time steps. If $h(s\_i)$ is equal for every state, then the results would be the same in both cases regardless of the number of time steps. Also, as the number of possible states grows very large, as it usually is in real problems, the results would approach each other. Upvotes: 3 [selected_answer]<issue_comment>username_2: You are missing that the expression $$\sum\_{s'} \eta(s')$$ is already a count of the expected length of an episode, and is used in the denominator to scale $\mu(s)$ such that $\sum\_{s} \mu(s) = 1$. So the length of the episode *is* taken into account in the formula. In practice you don't need to know $\mu(s)$; it can be left unresolved as a theoretical construct. What you care about for the theory to work is that the samples that you train with are drawn with the same frequency - this happens automatically if you work with an on-policy algorithm. So the theory can hide the maths that you might need to do in order to determine actual values for $\eta(s)$ or $\mu(s)$. Upvotes: 2
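The episode-length dependence discussed in both answers is easy to check numerically. Below is a small pure-Python sketch (using exact expected visit counts rather than the first answer's rounded ones) for the example of starting in $s\_0$ with probability 1 and then jumping to a uniformly random one of the 10 states at each remaining time step:

```python
def on_policy_distribution(n_states=10, T=100):
    """eta(s): exact expected visit counts over a T-step episode;
    mu(s) is eta(s) normalised by the total count."""
    eta = [0.0] * n_states
    eta[0] += 1.0                        # h(s0) = 1: episodes always start in s0
    for _ in range(T - 1):               # every later step is uniform over states
        for s in range(n_states):
            eta[s] += 1.0 / n_states
    total = sum(eta)                     # equals T, the expected episode length
    return [e / total for e in eta]

mu_100 = on_policy_distribution(T=100)
mu_1000 = on_policy_distribution(T=1000)
```

Here `mu_100[0]` is 10.9/100 = 0.109 while `mu_1000[0]` is 100.9/1000 = 0.1009: the start state's extra weight is washed out as episodes get longer, and the normalisation by the total visit count is exactly where the episode length enters the formula.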
2019/11/20
2,943
12,955
<issue_start>username_0: Essentially, AI is created by human minds, so is the intelligence & creativity of algorithms properly an extension of human intelligence & creativity, rather than something independent? I assume that intelligence does not necessarily require creativity; however, creativity can result from machine learning. (A simple example is AlphaGo discovering novel strategies.)<issue_comment>username_1: I think no, it isn't. The reason I would say no is that, in order for it to be an extension of our intelligence & creativity, it must be limited by it. This, I believe, isn't the case, however. We are capable of creating an AI that is smarter than ourselves (say at Go or Chess, without cheating and checking every possible move), and so it is not bound by our own intelligence. I would liken it to creating a child. Just because you gave birth to Einstein doesn't mean he's an extension of your intelligence. (This is of course pretty rudimentary, as it's *very* debatable whether it's reasonable to liken humans to AI.) Of course, this is a philosophical question, so it's hard to really answer yes or no. Upvotes: 1 <issue_comment>username_2: No, it isn't. AI is essentially human intelligence combined with computing power to achieve tasks that a human alone cannot achieve in the time period that a programmed machine can. To give an example: a human can identify a pattern in a data set of, say, 1000 records. However, if that same logic needs to be applied to a data set of a billion records, a human would take ages to do it. But a machine can do that in seconds if the human gives the right instructions to the machine on how to do it. Hope that helps. Upvotes: 1 <issue_comment>username_3: I believe AI is, at least in certain ways, *both* an extension of human intelligence & creativity, and something independent as well. Note that people didn't design airplanes to try to fly like birds do.
Although planes use the same principles of aerodynamics that birds use to fly, we've adapted how those physics principles are applied to accommodate what we have to work with, i.e., metal, by having things like propellers, jet engines, fixed wings (initially, although later we also had helicopter rotor blades), etc. In a similar fashion, we have adapted a few things we've learned about how human minds & intelligence work, with artificial neural networks being a prime example. However, even with just our fairly limited understanding, we've implemented neural networks differently, e.g., by which [activation functions](https://en.wikipedia.org/wiki/Activation_function) are used. Although we are learning more about how our brains work through neuroscience research, there's still so much we don't yet know. Nonetheless, I believe one of the biggest differences overall between our minds & AI is that our general intelligence comes mostly from massive parallel processing, to a much greater extent than even high-end GPUs, or even most supercomputers, can deliver, while artificial intelligence generally depends instead a lot more on the massive speed of calculations available on our modern computer chips. It's this learning, adapting & extending what we know about how we think & create, in combination with the mostly independent boost of using the advantages of computer chips (mostly their ability to do very fast computations), that has allowed AI to advance as far as it has so far. Nobody, including myself, can be sure of where & how the next major advances in AI will occur, but I believe it'll likely be a combination of learning & using what we learn about how we mentally operate, along with advances in computer-related knowledge & technology (e.g., new algorithm techniques, more & better parallel processing, quantum computers with many simultaneous [qubits](https://en.wikipedia.org/wiki/Qubit) operating, etc.).
Upvotes: 1 <issue_comment>username_4: I would say: **no**, it's not just an extension of human intelligence. Actually, I would argue there's no such thing as distinctly human intelligence. At least it's not clearly distinguishable from intelligence in general. If you say AI is just a set of instructions that are made by humans, you might be right. But what if this set of instructions contains instructions on how to change instructions? That would mean that the AI knows how to learn. What if you include instructions on how to learn to learn to learn to learn (...) to change instructions? At what point would you say that this intelligence is still an extension of human intelligence? If you argue like this, then you must also put "human intelligence" in the same set as every animal intelligence, because it all originates from some sort of intelligence that is based on physical brain activity. In fact, when a child is born, it is not more intelligent than most animal species. The only thing that enhances its intelligence from time to time (and lets it do things like speaking or using its hands as tools) is the ability to **learn**. I don't see why an AI hasn't got the potential to increase its intelligence to a level where one would say: "This is not an extension of human intelligence anymore, this is something independent". Upvotes: 1 <issue_comment>username_5: This is an old question, going back at least to 1950. It is one of the original objections to AI that Turing considers and attempts to refute in his seminal 1950 paper [Computing Machinery and Intelligence](https://academic.oup.com/mind/article/LIX/236/433/986238). Turing actually attributes this objection to [Lady Lovelace](https://en.wikipedia.org/wiki/Ada_Lovelace), apparently quoted by another author. In Turing's paper, this is objection #6: *Lady Lovelace's Objection*, in section 6 of the paper. The objection is concisely stated as > > The Analytical Engine has no pretensions to originate anything.
It can > do whatever we know how to order it to perform. > > > where "[The Analytical Engine](https://en.wikipedia.org/wiki/Analytical_Engine)" was an early design for an all-mechanical general purpose computer. Turing offers two replies to this objection. First, he reminds us that computer programs have bugs. That is, they often do things their creators did not intend. This is unsatisfying to many readers, but it does address the objection: programs may act in ways that are unrelated to our intelligence, and in doing so, might display unexpected intelligent behaviors. In this sense, their intelligence would not be an intentional product of human intelligence. Turing's stronger objection comes from an anticipation that *learning* would eventually move to the center of AI research (keep in mind again, this is written in 1950, well before any reasonable learning algorithms had been proposed!). Turing uses the example of a robotic child in Section 7 of the paper (Learning Machines) to elaborate on his point. A child is created by its parents, but, endowed with the ability to learn, quickly begins to display behaviors its parents do not anticipate or intend. No one would suggest that a person's intelligence is "really just" the intelligence of their parents, even though their parents created them, and are *partially* responsible for that intelligence. Likewise, Turing's proposed robotic child is created by a parent, but, endowed with learning, quickly begins to engage in behaviors the parent does not anticipate or intend. Therefore, machine intelligence need not be reduced to just human intelligence. I think that if Turing were alive today, he would agree that we are now beginning to move into the era of learning machines he anticipated. Some of our programs now engage in intelligent behaviors that we do not anticipate or understand. 
For example, self-driving cars now [kill or maim people](https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg), because they have learned behaviors their creators did not intend or anticipate, perhaps not unlike a reckless teenage driver. Upvotes: 2 <issue_comment>username_6: No, the way human minds think is in no way related to the way an AI thinks, although you could say that AI is a much simpler form that represents how the brain processes information. For the human brain to think, sense, and act, there are billions of connections in various cortices of the brain that process information in different ways. Treating brain information as electrical signals, you could say that different cortices of the brain show changes in power in specific frequency bands of the brain signal, which can be decoded as planning, preparation, thoughts, vision, movement, creativity, attentiveness and much more. So, to answer your question, AI could be considered an extremely minute extension of human intelligence. It's like comparing our solar system to the Milky Way, although the comparison may be a bit too large, as we are slowly becoming able to understand the underlying processes, build fast processors mimicking brain processing, and build power-efficient hardware to run humongous neural nets. In the near future your statement may hold true. Upvotes: 1 <issue_comment>username_7: The answer in part seems to depend on what you mean by "human intelligence". If you mean behavior that would usually be regarded as requiring intelligence were a human to produce it, then various types of machines can be intelligent. Such "intelligent" machines presumably include player pianos. Playing the piano and producing a melody is widely regarded as requiring human intelligence when humans do it. Player pianos produce the same sort of behavior, but without a human touching a key. Hence (so the argument goes) player pianos are intelligent.
But if "intelligence" includes having the inner process of understanding, say understanding the meanings of symbols of written language, then at least according to philosopher <NAME>, purely symbol manipulating devices such as digital computers could never be intelligent. This is because symbols in themselves don't contain or indicate their meanings, and all the computing machine gets and manipulates is symbols in themselves. However, there does seem to be a sense in which the question "Is artificial intelligence really just human intelligence" is true of computers. This is when the behavior of the machine is caused by human intelligence. A human writes a program that defines, mandates, the behavior of the machine (just like a human designs the mechanism and paper roll of a player piano). This design takes human intelligence. The machine has no intrinsic, or innate, intelligence. It's just an automaton mindlessly following the causal sequence created by the intelligent human designer. Now if computers are purely symbol-manipulating devices, and if Searle is right, AI is doomed, at lest as long as its development platform is the digital computer (and no other machine is available or seems on the horizon). However, are computers purely symbol-manipulating devices? If not, there may be a way they can acquire meanings, or knowledge, and, for instance, learn languages. If computers can receive (including from digital sensors) and manipulate more than just symbols, they may be able to acquire the inner structures and execute the inner processes needed for human-like understanding. That is, they might be able to acquire knowledge by way of sensing the environment (as humans do). A human might write the program that facilitates acquisition of such knowledge, but what the knowledge is about would be derived from the sensed environment not from a human mind. 
But here we're talking about "intelligence" defined over inner processes and structures, not or not just external behavior. If you define human intelligence as external behavior, as the Turing test does and as AI researchers often do, then music boxes with pirouetting figurines, player pianos, and programmed computers all have human-like intelligence, and artificial intelligence as it exists today is really just the same sort of thing as human intelligence. Upvotes: 0 <issue_comment>username_8: Right, AI is an extension of human creativity and the implied limitation is that it inherits bias through the specific choice of which features to consider. Given a set of features it is then far more able at calculating which combination of features best helps explain the relationship being considered than is the human mind. Humans are too distracted to think to the depth that AI and machine learning can. But that extreme focus is not intelligence. One of the issues that prevents the human mind from thinking at comparable depth is the need to massage the set of features that might apply; we are constantly reviewing features, adding in new and eliminating those that do not contribute. Creativity is openness to admitting other seemingly unrelated features and hoping for emergence, and managing to persist in being creative when emergence is delayed. Upvotes: 1
2019/11/20
3,246
14,022
<issue_start>username_0: I am training an RL agent (specifically using the [PPO algorithm](https://arxiv.org/pdf/1707.06347.pdf)) on a game environment with 2 possible actions **left** or **right**. The actions can be taken with varying "force"; e.g. go **left** 17% or go **right** 69.3%. Currently, I have the agent output 21 actions - 10 for **left** (in 10% increments), 10 for **right** (in 10% increments) and 1 for stay in place (do nothing). In other words, there is a direct 1-1 mapping in 10% increments between the agent output and the force the agent uses to move in the environment. I am wondering what happens if, instead of outputting 21 possible actions, I change the action space to a binary output and obtain the action probabilities. The probabilities will have the form, say, [70, 30]. That is, go left with 70% probability and go right with 30% probability. Then I take these probabilities and put them through a non-linearity that translates to the actual action force taken; e.g. an output of 70% probability to go left may in fact translate to moving left with 63.8% force. The non-linear translation is not directly observed by the agent but will determine the resulting state, which is directly observed. I don't fully understand what the implications of doing this will be. Is there any argument that this would increase performance (rewards), as the agent does not need to learn direct action mappings, but rather just a binary probability output?<issue_comment>username_1: I think no, it isn't. The reason I would say no is that in order for it to be an extension of our intelligence & creativity, it must be limited by them. This, I believe, isn't the case, however. We are capable of creating an AI that is smarter than ourselves (say at Go or Chess, without cheating and checking every possible move), and so it is not bound by our own intelligence. I would liken it to creating a child. Just because you gave birth to Einstein doesn't mean he's an extension of your intelligence.
(This is of course pretty rudimentary, as it's *very* debatable as to whether it's reasonable to liken humans to AI). Of course, this is a philosophical question, so it's hard to really answer yes or no. Upvotes: 1 <issue_comment>username_2: No, it isn't. AI is essentially human intelligence with a combination of computing power to achieve tasks that a human alone cannot achieve in the time period that a programmed machine can. To give an example: a human can identify a pattern in a data set of, say, 1000 records. However, if that same logic needs to be applied to a data set of a billion records, a human would take ages to do it. But a machine can do that in seconds if the human gives the right instructions to the machine on how to do it. Hope that helps. Upvotes: 1 <issue_comment>username_3: I believe AI is, at least in certain ways, *both* an extension of human intelligence & creativity, and something independent as well. Note people didn't design airplanes to try to fly like birds do. Although planes use the same principles of aerodynamics that birds use to fly, we've adapted how those physics principles are applied to accommodate what we have to work with, i.e., metal, by having things like propellers, jet engines, fixed wings (initially, although later we also had helicopter rotor blades), etc. In a similar fashion, we have adapted a few things we've learned about how human minds & intelligence work, with artificial neural networks being a prime example. However, even with just our fairly limited understanding, we've implemented neural networks differently, e.g., by which [activation functions](https://en.wikipedia.org/wiki/Activation_function) are used. Although we are learning more about how our brains work through neuroscience research, there's still so much we don't yet know.
Nonetheless, I believe one of the biggest differences overall between our minds & AI is that our general intelligence comes mostly from massive parallel processing, to a much greater extent than even high-end GPUs, or most supercomputers, can deliver, while artificial intelligence instead depends a lot more on the massive speed of calculations available on our modern computer chips. It's this learning, adapting & extending what we know about how we think & create, in combination with the mostly independent boost of using the advantages of computer chips (mostly their ability to do very fast computations), that has allowed AI to advance as far as it has so far. Nobody, including myself, can be sure of where & how the next major advances in AI will occur, but I believe it'll likely be a combination of learning & using what we learn about how we mentally operate, along with advances in computer-related knowledge & technology (e.g., new algorithm techniques, more & better parallel processing, quantum computers with many simultaneous [qubits](https://en.wikipedia.org/wiki/Qubit) operating, etc.). Upvotes: 1 <issue_comment>username_4: I would say: **no**, it's not just an extension of human intelligence. Actually, I would argue there's nothing like human intelligence. At least it's not clearly distinguishable from intelligence in general. If you say AI is just a set of instructions that are made by humans, you might be right. But what if this set of instructions contains instructions on how to change instructions? That would mean that the AI knows how to learn. What if you include instructions on how to learn to learn to learn to learn (...) to change instructions? At what point would you say that this intelligence is still an extension of human intelligence?
If you argue like this, then you must also put "human intelligence" in one set together with every animal intelligence, because it all originates from some sort of intelligence that is based on physical brain activity. In fact, when a child is born, it is not more intelligent than most animal species. The only thing that enhances its intelligence from time to time (and enables it to do things like speak or use its hands as tools) is the ability to **learn**. I don't see why an AI hasn't got the potential to increase its intelligence to a level where one would say: "This is not an extension of human intelligence anymore, this is something independent". Upvotes: 1 <issue_comment>username_5: This is an old question, going back at least to 1950. It is one of the original objections to AI that Turing considers and attempts to refute in his seminal 1950 paper [Computing Machinery and Intelligence](https://academic.oup.com/mind/article/LIX/236/433/986238). Turing actually attributes this objection to [Lady Lovelace](https://en.wikipedia.org/wiki/Ada_Lovelace), apparently quoted by another author. In Turing's paper, this is objection #6: *Lady Lovelace's Objection*, in section 6 of the paper. The objection is concisely stated as > > The Analytical Engine has no pretensions to originate anything. It can > do whatever we know how to order it to perform. > > > where "[The Analytical Engine](https://en.wikipedia.org/wiki/Analytical_Engine)" was an early design for an all-mechanical general purpose computer. Turing offers two replies to this objection. First, he reminds us that computer programs have bugs. That is, they often do things their creators did not intend. This is unsatisfying to many readers, but it does address the objection: programs may act in ways that are unrelated to our intelligence, and in doing so, might display unexpected intelligent behaviors. In this sense, their intelligence would not be an intentional product of human intelligence.
Turing's stronger objection comes from an anticipation that *learning* would eventually move to the center of AI research (keep in mind again, this is written in 1950, well before any reasonable learning algorithms had been proposed!). Turing uses the example of a robotic child in Section 7 of the paper (Learning Machines) to elaborate on his point. A child is created by its parents, but, endowed with the ability to learn, quickly begins to display behaviors its parents do not anticipate or intend. No one would suggest that a person's intelligence is "really just" the intelligence of their parents, even though their parents created them, and are *partially* responsible for that intelligence. Likewise, Turing's proposed robotic child is created by a parent, but, endowed with learning, quickly begins to engage in behaviors the parent does not anticipate or intend. Therefore, machine intelligence need not be reduced to just human intelligence. I think that if Turing were alive today, he would agree that we are now beginning to move into the era of learning machines he anticipated. Some of our programs now engage in intelligent behaviors that we do not anticipate or understand. For example, self-driving cars now [kill or maim people](https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg), because they have learned behaviors their creators did not intend or anticipate, perhaps not unlike a reckless teenage driver. Upvotes: 2 <issue_comment>username_6: No, the way human minds think is in no way related to the way an AI thinks, although you could say that AI is a much simpler representation of how the brain processes information. For the human brain to think, sense, and act, there are billions of connections in various cortices of the brain that process information in different ways.
If talking about brain information as electrical signals, you could say that different cortices of the brain show changes in power in specific frequency bands of the brain signal, which can be decoded as planning, preparation, thoughts, visual, movement, creativity, attentiveness and much more. So, to answer your question, AI could be considered an extremely minute extension of human intelligence. It's like comparing our solar system to the Milky Way, although the comparison may be a bit too large, as we are slowly becoming able to understand the underlying processes and build fast processors mimicking brain processing and efficient power-consuming hardware tech to run humongous neural nets. In the near future your statement may hold true. Upvotes: 1 <issue_comment>username_7: The answer in part seems to depend on what you mean by "human intelligence". If you mean behavior that would usually be regarded as requiring intelligence were a human to produce it, then various types of machines can be intelligent. Such "intelligent" machines presumably include player pianos. Playing the piano and producing a melody is widely regarded as requiring human intelligence when humans do it. Player pianos produce the same sort of behavior, but without a human touching a key. Hence (so the argument goes) player pianos are intelligent. But if "intelligence" includes having the inner process of understanding, say understanding the meanings of symbols of written language, then at least according to philosopher <NAME>, purely symbol-manipulating devices such as digital computers could never be intelligent. This is because symbols in themselves don't contain or indicate their meanings, and all the computing machine gets and manipulates is symbols in themselves. However, there does seem to be a sense in which the claim "artificial intelligence is really just human intelligence" is true of computers. This is when the behavior of the machine is caused by human intelligence.
A human writes a program that defines, mandates, the behavior of the machine (just like a human designs the mechanism and paper roll of a player piano). This design takes human intelligence. The machine has no intrinsic, or innate, intelligence. It's just an automaton mindlessly following the causal sequence created by the intelligent human designer. Now if computers are purely symbol-manipulating devices, and if Searle is right, AI is doomed, at least as long as its development platform is the digital computer (and no other machine is available or seems on the horizon). However, are computers purely symbol-manipulating devices? If not, there may be a way they can acquire meanings, or knowledge, and, for instance, learn languages. If computers can receive (including from digital sensors) and manipulate more than just symbols, they may be able to acquire the inner structures and execute the inner processes needed for human-like understanding. That is, they might be able to acquire knowledge by way of sensing the environment (as humans do). A human might write the program that facilitates acquisition of such knowledge, but what the knowledge is about would be derived from the sensed environment, not from a human mind. But here we're talking about "intelligence" defined over inner processes and structures, not (or not just) external behavior. If you define human intelligence as external behavior, as the Turing test does and as AI researchers often do, then music boxes with pirouetting figurines, player pianos, and programmed computers all have human-like intelligence, and artificial intelligence as it exists today is really just the same sort of thing as human intelligence. Upvotes: 0 <issue_comment>username_8: Right, AI is an extension of human creativity, and the implied limitation is that it inherits bias through the specific choice of which features to consider.
Given a set of features, it is then far better than the human mind at calculating which combination of features best helps explain the relationship being considered. Humans are too distracted to think to the depth that AI and machine learning can. But that extreme focus is not intelligence. One of the issues that prevents the human mind from thinking at comparable depth is the need to massage the set of features that might apply; we are constantly reviewing features, adding in new ones and eliminating those that do not contribute. Creativity is openness to admitting other seemingly unrelated features and hoping for emergence, and managing to persist in being creative when emergence is delayed. Upvotes: 1
2019/11/20
1,411
5,509
<issue_start>username_0: I need an algorithm to [trace simple bitmaps](https://en.wikipedia.org/wiki/Image_tracing), which only contain paths with a given stroke width. **Is there any existing attempt to create a deep learning model which extracts vector paths from bitmaps?** It is obviously very easy to generate bitmaps from vector paths, so creating data for a machine learning algorithm is simple. The model could be trained by giving both the vector and bitmap representation. Once trained, it would be able to generate the vector paths from the given bitmap. This seems simple, but I could not find any work on this particular task. So, I suppose this problem is not fitted for current deep learning architectures, why? The goal is to trace this kind of image, which would be drawn by hand with a thick felt pen and scanned: ![Bitmap Image containing simple vector paths](https://i.stack.imgur.com/xZX3N.png) **So, is there a deep learning architecture fitted for this problem?** I believe this question could help me understand what is possible to do with deep learning and what is not, and why. Tracing bitmaps is a perfect example of converting sparse data to a dense abstract representation; I have the intuition one can learn a lot from this problem.<issue_comment>username_1: If we seek proven working source code to plug into a GPLv2-licence compatible solution, we should at least consider autotrace. Its [source code](https://sourceforge.net/projects/autotrace/files/AutoTrace/0.31.1/autotrace-0.31.1-1.dag.8.0.src.rpm/download) is open for review. It can be tested against the example images we have and, if it works fine, called by our GPLv2 software. We can even use the calling code in [Inkscape's plug-in image tracing implementation](https://gitlab.com/inkscape/inkscape/blob/master/src/trace/autotrace/inkscape-autotrace.cpp) as a good starting point for design and implementation of our calling program, whether it be C, C++, Java, Python, or ECMA (JS). 
The trace algorithm in Adobe Illustrator is comparable but is not open source. If we seek theory, there are several academic publications discussing some of it, the last being most aligned with machine learning ideology. I would not dismiss earlier work simply because it doesn't connect with the current machine learning idioms. Investigating what is fully implemented and successfully used by many follows a wise old business proverb: A bird in the hand is worth two in the bush. * [Potrace: a polygon-based tracing algorithm](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.159.5801&rep=rep1&type=pdf), <NAME>, 2003 * [Vector Representation of Binary Images Containing Halftone Dots](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.109.2128&rep=rep1&type=pdf), <NAME>, <NAME>, <NAME>, 2004 * [Testing AutoTrace: A Machine-learning Approach to Automated Tongue Contour Data Extraction](https://core.ac.uk/download/pdf/38053960.pdf), <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>, 2013 Many of the online drawing programs collect data. It would not be surprising if, behind the gracious give-away of online bandwidth, they are establishing a continuously improving data set for training a new breed of autotracers. None have published AI designs admitting as much, but they would not be legally obligated to do so, because a single input example cannot be determined from the autotrace service that could result from the training. Upvotes: 2 <issue_comment>username_2: Any luck or updates on this? I fully understand. Today, I can tell you that my research has led me here, and that bitmap tracing for simple lines is still very poor after decades, and I have been a graphic designer for about 20 years now. Anyway, I'm also a low- to mid-level developer, you could say, and for many years I have been thinking of different ways to help solve this problem.
I don't know, but can someone please advise me whether this method could work? Though it's incomplete, I'm hoping I'm on the right path and someone could add to it. Would it be effective to train a model on a dataset of bitmaps and their corresponding SVGs, where each SVG contains the maximum number of points and a given average % similarity to the bitmap? (This idea stems from the thought of training a diffuser or something similar.) Then train another model on the same bitmap dataset, but where the corresponding SVGs are simplified to the fewest paths while staying within the same average % similarity to the bitmap. That should simplify the paths while still looking like the original (obviously not guaranteed, because I'm not actually sure). Now, when you run this against a bitmap you want to vectorize, I think you could tile the bitmap, scan all the tiles, reproduce each tile's bitmap and corresponding SVG overlay with the model trained to produce the most points, then stitch the tiles back together and apply a final simplification using the model trained on the fewest points. As muddled as that all is, I'm hoping someone can understand it, and either tell me to drop it or help develop the idea further. If you need a visual representation of what I'm trying to say in order to understand it, I can draw up something and send it. This has really been a lifelong goal; I would honestly design the entire dataset just to shape the model to spec. Or am I just way off the mark here? Upvotes: -1
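As the original question points out, training data for this task is cheap to produce: render random vector paths to bitmaps and keep both representations. A minimal sketch of such a generator (all function names here are hypothetical; a real pipeline would render anti-aliased strokes with a proper stroke width, e.g. via Cairo, rather than this crude point sampling):

```python
import numpy as np

def rasterize_polyline(points, size=64):
    # Rasterize a polyline (list of (x, y) vertices) onto a binary bitmap
    # by sampling densely along each segment. This is a crude stand-in for
    # anti-aliased, stroke-width-aware rendering.
    bitmap = np.zeros((size, size), dtype=np.uint8)
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        n = int(max(abs(x1 - x0), abs(y1 - y0))) * 2 + 1
        for t in np.linspace(0.0, 1.0, n):
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            if 0 <= x < size and 0 <= y < size:
                bitmap[y, x] = 1
    return bitmap

def random_training_pair(rng, size=64, n_vertices=4):
    # Sample a random polyline and return a (vertices, bitmap) pair:
    # the vector "label" together with its rasterized "input".
    points = [tuple(rng.integers(0, size, 2)) for _ in range(n_vertices)]
    return points, rasterize_polyline(points, size)
```

A model could then be trained to regress the vertex list (or a path parameterization) from the bitmap, with unlimited fresh pairs generated on the fly.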
2019/11/20
536
2,080
<issue_start>username_0: I am looking to build a neural network that takes an input vector $\mathbf{x}$ and outputs a vector $\mathbf{y}$ such that $f(\mathbf{x}, \mathbf{y})$ is minimized, where $f$ is some function. The network will see many different $\mathbf{x}$ during training to adjust its weights and biases; then I will test the network by using the test set $\{\mathbf{x}\_1, \dots, \mathbf{x}\_n \}$ to calculate $\sum(f(\mathbf{x}\_1, \mathbf{y}), \dots, f(\mathbf{x}\_n, \mathbf{y}))$ to see if this sum is minimized. However, I have no labels for the output $\mathbf{y}$. The loss function I am trying to minimize is based on the input and output, instead of the output and label. I tried many standard Keras and TensorFlow loss functions, but they are unable to do the job. Any thoughts on how this might be achieved?<issue_comment>username_1: According to your description, you already know your function $f$ to be optimized. So you should use it directly instead of the standard loss functions. In this [other post](https://stackoverflow.com/questions/43818584/custom-loss-function-in-keras) there is an explanation of how to use $f$ as a custom loss function in Keras. Upvotes: 1 <issue_comment>username_2: From your question, it sounds like your only training data is $\{\mathbf{x}\_1, \dots, \mathbf{x}\_n\}$ and the network has to magically come up with values $\{\mathbf{y}\_1, \dots, \mathbf{y}\_n\}$ such that an unknown function is minimized. How do you plan to give feedback to the network during training? Your situation appears to be something like this: `X-->Model-->Y-->f(X,Y)` where X is being copied from the first layer to the last layer using a non-sequential architecture. The solution to this problem would be to add an extra layer to your network that implements f(X, Y). However, this will only be trainable using standard methods like gradient descent if f(X, Y) is differentiable.
If f(X, Y) is not differentiable, then you may need to use a different optimization method for learning the weights of the model and it may be more difficult. Particle swarm optimization is one possibility here. Upvotes: 0
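Both answers boil down to the same idea: because $f$ depends on the input and the model's output, it can serve directly as the loss, with no labels involved. A framework-free sketch under the assumption that $f$ is differentiable, using the hypothetical objective $f(\mathbf{x}, y) = (y - \sum\_i x\_i)^2$ and plain stochastic gradient descent on a single linear unit:

```python
import numpy as np

# Hypothetical differentiable objective: the network's scalar output y
# should minimize f(x, y) = (y - sum(x))^2. There are no labels; the
# "target" is implicit in f itself, which sees both input and output.
def f(x, y):
    return (y - x.sum()) ** 2

rng = np.random.default_rng(42)
w = rng.normal(size=3)   # weights of a single linear unit, y = w . x
lr = 0.01

for _ in range(2000):
    x = rng.normal(size=3)           # a fresh unlabeled input
    y = w @ x                        # the "network" output
    grad = 2.0 * (y - x.sum()) * x   # dL/dw via the chain rule
    w -= lr * grad

# w should end up close to the all-ones vector, so that y is close to sum(x).
```

In Keras the same idea would be expressed by writing $f$ as a custom loss (or via `add_loss`) so that the gradient of $f$ with respect to the weights drives training, exactly as the hand-derived gradient does here.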
2019/11/22
386
1,571
<issue_start>username_0: Problems I often face at work usually differ from tutorial or book-like examples, so I end up with code that works but is not elegant and takes too much time to write. I wanted to ask you if there are some publicly accessible examples or repositories of Python code that deal with the machine learning development and application process but were created in a real company or organisation to develop their real-life products or services? EDIT: What I am not thinking of are library or package repositories such as tensorflow. I would like to see some code of projects that, for example, use tensorflow to create some other product or service.<issue_comment>username_1: The best way is probably to Google it with "[org name] tensorflow github" and look at what you get. For instance I found: [Microsoft](https://github.com/microsoft/MMdnn) [Nvidia](https://github.com/NVIDIA/tensorflow-determinism) [Intel](https://github.com/tensorflow/ngraph-bridge) Upvotes: 1 <issue_comment>username_2: Find some open-source projects like [Arxiv sanity](https://github.com/karpathy/arxiv-sanity-preserver) by <NAME>, which finds ML-related papers from arxiv.org. You can find similar open-source applications. Upvotes: 0 <issue_comment>username_3: If you search [Papers with Code](https://paperswithcode.com/) for [`python "machine learning"`](https://paperswithcode.com/search?q_meta=&q=python+%22machine+learning%22) (or a more specific query) you will get numerous results. Note these will be mostly scientific applications or methods. Upvotes: 1
2019/11/24
524
1,916
<issue_start>username_0: For search algorithms with heuristic functions, the performance of heuristic functions is measured by the *effective branching factor* ${b^\*}$, which involves the total number of nodes expanded ${N}$ and the depth of the solution ${d}$. I'm not able to find out how different values of ${d}$ affect the performance keeping the same ${N}$. Put another way, why not use just ${N}$ as the performance measure instead of ${b^\*}$?<issue_comment>username_1: As you found, $N$ is the number of nodes that are expanded. The cost of expansion of each node is equal to the number of children of that node. Hence, we use $b^\*$ for each node. In other words, the total number of nodes that are involved in the expansion process is $N \times b^\*$. Upvotes: 1 <issue_comment>username_2: I also walked into that trap the first few times. The difference is the following: * $N$ is the number of **expanded** nodes * $b^\*$ is the effective branching factor + $b^\*$ depends on the depth $d$ of the goal and the number of **generated** nodes, let's call that $M$ + $b^\*$ is the solution to $M+1=1+b^\*+(b^\*)^2+(b^\*)^3+...+(b^\*)^d$ So, you could argue that instead of comparing $b\_1^\*$ and $b\_2^\*$ of two algorithms, you can also directly compare $M\_1$ and $M\_2$, because $b\_1^\*>b\_2^\*\Leftrightarrow M\_1>M\_2$. But you can imagine an algorithm $A\_2$ that **expands fewer nodes** than $A\_1$ (so $N\_1>N\_2$), but also different nodes, so that it **generates more nodes** (so $M\_1 < M\_2$). Since the cost is defined by the number of generated nodes, comparing $N$ might give the wrong result. The effective branching factor is more general than the number of generated nodes, because you can average $b^\*$ for one algorithm over many search problems, but averaging over the number of nodes (which might differ greatly) is not possible or rather nonsensical. Upvotes: 3 [selected_answer]
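Since $b^\*$ is only defined implicitly by the polynomial $M+1=1+b^\*+(b^\*)^2+...+(b^\*)^d$, it is usually found numerically. A small sketch (assuming $M \ge d$, so a root $b^\* \ge 1$ exists) that solves for it by bisection:

```python
def effective_branching_factor(m, d, tol=1e-9):
    # Solve m = b + b^2 + ... + b^d for b by bisection,
    # where m is the number of generated nodes (excluding the root)
    # and d is the depth of the solution. Assumes m >= d.
    def generated(b):
        return sum(b ** i for i in range(1, d + 1))

    lo, hi = 1.0, float(m)  # b* always lies between 1 and m
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if generated(mid) < m:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For example, a search that generates 14 nodes to find a solution at depth 3 has $b^\* = 2$, since $2 + 4 + 8 = 14$.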
2019/11/24
2,588
11,200
<issue_start>username_0: I am a programmer, but not in the field of AI. A question that constantly confuses me is: how can an AI be trained if we human beings are not telling it whether its calculation is correct? For example, news usually says something like "company A has a large human face database so that it can train its facial recognition program more efficiently". What the piece of news doesn't mention is whether a human engineer needs to tell the AI program whether each of the program's recognition results is accurate or not. Are there any engineers who are constantly telling an AI whether what it produced is correct or wrong? If not, how can an AI determine if the result it produces is correct or wrong?<issue_comment>username_1: By "company A has a large human face database so that it can train its facial recognition program more efficiently" the article probably means that there is a training dataset $S$ of the form $$ S = \{ (\mathbf{x}\_1, y\_1), \dots,(\mathbf{x}\_N, y\_N) \} $$ where $\mathbf{x}\_i$ is an image of the face of the $i$th human and $y\_i$ (which is often called a *label*, *class* or *target*) is e.g. the name of the $i$th human. So, the programmer provides a **supervisory signal** (the label) for the AI to learn. The programmer also specifies the function that determines the error the AI program is making, based on the answer of the AI model and $y\_i$. This way of learning is called **supervised learning** (SL). However, there are other ways of training an AI. For example, there is **unsupervised learning** (UL), where the AI needs to find patterns in the data by aggregating objects based on some similarity measure, which is specified by the programmer. There's also reinforcement learning (RL), where the programmer specifies only certain reinforcement signals, that is, the programmer tells the AI which moves or results are "good" and which ones are "bad" to achieve its goal, by giving to the AI, respectively, a positive or negative reward.
You can also combine these three approaches, and there are other variations. > > Are there any engineers who are constantly telling an AI whether what it produced is correct or wrong? > > > Yes, in the case of SL. In the case of RL, the programmer also needs to provide the reinforcement signal, but it doesn't need to explicitly tell the AI which action it needs to take. In UL, the programmer needs to specify the way the AI needs to aggregate the objects, so, in this case, the programmer is also involved in the learning process. Upvotes: 5 <issue_comment>username_2: Taking your example of the faces data, keep in mind that when the model is run on a new unseen image, the model can only return the already seen identity which emerges as the closest match. The result may be incorrect. The chances of mis-identification are much lower as the number of features incorporated increases. The input of the engineers lies at the level of the training data. Say we have a new photo of an individual that needs to be included in the model. The engineering task is now to morph that image to simulate different environments, angles of view, atmospheric conditions, lighting and so on, to provide a large number of data input cases, all of which will be "true", since the underlying features are all unchanged, as the images are based on the same individual. Then the model is recalculated using the additional data. Keep in mind too that adding a new set of data to an existing training set has the advantage that the parameters of the model are largely in the right ballpark already, and adding the new faces will make only small changes. Cross validation will show whether the addition has improved or spoiled the model. Upvotes: 2 <issue_comment>username_3: The trick with unsupervised learning is that the AI doesn't learn that something is a face or not; it just sees unnamed patterns that the researchers need to then name.
Let's say you feed it a dataset with one million pictures in order to train a facial recognition algorithm. After training, the AI will have found a few patterns in the pictures based on the parameters of each picture, such as color, lighting, topography, etc. However, without labels (supervised learning), the AI doesn't know what exactly it found, so a researcher then needs to label those patterns. You don't need a label to tell that a picture of a face is mostly different from a picture of a building. You need a label to tell you that one is a "face" and the other is a "building". Upvotes: 0 <issue_comment>username_4: I can't remember the researcher's name, but he specializes in psychology in Great Britain and has done a lot of work with machine learning and artificial intelligence. The project he was working on that I read about earlier this year was one where they tried to deduce how humans learn. They came up with the theory that we learn by making guesses about plausible and possible outcomes, and that this creates our expectations about reality. When we are wrong, depending on the degree, we are possibly surprised, shocked, or not affected at all. They are working on creating AI that does not need human intervention, but instead makes guesses about outcomes before it performs tasks, and then updates those expectations as it experiences more varying outcomes. Extremely interesting stuff, and definitely closer to how sentient beings gain experience and grow as individuals. Upvotes: 0 <issue_comment>username_5: > > how can an AI be trained if we human beings are not telling it whether its calculations are correct? > > > What you are looking for is called **self-supervised learning**.
<NAME>, one of the originators behind modern neural network systems, has suggested that [machines can reason usefully even in the absence of human-provided labels](https://www.simonsfoundation.org/event/could-machines-learn-like-humans/) simply by learning auxiliary tasks, the answers for which are already encoded in the data samples. Self-supervision has [already been successfully applied](https://arxiv.org/pdf/1902.06162.pdf) to a variety of tasks, showing improvement in multitask performance due to self-supervision. Unsupervised learning would in general be a subset of self-supervision. Self-supervision can be performed in a variety of ways. One of the most common is to use parts of the data as input and other parts as labels, and using the "input" subset of the data to predict the labels. **Supervised** learning looks like this: ``` model.fit(various_data, human_labels) ``` The human\_labels correspond to entries in various\_data, which we expect the model to predict. Meanwhile, **self-supervised** learning can look something like this: ``` model.fit(various_data[:,:500], various_data[:,500:]) ``` (Using Python array slice notation, some of the input data are used as training labels.) For example, a machine could use half of the pixels in an image of a handwritten digit to try to predict the missing pixels. This is a form of self-supervision: Since the machine knows which pixels belong together in the same sample, it can "automatically" produce its own labeled data from the input itself, simply by using some inputs as outputs. However, predicting pixels from other pixels is often not the desired task. So instead, a neural network is often **pretrained** using self-supervised or unsupervised learning techniques, and then subsequently trained on some amount of human-labeled data as a form of transfer learning. 
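As a minimal, invented sketch of such a pretext task: use one half of each data vector as the input and the other half as the automatically derived "label" (plain least squares stands in for a neural network here, and all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "images": 200 samples of 8 values each. The right half is a noisy
# linear function of the left half, so there is structure to discover.
left = rng.normal(size=(200, 4))
true_w = rng.normal(size=(4, 4))
right = left @ true_w + 0.01 * rng.normal(size=(200, 4))

# Self-supervised pretext task: the left half is the input and the right
# half is the "label" -- no human annotation was needed to create it.
w_hat, *_ = np.linalg.lstsq(left, right, rcond=None)

pred = left @ w_hat
mse = float(np.mean((pred - right) ** 2))
print(mse)  # small: close to the injected noise floor
```

The fitted `w_hat` plays the role of the pretrained representation; in the transfer-learning setting described next, it would be fine-tuned on a (much smaller) human-labeled set.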
What the summary of the hypothetical news article promises is that self-supervision made the learning more **efficient**, not that it outgrew the need for any kind of human intervention. This is exactly what we get from successful self-supervision in pretraining. In the best possible case, the machine learns to "recognize" each class of digit 0-9 but it still does not know how to ground its own internal labels to the human's labels. Then a human supplying the mapping between the machine's labels and the human-specified IDs would be the only step necessary to upgrade the self-supervised machine to one that is directly useful for digit recognition. There will always be a need for humans to train a machine via direct supervision in order for the machine to learn the intended task. In order to solve a specific problem, a **sufficient degree of supervision** is always required, and sufficient labels to reflect the intention must be provided. Upvotes: 2 <issue_comment>username_6: I think you're probably looking at this the wrong way around. A conventional, old-fashioned AI doesn't make a guess, then require confirmation as to whether that guess was right or wrong. Instead, (in the simplest case) it undergoes a one-off computationally intensive "training"/"learning" phase, during which you feed it an enormous number of correct answers (which are labelled as correct) and an even more enormous number of incorrect answers (which are labelled as incorrect). Using whatever learning mechanism it has at its disposal, it then identifies some underlying structure in the "corrects" that doesn't exist in the "incorrects". When, in the future, it encounters something new that seems to also exhibit this structure, then it will classify this as a "correct". It might do rather well, or it might do terribly. Once the one-off training phase is done, it's stuck with whatever capability it has. 
Let's say the company you mention is called Facebook and they have a feature that allows you to "tag" your friends in photos. Every tag a user adds is a human-provided label, so there is no need to pay engineers to create the largest labelled image database in human history in order to train your AI. Upvotes: 2 <issue_comment>username_7: What you are missing is what the news story doesn't mention and glosses over. When a news article says: > > company A has a large human face database so that it can train its facial recognition program more efficiently > > > What it really means is: > > company A has a large database of human faces, along with additional information such as the identity of the person each face belongs to, that was created by other humans, so that they can use this data set to train its facial recognition program > > > How training works is basically as follows: 1. You have a large database of correct (or almost entirely correct; ideally it should all be correct) information that you want to relate one to the other. For example, images of faces along with who each face belongs to. 2. You split this large database into several sets. 3. You use one set to train the AI. 4. After looping through the training set, you use one or more of the other sets to test the AI and check whether the training works. 5. If you've done this before, compare the performance of the current AI to the previous AI. Else go to 6. 6. Tweak some parameters of the AI to try to improve performance. 7. Go to 2 until you are satisfied with the performance of the AI. All the steps above are normally automated by scripts. The key here is that the original database has both the question you want to ask the AI (face) and the answer you want the AI to learn (person). Yes, humans are involved in training the AI, but the involvement happens earlier, at the database-gathering stage. Upvotes: 1
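The seven steps above can be sketched in miniature (a deliberately tiny, invented example with a one-parameter "model"; real pipelines automate the same loop with proper ML libraries):

```python
import random

random.seed(1)

# Step 1: a "database" of (question, answer) pairs -- here 1-D feature
# values labelled by whether they exceed an unknown boundary (0.6).
data = [(x, int(x > 0.6)) for x in (random.random() for _ in range(200))]

# Step 2: split the database into a training set and a held-out test set.
train, test = data[:150], data[150:]

def accuracy(threshold, dataset):
    """Fraction of examples the threshold 'model' classifies correctly."""
    return sum(int(x > threshold) == y for x, y in dataset) / len(dataset)

# Steps 3-7: try different "parameters" (candidate thresholds), keep the
# one that does best on the training set, then verify it on the test set.
best = max((t / 100 for t in range(100)), key=lambda t: accuracy(t, train))
print(best, accuracy(best, test))  # high accuracy on this noise-free task
```

The script never needs a human in the loop while it runs, but the labels in `data` play the role of the human-gathered answers from step 1.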
2019/11/24
<issue_start>username_0: What's the distinction between a learning algorithm $A$ and a hypothesis $f$? I'm looking for a few concrete examples, if possible. For example, would the decision tree and random forest be considered two different learning algorithms? Would a shallow neural network (**that ends up learning a linear function**) and a linear regression model, **both of which use gradient descent to learn parameters**, be considered different learning algorithms? Anyway, from what I understand, one way to vary the hypothesis $f$ would be to change the parameter values, maybe even the hyper-parameter values of, say, a decision tree. Are there other ways of varying $f$? And how can we vary $A$?<issue_comment>username_1: A hypothesis is a statement that suggests an as-yet-unproven explanation of a relationship between two or more phenomena that you intend to test. An agronomist thinks that more nitrogen on canola will always increase the crop output, $$Harvest = f(N),$$ or a meteorologist thinks he can show that the path of a hurricane over the ocean can be determined from knowledge of the sea temperature and the wind speed at an altitude of 1000 feet one minute before: $$D(t,0) = f(T(t-1,1000), S(t-1,1000))$$ Both hypotheses are pegs on which later steps are based; testing follows, with a conclusion as to whether the hypothesis can be rejected or not. Changing a hypothesis can be as simple as adding or subtracting arguments of the function, or changing the nature of the relationship, such as using the acceleration of the wind as opposed to its velocity. A "learning" algorithm describes how the parameters of a numeric model are changed in accordance with the delta rule, that is, what the learning rate is and whether momentum is to be applied. Random Forest and Decision Tree are "classification" algorithms.
They are clearly stepwise processes that proceed towards the goal of a model, but they start by specifying the shape that the model will take and place boundaries on what values the parameters may take. Both learning and classification algorithms specify a priori what shape the model will take and by doing so limit its relevance to particular problems. Upvotes: 0 <issue_comment>username_2: In [computational learning theory](http://eliassi.org/COLTSurveyArticle.pdf), a learning algorithm (or learner) $A$ is an algorithm that chooses a hypothesis (which is a function) $h: \mathcal{X} \rightarrow \mathcal{Y}$, where $\mathcal{X}$ is the input space and $\mathcal{Y}$ is the target space, from the hypothesis space $H$. For example, consider the task of image classification (e.g. [MNIST](http://yann.lecun.com/exdb/mnist/)). You can train, with gradient descent, a neural network to classify the images. In this case, gradient descent is the learner $A$, the space of all possible neural networks that gradient descent considers is the hypothesis space $H$ (so each combination of parameters of the neural network represents a specific hypothesis), $\mathcal{X}$ is the space of images that you want to classify, $\mathcal{Y}$ is the space of all possible classes and the final trained neural network is the hypothesis $h$ chosen by the learner $A$. > > For example, would the decision tree and random forest be considered two different learning algorithms? > > > The decision tree and random forest are not learning algorithms. A **specific** decision tree or random forest is a **hypothesis** (i.e. function of the form as defined above). In the context of decision trees, the [ID3 algorithm](https://en.wikipedia.org/wiki/ID3_algorithm) (a [decision tree algorithm](https://en.wikipedia.org/wiki/Decision_tree_learning) that can be used to construct the decision tree, i.e. the hypothesis), is an example of a **learning algorithm** (aka *learner*). 
The space of all trees that the learner considers is the **hypothesis space/class**. > > Would a shallow neural network (that ends up learning a linear function) and a linear regression model, both of which use gradient descent to learn parameters, be considered different learning algorithms? > > > The same can be said here. A specific neural network or linear regression model (i.e. a line) corresponds to a specific hypothesis. The set of all neural networks (or lines, in the case of linear regression) that you consider corresponds to the hypothesis class. > > Anyway, from what I understand, one way to vary the hypothesis $f$ would be to change the parameter values, maybe even the hyper-parameter values of, say, a decision tree. > > > If you consider a neural network (or decision tree) model, with $N$ parameters $\mathbf{\theta} = [\theta\_i, \dots \theta\_N]$, then a specific combination of these parameters corresponds to a specific hypothesis. If you change the values of these parameters, you also automatically change the hypothesis. If you change the hyperparameters (such as the number of neurons in a specific layer), however, you will be changing the hypothesis class, so the set of hypotheses that you consider. > > Are there other ways of varying $f$? > > > Off the top of my head, only by changing the parameters, you change the hypothesis. > > And how can we vary $A$? > > > Let's consider gradient descent as the learning algorithm. In this case, to change the learner, you could change, for example, the learning rate. Upvotes: 3 [selected_answer]
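The learner-vs-hypothesis distinction can be made concrete with a toy sketch (the data and numbers below are invented for illustration): the hypothesis class $H$ is the set of all lines $h(x) = wx + b$, and gradient descent is the learner $A$ that selects one particular $(w, b)$.

```python
import numpy as np

# Hypothesis class H: all lines h(x) = w*x + b; each (w, b) pair is one
# specific hypothesis.
def hypothesis(w, b):
    return lambda x: w * x + b

# Toy regression data: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 2 * x + 1 + 0.05 * rng.normal(size=100)

# Learner A: gradient descent on squared error. It *chooses* a hypothesis
# from H by settling on one particular (w, b).
w, b = 0.0, 0.0
lr = 0.1  # changing this changes the learner A, not the hypothesis class
for _ in range(500):
    pred = w * x + b
    w -= lr * np.mean(2 * (pred - y) * x)
    b -= lr * np.mean(2 * (pred - y))

h = hypothesis(w, b)  # the hypothesis h selected by the learner A
print(round(w, 2), round(b, 2))  # close to the true 2 and 1
```

Changing `lr` (or swapping in a different optimizer) varies $A$; changing the fitted `(w, b)` varies $h$; allowing quadratics instead of lines would vary the hypothesis class $H$.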
2019/11/25
<issue_start>username_0: I have a big dataset (28354359 rows) that has some blood values as features (11 features) and a label (outcome variable) that tells whether a patient has a virus caused by a neoplasm or not. The problem with my dataset is that 2% of the patients in my dataset have the virus and 98% do not. I am required to use the random forest algorithm. While my random forest model has a high accuracy score of 92%, the problem is that more than 90% of the patients that have the virus are predicted not to have the virus. I want the opposite effect: I want my random forest to be more likely to predict that a patient has the virus (even if the patient does not have it; ideally I don't want this side effect, but I'd rather have this than the opposite). The idea behind this is that performing an extra test (via an echo) would not harm a patient who does not have the virus, but not testing a patient who does have it would be terrible for the patient. **Does somebody have advice on how I could tweak my random forest model for this task?** I myself experimented with the SMOTE transformation and other sampling techniques, but maybe you have other suggestions. I have also tried to apply a cutoff function.
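To illustrate the cutoff idea mentioned above with a toy sketch (all probabilities below are invented): lowering the decision threshold for the rare positive class trades false positives for fewer missed cases, which matches the stated preference for extra echo tests over missed diagnoses. (In scikit-learn, the `class_weight` parameter of `RandomForestClassifier` is another common lever for the same goal.)

```python
# Invented (probability-of-virus, true-label) pairs for 12 patients.
cases = [
    (0.95, 1), (0.40, 1), (0.30, 1), (0.20, 1),  # 4 patients with the virus
    (0.35, 0), (0.10, 0), (0.05, 0), (0.02, 0),
    (0.25, 0), (0.15, 0), (0.08, 0), (0.01, 0),  # 8 healthy patients
]

def recall(cutoff):
    """Fraction of true virus cases flagged at the given probability cutoff."""
    positives = [(p, y) for p, y in cases if y == 1]
    caught = sum(p >= cutoff for p, y in positives)
    return caught / len(positives)

print(recall(0.5))   # default cutoff: only 1 of 4 sick patients flagged
print(recall(0.15))  # lowered cutoff: all 4 flagged, at the cost of some
                     # healthy patients being sent for an extra echo
```

The same post-hoc threshold shift works on the class probabilities a random forest outputs, without retraining the model.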
2019/11/27
<issue_start>username_0: Loss is MSE; orange is validation loss, blue is training loss. The task is NN regression (18 inputs, 2 outputs), one layer with 300 hidden units. [![](https://i.stack.imgur.com/Rs1sq.png)](https://i.stack.imgur.com/Rs1sq.png) Tuning the learning rate, momentum and L2-regularization parameters, this is the best validation loss I can obtain. Can this be considered overfitting? Is 1 a bad validation-loss value for a regression task?<issue_comment>username_1: It depends on what 1 represents in your task. If you are trying to predict household prices and 1 represents \$1, I think the average validation loss is good. If 1 represents \$10000, probably something is not right. But remember that there are 2 parts contributing to the overall loss: the MSE loss and the L2 penalty loss. (Also remember that most optimizers already implement the L2 penalty as weight decay, so you do not need to add it separately.) Some suggestions: 1. Check if your data has any outliers/anomalies. Based on your task, you should know what you can do with these data points. Also see if your dataset has high variance. 2. If you are worried about over-fitting, think about your data again. Less data + more parameters often leads to over-fitting. If your dataset is too small, you need to think again. 3. Try to adjust the number of hidden units and observe the results. 4. Try using cross-validation. 5. Alternatively, try using different optimizers and see what happens (try Adam). Upvotes: 2 <issue_comment>username_2: The validation loss settles at an error of exactly one. This probably means there's something off with either the kind of data the validation set has or with something in the training. An exact validation loss of one almost definitely means there's something off. I'd recommend that, before doing anything, you thoroughly go through your data and see if there's anything to debug in the model itself.
Considering that the training error decreases, there's probably something different about either the formatting of the validation data or the validation data itself. A brief description of the type of data and the exact problem at hand would further help. Upvotes: 2 <issue_comment>username_3: The telltale **signature** of overfitting is when your validation loss starts increasing while your training loss continues decreasing, i.e.: [![adapted from Wikipedia](https://i.stack.imgur.com/KLM4h.png)](https://i.stack.imgur.com/KLM4h.png) (Image adapted from the Wikipedia entry on [overfitting](https://en.wikipedia.org/wiki/Overfitting)) It is clear that this does not happen in your diagram, hence your model does not overfit. A difference between a training and a validation score by itself does **not** signify overfitting. This is just the **generalization gap**, i.e. the *expected* gap in performance between the training and validation sets; quoting from a recent [blog post by Google AI](https://ai.googleblog.com/2019/07/predicting-generalization-gap-in-deep.html): > > An important concept for understanding generalization is the *generalization gap*, i.e., the difference between a model's performance on training data and its performance on unseen data drawn from the same distribution. > > > An MSE of 1.0 (or any other specific value, for that matter) by itself cannot be considered "good" or "bad"; everything depends on the context, i.e. on the particular problem and the actual magnitude of your dependent variable: if you are trying to predict something that is on the order of some thousands (or even hundreds), an MSE of 1.0 does not sound bad; it's not the same if your dependent variable takes values, say, in [0, 1] or similar.
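As a toy illustration of the signature described in this answer, one could scan the two loss curves for the point where validation loss turns upward while training loss keeps falling (the helper and the curves below are invented for illustration):

```python
def overfit_start(train_loss, val_loss, patience=3):
    """Return the epoch index after which validation loss rose for
    `patience` consecutive epochs while training loss kept dropping,
    or None if the overfitting signature never appears."""
    for i in range(len(val_loss) - patience):
        window = range(i, i + patience)
        val_rising = all(val_loss[j + 1] > val_loss[j] for j in window)
        train_falling = all(train_loss[j + 1] < train_loss[j] for j in window)
        if val_rising and train_falling:
            return i
    return None

# Made-up curves showing the classic overfitting pattern.
train = [5.0, 3.0, 2.0, 1.5, 1.2, 1.0, 0.9, 0.8]
val   = [5.1, 3.2, 2.2, 1.8, 1.9, 2.1, 2.4, 2.8]
print(overfit_start(train, val))  # 3: validation loss turns upward here
```

For curves like the asker's, where validation loss plateaus instead of rising, the helper returns `None`, matching the "no overfitting, just a generalization gap" diagnosis.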