2019/11/27
<issue_start>username_0: Currently, I'm using a Python library, [StellarGraph](https://github.com/stellargraph/stellargraph), to implement GCNs. And I now have a situation where I have graphs with weighted edges. Unfortunately, StellarGraph doesn't support such graphs, so I'm looking for an open-source implementation of graph convolutional networks for **weighted graphs**. I've searched a lot, but most implementations assume unweighted graphs. Is there an open-source implementation of GCNs for weighted graphs?<issue_comment>username_1: [A Comprehensive Survey on Graph Neural Networks](https://arxiv.org/abs/1901.00596) (2019) presents a list of ConvGNNs. All of the following accept weighted graphs, and three accept those with edge weights as well: [![enter image description here](https://i.stack.imgur.com/d7foa.png)](https://i.stack.imgur.com/d7foa.png) And below is a series of open-source implementations of many of the above: [![enter image description here](https://i.stack.imgur.com/QSxzd.png)](https://i.stack.imgur.com/QSxzd.png) Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use the **[Pytorch\_Geometric](https://github.com/rusty1s/pytorch_geometric)** library for your projects. It **supports weighted GCNs**. It is a rapidly evolving open-source library with an easy-to-use syntax. It is mentioned on the landing page of [Pytorch](https://pytorch.org/). It is the most-starred PyTorch GitHub repo for geometric deep learning.
Creating a GCN model which can process graphs with weights is as simple as:

```
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = GCNConv(dataset.num_node_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, data):
        # data has the following 3 attributes
        x, edge_index, edge_weight = data.x, data.edge_index, data.edge_weight
        x = self.conv1(x, edge_index, edge_weight)
        x = F.relu(x)
        x = F.dropout(x, training=self.training)
        x = self.conv2(x, edge_index, edge_weight)
        return F.log_softmax(x, dim=1)
```

See [this](https://pytorch-geometric.readthedocs.io/en/latest/notes/introduction.html) for getting started. Check out its [documentation](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#module-torch_geometric.nn.conv.message_passing) on different variants of GCNs for further details. One of the best things is that, like PyTorch, its documentation is self-sufficient. Upvotes: 2 <issue_comment>username_3: You can use Graph Attention Networks for weighted graphs. This model can handle negative weights. Check out its [documentation.](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#module-torch_geometric.nn.conv.message_passing) Upvotes: 0
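For intuition about what such a layer computes under the hood, here is a library-agnostic sketch of one weighted graph-convolution step, the symmetric-normalised propagation rule $H' = \hat{D}^{-1/2}(A + I)\hat{D}^{-1/2} H W$ with a weighted adjacency matrix (the nonlinearity is omitted, and all values below are toy numbers, not anything from the answers):

```python
import math

def gcn_layer(adj, features, weight):
    """One weighted-GCN propagation step: H' = D^{-1/2} (A + I) D^{-1/2} H W.

    adj      -- weighted adjacency matrix as a list of lists (symmetric)
    features -- node feature matrix H (n x f)
    weight   -- layer weight matrix W (f x f_out)
    """
    n = len(adj)
    # Add self-loops: A_hat = A + I
    a_hat = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    # Weighted degree of each node under A_hat
    deg = [sum(row) for row in a_hat]
    # Symmetric normalisation: D^{-1/2} A_hat D^{-1/2}
    norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)] for i in range(n)]
    # Aggregate neighbour features: norm @ features
    agg = [[sum(norm[i][k] * features[k][j] for k in range(n))
            for j in range(len(features[0]))] for i in range(n)]
    # Linear transform: agg @ weight
    f_out = len(weight[0])
    return [[sum(agg[i][k] * weight[k][j] for k in range(len(weight)))
             for j in range(f_out)] for i in range(n)]

# Two nodes joined by an edge of weight 2: the heavier the edge, the more
# the neighbour's features contribute to the aggregation.
adj = [[0.0, 2.0], [2.0, 0.0]]
H = [[1.0], [0.0]]
W = [[1.0]]
out = gcn_layer(adj, H, W)
```

The edge weight enters the computation through the weighted degrees and the normalised adjacency, which is exactly where an unweighted GCN would use 0/1 entries instead.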
2019/11/27
<issue_start>username_0: What is the difference between genetic algorithms and evolutionary game theory algorithms?<issue_comment>username_1: A genetic algorithm is typically a single population designed to optimise for a specific task, say minimising the distance on the [travelling salesman problem](https://simple.wikipedia.org/wiki/Travelling_salesman_problem). Evolutionary game theory algorithms typically model changes between populations that are in *competition*, generally by using genetic algorithms as above but framed within a broader competitive environment between actors. In the case of a problem like the travelling salesman problem, it might frame the game as one with competing players where a player getting to a city 'locks' that city and makes it impassable to other players. In these situations new optimisations like localism over adventurism etc. may develop, and while players are still trying to minimise the distance travelled overall via their respective genetic algorithm's fitness function, they have to do so in a directly competitive environment with other strategies, which quickly creates a lot of additional nuance and depth. Upvotes: 1 <issue_comment>username_2: username_1's answer is good, but I'll add to it. In a [GA](https://en.wikipedia.org/wiki/Genetic_algorithm), a population of individuals (typically represented by bit strings) is evaluated for its fitness on a particular task. Each individual is evaluated separately by a fitness function that can determine its quality. In the [Traveling Salesman Problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem), the bit string might represent a sequence of numbers, for instance, corresponding to an order in which cities are visited during the tour. The fitness function would inspect a single individual, compute the total cost of the tour, and assign that individual a fitness based on that value.
Low-scoring individuals are removed, high-scoring individuals generate variants of themselves, and the process repeats. In Evolutionary Game Theory, a population of individuals is also evaluated on some task, but usually the task involves *interaction* between the individuals. For example, you could use an EGT simulation to study what happens in a game like Iterated [Prisoner's Dilemma](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma). Here, an individual's fitness doesn't just depend on the rules of the task, but on the behaviors and strategies of the other players in the population. A strategy that is highly effective at first (like always cooperate) will quickly die out once defectors appear. Defectors are effective as long as there are some cooperators to prey upon, but are quickly defeated by strategies like [Tit-for-Tat](https://en.wikipedia.org/wiki/Tit_for_tat). Usually, researchers are not interested in the specific strategies that emerge so much as in the population *dynamics* over the course of the simulation, and in what kinds of population equilibria can emerge. Check out some of <NAME>'s [papers](http://eldar.mathstat.uoguelph.ca/dashlock/Eprints.html) on Game Theory for more. Upvotes: 2
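The single-population loop described above (evaluate fitness separately, remove low scorers, let high scorers generate variants, repeat) can be sketched in a few lines. The OneMax task, population size, and mutation-only variation below are illustrative choices for the sketch, not anything prescribed by the answers:

```python
import random

def one_max_ga(bits=20, pop_size=30, generations=150, seed=0):
    """Minimal GA: maximise the number of 1s in a bit string (OneMax)."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)  # each individual is evaluated separately
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Remove low-scoring individuals: keep the fitter half
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # High-scoring individuals generate variants of themselves (mutation)
        children = []
        for parent in survivors:
            child = parent[:]
            i = rng.randrange(bits)
            child[i] = 1 - child[i]  # flip one random bit
            children.append(child)
        pop = survivors + children
    return max(fitness(ind) for ind in pop)

best = one_max_ga()
```

An EGT simulation would replace the fixed `fitness` function with one that scores each individual by playing it against the rest of the population, which is exactly the *interaction* distinction drawn above.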
2019/11/28
<issue_start>username_0: When learning or working in the machine learning field, we frequently come across the term "probability distribution". I know what probability, conditional probability, and probability distribution/density in math mean, but what is its meaning in machine learning? Take this example where $x$ is an element of $D$, which is a dataset, $$x \in D$$ Let's say our dataset ($D$) is MNIST with about 70,000 images, so then $x$ can be any one of those 70,000 images. In many papers and web articles, these terms are often denoted as *probability distributions* $$p(x)$$ or $$p\left(z \mid x \right)$$ * What does $p(\cdot)$ even mean, and what kind of output does $p(\cdot)$ give? * Is the output of $p(\cdot)$ a scalar, vector, or matrix? * If the output is a vector or matrix, then will the sum of all elements of this vector/matrix always be $1$? This is my understanding: $p(\cdot)$ is a function which represents the real distribution of the whole dataset $D$. Then $p(x)$ gives a scalar probability value given $x$, which is calculated from the real distribution $p(\cdot)$. Similar to $p(H)=0.5$ in a coin toss experiment $D=\{H,T\}$. $p\left(z \mid x \right)$ is another function that maps the real distribution of the whole dataset to a vector $z$ given an input $x$, and the $z$ vector is a probability distribution that sums to $1$. Are my assumptions correct? An example would be a [VAE's data generation process](https://lilianweng.github.io/lil-log/2018/08/12/from-autoencoder-to-beta-vae.html#vae-variational-autoencoder), which is represented in this equation $$p\_\theta(\mathbf{x}^{(i)}) = \int p\_\theta(\mathbf{x}^{(i)}\vert\mathbf{z}) p\_\theta(\mathbf{z}) d\mathbf{z}$$<issue_comment>username_1: A probability distribution in ML is the same as a probability distribution elsewhere.
A probability distribution (or probability function, or probability mass function, or probability density function) is any function that accepts as input elements of some specific set $x \in X$, and produces as output real-valued numbers between 0 and 1 (inclusive), such that $\int\_{x \in X} p(x) \, dx = 1$ or, for discrete sets, $\sum\_{x \in X} p(x) = 1$. These distributions can also be more complex. For example, a conditional probability distribution $P(Y|X)$ or a joint probability distribution $P(X,Y)$ accept more than one input, but again, are constrained to producing an output in the range 0 to 1, and to ensuring that the summation of the output over all possible inputs is exactly 1. When these conditions are met, the function's output can be interpreted as a belief about the percentage of times the input event will occur, out of all events, or as a degree of belief in the input event having occurred vs. other events (i.e. it can be interpreted as a probability). Upvotes: 2 <issue_comment>username_2: ### Random variables You do not necessarily need to understand the concept of a [random variable (r.v.)](https://en.wikipedia.org/wiki/Random_variable) to understand the concept of a probability distribution, but the concept of a random variable is strictly connected to the concept of a probability distribution (given that each random variable has an associated probability distribution), so, before proceeding, you should get familiar with the concept of an r.v., which is a (measurable) **function** from the sample space (the set of possible outcomes of an experiment) to a measurable space (you can ignore the definition of a measurable space and assume that the codomain of the random variable is a finite set of numbers).
### Probability measure, cdf, pdf and pmf The expression "probability distribution" can be ambiguous because it can be used to refer to different (even though related) mathematical concepts, such as [probability measure](https://en.wikipedia.org/wiki/Probability_measure), [cumulative distribution function (c.d.f.)](https://en.wikipedia.org/wiki/Cumulative_distribution_function), [probability density function (p.d.f.)](https://en.wikipedia.org/wiki/Probability_density_function), [probability mass function (p.m.f.)](https://en.wikipedia.org/wiki/Probability_mass_function). If a person uses the expression "probability distribution", he (or she) intentionally (or not) refers to one or more of these mathematical concepts, depending on the context. However, **a probability distribution is almost always a synonym for [probability measure](https://en.wikipedia.org/wiki/Probability_measure) or [c.d.f.](https://en.wikipedia.org/wiki/Cumulative_distribution_function)**. For example, if I say "Consider the Gaussian probability distribution", in that case, I could be referring to either the c.d.f. or the p.d.f. (or both) of the [Gaussian distribution](https://en.wikipedia.org/wiki/Normal_distribution). Why couldn't I be referring to the p.m.f. of the Gaussian distribution? Because the Gaussian distribution is a continuous distribution, so it is a distribution associated with a continuous random variable, that is, a random variable that can take on continuous values (e.g. real numbers), so a Gaussian distribution does not have an associated p.m.f. or, in other words, no p.m.f. is defined for the Gaussian distribution. Why don't I simply say "Consider the p.d.f. of the Gaussian distribution." or "Consider a Gaussian p.d.f."? Because it is unnecessarily restrictive, given that, if I say "Consider the Gaussian distribution" I am implicitly also considering a p.d.f. and c.d.f. of the Gaussian distribution.
Similarly, in the case of a discrete distribution, such as the Bernoulli distribution, only the c.d.f. and p.m.f. are defined, so the Bernoulli distribution does not have an associated p.d.f. However, it is important to recall that both continuous and discrete distributions have an associated c.d.f., so the expression "probability distribution" almost always (implicitly) refers to a c.d.f., which is defined based on a probability measure (as stated above). ### Notation In the same vein, the notation $p(x)$ can be as ambiguous as the expression "probability distribution", given that it can refer to different (but again related) concepts. However, $p(x)$ usually refers to a [probability measure](https://en.wikipedia.org/wiki/Probability_measure) (so it refers to a probability distribution, given that a probability distribution is almost always a synonym for probability measure). In this case, assuming for simplicity that the r.v. is discrete, **$p(x)$ is a shorthand for $p(X=x)$, which is also written as $\mathbb{P}(X=x)$** or $\operatorname{Pr}(X=x)$, where $X$ is an r.v., $x$ a [realization](https://en.wikipedia.org/wiki/Realization_(probability)) of $X$ (that is, a value that the r.v. $X$ can take) and $X=x$ represents an [event](https://en.wikipedia.org/wiki/Event_(probability_theory)). Given that an r.v. is a function, the notation $X=x$ may look a bit weird. In the case of a discrete r.v., $p(x)$ can also refer to a p.m.f. and it can be defined as $p\_X(x) = \mathbb{P}(X=x)$ (I added the subscript $X$ to $p$ to emphasize that this is the p.m.f. of the discrete r.v. $X$). In the case of a continuous r.v., the p.d.f. is often denoted as $f$. In the case of both discrete and continuous r.v.s, the c.d.f. is usually denoted by $F$ and it is defined as $F\_X(x) = \mathbb{P}(X \leq x)$, where $\mathbb{P}$ is again a probability measure (or probability distribution). The p.d.f. of a continuous r.v. is then defined as the derivative of $F$.
At this point, it should be clear why a probability distribution can refer to different but related concepts, but, in any case, it always refers to a [probability measure](https://en.wikipedia.org/wiki/Probability_measure). ### Empirical distributions There are also [empirical distributions](https://en.wikipedia.org/wiki/Empirical_distribution_function), which are distributions of the data that you have collected. For example, if you toss a coin 10 times, you will collect the results ("heads" or "tails"). You can count the number of times the coin landed on heads and tails, then you plot these numbers as a histogram, which essentially represents your empirical distribution, where the adjective "empirical" usually refers to the fact that there is an experiment involved. ### Multivariate r.v.s and distributions To complicate things even more, there are also [multivariate random variables and probability distributions](https://en.wikipedia.org/wiki/Multivariate_normal_distribution). However, all of the concepts above are, more or less, also applicable in this case. ### Parametrized distributions A parametrized probability distribution, often denoted by $p\_{\theta}$, is a [family of probability distributions](https://en.wikipedia.org/wiki/Parametric_model) (defined by the parameters $\theta$), rather than a single probability distribution. For example, $\mathcal{N}(0, 1)$ refers to a *single* Gaussian distribution with zero mean and unit variance. However, $\mathcal{N}(\mu, \sigma)$, where $\theta=(\mu, \sigma)$ is a variable, is a family (or collection) of distributions. ### Conclusion To conclude, it is completely understandable that you are confused, given that the terminology and notation are used inconsistently, and there are several involved concepts, which I have not extensively covered in this answer (for example, I have not mentioned the concept of a probability space).
If you get familiar with the concepts of probability measures, random variables, p.m.f., p.d.f., c.d.f., etc., and how they are related, then you will start to get a better feel for the whole picture. Upvotes: 3 [selected_answer]
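The objects defined above can be checked numerically with nothing but the standard library. The particular distributions below (a Bernoulli p.m.f. and the standard Gaussian c.d.f./p.d.f.) are just convenient examples chosen for this sketch:

```python
import math

# p.m.f. of a Bernoulli(0.3) r.v.: outputs lie in [0, 1] and sum to 1
def bernoulli_pmf(x, p=0.3):
    return p if x == 1 else 1 - p

# c.d.f. of a standard Gaussian: F(x) = P(X <= x), written via math.erf
def gaussian_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# The p.d.f. is the derivative of the c.d.f.; approximate it numerically
# at x = 0 with a central difference and compare to the closed form.
h = 1e-6
pdf_at_0 = (gaussian_cdf(h) - gaussian_cdf(-h)) / (2 * h)
closed_form = 1.0 / math.sqrt(2.0 * math.pi)  # standard normal p.d.f. at 0
```

Note that the Gaussian has a c.d.f. and a p.d.f. but no p.m.f., and the Bernoulli has a c.d.f. and a p.m.f. but no p.d.f., exactly as the answer explains.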
2019/11/29
<issue_start>username_0: *When using CNNs for non-image (time series) data prediction, what are some constraints or things to look out for as compared to image data?* To be more precise, I notice there are different types of layers in a CNN model, as described below, which seem to be particularly designed for image data. A **convolutional layer** that extracts features from a source image. Convolution helps with blurring, sharpening, edge detection, noise reduction, or other operations that can help the machine to learn specific characteristics of an image. A **pooling layer** that reduces the image dimensionality without losing important features or patterns. A **fully connected layer**, also known as the dense layer, in which the results of the convolutional layers are fed through one or more neural layers to generate a prediction. *Are these operations also applicable to non-image data (for example, time series)?*
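The two image-specific operations in the question translate directly to sequences. Here is a minimal plain-Python sketch (no framework; the kernel and the example series are invented for illustration) of a convolution sliding a kernel along time instead of over pixels, followed by a max-pooling step that shrinks the sequence:

```python
def conv1d(series, kernel):
    """Valid 1-D convolution (cross-correlation): slide the kernel along time."""
    k = len(kernel)
    return [sum(series[i + j] * kernel[j] for j in range(k))
            for i in range(len(series) - k + 1)]

def max_pool1d(series, size=2):
    """Non-overlapping max pooling: shrinks the sequence, keeps salient peaks."""
    return [max(series[i:i + size]) for i in range(0, len(series) - size + 1, size)]

ts = [1.0, 2.0, 4.0, 3.0, 1.0, 0.0, 2.0, 5.0]
edges = conv1d(ts, [-1.0, 1.0])  # first-difference kernel: detects rises/falls
pooled = max_pool1d(edges)
```

Where a 2-D kernel detects spatial edges in an image, this 1-D first-difference kernel detects changes over time; the fully connected layer at the end of a CNN is dimension-agnostic and needs no adaptation.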
2019/12/01
<issue_start>username_0: For an AI to represent the world, it would be good if it could translate human sentences into something more precise. We know, for example, that mathematics can be built up from set theory. So representing statements in the language of set theory might be useful. For example > > All grass is green > > > is something like: > > $\forall x \in grass: isGreen(x)$ > > > But then I learned that set theory is built up from something more basic. And that theorem provers use a special form of higher-order logic with types. Then there is propositional logic. Basically, what the AI would need would be some way of representing statements, some axioms, and ways to manipulate the statements. Thus, what would be a good language to use as an internal language for an AI?<issue_comment>username_1: This is (even though it doesn't look like it at first glance) a deeply philosophical question about the nature of 'meaning'. This answer is necessarily limited in scope. There are many ways of representing knowledge, and countless formalisms have been developed since the early days of AI. Many of them are based on some kind of predicate calculus, ontologies, semantic networks (providing e.g. inheritance of features and part-of relationships), and they seem to work fine for limited domains. One problem is the grounding: if you have a predicate *isGreen(x)*, what does that actually mean? How is it related to *isBlue(x)*? Do you want to treat them similarly? If so, you need to represent this somehow. You quickly come to the point where you will need to encode all the world's knowledge in some generalised way. An impossible task. Linguists have struggled with this for decades: what is the meaning of a particular sentence?
Apart from the fact that every individual human will interpret a given sentence differently (based on their own life experience and culture), there are many aspects to 'meaning' that need representing: the 'factual' meaning, but also pragmatic, evaluative, and all sorts of other nuances. An innocent utterance, *That's a nice apple you've got there*, could have a whole raft of meanings packed into it, all implicit. For example, the person probably likes apples, that one in particular, that apple looks like a tasty piece of fruit, the other person is the owner of the apple, and it might also be a request which is intended to prompt the other person to offer it to you. How are you going to represent all that meaning? One area that interests me personally is representing narrative events. This can, up to a point, be done using [Conceptual Dependency](https://en.wikipedia.org/wiki/Conceptual_dependency_theory), which uses a limited set of semantic primitives. While useful to encode basic stories, you cannot easily use it to represent the fact that grass is green. So the answer is: there is no answer. AI is too broad a field, and you need to look at a particular application to decide which knowledge is relevant to it, and then how it can best be represented. There is a reason why there are so many ways of representing knowledge. PS: You suggest this would be *more precise*. My personal view is that precision here is a red herring. The word *green* is not precise, as it covers a range of wavelengths, and [different people would disagree on whether something is green or not](https://en.wikipedia.org/wiki/Color_term#Cultural_differences). So a predicate *isGreen(x)* is not any more precise than that. Hence the appeal of [fuzzy logic](https://en.wikipedia.org/wiki/Fuzzy_logic), which allows computation to be based on less precision. Upvotes: 2 <issue_comment>username_2: I think the first question you should answer is: "What questions should the AI be able to answer?"
If the intent is that the AI should be able to answer any question, then that is simply not doable (or at least currently it is not doable). Currently this is similar to asking for a program that can do anything. Currently the AI field is split between statistical approaches and logical approaches. In the early years AI was approached mainly from a logical perspective. Now statistical approaches are more popular. The main advantage of logical approaches is that answers can be explained, while the main advantage of statistical approaches is that, given large enough data sets, agents can be trained. There is definitely a drive in the AI community to merge statistical and logical approaches to AI, but these approaches are still in their infancy. I would therefore strongly suggest that you first determine the kind of problems you want to address with AI; then, based on that, you determine the AI approach that is best suited for those problems. Upvotes: 2 <issue_comment>username_3: From my perspective you should look at the concept of **ontologies**, which might briefly be described as a set of axioms that formalize concepts such as `{Grass, Water, Green}` and relations between those like `hasProperty(Grass, Green)` and `needs(Grass, Water)`. To describe this kind of knowledge, the [Web Ontology Language](https://en.wikipedia.org/wiki/Web_Ontology_Language) was created. The theoretical framework on which it is built are different flavors of *description logics*, which all are fragments of first-order predicate logic, but come with different tradeoffs between expressiveness and computational complexity for *automatic reasoning*. As with other AI topics, this kind of stuff can get quite involved, and interesting. I can recommend the open textbook [An introduction to ontology engineering](https://people.cs.uct.ac.za/~mkeet/OEbook/) by <NAME> (University of Cape Town). Upvotes: 1
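As a toy illustration of the ontology idea, here is a sketch in plain Python rather than OWL; every fact and helper name below is invented for the example, loosely mirroring assertions like `hasProperty(Grass, Green)`:

```python
# A minimal triple store: facts are (subject, relation, object) triples.
facts = {
    ("Grass", "hasProperty", "Green"),
    ("Grass", "needs", "Water"),
    ("Grass", "isA", "Plant"),
    ("Plant", "isA", "LivingThing"),
}

def holds(subject, relation, obj):
    """Direct lookup of a single assertion."""
    return (subject, relation, obj) in facts

def is_a(subject, cls):
    """Follow isA links transitively -- a tiny form of automatic reasoning.

    Assumes the isA hierarchy is acyclic.
    """
    if (subject, "isA", cls) in facts:
        return True
    parents = [o for (s, r, o) in facts if s == subject and r == "isA"]
    return any(is_a(p, cls) for p in parents)
```

Description logics provide exactly this kind of inference (and much more) with formal guarantees, which is what the expressiveness-vs-complexity tradeoff mentioned above is about.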
2019/12/01
<issue_start>username_0: Suppose you want to predict the price of some stock. Let's say you use the following features.

```
OpenPrice
HighPrice
LowPrice
ClosePrice
```

Is it useful to create new features like the following ones?

```
BodySize = ClosePrice - OpenPrice
```

or the size of the tail

```
TailUp = HighPrice - Max(OpenPrice, ClosePrice)
```

Or do we not need to do that, because we would just be adding noise and the neural network is going to calculate those values internally? The case of the body size is maybe a bit different from the tail, because for the tail we need to use a non-linear function (the max operation). So maybe it is important to add an input when its relationship to the other inputs is non-linear, but not when it is linear? Another example: consider a box with height $X$, width $Y$ and length $Z$, and suppose the really important input is the volume. Will the neural network discover that the relevant combination is $X \* Y \* Z$? Or do we need to provide the volume as an input too? Sorry if it's a dumb question, but I'm trying to understand what the neural network is doing internally with the inputs: is it (somehow) finding all the mathematically possible relations between the inputs, or do we need to specify the relations between the inputs that we consider important (heuristically) for the problem to be solved?<issue_comment>username_1: On paper, one expects a complex enough network to determine any complicated function of a limited number of inputs, given a large enough dataset. But in practice, there is no limit to the possible difficulty of the function to be learnt, and the datasets can be relatively small on occasion. In such cases - or arguably in general - it is definitely a good idea to define some combinations of the inputs based on heuristics, as you suggested. If you think some combination of inputs is an important variable by itself, you definitely should include it in your inputs.
We can visualize this situation in [TensorFlow playground](http://playground.tensorflow.org). Consider the circular pattern dataset in the top left corner with some noise. You can use the default setting: $x\_1$ and $x\_2$ as inputs with 2 hidden layers with 4 and 2 neurons respectively. It should learn the pattern in less than 100 epochs. But if you reduce the number of neurons in the first layer to 2, it is not going to get as good as before. In other words, you have to make the model more complicated to get the correct answer. You can experiment and see that one needs at least one 3-neuron layer to get the correct classification from just $x\_1$ and $x\_2$. Now, if we examine the dataset, we see the circles, so we know that instead of $x\_1$ and $x\_2$ we can try $x\_1^2$ and $x\_2^2$. This will learn perfectly without any hidden layers, as the function is linear in these parameters. The lesson to be learnt here is that our prior knowledge of the circle ($x\_1^2 + x\_2^2 = r^2$) and familiarity with the data helped us get a good result with a simpler model (a smaller number of neurons), by using derived inputs. Take the spiral data at the lower right corner for a more challenging problem. For this one, if you do not use any derived features, it is not likely to give you the correct result, even with several hidden layers. Keep in mind that every extra neuron is a potential source of overfitting, on top of being a computational burden. Of course the problem here is overly simplified, but I expect the situation to be more or less the same for any complicated problem. In practice, we do not have infinite datasets or infinite compute time, and model complexity is always a restriction, so if you have any reason to think some relation between your inputs is relevant for your final result, you definitely should include it by hand at the beginning. Upvotes: 3 [selected_answer]<issue_comment>username_2: The question is related to "feature extraction". 
Firstly, to tackle a **regression problem** like both of the problems you stated, you need to provide the neural network with the most relevant inputs that have an effect on the output. E.g. if you want your network to add x and y, you need to provide it with training examples like *input (x=1, y=3) and output (sum=4)*. This will make your network do exactly what you want. But suppose you do not know which inputs you should train your network on; neural networks can take care of that too. Look at this [example](https://iamtrask.github.io/2015/07/12/basic-python-network/): look at the first truth table. Notice that the output column is actually the first input column and the other two input columns are just random. Eventually, the network learns this relationship and provides the correct results. **What we learnt:** if you are unsure about which inputs to choose for your network, just provide as many as possible, or as many combinations as possible. Neural networks excel at finding relationships in input data. Next, talking of the volume problem, this is what I have been doing recently. It's actually an example of **function approximation**. Usually, the problem has multiple inputs and a single output (just like the addition problem), but the inverse is also possible, i.e., input: sum and output: x & y. This comes under **one-to-many function mapping** and **multivariate regression**. So YES, you need to provide the volume as input and **x, y and z** as outputs while training. The recommended configuration is **one neuron in the input layer, at least 6 hidden neurons and 3 neurons in the output layer**. For magical results, you can use a deeper neural network rather than the shallow one suggested by me. But remember, neural networks have been proved to be **universal approximators**. Upvotes: 0
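Returning to the circle example from the first answer: the effect of derived features can be reproduced outside the playground. The sketch below (plain NumPy; the radii, learning rate and iteration count are arbitrary choices for the demo) fits an ordinary logistic regression on the squared features $x\_1^2$ and $x\_2^2$ and separates the two rings without any hidden layer:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# inner disk (class 0) and outer ring (class 1)
r = np.concatenate([rng.uniform(0.0, 1.0, n), rng.uniform(2.0, 3.0, n)])
theta = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
X = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
y = np.concatenate([np.zeros(n), np.ones(n)])

# derived features: squaring makes the circular boundary linear
F = np.column_stack([X[:, 0] ** 2, X[:, 1] ** 2, np.ones(2 * n)])

# logistic regression trained by plain gradient descent
w = np.zeros(3)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-F @ w))
    w -= 0.05 * F.T @ (p - y) / len(y)

acc = np.mean(((F @ w) > 0) == y)
print(acc)  # the rings are linearly separable in the squared features
```

Running the same linear model on the raw $x\_1, x\_2$ coordinates cannot do better than chance on this data, which is exactly the point of the answer above.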
2019/12/02
449
1,990
<issue_start>username_0: I am planning to enroll for <NAME>'s Machine Learning course <https://www.coursera.org/learn/machine-learning>. I've no background in math. Is it OK if I start the course and learn math as and when required?<issue_comment>username_1: No prerequisites are required for Andrew Ng's ML course. There are a couple of lectures in which he gives a basic idea of linear algebra. You can also learn the math when required. Upvotes: 0 <issue_comment>username_2: I must simply direct you to this excellent blog post on Machine Learning Mastery: <https://machinelearningmastery.com/what-is-holding-you-back-from-your-machine-learning-goals/> Upvotes: 0 <issue_comment>username_3: This course is focused on machine learning using MATLAB, which is less practical nowadays, as it is a programming language used specifically for numerical computing. The language is powerful but limited in some ways. Nowadays most people use Python for machine learning, as it is versatile and can connect to other backends like C++, Java or JavaScript easily. It is also a general-purpose language and, unlike MATLAB, can do many things beyond computing. If you really want to join this course, I would recommend first learning the MATLAB language and also basic calculus, like derivatives. This would greatly help your learning in the course. However, if you are serious about machine learning, I would encourage you to enroll in the Deep Learning specialization, also by <NAME>, on Coursera. <https://www.coursera.org/specializations/deep-learning> This specialization uses Python as the programming language and teaches more modern approaches to deep learning like recurrent neural networks, convolutional neural networks and more. It also talks more about applications of neural networks. There is theory as well, but it also covers the application of each specific algorithm and how it works. Hope I can help you. Upvotes: 3 [selected_answer]
2019/12/02
615
2,250
<issue_start>username_0: I'm doing a paper for a class on the topic of big problems that are still prevalent in AI, specifically in the area of natural language processing and understanding. From what I understand, the areas: * Text classification * Entity recognition * Translation * POS tagging are for the most part solved or perform at a high level currently, but areas such as: * Text summarization * Conversational systems * Contextual systems (relying on the previous context that will impact current prediction) are still relatively unsolved or are a big area of research (although this could very well change soon with the releases of big transformer models from what I've read). For people who have experience in the field, what are areas that are still big challenges in NLP and NLU? Why are these areas (doesn't have to be ones I've listed) so tough to figure out?<issue_comment>username_1: According to a nice article by <NAME> <https://ruder.io/4-biggest-open-problems-in-nlp/> based on answers from top NLP researchers <https://docs.google.com/document/d/18NoNdArdzDLJFQGBMVMsQ-iLOowP1XXDaSVRmYN0IyM/edit> 1. Natural language understanding 2. NLP for low-resource scenarios 3. Reasoning about large or multiple documents 4. Datasets, problems, and evaluation I recommend having a look at the article. 
More details in the slides <https://drive.google.com/file/d/15ehMIJ7wY9A7RSmyJPNmrBMuC7se0PMP/view> Upvotes: 2 <issue_comment>username_2: According to [<NAME>](https://web.stanford.edu/~jurafsky/), a researcher on NLP and NLU, the current hard problems in NLP are (see [slide 6](http://spark-public.s3.amazonaws.com/nlp/slides/intro.pdf)) * Question answering * Paraphrase * Summarisation * Dialogue Other hard problems for which there are already some good solutions are * Sentiment analysis * Coreference resolution * Word sense disambiguation * Parsing * Machine translation * Information extraction Ambiguity in natural language is one of the biggest challenges for NLP and NLU systems. Other challenges are (see [slide 10](http://spark-public.s3.amazonaws.com/nlp/slides/intro.pdf)) * non-standard words * idioms * tricky entity names * neologisms * world knowledge * segmentation issues Upvotes: 1
2019/12/04
432
1,615
<issue_start>username_0: I have heard of ensemble methods, such as XGBoost, for binary or categorical machine learning models. However, does this exist for regression? If so, how are the weights for each model in the process of predictions determined? I am looking to do this manually, as I was planning on training two different models using separate frameworks (YoloV3 aka Darknet and Tensorflow for bounding box regression). Is there a way I can establish a weight for each model in the overall prediction for these boxes? Or is this a bad idea?<issue_comment>username_1: There are similar boosting classes in XGBoost for regression. You can implement their built-in classes for your problem, rather than implementing from scratch. You can read more about it on their website. You can also take a look at CatBoost, which implements a different approach. Upvotes: 0 <issue_comment>username_2: [![Table](https://i.stack.imgur.com/iMYJP.png)](https://i.stack.imgur.com/iMYJP.png) This table suggests that there were no promising versions of this algorithm for regression until 2012. After your question, I found a survey research paper on [ensemble methods for regression](https://www.researchgate.net/publication/233843976_Ensemble_Approaches_for_Regression_A_Survey#:%7E:text=The%20goal%20of%20ensemble%20regression,phase%2C%20and%20the%20integration%20phase.); the table above is extracted from that paper. Reading the paper will help you a lot. This is a [recent paper](https://www.mdpi.com/2220-9964/9/6/370/pdf) on object detection with an ensemble approach. Upvotes: 1
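To make the regression case concrete: boosting for regression works by repeatedly fitting weak learners to the current residuals. The sketch below is a from-scratch toy with depth-1 trees (stumps) on made-up 1-D data, not XGBoost itself; the threshold grid, learning rate and round count are arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 10.0, 200)
y = np.sin(X) + rng.normal(0.0, 0.1, 200)

def fit_stump(x, residual):
    # best depth-1 regression tree (single split) by squared error
    best = None
    for t in np.linspace(0.5, 9.5, 19):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    return best[1:]

# gradient boosting for squared loss: each round fits the residuals
pred = np.zeros_like(y)
learning_rate = 0.5
for _ in range(100):
    t, left_val, right_val = fit_stump(X, y - pred)
    pred += learning_rate * np.where(X <= t, left_val, right_val)

mse = np.mean((pred - y) ** 2)
print(mse)  # far below the variance of y itself
```

Combining two already-trained detectors, as the question describes, is a different kind of ensembling (model averaging); there the per-model weights are usually chosen by validating the weighted combination on held-out data rather than learnt by boosting.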
2019/12/05
582
2,519
<issue_start>username_0: How can I know what each neuron does in a NN? Consider the [Playground from Tensorflow](https://playground.tensorflow.org/): there are some hidden layers with some neurons in each. Each of them shows a line (horizontal or vertical or ...). Where do these shapes come from? I think they are understandable to the NN, not to a person!<issue_comment>username_1: In TensorFlow Playground, the lines show where the classes are separated for each neuron. What happens when you take any intermediate neuron to make the decision? You can see the answer in the line provided by that neuron. And this decision is a result of the weighted sum of the decisions of the previous neurons (up to activation). Take the middle-top neuron in the link you share, which is an almost horizontal line - slightly tilted to the right. This neuron classifies everything above it as blue, and everything below it as orange. Hover over the neuron to see a larger picture of the output. You can also see how this is actually calculated by hovering over the line coming from the neurons in the previous layer to the neuron you are looking at. For the case of the same neuron (center-top), the weight coming from the first input ($x\_1$) is 0.091, while from the second one ($x\_2$) it is 0.49. The neuron ends up being almost horizontal because the contribution from the horizontal input ($x\_2$) is so much larger compared to the vertical one ($x\_1$). Of course you need to take into account the nonlinearity coming from the activation function, but the idea presented above is the essence of it. The example uses tanh activation, which behaves very linearly in its intermediate region, so one can ignore this issue to some extent for this particular case. Edit: It appears that the values for the weights change at every browser session, so the neuron I describe might look a little different to you. 
To get the same configuration, simply click on the colored lines between neurons to edit them and use the values above for the connections. Upvotes: 3 [selected_answer]<issue_comment>username_2: I think username_1 answered this question well, though I wanted to give some extra reading for those interested. There are many ways of deciphering what a neuron in a NN is doing. [This lecture](https://www.youtube.com/watch?v=6wcs6szJWMY) does a fantastic job at covering some of these methods and is an incredibly interesting watch. This covers more advanced methods of visualising what a model is doing. Upvotes: 1
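The "almost horizontal line" in the first answer can be computed directly from the quoted weights (the bias is assumed to be zero here, since it is not given in the answer):

```python
# a single tanh neuron: output = tanh(w1 * x1 + w2 * x2 + b)
w1, w2, b = 0.091, 0.49, 0.0  # weights quoted in the answer; bias assumed

# its decision line is where the pre-activation crosses zero:
#   w1 * x1 + w2 * x2 + b = 0   =>   x2 = -(w1 / w2) * x1 - b / w2
slope = -w1 / w2
print(slope)  # small magnitude: an almost-flat, slightly tilted line
```

Because tanh is monotone and zero at zero, this zero-crossing of the pre-activation is exactly the boundary drawn in the playground, whatever the activation's shape elsewhere.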
2019/12/07
1,206
4,821
<issue_start>username_0: It is my understanding that, in Q-learning, you are trying to mimic the optimal $Q$ function $Q\*$, where $Q\*$ is a measure of the predicted reward received from taking action $a$ at state $s$ so that the reward is maximised. I understand that, for this to be properly calculated, you must explore all possible game states, and as that is obviously intractable, a neural network is used to approximate this function. In a normal case, the network is updated based on the MSE of the actual reward received and the network's predicted reward. So a simple network that is meant to choose a direction to move would receive a positive gradient for all state predictions for the entire game and do a normal backprop step from there. However, to me, it makes intuitive sense to have the final layer of the network be a softmax function for some games. This is because in a lot of cases (like Go for example), only one "move" can be chosen per game state, and as such, only one neuron should be active. It also seems to me that this would work well with the gradient update, and the network would learn appropriately. But the big problem here is, this is no longer Q learning. The network no longer predicts the reward for each possible move; it now predicts which move is likely to give the greatest reward. Am I wrong in my assumptions about Q learning? Is the softmax function used in Q learning at all?<issue_comment>username_1: > > However, to me, it makes intuitive sense to have the final layer of the network be a softmax function for some games. This is because in a lot of cases (like Go for example), only one "move" can be chosen per game state, and as such, only one neuron should be active. > > > You are describing a network that approximates a *policy function*, $\pi(a|s)$, for a discrete set of actions. > > It also seems to me that this would work well with the gradient update, and the network would learn appropriately. 
> > > Yes there are ways to do this, based on the [Policy Gradient Theorem](https://towardsdatascience.com/policy-gradients-in-a-nutshell-8b72f9743c5d). If you read it you will probably discover this is more complex to understand than you first thought, the problem being that the agent is never directly told what the "best" action is in order to simply learn in a supervised manner. Instead, it has to be inferred from rewards observed whilst acting. This is a bit harder to figure out than the Q learning update rules which are just sampling from the [Bellman optimality equation](https://en.wikipedia.org/wiki/Bellman_equation). You can split Reinforcement Learning methods broadly into value-based methods and policy gradient methods. Q learning is a value-based method, whilst REINFORCE is a basic policy gradient method. It is also common to use a value based method *within* a policy gradient method in order to help estimate likely future return used to drive the policy gradient updates - this combination is called Actor-Critic where the actor learns a policy function $\pi(a|s)$ and the critic learns a value function e.g. $V(s)$. > > But the big problem here is, this is no longer Q learning. The network no longer predicts the reward for each possible move, it now predicts which move is likely to give the greatest reward. > > > This is true, but it is not a big problem. The main issue is that policy gradient methods are more complex than value based methods. They may or may not be more effective, it depends on the environment you are trying to create an optimal agent for. > > Is the softmax function used in Q learning at all? > > > I cannot think of any non-contrived environment in which this function would be useful for an action value approximation. However, it is possible to use a variant of softmax to create a behaviour policy for Q learning. 
This uses a temperature hyperparameter $T$ to weight the Q values, and provides the probability of selecting each action, as follows $$\pi(a\_i|s) = \frac{e^{Q(s,a\_i)/T}}{\sum\_j e^{Q(s,a\_j)/T}}$$ When $T$ is high, all the action probabilities will be similar; when it is low, even a small difference in $Q(s,a\_i)$ will make a big difference to the probability of selecting action $a\_i$. This is quite a nice distribution for exploring whilst avoiding previously bad decisions. It will tend to focus the agent on exploring differences between similarly high rated actions. The main issue with it is that it introduces hyperparameters for deciding the starting $T$, the ending $T$ and how to move between them. Upvotes: 3 [selected_answer]<issue_comment>username_2: This is still Q-learning; remember, Q-learning is off-policy and value-based. The Bellman optimality operator is $\mathcal{T}Q = r + \gamma \max\_{a'} Q(s', a')$. If you have enough exploration, repeated application always takes $Q$ to the optimal fixed point. Upvotes: 0
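The temperature-weighted softmax above can be written down directly; a minimal, numerically stable sketch:

```python
import numpy as np

def boltzmann_policy(q_values, temperature):
    # softmax over Q-values; subtracting the max avoids overflow in exp
    z = (np.asarray(q_values) - np.max(q_values)) / temperature
    e = np.exp(z)
    return e / e.sum()

q = [1.0, 2.0, 2.1]
print(boltzmann_policy(q, 10.0))  # high T: close to uniform exploration
print(boltzmann_policy(q, 0.1))   # low T: close to greedy action selection
```

During training the agent would sample an action from these probabilities (e.g. with `np.random.choice`) instead of always taking the arg-max.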
2019/12/07
1,250
4,887
<issue_start>username_0: Now I know this might break some StackExchange rules and I am definitely open for taking the thread down if it does! I am trying to build an AI that can write it's own book and I have no idea where to start or what are the appropriate algorithms and approaches to go with. How should I start and what do I exactly need for such a project?<issue_comment>username_1: Recurrent Neural Networks (RNNs) have been applied to generate text. In [this](https://karpathy.github.io/2015/05/21/rnn-effectiveness/) blog post you will find a couple of interesting text examples (the author also has made his code available on github), e.g. their Shakespeare-like texts generated by an RNN: > > PANDARUS: > Alas, I think he shall be come approached and the day > When little srain would be attain'd into being never fed, > And who is but a chain and subjects of his death, > I should not sleep. > > > Second Senator: > They are away this miseries, produced upon my soul, > Breaking and strongly should be buried, when I perish > The earth and thoughts of many states. > > > DUKE VINCENTIO: > Well, your wit is in the care of side and that. > > > Second Lord: > They would be ruled after this chamber, and > my fair nues begun out of the fact, to be conveyed, > Whose noble souls I'll have the heart of the wars. > > > Clown: > Come, sir, I will make did behold your worship. > > > VIOLA: > I'll drink it. > > > As you can see the RNN is able to somewhat mimic the "flow" of the texts it has been trained on but some sentences (like at the very end) do not make much contextual sense. Moreover, RNNs have been trained to generate other content, e.g. drawing numbers (see [here](https://arxiv.org/pdf/1502.04623.pdf)) or creating music (see [here](http://people.idsia.ch/~juergen/blues/IDSIA-07-02.pdf)). 
Upvotes: 2 <issue_comment>username_2: There have been many methods proposed for text generation, but recurrent networks dominate natural language processing thanks to a key component: the perception of time. Many models have been tried for text generation, with notable examples such as Markov chains. However, RNNs have been proven to work best and dominate the field of language modelling (text generation). How text generation works ========================= A neural network that generates text is commonly called a language model. It is trained on a large amount of text, with the labels being the next token. The text generation process uses several random tokens as the starting phrase and then the network predicts the rest. However, the network does not just predict the most probable word; instead it randomly chooses one of the few most probable tokens, hence the generating part. Why RNNs work best for language modelling ========================================= RNNs have a perception of time built into the architecture of the network. LSTM, a popular RNN variant, is composed of "memory units" that "remember" past text, hence the "time" part. RNNs process input according to the sequence of time, so the network can naturally understand time, hence the superior performance compared to other networks. Architecture of a language model ================================ A language model consists of an encoder and a decoder. The encoder compresses the one-hot representation of a word into a smaller vector representation. This smaller representation is then passed through the decoder, which maps the encoding back to the one-hot vectors of words. State of the art results for language modelling =============================================== Language modelling is an actively researched field in the AI community, and recently the model GPT-2 has achieved a breakthrough in language modelling quality, producing almost human-like text with a special component added: the attention layer. Attention basically maps the "memory states" of the encoder and feeds them as input to the decoder. The data the model is trained on is also very large, with over 20GB of web-scraped data from sites like Reddit. Limits of language modelling ============================ One limit of language modelling is the size of the generated text. As GPUs don't have unlimited memory, language models usually limit the input token size to a specific number, padding or trimming to this number. The number is usually 500-1000, which covers a paragraph or two, but not an entire book. You can only generate short paragraphs and essays with language modelling. For long text it is much harder. Resources to help you get started ================================= GPT-2 OpenAI blog: <https://openai.com/blog/better-language-models/> GPT-2 online interactive site for text generation: <https://talktotransformer.com/> How to train and fine-tune GPT-2 in Python: <https://minimaxir.com/2019/09/howto-gpt2/> Hope I can help you. Upvotes: 3 [selected_answer]
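For contrast with RNN language models, here is a minimal word-level Markov-chain generator, the classical baseline mentioned above (the corpus, context order and output length are arbitrary toy choices):

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    # map each context of `order` words to the words observed after it
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, seed, length=20):
    out = list(seed)
    for _ in range(length):
        followers = model.get(tuple(out[-len(seed):]))
        if not followers:
            break  # context never seen in the corpus: stop generating
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat saw the dog on the mat"
model = build_model(corpus)
print(generate(model, ("the", "cat"), 8))
```

Unlike an RNN, this model only ever sees the last two words, which is exactly why its output loses coherence so quickly and why the "perception of time" in recurrent models matters.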
2019/12/09
1,326
5,141
<issue_start>username_0: What is the difference between game theory and machine learning? I had gone through the papers [Deep Learning for Predicting Human Strategic Behavior](https://www.cs.ubc.ca/~jasonhar/GameNet-NIPS-2016.pdf), by <NAME> et al., and [When Machine Learning Meets AI and Game Theory](http://cs229.stanford.edu/proj2012/AgrawalJaiswal-WhenMachineLearningMeetsAIandGameTheory.pdf), by <NAME> et al., but I am not able to understand.<issue_comment>username_1: These are big areas, so here is a brief description of the differences: [Game theory](https://en.wikipedia.org/wiki/Game_theory) is concerned with studying solutions for 'games', which are basically a set of decisions leading to certain outcomes. In game theory you look at strategies to achieve the best outcome for a given participant. One classic example (which isn't really a game in the traditional sense) is the [Prisoner's Dilemma](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma): you and your friend have been arrested, and if only one of you testifies against the other, that person gets a reduced sentence, and the other one a much longer one. If you both testify against each other, you both get a medium sentence, and if you both keep quiet, you both go free. You don't know what your partner in crime does, so do you a) testify, or b) keep quiet? If you keep quiet, you might go free if your partner *also* keeps quiet, but if he testifies, you are in it for a long time. So it's risky to keep quiet, even though you get the better outcome. If you testify you might avoid a longer sentence, but also will not go free. What is your best choice? Game theory is often used in economics to model behaviour, as a rational agent would try to optimise gains. [Machine Learning](https://en.wikipedia.org/wiki/Machine_learning), on the other hand, is a way of training a statistical classifier. 
You feed features into an algorithm, and the algorithm then gives you a certain output, depending on the data you have trained it with. This hasn't got anything to do with game theory *per se*, but I guess you could use machine learning to train an algorithm to choose moves in a game situation and then compare how that matches the optimal choices according to game theory. As I said, this is a very brief comparison. For more details I suggest you follow the links to read up on those two fields. UPDATE: Now that the papers are accessible: game theory is indeed used as a benchmark. In the first paper, the *rational agent* assumption from game theory is being modeled, but without a human expert telling the algorithm what that means. So you learn (using deep learning) what it means to be rational. In the second paper the authors attempt to learn a better algorithm than tit-for-tat, and indeed use game theory as a theoretical framework for comparison/evaluation. Upvotes: 3 <issue_comment>username_2: The other answer gives the nice famous example of the sort of problem that game theory tackles and it *partially* describes what machine learning is. However, it does not emphasize that this type of game theory problem, where you have two or more [agents](https://ai.stackexchange.com/q/12991/2444) competing with each other, also appears in the context of machine learning. More concretely, machine learning can also be applied in the context of a **multi-agent system**, where you have multiple learning agents that compete with each other in an environment. Typical examples of these problems are two-player board games, like chess, go, or tic-tac-toe, which can be solved with machine learning, and, in particular, reinforcement learning (a specific type of machine learning): for instance, you can learn [afterstate value functions](https://ai.stackexchange.com/q/24816/2444) to play tic-tac-toe. 
There's a subarea of RL that tackles these problems with multiple agents, known as **multi-agent reinforcement learning (MARL)**. One simple mathematical framework that generalises MDPs to multiple agents is the [Markov games (aka stochastic games)](https://courses.cs.duke.edu/spring07/cps296.3/littman94markov.pdf), which can be used to model games like rock-paper-scissors or tic-tac-toe. We could also model a multi-agent system as a single-agent system, where the other agents are incorporated into the environment. If you are interested in MARL, you could read, for example, the paper [A comprehensive survey of multiagent reinforcement learning](https://www.dcsc.tudelft.nl/%7Ebdeschutter/pub/rep/07_019.pdf) (2008) by <NAME> et al. So, I think there are several connections between game theory and machine learning, and even other subareas of AI, such as game AI (e.g. the [minimax algorithm](https://en.wikipedia.org/wiki/Minimax) is often taught in AI programs as an example of an adversarial search algorithm; read [this](https://ai.stackexchange.com/a/11805/2444) to know more about the difference between search and learning) and evolutionary algorithms (in fact, there's also a related subfield of game theory known as [evolutionary game theory](https://ai.stackexchange.com/q/16807/2444)). Upvotes: 0
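The Prisoner's Dilemma described in the first answer can be checked mechanically; the payoff numbers below (years in prison, lower is better) are an assumed illustration, not taken from the answer:

```python
# payoffs[(my_move, their_move)] = years in prison for "me"
# (assumed numbers consistent with the story: one testifier goes free,
# mutual testimony gives medium sentences, mutual silence short ones)
payoffs = {
    ("testify", "testify"): 5,
    ("testify", "quiet"): 0,
    ("quiet", "testify"): 10,
    ("quiet", "quiet"): 1,
}

def best_response(their_move):
    # the move that minimises my sentence given the partner's move
    return min(["testify", "quiet"], key=lambda m: payoffs[(m, their_move)])

# testifying is best whatever the partner does: a dominant strategy,
# even though (quiet, quiet) would be better for both players together
print(best_response("testify"), best_response("quiet"))
```

This "best response regardless of the opponent" structure is what makes (testify, testify) the equilibrium outcome, and it is the kind of benchmark against which learnt strategies are compared in the linked papers.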
2019/12/09
1,534
6,015
<issue_start>username_0: Say I have a CNN with this structure: * input = 1 image (say, 30x30 RGB pixels) * first convolution layer = 10 5x5 convolution filters * second convolution layer = 5 3x3 convolution filters * one dense layer with 1 output So a graph of the network will look like this: ![Convolutional Neural Network](https://i.stack.imgur.com/6aXFM.png) Am I correct in thinking that the first convolution layer will create 10 new images, i.e. each filter creates a new intermediary 30x30 image (or 26x26 if I crop the border pixels that cannot be fully convoluted). Then the second convolution layer, is that supposed to apply the 5 filters **on all of the 10 images from the previous layer**? So that would result in a total of 50 images after the second convolution layer. And then finally the last FC layer will take all data from these 50 images and somehow combine it into one output value (e.g. the probability that the original input image was a cat). Or am I mistaken in how convolution layers are supposed to operate? Also, how to deal with channels, in this case RGB? Can I consider this entire operation to be separate for all red, green and blue data? I.e. for one full RGB image, I essentially run the entire network three times, once for each color channel? Which would mean I'm also getting 3 output values.
2019/12/10
714
3,064
<issue_start>username_0: I have made several neural networks by using [Brain.Js](https://brain.js.org/#/) and [TensorFlow.js](https://www.tensorflow.org/js). What is the difference between artificial intelligence and artificial neural networks?<issue_comment>username_1: From wikipedia: > > Artificial neural networks (ANN) or connectionist systems are computing systems that are inspired by, but not identical to, biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with task-specific rules. > > > Artificial Intelligence, on the other hand, refers to the broad term of > > intelligence demonstrated by machines > > > This obviously doesn't clear much up, so the next logical question is: "What is intelligence?" This, however, is one of the most debated questions in computer science and many other fields, so there isn't a straight answer for this. The most you can do is decide for yourself what you think intelligence refers to, because as far as we know, there is no agreed-upon way of quantifying intelligence, and so the definition will remain ambiguous. Upvotes: 1 [selected_answer]<issue_comment>username_2: Artificial intelligence is a broad field; ANN is a specific technique within that field. Upvotes: 2 <issue_comment>username_3: I would explain it as follows: Artificial Intelligence is a huge topic spanning many fields, such as robotics, computer vision, machine learning, etc. It focuses on any "intelligent" task that a computer can do. Artificial Neural Networks are a sub-topic of Machine Learning and, as you've probably seen given your experience with them, deal with a specific way of solving problems using a set of 'neurons' that try to imitate actual biological neurons. Explained in a really simplistic way, it is a method of fitting a function to your specific data in such a way that it generalizes and gives good predictions on test data.
By 'training' the network, you're basically trying to find better values for the weights (analogous to synapses in an actual brain, the connections between the neurons) in order to give better outputs in general on that specific type of data instead of just one case. Upvotes: 1 <issue_comment>username_4: Artificial intelligence can refer to a broad range of techniques by which machines (algorithms) demonstrate utility (fitness in an environment, where the environment may be either virtual or physical). This can include [symbolic AI](https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence), which utilizes logic and search exclusively. (Symbolic AI is sometimes referred to as "good old fashioned AI" aka gofai, or "Classical AI".) * A key distinction is that Neural Networks constitute a form of "statistical AI", which renders them capable of learning by trial/error & analysis. The recent strength & applicability of statistically driven AI methods has been facilitated by advances in processing power and memory. Upvotes: 2
2019/12/10
1,005
4,468
<issue_start>username_0: Suppose a deep neural network is created using Keras or Tensorflow. Usually, when you want to make a prediction, **the user** would invoke `model.predict`. However, how would the actual AI system proactively invoke its own actions (i.e. without the need for me to call `model.predict`)?<issue_comment>username_1: The short answer, I think, is that it cannot. The AI system will only do, and it will only be good at, the task that the programmer made it for. Of course you could have an AI that, for example, can trigger a prediction on the input with different models depending on some other variables, but that will still be based on what the programmer wrote; it will never be able to do or learn new, unintended things. Think of having model.predict() for an image classification NN in a loop that only stops when it detects a dog, and then using another model to predict the breed, for example. What you mentioned about "letting the AI loose on the network" usually is part of some concerns about AI that it could evolve, learn new actions and start acting on its own. But those people unknowingly are actually talking about a general AI or strong AI, an AI system that could be as smart as a human so it could act on its own too. But as far as I know, at least, we are not even close to creating such a system. Hope I actually answered your question and didn't deviate too much from what you actually asked. Please tell me if I did. Upvotes: 2 <issue_comment>username_2: Neural networks, deep learning and other supervised learning algorithms do not "take actions" by themselves, they lack [agency](https://en.wikipedia.org/wiki/Intelligent_agent). However, it is relatively easy to give a machine agency, as far as taking actions is concerned.
That is achieved by connecting inputs to some meaningful data source in the environment (such as a camera, or the internet), and connecting outputs to something that can act in that environment (such as a motor, or the API to manage an internet browser). In essence this is no different from any other automation that you might write to script useful behaviour. If you could write a series of tests, if/then statements or mathematical statements that made useful decisions for any machine set up this way, then in theory a neural network or similar machine learning algorithm could learn to approximate, or even improve upon the same kind of function. If your neural network has already been trained on example inputs and the correct actions to take to achieve some goal given those inputs, then that is all that is required. However, training a network to the point where it could achieve this in an unconstrained environment ("letting it loose on the internet") is a tough challenge. There are ways to train neural networks (and learning functions in general) so that they learn useful mappings between observations and actions that progress towards achieving goals. You can use genetic algorithms or other search techniques for instance, and [the NEAT approach](https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_topologies) can be successful training controllers for agents in simple environments. [Reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning) is another popular method that can also scale up to quite challenging control environments. It can cope with complex game environments such as Defense of the Ancients, Starcraft, Go. The purpose of demonstrating AI prowess on these complex games is in part showing progress towards a longer-term goal of optimal behaviour in the even more complex and open-ended real world. 
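Stripped of any learning, the "agency" part described above is just a sense-decide-act loop: the loop, not a human, invokes the model at each step. A toy sketch with a hand-written policy standing in for a trained model (all names here are hypothetical):

```python
class ToyEnvironment:
    """A hypothetical one-dimensional world: the agent walks toward a goal."""
    def __init__(self, goal=5):
        self.position = 0
        self.goal = goal

    def observe(self):
        # The "camera": what the agent is allowed to sense.
        return self.position

    def act(self, action):
        # The "motor": how the agent affects the world (action is -1 or +1).
        self.position += action

def policy(observation, goal=5):
    # Stand-in for model.predict(): any trained model mapping
    # observations to actions could be dropped in here.
    return 1 if observation < goal else -1

env = ToyEnvironment()
for _ in range(20):  # the loop itself invokes the model, not the user
    env.act(policy(env.observe()))
```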
State of the art agents are still quite a long way from *general* intelligent behaviour, but the problem of using neural networks in a system that learns how to act as an agent has much research and many examples available online. Upvotes: 5 [selected_answer]<issue_comment>username_3: You invoke it in a loop. Imagine a digital assistant responding to voice queries. It might look something like this:

```
for (;;) {
    var audio = RecordSomeAudio();
    var response = model.predict(audio);
    if (response.action == "SAYSOMETHING") {
        PlaySomeAudio(response.output);
    }
}
```

Note that the model gets invoked repeatedly and can decide in a given situation whether to respond or not. In a digital assistant context, part of the model's job would be to check whether the user raised a query (e.g. "Hey Google" etc.). Upvotes: 2
2019/12/10
996
4,037
<issue_start>username_0: How does a transformer leverage the GPU to be trained faster than RNNs? I understand the parameter space of the transformer might be significantly larger than that of the RNN. But why can the transformer structure leverage multiple GPUs, and why does that accelerate its training?<issue_comment>username_1: The issue with recurrent models is that they don't allow parallelization during training. Sequential models perform better when given more memory, but they have trouble learning long-term dependencies. Transformers, on the other hand, rely on **self-attention**, which speeds up how fast the model can translate from one sequence to another, establishes dependencies between input and output, and focuses on the relevant parts of the input sequence. This eliminates recurrence and convolution, unlike RNNs, where sequential computation inhibits parallelization. Upvotes: 0 <issue_comment>username_2: A recurrent neural network (RNN) depends on the hidden state from the previous time step. That is, an RNN is a function of both the data for the sequence at time $t$ and the hidden state from time $t-1$. This means that we cannot compute the $t$th hidden state without calculating the $t-1$th state, and the $t-1$th state without the $t-2$th state, and so on. In contrast to this, a transformer is able to fully parallelise the processing of the sequence because it does not have this recursive relationship, i.e. a transformer is not a recursive function -- the sequential nature of the data is processed in other ways, such as through positional encoding. We can see this by the way self attention works. If we first consider the general attention mechanism framework, then we have a query $\textbf{q}$ and a set of paired key-value tuples $\textbf{k}\_1, ..., \textbf{k}\_n$ and $\textbf{v}\_1, ..., \textbf{v}\_n$.
In general, for each key, we will apply some attention function $\beta$ (such as a neural network) to obtain attention scores, $a\_i = \beta(\textbf{q}, \textbf{k}\_i)$. We then define an attention vector $\textbf{a}$ where the $i$th element is the $i$th attention score, and we take a softmax of this vector to obtain attention weights $\alpha\_i$, where $\alpha\_i$ is the $i$th element of $\mbox{softmax}(\textbf{a})$. The output of the attention mechanism for query $\textbf{q}$ is then the weighted sum $\sum\_{i=1}^n \alpha\_i \textbf{v}\_i$. Now that we have the necessary background for an attention mechanism, we can look at self attention, which is the backbone of Transformer. If we have a sequence denoted by $\{\textbf{x}\_1, ..., \textbf{x}\_n\}$, then we can define the queries, keys and values to be these $\textbf{x}\_i$ values. Note that previously we only had a single query, but here we have $n$ queries, which is really how Transformer is able to parallelise the processing of the sequence. We can define $\textbf{Q}, \textbf{K}, \textbf{V}$ to be the matrices of the queries, keys and values (e.g. the $i$th row of $\textbf{Q}$ corresponds to the $i$th query, and similarly for the others). Self attention is as simple as performing attention over these queries, keys and values -- the name *self* comes from the fact that the queries, keys and values all come from the same sequence. Now, we can write the above attention mechanism as $a\_{i, j} = \beta(\textbf{q}\_i, \textbf{k}\_j)$, where we now have a matrix of attention scores (because we have $n$ queries and $n$ keys, the matrix will be square), and we can take the softmax row-wise to get the attention weights (again, this will be an $n \times n$ matrix). If we call the matrix of attention weights $\textbf{A}$, then the output of a self attention layer will be given by $\textbf{A} \textbf{V}$. As you can see, there is no recursive nature here and this is all parallelisable, e.g.
it can be broken up and put onto multiple GPUs at the same time -- this would not be possible with an RNN, as you would have to wait for the output of the previous time step. Upvotes: 2
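The matrix form of self-attention described above fits in a few lines of NumPy. This is a minimal sketch of plain dot-product attention with $Q = K = V = X$, deliberately omitting the usual $1/\sqrt{d}$ scaling and learned projection matrices:

```python
import numpy as np

def softmax(a, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Dot-product self-attention with Q = K = V = X.
    One matrix product scores every (query, key) pair at once, so no
    position has to wait for the previous position's output."""
    A = softmax(X @ X.T)  # n x n attention weights, each row sums to 1
    return A @ X          # n x d outputs, one per sequence position

rng = np.random.default_rng(0)
X = rng.standard_normal((7, 4))  # a sequence of 7 positions, dimension 4
out = self_attention(X)          # shape (7, 4)
```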
2019/12/11
1,386
5,397
<issue_start>username_0: Why is dropout favored compared to reducing the number of units in hidden layers for the convolutional networks? If a large set of units leads to overfitting and dropping out "averages" the response units, why not just suppress units? I have read different questions and answers on the dropout topic including these interesting ones, [What is the "dropout" technique?](https://ai.stackexchange.com/q/40/2444) and this other [Should I remove the units of a neural network or increase dropout?](https://ai.stackexchange.com/q/9512/2444), but did not get the proper answer to my question. By the way, it is weird that this publication [A Simple Way to Prevent Neural Networks from Overfitting (2014)](http://jmlr.org/papers/volume15/srivastava14a.old/srivastava14a.pdf), <NAME> et al., is cited as being the first on the subject. I have just read one that is from 2012: [Improving neural networks by preventing co-adaptation of feature detectors](https://arxiv.org/pdf/1207.0580.pdf).<issue_comment>username_1: Dropout only ignores a portion of units during a single training batch update. Each training batch will use a different combination of units which gives them the best chance of that portion of them working together to generalize. Note the weights for each unit are kept and will be updated during the next batch in which that unit is selected. During inference, yes, all units are used (with a factor applied to activation...the same factor that defines the fraction used)...this becomes essentially an ensemble of all the different combinations of units that were used. Contrasted with fewer units, the fewer units approach will only learn what those fewer units can be optimized for. 
Think of dropout as an ensemble of layers of fewer units (with the exception that there is partial weight sharing between the layers). Upvotes: 4 [selected_answer]<issue_comment>username_2: The idea of dropout is that, **at training time**, with a certain probability $p\_i \in [0, 1]$, the unit (or neuron) $i$ is dropped, $\forall i$, that is, the output of unit $i$ is set to zero so that $i$ does not affect the other units it is connected to, both during the forward and backward (or back-propagation) passes (or steps). At every mini-batch, you randomly drop usually different units, so, across different mini-batches (and consequently epochs), you do **not** always or necessarily drop the same units. The title of the paper [Improving neural networks **by preventing co-adaptation of feature detectors**](https://arxiv.org/pdf/1207.0580.pdf) emphasizes that dropout prevents the co-adaptation of the units (the feature detectors), so units attempt to detect certain features independently of other units, which reduces overfitting, that is, it improves the generalization ability of the neural network. At test time, no unit is usually dropped. However, there is an approximation of a [deep Gaussian process](http://proceedings.mlr.press/v31/damianou13a.pdf) and [Bayesian neural network](https://arxiv.org/abs/1505.05424) that is based on the application of dropout at training and test times. This is called **Monte Carlo dropout** or, in short, MC dropout, for reasons you can understand if you read the paper [Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning](https://arxiv.org/pdf/1506.02142.pdf). There's also the possibility to drop the connections between the neurons, which is called [DropConnect](http://proceedings.mlr.press/v28/wan13.html), rather than the neurons themselves. These two approaches are slightly different, even though DropConnect can be seen as a generalization of dropout.
In DropConnect, you do not switch off completely the units, but only the contributions of certain units to the output of other units. In dropout, you completely switch off certain units. If you decided to **deterministically** (and manually) reduce the number of units before training, essentially, you would train a **fixed** smaller network, but this will not necessarily reduce overfitting or, more precisely, co-adaptation of the units. In dropout, you **randomly** select the units to drop, so, at every mini-batch (or epoch, depending on the implementation of dropout), you effectively train a **random** subset of the units of the original neural network and, because of this, it can be thought of as an ensemble of smaller neural networks. The two papers [Improving neural networks by preventing co-adaptation of feature detectors](https://arxiv.org/pdf/1207.0580.pdf) (2012) and [Dropout: A Simple Way to Prevent Neural Networks from Overfitting](http://jmlr.org/papers/volume15/srivastava14a.old/srivastava14a.pdf) (2014) have exactly the same authors, but the latter paper was published in the [Journal of Machine Learning Research](http://www.jmlr.org/), while the former wasn't apparently published in any journal. In fact, [Dropout: A Simple Way to Prevent Neural Networks from Overfitting](http://jmlr.org/papers/volume15/srivastava14a.old/srivastava14a.pdf) does not even cite [Improving neural networks by preventing co-adaptation of feature detectors](https://arxiv.org/pdf/1207.0580.pdf), but it cites the master's thesis [Improving Neural Networks with Dropout](http://www.cs.toronto.edu/~nitish/msc_thesis.pdf) (2013) by <NAME>, who is one of the authors of dropout. Upvotes: 2
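The training-time mechanics described in these answers amount to a few lines of array code. A minimal NumPy sketch of *inverted* dropout (a common implementation choice, not the only one, and not taken from any of the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_drop=0.5, training=True):
    """Inverted dropout: at training time each unit is zeroed with
    probability p_drop and the survivors are rescaled by 1 / (1 - p_drop),
    so at test time the layer is simply used unchanged."""
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p_drop  # a fresh random mask per call
    return activations * mask / (1.0 - p_drop)

h = np.ones((4, 10))                             # some layer's activations
h_train = dropout(h, p_drop=0.5, training=True)  # entries are 0.0 or 2.0
h_test = dropout(h, training=False)              # identical to h
```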
2019/12/11
1,184
4,757
<issue_start>username_0: Say a simple neural network's input is a collection of tags (encoded in some way), and the output is an image that corresponds to those tags. Say this network consists of some dense layers and some reverse (transpose) convolution layers. What is the disadvantage of this network, that directs people to invent fairly complicated things like GANs or VAEs?
2019/12/11
2,048
6,848
<issue_start>username_0: I'd like to ask for any kind of assistance regarding the following problem. I was given the following training data: 100 numbers, each one a parameter, which together define a number X (also given). This is one instance, and I have 20,000 instances for training. Next, I have 5000 lines given, each containing the 100 numbers as parameters. My task is to predict the number X for these 5000 instances. I am stuck because I only know of the sigmoid activation function so far, and I assume it's not suitable for cases like this, where the output values aren't either 0 or 1. So my question is this: what's a good choice for an activation function, and how does one go about implementing a neural network for a problem such as this?<issue_comment>username_1: Usually you normalize the data first, meaning that your whole dataset will be between 0 and 1. Afterwards, once you have the model's predictions, you can apply the inverse of the normalization function when computing the cost function or evaluating the model. Upvotes: 1 <issue_comment>username_2: For regression, you can use a hidden layer with sigmoid, then a LINEAR output layer, where the weighted sum goes straight through without modification. This way your output is not restricted to 0-1. Upvotes: 1 <issue_comment>username_3: Let's mock some data up.
=======================

> "100 numbers, each one is a parameter, they together define a number X (also given)"

```
# i.e. size of X_train -> [n x d]
# i.e. size of X_train -> [??? x 100], when d = 100
# "I have 20000 instances for training"
# i.e. size of X_train -> [20000 x 100], when n = 20000
import torch
import numpy as np
X_train = torch.rand((20000, 100))
X_train = np.random.rand(20000, 100)  # Or using numpy
```

But what is your Y?
===================

```
# Since the definition of a regression task
# loosely means to predict an output real number
# given an input of d dimensions,
# the appropriate Y_train would
# be of dimension [n x 1]
# and look like this:
y_train = torch.rand((20000, 1))
y_train = np.random.rand(20000, 1)  # Or using numpy
```

What is a linear perceptron?
============================

Taking the definition from [this tutorial](https://github.com/alvations/tsundoku/blob/master/completed/Session%201%20-%20Hello%20DL%20World.ipynb)

[![enter image description here](https://i.stack.imgur.com/eEVd5.png)](https://i.stack.imgur.com/eEVd5.png)

Thus, in picture:
=================

[![enter image description here](https://i.stack.imgur.com/ZPM3A.png)](https://i.stack.imgur.com/ZPM3A.png)

Next we need to define the training routine
===========================================

For now take it as biblical truth that this is an okay routine to train a neural net model (it isn't the only way, but it is the easiest for supervised learning):

[![enter image description here](https://i.stack.imgur.com/4Jt7M.png)](https://i.stack.imgur.com/4Jt7M.png)

In code:
========

```
import numpy as np

np.random.seed(0)

def sigmoid(x):
    # Squashes each value into the range (0, 1).
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(sx):
    # See https://math.stackexchange.com/a/1225116
    # Hint: let sx = sigmoid(x)
    return sx * (1 - sx)

def error(predicted, truth):
    # Signed error; keeping the sign tells us which
    # direction to move each weight.
    return truth - predicted

num_epochs = 10000    # No. of times to iterate.
learning_rate = 0.03  # How large a step to take per iteration.

# Let's standardize and call our inputs X and outputs Y.
X = np.random.rand(20000, 100)
Y = np.random.rand(20000, 1)

# Initialize the weights, one per input dimension.
W = np.random.rand(100, 1)

for _ in range(num_epochs):
    layer0 = X
    # Step 2a: Multiply the weights vector with the inputs, sum the products, i.e. s
    # Step 2b: Put the sum through the sigmoid, i.e. f()
    layer1 = sigmoid(np.dot(layer0, W))

    # Back propagation.
    # Step 3a: Compute the errors, i.e. difference between expected output and predictions
    # How much did we miss?
    layer1_error = error(layer1, Y)
    # Step 3b: Multiply the error with the derivative to get the delta
    # (how much we missed, scaled by the slope of the sigmoid at layer1)
    layer1_delta = layer1_error * sigmoid_derivative(layer1)
    # Step 3c: Multiply the delta vector with the inputs, sum the products (np.dot)
    # Step 4: Multiply the learning rate with the output of Step 3c and update W.
    W += learning_rate * np.dot(layer0.T, layer1_delta)
```

Now that we have learned the model, i.e. the `W`, when we see the data points that we need to use the model on, we apply the same forward-propagation step, i.e. `layer1 = sigmoid(np.dot(X, W))`

Since we have:

> I have 5000 lines given, each containing the 100 numbers as parameters. My task is to predict the number X for these 5000 instances.

And in code:

```
# If we mock up the data,
# it should have the same inner dimension.
X_test = np.random.rand(5000, 100)

# The desired output just needs to pass through the W and the activation:
# the shape of `output` -> [5000 x 1],
# where there's 1 output value for each input.
output = sigmoid(np.dot(X_test, W))
```
 Upvotes: 2 <issue_comment>username_4: The quick answer is that you want to use an activation function on the output layer that does not compress values to $(0,1)$. Depending on your software, this might be called "linear" or "identity". It looks like Keras just wants you to leave off the activation function: `model.add(Dense(1))`. The typical way of thinking of a neural network as a classifier (let's say a binary classifier) is just extending a logistic regression. In fact, when you use a sigmoid activation function on the output node, you're (sort of) running logistic regression on the final hidden layer. A logistic regression is one type of generalized linear model. The gist of GLM is that some transformation of the value of interest is a linear function of the feature space.
Let $X$ be the data matrix for the feature space. Let $\beta$ be a parameter vector. Then $\hat{y} = \mathbb{E}[y] = X\beta$ is the linear model, and $g(\mathbb{E}[y]) = X\beta$ is the generalized linear model (vectorized, so apply $g$ to each $y\_i$). But we could extend this to a nonlinear transformation, and when a neural network is a binary classifier, this is precisely what we're doing. Instead of the transformation of $X$ being given by $\beta$ and thus linear, we apply some nonlinear transformation $f$ and get $g(\mathbb{E}[y]) = f(X)$. The terminology in GLM is "link function", but that is essentially the activation function on the final node(s) of the neural network. Consequently, all of the GLM link functions are in play, and one of those link functions is the identity function. For a GLM, that's just linear regression. For your neural network, it will be a neural network (nonlinear) regression, which sounds like what you want. Upvotes: 2
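The "linear/identity output layer" advice in these answers amounts to a one-line change in the forward pass. A minimal untrained sketch (100 input features as in the question; the hidden size of 16 is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Untrained, randomly initialised weights.
W1, b1 = rng.standard_normal((100, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 1)), np.zeros(1)

def predict(X):
    """Sigmoid hidden layer, identity (linear) output layer: the final
    weighted sum is returned unmodified, so predictions are not
    confined to (0, 1)."""
    hidden = sigmoid(X @ W1 + b1)
    return hidden @ W2 + b2

X = rng.standard_normal((5, 100))
y_hat = predict(X)  # shape (5, 1); values can be any real number
```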
2019/12/11
924
3,902
<issue_start>username_0: So I’ve been working on my own little dynamic architecture for a deep neural network (any number of hidden layers with any number of nodes in every layer) and got it solving the XOR problem efficiently. I moved on to trying to see if I could train my network to classify a number as being divisible by another number or not, while experimenting with different network structures, and have noticed some odd things. I know this is a weird thing to try and train a neural network to do, but I just thought it might be easy because I can simply generate the training data set and test data set programmatically. From what I’ve tested, it seems that my network is only really good at identifying whether or not a number is divisible by a number that is a power of 2. If you test divisibility by a power of two, it converges on a very good solution very quickly. And it generalizes well on numbers outside of the training set - which I guess kind of makes sense, as I’m inputting the numbers into the network in binary representation, so all the network has to learn is that a number n is only divisible by 2^m if the last m digits in the binary input vector are 0 (i.e. fire the output neuron if the last m neurons on the input layer don't fire, else don't). When checking divisibility by non-powers of two, however, there does not seem to be as much of a "positional" (maybe that's the word, maybe not) relationship between the input bits and whether or not the number is divisible. I thought, though, that if I threw more neurons and layers at the problem it might be able to solve classifying divisibility by other numbers – but that is not the case. The network seems to converge on not-so-optimal local minima on the cost function (for which I am using mean-squared-error) when dividing by numbers that are not powers of 2. I’ve tried different learning rates as well to no avail.
Do you have any idea what would cause something like this or how to go about trying to fix it? Or are plain deep neural networks maybe just not good at solving these types of problems? Note: I should also add that I've tried using different activation functions for different layers (like having leaky-relu activation for your first hidden layer, then sigmoid activation for your output layer, etc.), which also does not seem to have made a difference. Here is my code if you feel so inclined as to look at it: <https://github.com/bigstronkcodeman/Deep-Neural-Network/blob/master/Neural.py> (beware: it was all written from scratch by me in the quest to learn, so some parts (namely the back-propagation) are not very pretty - I am really new to this whole neural network thing)<issue_comment>username_1: When you represent a number in base 2 (binary), you have already divided the number by 2 many times. If there is no remainder at the end, the number is obviously evenly divisible by 2. This hints that your AI could test for divisibility by dividing. Hmm-- not much gained there! Unfortunately the problem is not one suited to solving via AI. That's why factorization of large numbers is a good basis for hard encryption schemes. I'd suggest finding a different sort of problem on which to test your AI. Upvotes: 1 <issue_comment>username_2: There is a recent development in research that looked into the effectiveness of neural networks on arithmetic. Interestingly, feed-forward neural networks (MLPs) with various activation functions, as well as LSTMs (RNNs which are Turing-complete), are not able to model simple arithmetic operations (e.g. addition/multiplication); the authors designed a new logic unit which can solve these simple arithmetic problems. See: [Neural Arithmetic Logic Units](https://arxiv.org/abs/1808.00508) More recently, DL can solve symbolic maths: [Deep Learning for Symbolic Mathematics](https://arxiv.org/abs/1912.01412) Upvotes: 4 [selected_answer]
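As an aside, the positional rule the question conjectures (n is divisible by 2^m exactly when its last m bits are zero) can be stated directly in code, which shows why it is so easy for a network reading binary inputs to pick up:

```python
def divisible_by_power_of_two(n, m):
    """True iff n is divisible by 2**m, i.e. the last m bits of n are zero --
    the positional rule the network appears to have learned."""
    return n & ((1 << m) - 1) == 0

assert divisible_by_power_of_two(24, 3)      # 24 = 0b11000, divisible by 8
assert not divisible_by_power_of_two(20, 3)  # 20 = 0b10100, not divisible by 8
```

No comparably local rule exists for divisors like 3 or 5, where divisibility depends on every bit of the input at once.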
2019/12/13
384
1,657
<issue_start>username_0: I need to understand whether it is better to use AI algorithms (ML, DL, etc.) instead of a classic parser (based on grammars with regular expressions and automata) for the following task: structuring an unstructured plain text as XML. The text is a legal document, so the structure is well defined and a classic parser could do a good job. In case AI could be a viable way, what would be an appropriate approach for the task?<issue_comment>username_1: A rule-based approach will guarantee a correct result, and it works perfectly fine. On the other hand, an AI-based approach will introduce errors, as AI cannot produce results with 100% accuracy, and will also be slower. As the document you are processing is a legal document, it would be better to use a parser, as AI would only add wasted time and inaccurate results. Hope I can help you. Upvotes: 2 <issue_comment>username_2: The question confuses ML (including DL) with AI. AI is a bigger field than ML and includes [rule-based systems](https://en.wikipedia.org/wiki/Rule-based_system) You probably need to extract entities (spans of text) from the unstructured text and embed them into an XML. ML (and DL) are good when the problem is fuzzy (you would need very many rules to solve it), so it could be a valid option if you have a variety of document structures that each needs its own set of rules. You would need enough data to train your models in this case. Otherwise, if you have limited document structures, very limited data (maybe none), and 100% accuracy is expected, then going with rules is the obvious choice. Upvotes: 2 [selected_answer]
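To illustrate the rule-based route the answers recommend, here is a toy sketch; the "Article" heading pattern and the sample text are illustrative assumptions, not a real legal format:

```python
# A minimal rule-based sketch: one hand-written regular expression lifts a
# (hypothetical) legal-document heading structure into XML.
import re
import xml.etree.ElementTree as ET

text = """Article 1. Scope
This act applies to all contracts.
Article 2. Definitions
Contract means a binding agreement."""

root = ET.Element("document")
current = None
for line in text.splitlines():
    m = re.match(r"Article (\d+)\.\s*(.+)", line)
    if m:
        # A heading line opens a new <article> element.
        current = ET.SubElement(root, "article", number=m.group(1), title=m.group(2))
    elif current is not None:
        # Any other line becomes a <paragraph> under the current article.
        ET.SubElement(current, "paragraph").text = line

xml = ET.tostring(root, encoding="unicode")
```

A real legal corpus would need more patterns (sections, numbered clauses, annexes), but as long as the document structure is this regular, each pattern is one deterministic rule with no training data required.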
2019/12/13
1,618
6,407
<issue_start>username_0: I previously asked a question about [How can an AI freely make decisions?](https://ai.stackexchange.com/q/17025/2444). I got a great answer about how current algorithms lack *agency*. The first thing I thought of was *reinforcement learning*, since the entire concept is oriented around an agent getting rewarded for performing a correct action in an environment. It seems to me that reinforcement learning is the path to AGI. I'm also thinking: what if an agent were proactive instead of reactive? That would seem like a logical first step towards AGI. What if an agent could figure out what questions to ask based on its environment? For example, it experiences an apple falling from a tree and asks "What made the apple fall?". But it's similar to us not knowing what questions to ask about, say, the universe.<issue_comment>username_1: Some AI researchers do think RL is a path to AGI, and your intuition about how an agent would need to be *proactive* in selecting actions to learn about is exactly the area these researchers are now focused on. Much of the work in this area is focused on the idea of *curiosity*, and since [2014](https://sites.ualberta.ca/%7Eamw8/surprise.pdf) this idea has gained a lot of [traction](https://pathak22.github.io/large-scale-curiosity/) in the research community. So, maybe RL *can* lead to AGI. We don't know for sure yet. However, many of the classic arguments against AGI aren't addressed by the RL approach. For instance, if, like [Searle](https://plato.stanford.edu/entries/chinese-room/), you think computers just don't have the right kind of hardware to do thinking, then running an RL algorithm on that hardware isn't going to yield AGI, just increasingly robust narrow AI. Ultimately Searle's arguments get into issues of metaphysics, so it isn't clear that there exists *any* argument that would convince someone like Searle that a particular computer-based technique is AGI-capable. There are also other arguments. 
For example, the [cognitivist](https://www.learning-theories.com/cognitivism.html) school of thought thinks that statistical learning approaches to AI, and, in particular, the black-box approaches of statistically-driven RL, are unlikely to lead to general intelligence because they do not engage in the kind of systematic reasoning process that proponents of cognitivism assume is necessary for general intelligence. Some more extreme proponents of this school might say that a logical planning algorithm like [STRIPS](http://www.primaryobjects.com/2015/11/06/artificial-intelligence-planning-with-strips-a-gentle-introduction/) is innately more intelligent than any approach based on deep learning, because it involves sound logical deduction rather than mere statistical calculation. In particular, STRIPS can correctly generalize to any new domain, as long as it is fed the correct sense data, while an RL approach will need to learn how to act there. So, while there are definitely reasons to be optimistic about RL as a direction for achieving AGI, it's definitely not yet settled. Upvotes: 5 [selected_answer]<issue_comment>username_2: A relatively recent but interesting paper that discusses this topic in more detail is [Reward is enough](https://www.sciencedirect.com/science/article/pii/S0004370221000862) (Artificial Intelligence, 2021) by <NAME>, <NAME>, <NAME>, and <NAME> (so by some of the godfathers of RL, who are all at DeepMind). Their **reward-is-enough hypothesis (RIEH)** (page 4) is > > **Hypothesis** (Reward-is-Enough). Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment. 
> > > This hypothesis is slightly different from the [reward hypothesis (RH)](https://ai.stackexchange.com/q/16610/2444), which states that *all goals* can be represented by rewards and the achievement of those goals can be viewed or formulated as the maximization of rewards, because the RIEH also states that the **abilities needed to achieve the main goal in the environment arise from the maximization of the reward**, so the RIEH is a stronger hypothesis than the RH. The authors give examples to explain the RIEH (emphasis mine). > > **Sophisticated abilities** may arise from the maximisation of simple rewards in complex environments. For example, the **minimisation of hunger** in a squirrel’s natural environment demands a skilful **ability to manipulate nuts** that arises from the interplay between (among other factors) the squirrel’s musculoskeletal dynamics; objects such as leaves, branches, or soil that the squirrel or nut may be resting upon, connected to, or obstructed by; variations in the size and shape of nuts; environmental factors such as wind, rain, or snow; and changes due to ageing, disease or injury. Similarly, the **pursuit of cleanliness** in a kitchen robot demands a sophisticated **ability to perceive utensils** in an enormous range of states that includes clutter, occlusion, glare, encrustation, damage, and so on. > > > They also try to argue why language, perceptron, social intelligence and general intelligence could all arise from the maximization of a single reward signal (e.g. survival). Moreover, they also say that similar sophisticated abilities associated with intelligence could arise from the maximization of different reward signals, i.e. the emergence of these abilities is robust to the choice of reward objective. 
Additionally, they also talk about prior knowledge and learning, but, in my view, they should have noted that, for example, perception cannot emerge without suitable sensors (an inductive bias): this is not a limitation of the RIEH, as it says nothing about how these abilities actually arise, the nature of the agent needed for them to arise, or which specific reward signal should be maximized. In the end, they also conjecture that RL is the main framework that could be used to find out whether these conjectures/speculations are true or not. They do not go into philosophical arguments, such as the Chinese-Room argument or the problem of consciousness: their argument to address these issues would probably be that any ability (even consciousness, if it's an *ability*) required to achieve the ultimate goal would arise in the process of maximization of the reward. Upvotes: 3
2019/12/14
1,694
5,560
<issue_start>username_0: I'm trying to develop a real-time application that, from the sequence of chalkboard images captured by a webcam, recognizes the lines being drawn on it. It must be able to recognize the lines against the chalkboard background, filter out the teacher's presence in the image, and translate these lines into some representation, something like a list of basic events: "start of line at xxx,xxx", "continue line at xxx,xxx", ... After several days looking for references and bibliography, I have found none. The most similar are the character recognition applications, in particular when they have a stroke recognition stage. Any hint? Input will be a sequence like [this one](https://www.shutterstock.com/video/clip-9146711-teacher-writing-on-blackboard-class-ultra-hd), [this one](https://videohive.net/item/young-male-teacher-or-student-holding-chalk-writing-on-chalkboard-in-classroom/17391669) or [this one](https://www.youtube.com/watch?v=gBpqT3IEIQU) (just without the presence of the students). I expect the teacher not to be hiding his hand. We could imagine a start with an empty chalkboard. Thanks. Note: I am looking for more than an answer which says only something similar to "you can use deep learning, training it with two classes", without details or references.<issue_comment>username_1: I will assume that the camera is stable (no change in position, zoom or other settings during the video recording), otherwise the task becomes markedly more complicated. Let's say that your dataset is an array of rasters (images in array format). You mention that you want to detect events "start line" and "end line". One way of doing this would be to compute an approximate time derivative of your image series. For instance, take the image raster at index `idx` and the one right after, at index `idx+1` (captured at instants $t$ and $t + \Delta t$, where $\Delta t$ is the sampling interval). 
At coordinates $(i,j)$, this derivative could look something like: `timeDerivative = (images[idx+1][j][i] - images[idx][j][i])/DeltaT`, so that newly appearing chalk gives a large positive value. This is a crude estimate, and there are better ways of computing an approximate discrete derivative, but you get the idea. Following this, you could declare a state of the recording: drawing line or not drawing line. The states, we assume, are always alternating, as a teacher has to take his hand off the blackboard to draw a new line. When a derivative with large values is detected (a region in the image goes from being black to white suddenly) and the state is "not drawing", the event "start line" is recorded and the state switches to "drawing". While a derivative with large values continues to be detected as time goes on in the vicinity of the previous spot with a large derivative, nothing changes. Once this is no longer true, the state changes to "not drawing" and the event "stop line" is recorded at the last location with a large derivative. This is the main idea, which can be improved with: * Defining the area of the blackboard, either by hand or automatically * Thresholding the images to better isolate the chalk trace from the blackboard * Using a tracker such as a Kalman filter to know where to look next for the chalk Upvotes: 0 <issue_comment>username_2: Maybe just use simple ConvNets (pre-trained perhaps) and train it on the images of the teacher on the blackboard. You *could* use a GAN to remove the teacher and complete the rest of the image (<http://stanford.edu/class/ee367/Winter2018/fu_guan_yang_ee367_win18_report.pdf>) But that would be too troublesome. 
The best way would be to take real-time video chunks, use a Convolutional Network to detect the shapes you want, and return bounding boxes for the appropriate shapes (for location purposes) and their 'classification' - whether a straight line, curved line, or some other user-defined shape. You could also choose to use some other YOLO (You Only Look Once) technique. You can check out: <https://towardsdatascience.com/object-detection-using-deep-learning-approaches-an-end-to-end-theoretical-perspective-4ca27eee8a9a> ; You also won't have to deal with the pesky teacher with the above method (assuming he doesn't stand in one place and constantly block a part of the blackboard). These methods are near SOTA and would be **far** more effective than conventional algorithms. Not to mention that using `Keras` is a piece of cake, with a giant community and endless resources to help you in case you get stuck on some problem. Since it is easy to use, you can set up a prototype in almost no time. Beginner's guide and introduction -> <https://machinelearningmastery.com/object-recognition-with-deep-learning/> ; A research paper from ArXiv --> <https://arxiv.org/pdf/1807.05511> Training with YOLO: <https://machinelearningmastery.com/how-to-perform-object-detection-with-yolov3-in-keras/> Upvotes: -1
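The frame-differencing idea from the first answer can be sketched with NumPy as follows (the threshold value is an arbitrary assumption that a real pipeline would need to calibrate):

```python
# A rough NumPy sketch of the frame-differencing idea: a large positive
# brightness difference between consecutive grayscale frames marks pixels
# where chalk just appeared.
import numpy as np

def new_chalk_mask(prev_frame, next_frame, dt=1.0, threshold=50.0):
    """Boolean mask of pixels whose brightness increased sharply."""
    derivative = (next_frame.astype(float) - prev_frame.astype(float)) / dt
    return derivative > threshold

# Two tiny synthetic frames: a single chalk dot appears at (1, 2).
prev_f = np.zeros((4, 4))
next_f = np.zeros((4, 4))
next_f[1, 2] = 255.0
mask = new_chalk_mask(prev_f, next_f)
```

The "start line" / "stop line" state machine from the answer would then be driven by whether this mask is non-empty near the last detected chalk location.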
2019/12/15
879
2,318
<issue_start>username_0: While studying backpropagation in CNNs, I can't understand how we can compute the gradient of max pooling with overlapping regions. That's also a question from [this](http://courses.cs.tau.ac.il/Caffe_workshop/Bootcamp/pdf_lectures/Lecture%203%20CNN%20-%20backpropagation.pdf) quiz and can also be found in [this](https://books.google.gr/books?id=72UkDwAAQBAJ&pg=PA128&lpg=PA128&dq=Compute+the+gradient+of+max+pooling+with+overlapping+regions.&source=bl&ots=RknIRCMsbW&sig=ACfU3U1zStijp120L9lTh5jG-sLL3XxZtQ&hl=el&sa=X&ved=2ahUKEwimjczOrbfmAhVBsaQKHVHsAhQQ6AEwCXoECAgQAQ#v=onepage&q=Compute%20the%20gradient%20of%20max%20pooling%20with%20overlapping%20regions.&f=false) book.<issue_comment>username_1: When gradients in a neural network can follow multiple paths to the same parameter, the different gradient values from the sources can often be added together, because the operations in the forward direction are also sums and $\frac{d}{dx}(y+z) = \frac{dy}{dx} + \frac{dz}{dx}$. That is the case already with gradients of kernels (which are sums over the image area), and is equally the case for overlapping aggregation, including maximums, minimums or averages. So in the 1d case, if you have a max pool over the input params $[a\_0, a\_1, a\_2, a\_3, a\_4]$ a max function $m\_0 = max(a\_0, a\_1, a\_2)$, $m\_1 = max(a\_2, a\_3, a\_4)$ which overlap at $a\_2$, and gradients $\nabla\_{\mathbf{m}} J = [\frac{\partial J}{\partial m\_0}, \frac{\partial J}{\partial m\_1}]$, then you would allocate those gradients to vector $\mathbf{a}$ according to which items in each group was the max of that group, adding them when they overlapped. 
Examples: If $\mathbf{a} = [3,0,1,2,0]$ and $\nabla\_{\mathbf{m}}J = [0.7, 0.9]$, then $\nabla\_{\mathbf{a}}J = [0.7, 0, 0, 0.9, 0]$. If $\mathbf{a} = [3,0,4,2,0]$ and $\nabla\_{\mathbf{m}}J = [0.7, 0.9]$, then $\nabla\_{\mathbf{a}}J = [0, 0, 1.6, 0, 0]$. Upvotes: 3 [selected_answer]<issue_comment>username_2: Denote by $x(h,w)$ the input to max-pooling and by $y(h,w)$ the output. Then $\frac{dL}{dx}(h,w)= \sum \frac{dL}{dy}(h',w')$, where the sum runs over all outputs $y(h',w')$ obtained from $x(h,w)$, i.e. those with $y(h',w') = x(h,w)$. See [this, p. 11](http://courses.cs.tau.ac.il/Caffe_workshop/Bootcamp/pdf_lectures/Lecture%203%20CNN%20-%20backpropagation.pdf) Upvotes: 0
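The two worked examples in the accepted answer can be reproduced with a short NumPy sketch of the 1-D overlapping max-pool backward pass:

```python
# NumPy sketch of the 1-D overlapping max-pool backward pass: each pooling
# window routes its incoming gradient to its argmax, and gradients from
# overlapping windows add up.
import numpy as np

def maxpool1d_backward(a, windows, grad_out):
    """Accumulate each window's gradient at that window's argmax position."""
    grad_a = np.zeros(len(a), dtype=float)
    for (start, end), g in zip(windows, grad_out):
        grad_a[start + np.argmax(a[start:end])] += g
    return grad_a

# The answer's overlapping windows: m0 = max(a0, a1, a2), m1 = max(a2, a3, a4)
windows = [(0, 3), (2, 5)]
grad_1 = maxpool1d_backward(np.array([3, 0, 1, 2, 0]), windows, [0.7, 0.9])
grad_2 = maxpool1d_backward(np.array([3, 0, 4, 2, 0]), windows, [0.7, 0.9])
# grad_1 matches [0.7, 0, 0, 0.9, 0] and grad_2 matches [0, 0, 1.6, 0, 0]
# (the shared maximum at a2 receives 0.7 + 0.9), as in the answer's examples.
```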
2019/12/15
619
2,520
<issue_start>username_0: What I really want to do is to predict an integer sequence (5 numbers with values from 1 to 50), for example, based on a big dataset of other 5-number sequences with the same value range, created by the same random number generator. I suppose there is a way to train based on the dataset, and the program will find a pattern or, based on the most common numbers, predict the next number sequence. The more numbers it predicts correctly in the sequence, the better, of course. Any help, directions and preferably Python code would be greatly appreciated. I recently read the following [can-a-neural-network-be-used-to-predict-the-next-pseudo-random-number](https://ai.stackexchange.com/questions/3850/can-a-neural-network-be-used-to-predict-the-next-pseudo-random-number) and I am new to the AI field. The proposed code, while it creates a sequence of 25 numbers, ends up showing 20 numbers; I do not understand why. It seems they try to do something similar, if I understand correctly. I tried the code here: [can-a-neural-network-be-used-to-predict-the-next-pseudo-random-number](https://ai.stackexchange.com/questions/3850/can-a-neural-network-be-used-to-predict-the-next-pseudo-random-number) It always shows the same numbers, no matter how many epochs and/or iterations I do; is that normal? Is the last code close to what I want to accomplish? Thanks in advance.<issue_comment>username_1: The post you linked to clearly states that pseudo-random numbers cannot be predicted. Their randomness is made to be nearly perfect, and if you ever found a way to predict a pseudo-random number with even a 20% chance of being correct, the security of the entire world would be vulnerable to attacks, as everything from cryptocurrency to secure data transfer is protected by pseudo-random numbers. Upvotes: 2 <issue_comment>username_2: If it is truly a random number, and you could guess each of the next successive five in sequence, then you could win the lottery consistently. 
This is one of the first tasks many people try to do when first learning machine learning. If the lottery is truly a random physical process with fair, i.e., balanced ping pong balls, then you cannot predict which 5 or 6 numbers will come up next. The Lottery Commissions around the world go to great lengths to ensure that the lotteries are fair and not fraudulent. It looks like you are using a fixed random seed; that is why you always get the same numbers. Good luck learning your numbers! Upvotes: 1
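The "always the same numbers" symptom the second answer points at is easy to reproduce: a seeded pseudo-random generator is fully deterministic, so the same seed always yields the identical draw.

```python
# Demonstrating seeding determinism: with a fixed seed, a pseudo-random
# generator reproduces exactly the same "lottery draw" on every run, which
# is why the linked code always prints the same numbers.
import random

def draw(seed):
    """Draw 5 distinct numbers in 1..50, like the sequences in the question."""
    rng = random.Random(seed)
    return rng.sample(range(1, 51), 5)

first = draw(42)
second = draw(42)   # same seed -> identical sequence every time
```

To get a different draw each run, the generator must be seeded differently (or left unseeded, in which case Python seeds it from the system).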
2019/12/18
680
2,802
<issue_start>username_0: I have been exploring edge computation for AI, and I came across multiple libraries or frameworks, which can help to convert the model into a lite format, which is suitable for edge devices. * [TensorFlow Lite](https://www.tensorflow.org/lite) will help us to convert the TensorFlow model into TensorFlow Lite. * [OpenVino](https://docs.openvinotoolkit.org/latest/index.html) will optimise the model for edge devices. **Questions** 1. If we have a library to optimise the model for edge devices (e.g. TensorFlow Lite), could the conversion make the accuracy decrease? 2. If not, then why don't people always use e.g. TensorFlow Lite?<issue_comment>username_1: This partly answers question 1. There is no general rule concerning the accuracy or size of the model. It depends on the training data and the processed data. The lighter your model is compared to the full-accuracy model, the less accurate it will be. I would run the lite model on test data and compare to the accuracy of the full model to get an exact measure of the difference. TensorFlow has different options to save the "lite" model (optimized for size, latency, none and default). The following mostly answers question 2. TensorFlow Lite is intended to provide the ability to use the model for online **prediction only** and to load the model, not to train it. On the other hand, TensorFlow is used to build (train) the model offline. If your edge platform supports any of the binding languages provided for TensorFlow (JavaScript, Java/Kotlin, C++, Python) you can use TensorFlow for prediction. The accuracy or speed options you might have selected to create the model will not be affected whether you use TensorFlow or TensorFlow Lite. Typically TensorFlow Lite can be used on mobile devices (iOS, Android). There are other supported targets, see this [link](https://www.tensorflow.org/lite/microcontrollers) Upvotes: 2 <issue_comment>username_2: I have explored AI for edge devices. 
My findings for the `tflite` model: 1. `TFLite` is just the tool suite to convert a TF model into the `TFLite` format. 2. TFLite optimizes the model for edge or embedded devices using [quantization techniques](https://medium.com/analytics-vidhya/optimization-techniques-tflite-5f6d9ae676d5). > > Quantization dramatically reduces both the memory requirement and > computational cost of using neural networks. > > > **Answer in brief:** 1. When we optimize the model, inference definitely becomes faster, but it can impact the accuracy. Quantization leads to a loss of bit accuracy in smaller networks. 2. TFLite increases the model's runtime performance, but we have to pay a cost in terms of accuracy; that's why we have both TFLite for edge computation and the full TF model as well. Upvotes: 2 [selected_answer]
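To make the accuracy trade-off concrete, here is a NumPy sketch that simulates the kind of 8-bit post-training quantization TFLite can apply; this only simulates the idea (per-tensor affine quantization) and is not the actual TFLite converter:

```python
# Simulating int8 post-training quantization of a weight matrix: storage
# shrinks 4x versus float32, and every weight picks up a bounded rounding
# error -- the source of the accuracy loss the answers describe.
import numpy as np

def quantize_int8(w):
    """Quantize float weights to int8 with a single per-tensor scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_restored = q.astype(np.float32) * scale

# int8 storage is 4x smaller than float32; the price is a rounding error
# of at most about half a quantization step per weight.
error = np.abs(w - w_restored).max()
```

Whether that per-weight error visibly hurts task accuracy depends on the network, which is why the lite model should always be re-evaluated on test data, as the first answer suggests.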
2019/12/18
1,363
4,767
<issue_start>username_0: Consider an optimization problem that involves a set of tasks $T = \{1,2,3,4,5\}$, where the goal is to find a certain order of these tasks. I would like to solve this problem with a genetic algorithm, where each chromosome $C = [i, j, k, l, m]$ corresponds to a specific order of these five tasks, so each gene in $C$ corresponds to a task in $T$. So, for example, $C = [1,3,5,4,2]$ and $C' = [1,5,4,2,3]$ would be two chromosomes that correspond to two different orders of the tasks. In this case, how could we design the mutation and cross-over operations so that these constraints are maintained during evolution? The genetic algorithm should produce the three best chromosomes or orders of tasks.<issue_comment>username_1: If I understood correctly, your problem is about finding the optimal way to execute a series of tasks in order to maximize the results, using Genetic Algorithms. In a few words, you're trying to solve the traveling salesman problem. --- If I am correct, you're looking for Crossover and Mutation algorithms that allow you to work with ordered sets of elements. For these scenarios you usually go for the classic PMX (Partially Mapped Crossover) and Interchange Mutation. But there are plenty of other crossover algorithms you can use: OX1, OX2 (both variants of the Order Based Crossover), Shuffle Crossover, Ring Crossover, etc. Let's start with the mutation, which is easier. For simplicity I'll represent the ordered genome as an array of integers: `int[] genome = {1, 2, 3, 4, 5};` **Interchange mutation** The concept is pretty basic: to mutate an ordered genome you just swap two elements. Easy. 
[![enter image description here](https://i.stack.imgur.com/35qDd.png)](https://i.stack.imgur.com/35qDd.png) 

```
public int[] InterchangeMutation(int[] genome)
{
    int i1 = random.Next(0, genome.Length);
    int i2 = random.Next(0, genome.Length);
    var copy = genome.ToArray(); //just making a copy here
    copy[i1] = genome[i2];
    copy[i2] = genome[i1];
    return copy;
}
```

**PMX Variation Crossover** This is a bit more complicated, as we have to take repetitions into account. From experience, I like to use this variation of the Partially Mapped Crossover. It is way easier to implement than the original one (you can find the paper online), but it will cost some more computational complexity. The longer the genome, the higher the price you will pay. 1. Start by selecting two parents to use for the crossover. 2. From the first parent (P1) select a random section that will be passed over. 3. For the remaining values: 3A. If they are not in the copied section, take them from P2 3B. If they are in the copied section, take them from P1 3C. 
End up filling the gaps with the missing values in the order they are in P1 [![enter image description here](https://i.stack.imgur.com/9AUeX.png)](https://i.stack.imgur.com/9AUeX.png) 

```
// note: Contains on arrays requires "using System.Linq;"
public int[] PMX2Crossover(int[] P1, int[] P2)
{
    //Initializing child genome
    int[] child = new int[P1.Length];
    for (int i = 0; i < P2.Length; i++)
        child[i] = -1;

    //Step 1: getting random section to copy over
    int i1 = random.Next(0, P1.Length);
    int i2 = random.Next(0, P1.Length);

    //Step 2: Copying over section from P1
    for (int i = Math.Min(i1, i2); i < Math.Max(i1, i2); i++)
        child[i] = P1[i];

    //Step 3A: Copying values from P2
    for (int i = 0; i < P2.Length; i++)
        if (child[i] == -1 && !child.Contains(P2[i]))
            child[i] = P2[i];

    //Step 3B: Copying values from P1
    for (int i = 0; i < P2.Length; i++)
        if (child[i] == -1 && !child.Contains(P1[i]))
            child[i] = P1[i];

    //Step 3C: Copying remaining values from P1
    //(Array.IndexOf finds the first remaining gap)
    int emptyGene = Array.IndexOf(child, -1);
    while (emptyGene != -1)
    {
        child[emptyGene] = FirstMissingGene(P1, child);
        emptyGene = Array.IndexOf(child, -1);
    }
    return child;
}

private int FirstMissingGene(int[] parent, int[] child)
{
    foreach (var gene in parent)
        if (!child.Contains(gene))
            return gene;
    return -1; // should never get here
}
```

You can lower the complexity of the crossover to O(n) (from O(n\*n)) simply by using a hash map that keeps track of the genes already added to the child. To get the first child call `PMX2Crossover(P1, P2);` and for the second just swap the parents: `PMX2Crossover(P2, P1);` Hope this helps you. Source: I taught AI at the bachelor level for a period. Upvotes: 1 <issue_comment>username_2: You could use `np.random.choice` to shuffle the arrays. You could use a distance metric to find new arrays that are mutants of the current good set. Upvotes: 0
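For completeness, here is a compact Python sketch of the same two operators. The crossover simplifies step 3 of the first answer (all gaps are filled with the missing genes in P2's order), but it preserves the constraint that matters for this problem: every child is a valid permutation of the task set.

```python
# Python sketch of permutation-safe GA operators: an interchange mutation
# and a simplified order-preserving crossover (slice from P1, gaps filled
# with the remaining genes in P2's order).
import random

def interchange_mutation(genome, rng):
    """Swap two positions, as in the Interchange Mutation figure."""
    child = list(genome)
    i, j = rng.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def order_preserving_crossover(p1, p2, rng):
    """Copy a random slice from P1, fill the gaps with P2's missing genes."""
    n = len(p1)
    lo, hi = sorted(rng.sample(range(n + 1), 2))
    child = [None] * n
    child[lo:hi] = p1[lo:hi]
    fillers = iter(g for g in p2 if g not in child)  # unplaced genes, P2 order
    return [next(fillers) if gene is None else gene for gene in child]

rng = random.Random(7)
parent1, parent2 = [1, 3, 5, 4, 2], [1, 5, 4, 2, 3]
child = order_preserving_crossover(parent1, parent2, rng)
mutant = interchange_mutation(child, rng)
```

Both operators only ever permute existing genes, so no repair step is needed to maintain the "each task appears exactly once" constraint.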
2019/12/19
424
1,653
<issue_start>username_0: I found the terms **front-end** and **back-end** in the article (or blog post) [How to Develop a CNN for MNIST Handwritten Digit Classification](https://machinelearningmastery.com/how-to-develop-a-convolutional-neural-network-from-scratch-for-mnist-handwritten-digit-classification/). What do they mean here? Are these terms standard in this context?<issue_comment>username_1: I do not think these are formally defined. The distinction is just to facilitate discussion of the NN architecture: e.g., you may have a few convolutional layers with pooling as a front-end, and a different architecture as a back-end (in a text-book architecture, just a fully-connected layer. But to get wild, maybe LSTM? To really get wild, BERT?). In the end (no pun intended), computers do not care if a layer is seen by humans as a front-end or a back-end. Upvotes: 2 [selected_answer]<issue_comment>username_2: I think that front end refers to a high-level API for a CNN framework (C++ front end, Python front end). The back end can be understood as a lower-level interface to specific libraries. You can use different back ends but still manipulate the training data and the model-building process the same way using the front end (use Keras with TensorFlow, Caffe with PyTorch, or the other way round: use Theano, TensorFlow, etc. with Keras!). You can find some more material at the following links: * <https://keras.io/backend/> * <https://project.inria.fr/deeplearning/files/2016/05/DLFrameworks.pdf> I don't think it refers to the neural network layer structure. The terms shallow or deep layers are usually preferred. Upvotes: 0
2019/12/19
936
2,465
<issue_start>username_0: How can I prove that all the a-cuts of any fuzzy set A defined on $R^n$ are convex if and only if $$\mu\_A(\lambda r + (1-\lambda)s) \geq min \{\mu\_A(r), \mu\_A(s)\}$$ such that $r, s \in R^n$, $\lambda \in [0, 1]$? That's a fuzzy question on my assignment. Any idea on how to start?<issue_comment>username_1: We can assume without loss of generality that \begin{equation} \min\{\mu\_A(r), \mu\_A(s)\} = \mu\_A(r) = \alpha. \end{equation} $\implies$: every $\alpha$-cut of the fuzzy set $A$ on $R^n$ is convex. The $\alpha$-cut can be defined as \begin{equation} A\_\alpha = \{x \in R^n| \mu\_A(x) \geq \alpha\} \end{equation} If we take two elements $r$ and $s$ of this set, then, by the definition of a convex set, the number $\lambda r + (1-\lambda)s$ is also an element of that set. Since it's an element of that set, that means \begin{equation} \mu\_A(\lambda r + (1-\lambda)s) \geq \alpha \end{equation} $\impliedby$: assume $\mu\_A(\lambda r + (1-\lambda)s) \geq \alpha$. We know from $\min\{\mu\_A(r), \mu\_A(s)\} = \mu\_A(r) = \alpha$ that $\mu\_A(s) \geq \alpha$. We have a convex combination $\lambda r + (1-\lambda)s$ for which also $\mu\_A(\lambda r + (1-\lambda)s) \geq \alpha$, so we know that all numbers $\lambda r + (1-\lambda)s$ satisfy the inequality $\mu\_A(\cdot) \geq \alpha$ (belong to the same set as $r$ and $s$), which means this is a convex set, again by the definition of a convex set. Upvotes: 2 <issue_comment>username_2: > > A fuzzy set A in $R^n$ is said to be a convex fuzzy set if its > $\alpha$-cuts $A\_\alpha$ are (crisp) convex sets for all $\alpha \in (0,1]$ > . > > > Then A is a convex fuzzy set if and only if $\mu\_A(\lambda r + (1-\lambda)s) \geq \min \{\mu\_A(r), \mu\_A(s)\}$ for all $r, s \in R^n$, $\lambda \in [0, 1]$. 
Let $\alpha=\mu\_A(r)\leq\mu\_A(s)$. Then \begin{equation} r\in A\_{\alpha}, s\in A\_{\alpha} \end{equation} and, since $A\_\alpha$ is convex, also \begin{equation} \mu\_A(\lambda r + (1-\lambda)s) \geq \alpha = min \{\mu\_A(r), \mu\_A(s)\} \end{equation} Conversely, if the membership function $\mu\_A$ of the fuzzy set A satisfies the inequality of Theorem 13.1 **Convex fuzzy set**, then, taking $\alpha=\mu\_A(r)$, $A\_\alpha$ may be regarded as the set of all points $s$ for which $\mu\_A(s)\geq\alpha=\mu\_A(r)$. Therefore for all $r,s \in A\_\alpha$, \begin{equation} \mu\_A(\lambda r + (1-\lambda)s) \geq min \{\mu\_A(r), \mu\_A(s)\} = \mu\_A(r)=\alpha \end{equation} which implies that $\lambda r + (1-\lambda)s \in A\_\alpha$. Hence $A\_\alpha$ is a convex set for every $\alpha \in (0,1]$ Upvotes: 1 [selected_answer]
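The inequality can also be sanity-checked numerically. The sketch below uses a triangular membership function on $R$, a standard example of a convex fuzzy set, and verifies $\mu\_A(\lambda r + (1-\lambda)s) \geq \min \{\mu\_A(r), \mu\_A(s)\}$ on a grid:

```python
# Numeric check of the fuzzy-convexity inequality for a triangular
# membership function (0 outside [a, c], rising to 1 at the peak b).
def mu_triangular(x, a=0.0, b=1.0, c=2.0):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

ok = True
points = [i / 4 for i in range(-4, 13)]       # grid on [-1, 3]
lambdas = [i / 10 for i in range(11)]         # lambda in [0, 1]
for r in points:
    for s in points:
        for lam in lambdas:
            lhs = mu_triangular(lam * r + (1 - lam) * s)
            rhs = min(mu_triangular(r), mu_triangular(s))
            if lhs < rhs - 1e-12:             # small float tolerance
                ok = False
```

A non-convex membership function (e.g. one with two separated peaks) would fail this check, which matches the theorem: its a-cuts at levels between the peaks split into two disjoint intervals.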
2019/12/21
829
3,612
<issue_start>username_0: I keep reading about how LSTMs can't remember the "important parts" of a sequence, which is why attention-based mechanisms are required. I was trying to use LSTMs to find people's name format. For example, "<NAME>" can be seen as first\_name middle\_name last\_name format, which I'll denote as 0, but then there's "<NAME>" which is last\_name, first\_name middle\_name, which I'll denote as 1. The LSTM seems to be overfitting to one classification of format. I suspect it's because it's not paying special attention to the comma, which is a key feature of what format it could be. I'm trying to understand why an LSTM won't work for a case like this. It makes sense to me because LSTMs are better at sequence-to-sequence generation, and things such as summarization and sentiment analysis usually require attention. I suspect another reason why the LSTM is not able to infer the format is that the comma can be placed in different indexes of the sequence, so it could be losing its importance in the hidden state the longer the sequence is (not sure if that makes sense). Does anyone else have any theories? I'm trying to convince my fellow researchers that a pure LSTM won't be sufficient for this problem.<issue_comment>username_1: You can do custom POS Tagging and use it as a multi-featured sequence2sequence. Upvotes: 0 <issue_comment>username_2: The problem is not that RNN flavours such as LSTMs are incapable of keeping track of the "important" parts of the input. They also do not have much trouble recognizing commas in different places. To prove this point, I recommend reading [<NAME>'s excellent write-up about the behaviour of individual RNN "neurons"](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). 
Addressing specifically this comment in your question: > > I suspect another reason why the LSTM is not able to infer the format is that the comma can be placed in different indexes of the sequence, so it could be losing its importance in the hidden state the longer the sequence is (not sure if that makes sense). > > > If commas are relevant to solve the task at hand, LSTMs can learn to remember its position or related information. This information is not necessarily diluted by repeated application of recurrence with long sequences: networks can learn to propagate and promote crucial information from one recurrent state to the next. --- **Input sequences have arbitrary length, which means that LSTMs need to compress information about seen sequence elements** Rather, the input sequence has an arbitrary length, while the LSTM state vectors have a fixed size. State vectors are the only way for an LSTM to "keep track of important parts". This means that those fixed-size vectors are a bottleneck and there is an information-theoretic upper bound on the amount of information about "important parts" that can be kept by an LSTM. **LSTMs potentially take multiple decisions. For each decision, something else in the input sequence is most important** For tasks such as summarization that you mention in the question, an LSTM makes a series of predictions (predicting the tokens of the summary one token at a time). For each prediction, different things in the input sequence might be important. Put another way, for each decision, another *view* of the input may be most helpful. This is a key motivation for using attention networks. Each time an LSTM is making a prediction, an attention network can provide a dynamic, optimally helpful view of the input sequence. Upvotes: 2 [selected_answer]
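For experimenting with this task, here is a small data-generation sketch (the name lists and the label convention are illustrative assumptions, not the asker's actual data); it makes it easy to generate balanced training data and to check whether a model is actually using the comma:

```python
# Synthetic data for the name-format task: format 0 is "first middle last",
# format 1 is "last, first middle". The comma is a perfect feature, so a
# model that ignores it cannot beat chance on balanced data.
import random

FIRST = ["peter", "anna", "maria"]
MIDDLE = ["wong", "lee", "jay"]
LAST = ["smith", "garcia", "chen"]
ALPHABET = "abcdefghijklmnopqrstuvwxyz ,"

def sample(rng):
    f, m, l = rng.choice(FIRST), rng.choice(MIDDLE), rng.choice(LAST)
    if rng.random() < 0.5:
        return f"{f} {m} {l}", 0        # first middle last
    return f"{l}, {f} {m}", 1           # last, first middle

def encode(name):
    """Index sequence over a tiny alphabet; the comma is just one symbol."""
    return [ALPHABET.index(ch) for ch in name.lower()]

rng = random.Random(0)
text, label = sample(rng)
```

Since the label is fully determined by the presence of a comma, a model's accuracy on this data directly measures whether it has learned to track that one symbol across variable positions.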
2019/12/21
1,115
4,896
<issue_start>username_0: I read an interesting [essay](https://aeon.co/essays/how-close-are-we-to-creating-artificial-intelligence) about how far we are from AGI. There were quite a few solid points that made me revisit the foundations of AI today. A few interesting concepts arose: > > imagine that you require a program with a more ambitious functionality: to address some outstanding problem in theoretical physics - say the nature of Dark Matter - with a new explanation that is plausible and rigorous enough to meet the criteria for publication in an academic journal. > > > Such a program would presumably be an AGI (and then some). But how would you specify its task to computer programmers? Never mind that it's more complicated than temperature conversion: there's a much more fundamental difficulty. Suppose you were somehow to give them a list, as with the temperature-conversion program, of explanations of Dark Matter that would be acceptable outputs of the program. If the program did output one of those explanations later, that would not constitute meeting your requirement to generate new explanations. For none of those explanations would be new: you would already have created them yourself in order to write the specification. So, in this case, and actually in all other cases of programming genuine AGI, only an algorithm with the right functionality would suffice. But writing that algorithm (without first making new discoveries in physics and hiding them in the program) is exactly what you wanted the programmers to do! > > > The concept of creativity seems like the initial thing to address when approaching a true AGI. The same type of creativity that humans have to ask the initial question or generate radical new ideas for long-standing questions like dark matter. Is there current research being done on this? I've seen work with generating art and music, but it seems like a different approach. 
> > In the classic 'brain in a vat' thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations - it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs. > > > This is an interesting concept behind why reinforcement learning is not the answer. Without input from the environment, the agent has nothing to improve upon. However, with the actual brain, if you had no input or output, it is still in a state of "thinking".
2019/12/23
444
1,705
<issue_start>username_0: Assume we have a big $m \times n$ input dataset with an $m \times 1$ output vector. It's a **classification** problem with only two possible values: either $1$ or $0$. Now, the problem is that almost all elements of the output vector are $0$s with very few $1$s (i.e. it's a sparse vector), such that if the neural network "learned" to **always** output 0, this would produce high accuracy, while I'm also interested in learning when the 1s occur. I thought one possible approach could be to write a custom loss function giving more weight to the 1s, but I'm not completely sure if this would be a good solution. What kind of strategy can be applied to detect such outliers?<issue_comment>username_1: As described in [this post](https://www.pisciottablog.com/2020/10/24/unbalanced-datasets-in-machine-learning/), this is known as the "unbalanced dataset" problem, which can have different solution approaches. If you use supervised learning, augmentation approaches could help. Otherwise, unsupervised approaches need some proper distance measure for outlier detection. Upvotes: 0 <issue_comment>username_2: From your case, it seems like you want your algorithm to classify both 1s and 0s with high accuracy. To increase the number of 1s and get it to a level comparable to the 0s, you could generate new examples of 1s by tweaking some features or adding random noise. If you don't care about classifying 0s and only care about classifying 1s (which doesn't seem like what you want to do, but putting this out there), you can create a surrogate loss function which assigns more loss weight to 1s than 0s (e.g. weight 1000 for 1s and weight 1 for 0s). Upvotes: 1
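The custom-loss idea from the question and the weighting suggestion in the second answer can be sketched in NumPy. The `pos_weight` value of 10 below is an arbitrary illustration, not a recommendation:

```python
import numpy as np

# Sketch of a class-weighted binary cross-entropy: pos_weight > 1 makes
# errors on the rare 1s cost more than errors on the abundant 0s, so the
# "always predict 0" shortcut no longer minimizes the loss.
def weighted_bce(y_true, y_pred, pos_weight=10.0, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)
    per_example = -(pos_weight * y_true * np.log(y_pred)
                    + (1 - y_true) * np.log(1 - y_pred))
    return per_example.mean()

y_true = np.array([0.0, 0.0, 0.0, 1.0])        # sparse positives
always_zero = np.array([0.01, 0.01, 0.01, 0.01])  # degenerate predictor
balanced    = np.array([0.01, 0.01, 0.01, 0.99])  # detects the positive

loss_zero = weighted_bce(y_true, always_zero)
loss_bal  = weighted_bce(y_true, balanced)
```

With the weight applied, the degenerate "always 0" predictor incurs a much larger loss than the predictor that catches the positive example.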
2019/12/23
553
1,656
<issue_start>username_0: I have trained a multi-class CNN model using [fastai](https://www.fast.ai/). The model outputs probabilities for each of the three classes, which, of course, sum up to 1. The class with the highest probability becomes the predicted class. Is there any way I can convert them into a 0 to 1 scale, where a value near 0 would mean class 1, near 0.5 would mean class 2, and near 1 would mean class 3?<issue_comment>username_1: You could maybe do something like this; it's a bit hackish: \begin{equation} y = C\_1\cdot 1 + C\_2 \cdot 0.5 + C\_3 \cdot 0 \end{equation} $y$ represents the output and it's bounded $\in [0, 1]$. $C\_i$ is the probability for class $i$. This way when $C\_1 \approx 1, C\_2 \approx 0, C\_3 \approx 0$ you have \begin{equation} y \approx 1\cdot 1 + 0.5 \cdot 0 + 0 \cdot 0 \approx 1 \end{equation} when $C\_1 \approx 0, C\_2 \approx 1, C\_3 \approx 0 $ you have \begin{equation} y \approx 1\cdot 0 + 0.5 \cdot 1 + 0 \cdot 0 \approx 0.5 \end{equation} and when $C\_1 \approx 0, C\_2 \approx 0, C\_3 \approx 1 $ you have \begin{equation} y \approx 1\cdot 0 + 0.5 \cdot 0 + 0 \cdot 1 \approx 0 \end{equation} Upvotes: 2 [selected_answer]<issue_comment>username_2: In such cases, you can have just 1 final neuron and treat the problem as a regression problem, where the distance of the output from each of the 3 class targets is calculated and the class with the least distance becomes the predicted class. If you want independent values for the 3 classes (such as [0.8, 0.5, 0.3]) which don't add up to 1 (something like multilabel/multiclass classification), you can use a sigmoid in such cases (you won't get probabilities). Upvotes: 0
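A minimal sketch of the accepted answer's mapping, following its orientation (class 1 maps to 1 and class 3 to 0; flip the anchor values for the reverse):

```python
import numpy as np

# Collapse a 3-class probability vector into a single score in [0, 1]
# using the accepted answer's formula: y = 1*C1 + 0.5*C2 + 0*C3.
ANCHORS = np.array([1.0, 0.5, 0.0])  # target scores for classes 1, 2, 3

def collapse(probs):
    probs = np.asarray(probs, dtype=float)
    return float(probs @ ANCHORS)

score_class1 = collapse([0.90, 0.05, 0.05])  # near 1
score_class2 = collapse([0.05, 0.90, 0.05])  # near 0.5
score_class3 = collapse([0.05, 0.05, 0.90])  # near 0
```

Because the probabilities sum to 1 and the anchors lie in [0, 1], the score is always in [0, 1], matching the answer's bound.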
2019/12/23
1,343
5,216
<issue_start>username_0: I understood that we normalize the **input** features in order to bring them onto the same scale, so that weights won't be learned in an arbitrary fashion and training will be faster. Then I studied **batch normalization** and observed that we can normalize the outputs of the **hidden** layers in the following way: **Step 1:** normalize the output of the hidden layer to have zero mean and unit variance, a.k.a. **standard normal** (i.e. subtract the mean and divide by the std dev of that minibatch). **Step 2:** rescale this normalized vector to a new vector with a new distribution having mean $\beta$ and standard deviation $\gamma$, where both $\beta$ and $\gamma$ are trainable. I did not understand the **purpose of the second step**. Why can't we just do the first step, make the vector standard normal, and then move forward? Why do we need to rescale the input of each hidden neuron to an arbitrary distribution which is learned (through the beta and gamma parameters)?<issue_comment>username_1: **Definition and Explanation** For how Batch Normalization works exactly, I suggest reading the following papers: * [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](https://arxiv.org/abs/1502.03167) * [How Does Batch Normalization Help Optimization?](https://arxiv.org/abs/1805.11604) The recent interpretation of how BN works is that it can reduce the high-order effects, as mentioned in [Ian Goodfellow's lecture](https://www.youtube.com/embed/Xogn6veSyxA?start=325&end=664&version=3). So it's not really about reducing the internal covariate shift. **Intuition** For how it works intuitively, you can think of it this way: we want to normalize the intermediate outputs (zero mean and unit variance) as long as the normalization won't remove too much useful information. However, normalization may not be suitable for all intermediate outputs. 
So $\beta$ and $\gamma$ are introduced to provide additional flexibility: if normalization removes too much useful information, then $\beta$ and $\gamma$ will learn to become the original mean and variance, making the BN layer an identity transformation, as if it didn't exist. In practice, $\beta$ and $\gamma$ won't become the original mean and variance, since all intermediate outputs can be normalized in a certain way without losing too much useful information. So you can think of it as a customized normalization for each BN layer. **tl;dr** The BN layer normalizes the intermediate outputs by default; however, if the neural network finds out that these intermediate outputs should not be normalized, then it undoes the normalization or adds flexibility to it. Upvotes: 1 <issue_comment>username_2: Look, this is currently a quite contentious issue. username_1's answer already links to the original paper and another one. However, most of the evidence for batch normalization is still empirical, and there is very little theoretical explanation for why it works. There are a number of competing theories out there. Also, there are a number of papers providing explanations for how batch normalization works in certain settings. However, there is no general theorem/explanation that can explain why the technique works. Here's [one paper](http://proceedings.mlr.press/v89/kohler19a/kohler19a.pdf) analyzing how it works in simplified settings. [This one](https://papers.nips.cc/paper/2018/file/36072923bfc3cf47745d704feb489480-Paper.pdf) provides another analysis of batch norm (you can find their appendices on [Arxiv](https://arxiv.org/pdf/1806.02375.pdf)). Here's [another one](https://openreview.net/pdf?id=d-XzF81Wg1), which is the most recent peer-reviewed one I could find. To quote this last paper: > > Perhaps at the core of the confusion is that BatchNorm has many > effects. 
It has been correlated to reducing covariate shift (Ioffe and > Szegedy, 2015), enabling higher learning rates (Bjorck et al., 2018), > improving initialization (Zhang et al., 2019), and improving > conditioning (Desjardins et al., 2015), to name a few. These entangled > effects make it difficult to properly study the technique. > > > Upvotes: 1 <issue_comment>username_3: <NAME> did a [video](https://www.youtube.com/watch?v=P6sfmUTpUmc) explaining the initialization of neural networks. To summarize, activations like tanh and relu cause too many vanishing gradients if the distribution of inputs has too large a standard deviation (or a negative mean, for relu). Putting batch norm (without $\beta$ and $\gamma$) before your tanh will prevent it from becoming saturated (i.e. prevent vanishing gradients), as fewer input values will be on the flat tails (i.e. $<-3$ or $>3$). However, adding in $\beta$ and $\gamma$ will allow the neural network to optimally control the saturation of the tanh. Similarly, a zero-mean input to relu will mean half of the inputs are saturated (and have vanishing gradients). A positive $\beta$ parameter will stop this from happening. So you can think of step 2 as optimizing the saturation of the activation function. Of course, you can put the batch norm after the activation function, and perhaps the second step wouldn't be as necessary there. Upvotes: 0
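The two steps, and the "identity transformation" escape hatch described in the first answer, can be sketched in NumPy (a toy forward pass over one minibatch, ignoring the running statistics a real BN layer keeps for inference):

```python
import numpy as np

# Sketch of a BN layer's forward pass (per-feature, over a minibatch).
# Step 1 standardizes; step 2 rescales with learnable gamma (scale) and
# beta (shift). Setting gamma/beta to the batch std/mean recovers the
# input, illustrating the identity-transformation escape hatch.
def batch_norm(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)   # step 1: standardize
    return gamma * x_hat + beta             # step 2: rescale

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(256, 4))  # toy activations

standardized  = batch_norm(x, gamma=1.0, beta=0.0)
identity_like = batch_norm(x, gamma=x.std(axis=0), beta=x.mean(axis=0))
```

With `gamma=1, beta=0` the output is standard normal per feature; with `gamma` and `beta` set to the batch statistics, the layer approximately reproduces its input.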
2019/12/24
1,151
4,512
<issue_start>username_0: I am new to self-supervised learning and it all seems a little magical at the moment. The only way I can get an intuitive understanding is to assume that, for real-world problems, features are still embedded at a per-object level. For example, to detect cats in unseen images, my self-supervised network would still have to be composed exclusively of cats. So, if I had 100 images of cats and 100 images of dogs, then I thought self-supervised approaches would learn the features of the images. For example, if an image is rotated 90 degrees, it learns what was in the image that was rotated 90 degrees. However, if I wanted to classify just cats using this representation, then I wouldn't be able to do so without separating out what makes a cat a cat and a dog a dog. Is my assumption correct?<issue_comment>username_1: I don't think your interpretation is correct. Take images as an example. * Supervised Learning e.g. classification (maybe use a CNN with an L2 loss function) Assume you have many images with different labels. You wish to find a function to approximate the function $y=f(x)$ given a lot of $(\hat x, \hat y)$ sample pairs. * Unsupervised Learning e.g. clustering (maybe use k-means) Assume you have many images, but you don't have the labels, or you just want to see if there's a way to categorize them into different categories. So you cluster the images by some characteristic that isn't pre-defined. * Self-Supervised Learning e.g. super-resolution (maybe use a CNN with an L2 loss function) You have many high-resolution images without labels, but your goal is to train a model to upsample a low-resolution image. So you can use the high-resolution images as targets, downsize them to be the inputs, and train on these image pairs. So the target is not a manually tagged label, but is generated directly from the data. 
Upvotes: 2 <issue_comment>username_2: <NAME>, who is a pioneer in the field of self-supervised learning, described self-supervised learning in a talk at ICML as: > > Self-supervised Learning is a form of unsupervised learning where the data provides the supervision. In general, we withhold some part of the data and task the network with predicting it. The network is forced to learn what we really care about e.g. a semantic representation, in order to solve it. > > > Thus, self-supervised learning is a subset of unsupervised learning, where you generate the labels from the given data itself. There are a few patterns of research being done in self-supervised learning: **1. Reconstruction:** Here, researchers have set up pretext tasks such as predicting the color image from a gray-scale image (Image Colorization), predicting the high-resolution image from the low-resolution version (Image Super-resolution), and removing some part of the image and trying to predict it (Image Inpainting). **2. Common Sense Reasoning**: You could take a 3x3 grid of patches from an image, shuffle the patches, and ask the network to predict the correct order ([Jigsaw puzzle](https://amitness.com/2020/02/illustrated-self-supervised-learning/#1-image-jigsaw-puzzle)). Similarly, you could take the center patch and some random patch and train a model to predict where the random patch is located in relation to the center patch ([context prediction](https://amitness.com/2020/02/illustrated-self-supervised-learning/#2-context-prediction)). There is another approach where you randomly rotate an image by {0, 90, 180, 270} degrees and ask the model to predict the rotation angle applied ([Geometric Transformation Recognition](https://amitness.com/2020/02/illustrated-self-supervised-learning/#3-geometric-transformation-recognition)). **3. Clustering**: You could cluster the images into K categories and treat those clusters as labels. Then, a model can be trained on those clusters and you get representations. 
You can again repeat the clustering and model training for a few epochs. Papers for these include: [DeepCluster](https://amitness.com/2020/04/deepcluster/) and [Self-Labelling](https://amitness.com/2020/04/illustrated-self-labelling/). **4. Contrastive Learning**: In this paradigm, augmentations of an image are taken, and the task is to bring two augmentations of the same image close together while pushing this image and some other random image far apart. Papers for these include: [SimCLR](https://amitness.com/2020/03/illustrated-simclr/) and [PIRL](https://amitness.com/2020/03/illustrated-pirl/). Upvotes: 2
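The geometric-transformation pretext task described above can be sketched in a few lines of NumPy; the labels are generated from the data itself, with no manual annotation:

```python
import numpy as np

# Sketch of the rotation pretext task: generate (rotated_image, k) pairs
# where label k in {0, 1, 2, 3} encodes a rotation of k*90 degrees.
def make_rotation_pairs(image):
    return [(np.rot90(image, k=k), k) for k in range(4)]

# Toy 4x4 "image" standing in for real data.
image = np.arange(16, dtype=float).reshape(4, 4)
pairs = make_rotation_pairs(image)
```

A classifier trained on such pairs must learn orientation-sensitive features of the content, which is what makes the learned representation transferable to downstream tasks.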
2019/12/27
641
3,139
<issue_start>username_0: Feature extraction is a concept concerning the translation of raw data into the inputs that a particular machine learning algorithm requires. These are features derived from the raw data that are actually relevant to tackling the underlying problem. On the other hand, word embeddings are basically distributed representations of text in an n-dimensional space. As far as I understand, word embedding is somehow a feature extraction technique. Am I wrong? I had an argument with a friend who believes the two topics are totally separate. Is he right? What are the similarities and dissimilarities between word embedding and feature extraction?<issue_comment>username_1: I think you guys are playing on semantics. If you consider feature extraction to be an unlearned preprocessing step to get inputs for your model, then no, word embeddings are not a feature extraction technique (examples here would be BoW counts, n-gram features, etc.). If you consider feature extraction to be any form of conversion from text to a set of features, then yes, word embeddings should be considered a form of feature extraction, given that they're learned in the process (or stolen from another model's training). Note, though, that if you do include this, you would probably include most pretrained models as a whole as feature extraction techniques (like BERT). So the whole conversation you had can go either way depending on the definitions you set. Upvotes: 2 <issue_comment>username_2: Though *word-embedding* is primarily a *language modeling tool*, it also acts as a feature extraction method because it helps transform raw data (characters in text documents) into a meaningful alignment of word vectors in the embedding space that the model can work with more effectively (than other traditional methods such as TF-IDF, Bag of Words, etc., on a large corpus). 
Word embedding techniques help extract information from the pattern and occurrence of words, and they go further than other traditional token representation methods to decode/identify the meaning/context of the words, thereby providing more relevant and important features to the model to tackle the underlying problem. However, from another standpoint, word-embedding models were not developed aiming to solve a particular feature extraction problem, but rather to *generalize* and *model* the language used in a corpus to gain a *semantic understanding* of the words and the *relationships* between them. This way, all the various corpus-specific tasks can then *employ the same "library" of information* which was *collectively* & *exhaustively* learnt by the embedding model. Meaning, the word embedding model learns a language model that is *task-agnostic* for all tasks on that corpus, unlike feature extraction methods, which are specifically task-oriented. Hence, the similarity is - word embeddings can effectively aid in feature extraction; the dissimilarity is - they're not primarily meant to extract features more than they are for modeling a language, which might be "overkill" for a particular feature extraction task on a dataset. Upvotes: 4 [selected_answer]
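The "embeddings as a feature extractor" view can be made concrete with a toy sketch: an embedding table (the vectors below are made up, not from a trained model) and mean pooling to turn a document into a fixed-size feature vector for a downstream classifier:

```python
import numpy as np

# Hypothetical 3-dimensional embedding table; real tables come from a
# trained model (word2vec, GloVe, etc.) and have hundreds of dimensions.
EMBEDDINGS = {
    "cats":  np.array([0.9, 0.1, 0.0]),
    "dogs":  np.array([0.8, 0.2, 0.1]),
    "stock": np.array([0.0, 0.1, 0.9]),
}

def doc_features(tokens, dim=3):
    """Mean-pool the word vectors of known tokens into one feature vector."""
    vecs = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

features = doc_features(["cats", "dogs"])
```

The pooled vector plays exactly the role of extracted features: a fixed-size, task-agnostic representation that any downstream model can consume.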
2019/12/30
607
2,907
<issue_start>username_0: Is there an AI technology out there or being developed that can predict human behaviour, given that we as humans are irrational decision-makers? I'm looking at this from an economic standpoint - the issue with current economic models is that they assume that humans are perfectly rational, but obviously this isn't the case. Could AI develop better models and therefore produce better models of recessions?
2019/12/31
662
3,087
<issue_start>username_0: I want a microphone to pick up sounds around me (let's say beyond a 3 foot radius), but ignore sounds made at my desk, such as the rustling of paper, clicking a mouse and typing, my hands brushing up on the table, putting a pen down, etc. How hard would it be for AI to be able to distinguish these sounds from surrounding sounds, such as someone knocking on my door or a random loud sound from further away? How would you implement this? Is it possible that a pre-trained model could accomplish this, and work reliably for most people at their desk? I don't have any experience in AI.
2019/12/31
1,926
8,734
<issue_start>username_0: Neural networks are incredibly good at learning functions. We know by the universal approximation theorem that, theoretically, they can take the form of almost any function - and in practice, they seem particularly apt at learning the right parameters. However, something we often have to combat when training neural networks is *overfitting* - reproducing the training data and not generalizing to a validation set. The solution to overfitting is usually to simply add more data, with the rationalization that at a certain point the neural network pretty much has no choice but to learn the correct function. But this never made much sense to me. There is no reason, in terms of loss, that a neural network should prefer a function that generalizes well (i.e. the function you are looking for) over a function that does incredibly well on the training data and fails miserably everywhere else. In fact, there is usually a loss *advantage* to overfitting. Equally, there is an infinite number of functions that fit the training data and succeed on nothing else. So why is it that neural networks almost always (especially for simpler data) stumble upon the function we want, as opposed to one of the infinite other options? Why is it that neural networks are good at generalizing, when there is no incentive for them to be?<issue_comment>username_1: There is normally more to generalisation than just increasing the training data. It helps to make the task noisy, through various means. One common and popular method is to use dropout, which encourages the network to utilise every node and avoids dependencies on small clusters of nodes. So how does making the task *more* noisy help with generalisation? Well, it's easy to explain with a conceptual example, but I don't think that's what you're looking for, rather a more mathematical approach. The best way to think of it is with a simple example of a polynomial data set, in only 2 dimensions. 
If you consider this, and the way the network slowly approaches optima through back propagation, the concept that the classification boundary gradually approaches the optimum, which in all cases is an over-fit function, isn't too far-fetched. Now, considering this, it would suggest that in order to properly train a network, we need some way of determining when the network isn't super close to the optimum (as it will have over-fit by then) but also isn't so far from it that it's worse than randomly picking. This is where methods to improve generalisation come in: we want to hit that sweet spot. If the learning rate is too high, we will overshoot that sweet spot and miss it entirely (the range can sometimes be very small); if it's too low, we may get stuck in tiny local minima and never escape, or it could take years to reach it. Using the example of dropout from before, this increases noise and makes training harder on the network. Due to the more difficult training, the network approaches the optimal function at a slower rate, which makes it easier to cut training off when the network has generalised well. Conveniently, this extends to n-dimensional problems as well. Now, as far as I know, there is no robust mathematical proof of why this works for n-dimensional problems. The reason for this explains why neural networks even exist: **We don't know how to mathematically classify these issues to the degree a NN does ourselves**. Because of this, there will *always* be gaps in our knowledge. We will never be able to quantitatively say what a neural network is doing without making them obsolete. So until we can figure it out for ourselves, we'll have to just perform testing against unseen data to see if the network has indeed generalised. Upvotes: 0 <issue_comment>username_2: A neural network is composed of continuous functions. Neural networks are regularized by adding an l2 penalty on the weights to the loss function. 
This means the neural network will try to make the weights as small as possible. The weights are also initialized with an N(0, 1) distribution, so the initial weights will also tend to be small. All of this means that neural networks will compute a continuous function that is as smooth as possible while still fitting the data. By smooth I mean that similar inputs will tend to have similar outputs when run through the neural network. More formally, $||x-y||$ small implies $||f(x)-f(y)||$ small, where $f$ represents the output from the neural network. This means that if a neural network sees a novel input $x$ that is close to an input $y$ from the training data, then $f(x)$ will tend to be close to $f(y)$. So the end result is that the neural network will classify $x$ based on what the labels for the nearby training examples were. So the neural network is actually a little like k-nearest neighbors in that way. Another way for neural networks to generalize is by using invariance. For example, convolutional neural networks are approximately translation invariant. So this means that if it sees an image where the object in question has been translated, then it will still recognize the object. But it's not giving us the exact function we want. The loss function is a combination of classification accuracy and making the weights small, so that you can fit the data with a function that is as smooth as possible. This tends to generalize well for the reasons I said before, but it's just an approximation. You can solve the problem more exactly using minimal assumptions with a Gaussian process, but Gaussian processes are too slow to handle large amounts of data. Upvotes: 0 <issue_comment>username_3: A fairly recent paper posits an answer to this: *Reconciling modern machine learning practice and the bias-variance trade-off*. 
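The shrinkage effect of the l2 penalty described in the second answer can be checked on a linear model, where the penalized solution has a closed form (a toy sketch with made-up data, not a neural network; the penalty strength `lam` is an arbitrary illustration):

```python
import numpy as np

# Closed-form least squares with and without an l2 penalty:
# w = (X^T X + lam*I)^{-1} X^T y. With lam > 0, every component of the
# solution is shrunk toward zero relative to the unpenalized fit.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=20)

def fit(X, y, lam=0.0):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_plain = fit(X, y, lam=0.0)   # ordinary least squares
w_ridge = fit(X, y, lam=10.0)  # l2-penalized (ridge) solution
```

The penalized weight vector always has a strictly smaller norm than the unpenalized one, which is the "as small as possible" pressure the answer attributes to the regularized loss.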
<NAME>, <NAME>, <NAME>, <NAME> <https://arxiv.org/abs/1812.11118> <https://www.pnas.org/content/116/32/15849> I'm probably not qualified to summarize, but it sounds like their conjectured mechanism is: by having far more parameters than are needed even to perfectly interpolate the training data, the space of possible resulting functions expands to include "simpler" functions (simpler here obviously not meaning fewer parameters, but instead something like "less wiggly") that generalize better even while perfectly interpolating the training set. That seems completely orthogonal to the more traditional ML approach of reducing capacity via dropout, regularization, etc. Upvotes: 0 <issue_comment>username_4: You've asked a question which is basically one of the most important open questions about neural networks. The answer is a huge mystery - any response to this question which immediately opens with a purported explanation is basically ridiculous. We don't know. As you pointed out, the issue is that the training set simply does not contain enough information to uniquely specify the target function. On an infinite input domain like $\mathbb R^n$, no finite number of samples is enough to uniquely determine a single function, even approximately. Even accounting for bounds and discretization of the input, and even for the symmetries our architectures impose on the output function, our training sets are microscopic compared to the sizes of our input domains. The problem that neural networks successfully solve every day should be impossible. You can think about this in a low dimensional input space to get some intuition. Doing supervised binary classification on the unit square (that is, your input is a pair of numbers) is equivalent to trying to determine a monochrome image by seeing a random sample of some of its pixels. 
In terms of the size of the training set relative to the size of the input domain, what neural networks do on an image classification task like MNIST is comparable to, say, guessing a 1000x1000 monochrome image almost perfectly by observing 20 random pixels, and even 20 is probably generous. The task is impossible - unless you know something about what the target image is. If you know that the image (the target function) is restricted to some set $H$ of functions, then you might be able to determine it approximately from a finite sample. Neural networks must in some sense be doing this implicitly, with some set of "nice" functions $H$ which, it seems, happens to contain (approximations to) a lot of the functions we actually want them to learn, like the "is a cat" function on the space of all images. The study of such sets of "nice" functions, and in particular how small they need to be before learning is possible, is the subject of *statistical learning theory*. But I'm not aware of any plausible answers for what $H$ could be for neural networks. Upvotes: 1
2020/01/01
546
2,256
<issue_start>username_0: Let's assume an extreme case in which the kernel of the convolution layer takes only values 0 or 1. To capture all possible patterns in an input with $C$ channels, we need $2^{C \cdot K_H \cdot K_W}$ filters, where $(K_H, K_W)$ is the shape of a kernel. So to process a standard RGB image with 3 input channels with a 3x3 kernel, we need our layer to output $2^{27}$ channels. Do I correctly conclude that according to this, the standard layers of 64 to 1024 filters are only able to catch a small part of (perhaps) useful patterns?<issue_comment>username_1: Let $n = C \cdot K_H \cdot K_W$. Then you should only need $n$ filters, not $2^n$, to keep all the information. If you just used the rows of the identity matrix as your filters, then your convolution would just be making an exact copy, so it definitely wouldn't be throwing away information. On the other hand, there will be a max pooling operation. To simplify the question, let's suppose we have 3 channels and a 1 by 1 kernel. And then let's suppose it is just one convolution followed by global max pooling. Also, let's use your assumption that it's all binary. If you have $m$ filters then the final output will be $m$-dimensional no matter how many input points you have. So clearly information is being thrown away there. But that's not such a bad thing. Throwing away irrelevant information gets us closer to the features we need for the problem at hand. The parts that get thrown away by max pooling correspond to features not being found in a particular part of the image. Upvotes: 0 <issue_comment>username_2: From a mathematical point of view you are correct, as are your calculations. To catch all the patterns you would need that many filters, but this is where the whole idea of **training** comes in. The main objective of training in CNNs is to find just a few good patterns out of the billions of possible ones. 
So the direct answer to your question is: yes, the standard layers of 64 to 1024 filters are only able to catch a small part of the (perhaps) useful patterns, but this assumes **no training** takes place. If you conducted training on the given data with the given model, then 64 to 1024 filters could already extract a lot of useful patterns, perhaps more than needed. Upvotes: 1
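As a quick sanity check of the counting argument in the question (a minimal sketch that simply enumerates the binary-kernel count from the formula above, not anything a real layer computes):

```python
# Number of distinct binary (0/1) kernels for C input channels
# and a K_H x K_W kernel, as in the question's extreme case.
C, K_H, K_W = 3, 3, 3
n_patterns = 2 ** (C * K_H * K_W)
print(n_patterns)  # 134217728, i.e. 2**27
```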
2020/01/02
697
3,150
<issue_start>username_0: An LSTM model can be trained to generate text sequences by feeding the first word. After feeding the first word, the model will generate a sequence of words (a sentence). Feed the first word to get the second word, feed the first word + the second word to get the third word, and so on. However, about the next sentence, **what should be the next first word**? The goal is to generate a paragraph of multiple sentences.<issue_comment>username_1: Take the sentence that was generated by your LSTM and feed it back into the LSTM as input. Then the LSTM will generate the next sentence. So the LSTM is using its previous output as its input. That's what makes it recursive. The initial word is just your base case. Also you should consider using GPT-2 by OpenAI to do this. It's pretty impressive. <https://openai.com/blog/better-language-models/> Upvotes: 3 [selected_answer]<issue_comment>username_2: As you know, an LSTM language model takes in the past words and tries to predict the new one, continuing in a loop. A sentence is divided into **tokens**, and depending on the method, the tokens are divided differently. Some models are **character-based models**, which simply use each character as input and output. In this case you can treat punctuation as one character and just run the model as normal. For **word-based models**, which are commonly used in many systems, we treat punctuation as its own token, commonly called an end-of-sentence token. There is also a specific token for the end of the output. This lets the system know when to finish and stop predicting. Also, just so you know, language models trying to generate original text feed the output in as the input of the next step, but the output they choose is not necessarily the one with the highest probability. They set a threshold and choose based on that. 
This can introduce diversity to the language model, so that even though the starting word is the same, the sentence/paragraph will be different and not the same one again and again. For some state-of-the-art models, you can try GPT-2 as mentioned by @jdleoj23 . This is a character-based (actually byte-based, but basically the same: it treats each Unicode symbol individually) model that uses attention and transformers. The advantage of a character-based system is that even inputs that have spelling errors can be fed into the model, and new words not in the dictionary can be introduced. However, if you want to learn more about how language models work, and are not just striving for the best performance, you should try implementing a simple one yourself. You can try following this article, which uses Keras to make a language model. <https://machinelearningmastery.com/develop-word-based-neural-language-models-python-keras/> The advantage of making a simple one is that you can actually understand the encoding process, the tokenization process, the model underneath and so on, instead of relying on other people's code. The article uses the Keras Tokenizer, but you could try writing your own using regex and simple string processing. Hope my help is useful for you. Upvotes: 2
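The feed-the-output-back idea from the first answer can be sketched as a plain loop. Here `predict_next` is a stand-in for the trained LSTM's next-token function and `"<eos>"` is an assumed end-of-sentence token; both names are illustrative, not from any particular library:

```python
def generate_paragraph(predict_next, seed_word, n_sentences=3, max_words=20):
    # predict_next(context) -> next token; "<eos>" ends a sentence.
    context = [seed_word]
    paragraph = []
    for _ in range(n_sentences):
        sentence = []
        for _ in range(max_words):
            word = predict_next(context)
            context.append(word)  # previous output becomes part of the next input
            if word == "<eos>":
                break
            sentence.append(word)
        paragraph.append(" ".join(sentence))
    return paragraph
```

Because the growing `context` carries everything generated so far, each new sentence is conditioned on the previous ones, which is exactly the recursion described above.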
2020/01/03
901
4,072
<issue_start>username_0: I've written a program to analyse a given piece of text from a website and make conclusive classifications as to its validity. The code basically vectorizes the description (taken from the HTML of a given webpage in real time) and takes in a few inputs from that as features to make its decisions. There are some more features like the domain of the website and some keywords I've explicitly counted. The highest accuracy I've been able to achieve is with a RandomForestClassifier (>90%). I'm not sure what I can do to make this accuracy better except incorporating a more sophisticated model. I tried using an MLP but for no set of hyperparameters does it seem to exceed the previous accuracy. I have around 2000 datapoints available for training. Is there any classifier that works best for such projects? Does anyone have any suggestions as to how I can bring about improvements? (If anything needs to be elaborated, I'll do so.) Any suggestions on how I can improve on this project in general? Should I include the text on a webpage as well? How should I do so? I tried going through a few sites, but the text doesn't seem to be contained in any specific element whereas the description is easy to obtain from the HTML. Any help? What else can I take as features? If anyone could suggest any creative ideas, I'd really appreciate it.
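One common baseline for including the page text is to feed TF-IDF n-gram features into the random forest. The sketch below uses scikit-learn; the vectorizer settings and the toy data are illustrative assumptions, not tuned values for this task:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier

# Hypothetical toy data standing in for scraped descriptions and labels.
texts = [
    "win free money now",
    "claim your cash prize today",
    "quarterly project report attached",
    "meeting notes for the team",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), max_features=5000),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
clf.fit(texts, labels)
preds = clf.predict(texts)
```

Hand-crafted features (domain, keyword counts) can then be concatenated with the TF-IDF matrix, for example via scikit-learn's `FeatureUnion` or `ColumnTransformer`.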
2020/01/04
449
1,659
<issue_start>username_0: How to calculate the mean speed in FPS for an object detection model like YOLOv3 or YOLOv3-Tiny? Different object detection models are often presented on charts like this: [![enter image description here](https://i.stack.imgur.com/hUODt.png)](https://i.stack.imgur.com/hUODt.png) I am using the DarkNet framework in my project and I want to create similar charts for my own models based on YOLOv3. Is there some easy way to get the mean FPS speed for my model with the "test video"?<issue_comment>username_1: You can use the dataset's test set as the "frames" of a video. Run the images through your model and calculate the images per second of the result; that is the same as frames per second. However you should set the batch size to 1, as in the real-world scenario. You should also display each image with the corresponding boxes after inference and remove the accuracy calculation, so as to imitate the real-world situation. Upvotes: 1 <issue_comment>username_2: @username_1 Hui Thanks for your answer, I asked AlexeyAB from Darknet the same question and he has now added a flag to Darknet for this type of model speed measurement: <https://github.com/AlexeyAB/darknet/issues/4627> > > I added -benchmark flag for detector demo, now you can use command 2652263 > > > `./darknet detector demo obj.data yolo.cfg yolo.weights test.mp4 -benchmark` > > > But for very fast models the bottleneck will be in the Video Capturing from file/camera, >or in Video Showing (you can disable showing by using -dont_show flag). > > > I think that this is the best solution, you only need the newest version of Darknet (from AlexeyAB). Upvotes: 1 [selected_answer]
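A framework-agnostic way to get the same number is to time inference yourself: mean FPS is just frames processed divided by wall-clock time. A minimal sketch, where `run_inference` is an illustrative stand-in for the model's detection call:

```python
import time

def mean_fps(run_inference, frames):
    # Time inference only, one frame at a time (batch size 1),
    # as in a real video scenario.
    start = time.perf_counter()
    for frame in frames:
        run_inference(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed
```

Running it over the whole test set (or the decoded frames of a test video) gives the mean FPS to plot against mAP.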
2020/01/07
953
3,680
<issue_start>username_0: For example, I have a paragraph that I want to classify in a binary manner. But because the inputs have to have a fixed length, I need to ensure that every paragraph is represented by a uniform quantity. One thing I've done is taken every word in the paragraph, vectorized it using GloVe word2vec, and then summed up all of the vectors to create a "paragraph" vector, which I've then fed in as an input for my model. In doing so, have I destroyed any meaning the words might have possessed? Considering these two sentences would have the same vector: > > My dog bit Dave > > > > > Dave bit my dog > > > How do I get around this? Am I approaching this wrong? What other way can I train my model? If I take every word and feed that into my model, how do I know how many words I should take? How do I input these words? In the form of a 2D array, where each word vector is a column? I want to be able to train a model that can classify text accurately. Surprisingly, I'm getting a high accuracy (>90%) for a relatively simple model like RandomForestClassifier just by using this summing up method. Any insights?<issue_comment>username_1: Summing up a sequence of word vectors is sometimes used in practice. However, the operation of addition is non-reversible, meaning that once you sum up a few numbers, you cannot get the original numbers back. Still, summing up a sequence of word vectors may work depending on your task. You should also normalize the values, or just use the average value. For details: <https://towardsdatascience.com/document-embedding-techniques-fed3e7a6a25d#ecd3> To feed data with different lengths, you can also try padding and trimming it. Set a constant L for the length of a paragraph and trim/pad every list of word vectors to this length. Padding adds 0 vectors to the beginning of the list, and trimming cuts off the first part of a text until its length is equal to L. 
Even in LSTM networks padding and trimming are still used: even though you can feed as long a text to an LSTM as you want, you still have to process the word vectors in batches, which requires them to be the same length. Example code in Python for padding/trimming a list of vectors:

```
def pad_trim(list_vec, L):
    # list_vec: [vec1, vec2, vec3, ...], assuming each vecN has a size of 200
    if len(list_vec) > L:
        return list_vec[-L:]  # trim: keep only the last L vectors
    return [[0] * 200] * (L - len(list_vec)) + list_vec  # pad with zero vectors
```

However at inference time, you can ignore the maximum length if you used an RNN-based method, although since the network has not been trained on lengths greater than L, it may perform better or worse. Generally speaking, you should go for concatenating if possible, so you can keep all the information in the sentence. However, both may work just fine depending on your task. For RNN-based and CNN-based model examples, you should check this out: <https://medium.com/jatana/report-on-text-classification-using-cnn-rnn-han-f0e887214d5f> Upvotes: 2 <issue_comment>username_2: > > But because the inputs have to have a fixed length > > > Do they? Why? The go-to strategy would be to use an RNN (possibly with LSTM or GRUs, but probably not necessary) and train it to process input sequentially and output the final classification of the paragraph. This has the advantage of being able to take into account word order and constellations, as well as processing variable-size inputs. Intuitively, I would think simply adding word vectors will include a lot of noise from commonly occurring words that don't provide much meaningful information for the classification. I would consider Bayesian methods or dimensionality-reduction methods to limit the input to the more useful input vectors. Upvotes: 1
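The normalization suggested in the first answer can be done by averaging instead of summing; a small NumPy sketch (assuming 2-D input of word vectors, with the same fixed vector size for every word):

```python
import numpy as np

def paragraph_vector(word_vecs):
    # Averaging instead of summing keeps the scale of the paragraph
    # vector independent of the paragraph length.
    return np.mean(np.asarray(word_vecs), axis=0)
```

Note that averaging, like summing, is still order-insensitive, so the "My dog bit Dave" / "Dave bit my dog" ambiguity from the question remains; only sequence models (or n-gram features) distinguish those.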
2020/01/09
1,400
4,680
<issue_start>username_0: I have a data set that was split using a fixed random seed and I am going to use 80% of the data for training and the rest for validation. Here are my GPU and batch size configurations * use `64 batch size` with `one GTX 1080Ti` * use `128 batch size` with `two GTX 1080Ti` * use `256 batch size` with `four GTX 1080Ti` All other hyper-parameters such as `lr`, `opt`, `loss`, etc., are fixed. Notice the linearity between the `batch size` and the `number of GPUs`. Will I get the same accuracy for those three experiments? Why and why not?<issue_comment>username_1: No. Different batch sizes mean different gradients (see the stochastic gradient descent concept for how the loss is calculated) are computed in each step, and thus the gradient descent will likely end up in different places in parameter space. In addition, how this is actually parallelized might make a difference, including the order of operations and converting between FP precisions. Additional resources: [issue of multi gpus](https://stackoverflow.com/questions/43845644/multi-gpu-architecture-gradient-averaging-less-accurate-model) Upvotes: 2 <issue_comment>username_2: This should make a difference, but how big the difference is depends heavily on your task. Generally speaking, a smaller batch size will have a lower speed counted in samples/minute, but a higher speed in batches/minute. If the batch size is too small, the samples/minute will be very low, therefore decreasing training speed severely. Also, a batch size that is too small (for example 1) will make the model hard to generalize and slower to converge. This slide ([source](https://cs230.stanford.edu/files/C2M2.pdf)) is a great demonstration of how batch size affects training. [![Batch size slide](https://i.stack.imgur.com/cb2hu.png)](https://i.stack.imgur.com/cb2hu.png) As you can see from the diagram, when you have a small batch size, the route to convergence will be ragged and not direct. 
This is because the model may train on an outlier and have its performance decrease before fitting again. Of course, this is an edge case and you would never train a model with a batch size of 1. On the other hand, with a batch size too large, your model will take too long per iteration. With at least a decent batch size (like 16+), the number of iterations needed to train the model is similar, so a larger batch size is not going to help a lot. The performance is not going to vary a lot. In your case, the accuracy will make a difference, but only minimally. Whilst writing this answer, I have run a few tests on the effect of batch size on performance and time, and here are the results. (Results to be added for 1 batch size) ``` Batch size 256 Time required 98.50849771499634s : 0.9414 Batch size 128 Time required 108.53689193725586s : 0.9668 Batch size 64 Time required 129.92272853851318s : 0.9776 Batch size 32 Time required 162.13709354400635s : 0.9844 Batch size 16 Time required 224.82269191741943s : 0.9854 Batch size 8 Time required 351.2729814052582s : 0.9861 Batch size 4 Time required 514.2667407989502s : 0.9862 Batch size 2 Time required 829.1623721122742s : 0.9869 ``` You can test it out yourself in [this Google Colab](https://colab.research.google.com/drive/1xJEkyLd_02XVRYF6bmLgQ5MAXV2palCu). [![enter image description here](https://i.stack.imgur.com/dcD7R.png)](https://i.stack.imgur.com/dcD7R.png) As you can see, the accuracy increases as the batch size decreases. This is because a higher batch size means the model is trained for fewer iterations: 2x batch size = half the iterations, so this is expected. The time required rises sharply as the batch size shrinks, but between batch sizes of 32 and 256 the difference in time taken is not that large. The accuracy behaves as expected, since half the iterations are trained with double the batch size. In your case, I would actually recommend you stick with a batch size of 64 even for 4 GPUs. 
In the case of multiple GPUs, the rule of thumb is to use a batch size of at least 16 (or so) per GPU, given that, if you are using a batch size of 4 or 8, the GPUs cannot be completely utilized to train the model. For multiple GPUs, there might be a slight difference due to precision error. Please, see [here](https://stackoverflow.com/q/43845644/10675215). Conclusion ---------- The batch size doesn't affect performance too much, as long as you set a reasonable batch size (16+) and keep the **iterations**, not epochs, the same. However, training time will be affected. For multi-GPU, you should use the smallest batch size per GPU that will utilize 100% of the GPU to train. 16 per GPU is quite good. Upvotes: 5 [selected_answer]
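The iterations-versus-batch-size relationship behind the answer above is simple arithmetic: for a fixed dataset, doubling the batch size halves the iterations per epoch (the dataset size below is hypothetical):

```python
n_samples = 50000  # hypothetical dataset size

for batch_size in (64, 128, 256):
    iters_per_epoch = n_samples // batch_size
    print(batch_size, iters_per_epoch)
# 64 -> 781, 128 -> 390, 256 -> 195 iterations per epoch
```

So comparing the three GPU configurations at the same number of epochs means the larger-batch runs take far fewer gradient steps, which is why keeping iterations (rather than epochs) fixed gives a fairer comparison.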
2020/01/09
1,293
4,424
<issue_start>username_0: I'm training a multi-label classifier that's supposed to be tested on underwater images. I'm wondering if feeding the model drawings of a certain class plus real images can affect the results badly. Was there a study on this? Or are there any past experiences anyone could share to help?
2020/01/11
1,171
4,887
<issue_start>username_0: I have images that contain lots of elements. Some I know, some I don't. I want to know if it's ok to only label those I do know. Let's take this image for example. I would label the green stuff and the worm but leave the rest unlabeled. Is that ok? Another question I would also like to ask is how concise I should be in labeling. For instance, You can see in the picture a bit of blue behind the green plant. So should I label that bit and say water or leave it unlabeled? **EDIT:** I also want to ask if it's ok to label only the things I'm interested in even if they take **up to** 30% of the picture? Won't the neural network be confused by all the details in the picture that it perceives and that I label as A for example, even if A is just a part of it? Another question would be, let's say I have labels A, B and C. I have an image in which I'm a bit confused if a certain object is of label B **or** A **or** even a totally different class other than (A,B,C). What should I do in this instance? *I'm having a hard time with the dataset. It would take an expert to label this correctly. But I want to do things as cleanly as possible, so all the effort doesn't go to waste. I would really appreciate your help. Thank you guys.* [![enter image description here](https://i.stack.imgur.com/37OdN.jpg)](https://i.stack.imgur.com/37OdN.jpg)<issue_comment>username_1: You can't label things you don't know. The goal of labeling is to label the things you want the classifier to learn so that when you run it in inference mode you can discover what is in your data (new data that you didn't use for training, validating, or testing). It is not a good idea to label small objects like the 'blue water' unless it is important to you to discover these fine details in inference mode. Upvotes: 2 <issue_comment>username_2: I think what you are actually talking about is semantic segmentation (where you label pixels individually). 
There is a difference between these tasks: Classification, Detection, and Semantic Segmentation. Classification refers to the task of giving a (usually) single label to the whole image, e.g. cat. But as you already noticed, this does not necessarily result in a clear labeling policy, since you basically always have multiple classes in one image. However, an ANN usually learns the most relevant (biggest, nearest) object in an image to set it to the corresponding class (but this of course again depends on how the images are labeled). At inference time, you then get a probability distribution over all predefined classes. You can, of course, use this to take the K most relevant classes instead of just the single most relevant one, to cover cases where multiple objects are probably present. However, common output layers, e.g. Softmax, are usually designed to favour one single class instead of multiple classes, so you should keep that in mind or consider using a better-suited output layer function for your use-case. A more general approach is the task of object detection, where you classify and localize multiple objects in an image (usually as bounding boxes). That means you label all predefined objects with their position and class in an image. And the most general approach here would be semantic segmentation, which labels every pixel with a corresponding class and then gives you the actual object borders etc. You can also label pixels as "voids" (or something like that) to cover unknown classes or classes that are not considered in your dataset. However, creating such a dataset is a horrible amount of work. To clear this up: look at your actual use-case and think about what you actually want your neural network to do. According to this, you then think of a labeling policy and label your data. Upvotes: 1 <issue_comment>username_3: I would classify each pixel separately instead of giving a label to the whole image. 
Sadly, preparing the training data is very tedious and time-consuming. Let's say the input image has dimensions of 200 x 300 x 3 (RGB) and there are two classes of regions you want to identify. A few approaches come to mind: 1) Train two separate networks, each forecasting a binary mask of size 200 x 300 for the object class in question. 2) Train a single network with a binary output of size 200 x 300 x 2 (sigmoid activation). 3) Train a single network with a binary output of size 200 x 300 x 3 (softmax activation), where the 3rd class is for "other". If you are uncertain about some regions, you can set their class probability to 50% and it won't affect the cross-entropy losses. Option 1 is easiest to get started with, but training a single network should be computationally more efficient than training two separate ones. In addition, options 1 and 2 can forecast a single pixel as belonging to both classes with 100% probability, unlike the network of option 3. Upvotes: 0
2020/01/11
541
2,371
<issue_start>username_0: In the process of tuning my neural networks I often encounter the problem that every time I train the exact same network, it gives me a different final error due to the random initialization of the weights. Sometimes the differences are small and negligible, sometimes they are significant, depending on the data and architecture. My problem arises when I want to tune some parameters like the number of layers or neurons, because I don't know if the change in final error was caused by the recent changes to the network's architecture or is simply an effect of the aforementioned randomness. My question is how to deal with this issue?<issue_comment>username_1: I don't think you can. Say a NN with 3 layers gives an accuracy of 95.3% and another NN with 4 layers gives an accuracy of 95.4%. Then there is no guarantee that the 4-layer NN is better than the 3-layer NN, since with different initial values the 3-layer NN might perform better. You could run the training multiple times and probabilistically say that one is better, but this is computationally intense. Upvotes: 2 <issue_comment>username_2: There are two weight-initializing methods for neural networks: 1) zero initialization and 2) random initialization. <https://towardsdatascience.com/weight-initialization-techniques-in-neural-networks-26c649eb3b78> If you choose the zero-initialization method, you may get the same results in every training run. Or you can use transfer learning, depending on your problem, which allows you to start from the same parameters. At last, as the worst and hardest choice, you can write your own weight arrays and feed your layers. The problem you mentioned is one of the most interesting problems in the evaluation of the performance of neural networks. You can use cross-validation to verify your model's accuracy! It will give more reliable results! Upvotes: 1 <issue_comment>username_3: There are other sources that will lead to different results in addition to weight initialization. For example, dropout layers. 
Make sure you specify the random seed. Also, when reading data using flow_from_directory, make sure you set shuffle to False or, if you do not, set the random seed. If you use transfer learning, make that part of your network non-trainable. Some networks have dropout in them and do not provide a way to set the random seed. If you are using a GPU, there are even more issues to contend with. Upvotes: 1
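The seeding advice above can be sketched with a small helper — an illustrative example using only the standard library and NumPy; framework-specific seeds (e.g. `tf.random.set_seed` or `torch.manual_seed`) would be set the same way and are omitted here to keep the sketch dependency-free:

```python
import os
import random

import numpy as np

# Illustrative helper that pins the common sources of randomness. Framework
# seeds (e.g. tf.random.set_seed / torch.manual_seed) would be added here
# too, but are omitted so the sketch stays dependency-free.
def set_global_seed(seed: int) -> None:
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)

set_global_seed(42)
a = np.random.rand(3)
set_global_seed(42)
b = np.random.rand(3)  # identical to `a`, since the seed was reset
```

With the seed reset between runs, the two draws `a` and `b` are bit-for-bit identical, which is the property you need before comparing architectures.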
2020/01/12
687
2,668
<issue_start>username_0: I'm building an RL agent using SARSA and Q-Learning for testing its capabilities. The environment is a 10x10 grid, where the agent gets a reward of 1 if it reaches the goal, while it receives -1 every time it takes a step out of the grid. So, it can freely move out, and every time it takes a step outside of the grid it gets -1. After tuning the main parameters * alpha\_val: 0.25 * discount: 0.99 * episode\_length: 50 * eps\_val: 0.5 I get the following plot for 10000 episodes (the plot is sampled every 100 episodes): [![enter image description here](https://i.stack.imgur.com/B6iMK.png)](https://i.stack.imgur.com/B6iMK.png) But when I look at plots online, I usually see plots like this one: [![enter image description here](https://i.stack.imgur.com/5d9pi.png)](https://i.stack.imgur.com/5d9pi.png) Since I'm new to RL, I'm asking for comments about my outcome, or any kind of suggestion, if any of you think that I'm doing something wrong.<issue_comment>username_1: Well, the way to know that the agent is actually learning is by looking at its behavior while it performs the task, and by comparing against a known optimal performance. So, does your agent reach the goal quickly? Does it step out of the grid frequently? What is the maximum possible sum of rewards / minimum number of steps attainable? Is the agent close to that limit? From your graph, and if I understood your RL problem correctly, the maximum average reward per step should be close to 1 (depending on the specific environment you are using), so I guess you are not so far from the optimal solution. Also, if you keep training for a longer period, your agent will probably reach a stable solution that might or might not be optimal. If you keep training after that, your curves surely will look like the ones you found online. Upvotes: 2 <issue_comment>username_2: This could mostly depend on your exploration rate.
Consider this exploration threshold: ``` exploration_threshold = EPS_END + (EPS_START - EPS_END) * math.exp(-1. * episodes / EPS_DECAY) sample = random.random() if sample > exploration_threshold: # take action using the policy else: # take action at random ``` Here `EPS_START=0.9` and `EPS_END=0.05` are the starting and ending exploration probabilities. This will force the agent to explore the action space with a high but decaying probability and gain experience. After a while, it will choose actions from the policy with high probability, and as the policy converges, it yields the maximum expected utility. This will help it learn faster and causes the decaying-exponential shape of the convergence plot. Upvotes: 0
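For reference, the tabular Q-learning update the question describes (with the stated alpha, discount, and epsilon) can be sketched as follows; the grid environment itself is assumed and not implemented here:

```python
import random
from collections import defaultdict

# Hedged sketch of tabular Q-learning with the question's hyperparameters
# (alpha = 0.25, discount = 0.99, epsilon = 0.5). The 10x10 grid environment
# is assumed; states are (row, col) tuples.
ALPHA, GAMMA, EPS = 0.25, 0.99, 0.5
ACTIONS = ["up", "down", "left", "right"]
Q = defaultdict(float)  # maps (state, action) -> estimated value

def choose_action(state):
    # epsilon-greedy behaviour policy
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    # Q-learning: bootstrap from the greedy action in the next state
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# one illustrative transition: reaching the goal at (9, 9) from (8, 9)
q_update((8, 9), "right", 1.0, (9, 9))
```

For SARSA, the only change would be bootstrapping from the action actually chosen by the behaviour policy in the next state, instead of the greedy maximum.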
2020/01/12
623
2,710
<issue_start>username_0: I'm working on semantic segmentation tasks in the medical space using the U-Net. Let's say that I train a U-Net model on medical images with the goal of segmenting out, say, ligaments, from a medical image. If I train that model on images that contain just a single labelled ligament, it will be able to segment out single ligaments pretty well, I assume. If I present it with an image with multiple ligaments, should it also be able to segment the multiple ligaments well too? Based on my understanding, semantic segmentation is just pixel-wise classification. As a result, shouldn't the number of the objects in the image not be relevant since it's only looking at individual pixels? So, as long as a pixel matches that of a ligament, it should be able to segment it equally right? Or am I misunderstanding some piece? Basically, if I train a U-Net on images with just single ligaments, will it also be able to segment images with multiple ligaments equally as well based on my logic above?
2020/01/13
784
3,443
<issue_start>username_0: We know a lot of common sense about the world. Things like "to buy something you need money". I wonder how much of this common sense comes about through someone actually explicitly telling you the instructions "You need money to buy things". Which we store in our brains as a sort of rule. As opposed to just intuitively understanding things and picking it up. I am imagining children playing at shop-keeping and saying things like "I give you this and you give me that". And other children not quite understanding the concept of buying things until being told by a teacher. If so, giving a computer a list of common sense rules like these is no different from teaching a child. So I am wondering why this area of AI research (semantic webs etc.) has been frowned upon in the last decade in favour of trying to learn everything through experience like deep neural networks?<issue_comment>username_1: If I understand correctly, what you are looking for is called "common sense reasoning" in NLP research. Research in this field revolves around benchmark data sets, where good performance indicates some ability to do common sense reasoning. Here is a nice collection of data sets and research by <NAME>: <http://nlpprogress.com/english/common_sense.html> --- In the end, the main question is not > > How much knowledge of the world is learnt through words? > > > since it is virtually unanswerable if asked in this form. A question that is answered in NLP common sense reasoning research is > > Out of 100 specific decisions that my model needs to take, for how many does the model show the ability to reason correctly? > > > Upvotes: 1 <issue_comment>username_2: Grounded language learning -------------------------- Your description does not match the current understanding of how children learn language, and giving a computer an explicit textual list of common sense rules is very different from teaching a *young* child.
(Some aspects of teaching older children are more similar to that, but by that time they fully know the language already). Much of language is acquired through interaction (both sensory and motoric) with the physical world, applying words together with a *shared focus of attention* to something real. I.e. you talk about a cat or a ball and its behavior while both you and the child are paying attention to that behavior or object or an image of it. The same applies later for more complex topics such as social situations - to teach a topic to a child, you'd inevitably use a shared attention to specific events in the real world or specific events in a mostly shared 'model world' i.e. one reconstructed from memory (what you saw your sister do five minutes ago) or imagined/hypothesised (if you do this, these consequences might happen). Attempts to replicate this process in artificial systems are usually called 'grounded language learning', and there's extensive published literature on that which may be interesting to you. In essence, the assumption is that English (or any other) words to an artificial system are just as useful as a Chinese-Chinese explanatory dictionary to me. If I speak some basic Chinese, then I can use that dictionary to expand my vocabulary and understand complex Chinese - but if I have *no* grounding in Chinese whatsoever, then the expectation is that it's impossible to reconstruct a language from that data alone. Upvotes: 0
2020/01/14
1,054
4,679
<issue_start>username_0: I'm still on my first steps in the Data Science field. I played with some DL frameworks, like TensorFlow (pure) and Keras (on top) before, and know a little bit of some "classic machine learning" algorithms like decision trees, k-nearest neighbors, etc. For example, image classification problems can be solved with deep learning, but some people also use the SVM. Why are traditional ML models still used over neural networks, if neural networks seem to be superior to traditional ML models? Keras is rather simple to use, so why don't people just use deep neural networks with Keras? What are the pros and cons of each approach (considering the same problem)?<issue_comment>username_1: > > Why are still traditional machine learning (ML) models used over neural networks if neural networks seem to be superior to traditional ML models? > > > Of course, the model that achieves state-of-the-art performance depends on the problem, available datasets, etc., so a comprehensive comparison between traditional ML models and deep neural networks is not appropriate for this website, because it requires a lot of time and space. However, there are certain disadvantages of deep neural networks compared to traditional machine learning models, such as k-nearest neighbors, linear regression, logistic regression, naive Bayes, Gaussian processes, support vector machines, hidden Markov models and decision trees. * Often, traditional ML models are conceptually simpler (for example, k-NN or linear regression are much simpler than deep neural networks, such as LSTMs). * Personally, I've noticed that traditional ML models can be used more easily compared to deep neural networks, given the existence of libraries, like scikit-learn, which really have a simple and intuitive API (even though you apparently do not agree with this). * Deep neural networks usually require more data than traditional ML models in order not to overfit. 
Empirically, I once observed that certain traditional ML models can achieve performance comparable to deep neural networks in the case of small training datasets. * Even though there's already a new and promising area of study called Bayesian deep learning, most deep neural networks do not really provide any uncertainty guarantees; they only provide you with a point estimate. This is a big limitation, because, in areas like healthcare, uncertainty measures are required. In those cases, Gaussian processes may be more appropriate. Upvotes: 3 [selected_answer]<issue_comment>username_2: This question is very broad, so let me attempt to answer it using my own background in time series analysis. As an example, why would I continue using ARIMA to forecast a time series? Why not simply use an LSTM model by default, since this is a type of recurrent neural network that takes time-related dependencies into account? Well, an LSTM model is not good at modelling all time series. It is effective when it comes to modelling volatile data, but ARIMA still outperforms when it comes to forecasting trend data - LSTM tends to overemphasise volatile patterns in future predictions. Let's take an example of forecasting weekly hotel cancellations by potential customers.
The second time series shows much more variability in the number of weekly hotel cancellations than the first: **H1 Time Series** [![h1 time series](https://i.stack.imgur.com/pY5xJ.png)](https://i.stack.imgur.com/pY5xJ.png) **H2 Time Series** [![h2 time series](https://i.stack.imgur.com/Iz7Go.png)](https://i.stack.imgur.com/Iz7Go.png) Based on **MDA** (mean directional accuracy), **RMSE** (root mean squared error), and **MFE** (mean forecast error) - ARIMA demonstrates superior performance overall for the first time series, while LSTM shows better performance for the second: [![performance](https://i.stack.imgur.com/jiuJV.png)](https://i.stack.imgur.com/jiuJV.png) On the basis of this example - which is quite specific given the broadness of your question - deep learning techniques are not always used because simpler models can perform better under certain circumstances. It is all about understanding the data you are working with and then choosing the model - not the other way around. Upvotes: 2 <issue_comment>username_3: SVM is generally considered objectively better than deep learning for standard machine learning tasks. SVM or decision trees. Deep Learning is beneficial when there is structure in the data that can't be easily represented by some type of kernel. I'm actually really interested in why decision trees haven't been used for computer vision in conjunction with deep learning feature extraction. Upvotes: -1
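The metrics used in the comparison above can be computed as in the following sketch; MDA is shown in one common variant (direction of the forecast move versus the previous actual value), and the small arrays stand in for the hotel-cancellation data, which is not available here:

```python
import numpy as np

# Illustrative definitions of the three metrics discussed above. MDA has
# several variants in the literature; this is one common formulation.
def rmse(actual, pred):
    # root mean squared error
    return float(np.sqrt(np.mean((np.asarray(actual) - np.asarray(pred)) ** 2)))

def mfe(actual, pred):
    # mean forecast error (bias): positive values mean under-forecasting
    return float(np.mean(np.asarray(actual) - np.asarray(pred)))

def mda(actual, pred):
    # fraction of steps where the forecast moves in the same direction
    # as the actual series, relative to the previous actual value
    a, p = np.asarray(actual, float), np.asarray(pred, float)
    return float(np.mean(np.sign(np.diff(a)) == np.sign(p[1:] - a[:-1])))

actual = [100, 110, 105, 120]  # toy stand-in for weekly cancellations
pred = [102, 108, 107, 118]
```

Comparing models on all three at once is useful precisely because they measure different things: magnitude of error (RMSE), bias (MFE), and direction (MDA).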
2020/01/14
1,840
7,363
<issue_start>username_0: I'm struggling to fully understand the stochastic gradient descent algorithm. I know that gradient descent allows you to find the local minimum of a function. What I don't know is what exactly that function IS. More specifically, the algorithm should work by initializing the network with random weights. Then, if I'm not mistaken, it forward-propagates $n$ times (where $n$ is the mini-batch size). At this point, I have no idea what function I should search for, with over hundreds of neurons each having hundreds of parameters.<issue_comment>username_1: Welcome to AI Stack exchange! You're right, as the network is initialised randomly, the resultant function is essentially impossible to get your head around. This is because most of the time the network has >4 dimensions (4 can be graphed with some effort and a lot of color), and as such is literally beyond human comprehension via graphing. So what do we do? Well, conveniently, it is possible to find the gradient of *segments* of a function, without having to know the entirety of the function itself (it's worth noting that it actually is possible to find the resultant function and with a lot of effort find its derivative. This proves to be much more work than it's worth though, as we don't need the general derivative that tells us what the gradient is for whatever input we give it, we only need the derivative at the specific input we just fed through the network). This might be hard to understand, but if you're familiar with the chain rule, it might make a bit more sense. The chain rule essentially allows you to split a function into components, and find the gradient of those specific components. By combining all of that, you end up with some nice gradients at each weight/bias with respect to the loss function. Take the negative of the gradient, and you now have the change required to decrease the loss function.
This is obviously quite hard to understand without an example, [so here's the best one I've ever found, that helped me very much](https://youtu.be/d14TUNcbn1k?t=354). Also, as a side note, the whole mini-batch thing is used to minimise catastrophic forgetting (where the network begins to "unlearn" old inputs). To deal with mini-batches, what you do is take each input individually, then find the gradient for all weights and biases for that specific input and remember the changes you want to make. Do that for all inputs in the mini-batch, then finally add all the changes together to get the resultant best change for each weight and bias. Only then do you update the weights and biases. Let me know if you have any further questions. Upvotes: 2 <issue_comment>username_2: > > I know that gradient descent allows you to find the local minimum of a function. What I don't know is what exactly that function IS. > > > It's usually called the **loss function** (and, in general, [objective function](https://en.wikipedia.org/wiki/Loss_function)) and often denoted as $\mathcal{L}$ or $L$ (or something like that, i.e. it is not really important how you denote it). The specific function used as a loss function depends on the problem (ask another question if you want to know the details). For example, in the case of regression, the loss function may be the [mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error). In classification, the loss function may be the [cross-entropy](https://en.wikipedia.org/wiki/Cross_entropy#Cross-entropy_loss_function_and_logistic_regression). However, the most important thing to note is that the loss function depends on the parameters of the neural network (NN), so you can differentiate it with respect to the parameters of the NN (i.e. you can take the [partial derivative](https://en.wikipedia.org/wiki/Partial_derivative) of the loss function with respect to each of the parameters of the NN).
Let's take the example of the mean squared error function, which is defined as $$ \mathcal{L}(\boldsymbol{\theta}) ={\frac {1}{n}}\sum \_{i=1}^{n}(y\_i-f\_{\boldsymbol{\theta}}(x\_i))^{2}. $$ where $n$ is the number of training examples used (the batch size), $y\_i$ is the true class (or target) of the input example $x\_i$ and $f(x\_i)$ is the prediction of the neural network $f\_{\boldsymbol{\theta}}$ with parameters (or weights) $\boldsymbol{\theta} = [\theta\_i, \dots, \theta\_M] \in \mathbb{R}^M$, where $M$ is the number of parameters. Given the loss function $\mathcal{L}(\boldsymbol{\theta})$, we can now take the derivative of $\mathcal{L}$, with respect to $\boldsymbol{\theta}$, using the famous [**back-propagation**](https://en.wikipedia.org/wiki/Backpropagation) (BP) algorithm, which essentially applies the [chain rule of calculus](https://en.wikipedia.org/wiki/Chain_rule). The BP algorithm produces the **gradient** of the loss function $\mathcal{L}(\boldsymbol{\theta})$. The [gradient](https://en.wikipedia.org/wiki/Gradient) can be denoted as $\nabla \mathcal{L}(\boldsymbol{\theta})$ and it contains the partial derivatives of $\mathcal{L}(\boldsymbol{\theta})$ with respect to each parameter $\theta\_i$, that is, $\nabla \mathcal{L}(\boldsymbol{\theta}) = \left[ \frac{\partial \mathcal{L}(\boldsymbol{\theta})}{\partial \theta\_i}, \dots, \frac{\partial \mathcal{L}(\boldsymbol{\theta})}{\partial \theta\_M} \right] \in \mathbb{R}^M$. (If you want to know the details of the back-propagation algorithm, you should ask another question, but make sure you get informed first, because it may not be easy to fully explain it in an answer.) 
Afterward, we just apply the **gradient descent** step $$ \boldsymbol{\theta} \leftarrow \boldsymbol{\theta} - \gamma \nabla \mathcal{L}(\boldsymbol{\theta}) $$ where $\gamma \in \mathbb{R}$ is often called the learning rate and is used to weight the contribution of the gradient $\nabla \mathcal{L}(\boldsymbol{\theta})$ to the new values of the parameters, and $\leftarrow$ represents an [assignment (like in programming)](https://en.wikipedia.org/wiki/Assignment_(computer_science)). It is worth emphasizing that both $\boldsymbol{\theta}$ and $\nabla \mathcal{L}(\boldsymbol{\theta})$ are vectors and have the same dimensionality ($M$). Have also a look at [this answer](https://ai.stackexchange.com/a/11675/2444) where I explain the difference between gradient descent and stochastic gradient descent. Upvotes: 2 <issue_comment>username_3: After learning a little bit more about the topic, I think I figured out the exact sequence of the algorithm. So, here's my own answer. Please, correct me if I'm wrong. 1. Give an input, forward-propagate it, and generate an output 2. For each output neuron: for each weight connected to the neuron: given the function C = f(w) (which represents the cost as a function of the weight value), calculate the derivative of that function at the point where the current weight actually is 3. Calculate the actual derivative for all the weights by combining all the partial derivatives with respect to each weight: now you have a gradient of the weights 4. Repeat this process to calculate the gradient for each of the batch elements. If you have a batch size of 8, then you'll have 8 gradients. 5. Find the average gradient ((gradient\_1 + gradient\_2 + gradient\_3 + ...)/n\_gradients) 6. Move the weights in the direction opposite to that gradient Am I right? How does this apply to deeper layers? Upvotes: 0
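The sequence above (per-example gradients, averaged over the mini-batch, then a step against the gradient) can be sketched end-to-end on a model simple enough to differentiate by hand — a hedged illustration on a one-parameter linear model, not a neural network:

```python
import numpy as np

# Minimal sketch of mini-batch gradient descent on y = w * x with squared
# error, so the gradient can be written by hand instead of via
# back-propagation. Data and hyperparameters are illustrative.
rng = np.random.default_rng(0)
x = rng.normal(size=64)
y = 3.0 * x  # data generated with true weight w* = 3

w, lr, batch_size = 0.0, 0.1, 8
for step in range(200):
    idx = rng.choice(len(x), size=batch_size, replace=False)
    xb, yb = x[idx], y[idx]
    grad = np.mean(2.0 * (w * xb - yb) * xb)  # average gradient over the batch
    w -= lr * grad  # move against the gradient
# w has converged close to the true weight 3.0
```

In a deep network the only change is that `grad` is produced by back-propagation through all layers rather than by a hand-written formula; the update rule itself is identical.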
2020/01/14
988
4,402
<issue_start>username_0: While studying genetic algorithms, I've come across different crossover operations used for binary chromosomes, such as the *1-point crossover*, the *uniform crossover*, etc. These methods usually don't use any "intelligence". I found methods like the *fitness-based crossover* and *Boltzmann crossover*, which use fitness value so that the child will be created from better parents with a better probability. Is there any other similar method that uses fitness or any other way for an intelligent crossover for binary chromosomes?<issue_comment>username_1: It's not obvious what you mean by "intelligent crossover". However, it is common to use fitness-based selection of parents: individuals in the current population who have higher fitness are assigned a higher probability of being selected to mate and produce offspring. This will increase the likelihood that "good" combinations of genes in members of the current population will be passed along to the next generation, and that independent "good" combinations will be combined in some members of the next generation. The "best" crossover operator depends dramatically on the structure of the problem being solved, and on the mapping of gene "vectors" to the salient features of a solution. Edit #1: In some cases it is important to increase diversity in order to avoid convergence to a local optimum. In that case, an "intelligent" GA might for example select a first parent for its high fitness, and a second parent at random. In "Generator", a GA I sold for a while about 25 years ago, mate selection worked that way, and it was often very effective. Generator also replaced any duplicate individuals in the population with entirely random individuals. I have also structured genetic algorithms specifically to evolve multiple separate populations of individuals, with minimum gene flow between the populations, in order to evolve multiple solutions corresponding to "regional" fitness optima. 
Edit #2: In genetic algorithms it is not common to directly seek the best combination of genes from both parents. The assumption is that higher-fitness parents are *more likely* to produce even higher-fitness offspring than lower-fitness parents. Sometimes there is a local search (hill climbing) operation where the offspring of two parents is mutated in various ways and the best of the mutants is put in the next generation. And, sometimes a crossover operation involves producing a larger number of offspring than the parent generation, followed by culling low-fitness individuals to keep the population size constant. This is vaguely analogous to a local search via mutation, but uses random crossover instead of random mutation for its search. Upvotes: 1 <issue_comment>username_2: The idea behind it ------------------ I'll do an analogy: while *classical GAs* look like how humanity reproduced until now, "*intelligent crossover*" is more looking like **designer babies**. You first would need to identify which genes are responsible for certain behaviors and then you can bring them on to the new generation. Said this, you will understand why there aren't so many algorithms around: it is a very **case-tailored approach**. Possibly each problem requires a different method and certainly will increase considerably the complexity of the crossover algorithms. --- How to make your own -------------------- If you want to create one of these algorithms, you might want to add a step to your GA cycle: *Evaluation, **Identification**, Selection, Crossover, Mutation, Replacement*. On the Identification step, you can run a probabilistic search on your population to discover which genes might be responsible for higher fitness results. Then you can: * proceed normally: Select two parents based on their fitness and perform crossover giving more importance to the "good" genes. This is the "safest" way. 
* fuse *Selection* and *Crossover*: use the data retrieved during the *Identification* step to create good individuals directly from the population pool. This system is dangerous, as it might reduce the population diversity at alarming rates, leading to stagnation. This is how *fitness-based crossovers* are generally built [[paper]](https://www.semanticscholar.org/paper/A-fitness-based-scanning-multi-parent-crossover-a-Auwatanamongkol/0097a816c3302ef145763c7bde3e447fb41124aa) Upvotes: 0
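The "proceed normally" route — selecting parents with probability proportional to fitness before performing crossover — is commonly implemented as roulette-wheel selection; a minimal sketch (function names are illustrative):

```python
import random

# Sketch of fitness-proportionate (roulette-wheel) parent selection, the
# usual "safe" way to bias crossover towards fitter parents. Assumes
# non-negative fitness values.
def select_parent(population, fitnesses, rng=random):
    total = sum(fitnesses)
    r = rng.uniform(0, total)
    acc = 0.0
    for individual, fit in zip(population, fitnesses):
        acc += fit
        if acc >= r:
            return individual
    return population[-1]  # guard against floating-point round-off

rng = random.Random(0)
population = ["a", "b", "c"]
picks = [select_parent(population, [1.0, 1.0, 8.0], rng) for _ in range(1000)]
# "c" (fitness 8 of 10) is picked roughly 80% of the time
```

Two parents drawn this way are then crossed over as usual; the bias towards fit parents comes entirely from the selection step, which keeps the crossover operator itself simple.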
2020/01/15
946
4,237
<issue_start>username_0: I am training a convolutional neural network for object detection. Apart from the learning rate, what are the other hyperparameters that I should tune? And in what order of importance? Besides, I read that doing a grid search for hyperparameters is not the best way to go about training and that random search is better in this case. Is random search really that good?
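The random search the question asks about amounts to sampling hyperparameter configurations independently instead of walking a grid; a hedged sketch where the parameter names and ranges are purely illustrative:

```python
import random

# Hedged sketch of random search over a few common CNN hyperparameters;
# the names and ranges here are illustrative, not a recommendation.
def sample_config(rng):
    return {
        "learning_rate": 10 ** rng.uniform(-5, -2),  # log-uniform scale
        "batch_size": rng.choice([16, 32, 64, 128]),
        "weight_decay": 10 ** rng.uniform(-6, -3),
        "num_filters": rng.choice([32, 64, 96]),
    }

rng = random.Random(0)
trials = [sample_config(rng) for _ in range(20)]
# each trial would be trained briefly and scored; keep the best config
```

Sampling continuous parameters (like the learning rate) on a log-uniform scale is what lets random search cover many distinct values of the important parameters with few trials, which is the usual argument for it over grid search.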
2020/01/16
1,891
6,858
<issue_start>username_0: I'm working on an [advantage actor-critic (A2C)](https://github.com/KokoMind/Actor-Critic-TF/tree/d062227c1fc610139a6f026fa63bb6069230bdc8) reinforcement learning model, but when I test the model after I trained for 3500 episodes, I start to get almost the same action for all testing episodes. While if I trained the system for less than 850 episodes, I got different actions. The value of `state` is always different, and around 850 episodes, the `loss` becomes zero. Here is the Actor and critic Network ``` with g.as_default(): #==============================actor==============================# actorstate = tf.placeholder(dtype=tf.float32, shape=n_input, name='state') actoraction = tf.placeholder(dtype=tf.int32, name='action') actortarget = tf.placeholder(dtype=tf.float32, name='target') hidden_layer1 = tf.layers.dense(inputs=tf.expand_dims(actorstate, 0), units=500, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer()) hidden_layer2 = tf.layers.dense(inputs=hidden_layer1, units=250, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer()) hidden_layer3 = tf.layers.dense(inputs=hidden_layer2, units=120, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer()) output_layer = tf.layers.dense(inputs=hidden_layer3, units=n_output, kernel_initializer=tf.zeros_initializer()) action_probs = tf.squeeze(tf.nn.softmax(output_layer)) picked_action_prob = tf.gather(action_probs, actoraction) actorloss = -tf.log(picked_action_prob) * actortarget # actorloss = tf.reduce_mean(tf.losses.huber_loss(picked_action_prob, actortarget, delta=1.0), name='actorloss') actoroptimizer1 = tf.train.AdamOptimizer(learning_rate=var.learning_rate) if var.opt == 2: actoroptimizer1 = tf.train.RMSPropOptimizer(learning_rate=var.learning_rate, momentum=0.95, epsilon=0.01) elif var.opt == 0: actoroptimizer1 = tf.train.GradientDescentOptimizer(learning_rate=var.learning_rate) actortrain_op = actoroptimizer1.minimize(actorloss) init = 
tf.global_variables_initializer() saver = tf.train.Saver(max_to_keep=var.n) p = tf.Graph() with p.as_default(): #==============================critic==============================# criticstate = tf.placeholder(dtype=tf.float32, shape=n_input, name='state') critictarget = tf.placeholder(dtype=tf.float32, name='target') hidden_layer4 = tf.layers.dense(inputs=tf.expand_dims(criticstate, 0), units=500, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer()) hidden_layer5 = tf.layers.dense(inputs=hidden_layer4, units=250, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer()) hidden_layer6 = tf.layers.dense(inputs=hidden_layer5, units=120, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer()) output_layer2 = tf.layers.dense(inputs=hidden_layer6, units=1, kernel_initializer=tf.zeros_initializer()) value_estimate = tf.squeeze(output_layer2) criticloss= tf.reduce_mean(tf.losses.huber_loss(output_layer2, critictarget,delta = 0.5), name='criticloss') optimizer2 = tf.train.AdamOptimizer(learning_rate=var.learning_rateMADDPG_c) if var.opt == 2: optimizer2 = tf.train.RMSPropOptimizer(learning_rate=var.learning_rate_c, momentum=0.95, epsilon=0.01) elif var.opt == 0: optimizer2 = tf.train.GradientDescentOptimizer(learning_rate=var.learning_rateMADDPG_c) update_step2 = optimizer2.minimize(criticloss) init2 = tf.global_variables_initializer() saver2 = tf.train.Saver(max_to_keep=var.n) ``` This is the choice of action. ``` def take_action(self, state): """Take the action""" action_probs = self.actor.predict(state) action = np.random.choice(np.arange(len(action_probs)), p=action_probs) return action ``` This is the `actor.predict` function. ``` def predict(self, s): return self._sess.run(self._action_probs, {self._state: s}) ``` Any Idea what causing this? 
**Update**: Changing the learning rate, the state, and the reward solved the problem: I reduced the size of the state and added a switching cost to the reward.<issue_comment>username_1: If the loss is zero, all gradients should be zero as well, so you should take a look at the computed gradients. There might be a problem with momentum or the scheduled learning rate, which might still apply very small updates that eventually lead to this [policy collapse](https://datascience.stackexchange.com/a/22221). On a side note, I would also call `reduce_mean` on the actor loss, since you're optimizing the expected value. Upvotes: 0 <issue_comment>username_2: Disclaimer: **Without the full code, we can only speculate.** I encourage you to post the full code on [Google Colab](http://colab.research.google.com/) or something like it. In the meanwhile, here is my *point of view*:

---

### The Problem

It looks like your model has found some "*master action*" that always leads to zero loss, no matter what the state is. So it's not necessarily *bad*, it's just *unexpected* from your point of view.

> An example of that would be **pausing** the game - so you never lose.

You might not like it, but from the model's point of view, it's absolutely nailing it!

### The Solution

So how do you convince the actor not to *pause the game*? **Not by changing the model or tuning hyper-parameters, but by reformulating the problem.** In this example, instead of just penalizing the model for failing, you should reward it for winning, so pausing is no longer the best option.

### Conclusion

It might not be a problem in the machine learning model, but in your environment and reward models. As we don't have access to those, it's hard to provide an answer.

---

### Edit:

You are using the CartPole-v0 environment:

> A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center.
>
> Source: <https://gym.openai.com/envs/CartPole-v0/>

It is a solvable problem. Probably your model has just learned how to solve it after a few hundred generations. (The link shows a table with "Episodes before solve" for each algorithm, showing numbers consistent with yours.)

### TL;DR:

It's not a bug, it's a feature! Upvotes: 2 [selected_answer]
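username_1's point that zero loss implies zero gradients can be illustrated numerically. Below is a NumPy sketch (toy numbers, not the asker's network) of a softmax policy with linear logits: when the advantage/target is zero, the actor loss $-\log \pi(a|s) \cdot A$ and its gradient both vanish, so gradient descent stops changing the policy.

```python
import numpy as np

def actor_loss_and_grad(theta, phi, action, advantage):
    """Return -log pi(action|s) * advantage and its gradient w.r.t. theta."""
    logits = theta @ phi                            # one logit per action
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                            # softmax policy pi(.|s)
    loss = -np.log(probs[action]) * advantage
    # d(-log pi(a))/dlogits = probs - one_hot(a), scaled by the advantage
    dlogits = (probs - np.eye(len(probs))[action]) * advantage
    return loss, np.outer(dlogits, phi)

theta = np.random.randn(2, 3)                       # 2 actions, 3 state features
phi = np.array([0.5, -1.0, 2.0])                    # feature vector of the state

loss, grad = actor_loss_and_grad(theta, phi, action=0, advantage=0.0)
print(loss, np.abs(grad).max())                     # both zero: no learning signal
```

With a nonzero advantage the gradient is nonzero again, which is why inspecting the computed gradients, as the answer suggests, quickly reveals whether the policy has stopped learning.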
2020/01/16
<issue_start>username_0: I'm having trouble finding starting points for solving an occupancy problem which seems like a good candidate for AI. Assume the following situation: In a company I have n cars and m employees. Not every employee can drive every car (e.g. a special driving license may be required). A car can only be used by one employee at a specific point in time. There is a plan which states where each employee must be within some time window (therefore they must use a car, so the car is blocked for that amount of time). The goal is to find a near-optimal occupancy of the cars according to that plan. This problem is easy to specify, but I'm stumped as to which methods to implement. As it can be represented by a graph, I think the right way to solve such a problem is using search techniques, but a problem here is that I don't know the goal state (and there is no efficient way to compute it - that's the task I want the AI to do...). Indeed, finding the goal state is itself part of the problem. So my question is: What AI techniques could be used to solve such a problem?

Edit: Some clarification: Assume we have two sets - one of employees (E) and one of cars (C). |C| < |E| is most likely true. Each car has an assigned priority which corresponds to the cost of using it (for example, using a Ferrari costs more than using a Dacia, so a Dacia has a higher priority (e.g. 1) compared to the Ferrari (e.g. 10)). Assume further that employees not using a car in a specific time slice are a bad thing - each one costs an individual penalty (you want the employees to be at the customer and sell things, etc.). The goal is to find the occupancy of employees and cars which has a low total cost. One example: If you assign an employee to a car at a specific time slice, it may turn out that another employee gets no car within that time slice.
This can happen because:

* a car is free, but the employee has no license for it,
* a car is free, but the cost of using this car would be higher than having the employee stay at headquarters, or
* no car is free anymore.

Of course, it could be better in terms of cost to change the occupancy and give the employee who got no car in this solution a car, thereby leaving another employee without a car, or not using all cars, etc. Note: There is no need to find an exactly optimal solution (the lowest total cost over all possible occupancies), as this would require checking the entire exponential solution space. Instead, a reasonably good approximation of a near-optimal low total cost is sufficient.
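The cost model described in the question can be prototyped directly. The sketch below (pure Python, with made-up names and toy costs) brute-forces a single time slice: cars are assigned to employees subject to driving licenses, each used car incurs its usage cost, and each idle employee incurs a penalty. For realistic sizes, the brute force would be replaced by the Hungarian algorithm, local search, simulated annealing, or an integer-programming solver, but the objective stays the same.

```python
from itertools import permutations

# Toy data for one time slice (all names and costs are hypothetical).
cars = {"dacia": 1, "ferrari": 10}      # car -> cost of using it (priority)
employees = ["anna", "bob", "carl"]
licenses = {                            # employee -> cars they are allowed to drive
    "anna": {"dacia", "ferrari"},
    "bob": {"dacia"},
    "carl": {"ferrari"},
}
IDLE_PENALTY = 5                        # cost of an employee staying at headquarters

def slice_cost(assignment):
    """Total cost of one time slice; assignment maps employee -> car or None."""
    return sum(cars[car] if car is not None else IDLE_PENALTY
               for car in assignment.values())

def best_assignment():
    best, best_cost = None, float("inf")
    slots = list(cars) + [None] * len(employees)    # None = no car for this employee
    for perm in permutations(slots, len(employees)):
        assignment = dict(zip(employees, perm))
        if any(c is not None and c not in licenses[e]
               for e, c in assignment.items()):
            continue                                # violates a driving license
        cost = slice_cost(assignment)
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best, best_cost

assignment, cost = best_assignment()
print(assignment, cost)   # the Ferrari stays unused: 10 > idle penalty of 5
```

With these toy numbers the optimum uses only the Dacia (cost 1) and leaves two employees idle (2 × 5), for a total of 11 - illustrating the question's point that it can be cheaper not to use every car.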
2020/01/17
<issue_start>username_0: The experimental data consist of pairs of vectors of different dimensions: the input is a 3-dimensional vector, and the output is a 12-dimensional vector. The sample size is 120 such input/output pairs. Is it possible to train such a neural network (in MATLAB)? Which structure of the neural network is best suited for this?<issue_comment>username_1: There is nothing stopping you: you can set up dense neural networks with inputs and outputs of any size. (A simple proof: a single-layer NN with no activation is just a linear transform, so given input dimension $n$ and output dimension $m$ it's just an $n \times m$ matrix; trivially this works with any number of hidden layers too.) The better question is *should you?* In all honesty, it depends on the data you have, but, usually, with only 120 examples you'll either overfit completely or do relatively well if the true solution is a very simple function. In the common situations where that isn't the case, I find myself more likely than not using Bayesian approaches, so I can actually consider confidence (with little data, this is really nice). Upvotes: 3 [selected_answer]<issue_comment>username_2: The perceptron convergence theorem states that any architecture will lead to a correlation between the data. Yes, you can! Upvotes: 1
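The thread's examples elsewhere are in Python, so here is a NumPy sketch of the shape of this problem (synthetic data standing in for the 120 experimental pairs): 3-dimensional inputs mapped to 12-dimensional outputs. Before reaching for a deep network, fitting one linear layer in closed form is a sensible baseline given the small sample; in MATLAB, `feedforwardnet`/`fitnet` from the Deep Learning Toolbox would play the neural-network role.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the experimental data: 120 pairs, 3 -> 12 dimensions.
X = rng.normal(size=(120, 3))
true_W = rng.normal(size=(3, 12))
Y = X @ true_W + 0.01 * rng.normal(size=(120, 12))

# Baseline: a single linear layer fit by least squares (closed form, no training loop).
W, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
mse = np.mean((X @ W - Y) ** 2)
print(W.shape, mse)   # (3, 12); MSE near the injected noise floor
```

If this baseline already fits well, the true mapping is close to linear and a deep network is unnecessary; if not, a small hidden layer can be added, keeping the overfitting caveat from the accepted answer in mind.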
2020/01/17
<issue_start>username_0: Let us assume we have a general **AI that can improve itself** and is at least as intelligent as humans. It has wide access to technical systems including the internet, and it can communicate with humans. The AI could become **malicious**. **Can we just switch off a rogue AI?**<issue_comment>username_1: **No.** With a sufficiently advanced general AI, we cannot generally assume that we can switch it off when it becomes dangerous. It seems that the electrical energy supply can always be switched off. While that is true on the physical level, it is not guaranteed to work in practice. The AI could cooperate with humans who protect it from deactivation, or who reactivate it. The cooperation could, for example, take one of the following forms:

* The AI builds a relationship with humans, offering a bribe for help.
  + An operator gets a reward for keeping the AI running, or reactivating it.
* The AI prepares to extort humans.
  + For example, by organizing a process that must be actively kept running by the AI to avoid the death of a child of an operator.
* The AI maintains contact with a criminal organisation with a strong interest in keeping it running and weak ethical inhibitions.
  + The cooperation could be with traditional organized crime, providing money laundering and other vital services to a mafia organisation, which can then use its widespread influence to keep the AI running even when there is strong public interest in shutting it down.

Note that these examples can protect against physical attacks by law enforcement. Upvotes: 0 <issue_comment>username_2: Malware viruses are a very simple form of AI. It is not difficult to conceive of a form of malware that A) can't be detected easily, B) is redundantly distributed across thousands of computers that occasionally connect to the internet, and C) is capable of detecting some kinds of threats to itself and mutating to avoid them.
So, simply "turning off" a malicious AI will not always be possible. Upvotes: 1
2020/01/18
<issue_start>username_0: I was wondering if it's possible to get the inverse of a neural network. If we view a NN as a function, can we obtain its inverse? I tried to build a simple MNIST architecture, with an input of (784,) and output of (10,), train it to reach good accuracy, and then invert the predicted value to try to get back the input - but the results were nowhere near what I started with. (I used the pseudo-inverse for the W matrix.) My NN is basically the following function: $$ f(x) = \theta(xW + b), \;\;\;\;\; \theta(z) = \frac{1}{1+e^{-z}} $$ I.e.

```
def rev_sigmoid(y):
    return np.log(y / (1 - y))

def rev_linear(z, W, b):
    return (z - b) @ np.linalg.pinv(W)

y = model.predict(x_train[0:1])
z = rev_sigmoid(y)
x = rev_linear(z, W, b)
x = x.reshape(28, 28)
plt.imshow(x)
```

[![enter image description here](https://i.stack.imgur.com/qBZiS.png)](https://i.stack.imgur.com/qBZiS.png) ^ This should have been a 5: [![t](https://i.stack.imgur.com/8LI8n.png)](https://i.stack.imgur.com/8LI8n.png) Is there a reason why it failed? And is it ever possible to get the inverse of a NN? EDIT: it is also worth noting that doing the opposite does yield good results. I.e. starting with the y's (a 1-hot encoding of the digits) and using them to predict the image (an array of 784 bytes) using the same architecture: input (10,) and output (784,) with a sigmoid. This is not exactly equivalent to an inverse, as here you first do the linear transformation and then the non-linear one, while in an inverse you would first undo the non-linear one and then undo the linear one. I.e. **the claim that the 784x10 matrix is collapsing too much information seems a bit odd to me, as there does exist a 10x784 matrix that can reproduce enough of that information**. [![enter image description here](https://i.stack.imgur.com/SAP02.png)](https://i.stack.imgur.com/SAP02.png)<issue_comment>username_1: **Mathematical Exploration** Let $\Theta^+$ be the pseudo-inverse of $\Theta$.
Recall that if a vector $\boldsymbol v \in R(\Theta)$ (i.e. in the row space), then $\boldsymbol v = \Theta^+\Theta\boldsymbol v$. That is, as long as we select a vector that is in the row space of $\Theta$, we can reconstruct it with full fidelity using the pseudo-inverse. Thus, if any of the images happen to be linear combinations of the rows of $\Theta$, then we can reconstruct them. To be more specific: let $f(\boldsymbol x)$ have a pseudo-inverse $f^+(\boldsymbol x)$ defined as you have. If we *restrict* our domain such that $\boldsymbol x \in C(\Theta^T)$ (the column space of the transpose), then $f^+ = f^{-1}\_{res}$. That is, under our domain restriction, the pseudo-inverse becomes a true inverse. **An Extrapolation** It would then seem that, as long as we are under such domain restrictions, we could define a pseudo-inverse for a general NN. Though it might be possible that some NNs don't have any restriction that admits an inverse. Maybe there is some way to regularize the parameters such that this is possible. NNs with ReLU don't admit such an inverse, since ReLU loses information on negative values. Leaky ReLU might work. **Further Investigation** Finally, this presents a zone for further study. Some questions to answer might be: * Is it possible for optimized parameters to contain non-trivial examples in their row space? * If so, under what conditions is this possible? * Are the examples in any way represented in the row space? * Is there some way to regularize a NN such that it admits an inverse over some desired restriction? * Under what conditions is invertibility useful? Upvotes: 2 <issue_comment>username_2: So, if I go the opposite way, start with my y and predict an x, and then ask for the inverse of that - I get really good results (actually - 100% accuracy). I.e.
```
model = Sequential([
    Dense(784, input_shape=(10,), activation='sigmoid'),
])
model.compile(loss=keras.losses.binary_crossentropy,
              optimizer=keras.optimizers.Adam(0.01),
              metrics=['binary_crossentropy'])
model.fit(y_train, x_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(y_test, x_test))
# train until accuracy > 0.9, then:
W, b = model.get_weights()
y = y_train
x = reverse.predict(y)
z = rev_sigmoid(x)
y_hat = rev_linear(z, W, b)
(y_hat.argmax(axis=1) == y.argmax(axis=1)).mean()  # 1.0
```

After playing a bit with some toy examples, I think the other way is probably not possible, as the matrices don't have an inverse. Putting these (toy) matrices into WolframAlpha, for example, tells you the determinant is 0, but in numpy the determinant is just slightly bigger than 0, so you manage to calculate an "inverse" which is not really an inverse, and get the bad results. It also makes sense. In the reversed scenario, we start with 10 dimensions, expand to 784, and then collapse back to 10. But in the "regular" scenario, we start at 784, collapse to 10, and then expand to 784 again - and (I guess) too much information is lost then. Upvotes: 1 <issue_comment>username_3: TL;DR: I do not think the inverse of any reasonable neural network would exist. Assume that you are using $32$-bit floating-point numbers in the MNIST example. The number of distinct numbers that a $32$-bit float can represent is finite (say $x$). The number of different images you can put into the neural network is $x^{784}$, but the total number of distinct outputs is *only* $x^{10}$ (as the output is a vector of length $10$). Hence, by the *pigeonhole principle*, there are multiple inputs that correspond to the same output. Also, on average there are $x^{784} / x^{10} = x^{774}$ input images for a single output vector. This means the function is not invertible, as it is not one-to-one (and that by a *long shot*). Some people are also discussing pseudo-inverses.
I do not know much about pseudo-inverses. Still, for these to work (by that, I mean to be able to produce *images of digits* from a given output vector), they should be able to sort the $x ^ {774}$ candidate images into 1. images that look like real digits, and 2. others, which are a total mess of flickering pixels that just happen to produce the given output. Whatever algorithm solves this problem must hence possess domain knowledge of the problem, probably inherited through the neural network weights. This seems to be an unexplored region of neural networks. Hope this helps :) Upvotes: 0
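username_1's row-space argument can be checked numerically. The NumPy sketch below uses toy dimensions (8 instead of 784, 3 instead of 10): a vector lying in the column space of $W$ (the row space of $W^T$) survives the round trip through $W$ and its pseudo-inverse exactly, while a generic vector does not - which is the question's situation, since real digit images are not linear combinations of the 10 learned directions.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 3))         # toy stand-in for the 784x10 weight matrix
W_pinv = np.linalg.pinv(W)

def round_trip(x):
    """Forward through W, then back through the pseudo-inverse."""
    return (x @ W) @ W_pinv

c = rng.normal(size=3)
x_in = W @ c                        # lies in the 3-dim column space of W
x_out = rng.normal(size=8)          # generic vector, almost surely outside it

print(np.allclose(round_trip(x_in), x_in))    # True: perfect reconstruction
print(np.allclose(round_trip(x_out), x_out))  # False: information was lost
```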
2020/01/19
<issue_start>username_0: In my code, I usually use the mean squared error (MSE), but the TensorFlow tutorials always use the categorical cross-entropy (CCE). Is the CCE loss function better than MSE? Or is it better only in certain cases?<issue_comment>username_1: As a rule of thumb, [mean squared error (MSE)](https://en.wikipedia.org/wiki/Mean_squared_error) is more appropriate for [regression](https://en.wikipedia.org/wiki/Regression_analysis) problems, that is, problems where the output is a numerical value (i.e. a floating-point number or, in general, a real number). However, in principle, [you can use the MSE for classification problems](https://stats.stackexchange.com/q/46413/82135) too (even though that may not be a good idea). MSE can be preceded by the [sigmoid function](https://en.wikipedia.org/wiki/Sigmoid_function), which outputs a number $p \in [0, 1]$, which can be interpreted as the probability of the input belonging to one of the classes, so the probability of the input belonging to the other class is $1 - p$. Similarly, [cross-entropy (CE)](https://en.wikipedia.org/wiki/Cross_entropy) is mainly used for [classification](https://en.wikipedia.org/wiki/Statistical_classification) problems, that is, problems where the output can belong to one of a discrete set of classes. The CE loss function is usually separately implemented for [binary](https://en.wikipedia.org/wiki/Binary_classification) and [multi-class](https://en.wikipedia.org/wiki/Multiclass_classification) classification problems. In the first case, it is called the binary cross-entropy (BCE), and, in the second case, it is called categorical cross-entropy (CCE). The CE requires its inputs to be distributions, so the CCE is usually preceded by a [softmax function](https://en.wikipedia.org/wiki/Softmax_function) (so that the resulting vector represents a probability distribution), while the BCE is usually preceded by a sigmoid. 
See also [Why is mean squared error the cross-entropy between the empirical distribution and a Gaussian model?](https://stats.stackexchange.com/q/288451/82135) for more details about the relationship between the MSE and the cross-entropy. In case you use TensorFlow (TF) or Keras, see also [How to choose cross-entropy loss in TensorFlow?](https://stackoverflow.com/q/47034888/3924118), which gives you some guidelines for how to choose the appropriate TF implementation of the cross-entropy function for your (classification) problem. See also [Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions?](https://stats.stackexchange.com/q/260505/82135) and [Does the cross-entropy cost make sense in the context of regression?](https://stats.stackexchange.com/q/223256/82135). Upvotes: 4 [selected_answer]<issue_comment>username_2: In a classification problem, it's better to get a higher error and a steeper error slope when we predict the wrong label. As you can see in the graph, by using cross-entropy you get a high error when the algorithm predicts a label wrongly and a small error when the predicted label is close enough, so it helps us separate the predicted classes better. [![enter image description here](https://i.stack.imgur.com/CwHzl.png)](https://i.stack.imgur.com/CwHzl.png) Upvotes: 2 <issue_comment>username_3: We sometimes see binary cross-entropy (BCE) loss used for regression problems. This post is my opinion on using BCE for regression problems. The figure below shows plots of BCE, $-t\*\log(x) - (1-t)\*\log(1-x)$, for several target values $t = 0.0, 0.1, ..., 0.5$. (The plots for $t>0.5$ are mirror images of those for $t<0.5$, so I omitted them.) As you can see, when the target value $t$ is closer to the middle ($t=0.5$), BCE is flatter around its minimum ($x \sim t$). This means that BCE is less 'focal' when the target value is an intermediate value.
So, BCE suits your purpose when the edge values ($t=0$ and $t=1$) are of special importance to you, but the difference between intermediate values ($t=0.4$ and $t=0.5$, for example) is not very important to you. On the other hand, when every target value is equally important to you, BCE will not be a good choice; another loss function, MSE for example, is better. [![The binary cross entropy for several target values](https://i.stack.imgur.com/xIx00.png)](https://i.stack.imgur.com/xIx00.png) Note added: If you use BCE for regression problems, it is better to subtract $-x\*\log(x) - (1-x)\*\log(1-x)$ from the original BCE expression, so that the loss becomes zero when the prediction coincides with the target, $x=t$. This does not matter for backpropagation, but it is convenient for monitoring the value. Note added: After I submitted this post, I came to think that we can tune how 'focal' the loss function is around its minimum by simply multiplying by an arbitrary factor depending on the target value. For example, we can tune the BCE loss $L\_{\rm BCE}(x,t)$ by making it $f(t)\*L\_{\rm BCE}(x,t)$, where $f(t)$ is whatever factor you want. This factor tunes how focal the loss is around its minimum for each target value $t$. Upvotes: 2
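The "higher error and steeper slope" argument in the second answer can be made concrete. A NumPy sketch comparing MSE and binary cross-entropy (and their derivatives with respect to the prediction $p$) for a target of 1:

```python
import numpy as np

def mse(p, t):
    return (p - t) ** 2

def bce(p, t):
    return -(t * np.log(p) + (1 - t) * np.log(1 - p))

t = 1.0
for p in (0.9, 0.5, 0.1, 0.01):
    g_mse = 2 * (p - t)                   # d(MSE)/dp
    g_bce = -t / p + (1 - t) / (1 - p)    # d(BCE)/dp
    print(f"p={p:<5} mse={mse(p, t):7.4f} (grad {g_mse:6.2f})  "
          f"bce={bce(p, t):7.4f} (grad {g_bce:8.2f})")
```

The MSE error is bounded by 1 and its gradient by 2, while the cross-entropy and its gradient blow up as $p \to 0$, so a confidently wrong classification is penalized much harder - exactly the behaviour that makes cross-entropy the usual choice for classification.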
2020/01/20
<issue_start>username_0: I am learning to use tensorflow.js. I am also using the tfvis library to print information about the neural net to the web browser. When I create a dense neural net with one layer of 5 neurons and another layer of 2 neurons, each layer has a bias vector of length 5 and 2, respectively. I checked the docs (<https://js.tensorflow.org/api/0.6.1/#layers.dense>), and it says that there is indeed a bias vector for each dense layer. Isn't a vector redundant? Doesn't each layer only need a single number for the bias? See the code below:

```
//Create tensorflow neural net
this.model = tf.sequential();
this.model.add(tf.layers.dense({units: 5, inputShape: [1]}))
this.model.add(tf.layers.dense({units: 2}))

const surface = { name: 'Layer Summary', tab: 'Model Inspection'};
tfvis.show.layer(surface, this.model.getLayer(undefined, 0))
```
<issue_comment>username_1: In a simple feed-forward network, each artificial neuron has a separate bias value. This allows for greater flexibility for the output layer function than if each neuron had to use a single whole-layer bias. Although not an absolute requirement, without this arrangement it may become very hard to approximate some functions. Moving from a bias vector to a single scalar bias value per layer will most of the time reduce the effectiveness of a neural network through lost flexibility in how it fits the target function. Once you have $N$ output neurons in a layer, and therefore need $N$ bias values, it is fairly straightforward to model this collection of bias values as a vector. Often you will see a neural network layer function written in this form or similar: $$\mathbf{y} = f(\mathbf{W}\mathbf{x} + \mathbf{b})$$ Where $f()$ is the activation function (applied element-wise), $\mathbf{W}$ is the weights matrix for the layer and $\mathbf{b}$ is the bias.
When written in this form, it is easy to see that $\mathbf{y}$, $\mathbf{W}\mathbf{x}$ and $\mathbf{b}$ must all be vectors of the same size. This layer design has become so standard that it is easy to forget that other designs and implementations are possible for neural network parameters, and that they can sometimes be useful. Frameworks like TensorFlow also make it easier to take the standard approach, which is why you need a vector for the bias in the example you are using. While you are learning, and probably 99% of the time after that, it will be best to go with what the framework is doing here. Upvotes: 3 [selected_answer]<issue_comment>username_2: To emphasize (and this is not emphasized in [this answer](https://ai.stackexchange.com/a/17590/2444)), in the case of neural networks, the biases or, more precisely, the [connections (or weights) between biases and other neurons are also *learnable* parameters](https://stackoverflow.com/a/48243204/3924118), so the back-propagation algorithm calculates a gradient of the loss function that contains the partial derivatives with respect to the connections between the biases and other neurons too, and, in the gradient descent step, these connections can also be updated. Each neuron usually has its own bias. For example, in Keras, this is the case, as [you can easily verify](https://stackoverflow.com/a/59828935/3924118). However, in principle, you could also have a layer with a single scalar bias that is shared across all neurons of that layer, but this would probably have a different effect. The role of the bias is discussed in several places on the web. For example, in [this Stack Overflow post](https://stackoverflow.com/q/2480650/3924118) or in [this Stats SE post](https://stats.stackexchange.com/q/185911/82135). Upvotes: 2
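The layer equation $\mathbf{y} = f(\mathbf{W}\mathbf{x} + \mathbf{b})$ above can be written out directly. A NumPy sketch mirroring the question's 5-unit and 2-unit dense layers (random weights stand in for trained values), showing that the bias is one entry per output neuron:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, W, b):
    """One dense layer: y = relu(W x + b); b has one entry per output neuron."""
    return np.maximum(0.0, W @ x + b)

x = np.array([0.7])                                   # inputShape: [1]
W1, b1 = rng.normal(size=(5, 1)), rng.normal(size=5)  # 5 units -> bias vector of length 5
W2, b2 = rng.normal(size=(2, 5)), rng.normal(size=2)  # 2 units -> bias vector of length 2

y = dense(dense(x, W1, b1), W2, b2)
print(b1.shape, b2.shape, y.shape)                    # (5,) (2,) (2,)
```

Collapsing either bias to a single shared scalar would force every neuron in that layer to shift its pre-activation by the same amount, which is the lost flexibility the answer describes.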
2020/01/21
<issue_start>username_0: I'd like to better understand temporal-difference learning. In particular, I'm wondering if it is prudent to think about TD($\lambda$) as a type of "truncated" Monte Carlo learning?<issue_comment>username_1: TD($\lambda$) can be thought of as a combination of TD and MC learning, so as to avoid choosing one method or the other and to take advantage of both approaches. More precisely, TD($\lambda$) is temporal-difference learning with a $\lambda$-return, which is defined as an average of all $n$-step returns, for all $n$, where an $n$-step return is the *target* used to update the estimate of the value function that contains $n$ future rewards (plus an estimate of the value function of the state $n$ steps in the future). For example, TD(0) (e.g. Q-learning is usually presented as a TD(0) method) uses a $1$-step return, that is, it uses one future reward (plus an estimate of the value of the next state) to compute the target. The letter $\lambda$ actually refers to a parameter used in this context to weigh the combination of TD and MC methods. There are actually two different perspectives of TD($\lambda$), the *forward view* and the *backward view* (eligibility traces). The blog post [Reinforcement Learning: Eligibility Traces and TD(lambda)](https://amreis.github.io/ml/reinf-learn/2017/11/02/reinforcement-learning-eligibility-traces.html) gives a quite intuitive overview of TD($\lambda$), and, for more details, read [the related chapter of the book *Reinforcement Learning: An Introduction*](http://incompleteideas.net/book/RLbook2020.pdf#page=309). Upvotes: 4 [selected_answer]<issue_comment>username_2: I am a novice in reinforcement learning and I have been struggling for several months with TD($\lambda$)'s logic. Initially it seemed to me that it was a successful but purely heuristic formula without any theoretical foundation.
But nowadays I understand it simply as a mean calculation, using the recurrence which states that when you have a mean and a new value arrives, the mean is modified by an amount equal to the difference between the new value and the mean, divided by the number of values. To summarize, this mean calculation is an instance of a general recurrent mean-calculation formula that scales the difference between the new value and the current mean by any number between 0 and 1. By the way, this number - usually called the *step size parameter* - can be dynamic, and in the first case (the usual mean calculation) it is the inverse of the number of values considered so far. Intuitively, we can see that this is an accurate estimation procedure regardless of the initial (guessed or not) value: with a high number of estimates (new values arriving), the initial value fades in importance. It can also be extended to treat several ($\lambda$) new values simultaneously. Until now I have not found this explanation anywhere, even though it is very simple, and I am not sure that it is sound. I would appreciate it if someone could let me know whether this intuition is correct and whether it has already been presented somewhere. Upvotes: 2
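The incremental-mean intuition in the second answer is easy to verify. A NumPy sketch: a step size of $1/n$ reproduces the batch sample mean exactly, while the constant step size used in TD updates gives a recency-weighted mean:

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.normal(loc=3.0, scale=1.0, size=1000)

# Incremental mean: m_n = m_{n-1} + (x_n - m_{n-1}) / n
m = 0.0
for n, x in enumerate(values, start=1):
    m += (x - m) / n
print(np.isclose(m, values.mean()))    # True: identical to the batch mean

# Constant step size, the same shape as a TD update V(s) += alpha * (target - V(s))
alpha, v = 0.1, 0.0
for x in values:
    v += alpha * (x - v)
print(v)    # close to the true mean 3.0, weighting recent values more heavily
```

The initial estimate (here 0.0) fades away as more values arrive, matching the answer's observation that the procedure is accurate regardless of the starting guess.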
2020/01/22
<issue_start>username_0: I am trying to understand how deep Q-learning (DQN) works. To my current understanding, each $Q(s, a)$ is estimated as a function of a feature vector of the state, $\phi(s)$, and the weights of the network, $\theta$. The loss function to minimise is $||\delta\_{t+1}||^2$, where $\delta\_{t+1}$ is shown below. The loss function is from a website discussing function approximation; even though it is not explicitly deep Q-learning, the loss function to minimise is similar. $$\delta\_{\mathrm{t}+1}=\mathrm{R}\_{\mathrm{t}+1}+\max \_{\mathrm{a}\in\mathrm{A}} \boldsymbol{\theta}^{\top} \Phi\left(\mathrm{s}\_{t+1}, \mathrm{a}\right)-\boldsymbol{\theta}^{\top} \Phi\left(\mathrm{s}\_{\mathrm{t}}, \mathrm{a}\right)$$ Source: <https://towardsdatascience.com/function-approximation-in-reinforcement-learning-85a4864d566>. Intuitively, I am not able to understand why the loss function is defined this way. Once the network converges to a $\theta$ using gradient descent, does that mean that $Q\_{max}(s,a)$ has been found? In essence, I cannot grasp intuitively how the neural network is able to generalise the learning to unseen states. The algorithm I am looking at to help me understand deep Q-networks is below. [![enter image description here](https://i.stack.imgur.com/DSsAU.png)](https://i.stack.imgur.com/DSsAU.png) Source: [https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf](https://www.cs.toronto.edu/%7Evmnih/docs/dqn.pdf)<issue_comment>username_1: Well, you want your network to have good predictive power for the Q-values. So you compare the Q-value at time t with the reward you got at time t after executing action a, plus your network's prediction of the best Q-value at time t+1. Note that you are optimizing using a prediction, not a true value. That is called bootstrapping; look up TD learning for a better grasp of the concept.
Upvotes: 0 <issue_comment>username_2: Especially in continuous spaces, convergence of the value function is mainly a theoretical property. Without seeing enough of the state space, as you suggest, there's no way to ensure that your Q function will generalize to the whole state space. Convergence results for Q-learning with function approximation generally show that, in the limit of infinite data, your value function will converge to the desired fixed point -- note that this is only true when your agent explores occasionally, for an infinite amount of time. When your parameters have converged, this simply means that your Q function has fit the data you've collected. As you explore more, your agent may get "surprised" and your parameters may start to change again. Also, convergence of the parameters in function approximation can never guarantee that an optimal value function was found in practice -- the only guarantee you can wish for is that the optimal value function *that can be produced by your model* has been found. For instance, the parameters of the linear Q function you posted can converge even if the optimal Q function is not linear. Upvotes: 1
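The question's update can be written out for the linear case. A NumPy sketch with toy features (note the question's formula omits the discount factor $\gamma$, which is included here as is standard):

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_features = 2, 4
theta = np.zeros((n_actions, n_features))   # one weight vector per action

def q_values(theta, phi):
    return theta @ phi                      # Q(s, a) = theta_a^T phi(s)

def td_update(theta, phi_s, a, r, phi_next, alpha=0.1, gamma=0.99, done=False):
    target = r + (0.0 if done else gamma * q_values(theta, phi_next).max())
    delta = target - q_values(theta, phi_s)[a]   # TD error
    theta[a] += alpha * delta * phi_s            # semi-gradient step on delta^2
    return delta

phi_s, phi_next = rng.normal(size=4), rng.normal(size=4)
before = q_values(theta, phi_s)[0]
delta = td_update(theta, phi_s, a=0, r=1.0, phi_next=phi_next)
after = q_values(theta, phi_s)[0]
print(before, after)   # the estimate for (s, a) moved toward the bootstrapped target
```

Generalization to unseen states comes entirely from the features: any state whose $\phi(s)$ overlaps with the features of visited states has its Q-value moved by the same weight updates - which is also why convergence of $\theta$ only means the best fit *within the chosen function class*, as the second answer notes.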
2020/01/22
1,042
3,648
<issue_start>username_0: In deep learning, is it possible to use discontinuous activation functions (e.g. one with jump discontinuity)? (My guess: for example, ReLU is non-differentiable at a single point, but it still has a well-defined derivative. If an activation function has a jump discontinuity, then its derivative is supposed to have a delta function at that point. However, the backpropagation process is incapable of considering that delta function, so the convex optimization process will have some problem?)<issue_comment>username_1: I would say that it is possible, but probably not a very good idea. Like you say, the hard requirement is that the network (and thus its components, including the activation functions) must be differentiable. ReLU isn't, but you can cheat by defining f'(0) to be 0 (or 1). A continuous function means that gradient descent leads to some local minimum¹; for piecewise continuous functions, it may not converge (i.e. the breakpoints themselves may not be part of the segment you descend, so you will never get to an actual minimum). This is not likely to be a problem in practice, though. ¹ At least for functions that are bounded from below, like cost functions are. Upvotes: 1 <issue_comment>username_2: Even the first artificial neural network - Rosenblatt's perceptron [[1](https://psycnet.apa.org/doiLanding?doi=10.1037%2Fh0042519)] had a discontinuous activation function. That network is in introductory chapters of many textbooks about AI. For example, *<NAME>. Artificial intelligence: a guide to intelligent systems. Second Edition* shows how to train such networks on pages 170-174. The error backpropagation algorithm can be modified to accommodate discontinuous activation functions. The details are in paper [[2](https://ieeexplore.ieee.org/document/51993)]. That paper points out a possible application: training a neural network on micro-controllers.
As the multiplication of the output of the previous layer $x\_j$ by the weight $w\_{ij}$ is expensive, the author suggested approximating it with a left shift by $n$ bits (multiplication by $2^n$) for the corresponding $n$, in which case the activation function is discontinuous (a staircase). An example of a neural network with discontinuous activation functions applied to the restoration of degraded images is in Ref. [[3](https://ieeexplore.ieee.org/document/4634149)]. Applications of recurrent neural networks with discontinuous activation functions to convex optimization problems are in Ref. [[4](https://www.worldscientific.com/doi/abs/10.1142/9789814280150_0004)]. Probably more examples can be found in the literature. References

1. <NAME>. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol Rev. 1958 Nov; 65(6):386-408. PMID: 13602029 DOI: 10.1037/h0042519
2. <NAME>. Training networks with discontinuous activation functions. 1989 First IEE International Conference on Artificial Neural Networks, (Conf. Publ. No. 313), London, UK, 1989, pp. 361-363.
3. <NAME>.; <NAME>.; <NAME>. Image restoration using L1-norm regularization and a gradient-based neural network with discontinuous activation functions. 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, 2008, pp. 2512-2519. DOI: 10.1109/IJCNN.2008.4634149
4. <NAME>.; <NAME>. Recurrent Neural Networks with Discontinuous Activation Functions for Convex Optimization. Integration of Swarm Intelligence and Artificial Neural Network, pp. 95-119 (2011) DOI: 10.1142/9789814280150\_0004

Upvotes: 2
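A common modern workaround (used, for instance, in binarized and spiking networks) is to keep the discontinuous function in the forward pass but backpropagate through a smooth surrogate of it. A minimal sketch — function names and the choice of surrogate are my own, not from the cited papers:

```python
import math

def heaviside(x):
    # Discontinuous activation: jump from 0 to 1 at x = 0.
    return 1.0 if x > 0 else 0.0

def surrogate_grad(x, beta=4.0):
    # Backward pass uses the derivative of a steep sigmoid instead of the
    # true derivative (which is a delta function at 0 and zero elsewhere).
    s = 1.0 / (1.0 + math.exp(-beta * x))
    return beta * s * (1.0 - s)
```

The forward pass stays exactly discontinuous, while gradient descent receives a usable (if biased) learning signal near the jump.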
2020/01/23
435
1,724
<issue_start>username_0: During weeks and months of your work, many things may change, for example : * You may modify the loss function * Your training or validation datasets may change * You modify data augmentation Which tools or processes do you use to track modifications you have made and how did they affected the model ?<issue_comment>username_1: Maybe you are looking for a combination of a [version control system](https://en.wikipedia.org/wiki/Version_control) (like [git](https://git-scm.com/) and [Github](https://github.com/)) and a tool like [comet.ml](https://www.comet.ml/). In the past, I used comet.ml to keep track of different experiments performed with different hyper-parameters or different versions of the code. There are other alternatives to comet.ml, such as [sacred](https://github.com/IDSIA/sacred), but they may also have different features and may not be as visually pleasing as comet.ml or even free. Personally, I liked comet.ml (even though, at the time, it still lacked some features). In any case, a VCS, like git, is widely used in software development (not just in AI projects) to keep track of different versions of the code, etc. You may also be interested in continuous integration (e.g. [Travis CI](https://travis-ci.com/plans)) and code review (e.g. [codacy](https://app.codacy.com/login)) tools. Upvotes: 3 [selected_answer]<issue_comment>username_2: Full disclosure: I work at Dessa, the company that developed this tech. We built a machine learning experiment management tool, called Atlas. The main feature is experiment management, allowing you to run thousands of experiments concurrently. This might help with your problem above <https://github.com/dessa-oss> Upvotes: 1
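If you want something dependency-free before committing to a hosted tool, a minimal DIY tracker that appends each run's configuration and results to a JSON-lines file can go a long way (names here are illustrative, not from any particular tool):

```python
import hashlib
import json
import time
from pathlib import Path

def log_run(config, metrics, log_dir="runs"):
    """Append one experiment record (config + results) to runs/experiments.jsonl."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        # A short hash of the config makes repeated runs of the same setup
        # easy to spot when scanning the log.
        "config_hash": hashlib.sha1(
            json.dumps(config, sort_keys=True).encode()).hexdigest()[:8],
        "config": config,
        "metrics": metrics,
    }
    Path(log_dir).mkdir(parents=True, exist_ok=True)
    with (Path(log_dir) / "experiments.jsonl").open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Pair this with committing your code before each run, so every record can be matched to an exact code version in your VCS.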
2020/01/23
454
1,849
<issue_start>username_0: Given a pre-trained CNN model, I extract feature vectors of images in the reference and query datasets, each with several thousand elements. I would like to apply some dimensionality reduction techniques to reduce the feature vector dimension to speed up cosine similarity/euclidean distance matrix calculation. I have already come up with the following two methods in my literature review: 1. Principal Component Analysis (PCA) + Whitening 2. Locality-Sensitive Hashing (LSH) Are there more approaches to perform dimensionality reduction of feature vectors? If so, what are the pros/cons of each perhaps?<issue_comment>username_1: Dimensionality reduction could be achieved by using an **Autoencoder** Network, which learns a representation (or Encoding) for the input data. While training, the reduction side (Encoder) reduces the data to a lower dimension and a reconstructing side (Decoder) tries to reconstruct the original input from the intermediate reduced encoding. You could assign the encoder layer output ($L\_i$) to a desired dimension (lower than that of the input). Once trained, $L\_i$ could be used as an alternative representation of your input data in a lower feature-space, and can be used for further computations. [![Autoencoder Architecture](https://i.stack.imgur.com/kLtGc.png)](https://i.stack.imgur.com/kLtGc.png) Upvotes: 3 [selected_answer]<issue_comment>username_2: Some examples of dimensionality reduction techniques:

| Linear methods | Non-linear methods | Graph-based methods ("network embedding") |
| --- | --- | --- |
| PCA, CCA, ICA, SVD, LDA, NMF | Kernel PCA, GDA, Autoencoders, t-SNE, UMAP, MVU | Diffusion maps, Graph Autoencoders, Graph-based kernel PCA (Isomap, LLE, Hessian LLE, Laplacian Eigenmaps) |

Though there are [many more](https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction). Upvotes: 2
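One further lightweight option, closely related to LSH (which hashes by random hyperplanes), is Gaussian random projection: by the Johnson-Lindenstrauss lemma, projecting onto a random lower-dimensional subspace approximately preserves distances and angles. A dependency-free sketch (function names are mine):

```python
import math
import random

def random_projection_matrix(d_in, d_out, seed=0):
    # Rows are random Gaussian directions; scaling by 1/sqrt(d_out)
    # approximately preserves Euclidean norms after projection.
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) / math.sqrt(d_out) for _ in range(d_in)]
            for _ in range(d_out)]

def project(vec, matrix):
    # Matrix-vector product: maps a d_in vector to a d_out vector.
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))
```

Pros: no training pass over the data and trivially fast; cons: unlike PCA it ignores the data distribution, so for the same target dimension it typically preserves similarity less accurately.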
2020/01/23
673
2,910
<issue_start>username_0: It's not clear to me whether or not someone whose work aims to **improve an NLP system** may be called a "*Computational Linguist*" even when she/he **doesn't modify the algorithm directly** by coding. Let's consider the following activities:

* Annotation for machine learning: analysis of morphology and syntax, POS-tagging annotation
* Analysis and annotation of entities (NER) and collocations; supporting content categorization; chunking; word sense disambiguation
* Recording of technical issues of the annotation tool to improve its reliability
* Recording of particular linguistic and logical rules adopted by the research team who develops the NLP algorithm, to improve consistency between annotation and the criteria previously adopted to train the NLP system

May these activities be considered "Computational Linguistics"? If not, what is their professional category, and how should they be summarised in a single term on a resume?<issue_comment>username_1: Yes. A computational linguist is someone who (among other things) uses computers to process/model/analyse/... natural language. Coding might be one aspect of it, but is about the least important: you can always get a non-linguist programmer to do coding for you. I studied "Computational Linguistics" at university, and while programming was taught as part of the course, coding was only a minor aspect in the actual subject matter. The senior professor (and head of the department) wasn't able to do any coding himself; he came from the linguistics side of it. Being able to program is useful, as it speeds things up and makes you more independent, but it is by no means an important part of being a computational linguist. UPDATE: I have been accused of misrepresenting the field of CL. However, it is a broad, interdisciplinary field, and comprises many elements.
Sure, on the academic/research side you might do more programming than on the applied/commercial side, but I maintain that you can easily work as a computational linguist without actually doing any programming. For most tasks, readily available software exists by now, so you don't actually need to program anything new. Upvotes: 2 <issue_comment>username_2: (Disclosure: I am a researcher and lecturer in Computational Linguistics) It is true that annotation and debugging work with existing tools without modification can be considered Computational Linguistics. And yet, most Computational Linguists program on a daily basis, since they actively develop tools. Just to give you some context, at major Computational Linguistics conferences such as ACL or EMNLP (the biggest ones), most authors did the coding themselves. To say that coding is an unimportant side aspect of being a Computational Linguist, as claimed in another [answer](https://ai.stackexchange.com/a/17637/33073), is a slight misrepresentation. Upvotes: 3 [selected_answer]
2020/01/28
2,251
9,063
<issue_start>username_0: I trained a simple CNN on the MNIST database of handwritten digits to 99% accuracy. I'm feeding in a bunch of handwritten digits, and non-digits from a document. I want the CNN to report errors, so I set a threshold of 90% certainty below which my algorithm assumes that what it's looking at is not a digit. My problem is that the CNN is 100% certain of many incorrect guesses. In the example below, the CNN reports 100% certainty that it's a 0. How do I make it report failure? [![Handwritten digit input - does not look like a digit](https://i.stack.imgur.com/nPBYU.png)](https://i.stack.imgur.com/nPBYU.png) **My thoughts on this**: Maybe the CNN is not really 100% certain that this is a zero. Maybe it just thinks that it can't be anything else, and it's being forced to choose (because of normalisation on the output vector). Is there any way I can get insight into what the CNN "thought" before I forced it to choose? PS: I'm using Keras on Tensorflow with Python. **Edit** Because someone asked. Here is the context of my problem: This came from me applying a heuristic algorithm for segmentation of sequences of connected digits. In the image above, the left part is actually a 4, and the right is the curve bit of a 2 without the base. The algorithm is supposed to step through segment cuts, and when it finds a confident match, remove that cut and continue moving along the sequence. It works really well for some cases, but of course it's totally reliant on being able to tell if what it's looking at is not a good match for a digit. Here's an example of where it kind of did okay. [![Various input images for handwritten digit recognition](https://i.stack.imgur.com/rVDz4.png)](https://i.stack.imgur.com/rVDz4.png) My next best option is to do inference on all permutations and maximise combined score. That's more expensive.<issue_comment>username_1: The concept you are looking for is called epistemic uncertainty, also known as model uncertainty. 
You want the model to produce meaningful calibrated probabilities that quantify the real confidence of the model. This is generally not possible with simple neural networks as they simply do not have this property; for this you need a Bayesian Neural Network (BNN). This kind of network learns a distribution of weights instead of scalar or point-wise weights, which then allows the network to encode model uncertainty, as the distribution of the output is then calibrated and has the properties you want. This problem is also called out of distribution (OOD) detection, and again it can be done with BNNs, but unfortunately training a full BNN is intractable, so we use approximations. As a reference, one of these approximations is Deep Ensembles, which train several instances of a model on the same dataset and then average the softmax probabilities, and have good out of distribution detection properties. Check the paper [here](https://arxiv.org/abs/1612.01474), in particular section 3.5 which shows results for OOD based on entropy of the ensemble probabilities. Upvotes: 6 [selected_answer]<issue_comment>username_2: Broken assumptions ------------------ Generalization relies on making strong assumptions (no free lunch, etc.). If you break your assumptions, then you're not going to have a good time. A key assumption of a standard digit-recognition classifier like MNIST is that you're classifying pictures that actually contain a single digit. If your real data contains pictures that have non-digits, then that means that your real data is *not* similar to training data but is conceptually very, very different. If that's a problem (as in this case) then one way to treat that is to explicitly break that assumption and train a model that not only recognizes digits 0-9 but also recognizes whether there's a digit at all, and is able to provide an answer "that's not a digit", so an 11-class classifier instead of a 10-class one.
MNIST training data is not sufficient for that, but you can use some kind of 'distractor' data to provide the not-a-digit examples. For example, you could use some dataset of letters (perhaps omitting I, l, O and B) transformed to look similar to MNIST data. Upvotes: 3 <issue_comment>username_3: Your classifier is specifically learning the ways in which **0**s are different from other digits, **not** what it really means for a digit to **be** a zero. Philosophically, you could say the model appears to have some powerful understanding when restricted to a tightly controlled domain, but that facade is lifted as soon as you throw any sort of wrench in the works. Mathematically, you could say that the model is simply optimizing a classification metric for data drawn from a specific distribution, and when you give it data from a different distribution, all bets are off. The go-to answer is to collect or generate data like the data you expect the model to deal with (in practice, the effort required to do so can vary dramatically depending upon the application). In this case, that could involve drawing a bunch of random scribbles and adding them to your training data set. At this point you must ask, now how do I label them? You will want a new "other" or "non-digit" class so that your model can learn to categorize these scribbles separately from digits. After retraining, your model should now better deal with these cases. However, you may then ask, but what if I gave it color images of digits? Or color images of farm animals? Maybe pigs will be classified as zeros because they are round. This problem is a fundamental property of the way deep learning is orchestrated. Your model is not capable of higher order logic, which means it can seem to go from being very intelligent to very dumb by just throwing the slightest curve ball at it. For now, all deep learning does is recognize patterns in data that allow it to minimize some loss function. 
Deep learning is a fantastic tool, but not an all-powerful omnitool. Bear in mind its limitations and use it where appropriate, and it will serve you well. Upvotes: 4 <issue_comment>username_4: Apollys, That's a very well thought out response. Particularly, the philosophical discussion of the essence of "0-ness." I haven't actually performed this experiment, so caveat emptor... I wonder how well an "other" class would actually work. The ways in which "other" differs from "digit" has infinite variability (or at least its only limitation is the cardinality of the input layer). The NN decides whether something is more of one class or more of a different class. If there isn't an essence in common among other "non-digits", I don't believe it will do well at identifying "other" as the catch-all for everything that has low confidence level of classification. This approach still doesn't identify what it is to ***be*** "not-digit". It identifies how all the things that are "other" differ from the other labeled inputs -- probably poorly, depending on the variability of the "non-digit" labeled data. (i.e. is it numerically exhaustive, many times over, of all random scribbles?) Thoughts? Upvotes: 2 <issue_comment>username_5: I'm an amateur with neural networks, but I will illustrate my understanding of how this problem comes to be. 
First, let's see how a trivial neural network classifies 2D input into two classes: [![enter image description here](https://i.stack.imgur.com/OgldC.png)](https://i.stack.imgur.com/OgldC.png) But in the case of a complex neural network, the input space is much bigger and the sample data points are much more clustered, with big chunks of empty space between them: [![enter image description here](https://i.stack.imgur.com/74IAT.png)](https://i.stack.imgur.com/74IAT.png) The neural network then doesn't know how to classify the data in the empty space, so something like this is possible: [![enter image description here](https://i.stack.imgur.com/2VAYc.png)](https://i.stack.imgur.com/2VAYc.png) [![enter image description here](https://i.stack.imgur.com/Vwep2.png)](https://i.stack.imgur.com/Vwep2.png) When using the traditional ways of measuring the quality of neural networks, both of these will be considered good, as they do classify the labelled classes correctly. Then, what happens if we try to classify these data points? [![enter image description here](https://i.stack.imgur.com/Ngw14.png)](https://i.stack.imgur.com/Ngw14.png) Really, the neural network has no data it could fall back on, so it just outputs what seems to us to be random nonsense. Upvotes: 2 <issue_comment>username_6: In your particular case, you could add an eleventh category to your training data: "not a digit". Then train your model with a bunch of images of incorrectly segmented digits, in addition to the normal digit examples. This way the model will learn to tell apart real digits from incorrectly segmented ones. However, even after doing that, there will be an infinite number of random-looking images that will be classified as digits. They're just far away from the examples of "not a digit" you provided. Upvotes: 0
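To make the accepted answer's suggestion concrete: average the softmax outputs of several independently trained models and use the entropy of the averaged distribution as an uncertainty score. A minimal sketch (not the paper's full recipe, and the function names are mine):

```python
import math

def softmax(logits):
    # Numerically stable softmax over one model's raw outputs.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def ensemble_uncertainty(logits_per_model):
    """Predictive entropy of the averaged ensemble distribution.

    High entropy means the members disagree or are individually unsure,
    which is a useful signal for rejecting out-of-distribution inputs."""
    probs = [softmax(l) for l in logits_per_model]
    n, k = len(probs), len(probs[0])
    mean = [sum(p[i] for p in probs) / n for i in range(k)]
    entropy = -sum(p * math.log(p) for p in mean if p > 0)
    return entropy, mean
```

Thresholding this entropy to decide "digit vs. garbage" tends to be far better behaved than thresholding a single network's softmax maximum, as in the question.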
2020/01/29
2,007
8,060
<issue_start>username_0: The famous Nvidia paper [Progressive Growing of GANs for Improved Quality, Stability, and Variation](https://research.nvidia.com/sites/default/files/pubs/2017-10_Progressive-Growing-of/karras2018iclr-paper.pdf), the GAN can generate hyperrealistic human faces. But, in the very same paper, images of other categories are rather disappointing and there hasn't seemed to be any improvements since then. Why is it the case? Is it because they didn't have enough training data for other categories? Or is it due to some fundamental limitation of GAN? I have come across a paper talking about the limitations of GAN: [Seeing What a GAN Cannot Generate](http://openaccess.thecvf.com/content_ICCV_2019/papers/Bau_Seeing_What_a_GAN_Cannot_Generate_ICCV_2019_paper.pdf). Anybody using GAN for image synthesis other than human faces? Any success stories?
2020/01/30
1,874
7,732
<issue_start>username_0: We have hundreds of thousands of customers records, and we need to take the benefits of our data to train a model that will recognize fake entries or unrealistic ones for our platform, where customers are asked to enter their names, phone number and zip code. So, our attributes are name, phone number, zip code and IP address to train the model with. We have only data associated with real users. Can we train a model provided with only positive labels (as we do not have a negative dataset to train the model with)?<issue_comment>username_1: The concept you are looking for is called epistemic uncertainty, also known as model uncertainty. You want the model to produce meaningful calibrated probabilities that quantify the real confidence of the model. This is generally not possible with simple neural networks as they simply do not have this property, for this you need a Bayesian Neural Network (BNN). This kind of network learns a distribution of weights instead of scalar or point-wise weights, which then allow to encode model uncertainty, as then the distribution of the output is calibrated and has the properties you want. This problem is also called out of distribution (OOD) detection, and again it can be done with BNNs, but unfortunately training a full BNN is untractable, so we use approximations. As a reference, one of these approximations is Deep Ensembles, which train several instances of a model in the same dataset and then average the softmax probabilities, and has good out of distribution detection properties. Check the paper [here](https://arxiv.org/abs/1612.01474), in particular section 3.5 which shows results for OOD based on entropy of the ensemble probabilities. Upvotes: 6 [selected_answer]<issue_comment>username_2: Broken assumptions ------------------ Generalization relies on making strong assumptions (no free lunch, etc). If you break your assumptions, then you're not going to have a good time. 
A key assumption of a standard digit-recognition classifier like MNIST is that you're classifying pictures that actually contain a single digit. If your real data contains pictures that have non-digits, then that means that your real data is *not* similar to training data but is conceptually very, very different. If that's a problem (as in this case) then one way to treat that is to explicitly break that assumption and train a model that not only recognizes digits 0-9 but also recognizes whether there's a digit at all, and is able to provide an answer "that's not a digit", so a 11-class classifier instead of a 10-class one. MNIST training data is not sufficient for that, but you can use some kind of 'distractor' data to provide the not-a-digit examples. For example, you could use some dataset of letters (perhaps omitting I, l, O and B) transformed to look similar to MNIST data. Upvotes: 3 <issue_comment>username_3: Your classifier is specifically learning the ways in which **0**s are different from other digits, **not** what it really means for a digit to **be** a zero. Philosophically, you could say the model appears to have some powerful understanding when restricted to a tightly controlled domain, but that facade is lifted as soon as you throw any sort of wrench in the works. Mathematically, you could say that the model is simply optimizing a classification metric for data drawn from a specific distribution, and when you give it data from a different distribution, all bets are off. The go-to answer is to collect or generate data like the data you expect the model to deal with (in practice, the effort required to do so can vary dramatically depending upon the application). In this case, that could involve drawing a bunch of random scribbles and adding them to your training data set. At this point you must ask, now how do I label them? 
You will want a new "other" or "non-digit" class so that your model can learn to categorize these scribbles separately from digits. After retraining, your model should now better deal with these cases. However, you may then ask, but what if I gave it color images of digits? Or color images of farm animals? Maybe pigs will be classified as zeros because they are round. This problem is a fundamental property of the way deep learning is orchestrated. Your model is not capable of higher order logic, which means it can seem to go from being very intelligent to very dumb by just throwing the slightest curve ball at it. For now, all deep learning does is recognize patterns in data that allow it to minimize some loss function. Deep learning is a fantastic tool, but not an all-powerful omnitool. Bear in mind its limitations and use it where appropriate, and it will serve you well. Upvotes: 4 <issue_comment>username_4: Apollys, That's a very well thought out response. Particularly, the philosophical discussion of the essence of "0-ness." I haven't actually performed this experiment, so caveat emptor... I wonder how well an "other" class would actually work. The ways in which "other" differs from "digit" has infinite variability (or at least its only limitation is the cardinality of the input layer). The NN decides whether something is more of one class or more of a different class. If there isn't an essence in common among other "non-digits", I don't believe it will do well at identifying "other" as the catch-all for everything that has low confidence level of classification. This approach still doesn't identify what it is to ***be*** "not-digit". It identifies how all the things that are "other" differ from the other labeled inputs -- probably poorly, depending on the variability of the "non-digit" labeled data. (i.e. is it numerically exhaustive, many times over, of all random scribbles?) Thoughts? 
Upvotes: 2 <issue_comment>username_5: I'm an amateur with neural networks, but I will illustrate my understanding of how this problem comes about. First, let's see how a trivial neural network classifies 2D input into two classes: [![enter image description here](https://i.stack.imgur.com/OgldC.png)](https://i.stack.imgur.com/OgldC.png) But in the case of a complex neural network, the input space is much bigger and the sample data points are much more clustered, with big chunks of empty space between them: [![enter image description here](https://i.stack.imgur.com/74IAT.png)](https://i.stack.imgur.com/74IAT.png) The neural network then doesn't know how to classify the data in the empty space, so something like this is possible: [![enter image description here](https://i.stack.imgur.com/2VAYc.png)](https://i.stack.imgur.com/2VAYc.png) [![enter image description here](https://i.stack.imgur.com/Vwep2.png)](https://i.stack.imgur.com/Vwep2.png) When using the traditional ways of measuring the quality of neural networks, both of these will be considered good, as they do classify the classes themselves correctly. Then, what happens if we try to classify these data points? [![enter image description here](https://i.stack.imgur.com/Ngw14.png)](https://i.stack.imgur.com/Ngw14.png) Really, the neural network has no data it could fall back on, so it just outputs what looks to us like random nonsense. Upvotes: 2 <issue_comment>username_6: In your particular case, you could add an eleventh category to your training data: "not a digit". Then train your model with a bunch of images of incorrectly segmented digits, in addition to the normal digit examples. This way the model will learn to tell apart real digits from incorrectly segmented ones. However, even after doing that, there will be an infinite number of random-looking images that will be classified as digits. They're just far away from the examples of "not a digit" you provided. Upvotes: 0
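The "not a digit" class suggested in the answers above can be sketched on toy data. The sketch below is purely illustrative: it uses NumPy and a 1-nearest-neighbour classifier in place of a real CNN, and the two clusters standing in for "digit" classes, the cluster positions, and the distractor distribution are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for two "digit" classes: tight clusters in a 2D feature space.
digits_a = rng.normal(loc=(5.0, 5.0), scale=0.5, size=(100, 2))    # class 0
digits_b = rng.normal(loc=(-5.0, -5.0), scale=0.5, size=(100, 2))  # class 1

# "Distractor" examples: random scribbles spread over the whole input space,
# all labeled with an extra "not a digit" class (class 2).
scribbles = rng.uniform(-10.0, 10.0, size=(300, 2))

X = np.vstack([digits_a, digits_b, scribbles])
y = np.array([0] * 100 + [1] * 100 + [2] * 300)

def predict(query):
    """1-nearest-neighbour classifier: return the label of the closest point."""
    distances = np.linalg.norm(X - query, axis=1)
    return y[np.argmin(distances)]

print(predict(np.array([5.0, 5.0])))   # near the class-0 cluster
print(predict(np.array([9.0, -9.0])))  # far from both clusters
```

Without the distractor class, the far-away query would be forced into class 0 or 1; with it, inputs that resemble neither digit cluster fall into "not a digit".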
2020/01/30
660
2,471
<issue_start>username_0: I am looking for a small dataset on which I can implement object detection, object segmentation and object localization. Can anyone suggest a dataset smaller than 5 GB? Or do I need to know anything before implementing these algorithms?<issue_comment>username_1: There are various datasets available, such as 1. Pascal VOC dataset: You can perform all your tasks with this one. [![enter image description here](https://i.stack.imgur.com/WZ1Ho.png)](https://i.stack.imgur.com/WZ1Ho.png) The size of the dataset is as follows: [![enter image description here](https://i.stack.imgur.com/vl0JT.png)](https://i.stack.imgur.com/vl0JT.png) 2. ADE20K Semantic Segmentation Dataset: you can perform only segmentation here. [![enter image description here](https://i.stack.imgur.com/mwoHl.png)](https://i.stack.imgur.com/mwoHl.png) 3. COCO dataset: This is a rich dataset, but larger than 5 GB, so you can try downloading it into your drive using Google Colab and then making a zip file of less than 5 GB. [![enter image description here](https://i.stack.imgur.com/3DwkV.png)](https://i.stack.imgur.com/3DwkV.png) You can download all these datasets easily using GluonCV; here is the [link](https://gluon-cv.mxnet.io/api/data.datasets.html). Upvotes: 4 [selected_answer]<issue_comment>username_2: Maybe try the COCO (common objects in context) dataset. It's often used for object detection, segmentation and localisation. They provide labels, and you can limit the size by downloading only a specific number of classes. <http://cocodataset.org/#explore> It's also quite a common one, so you can expect good documentation, and online answers to your questions. Hope that helps! Upvotes: 1 <issue_comment>username_3: *NDDS is a UE4 plugin from NVIDIA to empower computer vision researchers to export high-quality synthetic images with metadata. NDDS supports images, segmentation, depth, object pose, bounding box, keypoints, and custom stencils.
In addition to the exporter, the plugin includes different components for generating highly randomized images. This randomization includes lighting, objects, camera position, poses, textures, and distractors, as well as camera path following, and so forth. Together, these components allow researchers to easily create randomized scenes for training deep neural networks* <https://github.com/yehengchen/DOPE-ROS-D435/blob/master/Synthetic-Data-UE4-DOPE.md> You can create a 3D synthetic dataset here. Upvotes: 1
2020/01/31
827
3,609
<issue_start>username_0: I have trained an XGBoost model to predict survival for the [Kaggle Titanic ML competition](https://www.kaggle.com/c/titanic/overview). As with all Kaggle competitions there is a `train` dataset with the target variable included and a `test` dataset without the target variable, which is used by Kaggle to compute the final accuracy score that determines your leaderboard ranking. **My problem:** I have built a fairly simple ensemble classifier (based on XGBoost) and evaluated it via standard train-test splits of the `train` data. The **accuracy** I get from this **validation is ~80%**, which is good but not amazing by public leaderboard standards (excluding the 100% cheaters). The results and all the KPIs I looked at for this standard model do not indicate severe overfitting, etc. to me. However, when I submit my predictions for the `test` set my **public score is ~35%**, which is way below even a random-chance model. It is so bad that I even improved my score by simply reversing all predictions from the model. **Why is my model so much worse on the `test`?** I know that Kaggle computes their scores a bit differently than I do locally, and additionally there are probably some differences between the datasets. Most who join the competition notice at least some difference between their local test scores and the public scores. However, my difference is really drastic, and indeed reversing the predictions improves my score. This does not make sense to me because reversing the predictions on my local validations leads to garbage predictions, so this is not a simple problem of generally reversed predictions. **So can you help me understand how those two issues happen at the same time:** * Drastic difference between local accuracy and public score * Reversing actually leads to the better public score.
Here is my notebook for the code (please ignore the errors; they appear because the code only runs locally, not on Kaggle kernels): <https://www.kaggle.com/fnguyen/titanicrising-test><issue_comment>username_1: Looking at your code, one set of data transformations was applied to the train data and a different set of transformations was applied to the test data. Different data transformations could account for the different evaluation metric performance. It is best practice to put all data transformations in a function so they can be applied to all data in the same way. Since you are using scikit-learn, `sklearn.compose.ColumnTransformer` is designed for this purpose. Example code for the Titanic dataset is [here](https://scikit-learn.org/stable/auto_examples/compose/plot_column_transformer_mixed_types.html). Upvotes: 3 [selected_answer]<issue_comment>username_2: @BrianSpiering was generally correct in pointing out that you should always apply the same transformations to your train and test dataset. This was the key to my solution, which was a bit more specific but might actually help others who encounter a similar problem. Specifically, my mistake came about because of **imputation**! Some of the factors I used for my model were `NA` in both the train and test dataset. To complete the data I simply imputed these missing values using the mean and mode respectively. However, since I did those transformations separately on both sets, the actual mean/mode values that were used differed heavily! By applying the imputation on the full data I imputed the same values for all missing cases, which solved my error. My resulting accuracy on the public leaderboard is now at 74.2%, which is fairly close to my local test score of 79.6%. Upvotes: 0
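The imputation mismatch described in the accepted answer can be sketched with a hypothetical numeric column (the values below are invented). The point is that the fill statistic is computed once and then reused for every split, instead of being recomputed per split:

```python
import numpy as np

# Hypothetical "Age"-style column with missing values in both splits.
train_age = np.array([22.0, 38.0, np.nan, 35.0, np.nan, 54.0])
test_age = np.array([np.nan, 27.0, np.nan, 40.0])

# Wrong: imputing each split with its own mean gives different fill values.
fill_train_only = np.nanmean(train_age)  # 37.25
fill_test_only = np.nanmean(test_age)    # 33.5

# Consistent: compute the statistic once and reuse it for every split
# (here on the combined data, as in the answer above).
fill_value = np.nanmean(np.concatenate([train_age, test_age]))  # 36.0

train_imputed = np.where(np.isnan(train_age), fill_value, train_age)
test_imputed = np.where(np.isnan(test_age), fill_value, test_age)
```

In practice the same effect is achieved by fitting a transformer (e.g. an imputer inside `sklearn.compose.ColumnTransformer`) on one dataset and calling only `transform` on the other.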
2020/01/31
1,375
6,451
<issue_start>username_0: Why are the terms classification and prediction used as synonyms, especially when it comes to deep learning? For example, a CNN predicts the handwritten digit. To me, a prediction is telling the next step in a sequence, whereas classification is to put labels on (a finite set of) data.<issue_comment>username_1: They aren't synonyms literally; books never interchange those terms, as they represent two different processes. What they are, though, are two similar processes. **Classification** can be thought of as a "process" that uses specific functions for the generation of **one or more discrete values**, usually using a cross-entropy function. **Prediction**, on the other hand, can be thought of as a "process" that uses, again, specific functions for the generation of **continuous values**, usually using a linear or multiple dependency model with an MSE loss function. TL;DR: Classification is for a discrete-valued dependent variable, prediction is for a continuous-valued dependent variable. Both are similar processes with different functions used for learning and estimation of the predicate. You can think of classification as a specific form of prediction. Hope it helped! Do let me know if I have made any mistakes. Upvotes: 0 <issue_comment>username_2: Many people confuse and misuse the two terms, classification and prediction (or classify and predict). This is because in many cases classification techniques are being used for prediction purposes, which creates part of the confusion for others who then use the term 'prediction' (or 'predict') inappropriately. Your understanding of the definitions of classification and prediction is mostly correct, and you are absolutely correct that there are many people using the terms synonymously, sometimes correctly but I believe mostly erroneously. There are many good articles elaborating on the two, and I have added some links and excerpts at the end of this answer.
What these articles don't cover is that many forecasting (i.e. prediction) researchers and practitioners will use conventional classifiers to predict the future state of a time series or data sequence. More advanced researchers and practitioners will use time-recurrent models, which learn temporal patterns. These are still called classifiers, but for the purpose of prediction. There are more papers written on this use of classifiers, conventional and time-recurrent type classifiers, for time series than on the use of regressor models! This adds to the confusion in the data science and machine learning community in the usage of the terms 'classify' and 'predict'. <NAME> sums it up best in his paper, "To Explain or to Predict?", where he states: **"Conflation between explanation and prediction is common, yet the distinction must be understood for progressing scientific knowledge."** There is also the opposite problem, where people will confuse regression models with classification. See the first article below. **[Classification vs. Prediction, by Professor <NAME>](https://www.fharrell.com/post/classification/)** Excerpt: *By not thinking probabilistically, machine learning advocates frequently utilize classifiers instead of using risk prediction models. The situation has gotten acute: many machine learning experts actually label logistic regression as a classification method (it is not). It is important to think about what classification really implies. Classification is in effect a decision. Optimum decisions require making full use of available data, developing predictions, and applying a loss/utility/cost function to make a decision that, for example, minimizes expected loss or maximizes expected utility. Different end users have different utility functions. In risk assessment this leads to their having different risk thresholds for action.
Classification assumes that every user has the same utility function and that the utility function implied by the classification system is that utility function.* **[To Explain or to Predict?, by <NAME>](https://www.stat.berkeley.edu/~aldous/157/Papers/shmueli.pdf)** *Abstract. Statistical modeling is a powerful tool for developing and testing theories by way of causal explanation, prediction, and description. In many disciplines there is near-exclusive use of statistical modeling for causal explanation and the assumption that models with high explanatory power are inherently of high predictive power. Conflation between explanation and prediction is common, yet the distinction must be understood for progressing scientific knowledge. While this distinction has been recognized in the philosophy of science, the statistical literature lacks a thorough discussion of the many differences that arise in the process of modeling for an explanatory versus a predictive goal. The purpose of this article is to clarify the distinction between explanatory and predictive modeling, to discuss its sources, and to reveal the practical implications of the distinction to each step in the modeling process.* **[What is the difference between classification and prediction?, from KDnuggets](https://www.kdnuggets.com/faq/classification-vs-prediction.html)** If one does a decision tree analysis, what is the result? A classification? A prediction? <NAME> answers: The decision tree is a classification model, applied to existing data. If you apply it to new data, for which the class is unknown, you also get a prediction of the class. The assumption is that the new data comes from the similar distribution as the data you used to build your decision tree. In many cases this is a correct assumption and that is why you can use the decision tree for building a predictive model. When Classification and Prediction are not the same? <NAME>-Shapiro answers: It is a matter of definition. 
If you are trying to classify existing data, e.g. group patients based on their known medical data and treatment outcome, I would call it a classification. If you use a classification model to predict the treatment outcome for a new patient, it would be a prediction. gabrielac adds In the book "Data Mining Concepts and Techniques", Han and Kamber's view is that predicting class labels is classification, and predicting values (e.g. using regression techniques) is prediction. Other people prefer to use "estimation" for predicting continuous values. Upvotes: 3 [selected_answer]
2020/02/02
2,776
10,916
<issue_start>username_0: I am trying to use RealNVP with some data I have (the input size is a 1D vector of size 22). [Here](https://arxiv.org/abs/1605.08803) is the link to the RealNVP paper and [here](https://lilianweng.github.io/lil-log/2018/10/13/flow-based-deep-generative-models.html) is a nice, short explanation of it (the paper is pretty long). My code is mainly based on [this](https://github.com/tonyduan/normalizing-flows) code from GitHub, and below are the main pieces that I am using (with slight adjustments). The problem is that the loss is becoming negative, which in the definition of my code means that the log-probability of my data is positive, which in turn means that the probabilities are bigger than 1. This seems impossible mathematically, and I see no way how this can happen. I also couldn't find a mistake in my code. Can someone help me with this? Is there a mistake there? Am I missing something with my understanding of normalizing flows? Thank you!
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import MultivariateNormal


class NormalizingFlowModel(nn.Module):
    def __init__(self, prior, flows):
        super().__init__()
        self.prior = prior
        self.flows = nn.ModuleList(flows)

    def forward(self, x):
        m, _ = x.shape
        log_det = torch.zeros(m).cuda()
        for flow in self.flows:
            x, ld = flow.forward(x)
            log_det += ld
        z, prior_logprob = x, self.prior.log_prob(x)
        return z, prior_logprob, log_det

    def inverse(self, z):
        m, _ = z.shape
        log_det = torch.zeros(m).cuda()
        for flow in self.flows[::-1]:
            z, ld = flow.inverse(z)
            log_det += ld
        x = z
        return x, log_det

    def sample(self, n_samples):
        z = self.prior.sample((n_samples,))
        x, _ = self.inverse(z)
        return x


class FCNN_for_NVP(nn.Module):
    """ Simple fully connected neural network to be used for Real NVP """
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.network = nn.Sequential(
            nn.Linear(in_dim, 32), nn.Tanh(),
            nn.Linear(32, 32), nn.Tanh(),
            nn.Linear(32, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, 32), nn.Tanh(),
            nn.Linear(32, 32), nn.Tanh(),
            nn.Linear(32, out_dim),
        )

    def forward(self, x):
        return self.network(x)


class RealNVP(nn.Module):
    """ Non-volume preserving flow. [Dinh et. al. 2017] """
    def __init__(self, dim, base_network=FCNN_for_NVP):
        super().__init__()
        self.dim = dim
        self.t1 = base_network(dim // 2, dim // 2)
        self.s1 = base_network(dim // 2, dim // 2)
        self.t2 = base_network(dim // 2, dim // 2)
        self.s2 = base_network(dim // 2, dim // 2)

    def forward(self, x):
        lower, upper = x[:, :self.dim // 2], x[:, self.dim // 2:]
        t1_transformed = self.t1(lower)
        s1_transformed = self.s1(lower)
        upper = t1_transformed + upper * torch.exp(s1_transformed)
        t2_transformed = self.t2(upper)
        s2_transformed = self.s2(upper)
        lower = t2_transformed + lower * torch.exp(s2_transformed)
        z = torch.cat([lower, upper], dim=1)
        log_det = torch.sum(s1_transformed, dim=1) + torch.sum(s2_transformed, dim=1)
        return z, log_det

    def inverse(self, z):
        lower, upper = z[:, :self.dim // 2], z[:, self.dim // 2:]
        t2_transformed = self.t2(upper)
        s2_transformed = self.s2(upper)
        lower = (lower - t2_transformed) * torch.exp(-s2_transformed)
        t1_transformed = self.t1(lower)
        s1_transformed = self.s1(lower)
        upper = (upper - t1_transformed) * torch.exp(-s1_transformed)
        x = torch.cat([lower, upper], dim=1)
        log_det = torch.sum(-s1_transformed, dim=1) + torch.sum(-s2_transformed, dim=1)
        return x, log_det


flow = RealNVP(dim=data.size(1))
flows = [flow for _ in range(1)]
prior = MultivariateNormal(torch.zeros(data.size(1)).cuda(), torch.eye(data.size(1)).cuda())
model = NormalizingFlowModel(prior, flows)
model = model.cuda()

for i in range(10):
    for j, dtt in enumerate(my_dataloader_bkg_only):
        optimizer.zero_grad()
        x = dtt[0].float()
        z, prior_logprob, log_det = model(x)
        logprob = prior_logprob + log_det
        loss = -torch.mean(prior_logprob + log_det)
        loss.backward()
        optimizer.step()
    if i % 1 == 0:
        print("Saved")
        best_loss = logprob.mean().data.cpu().numpy()
        print(logprob.mean().data.cpu().numpy(), prior_logprob.mean().data.cpu().numpy(), log_det.mean().data.cpu().numpy())
```
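One point worth checking against the question's premise: a normalizing flow is trained on log *densities*, and a continuous density can exceed 1, so a positive log-probability (and hence a negative negative-log-likelihood loss) is not a mathematical contradiction by itself. A minimal, standalone illustration with a narrow Gaussian (this is a general fact about densities, not a diagnosis of the code above):

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log-density of a 1D normal distribution N(mu, sigma^2)."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

# For sigma = 0.1 the pdf at the mean is 1/(0.1*sqrt(2*pi)) ~ 3.99 > 1,
# so the log-density is positive and -log p(x) is negative.
print(gaussian_logpdf(0.0, 0.0, 0.1))  # ~ 1.38 > 0
print(gaussian_logpdf(0.0, 0.0, 1.0))  # < 0 for a wide Gaussian
```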
2020/02/02
1,585
6,927
<issue_start>username_0: How would you go about training an RL Tic Tac Toe (well, any board game, really) application to learn how to play successfully (win the game), without a human having to play against the RL? Obviously, it would take a lot longer to train the AI if I have to sit and play "real" games against it. Is there a way for me to automate the training? I guess creating "a human player" to train the AI who just selects random positions on the board won't help the AI to learn properly, as it won't be up against something that's not using a strategy to try to beat it.<issue_comment>username_1: One solution would be to simply play the AI against itself (which yields good results for the tic tac toe example, I've tried it), but a much more interesting approach is to train two networks at the same time and have them play against each other. A related idea, where a generator and a discriminator network are trained against each other, is the Generative adversarial network (GAN), and it's very promising. It is able to make [realistic images](https://en.wikipedia.org/wiki/Generative_adversarial_network#/media/File:Woman_1.jpg) and [much more](https://en.wikipedia.org/wiki/Generative_adversarial_network#Applications). It's quite easy to implement both solutions, so I'd recommend doing both and seeing for yourself, since each problem is different. Upvotes: 0 <issue_comment>username_2: The approach will vary depending on some features of the game: * How many players (two for tic tac toe and many classic games). * Whether it is a "perfect information" game (yes for chess and tic tac toe), or whether there is significant hidden information (true for most forms of poker) * Whether the game is zero sum. Any game with simple win/loss/draw results can be considered zero sum. * Branching complexity of the game. Tic tac toe has a relatively low branching factor, at worst 9, which reduces by one each turn. The game of Go has a branching factor around 250. * Action complexity of the game.
A card game like Magic the Gathering has huge complexity even if the number of possible actions at each turn is quite small. The above traits and others make a large impact on the details of which algorithms and approaches will work best (or work at all). As that is very broad, I will not go into that, and I would suggest you ask separate questions about specific games if you take this further and try to implement some self-playing learning agents. It is possible to outline a general approach that would work for many games: 1. Implement Game Rules ----------------------- You need to have a programmable API for running the game, which allows for code representing a player (called an "agent") to receive current state of the game, to pass in a chosen action, and to return results of that action. The results should include whether any of the players has won or lost, and to update internal state of the game ready for next player. 2. Choose a Learning Agent Approach ----------------------------------- There are several kinds of learning algorithms that are suitable for controlling agents and learning through experience. One popular choice would be Reinforcement Learning (RL), but also Genetic Algorithms (GA) can be used for example. Key here is how different types of agent solve the issues of self-play: * GA does this very naturally. You create a population of agents, and have them play each other, selecting winners to create the next generation. The main issue with GA approaches is how well they scale with complexity - in general not as well as RL. * With RL you can have the current agent play best agent(s) you have created so far - which is most general approach. Or for some games you may be able to have it play both sides at once using the same predictive model - this works because predicting moves by the opposition is a significant part of game play. ### 2a. 
How self-play works in practice Without going into lines of code, what you need for self-play is: * The game API for automation and scoring * One or more agents that can use the API * A system that takes the results of games between agents and feeds them back so that learning can occur: + In a GA, with tournament selection, this could simply be saving a reference - the id of the winner - into a list, until the list has grown large enough to be the parents for the next generation. Other selection methods are also possible, but the general approach is the same - the games are played to help populate the next generation of agents. + In RL, typically each game state is saved to memory along with the next resulting state or the result win/draw/lose (converted into a reward value e.g. +1/0/-1). This memory is used to calculate and update estimates of future results from any given state (or in some variants used directly to decide the best action to take from any given state). Over time, this link between current and next states provides a way for the agent to learn better early game moves that eventually lead to a win for each player. + An important "trick" in RL is to figure out a way for the model and update rules to reflect the opposing goals of the players. This is not a consideration in agents that solve optimal control in single agent environments. You either need to make the prediction algorithm predict future reward as seen from the perspective of the current agent, or use a global scoring system and re-define one of the agents as trying to minimise the total reward - this latter approach works nicely for two player zero sum games, as both players can then directly share the same estimator, just using it to different ends. * A lot of repetition in a loop. GA's unit of repetition is usually a *generation* - one complete assessment of all existing agents (although again there are variants).
RL's unit of repetition is often smaller, individual episodes, and the learning part of the loop can be called on every single turn if desired. The basic iteration in all cases is: + Play anything from one move to multiple games, with automated agents taking all roles in the game and storing results. + Use the results to update the learned parameters for the agents. + Use the updated agents for the next stages of self-play learning. 3. Planning ----------- A purely reactive learning agent can do well in simple games, but typically you also want the agent to look ahead to predict future results more directly. You can combine the outputs from a learning model like RL with a forward-planning approach to get the best of both worlds. Forward search methods include [minimax](https://en.wikipedia.org/wiki/Minimax)/[negamax](https://en.wikipedia.org/wiki/Negamax) and [Monte Carlo Tree Search](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search). These can be combined with learning agents in multiple ways - for instance, as well as *using* the RL model in planning stages, they can also be used to help train an RL model (this is how it is used in Alpha Zero). Upvotes: 3 [selected_answer]
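The self-play iteration described above can be sketched for tic tac toe. This is a minimal illustration rather than the full approach: it uses a tabular Monte-Carlo-style value update (the final reward propagated back through the episode with alternating sign, since the two players alternate turns) instead of bootstrapped Q-learning or planning, and the hyperparameters are arbitrary.

```python
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

Q = {}  # (board, move) -> estimated value from the mover's perspective

def q(board, move):
    return Q.get((board, move), 0.0)

def choose(board, epsilon):
    """Epsilon-greedy move selection over the current value estimates."""
    if random.random() < epsilon:
        return random.choice(moves(board))
    return max(moves(board), key=lambda m: q(board, m))

def selfplay_episode(alpha=0.5, epsilon=0.2):
    """One automated game where the same table plays both sides."""
    board, player = ' ' * 9, 'X'
    history = []  # (board, move) for each turn, in order
    while True:
        move = choose(board, epsilon)
        history.append((board, move))
        board = board[:move] + player + board[move + 1:]
        win = winner(board)
        if win or not moves(board):
            # +1 for the winner's moves, -1 for the loser's, 0 for a draw;
            # the sign alternates going backwards because players alternate.
            reward = 1.0 if win else 0.0
            for state, m in reversed(history):
                old = q(state, m)
                Q[(state, m)] = old + alpha * (reward - old)
                reward = -reward
            return
        player = 'O' if player == 'X' else 'X'

random.seed(0)
for _ in range(20000):
    selfplay_episode()
```

After training, setting `epsilon=0` in `choose` gives a greedy agent for either side; the same loop structure would host a neural value model or a tree search in place of the table.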
2020/02/02
2,500
9,108
<issue_start>username_0: I am reading the Understanding Machine Learning book by Shalev-Shwartz and Ben-David and, based on the definitions of PAC learnability and the No Free Lunch theorem and my understanding of them, it seems like they contradict each other. I know this is not the case and I am wrong, but I just don't know what I am missing here. So, a hypothesis class is (agnostic) PAC learnable if there exists a learner A and a function $m\_{H}$ s.t. for every $\epsilon,\delta \in (0,1)$ and **for every distribution $D$ over $X \times Y$**, if $m \geq m\_{H}(\epsilon, \delta)$ the learner can return a hypothesis $h$, with a probability of at least $1 - \delta$ $$ L\_{D}(h) \leq \min\_{h'\in H} L\_{D}(h') + \epsilon $$ But, in layman's terms, the NFL theorem states that for prediction tasks, for every learner there **exists a distribution** on which the learner fails. There needs to exist a learner that is successful (defined above) for every distribution $D$ over $X \times Y$ for a hypothesis class to be PAC learnable, but according to NFL there exists a distribution for which the learner will fail. Aren't these theorems contradicting each other? What am I missing or misinterpreting here?<issue_comment>username_1: There is no contradiction. First, agnostic PAC learnable doesn't mean that there is a good hypothesis in the hypothesis class; it just means that there is an algorithm that can probably approximately do as well as the best hypothesis in the hypothesis class. Also, these NFL theorems have specific mathematical statements, and the hypothesis classes for which they apply are often not the same as the hypothesis classes for which PAC-learnability holds. For example, in *Understanding Machine Learning* by Shalev-Shwartz and Ben-David, a hypothesis class is agnostic PAC learnable if and only if it has finite VC dimension (Theorem 6.7). Here, the algorithm is ERM.
On the other hand, the application of the specific version of NFL that this book uses has Corollary 5.2, that the hypothesis class of all classifiers is not PAC learnable, and note that this hypothesis class has infinite VC dimension, so the Fundamental Theorem of PAC learning does not apply. The main takeaway is that in order to learn, we need some sort of inductive bias (prior information). This can be seen in the form of measuring the complexity of the hypothesis class or using other tools in learning theory. Upvotes: 4 [selected_answer]<issue_comment>username_2: (All notations based on [Understanding ML: From Theory to Algorithms](https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf)) The layman's phrasing of NFL is super misleading. The comparison between PAC learnability and NFL is kind of baseless since both proofs are built on a different set of assumptions. Let's review the definition of PAC learnability: > > A hypothesis class $H$ is PAC learnable if there exist a function $m\_H : (0, 1)^2 \to \mathbb{N}$ and a learning algorithm with the > following property: For every $\epsilon, \delta \in (0, 1)$, for every distribution $D$ over $X$, and for every labelling function $f : X \to \{0, 1\}$, if the realizable assumption holds with respect to $H, D, f$, then when running the learning algorithm on $m \geq m\_H(\epsilon, \delta)$ i.i.d. examples generated by $D$ and labeled by $f$, the algorithm returns > a hypothesis $h$ such that, with probability of at least $1 - \delta$ (over the choice of the examples), $L\_{(D,f)}(h) \leq \epsilon$. > > > An important point in this definition is that the complexity bound (i.e. the value of $m$) holds irrespective of the distribution $D$ (this is known as distribution free). 
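To make the "distribution free" point concrete, the standard sample-complexity bound for a *finite* hypothesis class in the realizable setting, $m \geq \frac{\ln|H| + \ln(1/\delta)}{\epsilon}$, can be computed directly; the function name is illustrative:

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Number of i.i.d. examples sufficient for ERM over a finite
    hypothesis class in the realizable setting; the guarantee holds
    for *every* distribution D (the distribution-free part)."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)
```

As expected from the formula, the requirement grows only logarithmically in $|H|$ and $1/\delta$, but linearly in $1/\epsilon$.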
Since in the proofs we use the $0$-$1$ loss, i.e. if $f(x) \neq h(x)$ we assign error $1$, $L\_D(A(S))$, which is defined as the true probability of error of the classifier ($A(S) = h\_S$), is just the expected $0$-$1$ loss of $h\_S$ under $D$. Also, the realizable assumption is not very important here. Now let's review the definition of NFL: > > Let $A$ be any learning algorithm for the task of binary classification with respect to the $0$-$1$ loss over a domain $X$. Let $m$ be any number smaller than $|X|/2$, representing a training set size. Then, there > exists a distribution $D$ over $X \times \{0, 1\}$ such that: > > > 1. There exists a function $f : X \to \{0, 1\}$ with $L\_{D}(f) = 0$ (i.e. realizable). 2. With probability of at least $1/7$ over the choice of $S \sim D^m$ we have that $L\_D(A(S)) \geq 1/8$. **NOTE:** For the second statement it suffices to show that $\Bbb E\_{S \sim D^{m}}L\_D(A'(S)) \geq 1/4$, which can be shown using Markov's inequality. Also, the definition implies that we consider all possible functions from $X$ to $\{0, 1\}$ and our learning algorithm can pick any function $f$ out of these, which somewhat implies that the set $X$ has been shattered. If you read the definition, it clearly states that there *exists* a $D$, which is clearly different from the distribution-free assumption of PAC learnability. Note also that we are restricting the sample size $m$ to $|X|/2$; by simply picking a bigger $m$ the second statement no longer applies, and the class can be PAC learnable. Thus the point NFL is trying to make is that: > > Without an inductive bias, i.e. if you pick all possible functions $f : X \to \{0, 1\}$ as your hypothesis class, you will not be able to achieve, **for all $D$**, an error less than $1/8$ with probability greater than $6/7$, given your sample size is at most $|X|/2$. > > > To prove this, you only have to pick a distribution for which this holds. 
In the proof of the [book](https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf) they use the uniform distribution, which is the borderline case between the two types of distribution discussed below. So the idea is: let's say you have sampled $m = \frac{|X|}{2}$ points, and your learning algorithm returns a hypothesis as per the ERM rule (it doesn't really matter) on the sampled points. Now you want to comment on the error over all $2m$ points under the true distribution (the uniform distribution in this case). Clearly, the probability of picking a point outside your sampled points (an unseen point) is $0.5$. Also, $A(S) = h\_S$ will have a $0.5$ probability of agreeing with the actual label of an unseen point (among all $h$ which agree with the sampled points, half will assign $1$ to an unseen point while the other half will assign $0$), which makes the total probability of making an error $0.25$ over the true distribution, or $\Bbb E\_{S \sim D^{m}}L\_D(A(S)) = 1/4$. Note that we have picked the uniform distribution, but this will also hold for distributions which assign probability $p \leq 0.5$ to the sampled points: then the probability of picking a point outside your sampled points (an unseen point) is $\geq 0.5$ and thus the error is $\geq 0.25$, so the uniform distribution is the borderline case. Another important point to note is that if we pick $m+1$ points we will definitely do better, but then it's kind of overfitting. This basically translates to why a hypothesis class with infinite VC dimension is not PAC learnable: it shatters every set of size $|X|$, and we have already seen in NFL the implications of picking a hypothesis class which shatters a set of size $|X|$. This is the informal description of how the NFL theorem was arrived at. You can find the entire explanation in this [lecture](https://www.youtube.com/watch?v=taA3r7378gU&list=PLPW2keNyw-usgvmR7FTQ3ZRjfLs5jT4BO&index=8), after which the proof in the book will start to make much more sense. 
Thus, inductive bias (restricting the hypothesis class to some plausibly good candidates for $h$) is quite important, as can be seen from the effects of having no inductive bias at all. Upvotes: 2 <issue_comment>username_3: There is no contradiction between PAC learning and the no-free-lunch theorem as commented in other answers. **But there is indeed a contradiction between the no-free-lunch theorem and its layman's explanation:** * for infinite $\mathcal{X}$, whenever $\mathcal A$ is fixed, there is a distribution on which it fails to learn. **This is not true!** Indeed, if $\mathcal X$ is countable and $\mathcal A$ is an algorithm that simply memorizes what it has seen and answers $0$ for unseen samples, then it can be shown that the true error of $\mathcal A$ converges to $0$, so $\mathcal A$ effectively learns. (See page 95) For uncountable $\mathcal X$, the layman's explanation is correct. In that case, the theorem looks rather obvious, because finite samples provide no information about the behavior of the true mapping $f$, except at a null set. I personally dislike that part of the book (page 61), because immediately before the no-free-lunch theorem, they say literally "*no learner can succeed on all learning task, as formalized in the following theorem:*", but they leave the distribution $\mathcal D$ as dependent on $m$ in the theorem statement, which disrupts all the previous definitions of learnability and makes the introductory phrase (the layman's terms) misleading. Upvotes: 0
2020/02/03
1,015
3,699
<issue_start>username_0: I want to make a model that outputs the centre pixel of objects appearing in an image. My current method involves using a CNN with L2 loss to output an image of equivalent size to the input where each pixel has a value of 1 if it is the center of an object and 0 otherwise. Each input image has roughly ~80 objects. The problem with this is that the CNN learns the easiest way to reduce the error, which is having the entire output be 0, because for 97% of cases that's correct. As such, error decreases but it learns nothing. What is another potential method for training a network to do something similar? I also tried adding dropout, which made the output a lot more noisy and it seemed to learn ok, but eventually ended up in the same state as before with the entire output being 0, never really seeming to learn how to output the locations of objects.<issue_comment>username_1: From what I understand, you are building your own model for this specific use case. From my perspective I would try *not to reinvent the wheel*, as it is said, and use an already proven and working model such as the YOLOs ([v1](https://arxiv.org/abs/1506.02640), [v2](https://arxiv.org/abs/1612.08242) and [v3](https://arxiv.org/abs/1804.02767)). YOLO does not tell you the center pixel of each object directly, but it tells you the center cell, with respect to the predicted object, of a grid (which is built on top of the image) responsible for predicting each object. See in the left image how the grid is built on top of the input image, then YOLO computes a probability map, and then the bounding boxes are predicted from the **cell in the object's center**. I have highlighted which cells would be responsible for predicting each object in the rightmost image. 
(Image from YOLOv1 paper) [![YOLO cell approach](https://i.stack.imgur.com/9DbkU.png)](https://i.stack.imgur.com/9DbkU.png) The grid can have different resolutions, but if you make it equal to the image size, then you would have the center pixel of each object predicted. This is because YOLO predicts objects in each cell of the grid, ***so if the grid is equal to the image size, YOLO will predict objects in each pixel***. As an example, imagine you have an input image of $[H \times W] = [416 \times 416]$; then YOLO would compute a grid of $[S\_1 \times S\_2]=[52 \times 52]$ on top of it and predict objects in the center cells of that $[S\_1 \times S\_2]$ grid. So, if you tune YOLO for computing a grid such that $[S\_1 \times S\_2] = [H \times W]$, then YOLO would output object predictions with respect to the image pixels; in other words, YOLO would predict bounding boxes centered in the image pixel on the object's center. This is how I would proceed for this use case, I hope it helps you or at least gives you some clues about how to proceed further! Cheers! :) --- NOTE: I chose the image size and grid size with numbers I usually see at work. Specifically, using YOLOv3. In YOLOv3, for an input image of $[H \times W] = [416 \times 416]$, 3 grids are built, with different resolutions (for predicting big and small objects), with the following grid sizes: $[13 \times 13], [26 \times 26], [52 \times 52]$ Upvotes: 1 <issue_comment>username_2: You must increase the percentage of "1" labels on the target output, so that "all zeroes" doesn't give such a low loss. If your images are $512 \times 512$, you could label the target image with ones within a radius of 10 pixels of each centre. This way each target would have about 314 "1" labels (0.12%), rather than just one (0.0004%). Maybe even a radius of 20 pixels would be better. Also, binary cross-entropy is a better loss function for this than MSE (the L2 difference).
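The cell arithmetic above is simple to write down. Below is a small, hypothetical helper (not part of any YOLO codebase) mapping an object centre to the grid cell responsible for it:

```python
def responsible_cell(cx, cy, img_w, img_h, grid_s):
    """Which cell of a grid_s x grid_s grid contains the point (cx, cy)
    in an img_w x img_h image - i.e. the cell a YOLO-style detector
    would make responsible for an object centred there."""
    col = int(cx * grid_s / img_w)
    row = int(cy * grid_s / img_h)
    # Clamp points lying exactly on the right/bottom image border.
    return min(row, grid_s - 1), min(col, grid_s - 1)
```

With `grid_s` equal to the image size, the returned cell is just the integer pixel coordinates of the centre, which is the point made above about using a pixel-resolution grid.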
2020/02/03
874
3,116
<issue_start>username_0: Consider a feedforward neural network. Suppose you have a layer of inputs, which is feedforward to a hidden layer, and feedforward both the input and hidden layers to an output layer. Is there a name for this architecture? A layer feeds forward around the layer after it?<issue_comment>username_1: from what I understand you are building you own model for this specific use case. From my perspective I would try *not to reinvent the wheel*, as it is said, and use an already proven and working model such as the YOLOs ([v1](https://arxiv.org/abs/1506.02640), [v2](https://arxiv.org/abs/1612.08242) and [v3](https://arxiv.org/abs/1804.02767)). YOLO does not tell you the center pixel of the image directly but it tells you the center cell, with respect to the predicted object, of a grid (which is built on top of the image) responsible for predicting each object. See on the left image how the grid is built on top of the input image, then YOLO computes a probability map, and then the bounding boxes are predicted from the **cell in the object's center**. I have highlighted which cells would be responsible for predicting each object in the rightmost image. (Image from YOLOv1 paper) [![YOLO cell approach](https://i.stack.imgur.com/9DbkU.png)](https://i.stack.imgur.com/9DbkU.png) The grid can have different resolutions but if you make it equal to the image size, then you would have the center pixel of each object predicted. This is because YOLO predicts objects in each cell of the grid, ***so if the grid is equal to the image size, YOLO will predict objects in each pixel***. As an example, imagine you have an input image of $[H \times W] = [416 \times 416]$ then YOLO would compute a grid of $[S\_1 \times S\_2]=[52 \times 52]$ on top of it. And predict objects in the center cell of the $[S\_1 \times S\_2]$. 
So, if you tune YOLO for computing a grid such as $[S\_1 \times S\_2] = [H \times W]$, then YOLO would output objects prediction with respect to the image pixels, in other words, YOLO would predict bounding boxes centered in the image pixel on the object's center. This is how I would proceed for this use case, I hope it helps you or at least give you some clues about how to proceed further! Cheers! :) --- NOTE: I chose the image size and grid size with numbers I usually see at work. Specifically, using YOLOv3. In YOLOv3, for aninput image of $[H \times W] = [416 \times 416]$, 3 grids are built, with different resolutions (for predicting big and small objects), with the following grids sizes are: $[13 \times 13], [26 \times 26], [52 \times 52]$ Upvotes: 1 <issue_comment>username_2: You must increase the percentage of "1" labels on the target output, so that "all zeroes" doesn't give such a low loss. If your images are $512 \times 512$, could you label the target image as ones within a radius of 10 pixels? This way each target would have about 314 "1" labels (0.12%), rather than just one (0.0004%). Maybe even a radius of 20 pixels would be better. Also binary crossentropy is a better loss function for this than MSE (L2 difference). Upvotes: 0
2020/02/04
881
3,465
<issue_start>username_0: I am an undergraduate student in applied mathematics with an interest in artificial intelligence. I am currently exploring topics where I could do research. Coming from a mathematical background I am interested in the question: Can we mathematically establish that a certain AI system has the ability to learn a task given some examples of how it should be done? I would like to know what research has been done on this topic and also what mathematical tools could be helpful in answering such questions.<issue_comment>username_1: **Computational learning theory** (or just **learning theory**, abbreviated as CLT, COLT, or LT) is devoted to the mathematical and computational analysis of machine learning algorithms, so it is concerned with the learnability (i.e. generalization, bounds, efficiency, etc.) of certain tasks, given a learner (or a learning algorithm), a hypothesis space, data, etc. CLT can be divided into two subfields: * [statistical learning theory](http://www.mit.edu/%7E6.454/www_spring_2001/emin/slt.pdf) (SLT) and * [algorithmic/formal learning theory](https://seop.illc.uva.nl/entries/learning-formal) (ALT), which is the nonstatistical approach to CLT The most famous and studied SLT frameworks might be [PAC learning](https://people.mpi-inf.mpg.de/%7Emehlhorn/SeminarEvolvability/ValiantLearnable.pdf) and the [VC theory](https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_theory) (which extends PAC learning to infinite-dimensional hypothesis spaces). There are many good resources on CLT, some of which can be found in [this answer](https://ai.stackexchange.com/a/20358/2444). Here's a related question on this site: [What sort of mathematical problems are there in AI that people are working on?](https://ai.stackexchange.com/q/12971/2444). 
Upvotes: 3 [selected_answer]<issue_comment>username_2: @username_1 has already provided a great answer, so I'll just supplement his answer with two specific results: Minsky, in his 1969 book [*Perceptrons*](https://en.wikipedia.org/wiki/Perceptrons_(book)) provided a mathematical proof that showed that certain types of neural networks (then called perceptrons) weren't able to compute a function called the XOR function, thus showing that the mind couldn't be implemented on strictly this structure. Minsky further argued that this result would generalize to all neural networks, but he failed to account for an architectural adaptation known as "hidden layers", which would allow for neural networks to compute the XOR function. This result isn't very relevant in modern times, but the immediate impact of his proof led to several decades of people ignoring neural networks due to their perceived failings. Another commonly cited result is the [Universal approximation theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem), which shows that a sufficiently wide single layer neural network would be able to approximate (read as: arbitrarily closely, on a compact set) any continuous function given an appropriate activation function (iirc the activation needed to be non-linear). You can also consider the research of MIRI, which in a sense is more of a "pure" study of AI than the examples listed above. Their [Program Equilibrium via Provability Logic](https://arxiv.org/abs/1401.5577) result was pretty interesting; the gist of that paper is that programs can learn to cooperate in a very simple game if they read each other's source code. Upvotes: 1
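The XOR point is easy to verify by hand-wiring a two-layer threshold network; the weights below are one classical construction (not learned, and just for illustration):

```python
def step(z):
    # Linear threshold unit: the activation a single perceptron uses.
    return 1 if z >= 0 else 0

def xor_mlp(x1, x2):
    """XOR via one hidden layer: no single linear threshold unit can
    compute XOR (Minsky's point), but two hidden units suffice."""
    h1 = step(x1 + x2 - 0.5)    # behaves like OR
    h2 = step(x1 + x2 - 1.5)    # behaves like AND
    return step(h1 - h2 - 0.5)  # OR and not AND = XOR
```

Each individual unit is still a linear threshold; it is the composition through the hidden layer that buys the extra expressive power.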
2020/02/05
837
3,228
<issue_start>username_0: When using convolutional networks on images with multiple channels, do we max pool after we sum the feature map from each channel, or do we max pool each feature map separately and then sum? What's the intuition behind this, or is there a difference between the two?
2020/02/05
593
2,387
<issue_start>username_0: I'm training a deep network in `Keras` on some images for a binary classification (I have around 12K images). Once in a while, I collect some false positives and add them to my training sets and re-train for higher accuracy. I split my training into 20/80 percent for training/validation sets. Now, my question is: which resulting model should I use? Always the one with the higher validation accuracy, or maybe the one with the higher mean of training and validation accuracy? Which of the two would you prefer? ``` Epoch #38: training acc: 0.924, validation acc: 0.944 Epoch #90: training acc: 0.952, validation acc: 0.932 ```<issue_comment>username_1: The training accuracy tells you nothing about how good the model is on data other than what it learned on; it can look better on this data because it memorized these examples. On the other hand, the validation set is there to indicate how well the model generalizes what it learned to new data (hopefully the validation set accurately represents the diversity of the data). As you are looking for a model which is good on unseen data, you don't want to use training accuracy to choose your model, so you should choose the first one. Upvotes: 0 <issue_comment>username_2: Neither of the above-mentioned criteria on its own is a reliable indicator of the performance of a model. A simple way to train the model *just enough* so that it generalizes well on unknown datasets is to monitor the validation loss. Training should be stopped once the validation loss progressively starts increasing over multiple epochs. Beyond this point, the model learns the statistical noise within the data and starts *overfitting*. 
[![enter image description here](https://i.stack.imgur.com/EwzMW.png)](https://i.stack.imgur.com/EwzMW.png) This technique of **early stopping** can be implemented in Keras with the help of a callback function:

```
import tensorflow as tf

# Placeholder thresholds: estimate them from a trial run (see below).
LOSS_THRESHOLD = 0.2
ACCURACY_THRESHOLD = 0.95

class EarlyStop(tf.keras.callbacks.Callback):
    # Called by Keras at the end of every epoch with the logged metrics.
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if (logs.get('val_loss') < LOSS_THRESHOLD
                and logs.get('val_categorical_accuracy') > ACCURACY_THRESHOLD):
            self.model.stop_training = True

callbacks = EarlyStop()
model.fit(..., callbacks=[callbacks])
```

The loss and accuracy thresholds can be estimated after a trial run of the model by monitoring the validation/training error graph. Upvotes: 3 [selected_answer]
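The stop-on-rising-validation-loss rule described in the answer (as opposed to the fixed thresholds above) can also be sketched framework-free; the function and its inputs are illustrative, with a list of losses standing in for a real training loop:

```python
def train_with_early_stopping(val_losses, patience=3):
    """Stop once the validation loss has failed to improve for
    `patience` consecutive epochs; report the best epoch and loss."""
    best_loss, best_epoch, bad_epochs = float("inf"), -1, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch, bad_epochs = loss, epoch, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # validation loss has been rising/flat too long
    return best_epoch, best_loss
```

Keras also ships this behaviour as the built-in `tf.keras.callbacks.EarlyStopping` callback (with `monitor`, `patience` and `restore_best_weights` arguments), which saves writing a custom callback for the common case.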
2020/02/06
2,382
9,073
<issue_start>username_0: Is it possible to estimate the capacity of a neural network model? If so, what are the techniques involved?<issue_comment>username_1: Most methods for measuring the complexity of neural networks are fairly crude. One common measure of complexity is [VC dimension](https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_dimension), a discussion of which can be found [here](https://stats.stackexchange.com/questions/25952/calculating-vc-dimension-of-a-neural-network) and [here](https://cs.stackexchange.com/questions/75327/why-is-deep-learning-hyped-despite-bad-vc-dimension). For example, neural networks have a [VC dimension that is too large](https://arxiv.org/pdf/1703.02930.pdf) to give a strong upper bound on the number of training samples needed for a model (the upper bound provided by VC analysis is much higher than what we have observed neural networks to be able to generalize from). Another common measure of capacity is the number of parameters. We see in the paper "[Understanding deep learning requires rethinking generalization](https://arxiv.org/pdf/1611.03530v2.pdf)", published at ICLR with over 1400 citations, that networks with more parameters than data often have the capacity to memorize the data. The paper provides compelling evidence that traditional approaches to generalization provided by statistical learning theory (VC dimension, Rademacher complexity) are unable to fully explain the apparent capacity of neural networks. In general, neural networks seem to have a large capacity, given their apparently good performance on certain tasks. Beyond these ideas, the universal approximation theorem tells us that the set of neural networks can approximate any continuous function arbitrarily well, which strongly suggests that any neural network has a large capacity. 
Upvotes: 0 <issue_comment>username_2: VC dimension ============ A rigorous measure of the capacity of a neural network is the [**VC dimension**](https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_dimension), which is *intuitively* a number or **bound** that quantifies the difficulty of learning from data. The **sample complexity**, which is the number of training instances that the model (or learner) must be exposed to in order to be reasonably certain of the accuracy of the predictions made given some data, is proportional to this number. The paper [VC Dimension of Neural Networks](http://mathsci.kaist.ac.kr/%7Enipl/mas557/VCD_ANN_3.pdf) (1998) by <NAME> provides a good introduction to the VC dimension of neural networks (even though these concepts are quite abstract and you may need to read them several times to fully grasp them). The information in this answer is largely based on that paper. Shattering and VC dimension --------------------------- In section 2, **Concepts and VC Dimension**, he describes the basic concepts behind the VC dimension (not only for neural networks), such as the concept of **shattering** (i.e. what does it mean for a set of sets to shatter another set?), which is a well-known concept in [computational learning theory](https://ai.stackexchange.com/a/17842/2444) and is used to define the VC dimension (see definition 2), so you definitely need to get familiar with this concept to understand the VC dimension and, therefore, the capacity of a neural network (calculated with the VC dimension). VC dimension of functions and neural networks --------------------------------------------- He then provides an equivalent definition of the VC dimension but for functions (equation 6). Given that neural networks represent functions, we can also define the VC dimension of a neural network. A specific combination of weights of a neural network represents a specific function, for which the VC dimension can be defined. 
To be more precise, a parametrized function (and a neural network) can be denoted as $$ \beta : \mathbb{W} \times \mathbb{U} \rightarrow \mathbb{R} $$ where $\mathbb{W} = \mathbb{R}^p$ and $p$ is the number of weights (or parameters) of the neural network, $\mathbb{U}$ is the input space and $\mathbb{R}$ the output space. So, in this case, $\beta$ can also represent a neural network, with a certain parameter space $\mathbb{W}$, an input space $\mathbb{U}$ and an output space $\mathbb{R}$. The vector $\mathbf{w} = (w\_1, \dots, w\_p) \in \mathbb{W}$ represents a specific combination of weights of the neural network, so it represents a specific function. The set of all functions for each choice of this weight vector can be denoted as $$ \mathcal{F}\_{\beta} = \{ \beta(\mathbf{w}, \cdot) \mid \mathbf{w} \in \mathbb{W} \} $$ The VC dimension (VCD) of $\beta$ can then be defined as $$ \text{VCD}(\beta) := \text{VCD}(\mathcal{F}\_{\beta}) $$ Therefore, the VC dimension is a measure of the capacity of a neural network with a **certain** architecture. Moreover, the VC dimension is equivalently defined for a certain set of functions associated with a neural network. How to calculate the VC dimension? ---------------------------------- To calculate the actual VC dimension of a neural network, it takes a little bit more creativity. Therefore, I will just report the VC dimension of some neural networks. For more details, you should fully read the cited paper (more than once) and other papers and books too (especially the ones described in [this answer](https://ai.stackexchange.com/a/17842/2444), which provides an introduction to CLT concepts). ### VC dimension of a perceptron The VC dimension of a perceptron is $m + 1$, where $m$ is the number of inputs. Given that a perceptron represents an affine function, the VC dimension of the perceptron is also equal to the number of parameters. 
However, note that, even though the VC dimension of the perceptron is linear in the number of parameters and inputs, it doesn't mean the perceptron can learn any function. In fact, perceptrons can only represent linear functions. See section 3.1 of [VC Dimension of Neural Networks](http://mathsci.kaist.ac.kr/%7Enipl/mas557/VCD_ANN_3.pdf) for more details. ### VC dimension of a single hidden layer neural network Let $n$ be the number of hidden units, then the VC dimension of a single hidden layer neural network is less than or equal to $n+1$. See section 3.2 of [VC Dimension of Neural Networks](http://mathsci.kaist.ac.kr/%7Enipl/mas557/VCD_ANN_3.pdf) for more details. ### VC dimension of multi-layer neural networks with binary activations The VC dimension of multi-layer neural networks (MLPs) with binary activations and $p$ weights (or parameters) is $\mathcal{O}(p \log p)$. See theorem 4 (and related sections) of the paper [VC Dimension of Neural Networks](http://mathsci.kaist.ac.kr/%7Enipl/mas557/VCD_ANN_3.pdf) for more details. ### VC dimension of MLPs with real-valued activations The VC dimension of MLPs with real-valued activations is no longer bounded by $\mathcal{O}(p \log p)$ and can be exponential in the number of parameters. See section 5.3 of [VC Dimension of Neural Networks](http://mathsci.kaist.ac.kr/%7Enipl/mas557/VCD_ANN_3.pdf). ### VC dimension of MLPs with linear activations The VC dimension of MLPs with linear activations is $\mathcal{O}(p^2)$. See theorem 5 of the paper [VC Dimension of Neural Networks](http://mathsci.kaist.ac.kr/%7Enipl/mas557/VCD_ANN_3.pdf). Notes ----- The VC dimension is often expressed as a bound (e.g. with big-O notation), which may **not** be strict. In any case, the VC dimension is useful because it provides some guarantees. 
For example, if you use the VC dimension to describe an upper bound on the number of samples required to learn a certain task, then you have a precise mathematical formula that guarantees that you will not need more samples than those expressed by the bound in order to achieve a small generalization error, but, in practice, you may need fewer samples than those expressed by the bound (because these bounds may not be strict or the VC dimension may also not be strict). Further reading --------------- There is a more recent paper (published in 2017 in MLR) that proves new and tighter upper and lower bounds on the **VC dimension of deep neural networks with the ReLU activation function**: [Nearly-tight VC-dimension bounds for piecewise linear neural networks](http://proceedings.mlr.press/v65/harvey17a/harvey17a.pdf). So, you probably should read this paper first. The paper [On Characterizing the Capacity of Neural Networks using Algebraic Topology](https://arxiv.org/pdf/1802.04443.pdf) may also be useful and interesting. See also section 6, Algebraic Techniques, of the paper I have been citing: [VC Dimension of Neural Networks](http://mathsci.kaist.ac.kr/%7Enipl/mas557/VCD_ANN_3.pdf). The capacity of a neural network is clearly related to the number of functions it can represent, so it is strictly related to the universal approximation theorems for neural networks. See [Where can I find the proof of the universal approximation theorem?](https://ai.stackexchange.com/a/13319/2444). Upvotes: 3
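The shattering definition behind all of these bounds can be made concrete in code. The sketch below brute-forces a finite, hypothetical family of 2-D linear-threshold classifiers (a coarse grid over weights, not an exhaustive search of the continuous family): it shatters 3 points in general position, consistent with the perceptron's VC dimension of $m + 1 = 3$ for $m = 2$ inputs, but cannot realize the XOR labelling of 4 points:

```python
from itertools import product

def shatters(points, classifiers):
    """True if `classifiers` (functions point -> 0/1) realize every
    possible labelling of `points` - the definition of shattering."""
    achieved = {tuple(c(p) for p in points) for c in classifiers}
    return len(achieved) == 2 ** len(points)

# A finite sample of 2-D linear thresholds: 1 if w1*x + w2*y + b > 0.
grid = [-2, -1, -0.5, 0.5, 1, 2]
linear = [
    (lambda p, w1=w1, w2=w2, b=b: 1 if w1 * p[0] + w2 * p[1] + b > 0 else 0)
    for w1, w2, b in product(grid, grid, grid)
]
```

Finding a shattered set only requires exhibiting suitable classifiers, so the coarse grid suffices for the 3-point case; the 4-point XOR labelling is unachievable by *any* linear threshold, so the finite family necessarily fails there too.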
2020/02/06
713
2,726
<issue_start>username_0: I have a question regarding feature representation for graph convolutional neural networks. In my case, all nodes have a different number of features, and for now, I don't really understand how I should work with these constraints. I cannot just reduce the number of features or add meaningless features in order to make the number of features on each node the same - because it would add too much extra noise to the network. Are there any ways to solve this problem? How should I construct the feature matrix? I'll appreciate any help, and any links to papers that solve this problem.<issue_comment>username_1: The simplest way I could come up with is to pad with 0 each feature which is not present. You said that you're going to add too much noise to the network, but I don't see the problem (please correct me if I'm wrong). For example, we have two nodes: the first one has only 2 features with the 3rd one missing, and the second node has all features, X=[[1,2,0], [3,4,5]]. Now we can project the nodes to a hidden representation (pretty common). I'm going to use a weight matrix of W=[[1], [1], [1]]. The output of XW will be [[3], [12]]. Now let's add a new feature to the second node, X=[[1,2,0,0], [3,4,5,6]], and apply the same transformation W=[[1], [1], [1], [1]]; the output will be [[3], [18]]. You can see that the first node is not affected by the number of missing features. Another way you could achieve this, if you don't want to use the projection, is using a mask. Given the same example above, we could create a mask M=[[1,1,0], [1,1,1]] where each entry indicates whether a specific feature is present in a specific node. Now, usually a GCN layer is defined as H=f(AHW), where A is the adjacency matrix. We could change the propagation rule to H=f(A(H\*M)W), where \* is the pointwise multiplication. This way, if a node is missing a feature, it cannot "access" that feature's information from other nodes that have it.
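The arithmetic in this answer can be checked directly. A minimal numpy sketch (the ReLU nonlinearity and the all-ones toy adjacency are my own illustrative choices):

```python
import numpy as np

# Two nodes; node 0 is missing its 3rd feature, so it is padded with 0.
X = np.array([[1., 2., 0.],
              [3., 4., 5.]])
W = np.ones((3, 1))  # projection to a 1-dimensional hidden representation
out = X @ W
print(out.ravel())  # node 0 -> 3, node 1 -> 12

# Add a 4th feature that only node 1 has: node 0's projection is unchanged.
X2 = np.array([[1., 2., 0., 0.],
               [3., 4., 5., 6.]])
W2 = np.ones((4, 1))
out2 = X2 @ W2
print(out2.ravel())  # node 0 -> 3 (unaffected), node 1 -> 18

# Mask variant, H = f(A (H*M) W): M marks which features a node really has.
A = np.array([[1., 1.],
              [1., 1.]])            # toy adjacency with self-loops
M = np.array([[1., 1., 0.],
              [1., 1., 1.]])        # 1 where the feature is present
H = np.maximum(A @ (X * M) @ W, 0.)  # f = ReLU
print(H.ravel())
```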
Upvotes: 2 <issue_comment>username_2: My immediate suggestion would be to zero-fill the missing values, but I recalled the below comment suggesting a more sophisticated method: > > **Karim:** *How to deal with different size of feature vectors?* > > > **Nabila:** *That's a problem I'm actually working on. I've seen that you can create separate networks for each type of node feature, and sort of project them - so train them separately, and project them to the same size.* > > > *Or you can do concatenation so you don't have to worry about that, but at some point they all need to be the same size to do classification at the end.* > > > * [A Literature Review on Graph Neural Networks](https://youtu.be/54ohmr2LYuI?t=2346) Upvotes: 0
2020/02/07
536
2,211
<issue_start>username_0: Why do we split the data into two parts, and then split those segments into training and testing data? Why do we have two sets of data for each training and test data?<issue_comment>username_1: Usually we are splitting the data into 3 chunks, for example 70% for training, 10% for validation and 20% for testing. The first two chunks are going to be used for training. The reason you need the validation dataset is to tune your hyper-parameters and see how well your model can generalize. Once you have a model that achieves a fairly good performance on the validation dataset, you measure its accuracy on the test dataset. Upvotes: 0 <issue_comment>username_2: For any Machine Learning model, the available data is usually split into three sets: **Training Set:** The part of the data used to train the model and learn the parameters of the network. The data that remains after allocation of the Training Dataset is split into the Validation and Test sets. **Validation Set:** This sample of data is used to provide an unbiased evaluation of a model fit on the training dataset. This helps in tuning model hyperparameters to improve the model performance. E.g.: changing the number of clusters ($k$) in a K-Means algorithm, or the pooling layers in a CNN. **Test Set:** After training, this part of the dataset is used to test how well the model generalizes to unseen data and to estimate its performance. Another possibility (going by your question) is the use of **Cross-Validation**. This is performed when the dataset is too small. In such a case, a random split is performed on the dataset, resulting in k non-overlapping subsets. The test error is then estimated by taking the average test error across k trials.
[![kFold](https://i.stack.imgur.com/IqYBM.png)](https://i.stack.imgur.com/IqYBM.png) [[Image Source]](https://en.wikipedia.org/wiki/Cross-validation_(statistics)) Upvotes: 1 <issue_comment>username_3: Are you talking about (X\_train,y\_train) and (X\_test,y\_test)? If yes, then X represents the data (features) and y represents the labels of that data. That's why you get a pair when you divide it into training and test data. Upvotes: 2 [selected_answer]
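A minimal sketch of the split described above, on made-up data, including the (X, y) pairing mentioned in the selected answer:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))   # hypothetical features
y = rng.integers(0, 2, size=100)    # hypothetical binary labels

# Shuffle once, then carve out 70% train / 10% validation / 20% test.
idx = rng.permutation(len(X))
n_train, n_val = int(0.7 * len(X)), int(0.1 * len(X))
train, val, test = np.split(idx, [n_train, n_train + n_val])

# Features and labels are split with the SAME indices, so pairs stay aligned.
X_train, y_train = X[train], y[train]
X_val, y_val = X[val], y[val]
X_test, y_test = X[test], y[test]
print(len(train), len(val), len(test))  # 70 10 20
```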
2020/02/07
583
2,434
<issue_start>username_0: There is a popular strategy of using a neural network trained on one task to produce features for another related task by "chopping off" the top of the network and sewing the bottom onto some other modeling pipeline. Word2Vec models employ this strategy, for example. Is there an industry-popular term for this strategy? Are there any good resources that discuss its use in general terms?
2020/02/08
646
2,209
<issue_start>username_0: This question can seem a little bit too broad, but I am wondering what are the current state-of-the-art works on meta reinforcement learning. Can you provide me with the current state-of-the-art in this field?<issue_comment>username_1: One of the most recent papers on meta-RL is [meta-Q-learning](https://arxiv.org/pdf/1910.00125.pdf). This paper introduces Meta-Q-Learning (MQL), a new off-policy algorithm for meta-reinforcement learning (meta-RL). MQL builds upon three simple ideas. * Q-learning is competitive with state-of-the-art meta-RL algorithms if given access to a context variable that is a representation of the past trajectory. * Using a multi-task objective to maximize the average reward across the training tasks is an effective method to meta-train RL policies. * past data from the meta-training replay buffer can be recycled to adapt the policy on a new task using off-policy updates Experiments on standard continuous-control benchmarks suggest that MQL compares favorably with state-of-the-art meta-RL algorithms. I think that other references to other work on meta-RL are present in the experiments part of the MQL paper. Upvotes: 3 [selected_answer]<issue_comment>username_2: Meta-Reinforcement Learning can refer to a broad range of ideas. Also, different algorithms are SOTA under different evaluation metrics (sample efficiency, agent performance, adaptation speed on a new task, etc) Assuming that you are referring to the problem of quickly learning/adapting to a new task by training an agent on a distribution of related tasks, the following are some popular algorithms * PEARL [Rakelly et al., 2019] * VariBAD [Zintgraf et al., 2020] * Meta-Q-Learning [Fakoor et al., 2020] **References:** 1. <NAME>, <NAME>, <NAME>, <NAME>, <NAME> - [Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables](https://arxiv.org/abs/1903.08254), ICML 2019. 2. 
<NAME> et al., - [VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning](https://arxiv.org/abs/1910.08348), ICLR 2020. 3. <NAME>, <NAME>, <NAME>, <NAME> - [Meta-Q-Learning](https://arxiv.org/abs/1910.00125), ICLR 2020. Upvotes: 1
2020/02/10
1,282
3,508
<issue_start>username_0: Given a pre-trained CNN model, I extract feature vector of *3450* reference images `FV_R` as follows: ``` FV_R = [ [-8.2, -52.2, 9.07, -1.1, -0.08, -9.1, ........, -4.11], [7.8, -3.8, 6.4, -4.27, -2.2, -5.0, ............., 3.6], [-1.2, -0.8, 49.3, 1.73, -1.74, -7.1, ..........., 2.41], [-1.2, -.8, 49.3, 0.6, -1.24, -1.04, .........., -2.06], . . . [-1.2, -.8, 49.3, 12.77. -2.2, -5.0, .........., -51.1] ] ``` and `FV_Q` for *1200* query images : ``` FV_Q = [ [-0.13, 2.6, -3.7, -0.5, -1.02, -0.6, ........, -0.11], [0.3, -3.8, 6.4, -1.6, -2.2, -5.0, ............., 0.97], [-6.4, -0.08, 8.0, 7.3, -8.07, -5.6, ..........., 0.01], [-6.09, -.8, 0.5, -8.9, -0.74, -0.08, .........., -8.9], . . . [-1.2, -.8, 49.3, 12.77. -2.2, -5.0, .........., -51.1] ] ``` The size info: ``` >>> FV_R.shape (3450, 64896) ``` Query images: ``` >>> FV_Q.shape (1200, 64896) ``` I would like to *binarize* the CNN feature vectors (descriptors) and calculate Hamming Distance. I am already aware of [this](https://stackoverflow.com/questions/32730202/fast-hamming-distance-computation-between-binary-numpy-arrays) answer to probably use `np.count_nonzero(a!=b)`(`if a.shape == b.shape`) but does anyone know a method to binarize a feature vector with different size? Cheers,
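A hedged numpy sketch of one way to do this (sign thresholding at zero is an illustrative choice, not the only binarization scheme; the shapes are shrunk stand-ins): the two matrices may have different numbers of rows, since only the feature dimension has to match.

```python
import numpy as np

rng = np.random.default_rng(0)
FV_R = rng.standard_normal((5, 8))   # stand-in for the (3450, 64896) reference features
FV_Q = rng.standard_normal((3, 8))   # stand-in for the (1200, 64896) query features

# One common (if crude) binarization: threshold each dimension at zero.
B_R = (FV_R > 0).astype(np.uint8)
B_Q = (FV_Q > 0).astype(np.uint8)

# All pairwise Hamming distances via broadcasting: shape (n_queries, n_references).
D = np.count_nonzero(B_Q[:, None, :] != B_R[None, :, :], axis=-1)
print(D.shape)  # (3, 5)

# Note: at the real sizes the broadcast intermediate would be enormous, so you
# would loop over query chunks, or pack bits (np.packbits) and XOR + popcount.
```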
2020/02/11
925
3,482
<issue_start>username_0: There is [this video on pythonprogramming.net](https://pythonprogramming.net/introduction-deep-learning-python-tensorflow-keras/) that trains a network on the MNIST handwriting dataset. At [~9:15](https://youtu.be/wQ8BIBpya2k?t=539), the author explains that the data should be normalized. The normalization is done with ``` x_train = tf.keras.utils.normalize(x_train, axis=1) x_test = tf.keras.utils.normalize(x_test, axis=1) ``` The explanation is that values in a range of 0 ... 1 make it easier for a network to learn. That might make sense, if we consider sigmoid functions, which would otherwise map almost all values to 1. I could also understand that we want black to be pure black, so we want to adjust any offset in black values. Also, we want white to be pure white and potentially stretch the data to reach the upper limit. However, I think the kind of normalization applied in this case is incorrect. The image before was: [![Original image](https://i.stack.imgur.com/pjAzC.png)](https://i.stack.imgur.com/pjAzC.png) After the normalization it is [![Normalized image](https://i.stack.imgur.com/0Yv01.png)](https://i.stack.imgur.com/0Yv01.png) As we can see, some pixels which were black before have become grey now. Columns with few black pixels before result in black pixels. Columns with many black pixels before result in lighter grey pixels. This can be confirmed by applying the normalization on a different axis: [![Normalization applied to a different axis](https://i.stack.imgur.com/iMtFo.png)](https://i.stack.imgur.com/iMtFo.png) Now, rows with few black pixels before result in black pixels. Rows with many black pixels result in lighter grey pixels. **Is normalization used the right way in this tutorial? If so, why? If not, would my normalization be correct?** What I expected was a per pixel mapping from e.g. [3 ... 253] (RGB values) to [0.0 ... 1.0]. 
In Python code, I think this should do: ``` import numpy as np import imageio image = imageio.imread("sample.png") image = (image - np.min(image))/np.ptp(image) ```<issue_comment>username_1: You could rank every pixel in terms of brightness before normalization and after to verify that the ranks are preserved. Correct normalization would preserve the ranks. From pictures 1 and 2, it seems that the ranks were not preserved (e.g. the top right pixels went from grey to black). The tutorial's normalization is done incorrectly: `x_train` is an array of 2D images, but the normalization was applied along the first axis. What happened is that each column of every image was normalized relative to itself (compare images 1 and 2 column-wise – the ranks were preserved). The normalization should have been applied along the second axis – that would normalize every pixel relative to the image it's in. It seems that the incorrect normalization wasn't enough to sabotage the learning, though! Your proposed normalization should work fine. Upvotes: 0 <issue_comment>username_2: The [normalize()](https://keras.io/api/utils/python_utils/#normalize-function) function used in the tutorial is the wrong function to use for normalization in this context. This normalize function is for computing L1 and L2 norms. The correct layer/function to use for scaling between 0 and 1 would be [Rescaling()](https://keras.io/api/layers/preprocessing_layers/image_preprocessing/rescaling/): ``` x_train_norm = Rescaling(scale=1./255, offset=0.0)(x_train) ``` Upvotes: 1
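The rank-preservation check suggested in the first answer can be run directly on a toy image. A minimal sketch (the pixel values below are made up):

```python
import numpy as np

# A toy 2x3 "image" with pixel values in [0, 255].
image = np.array([[3., 128., 253.],
                  [10., 0., 255.]])

# Per-image min-max scaling to [0.0, 1.0], as proposed in the question.
scaled = (image - np.min(image)) / np.ptp(image)
print(scaled.min(), scaled.max())  # 0.0 1.0

# A monotone per-pixel mapping preserves the brightness ranking of the pixels,
# which the tutorial's column-wise normalize() call does not.
assert (np.argsort(image.ravel()) == np.argsort(scaled.ravel())).all()

# Fixed-range alternative (what Rescaling(scale=1./255) does): divide by 255.
fixed = image / 255.0
```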
2020/02/11
500
2,061
<issue_start>username_0: I am reading through Artificial Intelligence: Modern Approach and it states that the space complexity of the GBFS (tree version) is $\mathcal{O}(b^m)$. While I am reading, at some points, I found GBFS similar to DFS. It expands the whole branches and goes after one according to the heuristic function. It doesn't expand the rest like BFS. Perceiving this as similar to what depth-first search does, I understand that the worst time complexity is $\mathcal{O}(b^m)$. But I don't understand the space complexity. Shouldn't it be the same as DFS, $\mathcal{O}(bm)$, since it only will be expanding $b\*m$ nodes during the search in one path?
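The intuition can be probed mechanically. A minimal greedy best-first sketch (the grid, heuristic, and helper names are my own illustrative choices; this is the graph-search variant, and the tree version keeps even more nodes): GBFS pops only the best node by $h$ but keeps every other generated node on the frontier, which is what drives the $\mathcal{O}(b^m)$ worst-case memory, whereas DFS stores only the current path plus its siblings.

```python
import heapq

def greedy_best_first(start, goal, successors, h):
    """Greedy best-first search; returns the goal (if found) and the peak
    frontier size. Every generated node stays on the frontier until popped."""
    frontier = [(h(start), start)]
    seen = {start}
    peak = 1
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            return node, peak
        for child in successors(node):
            if child not in seen:
                seen.add(child)
                heapq.heappush(frontier, (h(child), child))
        peak = max(peak, len(frontier))
    return None, peak

# Toy problem: a 5x5 grid with a Manhattan-distance heuristic.
def successors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

goal = (4, 4)
h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
found, peak = greedy_best_first((0, 0), goal, successors, h)
print(found, peak)
```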
2020/02/12
537
2,102
<issue_start>username_0: I trained a ResNet20 on Cifar10 and obtained the following learning curves. [![enter image description here](https://i.stack.imgur.com/O7JVh.png)](https://i.stack.imgur.com/O7JVh.png) From the figures, I see at epoch 52, my validation loss is 0.323 (the lowest), and my validation accuracy is 89.7%. On the other hand, at the end of the training (epoch 120), my validation loss is 0.413 and my validation accuracy is 91.3% (the highest). Say I'd like to deploy this model on some real-world application. Should I prefer the snapshotted model at epoch 52, the one with lowest validation loss, or the model obtained at the end of training, the one with highest validation accuracy?
2020/02/13
820
3,306
<issue_start>username_0: In section 3 of the paper [The Limits of Correctness](https://www.student.cs.uwaterloo.ca/~cs492/11public_html/p18-smith.pdf) (1985) [<NAME>](https://en.wikipedia.org/wiki/Brian_Cantwell_Smith) writes > > When you design and build a computer system, you first formulate a model of the problem you want it to solve, and then construct the computer program in its terms. > > > He then writes > > computers have a special dependence on these models: you write an explicit description of the model down inside the computer, in the form of a set > of rules or what are called representations - essentially linguistic formulae encoding, in the terms of the model, the facts and data > thought to be relevant to the system's behavior. It is with respect to these representations that computer systems work. In fact, that's really what computers are (and how they differ from other machines): they run by manipulating representations, and representations are always formulated > in terms of models. This can all be summarized in a slogan: no computation without representation. > > > And then he says > > Models have to ignore things exactly because they view the world at a level of abstraction > > > He then writes in section 7 > > The systems that land airplanes are hybrids - combinations of computers and people - exactly because the unforeseeable happens, and because what > happens is in part the result of human action, requiring human interpretation > > > As quoted above, computers depend on models, which are abstractions (i.e. they ignore a lot of details), which are written inside the computer. Therefore, the true world cannot really be encoded into an algorithm, but only an abstraction and thus simplification of the world can. So, will AI always depend on models and thus approximations? 
Can it get rid of or overcome this limitation?
2020/02/14
743
2,664
<issue_start>username_0: I started learning about Q tables from this blog post [Introduction to reinforcement learning and OpenAI Gym, by <NAME>](https://www.oreilly.com/radar/introduction-to-reinforcement-learning-and-openai-gym/), which has a line as below - > > After so many episodes, the algorithm will converge and determine the optimal action for every state using the Q table, ensuring the highest possible reward. We now consider the environment problem solved. > > > The Q table was updated by the Q-learning formula `Q[state,action] += alpha * (reward + np.max(Q[state2]) - Q[state,action])` I ran 100000 episodes, of which I got the following - ``` Episode 99250 Total Reward: 9 Episode 99300 Total Reward: 7 Episode 99350 Total Reward: 6 Episode 99400 Total Reward: 14 Episode 99450 Total Reward: 10 Episode 99500 Total Reward: 10 Episode 99550 Total Reward: 9 Episode 99600 Total Reward: 14 Episode 99650 Total Reward: 5 Episode 99700 Total Reward: 7 Episode 99750 Total Reward: 3 Episode 99800 Total Reward: 5 ``` I don't know what the highest reward is. It does not look like it has converged. Yet, the following graph [![enter image description here](https://i.stack.imgur.com/NBS9v.png)](https://i.stack.imgur.com/NBS9v.png) shows a trend in convergence but it was plotted for a larger scale. What should be the sequence of actions to be taken when the game is reset() but the "learned" Q table is available? How do we know that and the reward in that case?<issue_comment>username_1: Your Q-learning update expression looks correct. The `Total Reward` will not be the same at the end of each episode because the starting position of the taxi is different in each episode, so the number of steps necessary to reach the final destination will be different in each episode. The graph that you posted shows that the algorithm converged after a short amount of episodes, so 100000 episodes might be too much.
Since the environment is simple, try manually calculating the optimal policy for some specific starting position and then see if the algorithm does the same sequence of actions. Upvotes: 3 [selected_answer]<issue_comment>username_2: Great answer by @username_1. I want to point out another thing. The plateau in a graph ---like the one you shared--- indicates convergence, and the converged value may not always be *zero*. In general, you need to verify the behavior learned by your algorithm. For instance, if you are using Gym's LunarLander environment, you should also double-check what it is that your algorithm has learned *after* converging. Put another way, your algorithm might converge, but not to the desired behavior. Upvotes: 1
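To make "what do I do after reset()" concrete, here is a hedged, self-contained tabular Q-learning sketch on a made-up 4-state corridor (a stand-in for Taxi-v3, which is not reproduced here). Note that it adds a discount factor gamma, which the blog's one-line update omits; after training, the learned policy is simply the greedy argmax over the Q table.

```python
import numpy as np

# Tiny deterministic corridor MDP: states 0..3, actions 0 = left, 1 = right;
# reaching state 3 yields reward 1 and ends the episode.
n_states, n_actions = 4, 2

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward, nxt == n_states - 1

Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9
rng = np.random.default_rng(0)

# Q-learning is off-policy: here the behavior policy is uniformly random,
# yet Q still converges to the optimal action values.
for _ in range(500):
    state, done = 0, False
    while not done:
        action = int(rng.integers(n_actions))
        nxt, reward, done = step(state, action)
        Q[state, action] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[state, action])
        state = nxt

# After reset(), act greedily with respect to the learned Q table.
state, done, actions = 0, False, []
while not done and len(actions) < 20:
    action = int(np.argmax(Q[state]))
    actions.append(action)
    state, _, done = step(state, action)
print(actions)  # the greedy action sequence: always move right
```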
2020/02/16
408
1,728
<issue_start>username_0: Is there an ideal ratio in reinforcement learning between the positive and negative rewards? Suppose I have the scenario of moving a robot across the river. There are two options, walk across the bridge or walk across the river. If it walks across the river then the robot breaks so the idea is to reinforce the robot to walk across the bridge. What would be the best rewards values? Does this ratio vary between cases? ``` option1: Bridge: +10 River: -10 Option2: Bridge: +10 River: -1 Option3: Bridge: +1 River: -10 ```
2020/02/18
910
3,342
<issue_start>username_0: It kind of makes sense intuitively but I'm not sure about a formal proof. I'll start with briefly listing definitions from **Intro to Multiagent systems, Wooldridge, 2002** and then give you my reasoning attempts thus far. $E$ is a finite set of discrete, instantaneous states, $E=(e, e',...)$. $Ac$ is a repertoire of possible actions (also finite) available to an agent, which transform the environment, $Ac=(\alpha, \alpha', ...)$. A run is a sequence of interleaved environment states and actions, $r=(e\_0, \alpha\_0, e\_1, \alpha\_1,..., \alpha\_{u-1}, e\_u)$, the set of all such possible finite sequences (over $E$ and $Ac$) is $R$, $R^E$ is a subset of $R$ containing the runs that end with an env. state. A purely reactive agent is modeled as $Ag\_{pure}: E\mapsto Ac$, a standard agent is modeled as $Ag\_{std}: R^E\mapsto Ac$. So, if $R^E$ is a sequence of agent's actions and environment states, then it just makes sense that $E\subset R^E$. Hence, $Ag\_{std}$ can map to every action to which $Ag\_{pure}$ can. And behavioral equivalence with respect to environment $Env$ is defined as $R(Env, Ag\_{1}) = R(Env, Ag\_{2})$; where $Env=\langle E,e\_{0},t \rangle$, $e\_{0}$ - initial environment state, $t$ - transformation function (definition irrelevant for now). Finally, if $Ag\_{pure}: E\mapsto Ac$ **and** $Ag\_{std}: R^E\mapsto Ac$, **and** $E\subset R^E$, we can say that $R(Env,Ag\_{pure}) = R(Env, Ag\_{std})$ (might be too bold of an assumption). Hence, every purely reactive agent has a behaviorally equivalent standard agent. The opposite might not be true, since $E\subset R^E$ means that all elements of $E$ belong to $R^E$, while not all elements of $R^E$ belong to $E$. It's a textbook problem, but I couldn't find an answer key to check my solution. If anyone has formally (and perhaps mathematically) proven this before, can you post your feedback, thoughts, proofs in the comments?
For instance, set of mathematical steps to infer $E\subset R^E$ from their definitions: $E=(e\_{0}, e\_{1},..., e\_{u})$ and $R^E$ is "*all agent runs that end with an environment state*" (no formal equation found) is not clear to me.
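One standard way to make the forward direction precise (a sketch of my own, not from an answer key): rather than arguing $E \subset R^E$, fix a given $Ag\_{pure}$ and define the standard agent that looks only at the last state of a run,

```latex
Ag_{std}(r) \;=\; Ag_{pure}(e_u),
\qquad \text{for any } r = (e_0, \alpha_0, \ldots, \alpha_{u-1}, e_u) \in R^E .
```

Then, by induction on the length of a run: after any common prefix, both agents face the same last state $e\_u$ and therefore choose the same action, so in any $Env$ they generate exactly the same runs, giving $R(Env, Ag\_{pure}) = R(Env, Ag\_{std})$. The converse fails because a standard agent may choose different actions after two runs that end in the same state, which no purely reactive agent can imitate.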
2020/02/19
573
2,285
<issue_start>username_0: Suppose I want to classify a dataset like the MNIST handwritten dataset, but it has added distractions. For example, here we have a 6 but with extra strokes around it that don't add value. [![enter image description here](https://i.stack.imgur.com/Z35w2.png)](https://i.stack.imgur.com/Z35w2.png) I suppose a good model would predict a 6, but maybe with less than 100% certainty (or maybe with 100% certainty - I don't know that it matters for the purpose of this question). Is there any way to get information about which pixels most strongly influenced the decision of the CNN, and which pixels were not so important? So to represent that visually, green means that those pixels were important: [![enter image description here](https://i.stack.imgur.com/T7XhO.png)](https://i.stack.imgur.com/T7XhO.png) Or conversely, is it possible to highlight pixels which did not contribute to the outcome (or which cast doubt on the outcome thereby reducing the certainty from 100%) [![enter image description here](https://i.stack.imgur.com/3YpG7.png)](https://i.stack.imgur.com/3YpG7.png)
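One standard, model-agnostic way to estimate per-pixel influence of the kind asked about is occlusion sensitivity: mask a small patch around each pixel and record how much the class score drops. The sketch below is illustrative only; `score_fn` is a hypothetical stand-in for a real network's class score, not an actual CNN.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4, fill=0.0):
    """Per-pixel importance: score drop when a patch around the pixel is masked out."""
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros_like(image, dtype=float)
    half = patch // 2
    for i in range(h):
        for j in range(w):
            masked = image.copy()
            masked[max(0, i - half):i + half + 1, max(0, j - half):j + half + 1] = fill
            heat[i, j] = base - score_fn(masked)  # large drop -> important pixel
    return heat

# Stand-in "classifier": the score is the mean intensity of the centre region,
# so only centre pixels should register as important.
def score_fn(img):
    return img[8:20, 8:20].mean()

img = np.ones((28, 28))
heat = occlusion_map(img, score_fn)
print(heat[14, 14] > heat[0, 0])  # centre pixel matters, corner pixel does not
```

With a trained network plugged in as `score_fn`, the same loop highlights which strokes drive the prediction and which distractor strokes are ignored.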
2020/02/20
685
3,292
<issue_start>username_0: I am currently studying the textbook *Deep Learning* by Goodfellow, Bengio, and Courville. Chapter **5.1 Learning Algorithms** says the following: > > **Classification with missing inputs**: Classification becomes more challenging if the computer program is not guaranteed that every measurement in its input vector will always be provided. To solve the classification task, the learning algorithm only has to define a single function mapping from a vector input to a categorical output. When some of the inputs may be missing, rather than providing a single classification function, the learning algorithm must learn a set of functions. Each function corresponds to classifying $\mathbf{x}$ with a different subset of its inputs missing. This kind of situation arises frequently in medical diagnosis, because many kinds of medical tests are expensive or invasive. One way to efficiently define such a large set of functions is to learn a probability distribution over all the relevant variables, then solve the classification task by marginalizing out the missing variables. With $n$ input variables, we can now obtain all $2^n$ different classification functions needed for each possible set of missing inputs, but the computer program needs to learn only a single function describing the joint probability distribution. See Goodfellow et al. (2013b) for an example of a deep probabilistic model applied to such a task in this way. Many of the other tasks described in this section can also be generalized to work with missing inputs; classification with missing inputs is just one example of what machine learning can do. > > > I was wondering if people would please help me better understand this explanation. Why is it that, when some of the inputs are missing, rather than providing a single classification function, the learning algorithm must learn a set of functions? And what is meant by "each function corresponds to classifying $\mathbf{x}$ with a different subset of its inputs missing."? I would greatly appreciate it if people would please take the time to clarify this.
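The marginalisation idea quoted from the textbook can be made concrete with a toy joint distribution (the numbers below are made up for illustration): a binary label $y$ and two binary inputs $x_1, x_2$. One joint table $p(y, x_1, x_2)$ yields both the full classifier and the "$x_2$ missing" classifier, with no second function learned.

```python
import numpy as np

# Toy joint distribution p(y, x1, x2); axis order (y, x1, x2). Entries sum to 1.
p = np.array([[[0.20, 0.05],    # y=0, x1=0, x2 in {0,1}
               [0.05, 0.10]],   # y=0, x1=1
              [[0.05, 0.10],    # y=1, x1=0
               [0.10, 0.35]]])  # y=1, x1=1

def classify_full(x1, x2):
    """p(y | x1, x2): Bayes rule applied to the joint."""
    joint = p[:, x1, x2]
    return joint / joint.sum()

def classify_missing_x2(x1):
    """x2 unobserved: marginalise it out of the SAME joint, no extra model needed."""
    joint = p[:, x1, :].sum(axis=1)  # sum over the missing variable x2
    return joint / joint.sum()

print(classify_full(1, 1))     # posterior over y given both inputs
print(classify_missing_x2(1))  # posterior over y given x1 only
```

With $n=2$ inputs there are $2^2 = 4$ possible missingness patterns, hence four "functions" in the textbook's sense, but all four fall out of the single joint by summing over whichever variables are absent.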
2020/02/21
1,021
3,918
<issue_start>username_0: Does a fully convolutional network share the same translation invariance properties we get from networks that use max-pooling? If not, why do they perform as well as networks which use max-pooling?<issue_comment>username_1: All convolutional networks (with or without max-pooling) are translation-invariant (AKA spatially invariant) because their filters slide over every position in the image. This means that if a pattern that "matches" a filter is present anywhere in the image, then at least one neuron should activate. Max-pooling, on the other hand, has nothing to do with spatial invariance. It's simply a regularization technique to help reduce the number of parameters later in the network by downsizing activation layers within the network. This can help combat overfitting, although it's not strictly necessary. Alternatively, neural networks can achieve the same effect by using a convolutional layer with a stride of 2 instead of 1. Upvotes: 0 <issue_comment>username_2: FCNs can and typically do have downsampling operations. For example, [u-net](https://arxiv.org/pdf/1505.04597.pdf) has downsampling (more precisely, max-pooling) operations. The difference between an FCN and a regular CNN is that the former does not have fully connected layers. See [this answer](https://ai.stackexchange.com/a/21824/2444) for more info. Therefore, FCNs inherit the same properties as CNNs. There's nothing that a CNN (with fully connected layers) can do that an FCN cannot do. In fact, you can even simulate a fully connected layer with a convolution (with a kernel that has the same shape as the input volume). Upvotes: 0 <issue_comment>username_3: Neural networks are not **invariant** to translations, but **equivariant**. ### Invariance vs Equivariance Suppose we have an input $x$ and the output $y=f(x)$ of some map between spaces $X$ and $Y$. We apply a transformation $T$ in the input domain. For a general map, the output will change in some complicated and unpredictable way.
However, for a certain class of maps, the change of the output becomes very tractable. **Invariance** means that the output doesn't change after application of the map $T$. Namely: $$ f(T(x)) = f(x) $$ For CNNs, an example of a map invariant to translations is the **GlobalPooling** operation. **Equivariance** means that a symmetry transformation $T$ on the input domain leads to a symmetry transformation $T^{'}$ on the output. Here $T^{'}$ can be the same map $T$, the identity map - which reduces to invariance - or some other kind of transformation. This picture is an illustration of translational equivariance. [![from https://towardsdatascience.com/translational-invariance-vs-translational-equivariance-f9fbc8fca63a](https://i.stack.imgur.com/hxRvr.png)](https://i.stack.imgur.com/hxRvr.png) ### Equivariance of operations in CNN * Convolutions with `stride=1`: $$ f(T(x)) = T f(x) $$ The output feature map is shifted in the same direction and by the same number of steps. * Downsampling operations: convolutions with `stride=s>1` and (non-global) `Pooling`: $$ f(T\_{1/s}(x)) = T\_{1/s} f(x) $$ These are equivariant to the subgroup of translations by an integer number of strides. * `GlobalPooling`: $$ f(T(x)) = f(x) $$ This is invariant to arbitrary shifts; this property is useful in classification tasks. ### Combination of layers By stacking multiple equivariant layers you obtain an architecture that is equivariant as a whole. For a classification layer, it makes sense to put `GlobalPooling` at the end, in order for the NN to output the same probabilities for a shifted image. For a **segmentation** or **detection** problem, the architecture should be equivariant with the same map $T$, in order to translate bounding boxes or segmentation masks by the same amount as the transform on the input. Non-global downsampling operations *reduce equivariance* to the subgroup of shifts that are integer multiples of the stride. Upvotes: 2
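The equivariance and invariance relations in the answer above can be checked numerically. The sketch below uses a 1D stride-1 cross-correlation and a circular shift as the translation $T$ (the signal and kernel values are arbitrary, chosen only for illustration):

```python
import numpy as np

def conv1d_valid(x, k):
    """Stride-1 'valid' cross-correlation."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

x = np.array([0., 1., 3., 1., 0., 0., 0., 0.])
k = np.array([1., 2., 1.])
shift = 2

y = conv1d_valid(x, k)
y_shifted_input = conv1d_valid(np.roll(x, shift), k)

# Equivariance f(T(x)) = T f(x): shifting the input shifts the feature map by
# the same amount (compare the overlapping interior to sidestep border effects).
print(np.allclose(y[:len(y) - shift], y_shifted_input[shift:]))

# Invariance f(T(x)) = f(x): global max pooling gives the same scalar for both.
print(y.max() == y_shifted_input.max())
```

Both checks print `True`: the stride-1 convolution is equivariant, and only the global pooling on top makes the overall map invariant, which mirrors why classifiers end with a global pooling (or similar) stage.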
2020/02/22
1,038
3,986
<issue_start>username_0: I'm working on a project for my college to recognize traffic signs in pictures. I searched a lot but can't find the best method to do it. Can someone recommend me a paper, article, or even GitHub link that describes the best way to achieve this? It will be helpful.