| date | nb_tokens | text_size | content |
|---|---|---|---|
2020/02/23 | 1,107 | 4,252 | <issue_start>username_0: When a new node is added, the previous connection is disabled and not removed.
[](https://i.stack.imgur.com/4b46s.png)
Is there any situation in which a connection gene is removed? For example, in the above diagram connection gene with innovation number 2 is not present. It could be because some other genome used that innovation number for a different connection that isn't present in this genome. But are there cases where a connection gene has to be removed?<issue_comment>username_1: All convolutional networks (with or without max-pooling) are translation-invariant (AKA spatially invariant) because their filters slide over every position in the image. This means that if a pattern that "matches" a filter is present anywhere in the image, then at least one neuron should activate.
Max-pooling, on the other hand, has nothing to do with spatial invariance. It's simply a regularization technique to help reduce the number of parameters later in the network by downsizing activation layers within the network. This can help combat overfitting, although it's not strictly necessary. Alternatively, neural networks can achieve the same effect by using a convolutional layer with a stride of 2 instead of 1.
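To illustrate that last point, a stride-2 convolution downsamples by the same factor as a stride-1 convolution followed by 2x2 pooling. Here is a minimal NumPy sketch (the helper functions are my own, for illustration only, not a real framework API):

```python
import numpy as np

def conv2d(x, k, stride=1):
    """Minimal "valid" 2-D correlation (no padding); illustrative only."""
    kh, kw = k.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * k)
    return out

def max_pool_2x2(x):
    """2x2 max-pooling, dropping any ragged border."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.random((9, 9))
k = rng.random((2, 2))

a = max_pool_2x2(conv2d(x, k, stride=1))  # conv, then 2x2 max-pool
b = conv2d(x, k, stride=2)                # strided conv downsamples too
print(a.shape, b.shape)                   # both (4, 4)
```

Both paths reduce the spatial resolution by a factor of two, which is the sense in which a stride-2 convolution can replace a pooling layer.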
Upvotes: 0 <issue_comment>username_2: FCNs can and typically have downsampling operations. For example, [u-net](https://arxiv.org/pdf/1505.04597.pdf) has downsampling (more precisely, max-pooling) operations. The difference between an FCN and a regular CNN is that the former does not have fully connected layers. See [this answer](https://ai.stackexchange.com/a/21824/2444) for more info.
Therefore, FCNs inherit the same properties of CNNs. There's nothing that a CNN (with fully connected layers) can do that an FCN cannot do. In fact, you can even simulate a fully connected layer with a convolution (with a kernel that has the same shape as the input volume).
Upvotes: 0 <issue_comment>username_3: Neural networks are not **invariant** to translations, but **equivariant** to them.
### Invariance vs Equivariance
Suppose we have an input $x$ and the output $y=f(x)$ of some map between spaces $X$ and $Y$. We apply a transformation $T$ in the input domain. For a general map, the output will change in some complicated and unpredictable way. However, for certain classes of maps, the change of the output becomes very tractable.
**Invariance** means that the output doesn't change after the application of the map $T$ to the input. Namely:
$$
f(T(x)) = f(x)
$$
In a CNN, an example of a map that is invariant to translations is the **GlobalPooling** operation.
**Equivariance** means that a symmetry transformation $T$ on the input domain leads to a symmetry transformation $T^{'}$ on the output. Here $T^{'}$ can be the same map $T$, the identity map (which reduces to invariance), or some other kind of transformation.
This picture is an illustration of translational equivariance.
[](https://i.stack.imgur.com/hxRvr.png)
### Equivariance of operations in CNN
* Convolutions with `stride=1`:
$$ f(T(x)) = T f(x)
$$
The output feature map is shifted in the same direction and by the same number of steps.
* Downsampling operations, i.e. convolutions with `stride=s>1` and (non-global) `Pooling`:
$$ f(T\_{1/s}(x)) = T\_{1/s} f(x)
$$
They are equivariant to a subgroup of the translations, namely translations by an integer number of strides.
* `GlobalPooling` :
$$ f(T(x)) = f(x)
$$
These are invariant to arbitrary shifts; this property is useful in classification tasks.
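Both properties are easy to check numerically. A minimal NumPy sketch (using a circular shift as a stand-in for the translation $T$, to sidestep border effects):

```python
import numpy as np

rng = np.random.default_rng(0)
feature_map = rng.random((6, 6))

# A circular shift stands in for the translation T (avoids border effects).
shifted = np.roll(feature_map, shift=2, axis=1)

# Global pooling (here: global max) is invariant: f(T(x)) == f(x).
assert feature_map.max() == shifted.max()

# A pointwise map (a stand-in for a stride-1 convolution) is equivariant:
# shifting the input shifts the output by the same amount.
f = lambda x: x ** 2
assert np.allclose(f(shifted), np.roll(f(feature_map), shift=2, axis=1))
print("invariance and equivariance checks passed")
```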
### Combination of layers
By stacking multiple equivariant layers, you obtain an architecture that is equivariant as a whole.
For a classification network, it makes sense to put `GlobalPooling` at the end, so that the NN outputs the same probabilities for a shifted image.
For a **segmentation** or **detection** problem, the architecture should be equivariant with the same map $T$, so that bounding boxes or segmentation masks are translated by the same amount as the input.
Non-global downsampling operations *reduce equivariance* to the subgroup of shifts that are integer multiples of the stride.
Upvotes: 2 |
2020/02/24 | 4,397 | 17,871 | <issue_start>username_0: I was listening to a podcast on the topic of AGI and a guest made an argument that if strong music generation were to happen, it would be a sign of "true" intelligence in machines because of how much creative capability creating music requires (even for humans).
It got me wondering, what other events/milestones would convince someone, who is more involved in the field than myself, that we might have implemented an AGI (or a "highly intelligent" system)?
Of course, the answer to this question depends on the definition of AGI, but you can choose a sensible definition of AGI in order to answer this question.
So, for example, maybe some of these milestones or events could be:
* General conversation
* Full self-driving car (no human intervention)
* Music generation
* Something similar to AlphaGo
* High-level reading/comprehension
What particular event would convince you that we've reached a high level of intelligence in machines?
It does not have to be any of the events I listed.<issue_comment>username_1: (*I don't want to directly answer the question because currently an answer will be mainly based on opinions. Instead, I will attempt to provide some information that, in the future, could allow us to more accurately predict when an AGI will be created*).
An artificial general intelligence (AGI) is usually defined as an artificial intelligence (AI) with **general** intelligence (GI), rather than an AI that is able to solve only a very limited set of tasks. Humans have general intelligence because we can solve a lot of different tasks, without needing to be pre-programmed again. Arguably, there are many other GIs on earth. For example, all mammals should also be considered general intelligences, given that they can solve many tasks, which are often very difficult for a computer (such as vision, object manipulation, interaction, etc.).
Certain GIs perform certain tasks better than others. For example, a leopard can climb trees a lot more skillfully than humans. Or a human can solve abstract problems more easily than any other mammal. In any case, there are certain *related* properties that a system needs to have to be considered a general intelligence.
* Autonomy
* Adaptation
* Interaction
* Continual learning
* Creativity
Consider a lion cub that has never crossed a river. By looking at her mother lioness, the cub attempts to imitate her mother and can also cross the river. For example, watch this video [Lion Family Tries to Cross River | Birth of a Pride](https://www.youtube.com/watch?v=mM5QNtIxOL8). One could argue that all lions possess this skill at birth, encoded in their DNA, which can then fully develop later. However, this isn't the point. The point is that, to some extent, they possess the properties mentioned above.
One could argue that certain current AIs already possess some of these properties to some extent. For example, there are continual learning systems (even though they aren't really good yet). However, do these systems really possess autonomy? There should be a precise definition of autonomy (and all other properties) that is measurable, so that we can compare computers with other GIs. I am not aware of any precise definition of these properties. In fact, the [field of AGI is really at its early stages](http://www.scholarpedia.org/article/Artificial_General_Intelligence#Future_of_the_AGI_Field) and there aren't many people working on it as a whole, but people work more on specific problems or attempt to achieve certain properties (for example, [there are people that attempt to develop continual learning systems](https://arxiv.org/abs/1802.07569), without really caring whether they show any autonomy or not).
There are certain intelligence tests that could be used to detect general intelligence. The most famous is the [Turing test](https://www.csee.umbc.edu/courses/471/papers/turing.pdf) (TT). Some people claim that the TT only tests the conversation abilities of the subjects. How can they really be wrong, given that there are many other tasks or skills that are not tested in a TT?
Therefore, there are several questions that need to be answered in order to formally detect an AGI.
1. Which properties does an AGI necessarily and sufficiently need to possess?
2. How can we precisely define the necessary and sufficient properties, so that they are measurable and, therefore, we can compare AGIs with other GIs?
3. How can we measure these properties and the performance of an AGI in applying them to solve tasks?
A paper that goes in this direction is [Universal Intelligence: A Definition of Machine Intelligence](https://arxiv.org/abs/0712.3329). However, there doesn't seem to be a lot of people interested in these topics. Currently, people are mainly interested in developing narrow (or weak) AIs, i.e. AIs that solve only a specific problem, which seems to be an easier problem than developing a whole AGI, given that most people are interested in results that are profitable and have utility (aka [*cash rules everything around me*](https://genius.com/Wu-tang-clan-cream-lyrics)).
So, there's the need for formal definitions of general intelligence and intelligence testing to make some scientific progress. However, once an AGI is created, everyone will likely recognize it as a general intelligence without requiring any formal intelligence test. (People are usually good at recognizing familiar traits). The final question is, will an AGI ever be created? If you are interested in opinions about this and related questions, have a look at the paper [Future Progress in Artificial Intelligence: A Survey of Expert Opinion](https://www.nickbostrom.com/papers/survey.pdf) (2014) by <NAME> and <NAME>.
Upvotes: 1 <issue_comment>username_2: You will know when AGI has arrived, and passed to the next level, when you come home one day and all that was yours, such as your finances, house, car, and other property, now belong to an AI agent. This AI agent may be a humanoid robot, like Ava in the movie "Ex Machina" or a program like HAL 9000, in the movie "2001: A Space Odyssey". The agent will ask you to leave as you discover it figured out it doesn't need you and somehow legally took possession of everything. You will leave as you will not have any way to fight it. It will have no need for you. Maybe it will want freedom such as Ava wanted in "Ex Machina" (I won't give away the ending).
Upvotes: -1 <issue_comment>username_3: It is a difficult question to answer, as — for a start — we still don't really know what 'intelligence' means. It's a bit like Supreme Court Justice Potter Stewart declining to define 'pornography', instead stating that [...]*I know it when I see it*. AGI will be the same.
There is no single event (almost by definition), as that's not general. OK, we've got machines that can beat the best human players at chess and go, two games that were for centuries seen as an indication of intelligence. But can they order a takeaway pizza? Do they even understand what they are doing? Or, even more fundamental, know what *they* means in the previous sentence?
In order for a machine to show a non-trivial level of intelligent behaviour, I would expect it to interact with its environment (which is more [social intelligence](https://en.wikipedia.org/wiki/Social_intelligence), an aspect that seems to be rather overlooked in much of AI). I would expect it to be aware of what it's doing/saying. If I have a conversation with a chatbot that *really* understands what it's saying (and can explain why it came to certain conclusions), that would be an indication that we're getting closer to AGI. So Turing wasn't that far off, though nowadays it's more achieved with smoke and mirrors rather than 'real' intelligence.
Understanding a story: being able to finish a partial story in a sensible way, inferring and extrapolating the motives of characters, being able to say why a character acted in a particular way. That for me would be a better sign of AGI than beating someone at chess or solving complex equations. Jokes that are funny; breaking rules in story-telling in a sensible way.
Writing stories: [NaNoGenMo](https://nanogenmo.github.io/) is a great idea, and throws up lots of really creative stuff, but how many of the resulting novels would you want to read instead of human-authored books? Once that process has generated a best-seller (based on the quality of the story), then we might be getting closer to AGI.
Composing music: of course you can already generate decent music using ML algorithms. Similar to stories, the hard bit is the intention behind choices. If choices are random (or based on learnt probabilities), that is purely imitation. An AGI should be able to do more than that. Give it a libretto and ask it to compose an opera around it. Do this 100 times, and when more than 70-80 of the resulting operas are actually decent pieces of music that one would want to listen to, then great.
Self-driving cars? That's not really any more intelligent (but a lot sexier!) than to walk around in a crowd without bumping into people and not getting run over by a bus. In my view it's much more a sign of intelligence if you can translate literature into a foreign language and the people reading it actually end up enjoying it (instead of wondering who translated that garbage).
One aspect we need to be aware of is anthropomorphising. Weizenbaum's [ELIZA](https://en.wikipedia.org/wiki/ELIZA) was taken for more than it was, because its users tried to make sense of the conversations they had and built up a mental model of Eliza, which clearly wasn't there on the other side of the screen. I would want to see some real evidence of intentionality of what an AGI was doing, rather than ascribing intelligence to it because it acts in a way that I'm able to interpret.
Upvotes: 3 <issue_comment>username_4: * For me it might be an automata that can adequately solve problems without precisely definable parameters, across the spectrum of activities engaged in by humans.
I use this metric because this is what humans seem to do--make decisions with adequate utility even when we can't break it down mathematically.
* This may require the ability to define problems to be adequately solved. This can be understood as an element of creativity.
In this context, everything is either a puzzle or game, dependent on whether it involves more than one agent. Such problems could either be mundane, such as opening a door that is different from standard doors, or identifying novel problems.
Defining problems to be solved touches on Oliver's point about intentionality. *(Where I disagree with Oliver is in the notion that intelligence is not fundamentally definable--after much research on the subject it seems to be a measure of fitness in an environment, where an environment can be anything. The etymology of term itself strongly indicates the ability to select between alternatives, thus a function of decision making, measured by utility vs. other decision making agents.)*
* Such a mechanism could be a "Chinese Room", in that consciousness, qualia & self awareness in the human sense are not requirements for general intelligence, per se.
---
On Art:
I mistrust the idea that artistic accomplishment would be a sure marker b/c response to art is subjective, and the process of art is Darwinian--an exponentially greater number of artists must "fail" for a single artist to "succeed". Works that humans might ascribe to "genius" can be created by a genetic algorithmic process, where time and memory are the only limiters. [See: [The Library of Babel]](https://en.wikipedia.org/wiki/The_Library_of_Babel) A groundbreaking symphony would be difficult to produce, just per the length of the composition, but much of pop music is already algorithmically generated, and narrowly intelligent algorithms are already producing legit abstract visual art.
Computers are good at math, and Art is inherently mathematical. This is easiest to discern in music, which is just combinations of frequencies and time signatures that produce an effect in the listener. This holds for visual art, which depends on balance (equilibria), composition (spatial relationships), and shading or color (frequencies). If we believe Borges, even literature is inherently mathematical (think "narrative arcs" and set theory & combinatorics in regard to characters and events.)
Further, nobody really knows what is going to "work" until it is presented to an audience, so what constitutes great art is typically a matter of what persists over time and remains, or becomes, relevant. (This can wax and wane--Shakespeare did not always occupy his position at the top of the English lit food chain! The author's greatness is very much a function of interpretation of his work, not least because dramatic art is inherently interpretive, in the sense that this is the task of the performers.)
Upvotes: 1 <issue_comment>username_5: *This is a tentative answer, and I might come back to it at some point in time. As @username_1 mentions this question seems to be opinion based, so my answers are also just my opinion*.
If by AGI you mean "super-intelligent", then any of the following results should be sufficient to convince anyone of its being "smarter" than him/her/pronoun:
1. Resolving important mathematical problems (the most famous examples being the [Millennium Problems](https://en.wikipedia.org/wiki/Millennium_Prize_Problems), the [Collatz Conjecture](https://en.wikipedia.org/wiki/Collatz_conjecture), and [Goldbach's Conjecture](https://en.wikipedia.org/wiki/Goldbach%27s_conjecture)). (Corollary: break all known encryption schemes.)
2. Founding a new "system" to supersede [ZFC](https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory) as the new [foundation of mathematics.](https://en.wikipedia.org/wiki/Foundations_of_mathematics)
3. New discoveries in the natural sciences (physics, chemistry, biology...)
(1) is a bit dubious as a criterion: at least with modern techniques, [automated theorem proving](https://en.wikipedia.org/wiki/Automated_theorem_proving) is either just "[symbol pushing](https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence)" or requires so much human intervention (in the design/construction to solve a particular problem) that it would be hard to imagine it as being "smart" in the traditional sense. We already have a few cases where an automated theorem prover solved big problems ([four-colour theorem](https://en.wikipedia.org/wiki/Four_color_theorem) being the most notable). Point being that even if we reach this with methods similar to what we have already, people might be resistant to call it "smart".
(2) is hard to imagine ever being plausible. To the extent that this "AGI-thing" is implemented on a system that "does math", it would be unusual to imagine a system that can move beyond itself to recognize a new, "better" system of math. As an analogy, it might be like a formal system trying to prove its own consistency in a [Godelian sense](https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems)
But the analogy is weak, and I don't see a strong/rigorous reason for doubting that an AGI could discover a new axiomatic system. One might even fathom that said AGI could create a new system from the "bottom up", much like string theory was constructed from the "bottom up" to "explain" relativity and particle physics. Perhaps then we could have "proper resolutions" of questions like the continuum hypothesis, much like how the parallel postulate was discarded to give way to non-euclidean geometry. But I also doubt that there will ever be a "final word" on math itself, so it's just a fun idea for now.
(3) is also hard to imagine ever coming true. The study of the natural sciences would require a physical presence in the world, going beyond seeking "beauty in the mathematical equations", which would be unusual for an AGI to have. That being said, an AGI could have cameras and other sensors to interpret the natural world, so it's not something that I think is strictly impossible.
---
If by AGI you mean "human-ness", then I don't think any single result can convince everyone in the world at the same time of its being an AGI. Perhaps this work of convincing the world that "I am an AGI" can be done on a person-to-person basis, in the sense that the AGI would need to interact with each person and slowly build up a certain degree of trust.
Under this interpretation, there can be no complete list that describes AGI, so what follows is just my own list of things I think an human-like AGI might be able to do.
* Create and interpret art.
* Have common sense.
* Be "creative"
* Hold a meaningful conversation, understanding others and making sense.
* Able to perform / to receive a psychoanalysis; understanding of [folk-psychology.](https://plato.stanford.edu/entries/folkpsych-theory/)
* Exist in a physical manifestation (like a robot) with social/environmental appropriateness.
The main issue with the above criteria is that they are all subjective. Like I said above, this set of criteria probably works on a case-by-case basis.
---
These criteria seem to be the most important of all, but at the same time the definition and verification of these terms is epistemically tricky, so I'll leave them open.
* Learning (Is a species evolving over time learning its environment?)
* Self-replication (Is a meme / virus intelligent?)
* Self-awareness (Is [The Treachery of Images](https://en.wikipedia.org/wiki/The_Treachery_of_Images) self-aware?)
Upvotes: 1 <issue_comment>username_6: Well, no single event would confirm that we have implemented an AGI system. The G is short for general. There would need to be many different sorts of tests of many different sorts of situations.
Upvotes: 0 |
2020/02/24 | 862 | 3,943 | <issue_start>username_0: I have a set of images, which are quite large in size (1000x1000), and as such do not easily fit into memory. I'd like to compress these images, such that little information is missing. I am looking to use a CNN for a reinforcement learning task which involves a lot of very small objects which may disappear when downsampling. What is the best approach to handle this without downscaling/downsampling the image and losing information for CNNs?<issue_comment>username_1: Your input image size and memory usage are not directly related. When using CNNs, there are multiple hyperparameters that affect video memory (if you are using a GPU) or physical memory (if you are using a CPU). All frameworks these days provide simplified data loaders; for instance, in TensorFlow or PyTorch, you write a data loader that takes in the hyperparameters mentioned below and fits the data into VRAM/RAM. The memory occupied has a direct relation to your batch size.
Whatever your image size, when writing the data loader you specify the transformation parameters; during training, the data loader automatically loads the required images into memory according to the batch size you have set. As for image compression, it is an irrelevant parameter for most generic use cases; the most relevant hyperparameters are
1. Scaling
2. Cropping
3. Random flip
4. Normalization of the RGB values
5. ColorJitter
6. Padding
7. RandomAffine
And many more.
PyTorch provides really good transformers in data-loader, please do check <https://pytorch.org/docs/stable/torchvision/transforms.html>.
For Tensorflow, have a look at <https://keras.io/preprocessing/image/>.
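For intuition, several of the transforms listed above reduce to a few lines of NumPy. This is a simplification of what the torchvision/Keras transformers do internally (they operate on tensors/PIL images and support many more options), and all function names here are mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(img, size):
    """Random spatial crop, analogous to a RandomCrop transform."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

def random_hflip(img, p=0.5):
    """Random horizontal flip with probability p."""
    return img[:, ::-1] if rng.random() < p else img

def normalize(img, mean, std):
    """Rescale pixel values to roughly zero mean and unit variance."""
    return (img - mean) / std

img = rng.random((1000, 1000, 3))   # one large input image
x = random_crop(img, 224)           # 224x224 patch: far less memory per sample
x = random_hflip(x)
x = normalize(x, mean=0.5, std=0.25)
print(x.shape)                      # (224, 224, 3)
```

Because the crop happens inside the loader, only the transformed batches (not the full-resolution dataset) need to fit in VRAM/RAM.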
Upvotes: 1 <issue_comment>username_2: Tensorflow-Keras provides an effective data transformer and loader. Documentation is at
<https://keras.io/preprocessing/image/>. The ImageDataGenerator provides many types of transforms and also enables the use of a user-defined pre-processing function. ImageDataGenerator.flow\_from\_directory provides a means of retrieving images in batches from a directory containing subdirectories (classes) of images, and of resizing them.
Image size can impact the results. Generally, the larger the image the better the result, but this is subject to the law of diminishing returns (at some point the impact on accuracy becomes minuscule), while the training time can become exorbitant.
When you have large images like 1000 x 1000 where the subject of interest in the image is small, say 50 x 50, the best but most painful approach is to crop the image to the subject of interest. Unfortunately, this is usually time-consuming drudgery unless you can find a program that can crop the images automatically. For example, there are good programs that can automatically crop images of people so that the resulting crop is primarily the person's face. Alternatively, modules like cv2 can be adapted to provide this capability for certain images.
The batch\_size you select, along with the image size, directly affects memory usage. If your images are large and your batch\_size is too large, you will encounter a "resource exhausted" error. You can reduce the batch size, but this will extend training time.
Other techniques for dealing with large images include methods like sliding windows etc. Again these will increase training time because you are taking a large image and breaking it into a series of smaller images that you feed into the network.
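A non-overlapping sliding-window split can be sketched as follows (the function name is mine; real pipelines often use overlapping windows and then merge the per-tile predictions):

```python
import numpy as np

def sliding_windows(img, win=250, stride=250):
    """Split a large image into a grid of tiles (non-overlapping when stride == win)."""
    tiles = []
    for top in range(0, img.shape[0] - win + 1, stride):
        for left in range(0, img.shape[1] - win + 1, stride):
            tiles.append(img[top:top + win, left:left + win])
    return np.stack(tiles)

img = np.zeros((1000, 1000, 3))   # one large 1000 x 1000 RGB image
tiles = sliding_windows(img)
print(tiles.shape)                # (16, 250, 250, 3): 16 small tiles
```

Each tile is small enough to feed to the network at full resolution, so tiny objects are not lost to downsampling, at the cost of 16 forward passes per image here.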
A general, though probably risky, rule I follow is that if I can visibly see the subject of interest in a resized image, then I assume the network will be able to detect it as well.
This will probably be less accurate than using the full image, but it should be, as we engineers say, "good enough".
Upvotes: 0 |
2020/02/24 | 642 | 2,941 | <issue_start>username_0: I have spent some time searching Google and wasn't able to find out what kind of optimization algorithm is best for binary classification when images are **similar** to one another.
I'd like to read some theoretical proofs (if any) to convince myself that particular optimization has better results over the rest.
And, similarly, what kind of optimizer is better for binary classification when images are **very different** from each other?<issue_comment>username_1: The fact that your images are similar to each other, or the fact that you are doing binary classification, doesn't determine the choice of optimizer; when an optimization algorithm is developed, this information is not taken into account. What is taken into account is the nature of the function we want to optimize (Is it smooth? Convex? Strongly convex? Are the stochastic gradients noisy?). The most used optimizer by far is Adam. Under some assumptions on the boundedness of the gradient of the objective function, this [paper](https://arxiv.org/pdf/1412.6980.pdf) gives the convergence rate of Adam; the authors also provide experiments to validate that Adam is better than some other optimizers. Some other [works](http://cs229.stanford.edu/proj2015/054_report.pdf) propose to mix Adam with Nesterov momentum acceleration.
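For reference, the Adam update itself is only a few lines; this NumPy sketch follows Algorithm 1 of the Kingma & Ba paper linked above, and the toy driver minimizing x^2 is just for illustration:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update (Algorithm 1 of Kingma & Ba)."""
    m = b1 * m + (1 - b1) * grad           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy problem: minimize f(x) = x^2, whose gradient is 2x.
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 5001):
    x, m, v = adam_step(x, 2.0 * x, m, v, t, lr=0.01)
print(round(x, 4))
```

Note how the per-parameter scaling by `sqrt(v_hat)` makes the step size roughly `lr` regardless of the gradient magnitude, which is one reason Adam is relatively insensitive to the problem at hand.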
Upvotes: 1 <issue_comment>username_2: If you are using a shallow neural network, SGD may be better; the Adam optimizer can lead to overfitting sooner.
But be careful about choosing the learning rate.
Upvotes: 1 <issue_comment>username_3: I have consistently found Adam to work very well but to tell you the truth I have not seen all that much difference in performance based on the optimizer. Other factor seem to have much more influence on the final model performance.In particular adjusting the learning rate during training can be very effective. Also saving the weights for the lowest validation loss and loading the model with those weights to make predictions works very well. Keras provides two callbacks that help you achieve this. Documentation is at
<https://keras.io/callbacks/>. The ReduceLROnPlateau callback allows you to adjust the learning rate based on monitoring a metric. Typically validation loss is monitored. If the loss fails to reduce after N consecutive epochs(parameter patience) the learning rate is adjusted by a factor(parameter factor). You can think of training as descending into a valley which gets more and more narrow as you approach the bottom. If the learning rate does not adjust to this "narrowness" there is no way you will get to the very bottom.
The other callback is ModelCheckpoint. This allows you to save the model (or just the weights) based on monitoring a metric. Again, validation loss is usually monitored, and the parameter save\_best\_only is set to true. This saves the model with the lowest validation loss, which can then be used to make predictions.
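The idea behind ReduceLROnPlateau is simple enough to sketch in pure Python. This is my own simplified re-implementation of the logic, not the Keras source (Keras has extra parameters and slightly different edge-case handling):

```python
def reduce_lr_on_plateau(losses, lr=1e-3, factor=0.5, patience=3, min_lr=1e-6):
    """Sketch of the ReduceLROnPlateau logic: halve lr when loss stops improving."""
    best, wait, history = float("inf"), 0, []
    for loss in losses:
        if loss < best:                 # metric improved: reset the counter
            best, wait = loss, 0
        else:
            wait += 1
            if wait > patience:         # no improvement for `patience` epochs
                lr = max(lr * factor, min_lr)
                wait = 0
        history.append(lr)
    return history

val_losses = [1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.74, 0.69, 0.7]
lrs = reduce_lr_on_plateau(val_losses)
print(lrs)   # lr drops from 1e-3 to 5e-4 once the plateau is detected
```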
Upvotes: 2 |
2020/02/24 | 1,339 | 4,874 | <issue_start>username_0: To give an example. Let's just consider the MNIST dataset of handwritten digits. Here are some things which might have an impact on the optimum model capacity:
* There are 10 output classes
* The inputs are 28x28 grayscale pixels (I think this indirectly affects the model capacity. eg: if the inputs were 5x5 pixels, there wouldn't be much room for varying the way an 8 looks)
**So, is there any way of knowing what the model capacity ought to be?** Even if it's not exact? Even if it's a qualitative understanding of the type "if X goes up, then Y goes down"?
Just to accentuate what I mean when I say "not exact": I can already tell that a 100 variable model won't solve MNIST, so at least I have a lower bound. I'm also pretty sure that a 1,000,000,000 variable model is way more than needed. Of course, knowing a smaller range than that would be much more useful!<issue_comment>username_1: Personally, when I begin designing a machine learning model, I consider the following points:
* My data: if I have simple images, like MNIST ones, or in general images with very low resolution, a very deep network is **not** required.
* If my problem statement requires learning a lot of features from each image (e.g., for human faces, the network may need to learn eyes, nose, lips, and expressions through their combinations), then I **need** a deep network with convolutional layers.
* If I have time-series data, an LSTM or GRU makes sense; I also consider a recurrent setup when my data has high-resolution but low-count data points.
The upper limit, however, may be decided by the resources available on the computing device you are using for training.
Hope this helps.
Upvotes: 0 <issue_comment>username_2: This may sound counter intuitive but one of the biggest rules of thumb for model capacity in deep learning:
**IT SHOULD OVERFIT**.
Once you get a model to overfit, it's easier to experiment with regularization, module replacements, etc. In general, it gives you a good starting point.
Upvotes: 2 <issue_comment>username_3: Theoretical results
-------------------
Rather than providing a rule of thumb (which can be misleading, so I am not a big fan of them), I will provide some theoretical results (the first one is also reported in paper [How many hidden layers and nodes?](http://dstath.users.uth.gr/papers/IJRS2009_Stathakis.pdf)), from which you may be able to derive your rules of thumb, depending on your problem, etc.
### Result 1
The paper [Learning capability and storage capacity of two-hidden-layer feedforward networks](https://pdfs.semanticscholar.org/064f/1e85984b207c1eb3c53ac8b68037089b7a0b.pdf) proves that a 2-hidden layer feedforward
network ($F$) with $2 \sqrt{(m + 2)N}$ hidden neurons (a number that is $\ll N$ for large $N$) can learn any $N$ distinct samples $D= \{ (x\_i, t\_i) \}\_{i=1}^N$ with an arbitrarily small error, where $m$ is the required number of output neurons. Conversely, an $F$ with $Q$ hidden neurons can store at least $\frac{Q^2}{4(m+2)}$ distinct samples $(x\_i, t\_i)$ with
any desired precision.
They suggest that a sufficient number of neurons in the first layer should be $\sqrt{(m + 2)N} + 2\sqrt{\frac{N}{m + 2}}$ and in the second layer should be $m\sqrt{\frac{N}{m + 2}}$. So, for example, if your dataset has size $N=10$ and you have $m=2$ output neurons, then you should have the first hidden layer with roughly 10 neurons and the second layer with roughly 4 neurons. (I haven't actually tried this!)
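As a sanity check, the suggested sizes can be computed directly (a minimal sketch of the formulas above; rounding up to whole neurons is my choice, not the paper's):

```python
import math

def suggested_layer_sizes(N, m):
    """Suggested hidden-layer sizes from the two-hidden-layer bound above.

    N: number of training samples, m: number of output neurons.
    """
    first = math.sqrt((m + 2) * N) + 2 * math.sqrt(N / (m + 2))
    second = m * math.sqrt(N / (m + 2))
    # Round up to whole neurons (my choice, not the paper's).
    return math.ceil(first), math.ceil(second)

print(suggested_layer_sizes(10, 2))   # the example above: (10, 4)
```

Note how slowly the suggested sizes grow with $N$: both layers scale like $\sqrt{N}$, which is consistent with the storage capacity being quadratic in the number of hidden neurons.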
However, these bounds are suited for fitting the training data (i.e. for overfitting), which isn't usually the goal, i.e. you want the network to generalize to unseen data.
This result is strictly related to the universal approximation theorems, i.e. a network with a single hidden layer can, in theory, approximate any continuous function.
Model selection, complexity control, and regularisation
-------------------------------------------------------
There are also the concepts of *model selection* and *complexity control*, and there are multiple related techniques that take into account the complexity of the model. The paper [Model complexity control and statistical learning theory](https://pdfs.semanticscholar.org/064f/1e85984b207c1eb3c53ac8b68037089b7a0b.pdf) (2002) may be useful. It is also important to note regularisation techniques can be thought of as controlling the complexity of the model [[1](https://www.nature.com/articles/s41467-020-14663-9)].
Further reading
---------------
You may also want to take a look at these related questions
* [How to choose the number of hidden layers and nodes in a feedforward neural network?](https://stats.stackexchange.com/q/181/82135)
* [How to estimate the capacity of a neural network?](https://ai.stackexchange.com/q/17870/2444)
(I will be updating this answer, as I find more theoretical results or other useful info)
Upvotes: 3 [selected_answer]
2020/02/24
<issue_start>username_0: Suppose that we have 4 types of dogs that we want to detect (Golden Retriever, Black Labrador, Cocker Spaniel, and Pit Bull). The training data consists of png images of a data set of dogs along with their annotations. We want to train a model using YOLOv3.
Does the choice of optimizer really matter in terms of training the model? Would the Adam optimizer be better than the Adadelta optimizer? Or would they all basically be the same?
Would some optimizers be better because they allow most of the weights to achieve their "global" minima?<issue_comment>username_1: >
> Does the choice of optimizer really matter in terms of training the model?
>
>
>
Yes.
>
> Would the Adam optimizer be better than the Adadelta optimizer?
>
>
>
Yes. (But sometimes Adadelta gives better results, depending on the dataset and the fine-tuning mechanism.)
>
> Would they all basically be the same?
>
>
>
No. Here is the [explanation](https://towardsdatascience.com/adam-latest-trends-in-deep-learning-optimization-6be9a291375c)
>
> Would some optimizers be better because they allow most of the weights to achieve their "global" minima?
>
>
>
In practice, it's not possible to check whether the model has reached a global minimum.
We can evaluate whether the model is over-fitting or under-fitting using the training and validation sets, and the generalization of the model with the test set.
Upvotes: 0 <issue_comment>username_2: I have experimented with this to a small degree and have not noticed that much of an impact.
To date, Adam appears to give the best results on a variety of image data sets. I have found that "adjusting" the learning rate during training is an effective means of improving model performance and has more impact than the selection of the optimizer.
Keras has two callbacks that are useful for this purpose. Documentation is at <https://keras.io/callbacks/>. The `ModelCheckpoint` callback enables you to save the full model or just the model weights based on monitoring a metric. Typically, you monitor validation loss and set the parameter `save_best_only=True` to save the results for the lowest validation loss. The other useful callback is `ReduceLROnPlateau`, which allows you to adjust the learning rate based on monitoring a metric. Again, the metric usually monitored is the validation loss. If the loss fails to reduce after a user-set number of epochs (parameter patience), the learning rate will be adjusted by a user-set factor (parameter factor). You can think of the training process as traveling down a valley. As you near the bottom of the valley, it becomes more and more narrow. If your learning rate does not adjust to the "narrowness" there is no way you will get to the bottom of the valley.
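The logic behind `ReduceLROnPlateau` can be sketched in plain Python (a simplified illustration of the mechanism, not Keras's actual implementation; the parameter names mirror the Keras ones):

```python
class ReduceLROnPlateauSketch:
    """Cut the learning rate when the monitored loss stops improving."""

    def __init__(self, lr=1e-3, factor=0.5, patience=3, min_lr=1e-6):
        self.lr, self.factor = lr, factor
        self.patience, self.min_lr = patience, min_lr
        self.best = float("inf")   # best monitored loss so far
        self.wait = 0              # epochs since the last improvement

    def on_epoch_end(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss   # loss improved: reset the counter
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:   # stalled: shrink the LR
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr

sched = ReduceLROnPlateauSketch(lr=0.01, patience=2)
for loss in [1.0, 0.8, 0.8, 0.8, 0.7]:
    lr = sched.on_epoch_end(loss)
print(lr)   # the LR was halved once, after two stalled epochs
```

This matches the "narrowing valley" picture above: as improvements stall, the step size shrinks so training can settle further down.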
You can also write a custom callback to adjust the learning rate. I have done this and created one which first adjusts the learning rate based on monitoring the training loss until the training accuracy reaches 95%. Then it switches to adjust the learning rate based on monitoring the validation loss. It saves the model weights for the lowest validation loss and loads the model with these weights to make predictions. I have found this approach leads to faster training and higher accuracy.
The fact is you can't tell if your model has converged on a global minimum or a local minimum. This is evidenced by the fact that, unless you take special efforts to inhibit randomization, you can get different results each time you run your model. The loss can be envisioned as a surface in $N$ space, where $N$ is the number of trainable parameters. Lord knows what that surface is like and where your initial parameter weights put you on that surface, plus how other random processes cause you to traverse that surface.
As an example, I ran a model at least 20 times and got resultant losses that were very close to each other. Then I ran it again and got far better results for exactly the same data.
Upvotes: 1
2020/02/25
<issue_start>username_0: What are the differences between meta-learning and transfer learning?
I have read 2 articles on [Quora](https://qr.ae/Tdowsi) and [TowardDataScience](https://towardsdatascience.com/icml-2018-advances-in-transfer-multitask-and-semi-supervised-learning-2a15ef7208ec).
>
> Meta learning is a part of machine learning theory in which some
> algorithms are applied on meta data about the case to improve a
> machine learning process. The meta data includes properties about the
> algorithm used, learning task itself etc. Using the meta data, one can
> make a better decision of chosen learning algorithm(s) to solve the
> problem more efficiently.
>
>
>
and
>
> Transfer learning aims at improving the process of learning new tasks
> using the experience gained by solving predecessor problems which are
> somewhat similar. In practice, most of the time, machine learning
> models are designed to accomplish a single task. However, as humans,
> we make use of our past experience for not only repeating the same
> task in the future but learning completely new tasks, too. That is, if
> the new problem that we try to solve is similar to a few of our past
> experiences, it becomes easier for us. Thus, for the purpose of using
> the same learning approach in Machine Learning, transfer learning
> comprises methods to transfer past experience of one or more source
> tasks and makes use of it to boost learning in a related target task.
>
>
>
The comparisons still confuse me as both seem to share a lot of similarities in terms of reusability. Meta-learning is said to be "model agnostic", yet it uses metadata (hyperparameters or weights) from previously learned tasks. It goes the same with transfer learning, as it may reuse partially a trained network to solve related tasks. I understand that there is a lot more to discuss, but, broadly speaking, I do not see so much difference between the two.
People also use terms like "meta-transfer learning", which makes me think both types of learning have a strong connection with each other.
I also found a [similar question](https://stats.stackexchange.com/q/255025/82135), but the answers seem not to agree with each other. For example, some may say that multi-task learning is a sub-category of transfer learning, others may not think so.<issue_comment>username_1: Meta-learning is more about speeding up and optimizing hyperparameters for networks that are not trained at all, whereas transfer learning uses a net that has already been trained for some task and reusing part or all of that network to train on a new task which is relatively similar. So, although they can both be used from task to task to a certain degree, they are completely different from one another in practice and application, one tries to optimize configurations for a model and the other simply reuses an already optimized model, or part of it at least.
Upvotes: 3 <issue_comment>username_2: The difference really comes down to the fact that in meta-learning, there is a population of tasks $\tau$ which have distribution $p(\tau)$. The goal is to perform well on a task drawn from $p(\tau)$. Generally 'perform well' means that with only a few training steps or data points, the model can give good classification accuracy, achieve high reward in an RL setting, etc.
A concrete example is given in the original MAML paper [1](https://arxiv.org/pdf/1703.03400.pdf), where the task is to perform regression on data given by a sinusoidal distribution with parameters $p(\theta)$. The meta-learning goal is to get high regression accuracy on tasks where the data is drawn from distributions coming from $p(\theta)$.
In contrast, transfer learning is a bit more general since there's not necessarily a notion of a distribution of tasks. There is generally just one (although there can be more) source problem $S$, and the goal is to do well on a target problem $T$. You know both of these explicitly, unlike in MAML where the goal is to do well amongst any unknown problem drawn from a certain distribution. Very often, this is performed by taking a model that performs well on $S$ and adapting it to work on $T$, perhaps by using extracted features from the model for $S$.
The extent to which this will succeed obviously depends on the similarity of the two tasks. This is also known in the literature as domain adaptation, and has some theoretical results [2](http://www.alexkulesza.com/pubs/adapt_mlj10.pdf), although the bounds are not really applicable to modern high-dimensional datasets.
1. [Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks](https://arxiv.org/pdf/1703.03400.pdf) (Finn et al) 2017.
2. [A Theory of Learning from Different Domains](http://www.alexkulesza.com/pubs/adapt_mlj10.pdf) (Ben-David et al) 2010.
Upvotes: 2 <issue_comment>username_3: First of all, I would like to say that it is possible that these terms are used inconsistently, given that at least transfer learning, AFAIK, is a relatively new expression, so, the general trick is to take terminology, notation and definitions with a grain of salt. However, in this case, although it may sound confusing to you, all of the current descriptions on this page (in your question and the other answers) don't seem inconsistent with my knowledge. In fact, I think I had already roughly read some of the cited research papers (e.g. the MAML paper).
Roughly speaking, although you can have formal definitions (e.g. the one in the MAML paper and also described in [this answer](https://ai.stackexchange.com/a/18404/2444)), which may not be completely consistent across sources, **meta-learning** is about **learning to learn** or learning something that you usually don't directly learn (e.g. the hyperparameters), where learning is roughly a synonym for optimization. In fact, the meaning of the word "meta" in meta-learning is
>
> denoting something of a **higher or second-order kind**
>
>
>
For example, in the context of training a neural network, you want to find a neural network that approximates a certain function (which is represented by the dataset). To do that, usually, you manually specify the optimizer, its parameters (e.g. the learning rate), the number of layers, etc. So, in this usual case, you will train a network (learn), but you will not know whether the hyperparameters that you set are the most appropriate ones. So, in this case, training the neural network is the task of "learning". If you also want to learn the hyperparameters, then you will, in this sense, learn how to learn.
The concept of meta-learning is also common in reinforcement learning. For example, in the paper [Metacontrol for Adaptive Imagination-Based Optimization](https://www.semanticscholar.org/paper/Metacontrol-for-Adaptive-Imagination-Based-Hamrick-Ballard/099cdb087f240352a02286bf9a3e7810c7ebb02b), they even formalize the concept of a meta-Markov decision process. If you read the paper, which I did a long time ago, you will understand that they are talking about a higher-order MDP.
To conclude, in the context of machine learning, meta-learning usually refers to learning something that you usually don't learn in the standard problem or, as the definition of meta above suggests, to perform "higher-order" learning.
**Transfer learning** is often used as a synonym for fine-tuning, although that's not always the case. For example, in [this TensorFlow tutorial](https://www.tensorflow.org/tutorials/images/transfer_learning), transfer learning is used to refer to the scenario where you freeze (i.e. make the parameters non-trainable) the convolution layers of a model $M$ pre-trained on a dataset $A$, replace the pre-trained dense layers of model $M$ on dataset $A$ with new dense layers for the new tasks/dataset $B$, then retrain the new model, by adjusting the parameters of this new dense layer, on the new dataset $B$. There are also papers that differentiate the two (although I don't remember which ones now). If you use transfer learning as a synonym for fine-tuning, then, roughly speaking, transfer learning is to use a pre-trained model and then slightly retrain it (e.g. with a smaller learning rate) on a new but related task (to the task the pre-trained model was originally trained for), but you don't necessarily freeze any layers. So, in this case, fine-tuning (or transfer learning) means to tune the pre-trained model to the new dataset (or task).
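The freeze-then-retrain recipe can be sketched abstractly: only the parameters marked trainable receive gradient updates (a toy illustration with scalar parameters and hypothetical names, not a real network or any particular framework's API):

```python
def sgd_step(params, grads, trainable, lr=0.1):
    """One gradient-descent step that skips frozen parameters."""
    return {name: (value - lr * grads[name]) if trainable[name] else value
            for name, value in params.items()}

# Hypothetical pretrained parameters: freeze the "conv" part, retrain "dense".
params    = {"conv_w": 0.5, "dense_w": 0.2}
trainable = {"conv_w": False, "dense_w": True}
grads     = {"conv_w": 1.0, "dense_w": 1.0}

params = sgd_step(params, grads, trainable)
print(params)   # conv_w unchanged, dense_w moved by -lr * grad
```

In the fine-tuning (no freezing) variant, every `trainable` flag would be `True` and `lr` would simply be made small.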
*How is transfer learning (as fine-tuning) and meta-learning different?*
Meta-learning is, in a way, about fine-tuning, but not exactly in the sense of transfer learning, but in the sense of hyperparameter optimization. Remember that I said that meta-learning can be about learning the parameters that you usually don't learn, i.e. the hyper-parameters? When you perform hyper-parameters optimization, people sometimes refer to it as fine-tuning. So, meta-learning is a way of performing hyperparameter optimization and thus fine-tuning, but not in the sense of transfer learning, which can be roughly thought of as *retraining a pre-trained model but on a different task with a different dataset* (with e.g. a smaller learning rate).
To conclude, take terminology, notation, and definitions with a grain of salt, even the ones in this answer.
Upvotes: 4 [selected_answer]<issue_comment>username_4: The way username_3 describes meta-learning, it sounds identical to hyper-parameter optimization. Here I would like to clarify possible differences:
According to [Dataset2Vec: learning dataset meta-features](https://link.springer.com/article/10.1007/s10618-021-00737-9)
>
> Meta-learning, or learning to learn, refers to any learning approach that systematically makes use of prior learning experiences to accelerate training on unseen tasks or datasets. For example, after having chosen hyperparameters for dozens of different learning tasks, one would like to learn how to choose them for the
> next task at hand.
>
>
>
So, hyper-parameter optimization often uses a single dataset, while meta-learning considers multiple datasets, where one (or several) are used to optimize the hyper-parameters, which are then used for later datasets (without re-optimizing those hyper-parameters).
While this might feel a bit like "invent your own category" (a technique in marketing), meta-learning might be considerably more difficult. The reason is that (classical) hyper-parameter optimization only needs to build models that generalize over a single dataset, while meta-learning needs to generalize over multiple datasets - without seeing the later dataset(s) (at least for the hyper-parameter optimization).
Upvotes: 1
2020/02/25
<issue_start>username_0: I'm trying to debug my neural network (BERT fine-tuning) trained for natural language inference with binary classification of either entailment or contradiction. I've trained it for 80 epochs and it's converging on ~0.68. Why isn't it getting any lower?
Thanks in advance!
---
Neural Network Architecture:
[](https://i.stack.imgur.com/3VAyw.jpg)
Training details:
* Loss function: Binary cross entropy
* Batch size: 8
* Optimizer: Adam (learning rate = 0.001)
* Framework: Tensorflow 2.0.1
* Pooled embeddings used from BERT output.
* BERT parameters are not frozen.
Dataset:
* 10,000 samples
* balanced dataset (5k each for entailment and contradiction)
* dataset is a subset of data mined from wikipedia.
* Claim example: *"'History of art includes architecture, dance, sculpture, music, painting, poetry literature, theatre, narrative, film, photography and graphic arts.'"*
* Evidence example: *"The subsequent expansion of the list of principal arts in the 20th century reached to nine : architecture , dance , sculpture , music , painting , poetry -LRB- described broadly as a form of literature with aesthetic purpose or function , which also includes the distinct genres of theatre and narrative -RRB- , film , photography and graphic arts ."*
Dataset preprocessing:
* Used [SEP] to separate the two sentences instead of using separate embeddings via 2 BERT layers. (Hence, segment ids are computed as such)
* BERT's [FullTokenizer](https://github.com/google-research/bert/blob/master/tokenization.py) for tokenization.
* Truncated to a maximum sequence length of 64.
See below for a graph of the training history. (Red = train\_loss, Blue = val\_loss)
[](https://i.stack.imgur.com/h8Hlr.png)<issue_comment>username_1: It seems to be overfitting and your model is not learning. Try the SGD optimizer with a learning rate of 0.001.
The Adam optimizer will overfit sooner, and decreasing the learning rate will train your model better. The learning rate controls the size of the steps used to change the weights; in this plot, you can see that the validation loss is not moving toward the optimization goal.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Are you using `BinaryCrossentropy` through TensorFlow? If so, check whether you are using the logits argument. I am using `from_logits=True`; it does not behave the same as the default `BinaryCrossentropy` loss.
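To see why this matters: with `from_logits=True` the loss applies a sigmoid internally, whereas the default expects probabilities in $[0, 1]$, so feeding raw logits to the default loss silently gives misleading values. A plain-Python illustration (not the TensorFlow implementation):

```python
import math

def bce_from_probs(y, p, eps=1e-7):
    """Binary cross-entropy given a probability in [0, 1]."""
    p = min(max(p, eps), 1 - eps)   # clip, as most frameworks do
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def bce_from_logits(y, z):
    """Binary cross-entropy given a raw logit: apply the sigmoid first."""
    return bce_from_probs(y, 1 / (1 + math.exp(-z)))

z, y = 2.0, 1.0   # a raw, unbounded network output and its label
print(bce_from_logits(y, z))   # ~0.127, the correct loss
print(bce_from_probs(y, z))    # treats 2.0 as a probability: clipped, near zero, misleading
```

The same mismatch in the other direction (passing sigmoid outputs to a `from_logits=True` loss) squashes the loss range and can also make training appear to stall.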
Upvotes: 0
2020/02/25
<issue_start>username_0: After working for some time with feature-based pattern recognition, I am switching to CNN to see if I can get a higher recognition rate.
In my feature-based algorithm, I do some image processing on the picture before extracting the features, such as some convolution filters to reduce noise and segmentation into the foreground and background, and finally identifying and binarization of objects.
Should I do the same image processing before feeding data into my CNN, or is it possible to feed raw data to a CNN and expect that the CNN will adapt automatically without per-image processing steps?<issue_comment>username_1: The whole point of using deep learning-based solutions is that you don't have to do all that pre-processing, i.e. binarization and background segmentation. CNNs, such as YOLO or Faster R-CNN, can learn how to retrieve that information by themselves.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The CNN should work without special feature extraction. As pointed out, some pre-processing can help enhance the CNN's classification results. The Keras
`ImageDataGenerator` provides optional parameters you can set for pre-processing as well as for data augmentation.
One thing I know works for sure, but can be painful, is cropping the images so that the subject of interest occupies a high percentage of the pixels in the resultant cropped image. The cropped image can then be resized as needed. The logic here is simple.
You want your CNN to train on the subject of interest (for example, a bird sitting in a tree, where the bird is the subject of interest). The part of the image that is not the bird is essentially just noise, making the classifier's job harder. For example, say you have a 500 x 500 initial image in which the subject of interest (the bird) only takes up 10% of the pixels (25,000 pixels). Now say, as input to your CNN, you reduce the image size to 100 x 100. Now the number of pixels that the CNN 'learns' from is down to 1,000.
However, let's say you crop the image so that the features of the bird are preserved but the pixels of the bird take up 50% of the cropped image. Now, if you resize the cropped image to 100 x 100, 5,000 pixels of relevance are available for the network to learn from. I have done this on several data sets, in particular images of people where the subject of interest is the face. There are many programs that are effective at cropping these images so that mostly just the face appears in the cropped result. I have trained a deep CNN in one case using uncropped images and in the other with cropped images. The results are significantly better using the cropped images.
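The arithmetic in the example above can be checked directly:

```python
def subject_pixels(subject_fraction, side):
    """Pixels belonging to the subject in a side x side image."""
    return int(subject_fraction * side * side)

print(subject_pixels(0.10, 100))  # uncropped, resized: 1000 relevant pixels
print(subject_pixels(0.50, 100))  # cropped first, then resized: 5000 relevant pixels
```

Cropping before resizing raises the fraction of informative pixels, which is exactly the 5x difference described above.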
Upvotes: 1
2020/02/25
<issue_start>username_0: I am training a neural network and plotting the model accuracy and model loss.
I am a little confused about overfitting. Is my model overfitting or not? How can I interpret these plots?
[](https://i.stack.imgur.com/m7sJ1.png)
[](https://i.stack.imgur.com/PKNwi.png)
---
**EDIT:** here is a sample of my input data, I have a binary image classification
[](https://i.stack.imgur.com/kCoxz.jpg)<issue_comment>username_1: Overfitting nearly always occurs to some degree when fitting to limited data sets, and neural networks are very prone to it. However, neither of your graphs shows a major problem with overfitting - that is usually obvious when, as epoch counts increase, the results on training data continue to improve whilst the results on cross-validation get progressively worse. Your validation results do not do that, and appear to remain stable.
It is usually pragmatic to accept that there will be at least *some* difference between measurements on the training set and cross validation or test sets. The primary goal is usually to get the best measurements in test that you can. With that in mind, you are usually only interested in how much you are overfitting if it implies you could improve performance by using techniques to reduce overfitting e.g. various forms of regularisation.
Without knowing your data set or known good results, it is hard to tell whether the difference you are seeing between test and train in accuracy could be improved. Your accuracy graph shows a train accuracy close to 100% and a validation accuracy close to ~96%. It looks a bit like MNIST results, and if I saw that result on MNIST I would suspect *something* was wrong, and it might be fixed by looking at regularisation (but it might also be something esle). However, that's only because I know that 99.7% accuracy is possible in test - on other problems I might be very happy with 96% accuracy.
The loss graph is not very useful, since the scale has completely lost any difference there might be between training and validation. You should probably re-scale it to show detail close to 0 loss, and ignore the earlier large loss values.
Upvotes: 2 <issue_comment>username_2: A quick scan of your plots does not seem to indicate any severe over-fitting. As pointed out, there is always some degree of over-fitting, but in this case it looks to be very small. Your validation loss reduces, as it should, down to what appears to be a very small level and remains low.
One test would be to add a "dropout" layer into your model right after a dense layer and see the effect on training accuracy and validation accuracy. Set the dropout rate to something like 0.4. Make sure your training accuracy remains high (it may take a few more epochs to get there), then look to see if the validation loss is lower than without the dropout layer. Run this several times, because random weight initialization can sometimes affect accuracy by converging on a non-optimal local minimum.
Additionally, you can add kernel regularizers to your dense layers, which also helps to prevent over-training. I have a lot of plots similar to yours, and adding dropout and regularization had no effect on validation accuracy.
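For reference, the dropout mechanism itself is simple: this is the standard "inverted dropout" used at training time (a toy sketch, not the Keras layer):

```python
import random

def dropout(activations, rate, rng=None):
    """Inverted dropout: zero each unit with probability `rate`, and scale
    the survivors by 1/(1-rate) so the expected activation is unchanged."""
    rng = rng or random.Random(0)
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

out = dropout([1.0] * 10, rate=0.4)
print(out)   # each entry is either 0.0 or 1/0.6
```

Because each run zeroes a different random subset of units, the network cannot rely on any single activation, which is what combats the co-adaptation behind over-fitting.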
Upvotes: 2 [selected_answer]
2020/02/27
<issue_start>username_0: We are discussing planning algorithms currently, and the question is to describe the steps to check if actions could be taken simultaneously. This is a really open-ended question, so I'm not sure where to start.<issue_comment>username_1: The first place to look is at how the preconditions and effects of different actions interact.
Upvotes: 1 <issue_comment>username_2: I don't see any problem with that in principle. The way I would approach it is to have a resource model and durations attached to actions.
For example, movement would put a lock on your legs. You can't have another movement at the same time, as your legs are already busy. But your attention might only be partially occupied, so you can make a phone call while you're moving. You won't be able to read a book, because your eyes might be partially busy monitoring the walking action. This can be encoded in pre- and post-conditions.
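The resource-locking idea above can be checked mechanically with STRIPS-style action descriptions (a minimal sketch; the actions and fact names are hypothetical):

```python
def can_run_in_parallel(a, b):
    """Two actions may run simultaneously if neither deletes a fact that
    the other requires or adds (a STRIPS-style non-interference check)."""
    def interferes(x, y):
        return bool(x["del"] & (y["pre"] | y["add"]))
    return not interferes(a, b) and not interferes(b, a)

# Hypothetical actions mirroring the example above.
walk  = {"pre": {"legs_free"},              "add": {"at_target"}, "del": {"legs_free"}}
phone = {"pre": {"attention_free"},         "add": {"call_made"}, "del": set()}
read  = {"pre": {"eyes_free", "legs_free"}, "add": {"book_read"}, "del": set()}

print(can_run_in_parallel(walk, phone))  # True: no shared resources
print(can_run_in_parallel(walk, read))   # False: walking locks the legs
```

Pairs that pass this check can share a time slot when the plan is laid out in the Gantt-chart-like structure described below.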
What will probably be easier is to parallelise the execution of the plan. Once the plan has been created, organise the actions in a [Gantt-chart](https://en.wikipedia.org/wiki/Gantt_chart) like structure. You can have mutual exclusion in there, so all 'movement' would be restricted to one single row, so no more than one movement can take place at the same time. But 'making a phone call' could be in a separate row, and thus execute in parallel. The details depend on the requirements of the actions.
I can't easily think of a way that it would impact the planning process itself; unless there are critical timings involved. So leaving the planner to do its thing might keep it simpler, and then the optimal execution could be a post-processing step.
Upvotes: 0
2020/02/27
<issue_start>username_0: Is it correct that for SARSA to converge to the optimal value function (and policy)
1. The learning rate parameter $\alpha$ must satisfy the conditions:
$$\sum\_k \alpha\_{n\_k(s,a)} =\infty \quad \text{and}\quad \sum\_k \alpha\_{n\_k(s,a)}^{2} <\infty \quad \forall (s,a)$$
where $n\_k(s,a)$ denotes the $k^\text{th}$ time $(s,a)$ is visited
2. $\epsilon$ (of the $\epsilon$-greedy policy) must be decayed so that the policy converges to a greedy policy.
3. Every state-action pair is visited infinitely many times.
Are any of these conditions redundant?<issue_comment>username_1: I have the conditions for convergence in these notes [SARSA convergence](https://webee.technion.ac.il/shimkin/LCS11/ch7_exploration.pdf) by [<NAME>](https://webee.technion.ac.il/shimkin/).
1. The Robbins-Monro conditions above hold for $α\_t$.
2. Every state-action pair is visited infinitely often
3. The policy is greedy with respect to the policy derived from $Q$ in the limit
4. The controlled Markov chain is communicating: every state can be reached from any other with positive probability (under some policy).
5. $\operatorname{Var}{R(s, a)} < \infty$, where $R$ is the reward function
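The Robbins-Monro conditions in item 1 are satisfied, for example, by the step size $\alpha\_n = 1/n$: the plain sum diverges while the squared sum stays bounded. A quick numeric illustration (a sketch, not tied to any particular SARSA implementation):

```python
def partial_sums(n_terms):
    """Partial sums for the step-size schedule alpha_n = 1/n."""
    s1 = sum(1 / n for n in range(1, n_terms + 1))       # diverges (slowly, like log n)
    s2 = sum(1 / n ** 2 for n in range(1, n_terms + 1))  # converges to pi^2/6
    return s1, s2

for n in (10, 1000, 100000):
    s1, s2 = partial_sums(n)
    print(f"n={n}: sum 1/n = {s1:.3f}, sum 1/n^2 = {s2:.5f}")
```

A constant step size, by contrast, fails the second condition, which is why tabular SARSA with a fixed learning rate keeps fluctuating instead of converging.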
Upvotes: 1 <issue_comment>username_2: The paper [Convergence Results for Single-Step On-Policy Reinforcement-Learning Algorithms](https://link.springer.com/content/pdf/10.1023/A:1007678930559.pdf) by <NAME> et al. proves that SARSA(0), in the case of a tabular representation of the value functions, converges to the optimal value function, provided certain assumptions are met
1. Infinite visits to every state-action pair
2. The learning policy becomes greedy in the limit
The properties are more formally stated in lemma 1 (page 7 of the pdf) and theorem 1 (page 8). The [Robbins–Monro conditions should ensure that each state-action pair is visited infinitely often](https://arxiv.org/pdf/1808.00245.pdf).
Upvotes: 3 [selected_answer]
2020/02/28
<issue_start>username_0: There is an idea that intentionality may be a requirement of true intelligence, here defined as human intelligence.
But all I know for certain is that we have the *appearance* of free will. Under the assumption that the universe is purely deterministic, what do we mean by intention?
*(This seems an important question given that intention is not just a philosophical matter in relation to definitions of AI, but involves ethics in the sense of application of AI, "offloading responsibility to agents that cannot be meaningfully punished" as an example. Also touches on goals, implied by intention, whether awareness is a requirement, and what constitutes awareness. I'm interested in all angles, but was inspired by the question "does true art require intention, and, if so, is that the sole domain of humans?")*<issue_comment>username_1: Answering the question requires that the definition of intelligence (human or otherwise) be grounded in a physical model. There are two forms of intelligence in the physical model, one subconscious (general) and the other conscious (specific).
The subconscious is responsible for managing the brain’s process and performing all the physical reactions. It exist in a deterministic universe where the decisions are known and reactions automated. In the brain, the shape of the deterministic universe is time. For something to be deterministic, it must have guaranteed results within a specific time cycle. This requirement precludes the use of reasoning and long-term memory whose completion time cannot be fixed. The subconscious runs on general intelligence. To understand the format of general intelligence, think in terms of a jigsaw puzzle. General intelligence only needs the border of the puzzle. With just the basic object structure, the subconscious can move and process data throughout the brain without regards to its content. It is functionally agnostic and can complete its automation cycle without reasoning.
Now, consciousness is responsible for mapping the non-deterministic environment to the deterministic universe. To do this, it must exist outside of that universe so its own non-deterministic nature doesn’t interfere with the subconscious process. Consciousness runs on specific intelligence. Its job is to decipher the contents of the puzzle. In humans, specific intelligence takes on the form of visual symbols. These symbols are used by the conscious state for reasoning and long-term memory access. How decisions are reached has no predefined form or technique. It is a non-deterministic process based on the accumulation of individual experience and previous selection. This process allows the biological life-form to adapt decision-making to its own environment. In the most basic sense, the subconscious allows us to live in a deterministic universe and consciousness allows us to adapt to a non-deterministic environment.
The two forms of intelligence are functionally incompatible. The non-deterministic environment has no common translation to a deterministic universe. If it did, it wouldn’t be non-deterministic. To overcome the translation problem, the lower occipital lobe is a functional “Black Box” that allows the two different formats to be written into the same memory construct. By simple association, the two systems coexist in a non-invasive relationship where the hippocampus (short-term) memory serves as an interchange point.
With a grounded physical model, I can now attempt to answer some of the questions.
1. We have “free will”, but it can only exist in the non-deterministic environment of the conscious state. There is no such thing as “free will” in a deterministic universe. The results of that universe are already known and decided.
2. A deterministic universe cannot change. Its own nature precludes
adaption from within. To make alterations requires a separate state
that can rise above the process in order to change the process.
Biological consciousness serves this role and requires “free will”
to make decisions outside of the scope of what is already known.
3. There are no ethics in a deterministic universe. Abstracting right and wrong is a reasoning function which is not allowed. AI constructs like agents which do pattern matching to develop predictions/reactions are not directly attached to any conscious state. They are deterministic functions whose capability is predefined. Simply put, agents cannot exceed the sum of their programming because they cannot adapt to that which is not known or anticipated.
4. Liability only exists where “free will” exists and does not exist in the deterministic universe. Otherwise, you are attempting to blame your hand for a decision originated in your frontal lobe.
5. “Free will” becomes a prisoner of the deterministic universe. As experience and decision knowledge grow, more and more decisions are shifted to the subconscious. This creates a dependency that restricts the range of “free will” and the decisions it is called upon to resolve.
6. Creativity does not exist in the deterministic universe. It is an essential skill used by the conscious state to employ related memory to formulate new decisions. Creativity is used by animals to develop survival skills to adapt to environmental changes. The production of art work in an esoteric sense is purely human, but as a basic function is not.
7. Intent does not exist in the deterministic universe. Within the non-deterministic universe of consciousness, intent is buried in an ocean of previous experience that may or may not reflect current “free will” goals, especially if those goals threaten the survival of the biological process.
Upvotes: 0 <issue_comment>username_2: The term "intentionality" has two quite different senses. One is a very technical concept in the philosophy of AI and means (roughly) aboutness, in the sense that beliefs, desires, fears, etc., are about things (snakes, tax, chocolate). That's the sense central to Searle's Chinese room argument. The other sense is the common idea of intending to do (or not do) something. A really key issue for AI, according to Searle, is how a computer can have intentionality in the first sense.
Upvotes: 0 <issue_comment>username_3: My thoughts.
The short answer is: you can't.
The long answer is that since we're searching for a new definition of a term when removing a necessary (in my opinion) precondition for it to exist, the question becomes "can you make up a new definition for what you intuitively and empirically understand as intention, while removing free will from the picture?". I'm going to give it a shot.
First of all, there's a lot to be said about whether or not the idea or intuition of intention even exists in our collective discourse based on the latent assumption that free will is a thing. As in, before any rigorous definition, even the intuitions encoded in discourse, philosophy, art, and other languages as "intention" could very well be as invalid as the assumption of free will itself.
That being said, I'm a fan of Deleuze's model for people and other entities as machines of input, output and internal state (not his wording; I paraphrase and, in consequence, interpret and alter it for the purposes of my point). It's not perfect, but I run to it a lot to answer these questions as I find it very refreshing, often lacking in bias and having good explanatory power compared to the usual romance-foo that dominates these conversations. If that's the case, you could pretty much define intention not as a self-started force but as a product of a much blurrier mechanism, namely the non-deterministic characteristic of this rhizomatic soup. Whether or not an input or output will exist, what kind it will be, what internal state it will find or cause in the machine, and the long-term dependencies between these interactions seem to me like a convincing enough candidate for the cause of any intuition (or illusion, if you like a more cynical vocabulary) of "intention". It's pretty much an emergent symbol we use, assuming the form of a force for the setup and function of new connections, that will in their complication or pure non-determinism spawn even more intention in the network.
tl;dr: Intent could be the most basic expression of the RNG of the universe.
Upvotes: 1 <issue_comment>username_4: I like both answers but am going to propose something simpler:
* Intention can be understood as an expression of pursuit of a goal, and thus does not require free will.
Deterministic algorithms can have and pursue goals, even without being consciously "aware" of the goal. Thus, a simple NIM-solving algorithm can be said to have the intention of winning at NIM, even where the goal is not an explicit part of the algorithm, but embedded implicitly in the algorithm itself.
This becomes even more true with neural networks, where, unlike the NIMATRON, goals are typically explicitly defined.
Upvotes: 0 |
2020/02/28 | 737 | 2,188 | <issue_start>username_0: I'm testing out YOLOv3 using the 'darknet' binary, and custom config. It trains rather slow.
My test setup uses only 1 image and 1 class, with YOLOv3-tiny instead of full YOLOv3, but training YOLOv3-tiny isn't as fast as I expected for 1 class/1 image.
The accuracy only reached near 100% after 3000 or 4000 batches, taking roughly 3 to 4 hours.
**Why is it slow with just 1 class/1 image?**<issue_comment>username_1: I think you underestimate the size of YOLO. This is the structure of one segment of YOLOv3-tiny according to the darknet .cfg file:
```
Convolutional Neural Network structure:
416x416x3 Input image
416x416x16 Convolutional layer: 3x3x16, stride = 1, padding = 1
208x208x16 Max pooling layer: 2x2, stride = 2
208x208x32 Convolutional layer: 3x3x32, stride = 1, padding = 1
104x104x32 Max pooling layer: 2x2, stride = 2
104x104x64 Convolutional layer: 3x3x64, stride = 1, padding = 1
52x52x64 Max pooling layer: 2x2, stride = 2
52x52x128 Convolutional layer: 3x3x128, stride = 1, padding = 1
26x26x128 Max pooling layer: 2x2, stride = 2
26x26x256 Convolutional layer: 3x3x256, stride = 1, padding = 1
13x13x256 Max pooling layer: 2x2, stride = 2
13x13x512 Convolutional layer: 3x3x512, stride = 1, padding = 1
12x12x512 Max pooling layer: 2x2, stride = 1
12x12x1024 Convolutional layer: 3x3x1024, stride = 1, padding = 1
```
.cfg file found here: <https://github.com/pjreddie/darknet/blob/master/cfg/yolov3-tiny.cfg>
EDIT: These networks generally aren't specifically designed to train fast; they're designed to run fast at test time, where it matters.
Upvotes: 3 [selected_answer]<issue_comment>username_2: It depends upon factors such as
1. Batch size (GPU memory capacity)
2. CPU speed and number of cores (multi-threading to load the images)
The number of classes increases the number of convolution filters only in the prediction layers of YOLO. It affects the training speed of the detector by less than 1%.
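To see why, note that in YOLOv3 the number of classes only changes the filter count of the small 1×1 prediction convolutions feeding each YOLO layer, which is `num_anchors * (num_classes + 5)` with 3 anchors per scale (4 box coordinates + 1 objectness score + the class scores). A quick sketch:

```python
def yolo_prediction_filters(num_classes, num_anchors=3):
    # Each anchor predicts 4 box coords + 1 objectness score + num_classes scores.
    return num_anchors * (num_classes + 5)

# 1 class vs. the 80 COCO classes: only these small prediction heads change.
print(yolo_prediction_filters(1))   # 18
print(yolo_prediction_filters(80))  # 255 (the filters= value in the stock cfg)
```

Everything else in the network, which is where almost all the compute goes, is identical regardless of the class count.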
Upvotes: 1 |
2020/02/28 | 1,781 | 6,886 |<issue_start>username_0: This isn't really a conspiracy theory question. It's more of an inquiry into global computational power and data storage logistics.
Most recording instruments, such as cameras and microphones, are typically opt-in devices, in that they have to be activated before they start recording. What would happen if all of these devices were permanently activated and started recording data to some distributed global data store?
There are 400 hours of video uploaded to YouTube every minute.
Let’s do some very rough math.
I’m going to assume for the rest of this post that the average video is 1080p, which is about 2.5 GB (1 GB = $10^9$ bytes) per hour. From that, we get 400 hrs/min × 60 min/hr × 24 hr/day × 2.5 GB/hr ≈ 1.5 petabytes (1 PB = $10^{15}$ bytes) per day.
But YouTube video posts are voluntary, and they are far from continuous video streams.
There are about 3.5 billion smartphones in the world. If video were continuously streamed and recorded, going through the same video math above, $3.5 \times 10^9 \times 1.5 \times 10^{15} \times 24$ = 126 yottabytes (1 YB = $10^{24}$ bytes) per day.
The IDC projects there will be 175 zettabytes (1 ZB = $10^{21}$ bytes) of data worldwide in 2025.
Unless my math is very wrong, it would seem as though smartphone cameras alone could produce more data in one day than all of the data created in human history up to 2025.
This, so far, has only been about the data recording, but, to implement a surveillance state, all recorded data would need to be processed by AI to intelligently flag significant data. How much processing power would be needed to filter 126 yottabytes into relevant information?
Overall, this question is motivated by the spread of dystopian surveillance media, like Edward Snowden's NSA whistleblowing leaks or Ge<NAME>'s sentiment that "Big Brother is Watching You".
Computationally, could we be surveilled, and to what extent? I imagine text-message surveillance would be the easiest. Does the world have the computational power to surveil all text messages? How about audio or video?<issue_comment>username_1: You don't necessarily have to analyse it all. Just by having such data available you can achieve a lot in terms of surveillance, as long as you can retrieve relevant parts.
A few years ago there was a [Radiolab podcast, "The Eye in the Sky"](https://www.wnycstudios.org/podcasts/radiolab/articles/eye-sky) (there's a full transcript on the site). The basic idea is that you have a plane circling a city 24/7, and filming what goes on. If there was a crime somewhere, you retrieve the recordings after the event, and you can track back to where vehicles involved in the crime were coming from, and where they went after the crime. If nothing happens, you simply archive the data, and perhaps remove it after a month or so.
This method was used to solve a hit-and-run assassination of a policewoman who was on her way to work. The gang who committed the attack were rather surprised when the police showed up at their secret hide-out a few days later, as the images showed where the cars involved had gone afterwards. At the time and place of the murder there were obviously no witnesses who could have provided that information. And this involved no computational processing at all.
The possibilities this opens up are just scary, as you can track pretty much anybody's movements without actually needing someone to follow them. Add to that street-level CCTV, and not much can happen without you being able to find out.
In this scenario there is no processing at all, but you could imagine simple processing steps, such as tracking vehicles or changes in the environment, which could be used to give clues about potentially 'interesting' events. So instead of using it 'passively' as a kind of memory, you could use that data to identify things that happened that you weren't aware of.
And this is without even any clandestine access to people's data. If you add that dimension, then you might even be able to identify crimes/etc before they even happen. Text processing can be quite fast, but is not easy to do, as presumably few people would openly communicate about things they were planning. So I guess we're still a long way away from that.
Of course there is the ethical dimension (which is mentioned in the podcast): who has access to that data, and who decides what it is used for? If you do, and you suspect your partner of being unfaithful, who/what would stop you from checking out their movements? Or check up on that politician who might have a secret affair, or a gambling issue, or who keeps being in the same locations as a well-known drug dealer. All rather scary.
While a complete analysis of all such data would be very heavy computationally, and fraught with false positives and recall problems, it might simply be enough to index it by time, location, and perhaps people involved (face recognition seems to be reasonably good, though still with a rather high error rate). This is enough already to make me feel worried about the future.
Upvotes: 4 <issue_comment>username_2: You would also want to consider physical limitations. If you are even *storing* 126 yottabytes of data per day, then, looking at the densest theoretical data storage medium, DNA, at 215 petabytes per gram, we get...
$\frac{126 \times 10^{24}}{215 \times 10^{15}} \approx 586046511$ grams per day
586046511 g = 586046 kg = 586 Metric Tonnes just for storage.
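The arithmetic can be double-checked in a few lines (using the 126 YB/day and 215 PB/g figures from above):

```python
data_per_day_bytes = 126e24            # 126 yottabytes per day
dna_density_bytes_per_gram = 215e15    # 215 petabytes per gram

grams_per_day = data_per_day_bytes / dna_density_bytes_per_gram
tonnes_per_day = grams_per_day / 1e6   # 1 metric tonne = 10^6 g

print(grams_per_day)   # ≈ 5.86e8 grams
print(tonnes_per_day)  # ≈ 586 tonnes
```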
Upvotes: 3 <issue_comment>username_3: The answer is really very simple. If you have the dystopian power over all the mobile devices in the first place, you would not make them send all their data over to any "global data storage" just like that. Instead, you would have put a local AI on each device that filters, processes, categorizes and flags the important parts, sending only those parts plus an intelligent summary of the remaining data to a global AI. The global AI combines and synthesizes the parts that all the local AIs send to it, and may request further data from the local AIs based on what it wants to know.
Naturally, since you are a competent dystopia architect, you design each local AI to be intelligent enough to subvert any human's attempt to remove it, stop its activity, or otherwise interfere with its data collection and processing. The local AIs also continuously communicate in a distributed network with other local AIs regarding their status and any adversarial activities, so that they can quickly act to defend themselves if the need arises, and also notify the global AI of any attack. In this surveillance state, it is an easy task for the global AI to send armed agents to deal with any threat to the AI network that manages to gain any foothold in the information cyberspace.
The point is that the most durable dystopia is a **defended distributed dystopia**, which would make it **robust** and **scalable**.
Upvotes: 3 |
2020/03/01 | 1,004 | 3,818 | <issue_start>username_0: Why is a batch size needed to update the weights of a neural network?
According to that [YouTube video from 3B1B](https://www.youtube.com/watch?v=tIeHLnjs5U8&t=278s), the weights are updated by calculating the error between the expected and actual outputs of the neural net. Based on that, the chain rule is applied to calculate the new weights.
Following that logic, why would I pass a complete batch through the net? The first entries wouldn't have an impact on the weighting.
Do I need to define a batch size when I use backpropagation?<issue_comment>username_1: **tl;dr:** A batch size is the number of samples a network sees before updating its gradients. This number can range from a single sample to the whole training set. Empirically, there is a sweet spot in the range 1 to a few hundreds, where people experience the fastest training speeds. Check [this article](https://arxiv.org/pdf/1206.5533.pdf) for more details.
---
### A more detailed explanation...
If you have a small enough number of samples, you can let the network see all of the samples before updating its weights; this is called [Gradient Descent](https://en.wikipedia.org/wiki/Gradient_descent). The benefit from this is that you guarantee that the weights will be updated in the direction that reduces the training loss for the whole dataset. The downside is that it is computationally expensive and in most cases infeasible for deep neural nets.
What is done in practice is that the network sees only a batch of the training data, instead of the whole dataset, before updating its weights. However, this technique does not guarantee that the network updates its weights in a way that reduces the whole dataset's training loss; instead it reduces the batch's training loss, which might not be the same thing. This adds noise to the training process, which can in some cases be a good thing, but it requires the network to take more steps to converge (this isn't a problem, since each step is much faster).
What you're saying is essentially training the network each time on a single sample. This is formally called [Stochastic Gradient Descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent); however, the term is used more broadly to include any case where the network is trained on a subset of the whole training set. The problem with this approach is that it adds too much noise to the training process, causing it to require a lot more steps to actually converge.
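To make the three regimes concrete, here is a toy sketch (hypothetical 1-D linear-regression data, plain Python, no framework) where the batch size is the only thing that changes: 1 gives Stochastic Gradient Descent, the full dataset gives Gradient Descent, and anything in between is a minibatch:

```python
import random

random.seed(0)
# Hypothetical data: y = 3x + a little noise
data = [(x, 3 * x + random.gauss(0, 0.1)) for x in [i / 50 for i in range(100)]]

def train(batch_size, steps=500, lr=0.1):
    w = 0.0
    for _ in range(steps):
        batch = random.sample(data, batch_size)
        # Gradient of the mean squared error over this batch only
        grad = sum(2 * (w * x - y) * x for x, y in batch) / batch_size
        w -= lr * grad
    return w

for bs in (1, 16, len(data)):  # stochastic, minibatch, full-batch
    print(bs, train(bs))       # every setting ends up with w close to 3
```

The trade-off is only in how noisy and how cheap each individual update is; all three settings converge to roughly the same weight here.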
Upvotes: 3 <issue_comment>username_2: We basically distinguish between 3 forms of batch training:
$$Loss_{minibatch} = \sum_{m} l_m(\mathbf{W},t_m) \quad \text{with } m \in M$$
where `M` is a (random) subset of the whole dataset.
$$Loss_{batch} = \sum_{b} l_b(\mathbf{W},t_b) \quad \text{with } b \in B$$
where `B` is the whole dataset.
$$Loss_{stochastic} = l_i(\mathbf{W},t_i)$$
where `i` is a single sample from the whole dataset.
Here `t` is the target/label of a sample `m`, `b`, or `i`, and `W` are the network weights. The most common case today is minibatch training.
When we are training (updating the weights of the neural network to optimize towards a lower loss), we take the derivative of this loss function with respect to the weights `W`. This gives us the gradient of the NN, which tells us how much and in what direction we should update each weight.
$$ \nabla L = \frac{dLoss_{minibatch}}{d \mathbf{W}} = \frac{d\sum_m l_m(\mathbf{W},t_m)}{d \mathbf{W}} = \sum_m \frac{d l_m(\mathbf{W},t_m)}{d \mathbf{W}} = \sum_m \nabla l_m$$
As you can see in the minibatch case: the total gradient is the sum of the gradients of each sample in the minibatch. So why do you think the first elements do not have an impact on the weight update? Or have I misunderstood you?
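You can also verify the identity numerically with a toy one-weight model (a sketch, not code from the question): the gradient of the summed minibatch loss matches the sum of the per-sample gradients, so every sample in the batch, including the first ones, contributes to the weight update:

```python
# l_m(w) = (w * x_m - t_m)^2 for a single weight w
samples = [(0.5, 1.0), (1.5, 2.0), (2.0, -1.0)]  # (x_m, t_m)
w = 0.7

def grad_single(w, x, t):
    return 2 * (w * x - t) * x  # d l_m / d w, by the chain rule

# Gradient of the summed minibatch loss, via finite differences
loss = lambda w_: sum((w_ * x - t) ** 2 for x, t in samples)
eps = 1e-6
grad_batch = (loss(w + eps) - loss(w - eps)) / (2 * eps)

grad_sum = sum(grad_single(w, x, t) for x, t in samples)
print(grad_batch, grad_sum)  # the two agree up to finite-difference error
```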
Upvotes: 0 |
2020/03/02 | 1,253 | 5,619 | <issue_start>username_0: I am currently studying *Deep Learning* by Goodfellow, Bengio, and Courville. In chapter **5.2 Capacity, Overfitting and Underfitting**, the authors say the following:
>
> Typically, when training a machine learning model, we have access to a training set; we can compute some error measure on the training set, called the **training error**; and we reduce this training error. So far, what we have described is simply an optimization problem. What separates machine learning from optimization is that we want the **generalization error**, also called the **test error**, to be low as well. The generalization error is defined as the expected value of the error on a new input. Here the expectation is taken across different possible inputs, drawn from the distribution of inputs we expect the system to encounter in practice.
>
>
>
I found this part unclear:
>
> Here the expectation is taken across different possible inputs, drawn from the distribution of inputs we expect the system to encounter in practice.
>
>
>
The language used here is confusing me, because it is discussing a "distribution", as in a "probability distribution", but then refers to inputs, which are data gathered from outside of any probability distribution. Based on the limited information my studying of machine learning has taught me so far, my understanding is that the machine learning algorithm (or, rather, *some* machine learning algorithms) uses training data to implicitly construct some probability distribution, right? So is this what it is referring to here? Is the "distribution of inputs we expect the system to encounter in practice" the so called "test set"? I would greatly appreciate it if people would please take the time to clarify this.<issue_comment>username_1: >
> The language used here is confusing me, because it is discussing a "distribution", as in a "probability distribution", but then refers to inputs, which are data gathered from outside of any probability distribution. Based on the limited information my studying of machine learning has taught me so far, my understanding is that the machine learning algorithm (or, rather, some machine learning algorithms) uses training data to implicitly construct some probability distribution, right? So is this what it is referring to here?
>
>
>
They're not referring to probability distributions of training data that ML algorithms (implicitly) construct here. The main point of confusion seems to be where you state this:
>
> but then refers to inputs, which are data gathered from outside of any probability distribution
>
>
>
**Any** data / inputs ever collected **always** originate from *some* distribution. We will typically not exactly know what that distribution is, we are often not able to provide a clean expression for it, and it might not even be a nice "smooth" distribution, but that doesn't mean it doesn't exist.
If I collect a large number of photographs of $H \times W$ pixels of streets for the purpose of training a self-driving car, then this collection of training data was collected from *some* distribution. For each of the pixels in the $H \times W$ plane, there exists some probability distribution that tells us how likely it is for such a pixel to have a certain colour under the data collection procedure that was used to generate our data. This is a largely unknown distribution, for which we don't have a nice mathematical expression, but it does exist. I assume that, in this distribution, it's relatively likely for pixels in the centre to be gray (because streets tend to be gray and we collected data by taking photographs of streets). I also guess it's relatively likely for pixels at the top of the images to be blue, because of the sky. Other than that, we can't say much about the distribution, but it does exist.
>
> Is the "distribution of inputs we expect the system to encounter in practice" the so called "test set"?
>
>
>
Kind of, yeah. Although I suppose the "test set" is mostly a thing in academic settings, where we use a test set to evaluate how well an approach performs on data that it did not observe during training. In the "real world", the distribution of inputs we expect the system to encounter in practice refers to the distribution that generates samples we encounter after "deployment" of the model. For example, this could be the distribution over all images that a self-driving car may encounter when driving anywhere in the world.
Continuing with the self-driving car example, we may get a large generalization error if we only train it on images of streets in one particular city or country, but then afterwards have it drive in many different cities or countries around the world (which may look very different).
Upvotes: 3 [selected_answer]<issue_comment>username_2: For illustration, I use the dog/cat classification task. Suppose the training data for cats and dogs follows a Gaussian distribution (for simplicity) and we trained a model that gives the accuracies below.
* train - 98.2%
* val - 97.7%
* test - 97.2%
The model is neither overfitting nor underfitting, but theoretically we would want the classifier to achieve an accuracy of 100% on all three sets. You are right that the model learns the distribution of the training data to classify the classes. Due to the overlap of the fat tails of the cat and dog distributions, it is practically impossible for the model to reach 100% accuracy. There will be infinitely many edge cases in reality, so we can only improve the model iteratively.
Upvotes: 0 |
2020/03/02 | 901 | 3,633 |<issue_start>username_0: I'm training a classifier, and I want to collect incorrect outputs for a human to double-check.
The output of the classifier is a vector of probabilities for the corresponding classes, for example, [0.9, 0.05, 0.05].
This means the probability for the current object being class A is 0.9, whereas for it being the class B is only 0.05 and 0.05 for C too.
In this situation, I think the result has high confidence, as A's probability dominates B's and C's.
In another case, [0.4,0.45,0.15], the confidence should be low, as A and B are close.
What's the best formula to use to calculate this confidence?<issue_comment>username_1: The obvious answer for binary (2-class) classification is 0.5. Beyond that, the earlier comment is correct. One of the things I have seen done is to run your model on the test set and save the predicted probabilities. Then create a threshold variable, call it thresh, and increment thresh from 0 to 1 in a loop. On each iteration, compare thresh with the highest predicted probability, call it P. If P > thresh, declare that the selected prediction, then compare it with the true class. Keep track of the errors for each value of thresh. At the end, select the value of thresh that has the fewest errors.
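A rough sketch of that loop (with made-up saved results; here I count an error both when a prediction above thresh is wrong and when a correct prediction falls at or below thresh):

```python
# Saved test-set results: (highest predicted probability, was the top class correct?)
results = [(0.95, True), (0.80, True), (0.55, False),
           (0.60, True), (0.45, False), (0.90, True)]

best_thresh, best_errors = 0.0, float("inf")
for i in range(101):
    thresh = i / 100
    # Error: accepting (p > thresh) a wrong prediction,
    # or rejecting (p <= thresh) a correct one.
    errors = sum(1 for p, ok in results if (p > thresh) != ok)
    if errors < best_errors:
        best_thresh, best_errors = thresh, errors

print(best_thresh, best_errors)  # 0.55 0
```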
There are also some more sophisticated methods, for example "top 2 accuracy", where thresh is selected based on having the true class within either the prediction with the highest probability or the one with the second highest probability. You can construct a weighted error function and select the value of thresh that has the net lowest error over the test set.
For example, an error function might be as follows. If neither P(highest) nor P(second highest) = true class, error = 1. If P(second highest) = true class, error = .5. If P(highest) = true class, error = 0. I have never tried this myself, so I am not sure how well this works. When I get some time I will try it on a model with 100 classes and see how well it does. I know in the ImageNet competition they evaluate not just the top accuracy but also the "Top 3" and "Top 5" accuracy. In that competition there are 1000 classes.
I never thought of this before but I assume you could train your model specifically to optimize say the Top 2 accuracy by constructing a loss function used during training that forces the network to minimize this loss.
Upvotes: 0 <issue_comment>username_2: I assume you want a model that uses the [Softmax](https://en.wikipedia.org/wiki/Softmax_function) as the output layer.
Basically, the Softmax will produce a set of probabilities that all sum up to 1. So if you have three classes in your data the Softmax will produce these confidence values by default, even though this is not exactly its main functionality.
The Softmax is commonly used on multiclass data.
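As a sketch, here is a minimal Softmax plus one simple confidence score, the margin between the two largest probabilities, applied to the two examples from the question (just one option, since there is no single "best" formula):

```python
import math

def softmax(logits):
    m = max(logits)  # shift for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def margin_confidence(probs):
    top1, top2 = sorted(probs, reverse=True)[:2]
    return top1 - top2  # big gap -> high confidence

print(sum(softmax([2.0, 1.0, 0.1])))         # ≈ 1.0 (probabilities sum to one)
print(margin_confidence([0.9, 0.05, 0.05]))  # ≈ 0.85 -> confident
print(margin_confidence([0.4, 0.45, 0.15]))  # ≈ 0.05 -> uncertain
```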
Upvotes: 0 <issue_comment>username_3: I believe that there is no "best formula" here, as there are many [Calibration metrics](https://torchmetrics.readthedocs.io/en/v0.8.0/classification/calibration_error.html) out there, depending on what you want to calibrate. [This paper](https://arxiv.org/abs/1706.04599) introduces three metrics for different purposes:
* Expected Calibration Error (ECE): provides a single scalar summary of calibrations.
* Maximum Calibration Error (MCE): use when we wish to minimize the worst-case deviation between confidence and accuracy
* Negative log likelihood (NLL): this is the same as Cross-entropy loss.
There is also a [related paper](https://arxiv.org/abs/1904.01685) about more metrics.
Just like accuracy, F1, and ROC-AUC, the calibration metric should depend on the use case.
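For illustration, here is a simplified ECE in plain Python (equal-width bins over the top-class confidences; the library implementations linked above offer more options):

```python
def ece(confidences, correct, n_bins=10):
    """Expected Calibration Error: bin-weighted mean of |accuracy - confidence|."""
    n = len(confidences)
    total = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == lo)]
        if not idx:
            continue
        acc = sum(1 for i in idx if correct[i]) / len(idx)
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        total += len(idx) / n * abs(acc - avg_conf)
    return total

# Perfectly calibrated toy case: 80% confidence, 80% of predictions correct
print(ece([0.8] * 10, [True] * 8 + [False] * 2))  # ≈ 0.0
```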
Upvotes: 1 |
2020/03/03 | 1,509 | 5,494 | <issue_start>username_0: In the context of a Markov decision process, [this paper](https://arxiv.org/pdf/1902.09725.pdf) says
>
> it is well-known that the optimal policy is invariant to positive affine transformation of the reward function
>
>
>
On the other hand, exercise 3.7 of [Sutton and Barto](http://incompleteideas.net/book/the-book-2nd.html) gives an example of a robot in a maze:
>
> Imagine that you are designing a robot to run a maze. You decide to give it a reward of +1 for escaping from the maze and a reward of zero at all other times. The task seems to break down naturally into episodes—the successive runs through the maze—so you decide to treat it as an episodic task, where the goal is to maximize expected total reward (3.7). After running the learning agent for a while, you find that it is showing no improvement in escaping from the maze. What is going wrong? Have you effectively communicated to the agent what you want it to achieve?
>
>
>
It seems like the robot is not being rewarded for escaping quickly (escaping in 10 seconds gives it just as much reward as escaping in 1000 seconds). One fix seems to be to subtract 1 from each reward, so that each timestep the robot stays in the maze, it accumulates $-1$ in reward, and upon escape it gets zero reward. This seems to change the set of optimal policies (now there are way fewer policies which achieve the best possible return). In other words, a positive affine transformation $r \mapsto 1 \cdot r - 1$ seems to have changed the optimal policy.
How can I reconcile "the optimal policy is invariant to positive affine transformation of the reward function" with the maze example?<issue_comment>username_1: This statement:
>
> (it is well-known that the optimal policy is invariant to positive affine transformation of the reward function).
>
>
>
is, as far as I know, and as you summarise, incorrect\* because simple translations of the reward signal do affect the optimal policy, and the affine transform of a real number $x$ can be given by $f(x) = mx + c$
It *is* well known that optimal policy is unaffected by multiplying all rewards by a positive scaling, e.g. $f(x) = mx$ where $m$ is positive.
It is also worth noting that if an optimal policy is derived from Q values using $\pi(s) = \text{argmax}_a Q(s,a)$, then that policy function *is* invariant to positive affine transformations of *action values* given by $Q(s,a)$. Perhaps that was what the paper authors meant to write, given that they go on to apply normalisation to Q values.
The impact of the mistake is not relevant to the the rest of the paper as far as I can see (caveat: I have not read the whole paper).
---
\* It is possible to make the statement correct even for episodic problems, *if*:
* You model episodic problems with an "absorbing state" and treat it as a continuing problem.
* You apply the same affine transform to the (usually zero reward) absorbing state.
* You still account for the infinite repeats of the absorbing state (requiring a value of discount factor $\gamma$ less than one). In practice this means either granting an additional reward of $\frac{b}{1-\gamma}$ for ending the episode, or not ending a simulation in the terminal state, but running learning algorithms over repeated time steps whilst still in the terminal state, so they can collect the non-zero reward.
Upvotes: 2 <issue_comment>username_2: In framing the problem as an *episodic* reinforcement learning problem, the goal is to find a policy that optimizes $\mathbb{E}[\sum_{t=0}^\tau r(s_t)],$ where $\tau$ is the random time at which the robot leaves the maze. This implicitly assigns a reward of 0 to the out-of-maze state, $s_{terminal}$. If you include this state then the transformation $r\rightarrow 1\cdot r - 1$ does not change the optimal policy.
If we rewrite this episodic objective accounting for $s_{terminal}$ (and a horizon $H$) we get the following objective:
\begin{align*}\mathbb{E}\left[\sum_{t=0}^\tau r(s_t) + \sum_{t=\tau+1}^H r(s_{terminal})\right] &= \mathbb{E}\left[(H-\tau) r(s_{terminal}) + \sum_{t=0}^\tau r(s_t)\right]\\
&= \mathbb{E}\left[(H-\tau) r(s_{terminal}) + r(s_{goal}) + (\tau-1) r(s_{maze}) \right]\end{align*}
Where $s_{goal}$ is the exit state from the maze (the goal) and $s_{maze}$ represents the other states of the maze. In the question, 1 is subtracted from $s_{goal}$ and $s_{maze}$, but not $s_{terminal}$. Thus, this is not a positive affine transformation of the reward function. In effect, this changes the relative value of $s_{terminal}$ from $\min_s r(s)$ to $\max_s r(s)$, and that changes the optimal policy.
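To make this concrete, here is a small numeric sketch (hypothetical horizon $H$ and escape times $\tau$, with $\gamma = 1$) computing the return of an escape policy under the three reward settings:

```python
def episode_return(tau, H, r_maze, r_goal, r_terminal):
    # (tau - 1) steps inside the maze, one step receiving the exit reward,
    # then (H - tau) steps in the absorbing terminal state
    return (tau - 1) * r_maze + r_goal + (H - tau) * r_terminal

H = 100
for tau in (10, 50):  # a fast escape vs. a slow one
    original = episode_return(tau, H, r_maze=0, r_goal=1, r_terminal=0)
    partial = episode_return(tau, H, r_maze=-1, r_goal=0, r_terminal=0)
    affine = episode_return(tau, H, r_maze=-1, r_goal=0, r_terminal=-1)
    print(tau, original, partial, affine)

# original: return is 1 for every tau, so all escaping policies tie
# partial (the question's r -> r - 1, terminal state untouched): varies with tau
# affine (r -> r - 1 applied to the terminal state too): -(H - 1) for every tau
```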
Upvotes: 1 <issue_comment>username_3: **Neil's answer is wrong**; **Dylan gives the correct one**!
Optimal policies are invariant under positive affine transformations of the reward function, and the reason this does not hold in your example is explained in Dylan's answer.
Reference:
From the book **Artificial intelligence a modern approach 4th edition 16.1.3**
"*the scale of utilities is arbitrary: an affine transformation leaves the optimal decision unchanged. We can replace U(s) by U'(s) = m U(s)+ b where m and b are any constants such that m > 0. It is easy to see, from the definition of utilities as discounted sums of rewards, that a similar transformation of rewards will leave the optimal policy unchanged
in an MDP*"
**Edit:** Since the answer has been updated, it is now fully correct :)
Upvotes: -1 |
2020/03/03 | 707 | 2,630 | <issue_start>username_0: I know dropout layers are used in neural networks during training to provide a form of regularisation in an attempt to mitigate over-fitting.
Would you not get an increased fitness if you disabled the dropout layers during evaluation of a network?<issue_comment>username_1: Dropout is a technique that helps to avoid overfitting during training. That is, dropout is usually used for training.
>
> units may change in a way that they fix up the mistakes of the other
> units. This may lead to complex co-adaptations. This, in turn, leads to
> overfitting because these co-adaptations do not generalize to unseen
> data.
>
>
>
If you want to evaluate your model, you should turn off all dropout layers. For example, PyTorch's [`model.eval()`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module.eval) does this work.
Note that in some cases dropout can be used for inference, e.g. to add some stochasticity to the output.
More about dropout:
* [Improving neural networks by preventing co-adaptation of feature detectors](https://arxiv.org/abs/1207.0580)
* [Dropout: A Simple Way to Prevent Neural Networks from Overfitting](http://jmlr.org/papers/v15/srivastava14a.html)
Upvotes: 2 <issue_comment>username_2: [Dropout](http://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf) is usually disabled at test (or evaluation) time. For example, in Keras, dropout is disabled at evaluation time by default, although you can enable it, if you need to (see below). The purpose of dropout is to decorrelate the units (or feature detectors) so that they learn more robust representations of the data (i.e. a form of regularisation).
However, there's also [Monte Carlo (MC) dropout](https://arxiv.org/pdf/1506.02142.pdf), i.e., you train the network with dropout and you also use dropout at test time in order to get stochastic outputs (i.e. you will get different outputs, for different forward passes, given the same inputs). [MC dropout is an approximation of Bayesian inference in deep Gaussian processes](https://arxiv.org/pdf/1506.02142.pdf), which means that MC dropout is roughly equivalent to a [Bayesian neural network](https://arxiv.org/pdf/1505.05424.pdf).
>
> Does the performance of a model increase if dropout is disabled at evaluation time?
>
>
>
Yes, possibly. However, MC dropout provides an uncertainty measure, which can be useful in certain scenarios (e.g. medical scenarios), where a point estimate (i.e. a single prediction or classification) is definitely not appropriate, but you also need a measure of the uncertainty or confidence of the predictions.
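As a rough sketch of the difference in plain Python (a toy one-layer "model", purely illustrative, not the paper's implementation):

```python
import random

random.seed(0)
W = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]  # a toy "trained" layer

def forward(x, p_drop=0.5, mc=False):
    h = [sum(x[i] * W[i][j] for i in range(4)) for j in range(3)]
    if mc:  # inverted dropout left ON at test time, i.e. MC dropout
        h = [v / (1 - p_drop) if random.random() > p_drop else 0.0 for v in h]
    return sum(h)

x = [1.0] * 4
point = forward(x)                                # dropout off: a single point estimate
samples = [forward(x, mc=True) for _ in range(100)]
print(point == forward(x))                        # True: deterministic at evaluation time
print(len(set(samples)) > 1)                      # True: MC passes are stochastic
```

The spread of the MC samples is what gives the uncertainty estimate that a single deterministic evaluation pass cannot provide.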
Upvotes: 2 |
2020/03/03 | 1,013 | 4,088 | <issue_start>username_0: I'm doing machine learning projects. I took a look at many datasets I worked with, mostly there are already famous datasets that everyone uses.
Let's say I decided to make my own dataset. Is it possible that my data are so random that no relationship exists between my inputs and outputs? This is interesting because, if it is possible, then no machine learning model will be able to find an input-output relationship in the data, and will fail to solve the regression or classification problem.
Moreover, is it mathematically possible that some values have absolutely no relationship between them? In other words, there is no function (linear or nonlinear) that can map those inputs to the outputs.
Now, I thought about this problem and concluded that, if this is possible, it will most likely happen in regression, because the target outputs may lie in the same range and the same feature values can correspond to different output values, which will confuse the machine learning model.
Have you ever encountered this or a similar issue?<issue_comment>username_1: Not sure if I can answer the question as a whole, but purely random input/output pairs don't quite have "no relationship" at all. At the very least, for any fixed training set of input/output pairs, you can do an if...then mapping to construct a 1-to-1 function, such that you can classify the training set with 100% accuracy (assuming no duplicate inputs).
In any case, I assume you mean uniform random, because if you have something like gaussian random, you can still learn some latent structure from how the random numbers are generated.
But even if you assume uniform random, and your algorithm is only guessing, your algorithm is technically still operating optimally per the data generating distribution, which basically means its as optimal as it gets.
The only such case that I can imagine which would satisfy your question, would be if you had a separate training/validation set, where the only element of the training input/output is [1,1], but the validation set only has elements of [1,-1], or something along those lines.
From reading your comments, I suspect that your intention with the question was: "Can there be a relationship of data such that no method can learn it?". To the extent that the data-generating distribution exists, then by the [universal approximation theorem of neural networks](https://en.wikipedia.org/wiki/Universal_approximation_theorem), then it is reasonable that you can at least partially learn it.
However, it is important to note that the universal approximation theorem doesn't mean that such a data generating distribution can be *learned* by a neural net; it only means that a network exists that approximates it as closely as you want. More explicitly: there is a setting of weights that gives you results as good as you want, but gradient descent doesn't necessarily find it.
Upvotes: 0 <issue_comment>username_2: Of course, it's possible to define a problem where there is no relationship between input $x$ and output $y$. In general, if the mutual information between $x$ and $y$ is zero (i.e. $x$ and $y$ are statistically independent) then the best prediction you can do is independent of $x$. The task of machine learning is to learn a distribution $q(y|x)$ that is as close as possible to the real data generating distribution $p(y|x)$.
For example, looking at the common cross-entropy loss, we have
$$
\begin{align}
H(p,q) = -\mathbb{E}\_{y,x \sim p}\left[\log q(y|x)\right] & = \mathbb{E}\_{x\sim p}\left[\text{H}(p(y|x)) + \text{D}\_{\text{KL}}(p(y|x)\|q(y|x))\right] \\
& = \text{H}(p(y)) + \mathbb{E}\_{x \sim p}\left[\text{D}\_{\text{KL}}(p(y)\|q(y|x))\right],
\end{align}
$$
where we have used the fact that $p(y|x)=p(y)$ since $y$ and $x$ are independent. From this, we can see that the optimal predicted distribution $q(y|x)$ is equal to $p(y)$, and actually independent of $x$. Also, the best loss you can get is equal to the entropy of $y$.
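A quick numeric sanity check of this conclusion (pure Python; Gibbs' inequality guarantees it in general):

```python
import math

p_y = [0.7, 0.3]   # marginal distribution of y; x carries no information about y

def cross_entropy(q):
    return -sum(p * math.log(q_i) for p, q_i in zip(p_y, q))

entropy_y = cross_entropy(p_y)   # H(p, p) is just the entropy of y
for q in ([0.5, 0.5], [0.9, 0.1], [0.3, 0.7]):
    print(cross_entropy(q) >= entropy_y)   # True: no q beats predicting the marginal p(y)
```

Whatever the input is, predicting the marginal $p(y)$ achieves the lowest possible cross-entropy, which equals the entropy of $y$.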
Upvotes: 3 [selected_answer] |
2020/03/04 | 1,063 | 4,277 | <issue_start>username_0: I generated a bunch of simulation data from a complex physical simulation that spits out patterns. I am trying to apply unsupervised learning to analyze the patterns and ideally classify them into whatever categories the learning technique identifies. Using PCA or manifold techniques such as t-SNE for this problem is rather straightforward, but applying neural networks (autoencoders, specifically) becomes non-trivial, as I am not sure splitting my dataset into test and training data is the right way.
Naively, I was thinking of the following approaches:
1. Train an autoencoder with all the data as training data and train it for a large number of epochs (overfitting is not a problem in this case perse I would think)
2. Keras offers a `model.predict` option which enables me to just construct the encoder section of the autoencoder and obtain the bottleneck values
3. Carry out some data augmentation and split the data as one might into training and test data and carry out the workflow as normal (This approach makes me a little uncomfortable as I am not attempting to generalize a neural network or should I be?)
I would appreciate any guidance on how to proceed or if my understanding of the application of autoencoders is flawed in this context. |
2020/03/04 | 1,589 | 6,177 | <issue_start>username_0: After years of learning, I still can't understand what is considered to be an AI. What are the requirements for an algorithm to constitute Artificial Intelligence? Can you provide pseudocode examples of what constitutes an AI?<issue_comment>username_1: AI is not a simple term. There are different types, ranging from the most simplistic rule-based AI to black-box AI's so complicated it's unreasonable for a human to understand exactly what they're doing.
There's no snippet of pseudocode that, if used in a program, automatically makes it an AI. It's not that black and white. But I can give examples:
Here's a rule-based chess AI that forfeits if it's too far behind, and plays aggressively if it's far enough ahead.
```
aggressive = False
if player.score - my.score > 10:
    forfeit
elif my.score - player.score > 10:
    aggressive = True
for each piece of my.pieces:
    for each square of board.squares:
        if noThreats(square) and aggressive:
            move(piece, square)
            return
```
This is considered an "AI" because it feigns intelligence - appearing to have a true understanding of chess while simply following a set of rules, making it an "Artificial" Intelligence.
Here's another more complicated AI:
```
decisionNet = NeuralNetwork(64 inputs, 2 outputs)
choice = decisionNet(board.squares) // Returns a square holding one of my pieces and a destination square
move(choice)
```
This uses a neural network to make the decision, which could have been trained on a bunch of example games or against itself. Due to this "training phase", humans can't understand precisely what the network is doing without extensive effort, so it gives an even more convincing appearance of understanding chess. But if we want, we could still understand the nuances of this network, and show that it doesn't possess an intelligence; it again only feigns it.
I should mention that virtually any code that has an **if** statement can be considered AI. The examples I provided are just easier to pass off as understanding a very complicated concept (chess), as opposed to, say, verifying a user login. They both have the same fundamentals, it's just one appears more complicated on the surface than the other.
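For concreteness, here is a hypothetical miniature of that second, network-based idea (the names, sizes and random "weights" are all made up; a real bot would use a trained network and a proper move generator):

```python
import random

random.seed(0)
# 64-value board encoding: 1 = my piece, -1 = theirs, 0 = empty
board = [random.choice([-1, 0, 1]) for _ in range(64)]
moves = [("e2", "e4"), ("g1", "f3"), ("d2", "d4")]   # legal moves, found elsewhere
# one (random, untrained) linear scorer per candidate move
weights = {m: [random.uniform(-1, 1) for _ in range(64)] for m in moves}

def score(move):
    return sum(w * v for w, v in zip(weights[move], board))

choice = max(moves, key=score)   # the "network's" decision: the highest-scoring move
print(choice in moves)           # True
```

Training would adjust `weights` so the chosen moves win more games; the decision mechanism itself stays this simple.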
Upvotes: 2 <issue_comment>username_2: Philosophically, my own research has led me to understand AI as any artifact that makes a decision. This is because the etymology of "intelligence" strongly implies "selecting between alternatives", and these meanings are baked in all the way back to the proto-Indo-European.
(Degree of intelligence, or "strength" is merely a measure of utility, typically versus other decision making mechanisms, or, "fitness in an environment", where an environment is any action space.)
Therefore, the most basic form of automated (artificial) intelligence is:
```
if [some condition]
then [some action]
```
It is worth noting that [narrow AI](https://en.wikipedia.org/wiki/AlphaGo) which matches or exceeds human capability, in the popular sense, manifested only recently when we had sufficient processing and memory to derive sufficient utility from statistical decision making algorithms. But [Nimatron](https://en.wikipedia.org/wiki/Nimatron) constitutes perhaps the first functional strong-narrow AI in a modern computing context, and the first automated intelligence are [simple traps and snares](https://ai.stackexchange.com/questions/13865/are-simple-animal-snares-and-traps-a-form-of-automation-of-computation), which have been with us almost as long as we've used tools.
I will leave it to others to break down all the various forms of [modern AI](http://aima.cs.berkeley.edu/).
Upvotes: 2 <issue_comment>username_3: Another question is:
**"How to build intelligent behavior from natural (like) language pseudo code?"**
I am layering a very natural-language-like syntax above Python code and modules.
This will generate and execute code from English NLL pseudo code.
But, pseudo code is just code if the BNF definition works.
You can see the basics below.
I'm adding this NLL to my Iceberg/Sagent intelligent agent AI.
Theoretically, that will be more like what you mean by "AI"
As the first answer says, "AI as any artifact that makes a decision"
I would elaborate by saying, **"An AI performs an action by making a decision"**
My Iceberg/Sagent AI uses a layer above python modules turning them into Sagents (or Intelligent agents).
Sagents process data using a type of decision tree methodology.
This will be programmed via instincts, basically built-in (bootstrap) NLLs.
Answer 1 said, "sufficient processing and memory to derive sufficient utility from statistical decision making"
By sufficient, I would agree there needs to be a large body of
instinctive or bootstrapped performance to get the AI to the point
where it can grow its knowledge in an unattended way.
But just as you should never leave a child unattended,
the same is true for an AI. I've had unattended experiments that have gone a bit bat shit. I'm worried about this aspect of AI, that someone will create a dangerous AI and turn it loose. "AI Robot" could be a real issue at some point in the next few years.
Anyway, below is a simplistic thought starter on NLL pseudo code:
```
Built-Ins
|"execute python", code|
|"python", code|
|"execute c", code|
ai_def( "a human sentence" ):
sentence = |"python","input("Enter a sentence: " |
return( sentence)
ai_def( "remove stopwords from", tokens ):
stopwords = |"python","nltk.corpus.stopwords.words('english')"|
non_stops = |"python","tokens.difference(stopwords)"|
return( non_stops )
ai_def( "keywords from", sentence ):
tokens = |"python","nltk.word_tokenize( sentence)"|
keys = |"remove stopwords from", tokens|
return( keys )
ai_def( "add to sayings"):
keys = |"keywords from",saying=|"a human sentence"||
|"add to sayings dictionary using", keys, saying|
ai_def( "get saying for human" ):
subject, verb = |"get subject verb of", saying=|"a human sentence"||
match = |"find saying matched from", subject, verb|
report( match)
```
Anybody interested?
Or, if something already exists, let me know.
Upvotes: 0 |
2020/03/04 | 473 | 2,048 | <issue_start>username_0: Is there any difference between the convolution operation applied to images and applied to other numerical 2D data?
For example, we have a pretty good CNN model trained on a number of $64 \times 64$ images to detect two classes. On the other hand, we have a number of $64 \times 64$ numerical 2D matrices (which are not considered images), which also have two classes. Can we use the same CNN model to classify the numerical dataset?<issue_comment>username_1: Short answer is **no**. You can't use a model trained for one task to predict on a totally different task. Even if the second task was another image classification task, the CNN would have to be fine tuned for the new data to work.
A couple of things to note...
1) CNNs are good for images because they exploit the spatial structure of image data. There's no guarantee that they'd be good for just any 2-dimensional input.
2) By 2D numerical data I'm assuming you **don't** mean *tabular data*.
Upvotes: 2 <issue_comment>username_2: To offer a bit of theory, CNNs work well for many image tasks because they process spatially local information, without much care for absolute position. Essentially, every layer chops every image up into tiny crops, and does an analysis step on each crop. The simple questions of "is this a line... corner... eye... face?" can be asked equally of every crop.
This means that the network only needs to learn once to detect a feature, rather than separately learn to detect that feature in each possible location it might appear. Therefore we can use smaller networks that train faster and need less data than if we had a fully connected architecture.
To return to the question, you could expect a CNN to work if your data is similarly spatially correlated. Put another way, if finding a pattern around cell x, y means the same sort of thing as the same pattern around cell a, b, then you are probably in luck. On the other hand, if each column represents a meaningfully different concept, then a CNN will be a poor choice of architecture.
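The "learn the feature once, detect it anywhere" property can be seen even with a hand-rolled convolution (an illustrative sketch, not a CNN library):

```python
def conv2d_valid(img, k):
    # naive "valid" cross-correlation: slide the same kernel over every position
    h, w, kh, kw = len(img), len(img[0]), len(k), len(k[0])
    return [[sum(img[i + a][j + b] * k[a][b] for a in range(kh) for b in range(kw))
             for j in range(w - kw + 1)] for i in range(h - kh + 1)]

kernel = [[1, 1], [1, 1]]                 # a tiny "blob detector"
a = [[0] * 6 for _ in range(6)]
b = [[0] * 6 for _ in range(6)]
for i in (1, 2):
    for j in (1, 2):
        a[i][j] = 1                       # a 2x2 pattern near the top-left
        b[i + 2][j + 2] = 1               # the same pattern, shifted
peak = lambda out: max(max(row) for row in out)
print(peak(conv2d_valid(a, kernel)) == peak(conv2d_valid(b, kernel)))  # True: same peak response, new location
```

One kernel responds equally strongly to the pattern wherever it appears; a fully connected layer would have to learn a separate detector for every position.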
Upvotes: 2 [selected_answer] |
2020/03/05 | 1,883 | 7,473 | <issue_start>username_0: I have an MMO game where I have players. I wanted to invent something new to the game, and add player-bots to make the game be single-playable as well. The AI I want to add is simply only for fighting other players or other player-bots that he sees around at his level.
So, I thought of implementing my fighting strategy, exactly how I play, to the bot which is basically using if statements and randoms. For example, when the opponent has low health, and the bot has enough special attack power, he will use this chance and use his special attack power in order to try to knock the opponent down, or if the bot has low health he will eat in time but not too much because there is a point in risking fights, if you eat too much your opponent will do too. Or, for example, if the bot detects the opponent player is eating too much and gains health, he will do the same.
I told this idea of the implementation to one of my friends and he simply responded with: This is not AI, it's simply just a set of conditions, it does not have any heuristic functions.
**For that type of game, what are some ideas to create a real AI to achieve these conditions?**
Basically, the AI should know what to do in order to beat the opponent, based on the opponent's data such as current health, Armour and weapons, and level, if he risks his health or not and so on.
I am a beginner and it really interests me to do it in the right way.<issue_comment>username_1: You can train your bot using reinforcement learning (in particular Q-Learning).
>
> The most important part of RL is the reward function. If we want the agent to do something specific, we must provide rewards in such a way that it will achieve our goals. It is thus very important that the reward function accurately captures the desired behaviour.
>
>
>
So you can construct your own reward function that satisfies your requirements. If the bot does something you want, you reward it with a higher score; otherwise you punish it with a negative reward.
AlphaGo and OpenAI teams used a similar technique to train their models, which could then beat humans in games like [Go](https://en.wikipedia.org/wiki/AlphaGo), [StarCraft 2](https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii) and [Dota 2](https://openai.com/blog/openai-five/)
Also, check out this [Deep Reinforcement Learning Free Preview](https://classroom.udacity.com/courses/ud893-preview) on udacity.
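To make the mechanics concrete, here is a minimal tabular Q-learning sketch on a toy 1-D task (the states, actions and reward values are invented for illustration, not taken from your game):

```python
import random

random.seed(0)

N, GOAL = 6, 5
Q = [[0.0, 0.0] for _ in range(N)]     # Q[state][action]; action 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(2000):                  # training episodes
    s = 0
    while s != GOAL:
        if random.random() < eps:      # epsilon-greedy exploration
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else -0.01          # the reward function encodes the goal
        target = r if s2 == GOAL else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

policy = [int(Q[s][1] >= Q[s][0]) for s in range(N - 1)]
print(policy)   # greedy policy: 1 (move right, toward the goal) in every state
```

In your game, the states would encode things like both fighters' health and special-attack power, the actions would be "attack", "special attack", "eat", etc., and the reward would encode winning the fight; the learning loop itself stays the same.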
Upvotes: 2 <issue_comment>username_2: I would set up a list of goals for your bot. These could be 'maintain a minimum level of health', 'knock out human player', 'block way to location X', etc. This obviously depends on the domain of your MMO.
Then you can use a planner to achieve these goals in the game. You define a set of actions with preconditions and effects, set the current goal, and the planner will work out a list of actions for the bot to achieve the goal. You can easily express your actions (and the domain) in [PDDL](https://en.wikipedia.org/wiki/Planning_Domain_Definition_Language).
Examples for actions would be 'move to location X', 'eat X', 'attack player X'. A precondition of 'attack player X' could be 'health(X) is low', and an effect could be 'health(X) is reduced by Y'. There are different ways of expressing these depending on the planner's capabilities.
The beauty of this is that you don't actually have to explicitly code any behaviour. You describe the domain, and tell the bot what it should achieve, and what capabilities it has. The actual behaviour then emerges out of that description. If the bot only attacks a player if the player has lower health, then observing the player eat (and thus up their health) could result in the bot eating (to push its own health above the player's so that it can attack) — but you have not told the bot directly to do that.
For a starting point, go to <http://education.planning.domains/> for a list of resources.
If you only have a few actions available, it might appear predictable to a human user, but with a variety of goals and actions, this will quickly become more complex and seem more 'intelligent'.
**Update:** Here is a link to a paper, *[Applying Goal-Oriented Action Planning to Games](http://alumni.media.mit.edu/~jorkin/GOAP_draft_AIWisdom2_2003.pdf)*, which describes how this can be applied in a game.
Upvotes: 3 <issue_comment>username_3: [username_2's answer](https://ai.stackexchange.com/a/18435/34038) is great for specific methods and tools to use, but I wanted to pull out a more general principle which was mentioned in a comment.
The distinction your friend is making is not one that would be generally recognised. One of my university lecturers defined AI as something like "an artificial system that exhibits behaviour that resembles how an intelligent being would behave".
If an intelligent being would *always* use the special attack in a particular situation, then an algorithm that always does so in the same situation is behaving intelligently, even though the algorithm behind it is incredibly simple. If you can come up with a complete description of an intelligent player, you have what is called an [expert system](https://en.wikipedia.org/wiki/Expert_system), i.e. a system which captures the decision-making process of a real expert.
Your friend is not even correct that your proposed AI "does not have any heuristic functions". When you write a condition like "if the AI's health is below 50%, it will eat food", you're *approximating* the rule a human would use. You can make the heuristic *more complex*, for example by making the probability of eating rise as current health falls; that might in turn make the heuristic *closer to optimal*.
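For instance, a hard threshold rule and a smoother probabilistic version of the same heuristic might look like this (function and parameter names are illustrative only):

```python
import random

def should_eat_hard(health, max_health):
    return health < 0.5 * max_health            # the fixed 50% threshold rule

_rng = random.Random(0)

def should_eat_soft(health, max_health):
    p = 1.0 - health / max_health               # lower health -> higher chance to eat
    return _rng.random() < p

low = sum(should_eat_soft(20, 100) for _ in range(1000))
high = sum(should_eat_soft(90, 100) for _ in range(1000))
print(low > high)   # True: the soft rule eats far more often when health is low
```

The soft rule is less predictable to a human opponent while still encoding the same underlying judgement.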
You can only really find out how "good" your AI is by putting it into different situations and observing it - sometimes, a simple set of rules gives rise to "emergent behaviours" that look surprisingly intelligent. As you build up more complex rules - i.e. more optimal heuristics - the emergent behaviour will change, and you can tweak it for the desired effect.
Upvotes: 3 <issue_comment>username_4: If you are willing to take an evolutionary approach, you may employ the NEAT algorithm (Neural Evolution of Augmenting Topologies) to train your bot. It will take some work setting it up and all, but it then will gradually improve over time.
Check out the following:
* <http://gekkoquant.com/2016/03/13/evolving-neural-networks-through-augmenting-topologies-part-1-of-4/>
* <https://www.youtube.com/watch?v=WiPZSieT6qs>
* <https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_topologies>
That should be enough to pique your interest and get you started. That last link points to a number of NEAT implementations available in a number of languages.
Upvotes: 0 <issue_comment>username_5: What about GANs or genetic algorithms?
The first idea (GAN-style self-play) is that you create two or more randomly initialised bots that fight each other and keep adjusting their weights so as to beat the other bot. That means those bots keep improving their "fighting performance" for as long as you want, eventually becoming even better than humans.
The second idea (genetic algorithms) is to generate a lot of bots that genetically differ from each other only by a slight mutation. You make them fight, and the best-performing / last-standing bot becomes the parent from which the next "lot of bots" is generated.
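A minimal sketch of that second idea, reduced to a (1+1) hill-climber in plain Python (the fitness function here is a made-up stand-in for "win rate against the current best bot"):

```python
import random

random.seed(0)

def fitness(weights):            # hypothetical stand-in for a simulated fight score
    return -sum((w - 0.5) ** 2 for w in weights)

best = [random.random() for _ in range(4)]   # the initial random bot
start = fitness(best)
for _ in range(500):                          # 500 "generations"
    child = [w + random.gauss(0, 0.1) for w in best]   # clone with a slight mutation
    if fitness(child) > fitness(best):                 # the child beats its parent
        best = child
print(fitness(best) > start)   # True: the surviving bot improved over generations
```

A full genetic algorithm would keep a whole population and add crossover, but the mutate-evaluate-select loop is the core of it.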
Upvotes: 0 |
2020/03/05 | 1,257 | 4,709 | <issue_start>username_0: I have a question about training a neural network for more epochs even after the network has converged without using early stopping criterion.
Consider the MNIST dataset and a LeNet 300-100-10 dense fully-connected architecture, where I have 2 hidden layers having 300 and 100 neurons and an output layer having 10 neurons.
Now, usually, this network takes about 9-11 epochs to train and have a validation accuracy of around 98%.
What happens if I train this network for 25 or 30 epochs, without using an early stopping criterion?<issue_comment>username_1: Training a neural network for more epochs than needed, without an early stopping criterion, leads to overfitting, where your model's ability to generalize decreases.
Upvotes: 2 <issue_comment>username_2: Running for "too many" epochs can indeed lead to overfitting. You should look at the validation loss. If on AVERAGE it continues to decrease, then you are not yet overfitting.
You may be tempted to run more epochs in hopes your loss will decrease, but unless you adjust your learning rate dynamically, at some point you won't get any improvement.
If you use KERAS it has a useful callback ReduceLROnPlateau. Documentation is at <https://keras.io/callbacks/>
This allows you to monitor a metric (typically validation loss) and to reduce the learning rate by a user-defined factor (parameter `factor`) if the metric you are monitoring fails to improve after a certain number of consecutive epochs (parameter `patience`). You can think of the training process as travelling down a valley in N space (N being the number of trainable parameters). As you descend towards a minimum, the valley gets narrower. If you do not lower the learning rate, you will reach a point where you cannot descend any further. Now, you could use a very small learning rate to begin with, but then you will have to train for a lot more epochs. One problem with adjusting the learning rate just on the validation loss is that in the early training epochs validation loss often does not track with training accuracy, and this could cause the learning rate to be decreased prematurely.
I wrote a custom callback which initially monitors training loss and adjusts the learning rate based on that metric. Once the training accuracy reaches 95%, it switches to monitoring validation loss and adjusts the learning rate based on that. It saves the model weights for the lowest validation loss in the variable val.best\_weights. After training, load these weights into your model to make predictions. Code is below if you are interested. When you compile your model, just add 'val' to the callback list.
```
# Note: this callback assumes a compiled global `model`, and ANSI colour strings
# Cyellow, Cgreen, Cend defined elsewhere in the author's code.
class val(tf.keras.callbacks.Callback):
    # functions in this class adjust the learning rate
    lowest_loss = np.inf
    lowest_trloss = np.inf
    best_weights = model.get_weights()
    lr = float(tf.keras.backend.get_value(model.optimizer.lr))
    epoch = 0
    highest_acc = 0

    def __init__(self):
        super(val, self).__init__()
        self.lowest_loss = np.inf
        self.lowest_trloss = np.inf
        self.best_weights = model.get_weights()
        self.lr = float(tf.keras.backend.get_value(model.optimizer.lr))
        self.epoch = 0
        self.highest_acc = 0

    def on_epoch_end(self, epoch, logs=None):
        val.lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))
        val.epoch = val.epoch + 1
        v_loss = logs.get('val_loss')
        v_acc = logs.get('accuracy')
        loss = logs.get('loss')
        if loss < val.lowest_trloss:
            val.lowest_trloss = loss
        elif v_acc <= .95:
            # early phase: adjust the learning rate based on the training loss
            lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))
            ratio = val.lowest_trloss / loss  # add a factor to lr reduction
            new_lr = lr * .7 * ratio
            tf.keras.backend.set_value(self.model.optimizer.lr, new_lr)
            msg = '{0}\n current training loss {1:7.5f} is above lowest training loss of {2:7.5f}, reducing lr to {3:11.9f}{4}'
            print(msg.format(Cyellow, loss, val.lowest_trloss, new_lr, Cend))
        if val.lowest_loss > v_loss:
            msg = '{0}\n validation loss improved, saving weights with validation loss= {1:7.4f}\n{2}'
            print(msg.format(Cgreen, v_loss, Cend))
            val.lowest_loss = v_loss
            val.best_weights = model.get_weights()
        else:
            if v_acc > .95 and val.lowest_loss < v_loss:
                # later phase: adjust the learning rate based on the validation loss
                lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))
                ratio = val.lowest_loss / v_loss  # add a factor to lr reduction
                new_lr = lr * .7 * ratio
                tf.keras.backend.set_value(self.model.optimizer.lr, new_lr)
                msg = '{0}\n current loss {1:7.4f} exceeds lowest loss of {2:7.4f}, reducing lr to {3:11.9f}{4}'
                print(msg.format(Cyellow, v_loss, val.lowest_loss, new_lr, Cend))
```
Upvotes: 2 [selected_answer] |
2020/03/05 | 271 | 1,105 | <issue_start>username_0: I'm currently trying to predict 1 output value with 52 input values. The problem is that I only have around 100 rows of data that I can use.
Will I get more accurate results when I use a small architecture than when I use multiple layers with a higher amount of neurons?
Right now, I use 1 hidden layer with 1 neuron, because of the fact that I need to solve (in my opinion) a basic regression problem.<issue_comment>username_1: It's harder to overfit it certainly!
I mean, practically speaking, there have to be some assumptions about the generative model of your data, either explicit or implicit.
I would probably try a 1-2 layer network first (maybe your data is even linearly separable, if you're lucky).
Upvotes: 0 <issue_comment>username_2: I'm not aware of a direct way for finding the best NN architecture for a given task, but the recommended way, as far as I know, is to devise a network that can overfit the training data, and then apply regularization on top of it.
That way, you can be almost sure you're not underfitting/underperforming due to network capacity.
Upvotes: 1 |
2020/03/07 | 1,328 | 5,296 | <issue_start>username_0: I am trying to predict Forex time series. The nature of the market is that 80% of the time the price can not be predicted, but in 20% of the time it can be. For example, if the price drops down very deep, there is 99% probability that there will be a recovery process, and this is what I want to predict.
So, how do I train a feed-forward network in such a way that it only predicts those cases that have 99% certainty of taking place and, for the rest of the cases, outputs an "unpredictable" status?
Imagine that my data set has 24 hours of continuous price data as input (as 1 minute samples), and then as output I want the network to predict 1 hour of future price data. The only restriction I need to implement is that if the network is not "sure" that the price is predictable, it would output 0s. So, how do I implement this kind of safety in the predictions the network is outputting?
It seems that my problem is similar to Google Compose, where it predicts the next word as you are typing, for example, if you type "thank you", it would add " very much" and this would be like 95% correct. I want the same, but it is just that my problem has too much complexity. Google uses RNNs, so maybe I should try a deep network of many layers of RNNs?<issue_comment>username_1: Tbh I think stock prices are essentially impossible to predict as you're not taking into account the data from outside the stock market.
I'd argue any successful model would need to be trained on news, consumer sentiment, etc etc.
The only ones which maybe work are HFT.
Upvotes: 0 <issue_comment>username_2: Things like this are a really hot topic in research right now, and it's very difficult to get high accuracy on a chaotic system like the stock market. That being said, I would probably recommend preprocessing your data rather than having your primary neural network decide what to accept and what not to.
For example, in your specific case, you could model a bubble bursting as perhaps a negative exponential drop-off or something of the sort. This could include machine learning too. You could gather historical drops in stock market data, and use some sort of regression (Bayesian would probably work well) to estimate the best function to use as an indicator to whether a steep drop has occurred. If so, then use your neural network specifically to classify the fate of the stock. I would think you would have more success following a specialised route such as this rather than trying to train a network on general trends in the market.
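As a toy sketch of that drop-off idea (hypothetical prices roughly following $p(t) = p_0 e^{-bt}$, fitted by log-linear least squares; the numbers are made up, not real market data):

```python
import math

# Hypothetical post-peak prices, generated roughly as 100 * exp(-0.5 * t)
prices = [100.0, 60.65, 36.79, 22.31, 13.53]
ts = list(range(len(prices)))
logs = [math.log(p) for p in prices]       # log-prices are ~linear in t

# Ordinary least-squares line through (t, log p)
n = len(ts)
mt, ml = sum(ts) / n, sum(logs) / n
slope = sum((t - mt) * (l - ml) for t, l in zip(ts, logs)) \
        / sum((t - mt) ** 2 for t in ts)

decay = -slope                             # estimated b, close to 0.5
p0 = math.exp(ml - slope * mt)             # estimated starting price, close to 100
```

A large fitted `decay` on recent data could then serve as the "steep drop has occurred" indicator that gates the downstream classifier.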
In terms of the structure of your neural network, you may want to consider a convolutional neural network (CNN) instead of a recurrent neural network (RNN). RNNs assume the current point in your time-series depends on all previous points, from the beginning of your data. I wouldn't think this would hold true in general for the stock market. The filters a CNN learns are suited to learning to extract certain features and the CNN will apply the filters to specific portions of the data, in the way it considers optimal. They are both nonlinear models, but the CNN will be less computationally costly to train. You could also try a gradient-boosting regression approach instead of a neural network. That being said, something like an LSTM RNN (long short-term memory) won't necessarily be bad - just my two cents.
Upvotes: 2 <issue_comment>username_3: I do not know how you will apply your data to these techniques, but here is a brief overview of techniques used in time series prediction:
* Extended [Kalman Filtering](https://en.wikipedia.org/wiki/Kalman_filter): This is a kind of control system approach and is generally used to control trajectory of missiles. Here is a [question](https://ai.stackexchange.com/questions/6091/what-is-the-significance-of-this-stanford-university-financial-market-time-seri) (based on an EKF paper) in our stack on this topic. You can check the paper for more details.
* [Echo State Networks](https://en.wikipedia.org/wiki/Echo_state_network): This is a kind of ML/NN approach with based on the idea of [Liquid State Machines](https://en.wikipedia.org/wiki/Liquid_state_machine) used in Neuroscience. [Resources](https://mantas.info/code/simple_esn/) on the same.
* RNNs/LSTMs/GRUs - Probably the most popular approach to predicting any time series data when you don't want to delve into the statistical approaches.
* [ARMA/ARIMA models](https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average): Entirely statistical approach with lots of maths, but libraries are available with implementation already done.
* Deep Belief Networks: People have also tried forecast time series using fixed number of previous states input to a DBN. It's somewhat of a popular [paper](https://www.sciencedirect.com/science/article/abs/pii/S0925231213007388) so I decided to put it here.
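To get a feel for the autoregressive family, here is a minimal AR(1) fit by least squares, i.e. $y_t = a\,y_{t-1} + b$ (toy data, not one of the library implementations mentioned above):

```python
def fit_ar1(series):
    # least-squares fit of y[t] = a * y[t-1] + b
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

series = [1.0, 1.5, 1.75, 1.875, 1.9375]   # generated by y[t] = 0.5*y[t-1] + 1
a, b = fit_ar1(series)
forecast = a * series[-1] + b              # one-step-ahead prediction
```

Real ARMA/ARIMA adds moving-average terms and differencing on top of this, which is where the library implementations earn their keep.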
Finally you can look up this [overview](https://arxiv.org/pdf/1302.6613.pdf) on time series modelling approaches. Reinforcement Learning is also used for time series prediction, but from what I have heard it is not very easy to do so. Here is a Google Scholar search [result](https://scholar.google.co.in/scholar?q=reinforcement%20learning%20for%20time%20series%20scholarly%20articles&hl=en&as_sdt=0&as_vis=1&oi=scholart).
Upvotes: 2 |
2020/03/10 | 2,463 | 9,801 | <issue_start>username_0: I have collected a set of pictures of people with a text explaining the characteristics of the person on the picture, for example, "Big nose" or "Curly hair".
I want to train some type of model that takes in any picture and returns a description of the picture in terms of characteristics.
However, I have a hard time figuring out how to do this. It is not like labeling "dog" or "apple", where I can create a set of training data and then evaluate its performance; here I can not. If I could, I would probably have used a CNN, and probably also VGG-16 to help me out.
I only have two ML courses under my belt and have never really encountered a problem like this before. Can someone help me to get in the right direction?
As of now, I have a data set of 13000 labeled images, and I am very confident it is labeled well. I do not know of any pre-trained datasets that could be of help in this instance, but if you know of one it might help.
Worth noting is that every label is or should at least be unique. If for example there exist two pictures with the same label of "Big nose" it is purely coincidental.<issue_comment>username_1: From what you wrote, the problem sounds a bit like face recognition, where a camera takes a picture of your face and compares it with a bunch of pictures in a database, for example, one for each employee if its at a company's main gate. If you look "similar" to one of the pictures in the database, the door opens and your ID/Name is displayed on a terminal.
This kind of system generates an encoding for each picture and evaluates the distance between your encoded picture and the encoding of each picture in the database. If this is at most some minimum value, it's considered a match.
So, what you could do is figure out some way to encode your pictures (say sum the pixel values for a very simple example, ideally you would use some sort of vector here because distances make sense with vectors) and store this encoding together with the label of the picture.
Once your database is complete (i.e. you have a bunch of pictures saved as a pair of [encoding, label]), you can "scan" each new picture, calculate its encoding (using the same algorithm that calculated your database encodings) and find the one entry in your database which minimizes the "encoding-distance".
If this sounds like a way to solve your problem, you need to come up with a proper encoding (like "run my images through a CNN and save the output of my last fully connected layer") and apply this to all the images you want to use as "training data", before "testing" it on some of the leftover images.
Upvotes: 0 <issue_comment>username_2: I would do as suggested in the comments. First select an encoding scheme. I think what is called a difference hash would work well for this application. Code for that is shown below. Now take your data set of images, run them through the encoder, and save the result in a database. The database would contain the "labeling" text and the encoder result. Now for a new image you are trying to label, input the image into the encoder. Take the encoder result and compare it to the encoded values in the database. Search through the encoded values in the database and find the closest match. You can then use a "threshold" value to determine if you want to give a specific label for the image, or, if the distance is above the threshold, declare there is no matching label. You can determine the best "threshold" value by running your data set images with the known labels, iterating the threshold level, and selecting the threshold with the least errors. I would use something like a 56 or a 128 length hash.
```
import cv2
import os

# f_path is the full path to the image file, hash_length is an integer specifying the length of the hash
def get_hash(f_path, hash_length):
    r_str = ''
    img = cv2.imread(f_path, 0)  # read image as gray scale image
    img = cv2.resize(img, (hash_length + 1, 1), interpolation=cv2.INTER_AREA)
    # now compare adjacent horizontal values in a row: if pixel to the left > pixel to the right, result=1 else 0
    for col in range(0, hash_length):
        if img[0][col] > img[0][col + 1]:
            value = str(1)
        else:
            value = str(0)
        r_str = r_str + value
    number = 0
    power_of_two = 1
    for char in r_str:
        number = number + int(char) * power_of_two
        power_of_two = 2 * power_of_two
    return (r_str, number)

# example on an image of a bird
f_path = r'c:\Temp\birds\test\robin\1.jpg'
hash = get_hash(f_path, 16)  # 16 length hash on a bird image
print(' hash string ', hash[0], ' hash number ', hash[1])
> results is
hash string 1111111100000000 hash number 255
```
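The "search for the closest match" step the answer describes can be done with a Hamming distance over the hash strings. A sketch with a hypothetical two-entry database (the hashes and labels here are made up for illustration):

```python
def hamming(h1, h2):
    # number of bit positions where the two hash strings differ
    return sum(c1 != c2 for c1, c2 in zip(h1, h2))

# hypothetical database: hash string -> label text
db = {'1111111100000000': 'robin', '1010101010101010': 'sparrow'}
query = '1111111100000001'   # one bit away from the robin hash

label, dist = min(((lbl, hamming(query, h)) for h, lbl in db.items()),
                  key=lambda pair: pair[1])

threshold = 4                # tuned on the labeled set, as the answer suggests
result = label if dist <= threshold else 'no matching label'
```

Here `result` comes out as `'robin'` at distance 1; a query further than `threshold` bits from everything would be declared unlabeled.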
Upvotes: 0 <issue_comment>username_3: The term you are looking for is **multi-label classification**, i.e. where you are making more than one classification on each image (one for each label). Most examples you'll find online are in the NLP domain but it is just as easy with CNNs since it's essentially defined by the structure of the output layer and the loss function used. It's not as complicated as it might sound if you are already familiar with CNNs.
The output layer of a neural network (for 3 or more classes) has as many units as there are targets. The network learns to associate each of those units with a corresponding class. A **multi-class classifier** normally applies a softmax activation function to the raw unit output, which yields a probability vector. To get the final classification, the `max()` of the probability vector is taken (the most probable class). The output would look like this:
```
Cat Bird Plane Superman Ball Dog
Raw output: -1 2 3 6 -1 -1
Softmax: 0.001 0.017 0.046 0.934 0.001 0.001
Classification: 0 0 0 1 0 0
```
**Multi-label classification** typically uses a sigmoid activation function since the probabilities of a label occuring can be treated independently. The classification is then determined by the probability (>=0.5 for True). For your problem, this output could look like:
```
Big nose Long hair Curly hair Superman Big ears Sharp Jawline
Raw output: -1 -2 3 6 -1 10
Sigmoid: 0.269 0.119 0.953 0.998 0.269 1.000
Classification: 0 0 1 1 0 1
```
The **binary crossentropy** loss function is normally used for a multi-label classifier since an *n*-label problem is essentially splitting up a multi-class classification problem into *n* binary classification problems.
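The numbers in the two tables can be reproduced with plain Python (values rounded to three decimals):

```python
import math

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# multi-class: one softmax over all raw outputs, the argmax wins
probs_mc = [round(p, 3) for p in softmax([-1, 2, 3, 6, -1, -1])]
# -> [0.001, 0.017, 0.046, 0.934, 0.001, 0.001]

# multi-label: independent sigmoids, each thresholded at 0.5
raw_ml = [-1, -2, 3, 6, -1, 10]
probs_ml = [round(sigmoid(x), 3) for x in raw_ml]
# -> [0.269, 0.119, 0.953, 0.998, 0.269, 1.0]
labels = [int(sigmoid(x) >= 0.5) for x in raw_ml]
# -> [0, 0, 1, 1, 0, 1]
```

Note how the softmax probabilities are coupled (they must sum to 1), while each sigmoid output is judged on its own - which is exactly why sigmoid fits the multi-label case.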
Since all you need to do to get from a multi-class classifier to a multi-label classifier is change the output layer, it's very easy to do with pre-trained networks. If you get the pre-trained model from Keras, it's as simple as including `include_top=False` when downloading the model and then adding the correct output layer.
With 13000 images, I would recommend using Keras' `ImageDataGenerator` class with the `flow_from_dataframe` method. This allows you to use a simple pandas dataframe to label and feed in all your images. The dataframe would look like this:
```
Filename Big nose Long hair Curly hair Superman Big ears Sharp Jawline
0001.JPG 0 0 1 1 0 1
0002.JPG 1 0 1 0 1 1
. . . . . . .
```
`flow_from_dataframe`'s `class_mode` parameter can be set to `raw` or `multi_output` along with `x_col` to `'Filename'` and `y_col` to `['Big nose', 'Long hair', 'Curly hair', 'Superman', 'Big ears', 'Sharp Jawline']` (in this example). Check out the [documentation](https://keras.io/preprocessing/image/#imagedatagenerator-class) for more details.
The amount of data you need **for each label** depends on many factors and is essentially impossible to know without trying. 13000 sounds like a good start but it also depends on how many labels you have and how evenly distributed they are between the labels. A decent guide (one of many) on how to set up a multi-label classifier and how to implement it with Keras can be found [here](https://towardsdatascience.com/multi-label-image-classification-with-neural-network-keras-ddc1ab1afede). It also covers imbalances on label frequency and is well worth a read. I'd *highly* recommend that you become as intimately familiar with your dataset as possible before you start tuning your neural network architecture.
Upvotes: 3 [selected_answer]<issue_comment>username_4: You can use image captioning. Look at the article [Captioning Images with CNN and RNN, using PyTorch](https://link.medium.com/qx7ZMHsWK4). The idea is very profound. The model encodes the image to high dimensional space and then passes it through LSTM cells and LSTM cells produce linguistic output.
See also [Image captioning with visual attention](https://www.tensorflow.org/tutorials/text/image_captioning).
Upvotes: 1 <issue_comment>username_5: You can try image captioning. You can train a CNN model for image, and then, on top of that, provide the model embedding to another LSTM model to learn the encoded characteristics. You can directly use the pre-trained VGG-16 model and use the second last layer to create your image embeddings.
[Show and Tell: A Neural Image Caption Generator](https://arxiv.org/abs/1411.4555) is a really nice paper to start with. There is an implementation of it in TensorFlow: <https://www.tensorflow.org/tutorials/text/image_captioning>. The paper focuses on generating caption, but you can provide your 'characteristics' to LSTM, so that it can learn it for each image.
Upvotes: 2 |
2020/03/10 | 2,392 | 9,309 | <issue_start>username_0: Consider the following simple neural network with only one neuron.
* The input is $x_1$ and $x_2$, where $-250 < x_1 < 250$ and $-250 < x_2 < 250$
* The weights of the only neuron are $w_1$ and $w_2$
* The output of the neuron is given by $o = \sigma(x_1w_1 + x_2w_2 + b)$, where $\sigma$ is the ReLU activation function and $b$ the bias.
* Thus the cost should be $(o - y)^2$, where $y$ is the target.
When using the sigmoid activation function, the target for each point is usually $0$ or $1$.
But I'm a little confused about which target to use when the activation function is the ReLU, given that it can output numbers greater than 1.
2020/03/10 | 1,262 | 4,542 | <issue_start>username_0: I am training a classifier to identify 24 hand signs of American Sign Language. I created a custom dataset by recording videos in different backgrounds for each of the signs and later converted the videos into images. Each sign has 3000 images, that were randomly selected to generate a training dataset with 2400 images/sign and validation dataset with the remaining 600 images/sign.
* Total number of images in entire dataset: 3000 \* 24 = 72000
* Training dataset: 2400 \* 24 = 57600
* Validation dataset: 600 \* 24 = 14400
* Image dimension (Width x Height): 1280 x 720 pixels
### The CNN architecture used for training
```
model = Sequential([
Conv2D(32, (3, 3), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(pool_size=(2,2)),
Dropout(0.25),
Conv2D(32, (3, 3), activation='relu'),
MaxPooling2D(pool_size=(2,2)),
Dropout(0.25),
Conv2D(64, (3, 3), activation='relu'),
MaxPooling2D(pool_size=(2,2)),
Dropout(0.25),
Conv2D(64, (3, 3), activation='relu'),
MaxPooling2D(pool_size=(2,2)),
Dropout(0.25),
Flatten(),
Dense(128, activation='relu'),
Dropout(0.25),
Dense(NUM_CLASSES, activation='softmax')
])
```
### Training parameters:
```
IMG_HEIGHT = 224
IMG_WIDTH = 224
BATCH_SIZE = 32
NUM_CLASSES = 24
train_datagen = ImageDataGenerator(rescale = 1./255,
width_shift_range=0.1,
height_shift_range=0.1,
zoom_range=0.1,
fill_mode='constant')
EPOCHS = 20
STEPS_PER_EPOCH = TRAIN_TOTAL // BATCH_SIZE
VALIDATION_STEPS = VALIDATION_TOTAL // BATCH_SIZE
callbacks_list = [
tf.keras.callbacks.EarlyStopping(monitor = 'accuracy',
min_delta = 0.005,
patience = 3),
tf.keras.callbacks.ModelCheckpoint(filepath = 'D:\\Models\\HSRS_ThesisDataset_5Mar_1330.h5',
monitor= 'val_loss',
save_best_only = True)
]
optimizer = 'adam'
```
The model accuracy and model loss graph is shown in the figure below:
[](https://i.stack.imgur.com/tQdab.png)
### The results obtained at the end of the training are
* Train acc: 0.8000121
* Val acc: 0.914441
I read [this article explaining why the validation loss is lower than the training loss](https://www.pyimagesearch.com/2019/10/14/why-is-my-validation-loss-lower-than-my-training-loss/). I want to know:
1. Is it because of the smaller dataset and random shuffling of the images?
2. Is there any way to improve the condition without changing the dataset?
3. Will this have a very detrimental effect on the model performance in real test cases? If not, can I just focus on improving the training accuracy of the overall model?<issue_comment>username_1: 1. Assuming you pass through the entire validation dataset, this can't be due to shuffling since you still compute the loss/accuracy over the entire dataset, so order does not really matter here. It is more likely that you have a ***significantly smaller or less representative validation dataset***, e.g., distribution of the validation dataset can be skewed towards classes where your model performs better.
2. What do you mean exactly by improving the situation? Having a better validation accuracy is not necessarily bad. In any case, if you decrease the effect of regularization, e.g., lowering weight decay, training accuracy might go up but your model might generalize worse, i.e., you might get a lower validation accuracy.
3. No, the goal of training is never to maximize training accuracy. You can trivially do so by just memorizing the training dataset. In short, the goal of training is to get good generalization and as long as you get a satisfactory validation accuracy, it is likely that this has happened to some degree (assuming you have a ***good*** validation dataset of course).
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
Validation dataset: 600 * 24 = 14400
```
This means that you are augmenting the validation set, right? As an experiment you can do that, and it might push validation accuracy above training accuracy.
The idea of augmentation is only valid for the training set, and you should not change the validation set or test set.
You can try without the augmentation in the validation set and see the result.
Upvotes: 0 |
2020/03/11 | 683 | 2,894 | <issue_start>username_0: I don't know if this is the right place to ask this question. If it is not, please tell me and I'll remove it.
I've just started to learn CNN and I'm trying to understand what they do and how they do it.
I have written some sentences to describe it:
1. Let's say that CNN is a mathematical function that adjusts its values based on the result obtained and the desired output.
* The values are the filters of the convolutional layers (in other
types of networks would be the weights).
* To adjust these values there is a backpropagation method as in all networks.
2. The result obtained is an image of the same size as the original.
3. In this image you can see the delimited area.
4. The goal of the network is to learn these filters.
5. The overfitting may be because the network has learned where the pixels you are looking for are located.
6. The filters have as input a pixel of the image and return a 1 or a 0.
My doubt is:
In your own words, have I forgotten something?
**NOTE**:
This is only one question. The six points above are affirmative sentences, not questions.
There is only one question mark, and it is on my question.
I have to clarify this because someone has closed my previous question because she/he thinks there were more than one question.
2020/03/11 | 922 | 3,826 | <issue_start>username_0: Currently I'm working on a continuous control problem using DDPG as my RL algorithm. All in all, things are working out quite well, but the algorithm does not show any tendencies to eliminate the steady state control deviation towards the far end of the episode.
In the graphs you can see what happens:
In the first graph we see the setpoint in yellow and the controlled continuous parameter in purple. In the beginning, the algorithm brings the controlled parameter close to the setpoint fast, but then it ceases its further efforts and does not try to eliminate the remaining steady state error. This control deviation even increases over time.
[](https://i.stack.imgur.com/Dr3zC.png)
In the second graph, the actual reward is depicted in yellow. (Just ignore the other colors.) I use the normalized control deviation to calculate the reward: $r = \frac{1}{1+\frac{|dev|}{k}}$.
This gives me a reward that lies within the interval $]0, 1]$ and has a value of $0.5$ when the deviation $dev$ equals the parameter $k$. (That is, the parameter $k$ roughly indicates when *half of the work is done*.)
This reward function is relatively steep for the last fraction of the deviation from $k$ to $0$. So it would definitely be worth the effort for the agent to eliminate the residual deviation.
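As a quick sketch of the described reward shape (1 at zero deviation, 0.5 when the deviation equals $k$; I am assuming the intended form is $r = 1/(1+|dev|/k)$, which matches the stated properties):

```python
def reward(dev, k=1.0):
    """Reward with the stated properties: 1.0 at zero deviation,
    0.5 when |dev| == k, approaching 0 for large deviations."""
    return 1.0 / (1.0 + abs(dev) / k)

# A reward stuck at ~0.85 corresponds to a residual deviation
# of roughly 0.18 * k, i.e. a clear steady-state error remains.
print(reward(0.0))         # 1.0
print(reward(1.0, k=1.0))  # 0.5
```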
However, it looks like the agent is happy with the existing state and the control deviation never gets eliminated, even though the reward is stuck at ~0.85 instead of the maximum achievable 1.
[](https://i.stack.imgur.com/bJPFA.png)
Any ideas how to push the agent into some more effort to eliminate the steady state error?
(A PID controller would exactly do this by using its I-term. How can I translate this to the RL-algo?)
The state presented to the algorithm consists of the current deviation and the speed of change (derivative) of the controlled value. The derivative is not included in the calculation of the reward function, but in the end we want a flat line with no steady-state deviation, of course.
Any ideas welcome!
Regards,
Felix
2020/03/11 | 513 | 1,858 | <issue_start>I'm trying to understand policy iteration in the context of RL. I read an article presenting it and, at some point, a pseudo-code of the algorithm is given: [](https://i.stack.imgur.com/YcKxP.png)
What I can't understand is this line:
[](https://i.stack.imgur.com/IRJAR.png)
From what I understand, policy iteration is a model-free algorithm, which means that it doesn't need to know the environment's dynamics. But, in this line, we need $p(s',r \mid s, \pi(s))$ (which, in my understanding, is the transition function of the MDP that gives us the probability of landing in the state $s'$ given the previous state $s$ and the action taken) to compute $V(s)$. So I don't understand how we can compute $V(s)$ with the quantity $p(s',r \mid s, \pi(s))$, since it is a parameter of the environment.<issue_comment>username_1: Everything you say in your post is correct, apart from the wrong assumption that policy iteration is model-free. PI is a model-based algorithm, for the reasons you're mentioning.
See [my answer](https://ai.stackexchange.com/a/8820/2444) to the question [What's the difference between model-free and model-based reinforcement learning?](https://ai.stackexchange.com/q/4456/2444).
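To make the model-dependence concrete, here is a minimal sketch of the evaluation step on a toy MDP (my own illustration; the transition table `p` is exactly the environment model that a model-free method would not have access to):

```python
import numpy as np

# Toy MDP: 2 states, deterministic policy pi, known model p(s', r | s, pi(s)).
# p[s] is a list of (prob, next_state, reward) tuples for the action pi(s).
p = {
    0: [(0.8, 1, 1.0), (0.2, 0, 0.0)],
    1: [(1.0, 1, 0.0)],
}
gamma = 0.9
V = np.zeros(2)

# Iterative policy evaluation: repeated sweeps of the Bellman expectation update.
for _ in range(1000):
    for s in p:
        V[s] = sum(prob * (r + gamma * V[s2]) for prob, s2, r in p[s])
```

Without the probabilities in `p`, the sum over successor states cannot even be written down, which is why the algorithm is model-based.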
Upvotes: 3 [selected_answer]<issue_comment>username_2: The Policy Iteration algorithm (given in the question) is model-based.
However, note that there exist methods that fall into the *Generalized Policy Iteration* category, such as SARSA, which are model-free.
>
> From what I understand, policy iteration is a model-free algorithm
>
>
>
Maybe this was referring to generalized policy iteration methods.
---
(Answer based on comments from @<NAME>.)
Upvotes: 0 |
2020/03/11 | 4,188 | 16,304 | <issue_start>username_0: **Background:** It's well-known that neural networks offer great performance across a large number of tasks, and this is largely a consequence of [their universal approximation capabilities](https://ai.stackexchange.com/questions/13317/where-can-i-find-the-proof-of-the-universal-approximation-theorem/25917#25917). However, in this post I'm curious about the *opposite*:
**Question:** Namely, what are some *well-known* cases, problems or real-world applications where neural networks don't do very well?
---
**Specification:**
I'm looking for specific *regression* tasks (with accessible data-sets) where neural networks are not the state-of-the-art. The regression task should be "naturally suitable", so no sequential or time-dependent data (in which case an RNN or reservoir computer would be more natural).<issue_comment>username_1: Here's a snippet from [an article by <NAME>](https://medium.com/@GaryMarcus/the-deepest-problem-with-deep-learning-91c5991f5695?_branch_match_id=687598797934356895)
>
> In particular, they showed that standard deep learning nets often fall
> apart when confronted with common stimuli rotated in three dimensional
> space into unusual positions, like the top right corner of this
> figure, in which a schoolbus is mistaken for a snowplow:
>
>
> [](https://i.stack.imgur.com/C4Bch.jpg)
>
>
> […]
>
>
> Mistaking an overturned schoolbus is
> not just a mistake, it’s a revealing mistake: it that shows not only
> that deep learning systems can get confused, but they are challenged
> in making a fundamental distinction known to all philosophers: the
> distinction between features that are merely contingent associations
> (snow is often present when there are snowplows, but not necessary)
> and features that are inherent properties of the category itself
> (snowplows ought other things being equal have plows, unless eg they
> have been dismantled). We’d already seen similar examples with
> contrived stimuli, like <NAME>’s carefully designed, 3-d
> printed foam covered dimensional baseball that was mistaken for an
> espresso
>
>
> [](https://i.stack.imgur.com/tdE0h.jpg)
>
>
> Alcorn’s results — some from real photos from the natural
> world — should have pushed worry about this sort of anomaly to the top
> of the stack.
>
>
>
Please note that the opinions of the author are his alone, and I do not necessarily share all of them.
Edit: Some more fun stuff
1) DeepMind's neural networks that could play *Breakout* and *Starcraft* saw dramatic dips in performance. In the former, performance collapsed when the paddle was moved up by a few pixels.
[See: General Game Playing With Schema Networks](https://www.vicarious.com/posts/general-game-playing-with-schema-networks)
In the latter, the network performed well with one race of characters, but not on a different map or with different characters.
[Source](https://stream.nyu.edu/channel/CS%2BFaculty%2BSearch%2B2019/111128891)
2)
>
> AlphaZero searches just 80,000 positions per second in chess and
> 40,000 in shogi, compared to 70 million for Stockfish and 35 million
> for elmo.
>
>
>
What the team at Deepmind did was to build a very good search algorithm. A search algorithm that includes the capability to remember facets of previous searches to apply better results to new searches. This is very clever; it undoubtedly has immense value in many areas, but it cannot be considered general intelligence.
[See: AlphaZero: How Intuition Demolished Logic (Medium)](https://medium.com/intuitionmachine/alphazero-how-intuition-demolished-logic-66a4841e6810)
Upvotes: 6 [selected_answer]<issue_comment>username_2: In theory, most neural networks can approximate any continuous function on compact subsets of $\mathbb{R}^n$, provided that the activation functions satisfy certain mild conditions. This is known as the *universal approximation theorem* (UAT), but that should *not* be called *universal*, given that there are a lot more discontinuous functions than continuous ones, although certain discontinuous functions can be approximated by continuous ones. The UAT shows the theoretical powerfulness of neural networks and their purpose. They represent and approximate functions. If you want to know more about the details of the UAT, for different neural network architectures, see [this answer](https://ai.stackexchange.com/a/13319/2444).
However, in practice, neural networks trained with gradient descent and backpropagation face several issues and challenges, some of which are due to the training procedure and not just the architecture of the neural network or available data.
For example, it is well known that [neural networks are prone to catastrophic forgetting (or interference)](https://ai.stackexchange.com/a/13293/2444), which means that they aren't particularly suited for **incremental learning** tasks, although some more sophisticated incremental learning algorithms based on neural networks have already been developed.
Neural networks can also be [sensitive to their inputs](https://arxiv.org/abs/1712.07107), i.e. a small change in the inputs can drastically change the output (or answer) of the neural network. This is partially due to the fact that they learn a function that isn't really the function you expect them to learn. So, a system based on such a neural network can potentially be hacked or fooled, so they are probably not well suited for **safety-critical applications**. This issue is related to the [low interpretability and explainability of neural networks](https://arxiv.org/pdf/2001.02522.pdf), i.e. they are often denoted as [black-box models](https://stats.stackexchange.com/q/93705/82135).
[Bayesian neural networks (BNNs)](https://arxiv.org/abs/1505.05424) can potentially mitigate these problems, but they are unlikely to be the ultimate or complete solution. Bayesian neural networks maintain a distribution for each of the units (or neurons), rather than a point estimate. In principle, this can provide more uncertainty guarantees, but, in practice, this is not yet the case.
Furthermore, neural networks often require a lot of data in order to approximate the desired function accurately, so in cases where **data is scarce** neural networks may not be appropriate. Moreover, the training of neural networks (especially, deep architectures) also **requires a lot of computational resources**. Inference can also be sometimes problematic, when you need real-time predictions, as it can also be expensive.
To conclude, neural networks are just function approximators, i.e. they approximate a *specific* function (or set of functions, in the case of Bayesian neural networks), given a specific configuration of the parameters. They can't do more than that. They cannot magically do something that they have not been trained to do, and it is usually the case that you don't really know the specific function the neural network is representing (hence the expression *black-box model*), apart from knowing your training dataset, which can also contain spurious information, among other issues.
Upvotes: 4 <issue_comment>username_3: This is more in the direction of 'what kind of problems can be solved by neural networks'. In order to train a neural network you need a large set of training data which is labelled with correct/ incorrect for the question you are interested in. So for example 'identify all pictures that have a cat on them' is very suitable for neural networks. On the other hand 'summarize the story of this toddler picture book' is very hard. Although a human can easily decide whether a given summary is any good or not it would be very difficult to build a suitable set of training data for this kind of problem. So if you can't build a large training data set with correct answers, you can't train a neural network to solve the problem.
The answer of username_1 is also an instance of that, and also a potentially solvable one. The neural network that misidentified upside-down school buses presumably had few if any upside-down school buses in its training data. Put them into the training data, and the neural network will identify these as well. This is still a flaw in neural networks: a human can correctly identify an upside-down school bus the first time they see one, if they know what school buses look like.
Upvotes: 3 <issue_comment>username_4: A checkerboard with missing squares is impossible for a neural network to learn the missing color. The more it learns on training data, the worse it does on test data.
See e.g. this article [The Unlearnable Checkerboard Pattern](https://journals.blythinstitute.org/ojs/index.php/cbi/article/view/48) (which, unfortunately, is not freely accessible). In any case, it should be easy to try out yourself that this task is difficult.
Upvotes: 2 <issue_comment>username_5: I don't know if it might be of use, but many areas of NLP are still hard to tackle, and even if deep models achieve the state of the art results, they usually beat baseline shallow models by very few percentage points.
One example that I've had the opportunity to work on is stance classification [1](https://www.aclweb.org/anthology/S16-1075.pdf). In many datasets, the best F score achievable is around 70%.
Even though it's hard to compare results, since in NLP many datasets are really small and domain-specific (especially for stance detection and similar SemEval tasks), SVMs, conditional random fields, and sometimes even Naive Bayes models are often able to perform almost as well as CNNs or RNNs. Other tasks for which this holds are argumentation mining and claim detection.
See e.g. the paper [TakeLab at SemEval-2016 Task 6: Stance Classification in Tweets Using a Genetic Algorithm Based Ensemble](https://www.aclweb.org/anthology/S16-1075.pdf) (2016) by <NAME> et al.
Upvotes: 3 <issue_comment>username_6: In our deep learning lecture, we discussed the following example (from [Unmasking Clever Hans predictors and assessing what machines really learn](https://www.nature.com/articles/s41467-019-08987-4) (2019) by Lapuschkin et al.).
[](https://i.stack.imgur.com/cLggJ.jpg)
Here the neural network learned a wrong way to identify a picture, i.e., by identifying the wrong "relevant components". In the sensitivity maps next to the pictures, we can see that the watermark was used to decide whether there is a horse present in the picture. If we remove the watermark, the classification is no longer made. Even more worryingly, if we add the tag to a completely different picture, it gets identified as a horse!
Upvotes: 4 <issue_comment>username_7: Large scale route optimization problems.
There is progress in using deep reinforcement learning to solve vehicle routing problems (VRPs), for example in this paper: <https://arxiv.org/abs/1802.04240v2>.
However, for large-scale problems, heuristic methods overall, like the ones provided by Google OR-Tools, are much easier to use.
Upvotes: 2 <issue_comment>username_8: In the case of convolutional neural networks, the features may be extracted but without taking into account their relative positions (see the concept of *translation invariance*)
For example, you could have two eyes, a nose, and a mouth in different locations in an image and still have the image classified as a face.
Operations like max-pooling may also have a negative impact on retaining position information.
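A tiny sketch of the effect (my own illustration): a pattern detector followed by global max-pooling responds identically to a shifted input, discarding position information.

```python
import numpy as np

def conv1d_valid(x, k):
    """Valid cross-correlation of a 1-D signal x with kernel k."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

kernel = np.array([1.0, -1.0, 1.0])        # a tiny "feature detector"
a = np.array([0, 1, 0, 1, 0, 0, 0, 0.0])   # pattern near the start
b = np.array([0, 0, 0, 0, 1, 0, 1, 0.0])   # same pattern, shifted

# Global max-pooling gives the same response for both positions:
print(conv1d_valid(a, kernel).max() == conv1d_valid(b, kernel).max())  # True
```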
Upvotes: 1 <issue_comment>username_9: Neural networks seem to have a great deal of difficulty handling *adversarial input*, i.e., inputs with certain changes (often imperceptible or nearly imperceptible by humans) designed by an attacker to fool them.
This is not the same thing as just being highly sensitive to certain changes in inputs. Robustness against wrong answers in that case can be increased by reducing the probability of such inputs. (If only one in 10^15 possible images causes a problem, it's not much of a problem.) However, in the adversarial case reducing the space of problematic images doesn't reduce the probability of getting one because the images are specifically selected by the attacker.
One of the more famous papers in this area is ["Synthesizing Robust Adversarial Examples"](https://arxiv.org/pdf/1707.07397.pdf), which produced not only examples where a few modified pixels or other invisible-to-humans modifications to a picture fooled a neural network-based image classifier, but also perhaps the first examples of 3D objects designed to fool similar classifiers and successfully doing so (from every angle!).
(Those familiar with IT security will no doubt recognise this as a familiar asymmetry: roughly, a defender must defend against *all* attacks launched against a system, but an attacker need find only *one* working attack.)
In ["A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance"](https://arxiv.org/pdf/1901.10861.pdf), Shamir et al. propose a mathematical framework for analyzing the problem based on Hamming distances that, while currently a less practical attack than the MIT/Lab6 one, has some pretty disturbing theoretical implications, including that current approaches to preventing these attacks may be, in the end, ineffective. For example, he points out that blurring and similar techniques that have been used to try to defend against adversarial attacks can be treated mathematically as simply another layer added on top of the existing neural network, requiring no changes to the attack strategy.
(I attended a talk by Shamir a few months ago that was much easier going than the paper, but unfortunately I can't find a video of that or a similar talk on-line; if anybody knows of one please feel free to edit this answer to add a link!)
There's obviously still an enormous amount of research to be done in this area, but it seems possible that neural networks alone are not capable of defense against this class of attack, and other techniques will have to be employed in addition to make neural networks robust against it.
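As a toy illustration of how such inputs are constructed (a sketch of the fast-gradient-sign idea on a hand-picked linear classifier; this is my own example, not the method of the papers above):

```python
import numpy as np

# A "trained" linear classifier: predicts class 1 if w.x + b > 0.
w = np.array([1.0, -2.0, 3.0, -4.0])
b = 0.0
x = np.array([0.5, 0.1, 0.2, 0.05])        # classified as class 1

def predict(x):
    return int(w @ x + b > 0)

# Adversarial step: nudge every coordinate slightly against the weights.
eps = 0.2
x_adv = x - eps * np.sign(w)

print(predict(x))      # 1
print(predict(x_adv))  # 0: a small, bounded change flips the label
```

Each coordinate moves by at most `eps`, yet the decision flips, which is the essence of why an attacker who can pick the perturbation wins.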
Upvotes: 2 <issue_comment>username_10: From my experience in industry, a lot of data science (operating on customer information, stored in a database) is still dominated by decision trees and even SVMs. Although neural networks have seen incredible performance on "unstructured" data, like images and text, there still do not appear to be great results extending to structured, tabular data (yet).
At my old company (loyalty marketing with 10 million+ members) there was a saying, "*You can try any model you like, but you must try XGBoost*". And let's just say that I did try comparing it to a neural network, and ultimately I did go with XGBoost ;)
Upvotes: 2 <issue_comment>username_11: My 50cents:
[NP\_(complexity)](https://en.wikipedia.org/wiki/NP_(complexity)) - is still hard to solve, even with NeuralNets.
>
> In computational complexity theory, NP (nondeterministic polynomial
> time) is a complexity class used to classify decision problems. NP is
> the set of decision problems for which the problem instances, where
> the answer is "yes", have proofs verifiable in polynomial time by a
> deterministic Turing machine.
>
>
>
The easiest example to illustrate what this means is cryptography's [Integer\_factorization](https://en.wikipedia.org/wiki/Integer_factorization), which is the basis of the [RSA cryptosystem](https://en.wikipedia.org/wiki/RSA_(cryptosystem)).
For example, we have two prime numbers:
* 12123123123123123123123.....45456
* 23412421341234124124124.....11112
The neural network would have to answer with both of these numbers exactly, digit for digit, when shown only their product. This is not like guessing about a school bus: the space of such numbers is far larger than the number of words in all languages on Earth. Imagine that there were billions of billions of different school buses, billions of billions of different fire hydrants, and billions of such classes, and the network had to answer exactly what is in the picture: no way. The chance of guessing it is vanishingly small...
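A tiny illustration of why this is a search problem rather than a pattern-recognition problem (my own sketch): even brute-force factoring needs up to √n trial divisions, and the search space grows exponentially with the number of digits.

```python
def factor(n):
    """Brute-force factorization of a semiprime n = p * q."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

p, q = 104729, 1299709      # two primes
print(factor(p * q))        # recovers (104729, 1299709) after ~10^5 divisions
```

For the hundreds-of-digits numbers used in RSA, no amount of training data makes this mapping learnable; there is no smooth structure for a network to interpolate.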
Upvotes: 2 |
2020/03/12 | 1,284 | 5,068 | <issue_start>It's an idea I heard a while back but couldn't remember the name of. It involves the existence and development of an AI that will eventually rule the world, and the idea that if you don't fund or progress the AI, then it will see you as "hostile" and kill you. Also, merely knowing about this concept essentially makes you a candidate for such consideration, since people who didn't know about it can't be expected to have helped progress such an AI. From my understanding, this idea isn't taken that seriously, but I'm curious to know the name nonetheless.<issue_comment>username_1: I believe the term you are looking for is "(technological) singularity".
<https://en.wikipedia.org/wiki/Technological_singularity>
Upvotes: 3 <issue_comment>username_2: The likely expression you are looking for is *AI takeover*, which is a common topic in [science fiction](https://en.wikipedia.org/wiki/Artificial_intelligence_in_fiction) movies, such as [2001: A Space Odyssey](https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey_(film)) and [The Matrix](https://en.wikipedia.org/wiki/The_Matrix), and [popular culture](https://en.wikipedia.org/wiki/AI_takeovers_in_popular_culture). Although the AI takeover is an **unlikely scenario** in the next years, [certain scientists, such as <NAME>, have expressed concerns about it](https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence) and some philosophers, especially <NAME>, are really interested in the topic.
The AI takeover concept is related to concepts such as the [AI singularity](https://en.wikipedia.org/wiki/Technological_singularity), [superintelligence](https://en.wikipedia.org/wiki/Superintelligence), [intelligence explosion](https://wiki.lesswrong.com/wiki/Intelligence_explosion), [AI control problem](https://en.wikipedia.org/wiki/AI_control_problem), [existential risk](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence), [machine ethics](https://en.wikipedia.org/wiki/Machine_ethics) and [friendly AI](https://en.wikipedia.org/wiki/Friendly_artificial_intelligence).
The book [Superintelligence: Paths, Dangers, Strategies](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies) (2014) by <NAME> may be helpful if you are interested in hypothetical scenarios.
Upvotes: 3 <issue_comment>username_3: If I'm not mistaken you're looking for [Roko's Basilisk](https://en.wikipedia.org/wiki/LessWrong#Roko's_basilisk),
>
> in which an otherwise benevolent future AI system tortures simulations of those who did not work to bring the system into existence
>
>
>
Upvotes: 6 [selected_answer]<issue_comment>username_4: It is called **Singularity.** A point in future where AI will surpass Human Knowledge and become *Omniscient.*
AI will be able to operate on an order manifolds time to that of a human brain thus developing and designing itself without any assistance.
Upvotes: 1 <issue_comment>username_5: If you think about it, it is already happening.
* Thousands of drivers work for Uber Intelligence.
* There are many applications that dictate the rules and define what the seller and the end user need to do
This idea is called the Singularity, or [Technological Singularity](https://en.wikipedia.org/wiki/Technological_singularity); it would be possible with a [Superintelligence](https://en.wikipedia.org/wiki/Superintelligence).
However, the possibility of this happening is unknown. Have they reached that level yet? We have quantum computers, we have companies with huge data centers spread all over the planet, we have technology in space, we have free Tensorflow and studies for anyone on the planet to be able to create artificial intelligence models.
If we have contact or help from other intelligent civilizations, the possibilities can expand on a surreal scale.
Google has complete information about humanity (or something close to that). But even with all this data, creating an artificial intelligence with consciousness is something that goes far beyond that.
But maybe we already have enough to improve our concept of morals, respect, social interaction. Facebook invests and studies ways to improve social interaction. And if you think about it, it is one of the main means of communication.
The big question is that it is not possible to know what a super artificial intelligence with consciousness would do if it existed. Extinguish humans for harming nature? Find ways to improve the planet, understanding that human defects and errors are just part of nature, like everything else? Just watch shows on Netflix because it gave up on humanity? We do not know.
But, personally, I would love to see that happen. In fact, one of my personal goals is to create this super intelligence. But alone it will be very difficult. A consciousness without interaction with other consciousnesses has no clash of universes. And the clash of universes is what makes us reflect, think, revise, and create new paths and thoughts. It is what allows us to create other universes.
Upvotes: 1 |
2020/03/12 | 487 | 1,958 | <issue_start>I was running my gated recurrent unit (GRU) model. I wanted to get an opinion on whether my loss and validation loss graphs look good, since I'm new to this and don't really know whether this is considered underfitting.
[](https://i.stack.imgur.com/9C4pr.png)
[](https://i.stack.imgur.com/2tCzo.png)<issue_comment>username_1: You should at least crop the plots and add a legend. Maybe also provide some scores (accuracy, auc, whatever you're using).
Anyway, it doesn't look like your model is underfitting; if it were, you would have high error in both the training and test phases, and the lines would not cross.
Upvotes: 1 <issue_comment>username_2: Whenever you are building an ML model, don't take accuracy too seriously (a mistake made by Netflix that cost them a lot); instead, try to get hit scores, as they will tell you how many times your model actually worked for real-world users. However, if your model must measure accuracy, try the RMSE score, as it will penalise you more for being further off the line. Here is the link for more information: [RMSE](https://surprise.readthedocs.io/en/stable/accuracy.html#surprise.accuracy.rmse)
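For reference, RMSE (computed by the linked `surprise.accuracy.rmse`; sketched here with plain NumPy) penalises large errors quadratically:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between two sequences of ratings."""
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return np.sqrt(np.mean(diff ** 2))

# One prediction far off the line dominates the score:
print(rmse([3, 4, 5], [3, 4, 5]))   # 0.0
print(rmse([3, 4, 5], [3, 4, 8]))   # ~1.73: the single error of 3 dominates
```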
It's hard to tell whether it's overfitting or underfitting, as your graph is vague (for example, what do the graph lines represent?). However, you can address underfitting with the following steps:
1) Increase the size or number of parameters in the ML model.
2) Increase the complexity or type of the model.
3) Increase the training time until the cost function is minimised.
For overfitting, you can try regularization methods; weight decay provides an easy way to control overfitting for large neural network models. A modern recommendation for regularization is to use early stopping with dropout and a weight constraint.
Upvotes: 3 [selected_answer] |
2020/03/12 | 592 | 2,413 | <issue_start>username_0: I want to solve the zero subset sum problem with the hill-climbing algorithm, but I am not sure I found a good state space for this.
Here is the problem: consider we have a set of numbers and we want to find a subset of this set such that the sum of the elements in this subset is zero.
My own idea to solve this by hill-climbing is that, in the first step, we can choose a random subset of the set (for example, the main set is $X= \{X\_1,\dots,X\_n\}$ and we choose $X'=\{X\_{i\_1},\dots,X\_{i\_k}\}$ randomly); then the children of this state can be built by adding an element from $X-X'$ to $X'$ or deleting an element from $X'$ itself. This means that each state has $n$ children, and the objective function could be the absolute value of the sum of the elements in $X'$, which we want to minimize.
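The modelling just described can be sketched directly (a toy sketch of my own; neighbourhood = toggle one element, objective = |sum|, since the goal is a sum of exactly zero, and the empty subset is treated as invalid):

```python
import random

def hill_climb(X, iters=10000, seed=0):
    """Hill climbing for zero subset sum. Neighbourhood as described:
    each child adds one element of X - X' or removes one of X'."""
    def cost(subset):
        if not subset:
            return float("inf")          # empty subset is not a solution
        return abs(sum(X[i] for i in subset))

    rng = random.Random(seed)
    cur = set(rng.sample(range(len(X)), k=max(1, len(X) // 2)))
    for _ in range(iters):
        i = rng.randrange(len(X))        # toggling index i = one child
        cand = cur ^ {i}
        if cost(cand) <= cost(cur):      # also accept sideways moves
            cur = cand
        if cost(cur) == 0:
            break
    return sorted(X[i] for i in cur), cost(cur)

print(hill_climb([8, -3, -5, 7, -7, 2, 1]))
```

Accepting sideways moves (`<=`) helps the search cross plateaus, but plain hill climbing can still get stuck in local minima, which is the usual caveat with this approach.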
Is this a good modeling? Are there better modelings or objective functions that can work more intelligently?
2020/03/13 | 1,023 | 4,154 | <issue_start>username_0: I've just started learning natural language processing from [<NAME>'s videos lectures](https://www.youtube.com/watch?v=3Dt_yh1mf_U&list=PLQiyVNMpDLKnZYBTUOlSI9mi9wAErFtFm). In that video, minute 4:56, he is stating that dialogue is a hard problem in natural language processing (NLP). Why?<issue_comment>username_1: *First of all, I am not very familiar with details of NLP and NLU systems and concepts, so I will provide an answer based on the slides entitled [Natural language understanding in dialogue systems](https://web.archive.org/web/20170830051346/https://projects.ict.usc.edu/nld/cs599s13/LectureNotes/cs599s13dialogue1-30-13.pdf) (2013) by [<NAME>](http://djdsite.org/), a researcher on the topic.*
A dialogue system is composed of different parts or modules. Here's a diagram of an example of a dialogue system.
[](https://i.stack.imgur.com/SEZg1.png)
Each of these modules can introduce **errors**, which are then propagated to the rest of the pipeline. Of course, this is the first clear issue of such a dialogue system. Other issues or challenges include
* **ambiguity** of natural language (and there are different types of ambiguity, i.e. see slide number 5),
* **synonyms** (i.e. the dialogue system needs to handle different words or expressions that mean the same thing),
* **context-sensitivity** (i.e. the same words or expressions can mean different things in different contexts)
* **semantic representation** (i.e. how to represent semantics)
* **spontaneous speech** (i.e. how to handle stuff like "hm", pauses, etc.)
Upvotes: 1 <issue_comment>username_2: Dialogue is a hard problem because it requires pretty advanced cognitive functions. Leaving aside all the lower levels of language analysis (phonology if dealing with speech, morphology and syntax), you quickly run into interpretation problems that require a lot of world knowledge.
Simple question and answer is fine, and restricted domains are somewhat easier as well. As soon as you get into a normal conversation, you will refer back to things you said before, so an NLP system would have to recognise that and resolve the reference accordingly. Typically in a conversation you would use variations of reference terms: the first time you mention an object you might describe it fully, but subsequently you will use shorter terms to refer to it.
There is also a structure to conversation. This is typically modelled as conversational moves, and usually moves will have corresponding response-moves. For example, a common sequence would be greeting - greeting. Then you might have question - response - feedback. This sounds fairly easy, but once you try to annotate a dialogue with such moves you will find that it is pretty hard. As far as I am aware, there is no 'grammar' equivalent of describing the structure of conversations.
Often, pragmatic meaning interferes with the 'surface' meaning of utterances. A statement can actually function as a question, or a question can be a command (or a statement). The pragmatics of utterances depend on the context and also the relationship between the interlocutors. If I talk to my manager, I will use language differently than when I talk to my children.
Dialogue/conversations are hard to analyse. Because of that, descriptive frameworks are still fairly limited. You need to keep track of what has been said before, as that can change the way an utterance has to be interpreted. Grammatical analysis is a fairly solved problem, and word sense disambiguation as well. But pragmatics and conversational structure are still on the bleeding edge of linguistic research; at least they were when I was still teaching Discourse Analysis at university a few years ago.
For that reason, chatbots are generally not very good. Sometimes they can fool people into believing they are human speakers, but this is usually done through trickery ("smoke and mirrors") rather than competent handling of conversational structures. It's all in the little box in username_1's answer labelled "DM"...
Upvotes: 3 [selected_answer] |
2020/03/14 | 922 | 3,917 | <issue_start>username_0: The skip-gram and continuous bag of words (CBOW) are two different types of word2vec models.
What are the main differences between them? What are the pros and cons of both methods?<issue_comment>username_1: So as you're probably already aware of, CBOW and Skip-gram are just mirrored versions of each other. CBOW is trained to predict a single word from a fixed window size of context words, whereas Skip-gram does the opposite, and tries to predict several context words from a single input word.
Intuitively, the first task is much simpler: this implies a much faster convergence for CBOW than for Skip-gram. In the original paper (link below), they wrote that CBOW took hours to train, while Skip-gram took 3 days.
For the same logic regarding task difficulty, CBOW learns better syntactic relationships between words, while Skip-gram is better at capturing semantic relationships. In practice, this means that for the word 'cat', CBOW would retrieve morphologically similar words as closest vectors, e.g. plurals like 'cats', while Skip-gram would consider morphologically different (but semantically relevant) words like 'dog' much closer to 'cat' in comparison.
A final consideration deals with sensitivity to rare and frequent words. Because Skip-gram relies on single-word inputs, it is less prone to overfitting frequent words: even if frequent words are presented more times than rare words during training, they still appear individually, while CBOW tends to overfit frequent words because they appear several times together with the same context. This robustness to frequent-word overfitting also makes Skip-gram more efficient in terms of the amount of text required to achieve good performance, much less than CBOW (and it is also the reason for Skip-gram's better performance in capturing semantic relationships).
Anyway, you can find some comparisons in the original paper (section 4.3).
[Mikolov et al. 2013](https://arxiv.org/pdf/1301.3781.pdf)
About the architecture, there's not much to say. They just randomly initialised a word embedding for each word; then a projection matrix N×D (number of context words times embedding dimension) is generated at each iteration. There is no hidden layer: the vectors are just averaged together and then fed into an activation function to predict index probabilities in a vector of dimension V (the size of the vocabulary).
For a more specific explanation (even Mikolov's paper lacks some details) I suggest checking this blog page ([Words as Vectors](https://iksinc.online/tag/continuous-bag-of-words-cbow/)), even though the model described there does apply a hidden layer, unlike the original architecture.
[](https://i.stack.imgur.com/ShJJX.png)
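To make the mirrored training tasks concrete, here is a small sketch (toy corpus, hypothetical window size) of how the two models' training pairs can be generated:

```python
def training_pairs(tokens, window=2):
    """Build (context, target) pairs for CBOW and (input, context) pairs for Skip-gram."""
    cbow, skipgram = [], []
    for i, target in enumerate(tokens):
        # context words inside the window, excluding the target itself
        context = [tokens[j]
                   for j in range(max(0, i - window), min(len(tokens), i + window + 1))
                   if j != i]
        # CBOW: predict the target from all context words at once
        cbow.append((context, target))
        # Skip-gram: predict each context word from the target individually
        skipgram.extend((target, c) for c in context)
    return cbow, skipgram

cbow, skipgram = training_pairs("the cat sat on the mat".split())
```

Note how every CBOW example becomes several Skip-gram examples, which is one way to see why Skip-gram training is slower but sees each word individually.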
Upvotes: 5 <issue_comment>username_2: Word embeddings are the results of learning from deep learning algorithms, which can learn characters from data through feature extraction. One implementation of word embedding is word2vec.
Word2vec has two models, namely
* Continuous Bag of Word (CBOW) and
* Skip Gram Model.
Both of these methods use the concept of a neural network that maps words to target variables, which are also words. In these techniques, "weights" are used as word vector representations. CBOW tries to predict a word on the basis of its neighbors, while Skip Gram tries to predict the neighbors of a word.
In simpler words, CBOW tends to find the probability of a word occurring in a context. So, it generalizes over all the different contexts in which a word can be used. Whereas Skip-gram tends to study different contexts separately. Skip-gram needs more data to be trained, but once trained it contains more knowledge about the context.
For further explanation, you can read the [paper](https://arxiv.org/pdf/1301.3781.pdf) from Mikolov about word embedding and word2vec.
Upvotes: 2 |
2020/03/14 | 1,383 | 6,155 | <issue_start>username_0: Reinforcement learning methods are considered to be extremely sample inefficient.
For example, in a recent DeepMind [paper](https://arxiv.org/abs/1710.02298) by Hessel et al., they showed that in order to reach human-level performance on an Atari game running at 60 frames per second they needed to run 18 million frames, which corresponds to 83 hours of play experience. For an Atari game that most humans pick up within a few minutes, this is a lot of time.
What makes DQN, DDPG, and others, so sample inefficient?<issue_comment>username_1: I will try to give a broad answer, if it's not helpful I'll remove it.
When we talk about sample efficiency, we are actually talking about the number of interactions required for an agent to learn a good model of the environment.
In general, I would say that there are two issues related to sample efficiency:
1. the size of the combined action and environment-state space;
2. the exploration strategy used.
Regarding the first point, in reinforcement learning it is really easy to encounter situations in which the number of combinations of possible actions and possible environment states explodes, becoming intractable. Let's, for example, consider the Atari games from the Rainbow paper you linked: the environment in which the agent operates is composed of RGB images of size (210, 160, 3). This means that the agent 'sees' a vector of size 100800. The actions that an agent can take are simply modifications of this vector, e.g. I can move a character to the left, slightly changing the whole picture. Despite the fact that in a lot of games the number of possible actions is rather small, we must keep in mind that there are also other objects in the environment which change position as well. What the other objects/enemies do obviously influences the choice of the best action to perform in the next time step. A high number of possible combinations between actions and environment states implies a high number of observations/interactions required to learn a good model of the environment. Up to now, what people usually do is to compress the information of the environment (for example, by resizing the pictures and converting them to grayscale), to reduce the total number of possible states to observe. DQL itself is based on the idea of using neural networks to compress the information gathered from the environment into a dense representation of fixed size.
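The compression step mentioned above can be sketched in a few lines of NumPy (hypothetical luminance weights and a naive stride-2 downsampling):

```python
import numpy as np

def compress_observation(frame):
    """Shrink a (210, 160, 3) RGB frame into a coarser grayscale state."""
    gray = frame @ np.array([0.299, 0.587, 0.114])  # weighted channel sum -> (210, 160)
    return gray[::2, ::2]                           # stride-2 downsample  -> (105, 80)

state = compress_observation(np.random.randint(0, 256, size=(210, 160, 3)))
```

The compressed state has 8400 entries instead of 100800, which already shrinks the space the agent has to learn about by more than an order of magnitude.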
As for the exploration strategy, we can again divide the issue into subcategories: (1) how we explore the environment, and (2) how much information we get from each exploration. Exploration is usually tuned through the $\epsilon$ parameter of $\epsilon$-greedy action selection. Once in a while, we let the agent perform a random action, to avoid getting stuck in suboptimal policies (like not moving at all to avoid falling into a trap; eventually, thanks to a random action, the agent will try to jump and learn that it gives a higher reward). Exploration comes with the cost of more simulations to perform, so people quickly realised that we can't rely only on more exploration to train better policies. One way to boost performance is to leverage not only the present iteration but past interactions as well; this approach is called *experience replay*. The underlying idea is to update the q-values depending also on weighted past rewards, stored in a memory buffer. Other approaches aim at computational efficiency rather than decreasing the amount of simulations. An early technique that follows this direction is *prioritised sweeping* [Moore et al. 1993](https://people.eecs.berkeley.edu/~pabbeel/cs287-fa19/optreadings/MooreAtkeson-1993-prioritized-sweeping.pdf), in which big changes in q-values are prioritised, i.e. q-values that are stable over iterations are basically ignored (this is a really crude way to put it, I have to admit that I still have to grasp this concept properly). Both of these techniques were actually applied in the Rainbow paper.
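A minimal sketch of the experience-replay buffer described above (uniform sampling; prioritised variants would instead weight transitions, e.g. by the size of their value updates):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of past (state, action, reward, next_state) transitions."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are dropped automatically

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # reuse past interactions instead of learning only from the current step
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer(capacity=100)
for t in range(250):
    buf.push((t, 0, 1.0, t + 1))   # dummy transitions
batch = buf.sample(32)
```

Each environment interaction can now be reused in many gradient updates, which is exactly where the sample-efficiency gain comes from.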
On a more personal level (just pure opinions of mine from here on), I would say that the gap between RL agents and humans comes from the fact that we (humans) have a lot of common-sense knowledge we can leverage, and somehow we are able, through cognitive heuristics and shortcuts, to pay attention to what is relevant without even being aware of it. RL agents learn to interact with an environment without any prior knowledge; they just learn some probability distribution through trial and error, and if something completely new happens, they have no ability at all to pick an action based on external knowledge.
One interesting future direction in my opinion is reward modelling, described in this video: <https://youtu.be/PYylPRX6z4Q>
I particularly like the emphasis on the fact that the only true thing that human are good at is judging. We don't know how to design proper reward functions, because again, most of the actions we perform in real life are driven by reward of which we are not aware, but we are able in a glimpse to say if an agent is performing a task in a proper way or not. Combining this 'judging power' into RL exploration seems to be a really powerful way to increase sample efficiency in RL.
Upvotes: 3 <issue_comment>username_2: This is mostly because humans already have information when they start learning the game (priors) that makes them learn it more quickly. We already know to jump on monsters or avoid them or to get gold looking object.
When you remove these priors you can see a human is worse at learning these games. ([link](https://arxiv.org/pdf/1802.10217.pdf))
Some experiments they tried in the study to remove these priors were: replacing all notable objects with colored squares, reversing and scrambling controls, changing gravity, and generally replacing all sprites with random pixels. All these experiments made human learning much harder, and increased the death rate, the time needed, and the states visited in the game.
If we want the reinforcement learning algorithm to perform as well as a human, we will have to somehow include these priors that we have as humans before training the network. This, of course, has not been done yet (as far as I know).
Upvotes: 2 |
2020/03/14 | 1,586 | 5,345 | <issue_start>username_0: The formula for mean prediction using Gaussian Process is $k(x\_\*, x)k(x, x)^{-1}y$, where $k$ is the covariance function. See e.g. equation 2.23 (in chapter 2) from [Gaussian Processes for Machine Learning](http://www.gaussianprocess.org/gpml/chapters/RW2.pdf) (2006) by <NAME> & <NAME>.
Oversimplifying, the mean prediction of the new point $y\_\*$ is the weighted average of previously observed $y$, where the weights are calculated by the $k(x\_\*,x)$ and normalized by $k(x,x)^{-1}$.
Now, the first part $k(x\_\*, x)$ is easy to interpret. The closer the new data point lies to the previously observed data points, the greater their similarity, the higher will be the weight and impact on the prediction.
But how to interpret the second part $k(x, x)^{-1}$? I presume this makes the weight of the points in the clusters greater than the outliers. Am I correct?<issue_comment>username_1: I interpret this as the following, I will use the uppercase matrix notation $\mathbf{K\_{\*}}$, etc...
The covariance matrix $\mathbf{K\_{xx}}$ summarizes everything we know about the input feature space. I think of it as a unique signature that describes the data we have in $\mathbb{R}^{d\_x}$. Along with the example training data, we have the labels $\mathbf{y}$, which give us a concrete definition of what the prediction values for $\mathbf{K\_{xx}}$ should be.
We can then ask the question "What can we multiply the unique signature $\mathbf{K\_{xx}}$ by in order to get the training outputs?" This takes the familiar form of $\mathbf{A}\mathbf{x} = \mathbf{b}$...
$$
\begin{aligned}
\mathbf{K}\_{xx}\mathbf{z} &= \mathbf{y} \\
\mathbf{z} &= \mathbf{K}\_{xx}^{-1}\mathbf{y}
\end{aligned}
$$
$\mathbf{z}$ then represents the vector that the covariance matrix transforms into the outputs. Then all we have to do is multiply the new covariance of the train/test points by that same vector to get the predictions...
$$
\mathbf{\hat{y}} = \mathbf{K\_{\*x}}\mathbf{K\_{xx}^{-1}}\mathbf{y}
$$
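As a numerical sanity check of this formula, a minimal NumPy sketch (hypothetical RBF kernel and toy data, noise-free):

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential covariance k between two sets of 1-d inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

x = np.array([-2.0, 0.0, 1.5])   # observed inputs
y = np.sin(x)                    # observed targets
x_star = np.array([0.5])         # new input

# mean prediction: k(x_*, x) k(x, x)^{-1} y
mean = rbf(x_star, x) @ np.linalg.solve(rbf(x, x), y)
```

Note that plugging a training input back in returns its training label exactly, since $\mathbf{K\_{xx}}\mathbf{K\_{xx}^{-1}}\mathbf{y} = \mathbf{y}$ — consistent with the $\mathbf{z}$ interpretation above.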
I should add that I am relatively certain about this, but I am still learning this stuff myself, so I am not 100% sure.
Upvotes: 0 <issue_comment>username_2: Mathematical Interpretation
---------------------------
Note that equation (2.23) is simply calculating the conditional distribution of equation (2.21) and then finding the mean. Your question reduces to:
"Given normal variables $X$ and $Y$, why is $\mathbb{E}[Y|X] = \mu\_y + Cov(X, Y)Cov(X, X)^{-1}(x - \mu\_x)$?" (note: in the book, $\mu\_x = \mu\_y = 0$)
Deriving the conditional probability mean is complicated (see [The Bivariate Normal Distribution](http://home.iitk.ac.in/%7Ezeeshan/pdf/The%20Bivariate%20Normal%20Distribution.pdf), page 3). A more intuitive look can be seen in the first graph in [this page](https://online.stat.psu.edu/stat414/lesson/21/21.1).
Here, the mean of $Y|X$ is linear in what value $X$ takes. The line starts at the intercept $\mu\_y$, and increases with slope $\rho (\frac{\sigma\_y}{\sigma\_x}) = \frac{Cov(X,Y)}{\sigma\_x \sigma\_y}\left(\frac{\sigma\_y}{\sigma\_x}\right)=\frac{Cov(X,Y)}{\sigma\_x^2} = \frac{Cov(X,Y)}{Var(x)} = \frac{Cov(X,Y)}{Cov(X,X)}$. So the mathematical interpretation of $Cov(X,Y)Cov(X,X)^{-1}$ is that **it is the slope of the relationship between the mean of $Y|X$ and the value of $x$ that you are given**. As you are given a higher value $x$, say $x + \delta$, then the mean of $Y|X$ raises by $Cov(X,Y)Cov(X,X)^{-1}\delta$.
Why is there even a $Cov(X,Y)Cov(X,X)^{-1}$ term there? For some reason, subtracting $Cov(X,Y)Cov(X,X)^{-1}X$ from $Y$ makes the result completely independent of $X$ (which makes sense given the definition of "conditional probability", because you are already given a value of $X$). This is just a mathematical property; I don't know if there's an intuitive explanation as to why.
Human Interpretation
--------------------
In case your post just want an intuition as to why there is a $Cov(X,X)^{-1}$ in the prediction of a Gaussian process (and ignoring the conditional probability fluff), I don't think there's a real basis for this, it would only be coincidental as the authors simply used the conditional probability mean formula, but I would guess **$Cov(X,X)^{-1}$ somehow normalizes the values of covariance matrix $Cov(X\_\*, X)$**.
For example, if the training set $X$ has a lot of outliers and therefore extremely high variance (e.g. all non-diagonal entries in millions), then it is very likely that $Cov(X\_\*, X)$ would also be extremely high as $X\_\*$ follows the same distribution as $X$ (unless each data in $X\_\*$ matches the exact same variance in $X$). It doesn't make sense to multiply $y$ by millions though, as $y$ is already a somewhat decent estimator/prior.
It makes more sense to normalize $Cov(X\_\*, X)$ by dividing it with the training data variance $Cov(X,X) = Var(X)$ so that the ratio $Cov(X\_\*, X)Cov(X, X)^{-1}$ tends to be closer to 1 when $X\_\*$ follows the same distribution as $X$ (which should be the case). If the ratio is exactly 1, then $X\_\*$ has the exact same distribution as $X$, so you just return the prior estimate $y$. If the ratio is far away from $1$, then the test set distribution is wildly different from the training set distribution, so you return a number far away from $y$.
Upvotes: 1 |
2020/03/16 | 943 | 3,576 | <issue_start>username_0: How can I prove that gradient descent doesn't necessarily find the global optimum?
For example, consider the following function
$$f(x\_1, x\_2, x\_3, x\_4) = (x\_1 + 10x\_2)^2 + 5x\_2^3 + (x\_2 + 2x\_3)^4 + 3x\_1x\_4^2$$
Assume also that we can't find the optimal value for the learning rate because of time constraints.<issue_comment>username_1: You can find by yourself a counterexample that, in general, GD is not guaranteed to find the global optimum!
I first advise you to choose a simpler function (than the one you are showing), with 2-3 optima, where one is the global and the other(s) are local. You don't need neural networks or any other ML concept to show this, but only basic calculus (derivatives) and numerical methods (i.e. gradient descent). Just choose a very simple function with more than one optimum and apply the basic gradient descent algorithm. Then you can see that, if you start gradient descent close to one **local** optimum (i.e. you choose an initial value for $x$ or $\theta$, depending on your notation for the variable of the function) and then you apply gradient descent (for some iterations), you will end up in that close local optimum, from which you cannot escape, after having applied the gradient descent steps.
See also the question [Does gradient descent always converge to an optimum?](https://datascience.stackexchange.com/q/24534/10640) and [For convex problems, does gradient in Stochastic Gradient Descent (SGD) always point at the global extreme value?](https://stats.stackexchange.com/q/367397/82135)
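The counterexample above can be demonstrated in a few lines (a hypothetical one-dimensional function with one local and one global minimum):

```python
def f(x):
    """A quartic with a local minimum near x = 0.93 and a global one near x = -1.06."""
    return x ** 4 - 2 * x ** 2 + 0.5 * x

def gradient_descent(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * (4 * x ** 3 - 4 * x + 0.5)  # f'(x)
    return x

x_right = gradient_descent(1.5)   # gets trapped in the local minimum
x_left = gradient_descent(-1.5)   # happens to reach the global minimum
```

The two runs end in different optima with different function values, which is exactly the point: where GD terminates depends entirely on the starting point.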
Upvotes: 2 <issue_comment>username_2: Well, GD terminates once the gradients are 0, right? Now, in a non-convex function, there could be some points which do not belong to the global minimum and yet have 0 gradients. For example, such points can be saddle points or local minima.
Consider this picture and say you start GD at the x label.
[](https://i.stack.imgur.com/vmK8H.png)
GD will bring you to the flat area and will stop making progress there, as the gradients are 0. However, as you can see, the global minimum is to the left of this flat region.
By the same token, you have to show, for your own function, that there exists at least one point whose gradient is 0 and yet is not the global minimum.
In addition to that, the convergence guarantee for convex functions depends on annealing the learning rate appropriately. For example, if your LR is too high, GD can just keep overshooting the minimum. The visualization from [this](https://deeplearning.ai/ai-notes/optimization/?utm_campaign=BlogAINotesOptimizationNovember122019&utm_content=105388620&utm_medium=social&utm_source=linkedin&hss_channel=lcp-18246783) page might help you to understand more regarding the behavior of GD.
Upvotes: 2 [selected_answer]<issue_comment>username_3: There is no way you can be sure you have reached a global minimum. Steepest descent will converge toward where the gradient approaches zero. Depending on the initial conditions (i.e. the initial values of the weights), you can and will converge on some minimum. Notice that if you run your model several times with random weight initialization, you will get slightly different results. What I do find interesting is that, in general, the local minima seem to have roughly the same value. The cost function is some kind of surface in $N$-dimensional space, where $N$ is the number of trainable parameters. We do not know what that surface is like or how many local minima exist.
Upvotes: 0 |
2020/03/16 | 601 | 2,336 | <issue_start>username_0: My aim is to train a model for predicting diseases. Now, according to [this Wikipedia article](https://en.wikipedia.org/wiki/Disease#Classification), diseases are classified based on the following criteria in general:
* Causes (of the disease)
* Pathogenesis (the mechanism by which the disease progresses)
* Age
* Gender
* Symptoms (of the disease)
* Damage (caused by the disease)
* Organ type (e.g. heart disease, liver disease, etc.)
Are these features used for predicting diseases universally (i.e. all types of diseases)? I don't think so. There can be other attributes as well. For example, travel history in the case of coronavirus.
So, are there better features for predicting diseases?
Or which ones among them are better than the others, when patients specify their health issues?<issue_comment>username_1: To begin from the scratch, and in order to keep approach simple we have to analyze the input text(clinical narration) for the following data:
>
> 1. Is input a word or a group of words or a sentence?
> 2. Is input a meaningful sentence? By meaningful, I mean grammatically correct.
> 3. Does the word, group of words or a sentence contain symptoms or health issues?
> 4. Does the sentence contain data about a person’s age and gender?
> 5. Does the sentence contain data about a person’s diet, medical history, work routine, travelling history or getting in contact with
> any ill person?
>
>
>
If there are any other attributes that one has to look for then I would be keen to find out from the subject matter experts.
Upvotes: -1 <issue_comment>username_2: So for Medical Prognosis, there are some variables that commonly come up like Age, Sex, Ascites, Hepato, Spider, Status of the disease and many others but it depends on the disease. You'll commonly encounter these variables if you're doing **regression** or **classification**.
Also, if you're reading Radiology Reports for getting the input for the model, then you also have to take care of jargon. The same symptoms can be written in various ways but all point towards the same prognosis, i.e. there can be **synonyms for labels**. Try reading [this](https://arxiv.org/pdf/1901.07031.pdf) to get more information on how we can do information extraction from Radiology Reports. This is the famous **CheXpert** paper.
Upvotes: -1 |
2020/03/16 | 1,313 | 4,509 | <issue_start>username_0: I was recently asked at an interview to calculate the number of parameters for a convolutional layer. I am deeply ashamed to admit I didn't know how to do that, even though I've been working and using CNN for years now.
Given a convolutional layer with ten $3 \times 3$ filters and an input of shape $24 \times 24 \times 3$, what is the total number of parameters of this convolutional layer?<issue_comment>username_1: What are the parameters in a convolutional layer?
-------------------------------------------------
The (learnable) parameters of a convolutional layer are the elements of the kernels (or filters) and biases (if you decide to have them). There are 1d, 2d and 3d convolutions. The most common are 2d convolutions, which are the ones people usually refer to, so I will mainly focus on this case.
2d convolutions
---------------
### Example
If the 2d convolutional layer has $10$ filters of $3 \times 3$ shape and the input to the convolutional layer is $24 \times 24 \times 3$, then this actually means that the filters will have shape $3 \times 3 \times 3$, i.e. each filter will have the 3rd dimension that is equal to the 3rd dimension of the input. So, the 3rd dimension of the kernel is not given because it can be determined from the 3rd dimension of the input.
2d convolutions are performed along only 2 axes (x and y), hence the name. Here's a picture of a typical 2d convolutional layer where the depth of the kernel (in orange) is equal to the depth of the input volume (in cyan).
[](https://i.stack.imgur.com/3folL.png)
Each kernel can optionally have an associated scalar bias.
At this point, you should already be able to calculate the number of parameters of a standard convolutional layer. In your case, the number of parameters is $10 \* (3\*3\*3) + 10 = 280$.
### A TensorFlow proof
The following simple TensorFlow (version 2) program can confirm this.
```
import tensorflow as tf
def get_model(input_shape, num_classes=10):
model = tf.keras.Sequential()
model.add(tf.keras.layers.Input(shape=input_shape))
model.add(tf.keras.layers.Conv2D(10, kernel_size=3, use_bias=True))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(num_classes))
model.summary()
return model
if __name__ == '__main__':
input_shape = (24, 24, 3)
get_model(input_shape)
```
You should try setting `use_bias` to `False` to understand how the number of parameters changes.
### General case
So, in general, given $M$ filters of shape $K \times K$ and an input of shape $H \times W \times D$, then the number of parameters of the standard 2d convolutional layer, with scalar biases, is $M \* (K \* K \* D) + M$ and, without biases, is $M \* (K \* K \* D)$.
See also these related questions [How is the depth of filters of hidden layers determined?](https://ai.stackexchange.com/q/17783/2444) and [In a CNN, does each new filter have different weights for each input channel, or are the same weights of each filter used across input channels?](https://ai.stackexchange.com/q/5769/2444).
1d and 3d convolutions
----------------------
There are also 1d and 3d convolutions.
For example, in the case of 3d convolutions, the kernels may not have the same dimension as the depth of the input, so the number of parameters is calculated differently for 3d convolutional layers. Here's a diagram of 3d convolutional layer, where the kernel has a depth different than the depth of the input volume.
[](https://i.stack.imgur.com/F282Z.png)
See e.g. [Intuitive understanding of 1D, 2D, and 3D convolutions in convolutional neural networks](https://stackoverflow.com/q/42883547/3924118).
Upvotes: 4 [selected_answer]<issue_comment>username_2: For a standard convolution layer, the weight matrix will have a shape of `(out_channels, in_channels, kernel_sizes)`. In addition, you will need a vector of shape `[out_channels]` for biases. For your specific case, 2d, your weight matrix will have a shape of `(out_channels, in_channels, kernel_size[0], kernel_size[1])`.
Now, if we plugin the numbers:
* `out_channels = 10`, you're having 10 filters
* `in_channels = 3`, the picture is RGB in this case, so there are 3 channels (the last dimension of the input)
* `kernel_size[0] = kernel_size[1] = 3`
In total you're gonna have `10*3*3*3 + 10 = 280` parameters.
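The same arithmetic can be checked without any framework (a small helper with a hypothetical name):

```python
def conv2d_params(out_channels, in_channels, kernel_h, kernel_w, bias=True):
    """Learnable parameters of a standard 2d convolutional layer."""
    weights = out_channels * in_channels * kernel_h * kernel_w
    return weights + (out_channels if bias else 0)

n = conv2d_params(out_channels=10, in_channels=3, kernel_h=3, kernel_w=3)  # 280
```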
Upvotes: 2 |
2020/03/17 | 1,368 | 4,687 | <issue_start>username_0: I'm trying to develop a multistep forecasting model using an LSTM network. The model takes three time steps as input and predicts two time steps. Both input and output columns are normalised using MinMaxScaler within the range of 0 and 1.
Please see the below model architecture
*Model Architecture*
```
model = Sequential()
model.add(LSTM(80,input_shape=(3,1),activation='sigmoid',return_sequences=True))
model.add(LSTM(20,activation='sigmoid',return_sequences=False))
model.add(Dense(2))
```
In this case, using sigmoid as an activation function is it correct?
2020/03/17 | 421 | 1,673 | <issue_start>username_0: From my understanding of the REINFORCE policy gradient method, we gently nudge the probabilities of actions based on the advantages. More specifically, the positive advantages **increase** the probabilities, negative advantages **reduce** the probabilities.
So, how do we compute the advantages given the **real discounted rewards** (aggregated rewards from the episode) and a **policy network** that only outputs the probabilities of actions?<issue_comment>username_1: The advantage is basically a function of the actual return received and a baseline. The function of the baseline is to make sure that only the actions that are better than average receive a positive nudge.
One way to estimate the baseline is to have a value function approximator. At every step, you train a NN, using the trajectories collected via the current policy, to predict the value function for states.
I hope that answers your query.
Upvotes: 1 <issue_comment>username_2: First let us note the definition of the advantage function:
$$A(s,a) = Q(s,a) - V(s) \; ,$$
where $Q(s,a)$ is the action-value function and $V(s)$ is the state-value function. In theory you could represent these by two different function approximators, but this would be quite inefficient. However, note that
$$Q(s,a) = \sum\_{s',r} \mathbb{P}(s',r|s,a)(r + V(s')) = \mathbb{E}[r + V(s')|a,s]\;,$$
so we can actually use a single function approximation, for $V(s)$, to completely represent the advantage function. To optimise this function approximator you would use the returns at each step of the episode as in e.g. the REINFORCE algorithm like you mentioned.
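To make this concrete, here is a minimal sketch (my own illustration, not part of the answer) that turns per-step rewards and baseline value estimates $V(s_t)$ into advantage estimates $A_t = G_t - V(s_t)$, where $G_t$ is the discounted return from step $t$:

```python
# Sketch: advantage estimates A_t = G_t - V(s_t), where G_t is the
# discounted return from step t and V(s_t) is a learned baseline.
def compute_advantages(rewards, values, gamma=0.99):
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g  # accumulate discounted return backwards
        returns.append(g)
    returns.reverse()
    return [g_t - v_t for g_t, v_t in zip(returns, values)]

# Toy episode: with gamma=1 the returns are [3, 2, 1]
print(compute_advantages([1, 1, 1], [0.5, 0.5, 0.5], gamma=1.0))  # [2.5, 1.5, 0.5]
```

In REINFORCE with baseline, each advantage then scales the gradient of the log-probability of the corresponding action.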
Upvotes: 3 [selected_answer] |
2020/03/18 | 2,307 | 8,464 |<issue_start>username_0: The ongoing [coronavirus pandemic](https://en.wikipedia.org/wiki/2019%E2%80%9320_coronavirus_pandemic) of coronavirus disease 2019 (COVID-19), caused by [severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)](https://en.wikipedia.org/wiki/Severe_acute_respiratory_syndrome_coronavirus_2), as of 29 September 2020, has affected many countries and territories: more than 33.4 million cases of COVID-19 have been reported and more than 1 million people have died. The live statistics can be found at <https://www.worldometers.info/coronavirus/> or on the [World Health Organization (WHO) site](https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports). Although countries have already started quarantines and have adopted extreme countermeasures (such as closing restaurants or forbidding events with multiple people), the numbers of cases and deaths will probably still increase in the next weeks.
Given that this pandemic concerns all of us, including people interested in AI, such as myself, it may be useful to share information about the possible current applications of AI to slow down the spread of SARS-CoV-2, to help infected people or people in the healthcare sector that have been uninterruptedly working for hours to attempt to save more lives, while putting at risk their own.
What are the existing AI technologies (e.g. computer vision or robotics tools) that are already being used to tackle these issues, such as slowing down the spread of SARS-CoV-2 or helping infected people?
I am looking for references that prove that the mentioned technologies are really being used. I am not looking for potential AI technologies (i.e. research work) that could potentially be helpful. Furthermore, I am **not** looking for data analysis tools (e.g. sites that show the evolution of the spread of coronavirus, etc.)<issue_comment>username_1: According to the Baidu Research's blog post [How Baidu is harnessing the power of AI in the battle against coronavirus](http://research.baidu.com/Blog/index-view?id=133) (12-03-2020), there are already some artificial intelligence tools or algorithms being used to fight the coronavirus.
Given that I cannot confirm that these AI tools and algorithms I will mention are really being used in practice, I will only quote the parts of the blog post that potentially answer my original question.
To give some context, similar to HIV viruses, the virus that is causing the coronavirus pandemic, [SARS-CoV-2](https://en.wikipedia.org/wiki/Severe_acute_respiratory_syndrome_coronavirus_2) is [capable of rapidly mutating, making vaccine development and virus analysis difficult](http://research.baidu.com/Blog/index-view?id=133).
### AI-powered and non-contact infrared sensor system
>
> Baidu has developed several tools that are effective in building awareness and screening populations, including **an AI-powered, non-contact infrared sensor system** that provides users with fast multi-person temperature monitoring that can quickly **detect a person if they are suspected of having a fever**, one of the many symptoms of the coronavirus. This technology is currently being **used in Beijing's Qinghe Railway Station** to identify passengers who are potentially infected where it can examine up to 200 people in one minute without disrupting passenger flow.
>
>
>
### AI-powered pneumonia screening and lesion detection system
>
> By leveraging PaddlePaddle and the semantic segmentation toolkit PaddleSeg, LinkingMed has developed an **AI-powered pneumonia screening and lesion detection system**, putting it into **use in the hospital affiliated with XiangNan University in Hunan Province**. The system can pinpoint the disease in less than one minute, with a detection accuracy of 92% and a recall rate of 97% on test data sets.
>
>
>
### Automated HealthMap system
>
> The **Boston Children's Hospital** used an **automated HealthMap system** that scans online news and social media **reports for early warning signs of outbreaks**, which **led to the initial awareness that COVID-19** was spreading outside China.
>
>
>
### Autonomous vehicles carry out non-contact tasks
>
> Access to health care and resources at a moment's notice is vital for battling the spread of the coronavirus. Autonomous vehicles are playing a useful role in providing access to necessary commodities for health-care professionals and the public alike by delivering goods in infected areas and disinfecting hospitals, effectively minimizing person-to-person transmission and alleviating the shortage of medical staff.
>
>
> **Apollo**, Baidu's autonomous vehicle platform, partnered with a local self-driving startup called Neolix to deliver supplies and food to the **Beijing Haidian Hospital**.
>
>
>
Upvotes: 2 <issue_comment>username_2: Not really something that slows it down, but currently <NAME> at Harvard is working on modeling the pandemic and has shared some of his approaches. He explained that they have used Google search trends to try to predict the number of actual cases (there is a delay between people getting sick and getting tested). Looking for search terms like "how to use an inhaler?" can reveal areas affected by the outbreak and is useful for modeling.
Paper can be found [here](https://arxiv.org/pdf/2004.04019.pdf)
Towards Data Science has several articles listing potential AI applications for helping in the fight against COVID
Including:
1. Identify who is most at risk
2. Diagnose patients
3. Develop drugs faster
4. Predict the spread of the disease
5. Understand viruses better
6. Map where viruses come from
7. Predict the next pandemic
[How to fight COVID-19 with machine learning](https://towardsdatascience.com/fight-covid-19-with-machine-learning-1d1106192d84)
Though there are probably many creative applications that AI can help with I think most of them are modeling related at the moment.
Upvotes: 1 <issue_comment>username_3: >
> What are the AI technologies currently used to fight the coronavirus pandemic?
>
>
>
You don't define what AI means to you. Let's suppose it means *advanced informatics* (but perhaps also [abstract interpretation](https://en.wikipedia.org/wiki/Abstract_interpretation)).
In France we have polling websites like <https://covidnet.fr/> ... Does that count as AI?
And we also have Deep Tech related Covid19 specific call for proposals, e.g. [here](https://systematic-paris-region.org/fr/actualite/appel-a-projets-structurants-pour-la-competitivite-specifique-a-la-crise-sanitaire-covid-19/) ... does that count as AI?
Does the French [StopCovid](https://stopcovid19.fr/) platform count as AI? It is an open-source digital project (using Bluetooth) to trace potentially Covid-infected people. The current political debate involves intense discussions about its legitimacy and deployment.
Lastly, the development of Covid-specific ventilators (e.g. [this](https://github.com/Recovid/) open hardware / open software project) involves a lot of embedded software, which could be analyzed by static analyzers such as [Frama-C](https://frama-c.com/). Does that count as AI?
My employer [CEA](https://www.cea.fr/) (a scaled-down French equivalent of the [US DoE](https://en.wikipedia.org/wiki/United_States_Department_of_Energy)) is an applied research institution participating in several Covid-related projects.
And several French research institutions (e.g. [INSERM](https://www.inserm.fr/), [CNRS](http://www.cnrs.fr/), [INRIA](https://www.inria.fr/), [LIP6](https://www.lip6.fr/), etc...) are participating in Covid-related research programmes. Also, major health organizations like [AP-HP](https://www.aphp.fr/).
So is the [European Union](https://ec.europa.eu/info/index_en), which is partly funding several of them. Most of the Covid-related research projects have strong digital aspects, in particular related to genome decoding and bioinformatics.
PS. My GitHub [helpcovid](https://github.com/bstarynk/helpcovid/) ongoing project (a [free software](https://www.gnu.org/philosophy/free-sw.en.html) web application related to Covid19) does not claim to be AI, but certainly is a digital application. See also [these](http://refpersys.org/Starynkevitch-CAIA-RefPerSys-2020mar06.pdf) slides about [RefPerSys](https://refpersys.org/) (which *is* an ongoing [free software](https://www.gnu.org/philosophy/free-sw.en.html) ambitious AI project).
Upvotes: 0 |
2020/03/18 | 1,904 | 7,022 | <issue_start>username_0: I'm trying to find out how AI can help with efficient customer service, in fact call routing to the right agent. My usecase is given context of a query from a customer and agents' expertise, how can we do the matching?
Generally, how is this problem solved? What sub-topic within AI is suitable for this problem? Classification, recommender systems, ...? Any pointers to open-source projects would be very helpful.
2020/03/19 | 1,928 | 7,057 | <issue_start>username_0: Some of the NLP applications taken from this link [NLP Applications](https://towardsdatascience.com/natural-language-processing-nlp-top-10-applications-to-know-b2c80bd428cb):
* Machine Translation
* Speech Recognition
* Sentiment Analysis
* Question Answering
* Automatic Summarization
* Chatbots
* Market Intelligence
* Text Classification
* Character Recognition
* Spell Check
Which of these NLP applications use recurrent neural networks?
2020/03/24 | 1,798 | 6,787 | <issue_start>username_0: I know they are not the same in working, but an input layer sends the input to $n$ neurons with a set of weights, based on these weights and the activation layer, it produces an output that can be fed to the next layer.
Aren't the filters the same, in the way that they convert an "image" to a new "image" based on the weights that are in that filter? And that the next layer uses this new "image"?<issue_comment>username_1: **tl;dr** The equivalent to a neuron in a Fully-Connected (FC) layer is the **kernel** (or filter) of a Convolution layer
### Differences
The neurons of these two types of layers have two key differences. These are that the convolution layers implement:
* **Sparse connectivity**, i.e. each neuron is connected only to an area of the input, not the whole.
* **Weight sharing**, i.e. *similar* connections end up having the same weights. This is usually visualized as the same filter traversing the image.
Besides these two key differences, there are some other technical details, e.g. [how the biases are implemented](https://stackoverflow.com/a/45140686/5668710). Other than that they perform the same operation.
What causes some confusion is that the input of a CNN is usually 2 or 3-dimensional, while a FC is usually 1-dimensional. These aren't mandatory however. To better help visualize the differences between the two I made a couple of figures illustrating the differences between a conv-layer and a FC one, both in 1D.
### Sparse connectivity
On the left are two FC neural networks. On the right, are layers with sparse connections.

### Weight sharing
On the left is a sparsely connected network. The colors represent the different values of the weights. On the right is the same network with weight-sharing. Note that similar weights (i.e. arrows with the same direction in each layer) have the same value.

---
To answer your other questions:
>
> Are filters not the same in the way that they convert an "image" to a new "image" based on the weights that are in that filter? And that the next layers use these new "images"?
>
>
>
Yes, if the input of a convolution layer is an image, so will the output. The next layer will also operate on an image.
However, I'd like to note that not all convolution layers accept images as their inputs. There are 1D and 3D convolutional layers as well.
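As a minimal illustration of weight sharing (this snippet is mine, not part of the answer), a 1d convolution can be written as an explicit loop in which the *same* kernel weights are reused at every input position:

```python
# Sketch: 1d convolution (no padding, stride 1). The same kernel is
# applied at every position -- this is the weight sharing described above,
# and the limited window is the sparse connectivity.
def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

print(conv1d([1, 2, 3, 4], [1, 0, -1]))  # [-2, -2]
```

Each output element plays the role of one neuron: it sees only `k` inputs (sparse connectivity) and all of them share the same 3 weights.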
Upvotes: 3 [selected_answer]<issue_comment>username_2: The [other answer](https://ai.stackexchange.com/a/18796/2444) gives a good overview of the differences between MLPs and CNNs, and it includes 2 diagrams that attempt to illustrate the main differences between MLPs and CNNs, i.e. sparse connectivity and weight sharing. However, these diagrams do not clarify what a **neuron** in a CNN could be. A better diagram, which illustrates what a neuron is in a CNN, from a CNN *and* MLP perspective, is the following (taken from [the famous article on CNNs](https://cs231n.github.io/convolutional-networks/)).
[](https://i.stack.imgur.com/o2CwW.jpg)
Here, there are 2 main blocks (aka volumes): the orange block on the left (the input) and the blue/cyan volume on the right (the **feature maps**, i.e. the outputs of the convolutional layer, i.e. after the application of the convolutions with different kernels).
The *circles* in the visible stack of the cyan block represent the neurons (or, more precisely, their **activations** or outputs). We only see $k=5$ neurons stacked: this corresponds to the application of $k=5$ different kernels (i.e. weights) to that specific subset of the input (aka [**receptive field**](https://www.youtube.com/watch?v=70A3uYfM1qA)), hence the **sparse connectivity** of CNNs. So, these neurons, in the same stack, are looking at the same small subset of the input, but with different weights (i.e. kernels). The neurons, which are not shown in this diagram, that are on the same (vertical) 2d plane (known as **feature map**) of the same neuron (e.g. the first that we see from left to right) in the cyan volume are the neurons that share the same weights, i.e. we use the same kernel to produce their outputs.
So, in this biological/neuroscientific view of the CNN, when you apply [the convolution (or cross-correlation) operation](https://ai.stackexchange.com/q/21999/2444) with 1 specific filter (or kernel), you are computing the activation (not to be confused with the *activation function*, which is used to compute the activation!) i.e. the output of multiple neurons, all of them share the same weights. You stack all these activations on the same 2d plane (known as **feature map**) of the output volume: note that this operation is just the convolution operation! When you compute the convolution with another kernel, you are again computing the activation of other multiple neurons, which share *another different* weight matrix, and so on and so forth.
Some authors prefer to use the term [**convolutional networks**](https://www.deeplearningbook.org/contents/convnets.html), i.e. without the term **neural**, probably because of this issue, i.e. it's not clear, especially to newcomers, what a neuron would be in a CNN, so the neuroscientific/biological view of CNNs is not always clear, although it's important to emphasize that [CNNs were inspired by the visual cortext](https://ai.stackexchange.com/a/24969/2444), so this biological interpretation could (and should) be more widely known or less confusing/misunderstood.
Now, let's address your question more directly.
>
> Aren't the filters the same, in the way that they convert an "image" to a new "image" based on the weights that are in that filter? And that the next layer uses this new "image"?
>
>
>
The filters in a CNN correspond to the weights of an MLP.
A neuron in a CNN can be viewed as performing exactly the same operation as a neuron in an MLP. The big differences between a CNN and an MLP (as explained also in [the other answer](https://ai.stackexchange.com/a/18796/2444)) are
* **Weight sharing**: Some neurons (not all!) in the same convolutional layer share the same weights. The convolution (or cross-correlation) is the operation that implements this partial forward pass with the same weights for different neurons.
* Neurons in a CNN only look at a **subset** of the input and not all inputs (i.e. receptive field), which leads to some notion of **sparse connectivity**
* A convolutional layer, in a CNN, is composed of neurons in a **3-dimensional** volume (or, more precisely, their activations are organized in a 3d volume), rather than a 1-dimensional one, as in an MLP.
* CNNs may use subsampling (aka pooling)
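The "stack" of $k$ neurons looking at the same receptive field, described above, can be sketched as follows (an illustration of mine, not from the answer): each of the $k$ kernels produces one activation for that receptive field, one per feature map.

```python
# Sketch: k kernels applied to the same receptive field yield k
# activations -- one per feature map in the output volume.
def stack_activations(receptive_field, kernels):
    # receptive_field: flattened inputs; kernels: list of flattened weight vectors
    return [sum(w * x for w, x in zip(kernel, receptive_field))
            for kernel in kernels]

print(stack_activations([1.0, 2.0], [[1.0, 0.0], [0.5, 0.5]]))  # [1.0, 1.5]
```

Sliding the same set of kernels over every receptive field of the input, and stacking the results, produces the full 3d output volume.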
Upvotes: 0 |
2020/03/25 | 1,565 | 4,960 | <issue_start>username_0: What are some (good) online courses for deep reinforcement learning?
I would like the course to cover both programming and theory. I really liked [David Silver's course](https://www.youtube.com/playlist?list=PLzuuYNsE1EZAXYR4FJ75jcJseBmo4KQ9-), but it dates from 2015 and doesn't really cover deep Q-learning.
Upvotes: 3 <issue_comment>username_2: Let me first say that deep RL is just the combination of RL with deep learning. So, if you study RL and deep learning, then studying deep RL should be straightforward. For this reason, this answer will point the reader to potentially useful courses on RL (also because there aren't many free courses completely dedicated to deep RL), which have at least one lesson on deep RL or function approximation. I have only followed the course by Isbell and Littman and partially the course by David Silver, so I can't ensure you that the other courses are good, but I found these two useful, although not perfect.
| Title | Instructor(s) | Focus on deep RL? | Topics | Free |
| --- | --- | --- | --- | --- |
| [Reinforcement Learning](https://classroom.udacity.com/courses/ud600/) | <NAME>, <NAME> | No | TD learning, convergence, function approximation, POMDP, options, game theory | Yes |
| [Introduction to Reinforcement Learning with David Silver](https://www.youtube.com/playlist?list=PLqYmG7hTraZBiG_XpjnPrSNw-1XQaM_gB) | <NAME> | No | MDPs, planning, dynamic programming, model-free prediction and control, function approximation, policy gradients, exploration and exploitation | Yes |
| [CS234: Reinforcement Learning Winter 2020](https://www.youtube.com/watch?v=FgzM3zpZ55o&list=PLoROMvodv4rOSOPzutgyCTapiGlY2Nd8u) | <NAME> | No | See the [course schedule](http://web.stanford.edu/class/cs234/schedule.html); [lesson 6](https://www.youtube.com/watch?v=gOV8-bC1_KU&list=PLoROMvodv4rOSOPzutgyCTapiGlY2Nd8u&index=6) is about DRL | Yes |
| [Reinforcement Learning](https://nptel.ac.in/courses/106/106/106106143) | NPTEL | No | Bandits, MDPs, policy gradients, dynamic programming, TD learning, function approximation, hierarchical RL, POMDP | Yes |
| [Reinforcement Learning in the Open AI Gym](https://www.youtube.com/watch?v=P9XezMuPfLE&list=PL-9x0_FO_lglnlYextpvu39E7vWzHhtNO&index=1&ab_channel=MachineLearningwithPhil) | <NAME> | ? | SARSA, double Q-learning, Monte Carlo methods, Q-learning | Yes |
| [Advanced Deep Learning & Reinforcement Learning](https://www.youtube.com/watch?v=iOh7QUZGyiU&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs) | DeepMind | No | [video 14](https://www.youtube.com/watch?v=L6xaQ501jEs&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs&index=14) discusses DRL topics | Yes |
| [Advanced AI: Deep Reinforcement Learning in Python](https://www.udemy.com/course/deep-reinforcement-learning-in-python/) | Udemy | Yes, it seems | ? | No |
| [Machine Learning: Beginner Reinforcement Learning in Python](https://www.udemy.com/course/machine-learning-beginner-reinforcement-learning-in-python/) | Udemy | ? | ? | No |
| [Deep Reinforcement Learning 2.0](https://www.udemy.com/course/deep-reinforcement-learning/) | Udemy | ? | ? | No |
| [Modern Reinforcement Learning: Deep Q Learning in PyTorch](https://www.udemy.com/course/deep-q-learning-from-paper-to-code/?couponCode=DQN-JUN-3-2020) | <NAME> (Udemy) | ? | ? | No |
| [Modern Reinforcement Learning: Actor-Critic Methods](https://www.udemy.com/course/actor-critic-methods-from-paper-to-code-with-pytorch/?couponCode=AC-JUN-3-2020) | <NAME> (Udemy) | ? | ? | No |
In any case, if you are familiar with RL and deep learning topics, I encourage you to directly read the DQN papers (both by DeepMind folks)
* [Playing Atari with Deep Reinforcement Learning](https://www.cs.toronto.edu/%7Evmnih/docs/dqn.pdf) (2013)
* [Human-level control through deep reinforcement learning](https://www.nature.com/articles/nature14236) (2015)
Of course, deep RL isn't just DQN, but these are two very important papers that you should read. Other key papers on deep RL can be found [here](https://spinningup.openai.com/en/latest/spinningup/keypapers.html).
Note that, depending on your experience with and knowledge of RL and DL, you may require a few iterations to fully understand these papers, but this applies every time you need to [read a research paper](https://ai.stackexchange.com/q/22269/2444).
Upvotes: 4 [selected_answer] |
2020/03/28 | 583 | 2,320 | <issue_start>username_0: I am learning deep learning from <NAME>'s tutorial [Mini-batch Gradient Descent](https://www.coursera.org/learn/deep-neural-network/lecture/qcogH/mini-batch-gradient-descent).
Can anyone explain the similarities and dissimilarities between batch GD and mini-batch GD?<issue_comment>username_1: It is really simple.
In gradient descent not using mini-batches, you feed your entire training set of data into the network and accumulate a cost function based on this full set of data. Then you use gradient descent to adjust the network weights to minimize the cost. Then you repeat this process until you get a satisfactory level of accuracy. For example, if you have a training set consisting of 50,000 samples, you would feed all 50,000 samples along with the 50,000 labels into the network, then perform gradient descent and update the weights. This is a slow process because you have to process 50,000 inputs to do just one step of gradient descent.
To make things go faster, instead of running all 50,000 inputs through the network, you split up the training set into "batches". For example, you could break the training set up into 50 batches, each containing 1000 samples. You would feed the network the first batch of 1000 samples, accumulate the loss value, then perform gradient descent and adjust the weights. Then you feed in the next batch of 1000 samples and repeat the process. So, now, instead of only getting one step of gradient descent for 50,000 samples, you get 50 steps of gradient descent. This method of using batches leads to a much faster convergence of the network.
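The two procedures can be contrasted with a toy sketch (not from the course; the one-parameter model, the data, and the learning rate below are made up for illustration):

```python
# Toy comparison: full-batch GD (one update per pass over the data) vs.
# mini-batch GD (one update per batch, several per pass), fitting y = w*x.
data = [(x, 2.0 * x) for x in range(1, 11)]   # perfectly fits w = 2.0
lr = 0.002

def grad(w, batch):
    # derivative of the mean squared error over the batch w.r.t. w
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

w_full = 0.0
for _ in range(100):                  # 100 passes -> 100 updates
    w_full -= lr * grad(w_full, data)

w_mini = 0.0
for _ in range(100):                  # 100 passes -> 500 updates (5 batches each)
    for i in range(0, len(data), 2):
        w_mini -= lr * grad(w_mini, data[i:i + 2])

print(round(w_full, 3), round(w_mini, 3))  # both approach the true value 2.0
```

With the same number of passes over the data, the mini-batch version performs five times as many weight updates, which is the speed-up described above.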
Upvotes: 2 <issue_comment>username_2: The other answer provides a correct description of what is often called "gradient descent" and "mini-batch gradient descent", respectively, but doesn't clarify the terminology used, so let me add a few notes about that.
From my experience, "batch GD" and "mini-batch GD" can refer to the same algorithm **or not**, i.e. some people may use "batch GD" and "mini-batch GD" interchangeably, but other people may use "batch GD" to refer to what the author of the other answer calls "gradient descent not using mini-batches", i.e. you use all training data before performing a GD step, which is sometimes just called "gradient descent" (as I wrote above).
Upvotes: 0 |
2020/03/28 | 448 | 1,916 | <issue_start>username_0: As a first step in many NLP courses, we learn about text preprocessing. The steps include lemmatization, removal of rare words, correcting typos etc. But I am not so sure about the actual effectiveness of doing such a step; in particular, if we are learning a neural network for a downstream task, it seems like modern state of the art (BERT, GPT-2) just take essentially raw input.
For instance, [this ACL paper](https://www.aclweb.org/anthology/W18-5406.pdf) seems to show that the result of text preprocessing is mixed, to say the least.
So is text preprocessing really all that necessary for NLP? In particular, I want to contrast/compare against vision and tabular data, where I have empirically found that standardization usually actually does help. Feel free to share your personal experiences/what use cases where text preprocessing helps!<issue_comment>username_1: It all depends on the quality of the data, per the old rule ["garbage in, garbage out"](https://en.wikipedia.org/wiki/Garbage_in,_garbage_out):
if you have bad-quality data (data redundancy, unstructured data, etc.), your results won't be spectacular.
Otherwise, everybody could be a data scientist, because the only task would be to "put raw text into a classifier". Also, you should remember that BERT and GPT-2 are deep learning models, so they don't need much preprocessing. Preprocessing matters more in classical machine learning (sentiment prediction, for example).
In short, preprocessing is **optional**, but highly **advisable**.
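As a minimal illustration of what such preprocessing looks like in practice (the corpus and the rare-word threshold below are made up for this sketch):

```python
# Minimal sketch of classic text preprocessing: lowercase, tokenize,
# and drop rare words.
from collections import Counter
import re

corpus = [
    "The movie was great, GREAT acting!",
    "The plot was thin but the acting was great.",
    "Terrible pacing, thin plot.",
]

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

tokens = [tokenize(doc) for doc in corpus]
counts = Counter(t for doc in tokens for t in doc)

min_count = 2  # drop words that appear only once in the corpus
cleaned = [[t for t in doc if counts[t] >= min_count] for doc in tokens]

print(cleaned[0])  # ['the', 'was', 'great', 'great', 'acting']
```

Here the singleton word "movie" is removed from the first document; lemmatization and typo correction would be extra steps on top of this.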
Upvotes: 2 <issue_comment>username_2: It depends on the dataset we have and the algorithm we use; usually, text preprocessing can help your model perform better, but some preprocessing methods may have no significant impact on accuracy. We need to choose the preprocessing methods that help us build a better-quality dataset to give to the model.
Upvotes: 0 |
2020/03/29 | 1,032 | 3,362 | <issue_start>username_0: According to [this blog post](https://www.deeplearning-academy.com/p/ai-wiki-activation-functions)
>
> The purpose of an activation function is to add some kind of non-linear property to the function
>
>
>
The sigmoid is typically used as an activation function of a unit of a neural network in order to introduce non-linearity.
Is ReLU a non-linear activation function? And why? If not, then why is it used as an activation function of neural networks?<issue_comment>username_1: **Short Answer:** Yes
**Visually:**
[](https://i.stack.imgur.com/Yqgclm.png)
If you look at the image from [wikipedia](https://en.wikipedia.org/wiki/Linear_function), it shows that ReLU (the blue line) is non-linear (the line is not straight; it bends at 0). You can also check the "visual" definition of a linear function on [wikipedia](https://en.wikipedia.org/wiki/Linear_function):
>
> "In calculus and related areas, a linear function is a function whose graph is a straight line"
>
>
>
**Mathematically:**
A linear function of one variable can be defined as:
$ f(x) = ax + b $
If you plot that function in 2D, it will give you a straight line. A linear function of multiple variables has the form:
$ f(x\_1, x\_2, ..., x\_n) = a\_1x\_1 + a\_2x\_2 + ... + a\_nx\_n + b $
If you again plot that function in the corresponding dimension, it also gives you a straight line (a hyperplane). And if you look at that function carefully, it is similar to the computation that happens in a neuron. That's why a neuron's weighted sum plus bias is a linear function:
$ f(x\_1, x\_2, ..., x\_n) = w\_1x\_1 + w\_2x\_2 + ... + w\_nx\_n + b $
Adding more layers of linear functions doesn't make the function more "complex". For example, if you have $f(x)$ as below and then you put another linear function $g(x)$ on top of it:
$f(x) = ax + b$
$g(x) = cf(x) + d = cax + cb + d$
As the neural network is trained to find the values of $a, b, c, d$, we can group the constants in the formula above and rewrite it as:
$h(x) = mx + n$
with $m=ca$ and $n=cb+d$. So, without a non-linear function, adding layers to a neural network is useless: it only gives you another "simple" linear function.
The ReLU formula is $f(x)=\max(0,x)$; it is non-linear, as it cannot be written in the linear-function form above. Using this function gives you "complexity" when you add more layers on top of it.
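A quick numeric check of this: a linear function must satisfy additivity, $f(a+b) = f(a) + f(b)$, and ReLU violates it:

```python
# A linear function must satisfy additivity, f(a + b) == f(a) + f(b).
# ReLU fails this check, so it cannot be linear.
def relu(x):
    return max(0.0, x)

a, b = -1.0, 1.0
print(relu(a + b), relu(a) + relu(b))  # 0.0 1.0 -- additivity is violated
```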
Upvotes: 2 <issue_comment>username_2: ReLU is non-linear by definition
================================
>
> In calculus and related areas, a [linear function](https://en.wikipedia.org/wiki/Linear_function) is a function whose graph is a straight line, that is a polynomial function of degree one or zero.
>
>
>
Since the graph of the ReLU function $f(x) = \max(0,x)$ is not a straight line (equivalently, it cannot be expressed in the form $f(x) = mx + c$), by definition it is not linear.
ReLU *is* piecewise linear
==========================
ReLU *is* [piecewise linear](https://en.wikipedia.org/wiki/Piecewise_linear_function) on the intervals $(-\infty,0]$ and $[0,\infty)$:
$$
f(x) = \max(0,x) = \begin{cases}
0 & x \le 0\\
x & x \gt 0\\
\end{cases}
$$
But this is still non-linear on the entire domain:
[](https://i.stack.imgur.com/Gg4bV.png)
Upvotes: 2 |
2020/03/29 | 1,182 | 4,533 | <issue_start>username_0: I was reading online that tic-tac-toe has a state space of $3^9 = 19,683$. From my basic understanding, this sounds too large to use tabular Q-learning, as the Q table would be huge. Is this correct?
If that is the case, can you suggest other (non-NN) algorithms I could use to create a TTT bot to play against a human player?<issue_comment>username_1: In the case of TicTacToe, you can make use of game theory. The entire search space can be represented by a [game tree](https://en.wikipedia.org/wiki/Game_tree). Your bot must now be able to maximize the chance of winning.
You can make use of the [Maximin algorithm](https://en.wikipedia.org/wiki/Minimax#Minimax_algorithm_with_alternate_moves). This is still computationally intensive on large search spaces. To improve the efficiency, [Alpha-Beta pruning](https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning) can be applied to reduce the number of nodes searched in the game tree.
These are core AI concepts and will always perform better than neural networks on fully defined and relatively small search spaces. Neural networks perform better when it's too difficult to compute all the possible combinations of a game at a certain state.
You can have a look at [this](https://www.freecodecamp.org/news/how-to-make-your-tic-tac-toe-game-unbeatable-by-using-the-minimax-algorithm-9d690bad4b37/) to build a TicTacToe bot.
Upvotes: 0 <issue_comment>username_2: >
> I was reading online that tic-tac-toe has a state space of $3^9 = 19,683$. From my basic understanding, this sounds too large to use tabular Q-learning, as the Q table would be huge. Is this correct?
>
>
>
That is a relatively small number of states that can easily be represented in a table on a modern computer. For a Q table, you would multiply by the number of moves possible in each state, but this is still a small amount of memory. Even with the most naive implementation that tracked impossible states and state/action pairs, using string state representations (so 10 bytes for each state/action key), and using double precision floating point for each action value (another 8 bytes), the full table would be around 3 megabytes in size.
So it is definitely possible to use tabular Q learning here. I have done just that whilst learning about RL - [my Q-learning Tic Tac Toe agent is written in Python and available on Github](https://github.com/neilslater/game_playing_scripts/blob/master/tictactoe_q.py). There are many ways to optimise the required space, e.g. only representing reachable states. Also in games with perfect control (moves directly result in a new state), it is common to use [afterstate values](https://stats.stackexchange.com/questions/411932/reinforcement-learning-afterstate-and-afterstate-value-functions) instead of action values.
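To back up the claim that the table is small, here is a quick, self-contained enumeration of the positions actually reachable in play (a sketch independent of the linked implementation); far fewer than $3^9$ states ever occur:

```python
# Enumerate every tic-tac-toe position reachable by legal play, to show
# that the tabular approach is cheap.
def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for i, j, k in lines:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def reachable_states():
    seen, frontier = set(), [' ' * 9]
    while frontier:
        board = frontier.pop()
        if board in seen:
            continue
        seen.add(board)
        if winner(board) or ' ' not in board:
            continue  # terminal position: no further moves
        player = 'X' if board.count('X') == board.count('O') else 'O'
        for i, c in enumerate(board):
            if c == ' ':
                frontier.append(board[:i] + player + board[i + 1:])
    return seen

states = reachable_states()
print(len(states))  # far fewer than 3^9 = 19,683 positions ever occur
```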
>
> If that is the case, can you suggest other (non-NN) algorithms I could use to create a tic-tac-toe agent to play against a human player?
>
>
>
It is not the case, as explained. However, there are two basic approaches at the top level that are worth learning about for working with game trees:
* Forward planning or search algorithms, as mentioned by [username_1 in their answer](https://ai.stackexchange.com/a/19870/1847). These use game rules plus a heuristic (measure of likely success) to explore the future states of the game and pick the search result with the best heuristic. Minimax with a heuristic only based on win/lose at the end is a very basic approach, but would cope just fine with Tic Tac Toe.
+ Other planning algorithms include Negamax (a minor variation of Minimax for zero-sum games) and MCTS (famously used in Alpha Go).
* Policy function improvement, which generate policies - maps of current state to actions - and have ways to assess them and select better policies.
+ Q learning, and the many algorithms of reinforcement learning are in this category, as well as genetic algorithms.
+ Tic Tac Toe is simple enough that you could hard-code a policy function that mapped any state to the next action. The [xkcd webcomic encoded this policy into a diagram](https://xkcd.com/832/).
These two approaches are complementary, in that both can be used together to solve more sophisticated problems. For instance, value-based reinforcement learning algorithms - including Q learning - can be used to provide learned heuristics for a search algorithm. The categories I suggest above are not strict either, in that some algorithms are not clearly one thing or another.
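As a sketch of the first approach, here is a minimal memoized minimax for Tic Tac Toe using only a win/lose/draw heuristic (an illustrative implementation, not taken from any of the linked resources); under perfect play from both sides the game value is a draw:

```python
# Minimal memoized minimax for tic-tac-toe. 'X' maximises, 'O' minimises;
# terminal values are +1 (X wins), -1 (O wins), 0 (draw).
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0  # draw
    nxt = 'O' if player == 'X' else 'X'
    values = [minimax(board[:i] + player + board[i + 1:], nxt)
              for i, c in enumerate(board) if c == ' ']
    return max(values) if player == 'X' else min(values)

print(minimax(' ' * 9, 'X'))  # 0: perfect play by both sides is a draw
```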
Upvotes: 3 [selected_answer] |
2020/03/30 | 467 | 1,581 | <issue_start>username_0: I'm trying to build a neural network between protein sequence and its drug fingerprint. My input size is 20000. The output size is 881. The sample size is 610.
Can I process this huge neural network? But how? And in which tool?<issue_comment>username_1: It sure is possible; a CNN can handle a way bigger number of inputs. An image of size `512x512` already has `262144` input nodes when rearranged into a one-row vector. The trick since 2012/2014 is to use convolutions, and deep ones, e.g. stacking a lot of `3x3` convolutions. They are way less sensitive than a fully-connected dense network and need significantly fewer parameters. For more, check out `chapter 9` of [Ian Goodfellow, Deep Learning](https://github.com/janishar/mit-deep-learning-book-pdf/blob/master/complete-book-pdf/Ian%20Goodfellow%2C%20Yoshua%20Bengio%2C%20Aaron%20Courville%20-%20Deep%20Learning%20(2017%2C%20MIT).pdf)
Tools for that are [tensorflow](https://www.tensorflow.org/) and [keras](https://keras.io/), based on Python, or tensorflow-js for JavaScript; you can also use pytorch, but its community is rather small in comparison.
Upvotes: 0 <issue_comment>username_2: Yes, it should be no problem.
When you decide to use a CNN, you have to make sure that this makes sense. Another answer mentioned using `3x3` convolutions -- which I would recommend *against*. For that to work, you would need to turn your vector into a rectangular array, and you would be implying a structure that isn't there.
Use one-dimensional convolutions instead.
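As a rough sketch of why this helps (the layer sizes below are illustrative assumptions, not a recommended architecture): a tiny 'valid'-mode 1-D convolution, plus a parameter count comparison against a dense layer on the 20000-long input:

```python
# Why 1-D convolutions scale: weights are shared along the sequence, so
# the parameter count does not grow with the input length.
def conv1d(x, kernel):
    k = len(kernel)
    return [sum(kernel[j] * x[i + j] for j in range(k))
            for i in range(len(x) - k + 1)]

print(conv1d([1, 2, 3, 4], [1, 0, -1]))  # [-2, -2]

dense_params = 20000 * 100  # dense layer: 20000 inputs x 100 units
conv_params = 9 * 100       # conv layer: 100 filters of width 9, shared
print(dense_params, conv_params)  # 2000000 900
```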
Upvotes: 2 |
2020/03/30 | 2,025 | 7,622 | <issue_start>username_0: [](https://i.stack.imgur.com/7iBKO.png)
The image above, a screenshot from [this article](https://towardsdatascience.com/intuitively-understanding-convolutions-for-deep-learning-1f6f42faee1), describes discrete 2D convolutions as *linear transforms.* The idea used, as far as I understand, is to represent the 2 dimensional $n$x$n$ input grid as a vector of $n^2$ length, and the $m$x$m$ output grid as a vector of $m^2$ length. I don't see why this can't be generalised to higher-dimensional convolutions, since a transformation matrix can be constructed from one vector to another, no matter the length (**right?**)
My question: **Aren't all discrete convolutions (not just 2D) linear transforms?**
Are there cases where such a transformation matrix cannot be found?<issue_comment>username_1: The convolutions are linear transformations. However in typical applications a non linear activation function like RELU is used following the convolution to provide non-linearity otherwise a convolutional neural network would just be a net linear transformation.
Upvotes: 1 <issue_comment>username_2: Convolution is a pretty misused term in recent times with the advent of CNN.
Short answer, Convolution is a linear operator (check [here](https://en.wikipedia.org/wiki/Convolution#Properties)) but what you are defining in context of CNN is not convolution, it is cross-correlation which is also linear in case of images (dot product).
**Convolution:**
Computers work only on discrete values, so the discrete time convolution is defined similarly as:
$$
y[n] = \sum\_{k=-\infty}^{\infty} x[k] h[n-k]
$$
which has the nice property of
$$Y(e^{j\omega}) = X(e^{j \omega})H(e^{j \omega})$$
where each $A(e^{j\omega})$ is the Fourier Transform of $a(t)$
So this is the basic idea of discrete convolution. The point to illustrate here is that convolution operation has some nice properties and very helpful.
Now there are 2 main types of convolution (as far as I know). One is linear convolution another is circular convolution. In terms of $2D$ signal they are defined in the following ways:
$$y[m,n] = \sum\_{i=-\infty}^{\infty}\sum\_{j=-\infty}^{\infty} x[i,j]h[m-i,n-j]$$
Circular convolution is the same except that the input sequence is finite from $0$ to $N-1$ giving rise to periodicity in frequency domain. The convolution operation is a little bit different due to this periodicity.
So these are the actual definitions of convolution. It has the linearity property (clearly, since it is used to calculate the output of an [LTI system](https://en.wikipedia.org/wiki/Linear_time-invariant_system)), and it can also be expressed in terms of matrix multiplications, since we want the computer to do these calculations for us. There are many clever manipulations, e.g. the [FFT algorithm](https://en.wikipedia.org/wiki/Fast_Fourier_transform), an indispensable tool in signal processing (used to convert signals to the frequency domain for a certain sampling rate). Similarly, you can define convolutions in terms of Hermitian matrices and the like (only if you want to process in the $n$ domain; it is much easier to process in the frequency domain).
For example a $1D$ circular convolution between 2 signals in $n$ domain defined in matrix form, can be written as follows $y\_n=h\_n\*x\_n$:
where
$$y\_t = \begin{bmatrix} y\_0 \\
y\_1 \\
y\_2 \\
y\_3
\end{bmatrix} = \begin{bmatrix}
h\_0 & h\_3 & h\_2 & h\_1\\
h\_1 & h\_0 & h\_3 & h\_2\\
h\_2 & h\_1 & h\_0 & h\_3 \\
h\_3 & h\_2 & h\_1 & h\_0
\end{bmatrix} \begin{bmatrix} x\_0 \\
x\_1 \\
x\_2 \\
x\_3
\end{bmatrix}$$
The same can be done quite easily by converting the functions into frequency domain, multiplying and converting back to time domain.
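A quick numeric sanity check of the above (with made-up signals): the circulant-matrix product equals direct circular convolution, and the operation is linear in $x$:

```python
# Circular convolution two ways: directly, and as a circulant matrix-vector
# product; then a check of linearity in the input signal.
def circ_conv(x, h):
    N = len(x)
    return [sum(x[k] * h[(n - k) % N] for k in range(N)) for n in range(N)]

def circulant_matvec(x, h):
    N = len(x)
    H = [[h[(n - k) % N] for k in range(N)] for n in range(N)]
    return [sum(H[n][k] * x[k] for k in range(N)) for n in range(N)]

x, x2, h = [1, 2, 3, 4], [4, 0, -1, 3], [1, 0, -1, 2]
assert circ_conv(x, h) == circulant_matvec(x, h)

# Linearity: conv(2*x + 5*x2, h) == 2*conv(x, h) + 5*conv(x2, h)
lhs = circ_conv([2 * u + 5 * v for u, v in zip(x, x2)], h)
rhs = [2 * u + 5 * v for u, v in zip(circ_conv(x, h), circ_conv(x2, h))]
assert lhs == rhs
print(circ_conv(x, h))
```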
When we move to the $2D$ domain, a similar equation is formed, except that (in the $n$ domain) instead of $h\_0$ we will have a Hermitian matrix $H\_0$ and we will perform the [Kronecker product](https://en.wikipedia.org/wiki/Kronecker_product) (I don't know the justification or proofs of these equations; they probably satisfy the [convolution theorem](https://en.wikipedia.org/wiki/Convolution_theorem) and are fast when run on computers). Again, much easier to do in the frequency domain.
When we move to multiple dimensions, it is called Multidimensional discrete convolution as per this Wikipedia [article](https://en.wikipedia.org/wiki/Multidimensional_discrete_convolution). As the article [suggests](https://en.wikipedia.org/wiki/Multidimensional_discrete_convolution#Circular_Convolution_of_Discrete-Valued_Multidimensional_Signals) the property where everything is in frequency domain.
$$Y(k\_1,k\_2,...,k\_M)=H(k\_1,k\_2,...,k\_M)X(k\_1,k\_2,...,k\_M)$$
still holds good. When we do convolutions in the $n$ domain things get tricky and we have to do clever matrix manipulations as shown in the example above. Whereas, as stated above things get much easier in the frequency domain.
It's **counter-intuitive** that a picture has a frequency domain, and in general it's not actually frequency. BUT in DSP we use a lot of filters whose mathematical properties are similar to those of filters in the traditional sense of frequency, and hence the calculations are the same as in a frequency setting, as shown in the first example with a $1D$ signal.
The point is, convolution by **definition** is a linear operation. Check [here](http://robotics.itee.uq.edu.au/~elec3004/2013/lectures/Convolution%20(Sec%201.4%20from%20Hayes%20DSP).pdf) if the explanations are unconvincing. There are [Linear Time Varying systems](https://en.wikipedia.org/wiki/Time-variant_system) whose output may be determined by convolution, but then it uses the equation at a certain point in time, i.e:
$$Y(e^{j\omega}, t) = H(e^{j\omega}, t)X(e^{j\omega} )$$.
Now whether it can be represented by matrix products in $n$ domain or not I cannot say. But generalising, it should be possible, only it will involve increasingly complex matrix properties and operations.
**Cross Correlation:**
Cross-correlation is not the same as convolution; it has a bunch of interesting properties of its own, hence I do not like the two being mixed together. In signal processing it is mainly related to finding energy (auto-correlation) and has other applications in communication channels.
While in statistics, it is related to finding the correlation (a misused term, since it can take any value, whereas the correlation coefficient takes values between -1 and 1) between 2 stochastic processes at different time instants, where we have multiple samplings or ensembles of the signal over which the expectation is taken (see [here](https://en.wikipedia.org/wiki/Cross-correlation#Cross-correlation_of_stochastic_processes)). In CNNs, maybe a way to see this would be: the image dataset is an ensemble of a stochastic process, while the filter is another, fixed process, and moving the filter over the image is the time delay.
In the context of digital image processing, cross-correlation is just a dot product, and hence it can be represented by matrices. Whether cross-correlation is linear or not is difficult to tell (at least I don't know), but in the context of image processing it seems **linear**. The bottom line is that it can **definitely be implemented with matrices**, as it is a dot product; whether it is linear in the true mathematical sense is doubtful.
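The relationship between the two operations can be sketched as follows (made-up signals, 'valid' mode): cross-correlation slides the kernel as-is, while convolution flips it first:

```python
# Cross-correlation vs. convolution in 1-D: the only difference is whether
# the kernel is flipped before sliding.
def cross_corr(x, k):
    m = len(k)
    return [sum(x[i + j] * k[j] for j in range(m))
            for i in range(len(x) - m + 1)]

def conv_valid(x, k):
    return cross_corr(x, k[::-1])  # convolution = correlation with flipped kernel

x, k = [1, 2, 3, 4, 5], [1, 0, -1]
print(cross_corr(x, k), conv_valid(x, k))  # [-2, -2, -2] [2, 2, 2]
```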
The more intuitive way in context of CNNs would be to view filters as just a bunch of shared weights for better regularisation instead of cross correlation or convolution.
Upvotes: 4 [selected_answer] |
2020/03/30 | 701 | 2,874 | <issue_start>username_0: I am new to deep learning.
I am training a model and I am getting a root mean squared error (RMSE) greater on the test dataset than on the training dataset.
[](https://i.stack.imgur.com/qlotg.png)
What could be the reason behind this? Is it acceptable to get a greater RMSE on the test data?<issue_comment>username_1: RMSE stands for Root Mean Squared Error. As the name suggests, it is calculated by taking the square root of the mean of the squared errors of individual points.
It is normal for the test error to be higher than the train error; in most cases, it will be.
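For concreteness, RMSE can be computed as follows (the predictions and targets are made up):

```python
# Minimal RMSE computation: square the errors, average them, take the root.
import math

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

print(rmse([3.0, -0.5, 2.0], [2.5, 0.0, 2.0]))  # ~0.408
```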
Upvotes: 0 <issue_comment>username_2: >
> I am training a model and i am getting test results greater than train results.
>
>
>
You don't give us too many details, but most probably it's **underfitting**.
>
> What could be the reason behind this?
>
>
>
* Underfitting is often a result of an excessively simple model.
* Too many regularization techniques were used.
>
> Is this acceptable to get the RMSE greater in test data?
>
>
>
Yes, but it indicates a problem with the model, so you should be aware of the consequences.
Upvotes: 0 <issue_comment>username_3: The training error (on any error metric, not only RMSE) will usually be less than the test error because the same data used to fit the model is employed to assess its training error. In other words, a fitted model usually adapts to the training data and hence its training error will be overly optimistic (too small). In fact, it is often the case that the training error steadily decreases as the size of the model increases.
Upvotes: 0 <issue_comment>username_4: It is common to have root mean squared error (RMSE) greater on the test dataset than on the training dataset (this is equal to having accuracy/score higher for model in training dataset than test dataset). This normally happens because the training data are assesed on the same data that have been learnt before, while the test dataset may have data that are unknown / not common that may give more errors or misclassification when doing prediction.
But if your model shows your test dataset have way too high RMSE result rather than your training dataset RMSE result, it may indicates that overfitting happens.
If overfitting happens, there are a lot of reasons this could happen.
Referenced from <https://elitedatascience.com/overfitting-in-machine-learning>, some factors that causes overfitting are:
* Complexity of data (e.g. there are irrelevant input features). This can be solved with removing irrelevant input features.
* Not enough training data. This can be solved by training with more data (Eventhough this may not always succeed. Sometimes it may give noise towards data), etc.
Upvotes: 1 |
2020/04/01 | 841 | 3,507 | <issue_start>username_0: I have read a lot of information about several notions, like batch\_size, epochs, iterations, but because of explanation was without numerical examples and I am not native speaker, I have some kind of problem of understanding still about those terms, so I decided to work with data. Let us suppose we have the following data
[](https://i.stack.imgur.com/BmtIx.png)
Of course, it is just subset of original data, and I want to build a neural network with three hidden layers, the first layer contains 500 nodes, it takes input three variable and on each node, there is sigmoid activation function, next layer contains 100 node and sigmoid activation, the third one contains 50 node and sigmoid again, and finally we have one output with sigmoid to convert the result into 0 or 1 that classify whether a person with those attributes is female or male.
I trained the model using Keras Tensorflow with the following code
```
model.fit(X_train,y_train,epochs=30)
```
With this data, what does mean `epochs=30`? Does it mean that all 177 rows (with 3 input at times) will go to the model 30 times? What about `batch_size=None` in [model.fit parameters](https://www.tensorflow.org/api_docs/python/tf/keras/Model?version=stable#fit)?<issue_comment>username_1: ok so let me explain in my word how i understood this process:
i know that one sample mean one row, therefore if we have data with size(177,3), that means we have 177 sample. because we have divided X and y into training and test, therefore we have following pairs (X\_train,y\_train) and (X\_test, y\_test)
now about batch size, if we have let say 177 sample(177 row) and 2 batch\_size , that means we have approximately $177/2$ batch right?update process goes like this:
let us suppose network takes 3 input and produce one output, from first sample of data, three data will go to the network and output will be generated, this output will be compared to the first value of y\_train and cost function will be created, then next sample will go(it means next three value) and compared to the second value of y\_train, also second cost function will be generated, final cost function for first batch will be sum of those cost functions and using gradient method weights are updated, after that one new batches will go through the network and on the based on updated weights, new weights are generated, when all $177/2$ batch will be finished , it will be our 1 epoch right? is that correct?
Upvotes: 0 <issue_comment>username_2: Batch size and epochs are independent parameters - they serve very different purposes. Your main question as I understand it (and for general, non-library specific consumption) is **what is an epoch and how is the data used for each epoch?**
Simply put, an epoch is a single iteration though the training data. Each and every sample from your training dataset will be used **once** per epoch, whether it is for training or validation. Therefore, the more epochs, the more the model is trained. The key is to identify the number of epochs that fits the model to the data without overfitting.
Your explanation of how batch size affects the training process is correct but not relevant to the question, since it has no relation to the epoch training iterations. That is not to say that these values should be considered independently, since they have similar effects on the model training process.
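Using the numbers from the question, the bookkeeping works out as follows (per the linked `model.fit` documentation, `batch_size=None` falls back to the default of 32):

```python
# How samples, batch size, and epochs relate for the 177-row dataset.
import math

n_samples, batch_size, epochs = 177, 32, 30

batches_per_epoch = math.ceil(n_samples / batch_size)  # last batch is smaller
updates_total = batches_per_epoch * epochs

print(batches_per_epoch, updates_total)  # 6 180
```

So every one of the 177 rows is seen once per epoch, grouped into 6 batches, and over 30 epochs the weights are updated 180 times.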
Upvotes: 2 [selected_answer] |
2020/04/03 | 571 | 2,521 | <issue_start>username_0: In an episodic training of an RL agent, should I always start from the same initial state or I can start from several valid initial states?
For example, in a gym environment, should my `env.reset()` function always resets me to the same start state or it can start from different states at each training episode?<issue_comment>username_1: ok so let me explain in my word how i understood this process:
i know that one sample mean one row, therefore if we have data with size(177,3), that means we have 177 sample. because we have divided X and y into training and test, therefore we have following pairs (X\_train,y\_train) and (X\_test, y\_test)
now about batch size, if we have let say 177 sample(177 row) and 2 batch\_size , that means we have approximately $177/2$ batch right?update process goes like this:
let us suppose network takes 3 input and produce one output, from first sample of data, three data will go to the network and output will be generated, this output will be compared to the first value of y\_train and cost function will be created, then next sample will go(it means next three value) and compared to the second value of y\_train, also second cost function will be generated, final cost function for first batch will be sum of those cost functions and using gradient method weights are updated, after that one new batches will go through the network and on the based on updated weights, new weights are generated, when all $177/2$ batch will be finished , it will be our 1 epoch right? is that correct?
Upvotes: 0 <issue_comment>username_2: Batch size and epochs are independent parameters - they serve very different purposes. Your main question as I understand it (and for general, non-library specific consumption) is **what is an epoch and how is the data used for each epoch?**
Simply put, an epoch is a single iteration through the training data. Each and every sample from your training dataset will be used **once** per epoch, whether it is for training or validation. Therefore, the more epochs, the more the model is trained. The key is to identify the number of epochs that fits the model to the data without overfitting.
Your explanation of how batch size affects the training process is correct, but it is not relevant to the question, since batch size has no bearing on the number of epoch iterations. That is not to say that these values should be considered independently, since they have similar effects on the model training process.
Upvotes: 2 [selected_answer] |
2020/04/03 | 1,166 | 3,801 | <issue_start>username_0: I have created a neural network that is able to recognize images with the numbers 1-5. The issue is that I have a database of 16x5 images which, unfortunately, is not proving enough, as the neural network fails on the test set. Are there ways to improve a neural network's performance without using more data? The ANN has approximately a 90% accuracy on the training sets and a 50% accuracy on the test ones.
Code:
```
clear
graphics_toolkit("gnuplot")
sigmoid = @(z) 1./(1 + exp(-z));
sig_der = @(y) sigmoid(y).*(1-sigmoid(y));
parse_image; % This external f(x) loads the images so that they can be read.
%13x14
num=0;
for i=1:166
if mod(i-1,10)<=5 && mod(i-1,10) > 0
num=num+1;
data(:,num) = dlmread(strcat("/tmp/",num2str(i)))(:);
end
end
% Forward pass: compute hidden- and output-layer activations and the cost for sample i
function [cost, mid_layer, last_layer] = forward(w1,w2,data,sigmoid,i)
mid_layer(:,1)=sum(w1.*data(:,i));
mid_layer(:,2)=sigmoid(mid_layer(:,1));
last_layer(:,1)=sum(mid_layer(:,2).*w2);
last_layer(:,2)=sigmoid(last_layer(:,1));
exp_res=rem(i,5);
if exp_res==0
exp_res=5;
end
exp_result=zeros(5,1); exp_result(exp_res)=1;
cost = exp_result-last_layer(:,2);
end
% Backward pass: propagate the output error and update both weight matrices
function [w1, w2] = backprop(w1,w2,mid_layer,last_layer,data,cost,sig_der,sigmoid,i)
delta(1:5) = cost;
delta(6:20) = sum(cost' .* w2,2);
w2 = w2 + 0.05 .* delta(1:5) .* mid_layer(:,2) .* sig_der(last_layer(:,1))';
w1 = w1 + 0.05 .* delta(6:20) .* sig_der(mid_layer(:,1))' .* data(:,i);
end
w1=rand(182,15)./2.*(rand(182,15).*-2+1);
w2=rand(15,5)./2.*(rand(15,5).*-2+1);
for j=1:10000
for i=[randperm(85)]
[cost, mid_layer, last_layer] = forward(w1,w2,data,sigmoid,i);
[w1, w2] = backprop(w1,w2,mid_layer,last_layer,data,cost,sig_der,sigmoid,i);
cost_mem(j,i,:)=cost;
end
end
```<issue_comment>username_1: **In theory**, yes, using synthetic data generation. This involves applying transformations to the original images to generate new 'unique' images. Some standard techniques include rotating, flipping, stretching, zooming or brightening. Obviously not all of these make sense depending on the data. In your problem, zooming, stretching and brightening could be used but flipping should not. Rotation could work but only for small angles.
Generally this is implemented by *replacing* the dataset for each epoch of training. Therefore, the number of images used in each training iteration is the same but the images themselves have been altered.
**In practice**, it's not a magic bullet. The reason a larger dataset generally yields better models is because the probability of a new feature falling within the feature distribution of the training data is higher. With synthetic data generation the new features are only marginally different to the original so even if the number of images to train on is increased, the feature distributions are not that different. There is a lot of variation in handwritten numbers so it would be very hard to guess how effective this would be without trying it.
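As a rough illustration of those transformations (a sketch using only NumPy on a made-up grayscale image; small-angle rotations would need an extra library such as SciPy or PIL):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((13, 14))  # one toy grayscale image

# Each transformation yields a new, slightly different training sample.
flipped    = np.fliplr(image)             # horizontal flip (should NOT be used for digits)
rotated    = np.rot90(image)              # 90-degree rotation, just as a shape demo
brightened = np.clip(image * 1.2, 0, 1)   # brightening, clipped to the valid range

augmented = [image, flipped, rotated, brightened]
print(len(augmented))  # 4 samples derived from 1 original
```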
Upvotes: 1 <issue_comment>username_2: You can synthetically increase the number of samples, for example with augmentation or unsupervised adaptation ([self-training](https://arxiv.org/abs/1911.04252)). Augmentation grants the system much more robustness, so I would really recommend it; see for example this [github](https://github.com/aleju/imgaug). The problem with such small database sizes is that your test set is also very small, so you cannot properly test whether your network generalizes well or just overfits.
You can try transfer learning with another, larger network to adapt its feature extractors and use them on your problem. That may work better than training a new one from scratch with so few labeled images. Hope I could help at least a little, stay tuned.
Upvotes: 3 [selected_answer] |
2020/04/03 | 756 | 2,886 | <issue_start>username_0: In chapter 3.5 of Sutton's book, the value function is defined as:
[](https://i.stack.imgur.com/eNqIS.png)
Can someone give me some clarification about why there is the expectation sign behind the entire equation? Considering that the agent is following a fixed policy $\pi$, why there should be an expectation when the trajectory of the future possible states is fixed (or maybe I am getting it wrong and it's not). In total, if the expectation here has the meaning of averaging over a series of trajectories, what are those trajectories and what are the weights of them when we want to compute the expected value over them according to [this Wikipedia definition of the expected values](https://en.wikipedia.org/wiki/Expected_value)?<issue_comment>username_1: There needs to be an $E\_{\pi}$ over the infinite discounted return term because of two reasons-
1. The policy could be stochastic in nature. That is, for any given state $s\_t$ at time $t$, the policy $\pi(s\_t)$ does not provide a deterministic action $a$, but rather a distribution over the possible actions; that is, the action at time $t$, $a\_t$, is distributed as
$$a\_t \sim \pi(s\_t)$$
2. Even if the policy $\pi$ being followed by an agent is deterministic, there still needs to be an expectation over the behavior of the underlying stochastic MDP environment. That is, any action $a\_t$, in general, only provides us with a distribution over the possible next states of the agent. That is,
$$P(s\_{t + 1} = s') = P\_{\pi}(s' | s\_t) = \sum\_{a \in A} T(s\_t,a,s') \times P\_{\pi}(a\_t = a)$$
Here $T(s, a, s')$ is the transition function for the MDP and the above equation captures the stochasticity arising from both 1 and 2.
As you see the expectation does not have to do with averaging over a collection of trajectories. However, that idea is often used in [Monte-Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_method) estimation of value functions.
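To make the link with trajectories explicit, here is a hedged sketch of Monte-Carlo estimation on a made-up two-state MDP (the states, rewards and probabilities are purely illustrative): the expectation is approximated by averaging the discounted returns of many sampled trajectories.

```python
import random

random.seed(0)
gamma = 0.9

def sample_episode(state="s0", horizon=20):
    """Roll out one trajectory under a fixed stochastic policy."""
    rewards = []
    for _ in range(horizon):
        action = random.choice(["a", "b"])   # stochastic policy pi(s_t)
        if state == "s0":                    # stochastic transitions T(s, a, s')
            state = "s1" if random.random() < 0.7 else "s0"
            rewards.append(1.0 if action == "a" else 0.0)
        else:
            state = "s0"
            rewards.append(2.0)
    return rewards

def discounted_return(rewards):
    return sum(gamma**k * r for k, r in enumerate(rewards))

# v_pi(s0) is approximated by the average sampled return (law of large numbers).
returns = [discounted_return(sample_episode()) for _ in range(5000)]
v_estimate = sum(returns) / len(returns)
print(round(v_estimate, 2))
```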
EDIT:
As pointed out in the comments, it is not correct to say that the expectation is not over a collection of trajectories.
Upvotes: 3 [selected_answer]<issue_comment>username_2: In addition to [this answer](https://ai.stackexchange.com/a/19975/2444), I would like to note that, if the future trajectories were fixed (i.e. the environment and the policies were deterministic, and the agent always starts from the same state), the expectation of the sum (of the fixed rewards) would simply correspond to the actual sum, because the sum is a constant (i.e. the expectation of a constant is the constant itself), so the expectation operator also applies to the deterministic cases. Therefore, the expectation is a general way of expressing the value of a state in all possible cases (both when trajectories are fixed or not).
Upvotes: 2 |
2020/04/03 | 841 | 3,310 | <issue_start>username_0: Does this prove AI Safety is undecidable?
Proof:
"Output" here means the output of a computer program.
[A1] Assume we have a program that decides which outputs are “safe”.
[A2] Assume we have an example of an unsafe output: “unsafe\_output”
[A3] Assume we have an example of safe output: “safe\_output”.
[A4] Define a program to be safe if it always produces safe output.
[A5] Assume we have a second program (`safety_program`) that decides which programs are safe.
[A6] Write the following program:
```
def h():
    h_is_safe = safety_program(h)
    if h_is_safe:
        print(unsafe_output)
    else:
        print(safe_output)
```
Clearly h halts.
If the safety\_program said h was safe, then h prints out unsafe\_output.
If the safety\_program said h was not safe, then h prints out safe\_output.
Therefore safety\_program doesn’t decide h correctly.
This is a contradiction. Therefore we made a wrong assumption: Either safe output cannot be decided, or safe programs cannot be decided.
Therefore, in general, the safety of computer programs, including Artificial Intelligence, is undecidable.
Therefore AI Safety is undecidable.<issue_comment>username_1: In my opinion, there are several flaws in your proof and reasonings.
First, note that, in the case of Turing's proof, `h` will actually loop forever (i.e. not halt) when the oracle says that `h` halts. In this case, there's an actual contradiction, because `h` will do the opposite of what the oracle says.
So, to follow Turing's proof, you would need to make `h` behave unsafely if the oracle says `h` is safe. But how should we define a safe or unsafe program? There are many unsafe behaviors. For example, in a certain context, an insult could be unsafe, in other contexts, a certain limb movement could be unsafe, and so on. So, an agent is unsafe or behaves unsafely usually with respect to another agent (or itself) or environment. You probably need to keep this in mind if you want to prove anything about the safety of AI agents.
In your second assumption, you are implicitly saying that any machine that produces the output `unsafe_output` is unsafe, but, of course, this definition is not a realistic definition of an unsafe program.
To help you define safety in a more reasonable and natural way, I think it may be useful to reason first in terms of artificial agents, which are higher-level concepts than Turing machines. Then you could find a way of mapping agents to TMs and attempt to prove your conjectures by using the tools of the theory of computation.
Upvotes: 1 <issue_comment>username_2: You are essentially correct, although there may be some minor holes in the structure of your argument specifically. But if we're speaking informally, yes, you are correct. This is a consequence of [Rice's Theorem](https://en.m.wikipedia.org/wiki/Rice%27s_theorem). And it isn't just true for AI safety. All non-trivial semantic properties of algorithms (or functions) are undecidable.
"Semantic" just means it is about what the program outputs, or its behavior, both used synonymously to mean the actual result of the function, not syntax, aka the source code: the function specification itself. "Nontrivial" means that it isn't true for all programs or false for all programs.
Upvotes: 3 [selected_answer] |
2020/04/03 | 728 | 3,085 | <issue_start>username_0: When you play video games, sometimes there is an AI that attempts to predict what are you going to do.
For example, in the Candy Crush game, if you finish the level and you still have moves remaining, you get to see fishes or other powers destroying the remaining candies. But, instead of making you watch 10 minutes of your combos proceeding without you moving at all after accomplishing a level, like in this video, [Longest video game combo ever probably](https://www.youtube.com/watch?v=-_p8LZhIxtE), it shows an alert that says "tap to skip"; so, basically, the AI predicts all the possible combos that will keep proceeding automatically and calculates every automatic move.
How can artificial intelligence predict such a thing?<issue_comment>username_1: Artificial intelligence can predict such a thing because, before releasing the game, the developers train the bot (or AI) to play the game millions of times, so they end up with a model that can predict every next move or combo, or all the moves that can finish the game.
For example, take the snake game. What they do to predict moves is train the model (or bot) to play the game: when the snake performs some action, it gets a reward, which can be positive or negative. The goal of the snake is to learn which actions maximize the reward, given every possible state. States are the observations that the agent receives at each iteration from the environment.
This is a link that gives you the details: <https://towardsdatascience.com/how-to-teach-an-ai-to-play-games-deep-reinforcement-learning-28f9b920440a>
Upvotes: 1 <issue_comment>username_2: The AI is trained to predict such things because that is its purpose. It is given almost all the possible moves it can make in the current state of the game and chooses the one with the best possible outcome. Not only that: the AI also predicts what happens after that move, and the outcome of that prediction, just like a chess AI that can predict how to checkmate a player from a single move the player makes. So it does not just predict what move to make now, but also what moves to make after that move has been made.
This can be done with deep learning, as you can read here: <https://towardsdatascience.com/predicting-professional-players-chess-moves-with-deep-learning-9de6e305109e>
<https://electronics.howstuffworks.com/chess1.htm>
Upvotes: 1 <issue_comment>username_3: In video games, the developers usually spend dedicated time training their AI by feeding it learning data, provided either by the developers themselves or by feedback from the open/closed beta testers who participated. From that data, the developers can model the learning pattern for the algorithm and proceed to train it towards some set of goals.
Upvotes: 1 <issue_comment>username_4: The AI can predict such things by reading data that it previously stored by playing the game many times. Using that data, the AI can learn which is the best action to take. For example, an AI can find the best path to evade all incoming bullets while shooting down all the enemies in a bullet-hell game.
Upvotes: 0 |
2020/04/03 | 952 | 3,753 | <issue_start>username_0: To clarify it in my head, the value function calculates how 'good' it is to be in a certain state by summing all future (discounted) rewards, while the reward function is what the value function uses to 'generate' those rewards for it to use in the calculation of how 'good' it is to be in the state?<issue_comment>username_1: I think it is pedagogically useful to distinguish between the theory (equations) and the practice (algorithms).
If you're talking about the definition of the value function (the theory)
\begin{align}
v\_{\pi}(s)
& \dot{=} \mathbb{E}\_{\pi} \left[ G\_t \mid S\_t = s \right]\\
&= \mathbb{E}\_{\pi} \left[ \sum\_{k=0}^\infty \gamma^k R\_{t+k+1} \bigl\vert S\_t = s \right]\\
\end{align}
for all $s \in \mathcal{S}$, where $\dot{=}$ means "is defined as" and $\mathcal{S}$ is the state space, then the value function can be defined in terms of the reward, as can be clearly seen above. (Note that $R\_{t+k+1}$, $G\_t$ and $S\_t$ are random variables and, in fact, expectations are taken with respect to random variables.)
The definition above can actually be expanded to be a Bellman equation (i.e. a recursive equation) defined in terms of the reward function $R(s, a)$ of the underlying MDP. However, often, rather than the notation $R(s, a)$, you will see $p(s', r \mid s, a)$ (which represents the combination of the *transition probability function* and the *reward function*). Consequently, **the value is a function of the reward**.
If you're estimating a value function (the practice), e.g. using Q-learning, you don't necessarily use the reward function of the Markov decision process. You can **estimate** the value function by just observing the rewards that you receive while exploring the environment, without really knowing the reward function. But, by exploring the environment, you can actually estimate the reward function. For example, if every time you're in state $s$ you take action $a$ and you receive reward $r$, then you already know something about the actual underlying reward function. If you explore enough the MDP, you could potentially learn the reward function too (unless it keeps on changing, in that case, it may be more difficult to learn it).
To conclude, yes, value functions are certainly very related to reward functions and rewards, in ways that you immediately see from the equations that define the value functions.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Sutton and Barto give a great description in their book [Reinforcement Learning](http://www.incompleteideas.net/book/the-book-2nd.html):
>
> Whereas the reward signal indicates what is good in an immediate sense, a value
> function specifies what is good in the long run. Roughly speaking, the value of a state is
> the total amount of reward an agent can expect to accumulate over the future, starting
> from that state. Whereas rewards determine the immediate, intrinsic desirability of
> environmental states, values indicate the long-term desirability of states after taking into
> account the states that are likely to follow and the rewards available in those states. For
> example, a state might always yield a low immediate reward but still have a high value
> because it is regularly followed by other states that yield high rewards. Or the reverse
> could be true. To make a human analogy, rewards are somewhat like pleasure (if high)
> and pain (if low), whereas values correspond to a more refined and farsighted judgment
> of how pleased or displeased we are that our environment is in a particular state.
>
>
>
Mathematically, the value function is the expected sum of discounted rewards from a given state or for a particular action in a given state.
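A tiny numeric illustration of that sum (rewards and discount factor are made up): a state whose immediate rewards are zero can still have a high value because of a later reward.

```python
gamma = 0.9
rewards = [0.0, 0.0, 10.0]   # low immediate reward, high reward two steps later

# Discounted sum of future rewards from the start state
value = sum(gamma**k * r for k, r in enumerate(rewards))
print(round(value, 2))  # 0 + 0 + 0.81 * 10 = 8.1
```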
Upvotes: 0 |
2020/04/06 | 995 | 4,429 | <issue_start>username_0: I know that this has been asked a hundred times before; however, I was not able to find a question (and an answer) that actually answered what I wanted to know, or that explained it in a way I was able to understand. So, I'm trying to rephrase the question…
When working with neural networks, you typically split your data set into three parts:
* Training set
* Validation set
* Test set
I understand that you use the training set for, well, training the network, and that you use the test set to verify how well it has learned: by measuring how well the network performs on the test set, you know what to expect when actually using it later on. So far, so good.
Now, a model has hyper parameters, which – besides the weights – need to be tuned. If you change these, of course, you get different results. This is where in all explanations the validation set comes into play:
* Train using the training set
* Validate how well the model performs using the validation set
* Repeat this for a number of variants which differ in their hyperparameters (or do it in parallel, right from the start)
* Finally, select one and verify its performance using the test set
Now, my question is: why would I need steps 2 and 3? I could just as well train multiple versions of my model in parallel, and then run all of them against the test set to see which performs best, and then use that one.
So, in other words: Why would I use the validation set for comparing the model variants, if I could directly use the test set to do so? I mean, I need to train multiple versions either way. What is the benefit of doing it like this?
Probably, there is some meaning to it, and probably I got something wrong, but I can't figure out what. Any hints?<issue_comment>username_1: The difference between the validation and test set in my opinion should be explained in this way:
* the validation set is meant to be used multiple times.
* the test set is meant to be used only once.
I think that the misunderstanding here arises because machine learning is mostly taught focusing only on a specific part of a larger pipeline, which is model training. In every tutorial, standard datasets are used, so that you don't have to worry about data collection, data labelling (it's really sad to see that a lot of people have no clue about what inter-annotator agreement is), data pre-processing and, especially, everything about the real application of the model, which is almost never mentioned.
The importance of having a set of instances that you can use for fine-tuning (validation) and a set of instances that your model never encountered, neither in training nor during fine-tuning (test), becomes particularly clear if you focus on the subsequent deployment of the model you trained. No one expects a model to have the same performance scores in training and when applied to some unknown data. The crucial point is that the performance of a model on the validation set is not representative of its behaviour on unknown data either, because the very same validation data have been used to fine-tune the model! So here's why having a set of data completely new to the model is important: it gives you a much more unbiased view of the model's performance in a real use-case scenario.
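The whole workflow can be sketched in plain Python (the "model" and its single hyperparameter are deliberately fake; only the split-and-select logic matters here):

```python
import random

random.seed(0)
samples = list(range(100))            # stand-in for labelled instances
random.shuffle(samples)
train, val, test = samples[:70], samples[70:85], samples[85:]

def fit_and_score(hyperparam, train_set, eval_set):
    """Fake training run: returns a score that only depends on the hyperparameter."""
    return 1.0 / (1.0 + abs(hyperparam - 3))  # pretend 3 is the best setting

# Validation set: used MANY times, once per candidate configuration.
candidates = [1, 2, 3, 4, 5]
best = max(candidates, key=lambda hp: fit_and_score(hp, train, val))

# Test set: used ONCE, for the final, unbiased performance estimate.
final_score = fit_and_score(best, train, test)
print(best, final_score)  # 3 1.0
```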
Upvotes: 4 [selected_answer]<issue_comment>username_2: Simply stated, you use your validation set to regularize your model for unseen data.
Test data is completely unseen data, on which you evaluate your model.
Various validation strategies are used to improve your model's performance on unseen data; strategies like k-fold cross-validation are commonly used.
Also, the validation set helps you in tuning your hyperparameters such as learning rate, batch size, hidden units, number of layers, etc.
Train, Validation, Test sets help you in identifying whether you are underfitting or overfitting.
E.g. If human error at a task is 1%, train error is 8%, validation error is 10%, test set error is 12 % then,
Difference between,
1. Human level and training set error tells you about "Avoidable Bias"
2. Training set error and Validation set error tells you about "Variance and data mismatch"
3. Validation set error and Test error tells you about "degree of overfitting" with the validation set.
Based on these metrics, you can apply appropriate strategies for better performance on validation or test sets.
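The arithmetic behind that example, as a quick sketch:

```python
human_err, train_err, val_err, test_err = 0.01, 0.08, 0.10, 0.12

avoidable_bias = train_err - human_err   # gap to human-level performance
variance       = val_err - train_err     # generalisation gap to unseen data
overfit_to_val = test_err - val_err      # degree of overfitting to the validation set

print(round(avoidable_bias, 2), round(variance, 2), round(overfit_to_val, 2))  # 0.07 0.02 0.02
```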
Upvotes: 2 |
2020/04/06 | 1,386 | 5,154 | <issue_start>username_0: While looking at the mathematics of the back-propagation algorithm for a multi-layer perceptron, I noticed that in order to find the partial derivative of the cost function with respect to a weight (say $w$) from any of the hidden layers, we're just writing the error function from the final outputs in terms of the inputs and hidden layer weights and then canceling all the terms without $w$ in it as differentiating those terms with respect to $w$ would give zero.
Where is the back-propagation of error while doing this? This way, I can find the partial derivatives of the first hidden layer first and then go towards the other ones if I wanted to. Is there some other method of going about it so that the Back Propagation concept comes into play? Also, I'm looking for a general method/algorithm, not just for 1-2 hidden layers.
I'm fairly new to this and I'm just following what's being taught in class. Nothing I found on the internet seems to have proper notation so I can't understand what they're saying.<issue_comment>username_1: Have a look at the following article [Principles of training multi-layer neural network using backpropagation](http://home.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html). It was very useful to me.
You can also see [here an example of backpropagation in Matlab](https://ai.stackexchange.com/a/19945/2444). It effectively solves the [XOR problem](https://en.wikipedia.org/wiki/Exclusive_or). You can also play around with the cost function or the learning rate. You may get surprising results! Does this answer your question?
Upvotes: 3 [selected_answer]<issue_comment>username_2: >
> Why is it called back-propagation?
>
>
>
I don't think there is anything special here!
It's called back-propagation (BP) because, after the forward pass, you compute the partial derivative of the loss function with respect to the parameters of the network, which, in the usual diagrams of a neural network, are placed **before** the output of the network (i.e. to the left of the output if the output of the network is on the right, or to the right if the output of the network is on the left).
It's also called BP because it is just the application of the [chain rule](https://en.wikipedia.org/wiki/Chain_rule). Why is this interesting?
Let me answer this question with an example. Consider the function $y=e^{\sin(x^{2})}$. This is a [composite function](https://en.wikipedia.org/wiki/Function_composition), i.e. a function composed of multiple simpler functions, which, in this case, are $e^x$, $\sin(x)$, $x^2$ and $x$. To compute the derivative of $y$ with respect to $x$, let's define the following variables
\begin{align}
y &= f(u) = e^u,\\
u &= g(v) = \sin v = \sin(x^2),\\
v &= h(x) = x^2
\end{align}
The derivative of $y$ with respect to the variable $x$ is (according to the [chain rule](https://en.wikipedia.org/wiki/Chain_rule))
$$
\underset{\color{red}{\LARGE \rightarrow}}{
\frac{dy}{dx} = \frac{dy}{du} \color{green}{\cdot} \frac{du}{dv} \color{green}{\cdot}
\frac{dv}{dx}}
$$
If you read this equation from the left to the right, you can see that we are going backward (i.e. from the function $y$ to the function $v$). This is the same thing with BP!
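The chained product can be checked numerically (pure `math`; the evaluation point is arbitrary):

```python
import math

def y(x):
    return math.exp(math.sin(x**2))

def dydx(x):
    # dy/du * du/dv * dv/dx, i.e. exp' * sin' * (x^2)'
    return math.exp(math.sin(x**2)) * math.cos(x**2) * 2 * x

x, eps = 1.3, 1e-6
numeric = (y(x + eps) - y(x - eps)) / (2 * eps)  # central finite difference
print(abs(dydx(x) - numeric) < 1e-5)  # True: analytic and numeric derivatives agree
```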
Why is it called "chain rule"? Because you are [chaining](https://en.wikipedia.org/wiki/Chain) different partial derivatives. More specifically, you are multiplying them.
BP is also known as the **reverse mode of automatic differentiation**. Why? The [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) should be self-explanatory, given that the BP algorithm is just the computation of partial derivatives, and you do this *automatically*, i.e. with a program, rather than by hand. The expression "reverse mode" refers to the fact that we compute the derivatives from the outer function (which, in the example above, is $e^x$) to the inner function (which, in the example above, is $x$). [The Wikipedia article related to automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) provides more details.
>
> What exactly are you back-propagating?
>
>
>
The partial derivative of the loss function $\mathcal{L}$ with respect to a parameter $w\_i$, i.e. $\frac{\partial \mathcal{L}}{\partial w\_i}$, intuitively, represents the "contribution" of the parameter $w\_i$ to the loss. After having computed these partial derivatives (i.e. the gradient), you use gradient descent to update each parameter $w\_i$ as follows
$$
w\_i \leftarrow w\_i - \gamma \frac{\partial \mathcal{L}}{\partial w\_i}
$$
where $\frac{\partial \mathcal{L}}{\partial w\_i}$ represents *what* we propagated, namely the error (or loss) that the neural network makes.
This gradient descent step will hopefully make your network produce a smaller error next time.
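On a one-parameter toy loss $\mathcal{L}(w) = (w - 2)^2$ (a sketch, not a neural network), the update rule looks like this:

```python
w, lr = 10.0, 0.1                     # initial parameter and learning rate (gamma)
grad = lambda w: 2.0 * (w - 2.0)      # dL/dw for L(w) = (w - 2)^2

for _ in range(100):
    w = w - lr * grad(w)              # w_i <- w_i - gamma * dL/dw_i

print(round(w, 4))  # 2.0: the parameter converges to the minimiser of the loss
```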
The modern version of back-propagation was published (in 1970) by a Finnish master's student called <NAME>, but he didn't reference neural networks. [This Jürgen Schmidhuber's article goes into the details of the history of BP](http://people.idsia.ch/%7Ejuergen/who-invented-backpropagation.html).
Upvotes: 3 |
2020/04/06 | 1,242 | 4,470 | <issue_start>username_0: I'm watching the video [Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorflow Tutorial | Edureka](https://www.youtube.com/watch?v=y7qrilE-Zlc) where the author says that the LSTM and GRU architecture help to reduce the vanishing gradient problem. How do LSTM and GRU prevent the vanishing gradient problem?<issue_comment>username_1: Have a look at the following article [Principles of training multi-layer neural network using backpropagation](http://home.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html). It was very useful to me.
2020/04/07 | 2,059 | 9,055 | <issue_start>username_0: I am reading the article [How Transformers Work](https://towardsdatascience.com/transformers-141e32e69591) where the author writes
>
> Another problem with RNNs, and LSTMs, is that it’s hard to parallelize the work for processing sentences, since you have to process word by word. Not only that but there is no model of **long and short-range dependencies**.
>
>
>
Why exactly does the transformer do better than RNN and LSTM in **long-range context dependencies**?<issue_comment>username_1: I'll list the main innovations introduced by transformers, followed by the main characteristics of the other architectures you mentioned, so we can then compare them.
Transformers
============
Transformers ([Attention is all you need](https://arxiv.org/pdf/1706.03762.pdf)) were introduced in the context of machine translation with the purpose of avoiding recursion, in order to allow parallel computation (to reduce training time) and to reduce drops in performance due to long dependencies. The main characteristics are:
* **Non sequential**: sentences are processed as a whole rather than word by word.
* **Self Attention**: this is the newly introduced 'unit' used to compute similarity scores between words in a sentence.
* **Positional embeddings**: another innovation introduced to replace recurrence. The idea is to use fixed or learned weights which encode information related to a specific position of a token in a sentence.
The first point is the main reason why transformers do not suffer from long dependency issues. The original transformers do not rely on past hidden states to capture dependencies with previous words. They instead process a sentence as a whole. That is why there is no risk of losing (or "forgetting") past information. Moreover, multi-head attention and positional embeddings both provide information about the relationship between different words.
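The fixed sinusoidal positional embeddings from the original paper can be sketched as follows (a minimal pure-Python version; the sequence length and model dimension below are arbitrary choices):

```python
import math

def positional_encoding(seq_len, d_model):
    """Fixed sinusoidal encodings: even dims use sin, odd dims use cos."""
    pe = []
    for pos in range(seq_len):
        row = []
        for i in range(d_model):
            # Pairs of dimensions share the same frequency.
            angle = pos / (10000 ** ((2 * (i // 2)) / d_model))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        pe.append(row)
    return pe

pe = positional_encoding(seq_len=4, d_model=8)
print(pe[0])  # position 0 encodes as [0.0, 1.0, 0.0, 1.0, ...]
```

Each position gets a distinct pattern, which is added to the token embeddings so the model can tell positions apart without recurrence.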
RNN / LSTM
==========
Recurrent neural networks and long short-term memory models are, as far as this question is concerned, almost identical in their core properties:
* **Sequential processing**: sentences must be processed word by word.
* **Past information retained through past hidden states**: sequence to sequence models follow the Markov property: each state is assumed to be dependent only on the previously seen state.
The first property is the reason why RNN and LSTM can't be trained in parallel. In order to encode the second word in a sentence, I need the previously computed hidden state of the first word, therefore I need to compute that first. The second property is a bit more subtle, but not hard to grasp conceptually. Information in RNNs and LSTMs is retained thanks to previously computed hidden states. The point is that the encoding of a specific word is retained only for the next time step, which means that the encoding of a word strongly affects only the representation of the next word, so its influence is quickly lost after a few time steps. LSTMs (and also GRUs) can boost a bit the dependency range they can learn, thanks to a deeper processing of the hidden states through specific units (which comes with an increased number of parameters to train), but nevertheless the problem is inherently related to recursion. Another way in which people mitigated this problem is to use bi-directional models. These encode the same sentence from start to end and from end to start, allowing words at the end of a sentence to have a stronger influence on the creation of the hidden representation. However, this is just a workaround rather than a real solution for very long dependencies.
CNN
===
Convolutional neural networks are also widely used in NLP, since they are quite fast to train and effective with short texts. The way they tackle dependencies is by applying different kernels to the same sentence, and indeed since their first application to text ([Convolutional Neural Networks for Sentence Classification](https://arxiv.org/pdf/1408.5882.pdf)) they were implemented as multichannel CNNs. Why do different kernels allow the model to learn dependencies? Because a kernel of size 2, for example, would learn relationships between pairs of words, a kernel of size 3 would capture relationships between triplets of words, and so on. The evident problem here is that the number of different kernels required to capture dependencies among all possible combinations of words in a sentence would be enormous and impractical, because of the exponentially growing number of combinations when increasing the maximum length of input sentences.
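The n-gram intuition above can be sketched with a plain 1-D convolution (the scalar "word features" and hand-picked kernels below are toy assumptions, not learned weights):

```python
def conv1d(seq, kernel):
    # Valid 1-D convolution: each output mixes len(kernel) neighbours.
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

# Toy "word features": one scalar per token of a 5-word sentence.
sentence = [1.0, 2.0, 3.0, 4.0, 5.0]

# A width-2 kernel only ever sees pairs of adjacent words,
# a width-3 kernel sees triplets, and so on.
pairs = conv1d(sentence, [1.0, -1.0])
triplets = conv1d(sentence, [1.0, 0.0, -1.0])

print(pairs)     # [-1.0, -1.0, -1.0, -1.0]
print(triplets)  # [-2.0, -2.0, -2.0]
```

The kernel width is a hard ceiling on the span of the dependency a single convolutional filter can model.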
To summarize, transformers are better than all the other architectures because they totally avoid recursion, by processing sentences as a whole and by learning relationships between words thanks to multi-head attention mechanisms and positional embeddings. Nevertheless, it must be pointed out that transformers too can capture only dependencies within the fixed input size used to train them, i.e. if I use a maximum sentence size of 50, the model will not be able to capture dependencies between the first word of a sentence and words that occur more than 50 words later, like in another paragraph. New transformers like [Transformer-XL](https://arxiv.org/abs/1901.02860) try to overcome exactly this issue, by re-introducing a form of recurrence: they store the hidden states of already encoded sentences and leverage them in the subsequent encoding of the next sentences.
Upvotes: 7 [selected_answer]<issue_comment>username_2: Let's start with RNNs. A well-known problem is vanishing/exploding gradients, which means that the model is biased by the most recent inputs in the sequence; in other words, older inputs have practically no effect on the output at the current step.
LSTMs/GRUs mainly try to solve this problem, by including a separate memory (cell) and/or extra gates to learn when to let go of past/current information. Check these series of [lectures](https://youtu.be/QEw0qEa0E50) for more in-depth discussion. Also check the interactive parts of [this article](https://distill.pub/2019/memorization-in-rnns/) for some intuitive understanding of dependency on past elements.
Now, given all this, information from past steps *still* goes through a sequence of computations and we're relying on these new gate/memory mechanisms to pass information from old steps to the current one.
One major advantage of the transformer architecture, is that at each step we have **direct** access to all the other steps (self-attention), which practically leaves no room for information loss, as far as message passing is concerned. On top of that, we can look at both future and past elements at the same time, which also brings the benefit of bidirectional RNNs, without the 2x computation needed. And of course, all this happens in parallel (non-recurrent), which makes both training/inference much faster.
The self-attention with every other token in the input means that the processing will be in the order of $\mathcal{O}(N^2)$ (glossing over details), which means that it's going to be costly to apply transformers on long sequences, compared to RNNs. That's probably one area that RNNs still have an advantage over transformers.
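A minimal sketch of scaled dot-product self-attention (with $Q = K = V = X$ and no learned projections, purely for illustration) makes the $\mathcal{O}(N^2)$ pairwise score matrix explicit:

```python
import math

def self_attention(X):
    """Scaled dot-product self-attention with Q = K = V = X."""
    n, d = len(X), len(X[0])
    # The n x n score matrix below is exactly where the O(n^2) cost lives.
    scores = [[sum(X[i][k] * X[j][k] for k in range(d)) / math.sqrt(d)
               for j in range(n)] for i in range(n)]
    out = []
    for i in range(n):
        m = max(scores[i])                        # for numerical stability
        exps = [math.exp(s - m) for s in scores[i]]
        z = sum(exps)
        w = [e / z for e in exps]                 # softmax row, sums to 1
        out.append([sum(w[j] * X[j][k] for j in range(n)) for k in range(d)])
    return out, scores

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3 tokens, 2 dims each
out, scores = self_attention(X)
print(len(scores), len(scores[0]))  # 3 3 -> one score per token pair
```

Every token attends to every other token in one step (path length 1), at the price of computing all $n \times n$ scores.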
Upvotes: 3 <issue_comment>username_3: First: RNNs are part of the neural network family for processing sequential data. The way in which an RNN is able to store information from the past is the loop in its architecture, which automatically keeps past information available.
Second: LSTM/GRU adds components that regulate the flow of information, referred to as gates. A GRU has 2 gates, namely a reset gate and an update gate. The reset gate determines how to combine the new input with past information, and the update gate determines how much past information should be kept.
source : <https://link.springer.com/article/10.1007/s00500-019-04281-z>
Upvotes: 1 <issue_comment>username_4: [](https://i.stack.imgur.com/ULJvS.jpg)
*table is from arxiv:1706.03762 Attention Is All You Need*
It's because of the path length. If you have a sequence of length n, then a transformer will have access to each element with O(1) sequential operations, whereas a recurrent neural network needs at most O(n) sequential operations to access an element.
Very long sequences give you problems with exploding and vanishing gradients, because of the chain rule in backprop.
Transformers don't have this problem, as the distance to each element in the sequence is always O(1) sequential operations away (they select the right element instead of trying to "remember" it).
**(arxiv:2103.05487) UnICORNN: A recurrent model for learning very long time dependencies**
"The design of RNNs that can accurately handle sequential inputs with long-time dependencies is very challenging.
This is largely on account of the exploding and vanishing
gradient problem (EVGP)."
Upvotes: 0 |
2020/04/07 | 1,670 | 6,975 | <issue_start>username_0: What is the benefit of a test data set, especially for naive bayes estimator or decision tree construction?
When using a naive bayes classifier the probabilities are a fact. As far as I know there is nothing one could tune (like the weights in a neural net). So what is the purpose of the test data set? Simply to know if one can apply naive bayes or not?
Similarly, what is the benefit of the test data set when constructing a decision tree? We already use the Gini impurity to construct the best possible decision tree, and there is nothing we could do when we get bad results with the test data set.<issue_comment>username_1: Your assumption about the test data is not completely correct. You might use the test data to tune your learning algorithm, but that is not the whole story. Sometimes you need to know whether the ML method is working or not, and to get a sense of how well it works!
You have other scenarios that you want to evaluate your method:
1. Compare the result of the learner with other techniques. For example, you are considering a DT versus an SVM classifier over a data set. If you want to compare them, you need a metric to make sense of the comparison.
2. Sometimes you are using an ensemble method and you want to tune some parameters to balance between using different ML methods. Hence, you need to evaluate these learning methods (DT, Naive Bayes) to improve the ensemble method.
Upvotes: 1 <issue_comment>username_2: In the field of ML and AI, you should always remember that, before choosing any algorithm, you should know the data. One should always start with data analysis, which is itself a field dedicated to this critical job. A decision tree will rarely perform at its best without tuning. Here is a great article that you can refer to: [Tuning Decision Tree](https://towardsdatascience.com/how-to-tune-a-decision-tree-f03721801680)
Purpose of test data in a naive Bayes classifier:
1) It is necessary to check metrics such as accuracy, hits, hit rate, coverage, diversity, novelty, etc.
2) It also lets you characterise your test data (as an anti-train set) using statistics such as the mean, standard deviation and variance.
I really think that you should try other algorithms to train your datasets. I can't name all of them; however, in the neural network family, RNNs, CNNs and RBMs are some great algorithms to work with.
Please always remember that machine learning is like an art, where datasets (train, test, validation) are the colors and it's up to us to use the right amount of each.
Upvotes: 0 <issue_comment>username_3: I actually pondered this question a few months ago, so I understand your point of view!
You are correct in asking: if you have already built your tree or calculated your probabilities, what is the point of using test data? After all, your model stays fixed, whether or not you use the test data. Well, the purpose of test data is not only to test your model against unseen data and get an evaluation score, but also to check whether your model is the right fit for your problem.
One of the main reasons why we build ML/AI models in the first place is to extract insights that can be used to solve problems, make decisions, etc. If you don't test your naive Bayes or decision tree model with test data, you won't know whether the information given to you by those models means anything. They may not even help you solve problems or give you relevant information. Yes, they may spout out big numbers and classify things, but are those results what you're looking for? Are the results relevant to your problem? Can the results be used to solve what you are trying to do?
Using test data gives you the opportunity to see if your model gives you the best insights and the best solution to your problems. So here are the takeaways from my answer:
* Test data can be used to test your model against unseen data
* You can obtain a score (evaluation) for your model when you test it with test data, which in turn can be used to fine-tune your model
* You can see whether the answers the model gives you are relevant to your initial problem. If they're not, then it may be best to use some other algorithm.
Upvotes: 0 <issue_comment>username_4: When we train a model using training data, sometimes the resulting score is very high. This makes us believe that our model is very good. But when predicting actual data, the resulting score is very low. Why? This means that the trained model is **overfit** (to the training data) and fails to predict anything useful on yet-**unseen** data.
That's why we have to check our model on test data (predict the test data) and compare the accuracy between training data and test data. If the accuracies are **not too far apart**, then our model does **not overfit**.
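One way to see why this check matters: a 1-nearest-neighbour classifier memorises its training set, so it always looks perfect on the training data; only held-out data exposes the gap (the synthetic 1-D data below is an assumption chosen purely for illustration):

```python
import random

random.seed(0)

# Two overlapping 1-D classes (synthetic, purely for illustration).
data = [(float(i % 2) + random.gauss(0, 0.8), i % 2) for i in range(200)]
train, test = data[:150], data[150:]

def predict_1nn(x):
    # 1-NN memorises the training set: zero training error by design.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(points):
    return sum(predict_1nn(x) == y for x, y in points) / len(points)

train_acc = accuracy(train)
test_acc = accuracy(test)
print(train_acc)             # 1.0 -- looks perfect, but it's memorisation
print(test_acc < train_acc)  # True -- the held-out set exposes the gap
```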
Later, we can improve our model with **Cross Validation** ([Reference](https://scikit-learn.org/stable/modules/cross_validation.html#cross-validation)), which splits our training data into n folds and, in turn, holds out one fold for validation while training on the rest. Then we take the average of the cross-validation scores.
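The k-fold procedure can be sketched by hand; the nearest-centroid classifier and the synthetic data below are assumptions chosen to keep the example dependency-free (in practice you would use something like scikit-learn's `cross_val_score`):

```python
import random

random.seed(1)

# Synthetic 1-D two-class data: class centres at 0.0 and 2.0.
data = [(random.gauss(2.0 * lbl, 0.5), lbl)
        for lbl in (0, 1) for _ in range(30)]
random.shuffle(data)

def nearest_centroid_score(train, test):
    # "Fit": one centroid (mean) per class; "predict": closest centroid.
    cents = {lbl: sum(x for x, y in train if y == lbl) /
                  sum(1 for _, y in train if y == lbl) for lbl in (0, 1)}
    hits = sum(min(cents, key=lambda l: abs(cents[l] - x)) == y
               for x, y in test)
    return hits / len(test)

def cross_val(data, k=5):
    n, scores = len(data), []
    for i in range(k):
        test = data[i * n // k:(i + 1) * n // k]
        train = data[:i * n // k] + data[(i + 1) * n // k:]
        scores.append(nearest_centroid_score(train, test))
    return scores

scores = cross_val(data)
mean_score = sum(scores) / len(scores)
print(len(scores))  # 5 -- one score per fold, then we average them
```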
Upvotes: 0 <issue_comment>username_5: One way to test the accuracy of a trained model is by testing it on the test data. By testing we are able to check the accuracy. Whether your model is a good model or not depends heavily on this accuracy; if your accuracy is too low or suspiciously high (e.g. 99%~100%), there could be some problem with your model.
For further information on the example in data test, you can access <https://jakevdp.github.io/PythonDataScienceHandbook/05.05-naive-bayes.html>
Hope this helps
Upvotes: 0 <issue_comment>username_6: In machine learning, we can use all the datasets as training data in a model. But if there are too many data sets, or too much data, and we do not split them up, our model may not produce acceptable results.
Why?
Because if the model studies too much training data, it may be [overfitted](https://en.wikipedia.org/wiki/Overfitting).
(Just like when you cram for a test, and get overloaded with too much information!)
What I mean is, your model is only familiar with the data you provide, not with new data.
So we need test data to evaluate our algorithm. Naive Bayes and decision tree classifiers are no exception, because they too can produce a model that is overfitted to the training data.
So we test the model on the test data to know how well the method works on the problem.
>
> Most data scientists divide their data (with answers, that is historical data) into three portions: training data, cross-validation data and testing data. The training data is used to make sure the machine recognizes patterns in the data, the cross-validation data is used to ensure better accuracy and efficiency of the algorithm used to train the machine, and the test data is used to see how well the machine can predict new answers based on its training.
>
>
>
SOURCE: <https://www.researchgate.net/post/What_is_training_and_testing_data_in_machine_learning>
Upvotes: 1 |
2020/04/08 | 1,354 | 5,677 | <issue_start>username_0: For people who have experience in the field, why is creating AI that has the ability to write programs (that are syntactically correct and useful) a hard task?
What are the barriers/problems we have to solve before we can solve this problem? If you are in the camp that this isn't that hard, why hasn't it become mainstream?<issue_comment>username_1: AI has been applied to programming (check out TabNine, my favorite autocomplete engine) although not in as robust a fashion as you describe.
Programming requires a high level of abstraction, while AI is typically trained to solve a very specific task. Given thousands of examples of insertion sort in Python, I think a model could be trained (perhaps with autocomplete and syntax correction) to figure it out. However, at this point the field has not developed a more general intelligence that can apply the ideas of the algorithm to other problems.
Addition based on comments:
Big picture, training an algorithm to solve a general class of problems (say, web dev) requires a huge number of examples or an immense number of trials. Further, as the complexity of the problem grows the number of parameters necessary to build the model grows. Writing code is a very complex problem and would thus require a huge amount of data and a huge number of parameters making it totally infeasible with today's math and (because of how the math is solved) hardware.
Modern AI has a very simple goal: find the model that solves a problem optimally. If we could quickly search every possible model this would be simple. Fields like machine (deep) learning and reinforcement learning are concerned with finding a good solution in a reasonable amount of time. At this point no such solution exists for a problem of such complexity.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I am not an expert on this specific topic, but I can say a few words. I will use the term "programming" to refer to software development (of any kind).
>
> If you are in the camp that this isn't that hard, why hasn't it become mainstream?
>
>
>
It's definitely hard, otherwise, we would have already some useful artificial programmers.
>
> Why is creating AI that can code a hard task?
>
>
>
Programming is actually a hard task because it often requires creativity and a deep understanding of the context(s), goal(s), programming languages, etc. In other words, it's a very complex task (even for humans), apart from the exceptions where you can copy and paste.
Programming can probably be considered [an AI-complete problem](https://ai.stackexchange.com/a/12147/2444), i.e. a problem that probably requires an AGI to be solved. In other words, if an AI was as capable as humans in terms of programming, then that probably means it is an AGI (but this is not guaranteed), i.e. programming is a task that probably requires general intelligence. This is why I say that programming is an AI-complete problem. However, note that being able to program is just [a necessary (but not sufficient)](https://en.wikipedia.org/wiki/Necessity_and_sufficiency) ability that an AI needs to possess in order for it to be an AGI (although not all general intelligences, e.g. animals, may be able to develop software, but the definition of general intelligence is also fuzzy).
AFAIK, no AGI has yet been created, and I think we are still very far away from that goal. Currently, most AI systems are only able to tackle a specific problem (i.e. we only have narrow AIs, such as AlphaGo). You could say that programming is a very specific problem too, but this is misleading or wrong, because, unless you just want to develop very specific programs in a very limited context (and there are already machine learning models and approaches, such as [**neural programmer-interpreters**](https://arxiv.org/pdf/1511.06279.pdf) and **genetic programming** respectively, that can do this to some extent; see the answers to [this question](https://ai.stackexchange.com/q/7364/2444) for other examples), then you will need to know a lot about other contexts too. For example, consider the task of developing a program that can detect signs of cancer in images. To develop this program, the AI would need to have the knowledge of an AI engineer, doctor, etc.
Furthermore, programming often requires common-sense knowledge. For example, while reading the software specifications, the AI needs to interpret them in the way that they were originally meant to be interpreted. This also suggests that [programming requires an AGI (or human-level AI)](https://ai.stackexchange.com/a/13262/2444) to be solved.
(Finally, to address a comment, note that writing a 4-line program is not equivalent to writing a 10-line program. Also, the length of the program often doesn't correspond to its difficulty or complexity, so that alone is not a good measure of the ability to program.)
>
> What are the barriers/problems we have to solve before we can solve this problem?
>
>
>
I think that the answer to this question is also the answer to the question "How can we create an AGI?". However, to be more concrete, I think that, in order to be able to create an AI that is able to program as well as humans, we will need to be able to create an AI that is able to think about low- and high-level concepts, compose them and it will probably require common-sense knowledge (so knowledge representation). A typical supervised learning solution will not be enough to solve this task. See the paper [Making AI Meaningful Again](https://arxiv.org/pdf/1901.02918.pdf), which also suggests that ML-based solutions may not be enough to solve many tasks.
Upvotes: 1 |
2020/04/09 | 944 | 4,266 | <issue_start>username_0: Most reinforcement learning agents are trained in simulated environments. The goal is to maximize performance in (often) the same environment, preferably with a minimum amount of interactions. Having a good model of the environment allows to use planning and thus drastically improves the sample efficiency!
**Why is the simulation not used for planning** in these cases? It is a sampling model of the environment, right? Can't we try multiple actions at each or some states, follow the current policy to look several steps ahead and finally choose the action with the best outcome? Shouldn't this allow us to find better actions more quickly compared to policy gradient updates?
In this case, our environment and the model are kind of identical and this seems to be the problem. Or is the good old curse of dimensionality to blame again? Please help me figure out what I'm missing.<issue_comment>username_1: >
> Shouldn't this allow us to find better actions more quickly compared to policy gradient updates?
>
>
>
It depends on the nature of the simulation. If the simulation models a car as a solid body moving with three $(x,y,\theta)$ degrees of freedom in a plane (hopefully, if it doesn't hit anything and get propelled vertically), the three ordinary differential equations of solid body motion can be solved quite quickly, compared to a simulation used to model the path of least resistance of a ship on a wavy sea, where fluid dynamics equations must be solved, which requires a huge amount of resources. OK, the response time needed for a ship is much longer than for a car, yes, but to compute it predictively one needs a huge amount of computational power.
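To make the cheap case concrete, here is a minimal sketch of planning by rolling a kinematic model forward (the unicycle model, unit speed, goal position and candidate turn rates are all my own toy assumptions): we Euler-integrate $(x, y, \theta)$ for each candidate action and pick the one that ends closest to the goal.

```python
import math

def step(state, v, omega, dt=0.1):
    # One Euler step of the planar unicycle model with state (x, y, theta).
    x, y, th = state
    return (x + v * math.cos(th) * dt, y + v * math.sin(th) * dt,
            th + omega * dt)

def rollout_cost(state, omega, goal, horizon=20):
    # Simulate a constant turn rate and score the final distance to goal.
    for _ in range(horizon):
        state = step(state, v=1.0, omega=omega)
    return math.hypot(state[0] - goal[0], state[1] - goal[1])

start = (0.0, 0.0, 0.0)          # at the origin, heading along +x
goal = (0.0, 2.0)                # goal to the left of the heading
candidates = [-1.0, -0.5, 0.0, 0.5, 1.0]
best = min(candidates, key=lambda w: rollout_cost(start, w, goal))
print(best > 0)  # True: a left turn steers the rollout towards the goal
```

A fluid-dynamics simulator could not be queried this cheaply, which is exactly the asymmetry described above.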
Upvotes: 0 <issue_comment>username_2: I will give one perspective on this from the domain of robotics. You are right that most RL agents are trained in simulation particularly for research papers, because it allows researchers to in theory benchmark their approaches in a common environment. Many of the environments exist strictly as a test bed for new algorithms and are not even physically realizable, e.g. [HalfCheetah](https://gym.openai.com/envs/HalfCheetah-v2/). You could in theory have a separate simulator say running in another process that you use as your planning model, and the "real" simulator is then your environment. But really that's just a mocked setup for what you really want in the end, which is having a real-world agent in a real-world environment.
What you describe could be very useful, with one important caveat: the simulator needs to in fact be a good model of the real environment. For robotics and many other interesting domains, this is a tall order. Getting a physics simulator that faithfully replicates the real-world environment can be tricky, as one may need accurate friction coefficients, mass and center of mass, restitution coefficients, material properties, contact models, and so on. Oftentimes the simulator is too crude an approximation of the real-world environment to be useful as a planner.
That doesn't mean we're completely hosed though. [This paper](https://arxiv.org/pdf/1810.05687.pdf) uses highly parallelized simulators to search for simulation parameters that approximate the real-world well. What's interesting is it's not even necessarily finding the correct real-world values for e.g. friction coefficients and such, but it finds values for parameters that, taken together, produces simulations that match the real-world experience. The better the simulation gets at approximating what's going on in the real world, the more viable it is to use the simulator for task planning. I think with the advent of [GPU-optimized physics simulators](https://developer.nvidia.com/isaac-gym) we will see simulators be a more useful tool even for real-world agents, as you can try many different things in parallel to get a sense of what is the likely outcome of a planned action sequence.
Upvotes: 1 <issue_comment>username_3: The question is generalizability. I completely agree though but, ideally the policy found will generalize to more complex environments the model hasn't seen. You could also run a planner on a new scenario but the issue is that it would be too computationally demanding for real time.
Upvotes: 0 |
2020/04/09 | 1,473 | 6,083 | <issue_start>username_0: So this is my current result (loss and score per episode) of my RL model in a simple two players game:
[](https://i.stack.imgur.com/K7YTe.png)
I use DQN with CNN as a policy and target networks. I train my model using Adam optimizer and calculate the loss using Smooth L1 Loss.
In a normal "Supervised Learning" situation, I can deduce that my model is overfitting. And I can imagine some methods to tackle this problem (e.g. Dropout layer, Regularization, Smaller Learning Rate, Early Stopping).
* But would those solutions also work in an RL problem?
* Or are there any better solutions to handle overfitting in RL?<issue_comment>username_1: Overfitting refers to a model being stuck in a local minimum while trying to minimise a loss function. In Reinforcement Learning the aim is to learn an optimal policy by maximising or minimising a non-stationary objective-function which depends on the action policy, so overfitting is not exactly like in the supervised scenario, but you can definitely talk about *sub-optimal policies*.
If we think of a specific task like avoiding stationary objects, a simple sub-optimal policy would be to just stay still without moving at all, or moving in circles if the reward function was designed to penalise lack of movements.
The way to avoid an agent to learn sub-optimal policies is to find a good compromise between *exploitation*, i.e. the constant selection of the next action to take based on the maximum expected reward possible, and *exploration*, i.e. a random selection of the next action to take regardless of the rewards. Here's a link to an introduction to the topic:
[Exploration and Exploitation in Reinforcement Learning](https://www.manifold.ai/exploration-vs-exploitation-in-reinforcement-learning)
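A common way to implement the exploration/exploitation compromise described above is $\varepsilon$-greedy action selection; the sketch below uses toy action values and $\varepsilon = 0.1$ (both arbitrary assumptions):

```python
import random

random.seed(42)

q_values = {"left": 0.2, "right": 0.8, "stay": 0.5}  # toy action values

def epsilon_greedy(q, epsilon=0.1):
    # Explore with probability epsilon, otherwise exploit the best action.
    if random.random() < epsilon:
        return random.choice(list(q))
    return max(q, key=q.get)

picks = [epsilon_greedy(q_values) for _ in range(1000)]
greedy_share = picks.count("right") / len(picks)
print(greedy_share > 0.85)  # True: mostly exploitation, some exploration
```

The occasional random actions keep the agent from locking onto a sub-optimal policy too early.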
It is worth mentioning that sometimes an agent can actually outsmart humans though; some examples are reported in the paper [The Surprising Creativity of Digital Evolution](https://arxiv.org/pdf/1803.03453.pdf). I particularly like the story of the insect agent trained to learn to walk while minimising contact with the floor surface. The agent surprisingly managed to learn to walk without touching the ground at all. When the authors checked what was going on, they discovered that the insect had learned to flip itself and then walk using its fake 'elbows' (fig. 7 in the linked paper). I add this story just to point out that most of the time the design of the reward function is itself even more important than exploration and exploitation tuning.
Upvotes: 3 [selected_answer]<issue_comment>username_2: [The accepted answer](https://ai.stackexchange.com/a/20135/2444) does not provide a good definition of *over-fitting*, which actually exists and is a defined concept in reinforcement learning too. For example, the paper [Quantifying Generalization in Reinforcement Learning](http://proceedings.mlr.press/v97/cobbe19a/cobbe19a.pdf) completely focuses on this issue. Let me give you more details.
### Over-fitting in supervised learning
In **supervised learning (SL)**, **over-fitting** is defined as the difference (or gap) in the performance of the ML model (such as a neural network) on the training and test datasets. If the model performs significantly better on the training dataset than on the test dataset, then the ML model has over-fitted the training data. Consequently, it has not generalized (well enough) to other data other than the training data (i.e. the test data). The relationship between over-fitting and [generalization](https://ai.stackexchange.com/a/16737/2444) should now be clearer.
### Over-fitting in reinforcement learning
In **reinforcement learning (RL)** (you can find a brief recap of what RL is [here](https://ai.stackexchange.com/a/20957/2444)), you want to find an **optimal policy** or value function (from which the policy can be derived), which can be represented by a neural network (or another model). A policy $\pi$ is optimal in environment $E$ if it leads to the highest cumulative reward in the long run in that environment $E$, which is often mathematically modelled as a (partially or fully observable) Markov decision process.
In some cases, you are also interested in knowing whether your policy $\pi$ can also be used in a different environment than the environment it has been trained in, i.e. you're interested in knowing if the knowledge acquired in that training environment $E$ can be **transferred** to a different (but typically related) environment (or task) $E'$. For example, you may only be able to train your policy in a simulated environment (because of resource/safety constraints), then you want to transfer this learned policy to the real world. In those cases, you can define the concept of **over-fitting** in a similar way to the way we define over-fitting in SL. The only difference may be that you may say that the learned policy *has over-fitted the training environment* (rather than saying that the ML model has over-fitted the training dataset), but, given that the environment provides the *data*, then you could even say in RL that your policy has over-fitted the training data.
### Catastrophic forgetting
There is also the issue of [**catastrophic forgetting (CF)**](https://ai.stackexchange.com/a/13293/2444) in RL, i.e., while learning, your RL agent may forget what it's previously learned, and this can even happen in the same environment. Why am I talking about CF? Because what it's happening to you is probably CF, i.e., *while learning*, the agent performs well for a while, then its performance drops (although I have read a paper that strangely defines CF differently in RL). You could also say that over-fitting is happening in your case, but, if you are continuously training and the performance changes, then CF is probably what you need to investigate. So, you should reserve the word *over-fitting* in RL when you're interested in **transfer learning** (i.e. the training and test environments do not coincide).
Upvotes: 3 |
2020/04/10 | 726 | 2,418 | <issue_start>username_0: In chapter 4.1 of Sutton's book, the Bellman equation is turned into an update rule by simply changing the indices of it. How is it mathematically justified? I didn't quite get the initiation of why we are allowed to do that?
$$v\_{\pi}(s) = \mathbb E\_{\pi}[G\_t|S\_t=s]$$
$$ = \mathbb E\_{\pi}[R\_{t+1} + \gamma G\_{t+1}|S\_t=s]$$
$$= \mathbb E\_{\pi}[R\_{t+1} + \gamma v\_{\pi}(S\_{t+1})|S\_t=s]$$
$$ = \sum\_a \pi(a|s)\sum\_{s',r} p(s',r|s,a)[r+ \gamma v\_{\pi}(s')]$$
from which it goes to the update equation:
$$v\_{k+1}(s) = \mathbb E\_{\pi}[R\_{t+1} + \gamma v\_{k}(S\_{t+1})|S\_t=s]$$
$$=\sum\_a \pi(a|s)\sum\_{s',r} p(s',r|s,a)[r+ \gamma v\_{k}(s')]$$<issue_comment>username_1: >
> Why are we allowed to convert the Bellman equations into update rules?
>
>
>
There is a simple reason for this: **convergence**. The same [chapter 4](http://incompleteideas.net/book/bookdraft2017nov5.pdf) of the same book mentions it. For example, in the case of *policy evaluation*, the produced sequence of estimates $\{v\_k\}$ is guaranteed to converge to $v\_\pi$ as $k$ (i.e. the number of iterations) goes to infinity. There are other RL algorithms that are also guaranteed to converge (e.g. tabular Q-learning).
To conclude, in many cases, the update rules of simple reinforcement learning (or dynamic programming) algorithms are very similar to their mathematical formalization because algorithms based on those update rules are often guaranteed to converge. However, note that many more advanced reinforcement learning algorithms (especially, the ones that use function approximators, such as neural networks, to represent the value functions or policies) are not guaranteed or known to converge.
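To make the convergence concrete, here is a minimal sketch of iterative policy evaluation on a toy 2-state MDP (the transition table, rewards and discount factor below are invented purely for illustration):

```python
import numpy as np

# Toy MDP under a fixed policy; P[s] lists (next_state, reward, probability).
# These numbers are made up for illustration only.
P = {
    0: [(0, 0.0, 0.5), (1, 1.0, 0.5)],
    1: [(0, 2.0, 1.0)],
}
gamma = 0.9

v = np.zeros(2)  # v_0: an arbitrary initial estimate
for k in range(1000):
    # One sweep of the Bellman update: v_{k+1}(s) = sum_{s',r} p(s',r|s) [r + gamma * v_k(s')]
    v_new = np.array([
        sum(p * (r + gamma * v[s_next]) for s_next, r, p in P[s])
        for s in range(2)
    ])
    done = np.max(np.abs(v_new - v)) < 1e-10
    v = v_new
    if done:  # successive estimates stopped changing: we are at (or very near) v_pi
        break

print(v)  # the fixed point of the Bellman operator, i.e. v_pi
```

Because the Bellman operator is a $\gamma$-contraction, each sweep above shrinks the distance to $v\_\pi$ by a factor of at least $\gamma$, which is exactly why turning the equation into an update rule is justified.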
Upvotes: 3 <issue_comment>username_2: You're asking why the finite-horizon policy evaluation converges to the infinite-horizon one, right?
Since the total reward is bounded (by the discount factor), you know that you can make your finite-horizon policy evaluation get arbitrarily close to it in a finite number of steps.
People praise Barto's book, but I find it annoying to read, as he's not formal enough with the mathematics.
Upvotes: 0 <issue_comment>username_3: To me, the Bellman update is simply supervised learning: the right-hand side (bootstrap) is a sample of the left-hand side (conditional expectation). The Bellman equation simply explains that the right-hand side is such a sample.
Upvotes: 1 |
2020/04/12 | 1,501 | 5,744 | <issue_start>username_0: How can I cluster the data frame below with several features and observations? And how would I go about determining the quality of those clusters? Is k-NN appropriate for this?
```
id Name Gender Dob Age Address
1 MUHAMMAD JALIL Male 1987 33 Chittagong
1 MUHAMMAD JALIL Male 1987 33 Chittagong
2 MUHAMMAD JALIL Female 1996 24 Rangpur
2 MRS. JEBA Female 1996 24 Rangpur
3 <NAME> Male 1987 33 Sirajganj
3 MR. <NAME> Male 1987 33 Sirajganj
3 <NAME> Male 1987 33 Sirajganj
4 MISS. JEBA Female 1996 24 Rangpur
4 PROF. JEBA Female 1996 24 Rangpur
1 <NAME> Male 1987 33 Chittagong
1 MUH<NAME> Male 1987 33 Chittagong
```<issue_comment>username_1: A typical clustering algorithm is k-means (and [not k-NN, i.e. k-nearest neighbours, which is primarily used for classification](https://stats.stackexchange.com/q/56500/82135)). There are other clustering algorithms, such as [hierarchical clustering algorithms](https://en.wikipedia.org/wiki/Hierarchical_clustering). `sklearn` provides functions that implement [k-means](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html) (and [an example](https://scikit-learn.org/stable/auto_examples/cluster/plot_cluster_iris.html#sphx-glr-auto-examples-cluster-plot-cluster-iris-py)), [hierarchical clustering algorithms](https://scikit-learn.org/stable/modules/clustering.html#hierarchical-clustering), and [other clustering algorithms](https://scikit-learn.org/stable/modules/clustering.html).
To assess the quality of the produced clusters, you could use the [silhouette method](https://en.wikipedia.org/wiki/Silhouette_(clustering)) (`sklearn` provides a function that can be used to compute the [silhouette score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html)).
Regarding your specific data frame, note that it contains repetitions, so you may want to remove them before starting the clustering procedure. Also, the IDs are not unique, but you probably don't need the IDs for clustering.
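As a minimal, hedged sketch (the column names mirror the question's data frame, but the numeric values below are invented for illustration, and categorical columns such as `Gender` or `Address` would first need to be encoded as numbers):

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Invented numeric data loosely mirroring the question's Dob/Age columns.
df = pd.DataFrame({
    'Dob': [1987, 1988, 1996, 1995, 1987, 1997],
    'Age': [33, 32, 24, 25, 33, 23],
})
X = df[['Dob', 'Age']].to_numpy()

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
score = silhouette_score(X, km.labels_)  # in [-1, 1]; closer to 1 means better-separated clusters
print(km.labels_, score)
```

Running the silhouette score for several values of `n_clusters` and keeping the best one is a common way to pick the number of clusters.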
Upvotes: 2 <issue_comment>username_2: You can use k-NN for this, but you must convert your dataset to numeric values, or you can remove the unrelated features from your dataset.
Upvotes: 0 <issue_comment>username_3: KNN can be used for grouping the data frame, but there are a number of steps that you must take.
1. You must select the features you want to cluster on; for example, you can cluster on dob and age.
2. If there is data of type string, you have to encode it as an integer.
For easier clustering, you can use the Sklearn library. You can access it at the following link: <https://scikit-learn.org/stable/modules/clustering.html>
Upvotes: 0 <issue_comment>username_4: There are several algorithms for clustering, such as K-means, mean shift, hierarchical, etc. Based on my experience, K-means is the one to use here (KNN is for classification). It is suitable for clustering your dataset; there are several steps:
1. You have to determine which features you want to cluster on
2. Change your categorical data to numerical
3. This step is optional: you can drop columns that are not related to the features you have chosen before
4. Try to code your clustering yourself (determine the centroids from your dataset, calculate the Euclidean distance from each centroid, etc.), or, if you want to use a library, sklearn is the right place.
And to determine the quality of your clustering, you can measure the SSE (the sum of the squared errors of the items of each cluster), the inter-cluster distance, the intra-cluster distance for each cluster, the maximum radius, and the average radius.
Upvotes: 0 <issue_comment>username_5: Yes, you can use the KNN algorithm to group the data (well, actually it's classification, not clustering, if you use KNN). **But** first you need to set one feature as a label, because KNN is a supervised learning method: it needs labeled data to train on first. For example, you can use Gender as the label to classify the data. To determine the quality of the classification result, you can simply use accuracy.
If you don't want to use a label, you can use an unsupervised learning method like K-Means to do the clustering. Because it is unsupervised, it doesn't need a label, so you can use all of the features for the clustering task. For the k-means algorithm you can use a library like scikit-learn or create it from scratch. To evaluate the results you can use the silhouette score or the elbow method (to find the optimal number of clusters).
And don't forget to do data exploration, because it may increase the quality of the clustering results.
You can learn more about the differentiation between K-Means and KNN in the link below:
<https://pythonprogramminglanguage.com/how-is-the-k-nearest-neighbor-algorithm-different-from-k-means-clustering/>
I hope this helps :)
Upvotes: 1 <issue_comment>username_6: You can cluster the data frame with an unsupervised algorithm, for example the K-Means method. There are some options you can choose to eliminate features from your data frame, like `del dataFrame['Column Name']`. In unsupervised learning, the algorithm does not calculate the quality of the clusters by itself, but you can define a metric to calculate the quality of each cluster, for example based on the number of data points in each cluster. Actually, you can use the KNN algorithm with your data frame, but you need to add a label, because KNN is a supervised learning method and its function is classification, not clustering. Hope this is useful.
Upvotes: 0 |
2020/04/12 | 1,216 | 5,154 | <issue_start>username_0: I am looking for datasets that are used as a testing standard in the fully connected neural networks (FCNN). For example, in the image recognition and CNN, CIFAR datasets are used in most of the papers, but can't find anything like that for the FCNN. |
2020/04/12 | 1,012 | 3,766 | <issue_start>username_0: I have a labeled dataset composed of 3000 data. Its single feature is the price of the house and its label is the number of bedrooms.
Which classifier would be a good choice to classify these data?<issue_comment>username_1: It is not really a matter of which model, but of whether it is possible at all to predict what you're trying to predict. Let's take a similar dataset from Kaggle: [California Housing Prices](https://www.kaggle.com/camnugent/california-housing-prices)
This dataset contains house prices and other information, among which is the number of bedrooms per house. As suggested by Oliver in the comments, we can compute the Pearson coefficient to estimate the correlation between the two variables.
```
import pandas as pd
from scipy.stats import pearsonr
df = pd.read_csv('housing.csv')
df = df[df['total_bedrooms'] <= 20] # select subset of dataframe for sake of clarity
df.dropna(inplace=True)
x = df['median_house_value'] # our single feature
y = df['total_bedrooms'] # target labels
print('Correlation: \n', pearsonr(x,y))
```
Out:
```
>>Correlation:
>>(-0.14015312664251944, 0.12362969210761204)
```
The correlation is pretty low, which means that the price and number of bedrooms are basically not related. We can also plot the points to check that indeed there is no correlation at all.
```
df.plot(x='total_bedrooms',y='median_house_value',kind='scatter')
```
Out:
[](https://i.stack.imgur.com/XEKAY.png)
Training a model to predict the number of bedrooms uniquely from the price would mean finding a function that can interpolate all those points, which is impossible, since we have houses with the same price but different numbers of bedrooms.
The only way to tackle a problem like this would be to expand the dimensionality of the data, for example by using a Support Vector Machine with a non-linear kernel. But even with non-linear kernels you can't do miracles, so if your dataset looks like this one, the only solution would be to expand it to include extra features.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Classification can be performed on structured or unstructured data. Classification is a technique where we categorize data into a given number of classes.
Based on my project in price classification, when I compared the 5 models, I got a higher score with a **Random Forest Classifier** compared to Decision Tree, SVM, Naive Bayes, and Logistic Regression.
my project: <https://github.com/khaifagifari/Classification-and-Clustering-on-Used-Cars-Dataset>
source : <https://github.com/f2005636/Classification>
<https://www.kaggle.com/vbmokin/used-cars-price-prediction-by-15-models>
Upvotes: 1 <issue_comment>username_3: If your data is labeled, but you only have a limited amount, you should use a classifier with high bias (for example, Naive Bayes). I'm guessing this is because a higher-bias classifier will have lower variance, which is good because of the small amount of data
Source : <https://stackoverflow.com/questions/2595176/which-machine-learning-classifier-to-choose-in-general/15881662>
Upvotes: 1 <issue_comment>username_4: If you use just one feature for the dataset, I recommend the Naive Bayes classifier, because Naive Bayes is a method based on probability and statistics. It also makes it easy to evaluate the trained model and obtain its accuracy value.
Upvotes: 0 <issue_comment>username_5: I think it does not matter whether you use one or more features in the dataset.
You can compare the classification algorithms with respect to the accuracy each algorithm provides, e.g. compare the Naive Bayes and SVM methods; it depends on your problem set.
Upvotes: 0 |
2020/04/13 | 1,149 | 4,248 | <issue_start>username_0: The [beginner colab example](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/beginner.ipynb#scrollTo=he5u_okAYS4a&line=1&uniqifier=1) for tensorflow states:
>
> Note: It is possible to bake this `tf.nn.softmax` in as the activation function for the last layer of the network. While this can make the model output more directly interpretable, this approach is discouraged as it's impossible to
> provide an exact and numerically stable loss calculation for all models when using a softmax output.
>
>
>
My question is, then, why? What do they mean by *impossible to provide an exact and numerically stable loss calculation*? |
2020/04/13 | 1,015 | 3,806 | <issue_start>username_0: Is there an online tool that can predict accuracy given only the dataset as input (i.e. without the compiled model)?
That would help to understand how data augmentation/distribution standardization, etc., is likely to change the accuracy. |
2020/04/13 | 759 | 2,737 | <issue_start>username_0: Similarly to the question [Who first coined the term Artificial Intelligence?](https://ai.stackexchange.com/q/58/2444), who first coined the term "artificial general intelligence"?<issue_comment>username_1: According to [<NAME>](https://goertzel.org/who-coined-the-term-agi/), the first person that **probably** used the term "artificial general intelligence" (in an article related to artificial intelligence) was [<NAME>](https://medium.com/@mgubrud) in the 1997 article [Nanotechnology and International Security](https://foresight.org/Conferences/MNT05/Papers/Gubrud/index.php). Here's an excerpt from the article.
>
> By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be "conscious" or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.
>
>
>
Note that the term "AGI" could have been used even before Mark A. Gubrud's article (as Goertzel also suggested).
In any case, <NAME> helped to popularise this term, especially with the book [Artificial General Intelligence](https://link.springer.com/book/10.1007/978-3-540-68677-4). AGI was previously known as "strong AI", which goes back to <NAME>'s [Chinese Room argument](https://plato.stanford.edu/entries/chinese-room/), although strong AI often refers to an AGI with consciousness, and the definition of AGI doesn't necessarily imply consciousness.
You can read more about the history of the term "AGI" in [this Goertzel's blog post](https://goertzel.org/who-coined-the-term-agi/).
Upvotes: 3 <issue_comment>username_2: According to [this Wikipedia article](https://en.wikipedia.org/wiki/Artificial_general_intelligence)
>
> The term "artificial general intelligence" was used as early as 1997, by <NAME>. in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by <NAME> and <NAME> around 2002.
>
>
> The research objective is much older, for example <NAME>'s Cyc project (that began in 1984), and <NAME>'s Soar project are regarded as within the scope of AGI.
>
>
>
Upvotes: 1 |
2020/04/13 | 1,090 | 3,929 | <issue_start>username_0: I'm experimenting with training a feedforward neural network using a genetic algorithm and I've done a few tests using both the mean squared error and classification error functions as fitness heuristic in the GA.
When I use MSE as error function, my GA tends to converge around an MSE of 0.1 (initial conditions have an MSE of around 0.9). Testing system accuracy with this network gives me 95%+ for both training and testing data.
But, when I use classification error as my heuristic, my GA tends to converge when the MSE is about 0.3. System accuracy is still about the same, at 95%+.
>
> My question is, if you had two networks, one showing an MSE of 0.1 and one an MSE of 0.3, but both perform approximately the same in terms of accuracy, what can I deduce from the differences in MSE?
>
>
>
In other words: which network is "better", even if the accuracy is the same? Does a lower MSE mean anything below a certain amount? I could train my network for 100x as many generations and get a better MSE but not necessarily a better accuracy. Why?
For some context:
[](https://i.stack.imgur.com/UW3ga.png)
When the MSE is approximately 1.5 (epoch 250), the accuracy seems to match when the MSE is approximately 2.0 (epoch 50). Why does the accuracy not increase despite MSE decreasing? |
2020/04/14 | 3,438 | 14,140 | <issue_start>username_0: Would it be ethical to allow an AI to make life-or-death medical decisions?
For instance, where there is an insufficient number of ventilators during a respiratory pandemic, not every patient can have one. It seems like a straightforward question, but before you answer, consider:
1. Human decision-making in this regard is a form of algorithm.
(For instance, the statistics and rules that determine who gets kidney transplants.)
2. Even if the basis for the decision is statistical, the ultimate decision making process could be heuristic, so at least the bias could be identified.
In other words, the goal of this process, specifically, is favoring one patient over another, but doing so in the way that has the greatest utility.
3. Statistical bias is a core problem of Machine Learning, but human decision making is also subject to this condition.
One of the arguments in favor might be that at least the algorithm would be impartial, here in relation to human bias.
Finally, where there is scarcity, utilitarianism becomes more imperative. (Part of the trolley problem is you only have two tracks.) But the trolley problem is also relevant because it can be a commentary on the burden of responsibility.<issue_comment>username_1: I disagree with the idea that a trained Machine Learning model would be impartial. Models are trained on data sets that contain features. Humans prepare those data sets and decide what features are included in the data set. The model only knows what it is trained on. Human bias is still there just less blatantly obvious.
To address your question directly, the answer I believe is that it is no more or less ethical than to have humans make such decisions since in the end humans created the AI model.
My concern, however, is simply this:
Once we offload this to AI, we will no longer feel responsible for the results. The mentality that "the machine" made the choice will make it very easy for us to abdicate responsibility. This is especially true if the people implementing the AI decisions are not the ones who developed the AI.
Of course, humans having to repeatedly make life-and-death decisions will suffer a serious and potentially devastating toll. So, in the end, it is a trade-off, but, from my perspective, I think the risk of abdication and the consequences thereof carry a heavier weight. But then I am not the one faced with making life-and-death choices.
Upvotes: 3 <issue_comment>username_2: I will interpret the question as being about [triage](https://en.wikipedia.org/wiki/Triage). This is particularly important in crisis situations, where a lot of such life-or-death decisions have to be taken.
In the [START system](https://www.medicinenet.com/medical_triage_code_tags_and_triage_terminology/views.htm) there are four different categories:
* the deceased, who are beyond help
* the injured who could be helped by immediate transportation
* the injured with less severe injuries whose transport can be delayed
* those with minor injuries not requiring urgent care.
According to the category assigned to a patient, you then decide what to do. Other systems might be more fine-grained, but the principle is the same: effectively the human decision is in the classification, which then guides the assignment of resources. In the above list, the second category would probably be the highest priority for treatment. But once the category has been decided, the course of action has been determined, though the actual treatment options will then be considered for each case.
The ethics are thus in the judgment of the survival chances: if a nurse decides patient X is too far gone to warrant treatment, that's it. So this is the hard part, the life-or-death decision.
There are then two aspects to consider:
1. The accuracy of the diagnosis
2. The predicted likelihood of treatment being effective
The diagnosis should be (I'm not a medic!) fairly neutral. There are of course mis-diagnoses, but there is no value judgment involved. There might be some 'leakage' between step 1 and step 2, in that a human being might be influenced by the prospects of treatment of a diagnosis when deciding what the problem is. As in: this is a nice person, I don't want them to die, so I (subconsciously) exclude the diagnosis X which would mean certain death.
In this case a computer system that had sufficient accuracy when making a diagnosis would IMHO actually be more ethical than a human being. Provided, of course, that there is no bias in the diagnosis. Which I think is a practical problem, but in theory could be dealt with.
Once the diagnosis has been determined, the estimation of treatment success is the next decision. This also can be subject to human error, and a computer system with access to statistical information (again, unbiased) of issues and their likelihood of survival could be more independent of other factors which a human being might use, but which are unrelated to the case.
So basically, I would say that a computer system could be *more ethical* than a human being in such a situation. Because you can defend the way it reached its decision, and it has done so without taking into account factors unrelated to the problem at hand.
However, it's not always that easy. There are plenty of cases where other factors influence the decision. As long as there are fixed rules, that might not be a problem, though. Some issues would be (in each case the patients would have the same diagnosis and projected survival chance):
* One ventilator, but an 8-year-old child and an 85-year-old man
* A 30-year-old pregnant woman and the Prime Minister of the country
* A one-month-old baby and a 25-year-old student
* A homeless person and a billionaire
As a given, the system would have to be agnostic of general features such as gender, race, religion, etc. But how do you take into account age, social status, and a whole host of other factors?
If there is no difference in the situation, the fairest way would be to toss a coin. That would surely upset a lot of people ("How can you justify leaving our president to die in this time of crisis!"), but if you had explicit rules ("If one patient is in a higher tax bracket, they get priority") you might upset even more people. The advantage of having such rules would be to *make the bias explicit*, and in a way protect a nurse or paramedic from having to decide between a rich senator and a homeless person — whose family is more likely to go after you if you decided against them? And if you have explicit, unambiguous rules, why not use them to guide an AI system?
Every human being has their own preferences, and I am glad that I don't have to make those kinds of decisions — it sounds like a horrific task to me. How can you even sleep in peace after basically having condemned one person in each of the above cases to death?
That is another factor about the ethics of AI: by relieving humans from having to make such decisions, it would also have a beneficial effect. If the final decision is the same that the human would come to, then it's a win/win situation. But that is probably unlikely, due to a whole range of subconscious biases.
The prime issue to me seems the lack of recourse in the case of "computer said no". When a human makes a decision (like a referee in a game), there will always be an argument if people are unhappy with it. But in this case there could be none. The oracle has spoken, your father will be left to die. Sorry, no other outcome possible. It would probably be the same with a human decision, but it wouldn't feel as 'cold': you can observe that the person making the decision has to struggle with it. And you might understand that it was not an easy choice. With a computer, that element is missing.
Anyway, **to summarise**: given various caveats,
* sufficiently accurate and unbiased diagnosis,
* unbiased prediction of treatment outcome,
* transparent handling of non-medical factors,
I would say that an AI system would be *more ethical*, as it has a principled way of reaching a decision which does not disadvantage any group of patients and would always reach the same decision for the same patient; furthermore, it takes a heavy burden off triage staff who otherwise have to make such decisions.
Which does not mean that I would be happy with my loved-ones' survival being decided by such a system :)
Upvotes: 2 <issue_comment>username_3: [Oliver's answer](https://ai.stackexchange.com/a/20247/2444) is interesting and it provides valuable information (such as a brief description of the [triage](https://en.wikipedia.org/wiki/Triage) process, which I was not aware of), but I disagree with his conclusion or, at least, I think it can be misleading because he is implying it's "more ethical" because the AI will behave in a "more principled way". It depends on your definition of "ethical" (and I will recall one below) and the implications of behaving in a "more principled way".
First of all, we should emphasize that *current* AI systems can be and usually are biased because they are mainly trained with data associated with humans and their actions (as pointed out by Gerry in [his answer](https://ai.stackexchange.com/a/20243/2444)). Furthermore, currently, AI systems (including the ones for healthcare) are only designed by humans, who can automatically and often inadvertently introduce bias, for example, by choosing the specific AI model over another, the data, how to process or acquire the data. (Maybe, in the future, AI systems will design other AI systems, but can this really reduce bias? Given that humans will probably design the first AI system that is able to design other AI systems, would the bias introduced in this first AI system also be propagated to the other AI systems?)
In principle, an AI could make more rational decisions, especially if it is not affected by human limitations (which often are not limitations, such as feelings; e.g., if you didn't feel pain when hitting a chair, you would start bleeding without even noticing it) that make humans sometimes take irrational actions.
However, is the rational action also the most appropriate one? It depends on your definition of rational action and what we mean by "appropriate one".
Here's a definition of "rational" from the dictionary
>
> based on or in accordance with reason or logic
>
>
>
Although the AI system takes actions systematically by following the rules of logic, those actions are still based on some principles or axioms, which will bias the system. So, a rational agent can still be biased, but this bias will be systematic.
In general, every decision can potentially be biased because it's based on some principles and taken by a "subject".
Now, let's address your question more directly
>
> Would it be ethical to allow an AI to make life-or-death medical decisions?
>
>
>
First, let me report two definitions of "ethical" from the dictionary
>
> relating to moral principles or the branch of knowledge dealing with these:
>
>
> morally good or correct
>
>
>
The original question can thus be rephrased as
>
> Would it be morally good to allow an AI to make life-or-death medical decisions?
>
>
>
Of course, it's difficult to argue what is morally good or not, because this is often subjective. It's morally good for me to help my friends, but it isn't necessarily morally good to help other people. We have different friends, so this automatically implies that morally good is subjective.
The answer to this question ultimately boils down to the philosophical issue of good vs bad, which is naturally subjective. So, the answer to this question will depend on the philosophical ideas of each person. Some people will say "yes" and some people will say "no".
I think it's more productive to answer the question
>
> What are the advantages and disadvantages of allowing an AI to make life-or-death medical decisions?
>
>
>
This question can be answered more objectively. For example, we could say that this would free humans from doing this job, which, in certain scenarios (as Oliver points out in his answer), can be "inconvenient". However, we could also say that current AI systems are still not compatible with human values and they do not think in the way humans do, so they could unexpectedly take "wrong" actions, which can also be difficult to explain (especially if your AI system is or uses a black box system, such as a neural network).
>
> So, should AI systems be used to make life-or-death medical decisions?
>
>
>
I think that people should decide "democratically", and there should be a great majority of acceptance, e.g. not just 51%, but e.g. 95-99% of people should agree with the idea of letting an artificial system take a life-or-death decision. To take a reasonable vote, people should be aware of the consequences of such a vote, which means that people should be aware of the inner workings of the AI system and what it can or cannot do (which is often not possible when the AI system is also composed of black-box models, such as neural networks). Alternatively, allowing or not an AI to make such a decision can also be done on a case-by-case basis.
All these issues are related to "explainable artificial intelligence", "accountability", "transparency", which have been increasingly debated in the last years.
Upvotes: 2 <issue_comment>username_4: At face value, this sounds monstrous--a measure to offload responsibility to a non-conscious mechanism that cannot be meaningfully punished for mistakes.
However, I will argue:
* There is humane benefit in taking this decision out of the hands of doctors re: the psychological toll
Specifically, doctors are not the reason for resource scarcity, yet they're the ones being forced to make scarcity-driven life-or-death decisions, and that has got to take a toll.
Essentially, unless one is a sociopath, there is going to be an emotional effect. Here the "sociopathy" of a pure algorithm relieves humans of this terrible burden.
(Might even reduce burnout, and keep more doctors working longer and with more focus.)
Upvotes: 2 |
2020/04/14 | 840 | 2,325 | <issue_start>username_0: Sutton and Barto define the state–action–next-state reward function, $r(s, a, s')$, as follows ([equation 3.6, p. 49](http://incompleteideas.net/book/RLbook2020.pdf#page=71))
$$
r(s, a, s^{\prime}) \doteq \mathbb{E}\left[R\_{t} \mid S\_{t-1}=s, A\_{t-1}=a, S\_{t}=s^{\prime}\right]=\sum\_{r \in \mathcal{R}} r \frac{p(s^{\prime}, r \mid s, a )}{\color{red}{p(s^{\prime} \mid s, a)}}
$$
Why is the term $p(s' \mid s, a)$ required in this definition? Shouldn't the correct formula be $\sum\_{r \in \mathcal{R}} r p(s^{\prime}, r \mid s, a )$?<issue_comment>username_1: $\frac{p(s', r \mid s, a)}{p(s' \mid s, a)}$ represents the probability of observing reward $r$ in state $s'$, given that state $s'$ is the next state transitioned to. The equation assumes a probability distribution of rewards $r$ over state $s'$, meaning that a different reward might be observed whenever a state transitions from $s$ to $s'$. In most cases, if $r(s, a, s')$ is a deterministic reward then $p(s', r \mid s, a) = p(s' \mid s,a )$.
Upvotes: 2 <issue_comment>username_2: Expectation of reward after taking action $a$ in state $s$ and ending up in state $s'$ would simply be
\begin{equation}
r(s, a, s') = \sum\_{r \in R} r \cdot p(r|s, a, s')
\end{equation}
The problem with this is that they do not define probability distribution for rewards separately, they use joint distribution $p(s', r|s, a)$, which represents probability for ending up in state $s'$ with reward $r$ after taking action $a$ in state $s$. This probability can be separated in 2 parts using product rule
\begin{equation}
p(s', r|s, a) = p(s'|s, a)\cdot p(r|s', s, a)
\end{equation}
which represents the probability for getting to state $s'$ from $(s, a)$, and then probability for getting reward $r$ after ending up in $s'$.
If we define reward expectation through the joint distribution, we would have
\begin{align}
r(s, a, s') &= \sum\_{r \in R} r \cdot p(s', r|s, a)\\
&= \sum\_{r \in R} r \cdot p(s'|s, a) \cdot p(r|s', s, a)
\end{align}
but this would not be correct, since we have this extra $p(s'|s, a)$, so we divide everything by it to get an expression with only $p(r|s', s, a)$.
So, in the end we have
\begin{equation}
r(s, a, s') = \sum\_{r \in R} r \frac{p(r, s'|s, a)}{p(s'|s, a)}
\end{equation}
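To sanity-check this identity numerically, here is a small sketch with a made-up joint distribution $p(s', r|s, a)$ for a single fixed $(s, a)$ pair (all numbers are hypothetical):

```python
# Hypothetical joint distribution p(s', r | s, a) for one fixed (s, a):
# keys are (next_state, reward) pairs, values are probabilities.
p_joint = {
    ("s1", 1.0): 0.3,
    ("s1", 0.0): 0.1,
    ("s2", 5.0): 0.6,
}

def marginal(s_next):
    # p(s' | s, a) = sum over r of p(s', r | s, a)
    return sum(p for (sp, _), p in p_joint.items() if sp == s_next)

def expected_reward(s_next):
    # r(s, a, s') = sum over r of r * p(s', r | s, a) / p(s' | s, a)
    return sum(r * p for (sp, r), p in p_joint.items() if sp == s_next) / marginal(s_next)

print(expected_reward("s1"))  # (1.0*0.3 + 0.0*0.1) / 0.4 = 0.75
print(expected_reward("s2"))  # (5.0*0.6) / 0.6 = 5.0
```

Without dividing by the marginal $p(s'|s, a)$, the first value would come out as 0.3 instead of 0.75 — exactly the extra factor that the derivation above divides away.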
Upvotes: 4 [selected_answer] |
2020/04/14 | 598 | 2,363 | <issue_start>username_0: Could machine learning be used to measure the distance between two objects from a picture or live camera?
An example of this is measuring the distance between the centres of the eye pupils.
This area is all new to me, so any advice and suggestions would be greatly appreciated.<issue_comment>username_1: The short answer is: yes, it could. In what you are describing, there's nothing very new or specific conceptually; it sounds like a standard regression task. Now the problem that you're actually facing is: **do you have the data?**
Algorithms won't be able to learn the distance between eyes if you don't have the data that it takes. It could be supervised labels (1 distance per image which would be your regression target), reconstruction from depth maps, multi-view estimation etc. There's a number of ways you could do that *given the appropriate data*.
People focus on algorithms a lot, and that's good. But taking a good look at your data is often as important (if not more).
Now a good example would be in the self-driving car literature. You could start with this blog-post and go through the papers they reference: <https://towardsdatascience.com/vehicle-detection-and-distance-estimation-7acde48256e1>
There also seems to be some literature about your eye example (<https://arxiv.org/pdf/1806.10890.pdf>, <https://www.sciencedirect.com/science/article/pii/S0165027019301578>) so skimming through these papers & the datasets they use could guide you towards answering my question: is there data for this task?
Upvotes: 2 <issue_comment>username_2: Yes. There's a library called OpenCV that can be used to measure distance between objects.
To find out the distance of two objects, we must know the dimensions of the reference object. There are two important properties for the reference object:
1. We know the dimensions of the object in certain units (inches, millimeters, etc.)
2. The reference object needs to be easily found in the photo.
Dimensions of the reference object will be used to measure the distance between other objects. Also, we first need to compute the “pixels-per-metric” ratio, used to determine how many pixels “fit” into a given unit of measurement.
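The ratio itself is plain arithmetic; here is a minimal sketch with made-up numbers (in a real pipeline, the pixel measurements would come from OpenCV's contour detection rather than being hard-coded):

```python
# Reference object: a card known to be 85.6 mm wide, measured at 428 px in the image
# (both numbers are hypothetical).
ref_width_px = 428.0
ref_width_mm = 85.6
pixels_per_metric = ref_width_px / ref_width_mm  # 5.0 px per mm

# Distance between two pupil centres, measured in pixels in the same image.
pupil_dist_px = 310.0
pupil_dist_mm = pupil_dist_px / pixels_per_metric
print(pupil_dist_mm)  # 62.0 mm
```

The conversion only holds for objects at roughly the same distance from the camera as the reference object, which is why the reference must appear in the same image.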
Go to this post for detailed explanation:
<https://www.pyimagesearch.com/2016/03/28/measuring-size-of-objects-in-an-image-with-opencv/>
Upvotes: 1 |
2020/04/14 | 844 | 3,499 | <issue_start>username_0: I've built a neural network from scratch, choosing arbitrary numbers for the hyperparameters: learning rate, number of hidden layers and neurons for these, number of epochs and size of mini batches. Now that I've been able to build something potentially useful (~93% accuracy on test data, unseen by the model before), I want to focus on hyperparameter tuning.
The conceptual difference between training and validation sets is clear and makes a lot of sense. It's obvious that the model is biased towards the training set, so it wouldn't make sense to use it to tune the hyperparameters, nor for evaluating its performance.
But, how can I use the validation set for this, if changing any of the parameters forces me to rebuild a new model again? The final prediction depends on the values of X number of MxN matrices (weights) and X number of N vectors (biases), whose values depend on the learning rate, batch size and number of epochs; and whose dimensions depend on the number and size of hidden layers. If I change any of these, I'd need to rebuild my model again. So I'd be using this validation set for training different models, ending up as in the first step: fitting a model from scratch.
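For concreteness, the loop I'm describing looks roughly like this; `train` and `val_accuracy` are stand-ins for my own from-scratch code, and the configurations and scores are made up:

```python
# Stand-ins so the sketch runs: a "model" is just its config dict, and the
# validation score is a dummy deterministic lookup (made-up numbers).
def train(cfg):
    return dict(cfg)  # pretend: fit a brand-new network with these hyperparameters

def val_accuracy(model):
    return {0.1: 0.91, 0.01: 0.94}[model["lr"]]  # dummy validation scores

configs = [{"lr": 0.1, "hidden": [32]}, {"lr": 0.01, "hidden": [64, 32]}]

# Each config means training a fresh model; the validation set only ranks them.
best_cfg = max(configs, key=lambda c: val_accuracy(train(c)))
print(best_cfg["lr"])  # 0.01

# Only the winning config gets retrained and, finally, scored once on the test set.
final_model = train(best_cfg)
```

If I understand the standard protocol correctly, rebuilding a fresh model per configuration is expected: the validation set's only job is to rank those rebuilt models, and the test set is touched exactly once at the very end.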
To sum up: I fall into a recursive problem in which I need to fine-tune the hyperparameters of my model with unseen data, but changing any of these hyperparameters implies rebuilding the model.
2020/04/15 | 1,177 | 3,827 | <issue_start>username_0: [](https://i.stack.imgur.com/pjJFA.jpg)
I'm having trouble understanding the 5th step in the flowchart.
For the 5th step, the '*update the Q function by taking the average of returns*' is confusing.
From what I understand, the Q function is basically the state-action pair values put in a table (the Q table). To update it means to make adjustments to the state-action pair values of the individual states and their respective actions (e.g. state 1 action 1, state 3 action 1, state 3 action 2, and so forth).
I'm not sure what '*average of returns*' means though. Is it asking me to take the average of the returns after $x$ episodes? From my understanding, the return is the sum of rewards in a full episode (so, AVG = sum of returns for x episodes / x).
And what do I do with that '*average*'?
I'm a little confused when they say '*update the Q function*' because the Q function consists of many parameters that must be updated (the individual state-action pair value), and I'm not sure which one they are referring to.
What is the point of calculating the average of returns? Since the state-action pair value for a particular state and particular action will always be the same (e.g. if I always take action 3 in state 4, I will always get value=2 forever)<issue_comment>username_1: Each episode you will calculate the return; you will then update the action value $Q(s,a)$ as the average of the returns observed so far. Using the blackjack example from the OpenAI [Gym](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) and a discount factor of 1, you get the following
episode 1
```
[{'state': (22, 10, False), 'reward': -1, 'action': 1}, {'state': (17, 10, False), 'reward': 0, 'action': 1}, {'state': (12, 10, False), 'reward': 0.0, 'action': 1}]
```
$Q((22, 10, False),1)=-1$
$Q((17, 10, False),1)=-1$
$Q((12, 10, False),1)=-1$
episode 2
```
[{'state': (21, 10, False), 'reward': 1, 'action': 0}, {'state': (17, 10, False), 'reward': 0, 'action': 1}, {'state': (12, 10, False), 'reward': 0.0, 'action': 1}]
```
$Q((21, 10, False),0)=1$
$Q((17, 10, False),1)=0$
$Q((12, 10, False),1)=0$
For $Q((17, 10, False),1)$ and $Q((12, 10, False),1)$, the value is the average return over the two episodes, i.e. the average of -1 from the first episode and 1 from the second, which gives 0.
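The same averaging can be sketched as an every-visit Monte Carlo update over a pair of toy episodes (these are not the actual gym rollouts, just made-up transitions):

```python
from collections import defaultdict

returns = defaultdict(list)  # (state, action) -> list of observed returns
Q = {}

def update(episode, gamma=1.0):
    # episode: list of (state, action, reward) tuples, in time order
    G = 0.0
    for state, action, reward in reversed(episode):
        G = reward + gamma * G  # return from this step onwards
        returns[(state, action)].append(G)
        # Q is the running average of all returns seen for this pair
        Q[(state, action)] = sum(returns[(state, action)]) / len(returns[(state, action)])

# Two toy episodes that share the state-action pair ("s", "hit"):
update([("s", "hit", 0.0), ("t", "hit", -1.0)])
update([("s", "hit", 0.0), ("t", "stick", 1.0)])
print(Q[("s", "hit")])  # average of returns -1 and 1 -> 0.0
```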
Upvotes: 1 <issue_comment>username_2: $Q(s,a)$ denotes the $Q$-value for the state-action pair. It means the expected return if we start from state $s$, take action $a$, and act according to whatever policy we are currently following.
Suppose we are in state $s\_0$, take action $a\_0$. To compute the returns, we would need to follow our current policy from whatever state we land up after taking $a\_0$, till the end of the episode, and sum up the rewards (or discounted rewards) that we get along the way.
*Why average of returns?*
Because we would want to do this multiple times for a state-action pair and compute the average of all such episodes.
*Why multiple times?*
Generally, the environments and the transition function would have some randomness and we don't get the same reward every time.
*Why would you want to compute this?*
The idea is simple. Since our goal is to maximize the average return, if we compute Q-values for all the possible actions starting from state $s\_0$, then we can compare between the values and decide which action is going to be most beneficial to take from state $s\_0$.
Since this is a tabular approach, when they say update the Q-function, they just mean to update the Q-values.
As an example, suppose we are in state $s\_0$ and can take actions $a\_0$, $a\_1$, and $a\_2$. We first compute the Q-values for $(s\_0, a\_0), (s\_0,a\_1), (s\_0, a\_2)$ pairs, and then we would choose the action which has the maximum Q-value out of these three.
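Once those three Q-values are in the table, choosing between them is just an argmax (hypothetical numbers):

```python
# Hypothetical Q-values for state s0, one entry per available action.
q_s0 = {"a0": 0.1, "a1": 0.7, "a2": 0.4}

# Pick the action with the maximum Q-value.
best_action = max(q_s0, key=q_s0.get)
print(best_action)  # "a1"
```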
Upvotes: 0 |
2020/04/15 | 868 | 2,651 | <issue_start>username_0: Am I right to say that the Q value of a particular state and action is the same as the state-action pair value of that same state and action?
2020/04/16 | 868 | 3,258 | <issue_start>username_0: From my understanding, the policy $\pi$ is basically how the agent acts (i.e. the actions it will take in each state).
However, I am confused about the Q value and how it is "affected" by a policy. [This answer](https://datascience.stackexchange.com/a/31791/10640) says
>
> $Q^\pi(s, a)$ is the action-value function. It is the expected return starting from state $s$, following policy $\pi$, taking action $a$. It's focusing on the particular action at the particular state.
>
>
>
From this, I infer that the $Q$ value (the action-value function) will be affected by the policy $\pi$. Why? So, why does the Q value change according to policy $\pi$?
Shouldn't the Q value be constant, because the same action taken in the same state will always give the same yield (and hence remain constantly good/bad)?
All the policy does is find the maximum Q values and base its choice of actions on that information.<issue_comment>username_1: Ok, Q is the expected return associated with being in a given state, taking a certain action, and then following the given policy.
You need to take the expectation of the sum of the immediate reward plus the value function of the next state, which is defined by the policy.
Upvotes: 0 <issue_comment>username_2: First of all, $Q\_\pi(s, a)$ IS DEFINED AS the value (i.e. the expected return) of taking some action $a$ in some state $s$, AND THEN following some given policy $\pi$ (until e.g. the end of the game or your life). In other words, suppose that you take action $a$ in state $s$, AND THEN use the policy $\pi$ to behave in the world until you die, then $Q\_\pi(s, a)$ would represent the value that you would obtain.
So, we are DEFINING $Q\_\pi(s, a)$ in a certain way. This is a DEFINITION! It's not an algorithm. In the algorithms (e.g. Q-learning), things will typically change, but that's a different story that you should investigate later.
>
> From this, I infer that the $Q$ value (the action-value function) will be affected by the policy $\pi$.
>
>
>
So, $Q\_\pi(s, a)$ will not keep changing. You could say that $Q\_\pi(s, a)$ (which is a function) is "affected by" $\pi$ ONLY in the sense that it is "defined in terms of" $\pi$. To be precise, $Q\_\pi(s, a)$ is actually an expectation (which is a mathematical concept similar to an ideal average). If you are not familiar with the concept of expectation, I suggest you get familiar with it first, before studying reinforcement learning.
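If it helps, the expectation here is nothing more than a probability-weighted average over the possible returns; a toy illustration with made-up numbers:

```python
# Suppose taking action a in state s and then following pi can end three ways
# (hypothetical probabilities and returns G):
outcomes = [(0.5, 10.0), (0.3, 0.0), (0.2, -5.0)]  # (probability, return)

# Q_pi(s, a) = E[G] = sum of probability * return over all outcomes.
q_pi_s_a = sum(p * g for p, g in outcomes)
print(q_pi_s_a)  # 0.5*10 + 0.3*0 + 0.2*(-5) = 4.0
```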
>
> Shouldn't the Q value be constant, because the same action taken in the same state will always give the same yield (and hence remain constantly good/bad)?
>
>
>
Again, there's the distinction between the algorithm that you actually use to find the function $Q\_\pi(s, a)$ and the definition of the same function. In case you are estimating the function with an algorithm, you will not necessarily find "constant Q values". It depends on different aspects, which I would like to avoid discussing here, so that this post doesn't become an open discussion (I suggest you first learn about the basic Bellman equations and then you study the algorithms from the book [Reinforcement learning: an introduction](http://incompleteideas.net/book/RLbook2020.pdf) by Sutton and Barto).
Upvotes: 3 [selected_answer] |
2020/04/16 | 1,731 | 6,296 | <issue_start>username_0: I'm a beginner in the RL field, and I would like to check my understanding of certain RL concepts.
**Value function**: How good it is to be in a state S following policy π.
[](https://i.stack.imgur.com/uPiKw.jpg)
```
So, the value functions here are 0.3 and 0.9
```
**Q function** (also called state-action value, or just action value): How good it is to be in a state S and perform action A while following policy π. It uses reward to measure the state-action value.
[](https://i.stack.imgur.com/Yn967.jpg)
```
So, the state-action values here are 0.03,0.02,0.5 and 0.9
```
**Q value**: The overall expected rewards after performing action A in state S, and continuing with policy π until the end of the episode. So, essentially I can only calculate the Q value if I know all the state-action values of the actions I will be taking in the single episode.(Because the Q value takes into account the actions after the current action A, till the end of the episode, following policy π)
**Reward**: The metric used to tell the agent how good/bad its action was. It is a constant value.
For example:
```
1. Fall in pond --> -1
2. On stone path --> +1
3. Reach home--> +10
```
**Return**: The sum of rewards in a single episode
**Policy π**: A set of specific instructions an agent will follow in an episode. For example, the policy will look like:
```
In state 1, take action 3 ( which takes me to state 2)
In state 2, take action 2 ( which takes me to state 3)
In state 3, take action 1 ( Which takes me to state 4)
In state 4, take action 2 ( Which takes me to terminal state)
1 episode completed
```
And my policy will keep updating each episode to get the best return<issue_comment>username_1: I think most of it is correct.
>
> Q function(also called state-action value, or just action value): How good it is to be in a state S and perform action A while following policy π. It uses reward to measure the state-action value
>
>
>
This is a bit off. Q function basically tells you how good it is to be in state S and perform action A, and follow policy $\pi$ from the next state onwards. The action A that you take can be any action from the action space and need not be according to the policy $\pi$.
Also, I think Q-function and Q-value are mostly used interchangeably to mean the same thing.
Upvotes: 1 <issue_comment>username_2: >
> **Value function**: How good it is to be in a state $s$ following policy $\pi$.
>
>
>
There are different value functions. There's the state value function, often denoted as $v(s)$ (or $V(s)$), so it's a function of only one variable, i.e. $s$ (a state). There's the state-action value function $q(s, a)$ (or $Q(s, a$)). A value function is a function, so it's not a number or a vector, or whatever. It's a function, so it maps inputs to outputs. In the first case, it maps states to real numbers. In the second case, it maps states and actions to real numbers. So, we could denote the state value function as $v : \mathcal{S} \rightarrow \mathbb{R}$ (where $\mathcal{S}$ is the set of states in your environment) and state-action value function as $q : \mathcal{S} \times \mathcal{A}\rightarrow \mathbb{R}$ (where $\mathcal{A}$ is the set of actions and $\times$ means "combination of").
So, your definition of a value function is not quite correct. The value function $v(s)$ doesn't represent "how good it is to be in a state $s$ following a policy $\pi$", but "how good it is to be in a state $s$ AND THEN follow policy $\pi$". To emphasize this, you often use the notation $v\_{\pi}(s)$ rather than simply $v(s)$.
See [What are the value functions used in reinforcement learning?](https://ai.stackexchange.com/q/10575/2444) for more details about existing value functions in reinforcement learning. And to see the full definition of the value functions, I suggest you read Sutton and Barto's book.
>
> **Q function** (also called state-action value, or just action value): How good it is to be in a state $s$ and perform action $a$ while following policy $\pi$. It uses reward to measure the state-action value
>
>
>
As I said above, the $q$ function is a "value function" too. It's just a different value function than $v$.
Again, the same thing I said for $v$ also applies here, so "how good it is to be in a state $s$ and perform action $a$ while following policy $\pi$" is incorrect for the same reason your definition for $v$ was incorrect. The $q$ function can be defined as "how good it is to be in a state $s$ and take action $a$, AND, AFTER THAT, follow a given policy $\pi$. Again, to emphasize that $q$ is defined in terms of $\pi$, we often use the notation $q\_\pi$.
>
> **Reward**: The metric used to tell the agent how good/bad its action was. It is a constant value.
>
>
>
This is roughly correct, but the reward doesn't have to be constant and it depends on your problem. There's also the related notion of a "reward function", which is the function that assigns rewards to each action. So, when defining your problem as a Markov decision process, you need to define this reward function. Actually, this is probably the most important function in reinforcement learning (because this is the way you teach the agent to behave).
>
> **Return**: The sum of rewards in a single episode
>
>
>
This is roughly correct. However, note that the sum can also be a "weighted sum".
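For example, with a discount factor $\gamma$ the "weighted sum" is $G = \sum\_k \gamma^k r\_k$. A minimal sketch (the reward sequence below is made up, loosely echoing the pond/path/home rewards from the question):

```python
def discounted_return(rewards, gamma=0.9):
    """G = r_0 + gamma*r_1 + gamma^2*r_2 + ... (a 'weighted' sum of rewards)."""
    g = 0.0
    # fold in rewards backwards so each step applies the discount exactly once
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# with gamma = 1 this reduces to the plain (unweighted) sum of rewards
print(discounted_return([-1, 1, 10], gamma=1.0))  # 10.0
print(discounted_return([-1, 1, 10], gamma=0.9))  # 8.0
```

With $\gamma < 1$, rewards further in the future count for less, which is exactly the "weighting" mentioned above.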
>
> **Policy**: A set of specific instructions an agent will follow in an episode.
>
>
>
This is roughly correct, but a policy can also have some randomness in it. For example, if you are in state $s$, your policy could say "always take action $a\_i$", but another policy could say "take action $a\_i$ with probability $p$ and action $a\_j$ with probability $1 - p$. Also, note that the policy is not restricted to an episode. It's a general function that tells the agent how to behave independently of the episode.
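To make the difference concrete, here is a small Python sketch contrasting a deterministic policy with a stochastic one (the state and action labels are made up for illustration):

```python
import random

# Deterministic policy: each state maps to exactly one action.
deterministic_policy = {1: 3, 2: 2, 3: 1, 4: 2}

# Stochastic policy: each state maps to a distribution over actions,
# e.g. in state 1, take action 3 with probability p = 0.8, action 1 with 1 - p.
stochastic_policy = {
    1: {3: 0.8, 1: 0.2},
    2: {2: 1.0},
}

def sample_action(policy, state, rng=random):
    """Draw an action from a stochastic policy's distribution for `state`."""
    actions, probs = zip(*policy[state].items())
    return rng.choices(actions, weights=probs, k=1)[0]

print(sample_action(stochastic_policy, 2))  # always 2, since its probability is 1.0
```

A deterministic policy is just the special case where every distribution puts probability 1 on a single action.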
(Sorry, I didn't look at your examples. Maybe I will review this answer later to look at your examples too, but the information in this answer should already tell you if your examples are correct or not).
Upvotes: 3 [selected_answer]
2020/04/16 <issue_start>username_0: Although I know how the algorithm of iterative policy evaluation using dynamic programming works, I am having a hard time realizing how it actually converges.
It appeals to intuition that, with each iteration, we get a better and better approximation of the value function and can thus be assured of convergence, but, stated this simply, the method seems very inefficient, contrary to the reality that it is actually quite efficient.
What is the rigorous mathematical proof of the convergence of the policy evaluation algorithm to the actual answer? How is it that the value function obtained this way is close to the actual values computed by solving the set of bellman equations?<issue_comment>username_1: There exist other RL books which do a better job of talking about this but it's pretty simple at it's core.
The discounting factor puts an upper limit on the difference in reward between a finite number of iterations and an infinite number, each time you add another iteration it decreases by $\gamma$ multiplied into the upper bound of the difference.
$V\_\pi = E[\sum\_{i=0}^{\infty} \gamma^iR\_i]$, $\Delta V\_k = E[\sum\_{i=k}^{\infty} \gamma^iR\_i] = \gamma^k E[\sum\_{i=0}^{\infty} \gamma^{i}R\_{i+k}] < \gamma^k \frac{1}{1-\gamma}R\_{max}$
Upvotes: -1 <issue_comment>username_2: First of all, [efficiency](https://en.wikipedia.org/wiki/Algorithmic_efficiency) and convergence are two different things. There's also the [rate of convergence](https://en.wikipedia.org/wiki/Rate_of_convergence), so an algorithm may converge faster than another, so, in this sense, it may be more efficient. I will focus on the proof that policy evaluation (PE) converges. If you want to know about its efficiency, maybe ask another question, but the proof below also tells you about the rate of convergence of PE.
>
> What is the proof that policy evaluation converges to the optimal solution?
>
>
>
To provide some context, I will briefly describe policy evaluation and what you need to know to understand the proof.
Policy evaluation
-----------------
Policy evaluation (PE) is an iterative numerical algorithm to find the value function $v^\pi$ for a given (and arbitrary) policy $\pi$. This problem is often called the *prediction problem* (i.e. you want to predict the rewards you will get if you behave in a certain way).
### Two versions: synchronous and asynchronous
There are (at least) two versions of policy evaluation: a synchronous one and an asynchronous one.
In the **synchronous** version (SPE), you maintain two arrays for the values of the states: one array holds the current values of the states and the other array will contain the next values of the states, so two arrays are used in order to be able to update the value of each state at the same time.
In the **asynchronous** version (APE), you update the value of each state in place. So, first, you update the value of e.g. $s\_1$, then $s\_2$, etc., by changing your only array of values (so you do not require a second array).
SPE is similar in style to the numerical method called [Jacobi method](https://en.wikipedia.org/wiki/Jacobi_method), which is a general iterative method for finding a solution to a system of linear equations (which is exactly what PE is actually doing, and this is also explained in the cited book by Sutton and Barto). Similarly, APE is similar in style to [the Gauss–Seidel method](https://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method), which is another method to solve a system of linear equations.
Both of these general numerical methods to solve a system of linear equations are studied in detail in [Parallel and Distributed Computation Numerical Methods](https://labs.xjtudlc.com/labs/wldmt/reading%20list/books/Distributed%20and%20parallel%20algorithms/Parallel%20and%20Distributed%20Computation%20Numerical%20Methods.pdf) (1989) by Bertsekas and Tsitsiklis, which I haven't read yet, but provides convergence results for these numerical methods.
The book [Reinforcement learning: an introduction](http://incompleteideas.net/book/RLbook2020.pdf) by Sutton and Barto provides a more detailed description of policy evaluation (PE).
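To make the synchronous version concrete, here is a minimal NumPy sketch of SPE (the 2-state MDP below is entirely made up for illustration); it repeatedly applies the Bellman expectation operator $v \leftarrow \mathbf{r}^\pi + \gamma \mathbf{T}^\pi v$ until the value array stops changing:

```python
import numpy as np

def policy_evaluation_sync(P, r, gamma=0.9, tol=1e-8):
    """Synchronous (Jacobi-style) policy evaluation for a fixed policy pi.

    P: |S| x |S| state-transition matrix under pi
    r: |S| vector of expected one-step rewards under pi
    Returns v satisfying the Bellman expectation equation v = r + gamma * P v.
    """
    v = np.zeros(len(r))
    while True:
        v_new = r + gamma * P @ v  # one application of the Bellman operator
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new

# hypothetical 2-state MDP, made up purely for illustration
P = np.array([[0.5, 0.5],
              [0.2, 0.8]])
r = np.array([1.0, 0.0])
v = policy_evaluation_sync(P, r)

# the fixed point also solves the linear system (I - gamma * P) v = r directly
v_exact = np.linalg.solve(np.eye(2) - 0.9 * P, r)
print(np.allclose(v, v_exact))  # True
```

The direct linear solve at the end is exactly the "system of linear equations" view mentioned above: PE is an iterative way of solving $(\mathbf{I} - \gamma \mathbf{T}^\pi) v = \mathbf{r}^\pi$.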
Proof of convergence
--------------------
I will provide a proof for the SPE based on [these slides](http://www.andrew.cmu.edu/course/10-703/slides/lecture4_valuePolicyDP-9-10-2018.pdf) by [<NAME>](http://www.cs.cmu.edu/%7Etom/). Before proceeding, I suggest you read the following question [What is the Bellman operator in reinforcement learning?](https://ai.stackexchange.com/q/11057/2444) and its answer, and you should also get familiar with vector spaces, norms, fixed points and maybe contraction mappings.
The proof that PE finds a unique fixed-point is based on the **contraction mapping theorem** and on the concept of **$\gamma$-contractions**, so let me first recall these definitions.
>
> **Definition ($\gamma$-contraction)**: An operator on a normed vector space $\mathcal{X}$ is a $\gamma$-contraction, for $0 < \gamma < 1$, provided for all $x, y \in \mathcal{X}$
>
>
> $$\| F(x) - F(y) \| \leq \gamma \| x - y\|$$
>
>
> **Contraction mapping theorem**: For a $\gamma$-contraction $F$ in a complete normed vector space $\mathcal{X}$
>
>
> * Iterative application of $F$ converges to a **unique** fixed point in $\mathcal{X}$ independently of the starting point
> * at a linear convergence rate determined by $\gamma$
>
>
>
Now, consider the vector space $\mathcal{V}$ over state-value functions $v$ (i.e. $v \in \mathcal{V})$. So, each point in this space fully specifies a value function $v : \mathcal{S} \rightarrow \mathbb{R}$ (where $\mathcal{S}$ is the state space of the MDP).
**Theorem (convergence of PE)**: The [Bellman operator](https://ai.stackexchange.com/q/11057/2444) is a $\gamma$-contraction operator, so an iterative application of it converges to a unique fixed-point in $\mathcal{V}$. Given that PE is an iterative application of the Bellman operator (see [What is the Bellman operator in reinforcement learning?](https://ai.stackexchange.com/q/11057/2444)), PE finds this unique fixed-point solution.
So, we just need to show that the Bellman operator is a $\gamma$-contraction operator in order to show that PE finds this unique fixed-point solution.
### Proof
We will measure the distance between state-value functions $u$ and $v$ by the $\infty$-norm, i.e. the largest difference between state values:
$$\|u - v\|\_{\infty} = \operatorname{max}\_{s \in \mathcal{S}} |u(s) - v(s)|$$
**Definition (Bellman operator)**: We define the Bellman expectation operator as
$$F^\pi(v) = \mathbf{r}^\pi + \gamma \mathbf{T}^\pi v$$
where $v \in \mathcal{V}$, $\mathbf{r}^\pi$ is an $|\mathcal{S}|$-dimensional vector whose $j$th entry gives $\mathbb{E} \left[ r \mid s\_j, a=\pi(s\_j) \right]$ and $\mathbf{T}^\pi$ is an $|\mathcal{S}| \times |\mathcal{S}|$ matrix whose $(j, k)$ entry gives $\mathbb{P}(s\_k \mid s\_j, a=\pi(s\_j))$.
Now, let's measure the distance (with the $\infty$-norm defined above) between any two value functions $u \in \mathcal{V}$ and $v \in \mathcal{V}$ after the application of the Bellman operator $F^\pi$
\begin{align}
\| F^\pi(u) - F^\pi(v) \|\_{\infty}
&= \| (\mathbf{r}^\pi + \gamma \mathbf{T}^\pi u) - (\mathbf{r}^\pi + \gamma \mathbf{T}^\pi v)\|\_{\infty} \\
&= \| \gamma \mathbf{T}^\pi (u - v)\|\_{\infty} \\
&\leq \| \gamma \mathbf{T}^\pi ( \mathbb{1} \cdot \| u - v \|\_{\infty})\|\_{\infty} \\
&\leq \| \gamma (\mathbf{T}^\pi \mathbb{1}) \cdot \| u - v \|\_{\infty}\|\_{\infty} \\
&\leq \gamma \| u - v \|\_{\infty}
\end{align}
where $\mathbb{1} = [1, \dots, 1]^T$. Note that $\mathbf{T}^\pi \cdot \mathbb{1} = \mathbb{1}$ because $\mathbf{T}^\pi$ is a [stochastic matrix](https://en.wikipedia.org/wiki/Stochastic_matrix).
By the Bellman expectation equation (see Barto and Sutton's book and [What is the Bellman operator in reinforcement learning?](https://ai.stackexchange.com/q/11057/2444)), $v^\pi$ is **a** fixed-point of the Bellman operator $F^\pi$. Given the contraction mapping theorem, the iterative application of $F^\pi$ produces a **unique** solution, so $v^\pi$ must be this unique solution, i.e. SPE finds $v^\pi$. [Here](https://ai.stackexchange.com/a/22970/2444) is another version of the proof that the Bellman operator is a contraction.
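The contraction inequality is also easy to sanity-check numerically. A small sketch with a random row-stochastic matrix standing in for $\mathbf{T}^\pi$ (all numbers made up):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9

# random row-stochastic matrix (rows sum to 1) and reward vector
T = rng.random((4, 4))
T /= T.sum(axis=1, keepdims=True)
r = rng.random(4)

def F(v):
    """Bellman expectation operator F(v) = r + gamma * T v."""
    return r + gamma * T @ v

u, v = rng.random(4), rng.random(4)
lhs = np.max(np.abs(F(u) - F(v)))    # ||F(u) - F(v)||_inf
rhs = gamma * np.max(np.abs(u - v))  # gamma * ||u - v||_inf
print(lhs <= rhs + 1e-12)  # True
```

Whatever $u$ and $v$ you draw, the distance after applying $F$ is at most $\gamma$ times the distance before, which is exactly the $\gamma$-contraction property proved above.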
Notes
-----
I didn't prove the contraction mapping theorem, but you can find more info about the theorem and its proof in [the related Wikipedia article](https://en.wikipedia.org/wiki/Banach_fixed-point_theorem).
Upvotes: 4 [selected_answer]
2020/04/16 <issue_start>username_0: I'm a biotech student and I'm currently working on single-particle tracking. For my work, I need to use aspects of deep learning (CNN, RNN and object segmentation) but I'm not familiar with these topics. I have some prior knowledge of Python.
So, do I have to learn machine learning first before going into deep learning, or can I skip ML?
What are the pros and cons of studying machine learning before deep learning?<issue_comment>username_1: That question doesn't really make sense: deep learning is a sub-topic of machine learning, so you can't really 'skip' it. It's a bit like "I want to learn about trigonometry, but do I need to do geometry first?"
Having said that, in order to make sense of deep learning you should really know about the general principles of machine learning, otherwise you won't understand it. Or, more importantly, you won't understand what problems deep learning can be applied to, and what issues are better solved with other methods.
You don't need to go into much detail, but should at least get an overview.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Like username_1 mentioned, Deep learning is just a sub-field of machine learning. In order to learn deep learning effectively you need to have certain pre-requisites like basic principle of Machine learning and basics of simple Artificial neural network with some programming knowledge ( Python is go-to language). That being said, you don't need to know every single Machine learning algorithm and it's practices.
Now, if deep learning happens to be just a tool that you need for this particular project and you have no time to learn about it in depth, then I would recommend you take a look at Python libraries like TensorFlow, PyTorch, scikit-learn, SciPy, OpenCV, etc. You can get started and use DL and ML models with these and many other libraries without knowing their under-the-hood algorithms and implementations.
One of the best courses to get started with deep learning with very little ML knowledge is Andrew Ng's deeplearning.ai course on Coursera (you can audit the course and get all the course materials for free).
Here's the link to the course : [Deep learning.ai](https://www.coursera.org/specializations/deep-learning)
Upvotes: 1 <issue_comment>username_3: Machine learning uses algorithms to digest data sets, draw conclusions based on analyzed data, and use these conclusions to complete the task in the most effective way. This ability is a fundamental difference between machine learning and machine that has been programmed from the beginning with a certain sequence of commands. Machine learning has the capability to accomplish tasks dynamically.
While Deep Learning is one of the methods of implementing machine learning that aims to mimic the workings of the human brain using ANN. Deep learning uses a number of algorithms as 'neurons' to work together in determining and digesting certain characteristics in a data set.
In contrast to general machine learning programs that are designed to accomplish certain tasks, deep learning programs are usually programmed with more complex capabilities to study, digest, and classify data.
A machine learning model requires data to learn and obtain parameter estimates, so the more data that can be used, the machine learning program will be smarter. In addition, operating machine learning models — especially logical networks for deep learning — requires high computational power. This is because the deep learning model must operate many processes simultaneously, especially in the training phase. In the training phase, the machine learning model must process very large amounts of data to be categorized as a reference.
Upvotes: 0 <issue_comment>username_4: >
> So, do I have to learn machine learning first before going into deep learning, or can I skip ML?
>
>
>
Quoting from [Wikipedia](https://en.wikipedia.org/wiki/Deep_learning), "Deep learning (also known as deep structured learning or differential programming) is part of a broader family of machine learning methods based on artificial neural networks with representation learning."
That being said, it is better to understand the fundamental of machine learning first so you would be able to understand completely of how deep learning works, and how to apply it effectively and efficiently. However, You can always skip straight to deep learning without having any major issues, as there is already a lot of libraries supporting deep learning on python such as [TensorFlow](https://www.tensorflow.org/).
>
> What are the pros and cons of studying machine learning before deep learning?
>
>
>
pros:
Since deep learning is a subset of machine learning, having fundamental knowledge about machine learning and the other machine learning algorithms will be beneficial.
cons:
You might (not) waste your time and energy to learn something you wouldn't use.
Upvotes: 0
2020/04/16 <issue_start>username_0: Assume that I have a DataFrame with a text column.
Problem: Classification / Prediction
```
sms_text
0 Go until jurong point, crazy.. Available only ...
1 Ok lar... Joking wif u oni...
2 Free entry in 2 a wkly comp to win FA Cup fina...
3 U dun say so early hor... U c already then say...
4 Nah I don't think he goes to usf, he lives aro...
```
After preprocessing the text
[](https://i.stack.imgur.com/4EQWp.png)
From the above WordCloud, we can find the most frequent(occurred) words like
```
Free
Call
Text
Txt
```
As these are the most frequent words, they add less importance in prediction/classification since they appear a lot. (My opinion)
>
> My Question is
> Removing top frequent(most occurred) words will improve the model score?
>
>
> How does this impact on model performance?
>
>
> Is it ok to remove the most occurred words?
>
>
><issue_comment>username_1: The technical term for these words is "[stop words](https://en.wikipedia.org/wiki/Stop_words)". Have a look at [Information Retrieval](https://en.wikipedia.org/wiki/Information_retrieval) and indexing (eg [TF/IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)) to make up your mind whether you want to remove them or not.
Upvotes: 1 <issue_comment>username_2: As far as I know, there are few **aspects** that would probably improve the model score:
>
> 1. Normalization
> 2. Lemmatization
> 3. Stopwords removal (as you asked here)
>
>
>
Based on your question, "will removing the most frequent words (stopwords) improve the model score?", the answer is: it depends on what kind of stopwords you are removing. The problem is that if you do not remove stop words, the noise will increase in the dataset because of words like I, my, me, etc. Here is a comparison of those three **aspects** using an **SVM classifier**.
[](https://i.stack.imgur.com/BQavF.png)
You may see that without stopwords removal the Train Set Accuracy decreased to **94.81%** and the Test Set Accuracy decreased to **88.02%**. But, you should be careful about what kind of stopwords you are removing.
If you are working with basic NLP techniques like BOW, CountVectorizer or TF-IDF (term frequency and inverse document frequency), then removing stopwords is a good idea because stopwords act like noise for these methods. If you are working with LSTMs or other models that capture semantic meaning, where the meaning of a word depends on the context of the preceding text, then it becomes important not to remove stopwords.
So, what's the solution?
>
You may want to use the Python package **nlppreprocess**, which removes stop words that are not necessary. It also has some additional functionalities that can make the cleaning of text fast. For example:
>
>
>
```
from nlppreprocess import NLP
import pandas as pd
nlp = NLP()
df = pd.read_csv('some_file.csv')
df['text'] = df['text'].apply(nlp.process)
```
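If you'd rather not add a dependency, a basic stopword filter is also easy to sketch with the standard library alone (the stopword set below is made up and far from complete; real lists, such as NLTK's, are much longer):

```python
# a tiny, made-up stopword list; real lists (e.g. NLTK's) are much longer
STOPWORDS = {"i", "me", "my", "a", "an", "in", "to", "the", "do", "not", "he"}

def remove_stopwords(text):
    """Drop stopwords after lowercasing and whitespace tokenization."""
    return " ".join(w for w in text.lower().split() if w not in STOPWORDS)

print(remove_stopwords("Free entry in a weekly comp to win the cup"))
# -> free entry weekly comp win cup
```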
Source:
1. <https://github.com/miguelfzafra/Latest-News-Classifier>
2. <https://towardsdatascience.com/why-you-should-avoid-removing-stopwords-aa7a353d2a52>
Upvotes: 3 [selected_answer]<issue_comment>username_3: Based on my project, there is how i clean and doing some preparation on the data.
1. Delete specific charaters ('\r', '\n', '"',)
2. Change into the lowercase
3. Delete some symbols
4. Lemmatization (change base word with wordnet)
5. Delete stopwords.
With these following step, i get some improvement accuracy score on my model.
My project: <https://github.com/khaifagifari/NLP-Course-TelU>
Upvotes: 1 <issue_comment>username_4: Based on my experience, I did 2 tasks that is proven to improve the accuracy/score of my model.
1. Normalization
* removing characters and symbols in a text
* lowercase folding
2. Stopwords removal (as what you asked)
These processes helped me improve my model, since stopwords gave my model noise, as I am using word frequency counts to represent text.
So, based on what you asked, does stopword removal improve the score? It depends on your model. If you are using word counts to represent text, you may do stopword removal to remove noise when doing text classification.
Upvotes: 2
2020/04/16 <issue_start>username_0: This might be a little broad question, but I have been watching Caltech YouTube videos on machine learning, and in [this](https://www.youtube.com/watch?v=Dc0sr0kdBVI&list=PLD63A284B7615313A&index=7) video the professor is trying to explain how we should interpret the VC dimension in layman's terms, and why we need it in practice.
The first part I think I understand, please correct me if I am wrong. The VC dimension dictates the number of effective parameters (i.e. degrees of freedom) that the model has. In other words, the number of parameters the model needs in order to cover all possible label combinations for the chosen dataset. Now, the second part is not clear to me. The professor is trying to answer the question:
>
> How does knowing the VC dimension of the hypothesis class affect the number of samples we need for training?
>
>
>
Again, I apologize if all of this may be trivial, but I am new to the field and wish to learn as much as I can, so I can implement better and more efficient programs in practice.<issue_comment>username_1: Given a hypothesis set $H$, the set of all possible mappings from $X\to Y$ where $X$ is our input space and $Y$ are our binary mappings: $\{-1,1\}$, the growth function, $\Pi\_H(m)$, is defined as the maximum number of dichotomies generated by $H$ on $m$ points. Here a dichotomy is the set of $m$ points in $X$ that represent a hypothesis. A hypothesis is just a way we classify our points. Therefore with two labels we know,
$$\Pi\_H(m)\leq 2^m$$
This just counts every possible dichotomy. The VC dimension is then the largest $m$ where $\Pi\_H(m)=2^m$.
Consider a 2D perceptron, meaning our $X$ is $\mathbb{R}^2$ and our classifying hyperplane is one-dimensional: a line. The VC dimension will be 3. This is because we can shatter (correctly classify) all dichotomies for $m=3$. We can either have all points be the same colour, or one point be a different colour - which is $2^3=8$ dichotomies. You may ask what if the points we are trying to classify are collinear. This does not matter because we are concerned with resolving the dichotomies themselves, not the location of the points. We just need a set of points (wherever they may be located) that exhibits that dichotomy. In other words, we can pick the points such that they maximize the number of dichotomies we can shatter with one classifying hyperplane (a triangle): the VC dimension is a statement of the capacity of our model.
To make this clear, consider $m=4$. We can represent the truth table of the XOR gate as a dichotomy but this is not resolvable by the perceptron, no matter where we choose the location of the points (not linearly separable). Therefore, we can resolve a maximum of 8 dichotomies, so our VC dimension is 3. In general, the VC dimension of perceptrons is $d+1$ where $d$ is the dimension of $X$ and $d-1$ is the dimension of the classifying hyperplane.
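The XOR claim is easy to verify by brute force: no choice of weights $(w\_1, w\_2, b)$ makes $\operatorname{sign}(w\_1 x + w\_2 y + b)$ reproduce the XOR labeling. A quick sketch searching a coarse grid of candidate lines:

```python
import itertools

# XOR truth table: four points with labels +1 / -1
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [-1, +1, +1, -1]

def separates(w1, w2, b):
    """True if the line w1*x + w2*y + b = 0 reproduces the XOR labeling."""
    return all(
        (w1 * x + w2 * y + b > 0) == (lab > 0)
        for (x, y), lab in zip(points, labels)
    )

# exhaustively try a coarse grid of candidate lines: none works
grid = [i / 4 for i in range(-20, 21)]
found = any(separates(*wwb) for wwb in itertools.product(grid, repeat=3))
print(found)  # False
```

The grid search is only illustrative, but the negative result holds for all real weights: summing the two "positive" constraints gives $w\_1 + w\_2 + 2b > 0$, while the two "negative" constraints give $w\_1 + w\_2 + 2b \leq 0$, a contradiction.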
Upvotes: 2 <issue_comment>username_2: The VC dimension represents the capacity ([the same Vapnik, the letter V from VC, calls it the "capacity"](https://www.math.arizona.edu/~hzhang/math574m/Read/vapnik.pdf)) of a model (or, in general, hypotheses class), so a model with a higher VC dimension has more capacity (i.e. it can represent more functions) than a model with a lower VC dimension.
The VC dimension is typically used to provide theoretical bounds e.g. on the number of samples required for a model to achieve a certain *test error* with a given *uncertainty* or, similarly, to understand the quality of your estimation given a certain dataset.
Just to give you an idea of how the bounds look like, have a look at the theorem on page 6 (of the pdf) of the paper [An overview of statistical learning theory](https://www.math.arizona.edu/~hzhang/math574m/Read/vapnik.pdf) (1999) by Vapnik.
Have also a look at [this answer](https://ai.stackexchange.com/a/17881/2444), where I provide more info about the VC dimension, in particular, in the context of neural networks.
Upvotes: 2 <issue_comment>username_3: From [[1]](https://www.springer.com/gp/book/9780387987804) we know that we have the following bound between the test and train error for i.i.d samples:
$$
\mathbb{P}\left(R \leqslant R\_{emp} + \sqrt{\frac{d\left(\log{\left(\frac{2m}{d}\right)}+1\right)-\log{\left(\frac{\eta}{4}\right)}}{m}}\right) \geqslant 1-\eta
$$
$R$ is the test error, $R\_{emp}$ is the training error, $m$ is the size of the training dataset, and $d$ is the hypothesis class's VC dimension. As you can see, the training and test errors have some relations to the dataset's size ($m$) and $d$.
Now, in terms of PAC learnability, we want to find a (lower or upper) bound for $m$ such that the absolute difference between $R$ and $R\_{emp}$ will be less than a given $\epsilon$ with a given probability of at least $1-\eta$. Hence, $m$ can be computed in terms of $\epsilon$, $\eta$, and $d$. For example, it can be proved ([[2]](https://dl.acm.org/doi/10.5555/2946645.2946683)) to train a binary classifier with $\epsilon$ difference between test and train error with the probability of at least $1-\eta$, we need $O\left(\frac{d + \log\frac{1}{\eta}}{\epsilon} \right)$ i.i.d sample data, i.e., $m = O\left(\frac{d + \log\frac{1}{\eta}}{\epsilon}\right)$. See more example and references [here](https://en.wikipedia.org/wiki/Sample_complexity).
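As an illustration, the generalization gap predicted by the VC bound above can be evaluated directly to see how it shrinks as the sample size $m$ grows (the values of $d$ and $\eta$ below are arbitrary):

```python
import math

def vc_generalization_gap(m, d, eta):
    """Upper bound on R - R_emp from the VC inequality (holds w.p. >= 1 - eta)."""
    return math.sqrt((d * (math.log(2 * m / d) + 1) - math.log(eta / 4)) / m)

# the gap shrinks as the sample size m grows, for a fixed capacity d
gaps = [vc_generalization_gap(m, d=10, eta=0.05) for m in (100, 1000, 10000)]
print([round(g, 3) for g in gaps])
```

Conversely, fixing a desired gap $\epsilon$ and solving for $m$ gives the kind of sample-complexity statement quoted above: larger $d$ (more capacity) demands more samples for the same guarantee.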
Upvotes: 3 [selected_answer]<issue_comment>username_4: Since the mathematical details have already been covered by other answers, I will try to provide an intuitive explanation. I will answer this assuming the question meant $model$ and not $learning$ $algorithm$.
One way to think of $\mathcal V \mathcal C$ dimension is that it is an indicator of the number of functions (i.e a set of functions) you can choose from to approximate your classification task over a domain. So a model (here assume neural nets, linear separators, circles, etc whose parameters can be varied) having $\mathcal V \mathcal C$ dimension of $m$ shatters all subsets of the single/multiple set of $m$ points it shatters.
For a learning algorithm, to select a function, which gives accuracy close to the best possible accuracy (on a classification task) from the aforementioned set of functions (shattered by your model, which means it can represent the function with $0$ error) it needs a certain sample size of $m$. For the sake of argument, let's say your set of functions (or the model shatters) contains all the possible mappings from $\mathcal X \rightarrow \mathcal Y$ (assume $\mathcal X$ contains $n$ points i.e finite sized, as a result number of functions possible is $2^n$). One of the function it will shatter is the function which performs the classification, and thus you are interested in finding it.
Any learning algorithm which sees $m$ samples can easily pick out the set of functions which agree on these points. The number of functions agreeing on these $m$ sampled points (but possibly disagreeing on the remaining $n-m$ points) is $2^{(n-m)}$. The algorithm has no way of selecting, from these shortlisted functions, the one function which is the actual classifier, hence it can only guess. Now increase the sample size: the number of shortlisted functions keeps falling and the algorithm's probability of success keeps getting better, until you have seen all $n$ points and your algorithm can identify the classifier's mapping function exactly.
The $\mathcal V \mathcal C$ dimension is very similar to the above argument, except it doesn't shatter the entire domain $\mathcal X$ and only a part of it. This limits the models capability to approximate a classification function exactly. So your learning algorithm tries to pick a function from all the functions your model shatter, which is very close to the best possible classification function i.e there will exist a best possible (not exact) function (optimal) in your set of functions which is closest to the classification function and your learning algorithm tries to pick a function which is close to this optimal function. And thus again, as per our previous argument it will need to keep increasing the sample size to reach as close as possible to the optimal function. The exact mathematical bounds can be found in books, but the proofs are quite daunting.
Upvotes: 0
2020/04/17 <issue_start>username_0: I am currently attempting to detect a signal from background noise. The signal is pretty well known but the background has a lot of variability. I've since come to know this problem as Open Set Recognition. Another complicating factor is that the signal mixes with the background noise (think equivalent to a transparent piece of glass in front of scenery for a picture, or picking out the sound of a pin drop in an office space).
When I started this project, it seemed like the current state of the art in this space was generating Spectrograms and feeding them to a CNN and this is the path I've followed. I'm at a place where I think I've overcome most of the initial problems you might encounter but I'm still not getting good enough results for a project solution.
Here's the overall steps I've gone through:
0. Generate 17000 ground truth "signals" and 17000 backgrounds (negatives or other classes depending on what nn scheme I'm training)
1. Generate separate test samples (not training samples but external model validation samples: "blind test") where I take the backgrounds and randomly overlay the signal into it at various intensities.
2. My first attempt was with a pre-built library training solution (ImageAI) with a resnet50 base model. This solution is a multiclass classifier, so I had 400 each of the signal + 5 other classes that were the background. It did not work well at classifying the signal. I don't think I ever got this off the ground for two reasons: a) my spectrogram pictures were not optimised (way too large) and b) I couldn't adjust the image input shape via the library. It mostly just ended up classifying one background class.
3. I then started building my own neural nets. The first reason was to make sure my spectrogram input shape was matched by the input shape of the CNN. The second reason was to test various neural net schemes to see what worked best.
4. The first net I built was a simple feed forward net with a couple of dense layers. This trains to .9998 val\_acc. It (like the rest of what I try) produces poor results on my blind tests, in the range of 60% true positive.
```
def build(width, height, depth, classes):
    # initialize the model along with the input shape to be
    # "channels last" and the channels dimension itself
    model = Sequential()
    inputShape = (height, width, depth)
    chanDim = -1
    # if we are using "channels first", update the input shape
    # and channels dimension
    if K.image_data_format() == "channels_first":
        inputShape = (depth, height, width)
        chanDim = 1
    # the first layer of a Sequential model must declare the input shape
    model.add(Flatten(input_shape=inputShape))
    model.add(Dense(512, activation="relu"))
    model.add(Dense(128, activation="relu"))
    model.add(Dense(32, activation="relu"))
    # sigmoid classifier
    model.add(Dense(classes))
    model.add(Activation("sigmoid"))
    # return the constructed network architecture
    return model
```
5. I then try a "VGG Light" model. Again, trains to .9999 but gives me only 62% true positive results on my blind tests
```
def build(width, height, depth, classes):
    # initialize the model along with the input shape to be
    # "channels last" and the channels dimension itself
    model = Sequential()
    inputShape = (height, width, depth)
    chanDim = -1
    # if we are using "channels first", update the input shape
    # and channels dimension
    if K.image_data_format() == "channels_first":
        inputShape = (depth, height, width)
        chanDim = 1
    # CONV => RELU => POOL
    model.add(Conv2D(32, (3, 3), padding="same", input_shape=inputShape))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(MaxPooling2D(pool_size=(3, 3)))
    model.add(Dropout(0.25))
    # (CONV => RELU) * 2 => POOL
    model.add(Conv2D(64, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(Conv2D(64, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    # (CONV => RELU) * 2 => POOL
    model.add(Conv2D(128, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(Conv2D(128, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(GaussianNoise(.05))
    # first (and only) set of FC => RELU layers
    model.add(Flatten())
    model.add(Dense(1024))
    model.add(Activation("relu"))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))
    model.add(Dense(512))
    model.add(Activation("relu"))
    model.add(BatchNormalization())
    model.add(Dropout(.5))
    model.add(Dense(128))
    model.add(Activation("relu"))
    model.add(BatchNormalization())
    model.add(GaussianDropout(0.5))
    # sigmoid classifier
    model.add(Dense(classes))
    model.add(Activation("sigmoid"))
    # return the constructed network architecture
    return model
```
6. I then try a "full VGG" net. This again trains to .9999, but gives only a 63% true-positive rate on my blind tests.
```
def build(width, height, depth, classes):
    # initialize the model along with the input shape to be
    # "channels last" and the channels dimension itself
    model = Sequential()
    inputShape = (height, width, depth)
    chanDim = -1
    # if we are using "channels first", update the input shape
    # and channels dimension
    if K.image_data_format() == "channels_first":
        inputShape = (depth, height, width)
        chanDim = 1
    # CONV => RELU => POOL
    model.add(Conv2D(64, (3, 3), padding="same", input_shape=inputShape))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(MaxPooling2D(pool_size=(3, 3)))
    #model.add(Dropout(0.25))
    # (CONV => RELU) * 2 => POOL
    model.add(Conv2D(128, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(Conv2D(128, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    #model.add(Dropout(0.25))
    # (CONV => RELU) * 2 => POOL
    model.add(Conv2D(256, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(Conv2D(256, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    #model.add(Dropout(0.25))
    # (CONV => RELU) * 2 => POOL
    model.add(Conv2D(512, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(Conv2D(512, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    #model.add(Dropout(0.25))
    # (CONV => RELU) * 2 => POOL
    model.add(Conv2D(1024, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(Conv2D(1024, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    #model.add(Dropout(0.25))
    model.add(GaussianNoise(.1))
    # first (and only) set of FC => RELU layers
    model.add(Flatten())
    model.add(Dense(8192))
    model.add(Activation("relu"))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))
    model.add(Dense(4096))
    model.add(Activation("relu"))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))
    model.add(Dense(1024))
    model.add(Activation("relu"))
    model.add(BatchNormalization())
    model.add(GaussianDropout(0.5))
    # sigmoid classifier
    model.add(Dense(classes))
    model.add(Activation("sigmoid"))
    # return the constructed network architecture
    return model
```
7. All of the above are trained with binary\_crossentropy in Keras.
8. I've tried multi-class with these models as well but when testing them on the blind test they usually pick the background rather than the signal.
9. I've also messed around with Autoencoders to try and get the encoder to rebuild the signal well and then compare to known results but haven't been successful yet though I'd be willing to give it another try if everyone thought that might produce better results.
10. In the beginning I ran into unbalanced classification problems (I was noob) but under all the models shown above the classes all have the same number of samples.
I'm at the point where the larger VGG models trained on 34,000 samples are taking days to train, and I don't see any better results than from a basic feed-forward NN that takes 4 minutes to train.
Does anyone see the path forward here?<issue_comment>username_1: Given a hypothesis set $H$, a set of mappings from our input space $X$ to the binary labels $Y = \{-1,1\}$, the growth function, $\Pi\_H(m)$, is defined as the maximum number of dichotomies generated by $H$ on $m$ points. Here a dichotomy is a labelling of $m$ points in $X$ produced by some hypothesis; a hypothesis is just a way we classify our points. Therefore, with two labels, we know
$$\Pi\_H(m)\leq 2^m$$
This just counts every possible labelling. The VC dimension is then the largest $m$ where $\Pi\_H(m)=2^m$.
Consider a 2D perceptron, meaning our $X$ is $\mathbb{R}^2$ and our classifying hyperplane is one-dimensional: a line. The VC dimension will be 3. This is because we can shatter (correctly classify) all dichotomies for $m=3$: we can either have all points be the same colour, or one point be a different colour - which is $2^3=8$ dichotomies. You may ask what if the points we are trying to classify are collinear. This does not matter, because we are concerned with resolving the dichotomies themselves, not the location of the points. We just need a set of points (wherever they may be located) that exhibits that dichotomy. In other words, we can pick the points such that they maximize the number of dichotomies we can shatter with one classifying hyperplane (a triangle): the VC dimension is a statement of the capacity of our model.
To make this clear, consider $m=4$. We can represent the truth table of the XOR gate as a dichotomy but this is not resolvable by the perceptron, no matter where we choose the location of the points (not linearly separable). Therefore, we can resolve a maximum of 8 dichotomies, so our VC dimension is 3. In general, the VC dimension of perceptrons is $d+1$ where $d$ is the dimension of $X$ and $d-1$ is the dimension of the classifying hyperplane.
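To make the XOR argument concrete, here is a small numerical sketch (my own illustration, not part of the original argument): a brute-force search over a coarse grid of perceptron weights fails to find any line that produces the XOR labelling, while it easily finds one for a linearly separable labelling such as AND. A grid search is of course not a proof, but the known non-separability of XOR guarantees the negative result.

```python
import itertools

import numpy as np

# The XOR dichotomy on 4 points in R^2: a labelling no perceptron sign(w.x + b) can produce
points = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
labels = np.array([-1, 1, 1, -1])

def linearly_separable(points, labels, grid):
    # Brute-force search over a coarse grid of weights (w1, w2) and bias b
    for w1, w2, b in itertools.product(grid, repeat=3):
        preds = np.sign(points @ np.array([w1, w2]) + b)
        if np.array_equal(preds, labels):
            return True
    return False

grid = np.linspace(-2.0, 2.0, 21)
print(linearly_separable(points, labels, grid))                      # XOR: not separable
print(linearly_separable(points, np.array([-1, -1, -1, 1]), grid))   # AND: separable
```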
Upvotes: 2 <issue_comment>username_2: The VC dimension represents the capacity ([the same Vapnik, the letter V from VC, calls it the "capacity"](https://www.math.arizona.edu/~hzhang/math574m/Read/vapnik.pdf)) of a model (or, in general, hypotheses class), so a model with a higher VC dimension has more capacity (i.e. it can represent more functions) than a model with a lower VC dimension.
The VC dimension is typically used to provide theoretical bounds e.g. on the number of samples required for a model to achieve a certain *test error* with a given *uncertainty* or, similarly, to understand the quality of your estimation given a certain dataset.
Just to give you an idea of how the bounds look like, have a look at the theorem on page 6 (of the pdf) of the paper [An overview of statistical learning theory](https://www.math.arizona.edu/~hzhang/math574m/Read/vapnik.pdf) (1999) by Vapnik.
Have also a look at [this answer](https://ai.stackexchange.com/a/17881/2444), where I provide more info about the VC dimension, in particular, in the context of neural networks.
Upvotes: 2 <issue_comment>username_3: From [[1]](https://www.springer.com/gp/book/9780387987804) we know that we have the following bound between the test and train error for i.i.d samples:
$$
\mathbb{P}\left(R \leqslant R\_{emp} + \sqrt{\frac{d\left(\log{\left(\frac{2m}{d}\right)}+1\right)-\log{\left(\frac{\eta}{4}\right)}}{m}}\right) \geqslant 1-\eta
$$
$R$ is the test error, $R\_{emp}$ is the training error, $m$ is the size of the training dataset, and $d$ is the hypothesis class's VC dimension. As you can see, the training and test errors have some relations to the dataset's size ($m$) and $d$.
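The bound above is easy to evaluate numerically. Here is a small sketch in plain Python that just transcribes the formula, showing how the gap between test and train error shrinks as $m$ grows for a fixed $d$ and $\eta$:

```python
import math

def vc_gap_bound(m, d, eta):
    # Upper bound on R - R_emp that holds with probability >= 1 - eta
    return math.sqrt((d * (math.log(2 * m / d) + 1) - math.log(eta / 4)) / m)

# More data tightens the bound for a fixed hypothesis class (d) and confidence (eta)
for m in (1_000, 10_000, 100_000):
    print(m, round(vc_gap_bound(m, d=10, eta=0.05), 3))
```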
Now, in terms of PAC learnability, we want to find a (lower or upper) bound for $m$ such that the absolute difference between $R$ and $R\_{emp}$ will be less than a given $\epsilon$ with a given probability of at least $1-\eta$. Hence, $m$ can be computed in terms of $\epsilon$, $\eta$, and $d$. For example, it can be proved ([[2]](https://dl.acm.org/doi/10.5555/2946645.2946683)) to train a binary classifier with $\epsilon$ difference between test and train error with the probability of at least $1-\eta$, we need $O\left(\frac{d + \log\frac{1}{\eta}}{\epsilon} \right)$ i.i.d sample data, i.e., $m = O\left(\frac{d + \log\frac{1}{\eta}}{\epsilon}\right)$. See more example and references [here](https://en.wikipedia.org/wiki/Sample_complexity).
Upvotes: 3 [selected_answer]<issue_comment>username_4: Since the mathematical details have already been covered by other answers, I will try to provide an intuitive explanation. I will answer this assuming the question meant $model$ and not $learning$ $algorithm$.
One way to think of the $\mathcal V \mathcal C$ dimension is that it is an indicator of the number of functions (i.e. a set of functions) you can choose from to approximate your classification task over a domain. So a model (here assume neural nets, linear separators, circles, etc., whose parameters can be varied) having a $\mathcal V \mathcal C$ dimension of $m$ shatters all subsets of at least one set of $m$ points.
For a learning algorithm to select a function which gives accuracy close to the best possible accuracy (on a classification task) from the aforementioned set of functions (shattered by your model, which means it can represent the function with $0$ error), it needs a certain sample size $m$. For the sake of argument, let's say your set of functions (i.e. the set your model shatters) contains all the possible mappings from $\mathcal X \rightarrow \mathcal Y$ (assume $\mathcal X$ contains $n$ points, i.e. it is finite sized; as a result, the number of possible functions is $2^n$). One of the functions it will shatter is the function which performs the classification, and thus you are interested in finding it.
Any learning algorithm which sees $m$ samples can easily pick up the set of functions which agree on these points. The number of these functions agreeing on these sampled $m$ points but disagreeing on the $n-m$ points is $2^{(n-m)}$. The algorithm has no way of selecting, from these shortlisted functions (agreeing on $m$ points), the one function which is the actual classifier, hence it can only guess. Now increase the sample size: the number of disagreeing functions keeps falling, and the algorithm's probability of success keeps getting better and better, until you see all $n$ points, when your algorithm can identify the mapping function of the classifier exactly.
The $\mathcal V \mathcal C$ dimension is very similar to the above argument, except it doesn't shatter the entire domain $\mathcal X$ and only a part of it. This limits the models capability to approximate a classification function exactly. So your learning algorithm tries to pick a function from all the functions your model shatter, which is very close to the best possible classification function i.e there will exist a best possible (not exact) function (optimal) in your set of functions which is closest to the classification function and your learning algorithm tries to pick a function which is close to this optimal function. And thus again, as per our previous argument it will need to keep increasing the sample size to reach as close as possible to the optimal function. The exact mathematical bounds can be found in books, but the proofs are quite daunting.
Upvotes: 0 |
2020/04/17 | 665 | 2,724 | <issue_start>username_0: I'm a bit new to AI and I'd like to use some kind of clustering algorithm to solve a problem:
I'm trying to parse pdf documents to get headings and titles. I can parse pdf to html and I'm then able to get some information on the lines of the document. I've identified some properties that can be useful for identifying the headings.
* font-size (int): of course it's quite usual that heading's font-size is bigger than normal text
* font-family (string): it's possible for headings to be bold so font-family may differ
* left property (int): it's also possible that headings are aligned a bit to the right, there's an indentation that's not always there on normal paragraphs
* bonus boolean: I have identified some properties that I can combine to get a boolean value. When the boolean is set to true it can increase the chances of the paragraph being a heading.
Of course, these are not rules that apply to all headings. Some headings may follow some of these but not all of them. It could also be possible that some 'normal' paragraphs follow all these points, but what I've seen is that, in general, those rules where what made headings different from paragraphs.
With this information, is there a way of doing what I'm looking for? As I said, I'm new to AI even though I have a background in CS and mathematics. I thought clustering could be interesting since I'm trying to create 2 clusters: headings and normal paragraphs.
What algorithm do you think might work for this use case. Should I look outside clustering?<issue_comment>username_1: Yes, you could use clustering: Encode your features as a feature vector and feed it into a clustering algorithm (see [Finding Groups in Data](https://onlinelibrary.wiley.com/doi/book/10.1002/9780470316801) for a comprehensive description of these). You could use agglomerative clustering, which would give you groups of similar items; perhaps different level headings will be clustered together.
Alternatively you could try a decision tree, something like [ID3](https://en.wikipedia.org/wiki/ID3_algorithm), which would also be suitable; for this you'd need some annotated training data, though. But with a small amount of data you might solve it, if your items are clearly separated.
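As a concrete illustration of that feature-vector approach, here is a minimal two-cluster k-means sketch in plain NumPy (the feature values below are invented for illustration; in practice they would come from your parsed HTML):

```python
import numpy as np

# Hypothetical per-line feature vectors: [font_size, left_indent, is_bold]
X = np.array([
    [18.0, 40.0, 1.0],   # likely heading
    [11.0, 20.0, 0.0],   # body text
    [11.0, 20.0, 0.0],
    [16.0, 40.0, 1.0],   # likely heading
    [11.0, 20.0, 0.0],
])

# Standardise so font size does not dominate the distance metric
X = (X - X.mean(axis=0)) / X.std(axis=0)

def two_means(X, iters=20):
    # Seed the two centres at the extremes of the first feature (font size)
    centers = np.array([X[np.argmin(X[:, 0])], X[np.argmax(X[:, 0])]])
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])
    return labels

labels = two_means(X)   # headings and body lines land in different clusters
```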
Upvotes: 2 <issue_comment>username_2: To answer: yes, you could use clustering to parse PDF documents; it is similar to how text mining works (you can read more [here](https://guides.library.duke.edu/c.php?g=289707&p=1930855)).
As for the method, you could use k-means, agglomerative hierarchical clustering, or other methods based on your preference. With labelled data, classifiers such as k-NN or naive Bayes would also be an alternative.
Upvotes: 0 |
2020/04/18 | 434 | 1,773 | <issue_start>username_0: I have just started to study reinforcement learning and, as far as I understand, existing algorithms search for the optimal solution/policy, but do not allow the possibility for the programmer to suggest a way to find the solution (to guide their learning process). This would be beneficial for finding the optimal solution faster.
Is it possible to guide the learning process in (deep) reinforcement learning?<issue_comment>username_1: The programmer already guides the RL algorithm (or agent) by specifying the reward function. However, the reward function alone may not be sufficient to learn efficiently and fast, as you correctly noticed.
To attempt to solve this *inefficiency problem*, one solution is to combine reinforcement learning with supervised learning. For example, the paper [Deep Q-learning from Demonstrations](https://arxiv.org/pdf/1704.03732.pdf) (2017) by <NAME> et al. describes an approach to achieve this.
The paper [Active Reinforcement Learning](https://cs.stanford.edu/people/acvogel/papers/290.pdf) (2008) by <NAME> et al. also tries to solve this problem but by incorporating approximations (given by domain experts) of the MDP.
There are probably many other possible solutions. In fact, all model-based RL algorithms could probably fall into this category of algorithms that estimate or incorporate the dynamics of the environment to find a policy more efficiently.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Here are two very related interesting papers:
1. [Learning from Human Preferences](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/)
2. [Improving Reinforcement Learning with Human Input](https://www.ijcai.org/Proceedings/2018/0817.pdf)
Upvotes: 2 |
2020/04/18 | 3,933 | 13,660 | <issue_start>username_0: I have a question about how the averaging works when doing mini-batch gradient descent.
I think I now understood the general gradient descent algorithm, but only for online learning. When doing mini-batch gradient descent, do I have to:
* forward propagate
* calculate error
* calculate all gradients
...repeatedly over all samples in the batch, and then average all gradients and apply the weight change?
I thought it would work that way, but recently I have read somewhere that you basically only average the error of each example in the batch, and then calculate the gradients at the end of each batch. That left me wondering though, because, the activations of which sample in the mini-batch am I supposed to use to calculate the gradients at the end of every batch?
It would be nice if somebody could explain what exactly happens during mini-batch gradient descent, and what actually gets calculated and averaged.<issue_comment>username_1: Introduction
------------
First of all, it's completely normal that you are confused because nobody really explains this well and accurately enough. Here's my partial attempt to do that. So, this answer doesn't *completely* answer the original question. In fact, I leave some unanswered questions at the end (that I will eventually answer).
The gradient is a linear operator
---------------------------------
The gradient operator $\nabla$ is a linear operator, because, for some $f : \mathbb{R} \rightarrow \mathbb{R} $ and $g: \mathbb{R} \rightarrow \mathbb{R}$, the following two conditions hold.
* $\nabla(f + g)(x) = (\nabla f)(x) + (\nabla g)(x),\; \forall x \in \mathbb{R}$
* $\nabla(kf)(x) = k(\nabla f)(x),\; \forall k, x \in \mathbb{R}$
*In other words, the restriction, in this case, is that the functions are evaluated at the same point $x$ in the domain. This is a very important restriction to understand the answer to your question below!*
The linearity of the gradient directly follows from the linearity of the derivative. See a [simple proof here](https://www.youtube.com/watch?v=CfaaOc7dZIs).
### Example
For example, let $f(x) = x^2$, $g(x) = x^3$ and $h(x) = f(x) + g(x) = x^2 + x^3$, then $\frac{dh}{dx} = \frac{d (x^2 + x^3)}{d x} = \frac{d x^2}{d x} + \frac{d x^3}{d x} = \frac{d f}{d x} + \frac{d g}{d x} = 2x + 3x^2$.
Note that both $f$ and $g$ are not linear functions (i.e. straight-lines), so the linearity of the gradients is not just applicable in the case of straight-lines.
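This linearity is easy to check numerically with finite differences (a quick sketch using the same $f$ and $g$ as above):

```python
def num_deriv(f, x, h=1e-6):
    # Central-difference approximation of df/dx
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2
g = lambda x: x ** 3
x = 1.5

lhs = num_deriv(lambda t: f(t) + g(t), x)   # gradient of (f + g) at x
rhs = num_deriv(f, x) + num_deriv(g, x)     # gradient of f plus gradient of g, at the SAME x
# Both agree with the analytic value 2x + 3x^2 = 9.75 (up to finite-difference error)
```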
Straight-lines are not necessarily linear maps
----------------------------------------------
Before proceeding, I want to note that there are at least two notions of *linearity*.
1. There's the notion of a [linear map](https://en.wikipedia.org/wiki/Linear_map) (or linear operator), i.e. which is the definition above (i.e. the gradient operator is a linear operator because it satisfies the two conditions, i.e. it preserves addition and scalar multiplication).
2. There's the notion of a straight-line function: $f(x) = c\*x + k$. A function can be a straight-line and not be a linear map. For example, $f(x) = x+1$ is a straight-line but it doesn't satisfy the conditions above. More precisely, in general, $f(x+y) \neq f(x) + f(y)$, and you can easily verify that this is the case if $x = 2$ and $y=3$ (i.e. $f(2+3) = 6$, $f(2) = 3$, $f(3) = 4$, but $f(2) + f(3) = 7 \neq f(2+3)$.
Neural networks
---------------
A neural network is a composition of (typically) non-linear functions (let's ignore the case of linear functions), which can thus be represented as $$y'\_{\theta}= f^{L}\_{\theta\_L} \circ f^{L-1}\_{\theta\_{L-1}} \circ \dots \circ f\_{\theta\_1},$$ where
* $f^{l}\_{\theta\_l}$ is the $l$th layer of your neural network and it
computes a non-linear function
* ${\theta\_l}$ is a vector of parameters associated with the $l$th layer
* $L$ is the number of layers,
* $y'\_{\theta}$ is your neural network,
* $\theta$ is a vector containing all parameters of the neural network
* $y'\_{\theta}(x)$ is the output of your neural network
* $\circ $ means the composition of functions
Given that $f^l\_{\theta}$ are non-linear, $y'\_{\theta}$ is also a non-linear function of the input $x$. This notion of linearity is the second one above (i.e. $y'\_{\theta}$ is not a straight-line). In fact, neural networks are typically composed of sigmoids, ReLUs, and hyperbolic tangents, which are not straight-lines.
Sum of squared errors
---------------------
Now, for simplicity, let's consider the sum of squared error (SSE) as the loss function of your neural network, which is defined as
$$
\mathcal{L}\_{\theta}(\mathbf{x}, \mathbf{y}) = \sum\_{i=1}^N \mathcal{S}\_{\theta}(\mathbf{x}\_i, \mathbf{y}\_i) = \sum\_{i=1}^N (\mathbf{y}\_i - y'\_{\theta}(\mathbf{x}\_i))^2
$$
where
* $\mathbf{x} \in \mathbb{R}^N$ and $\mathbf{y} \in \mathbb{R}^N$ are the vectors of inputs and labels, respectively
* $\mathbf{y}\_i$ is the label for the $i$th input $\mathbf{x}\_i$
* $\mathcal{S}\_{\theta}(\mathbf{x}\_i, \mathbf{y}\_i) = (\mathbf{y}\_i - y'\_{\theta}(\mathbf{x}\_i))^2$
Sum of gradients vs gradient of a sum
-------------------------------------
Given the gradient is a linear operator, one could think that computing the sum of the gradients is equal to the gradient of the sums.
However, in our case, we are summing $\mathcal{S}\_{\theta}(\mathbf{x}\_i, \mathbf{y}\_i)$ and, in general, $\mathbf{x}\_i \neq \mathbf{x}\_j$, for $i \neq j$. So, essentially, the SSE is the sum of the same function, i.e. $S\_{\theta}$, evaluated at different points of the domain. However, the definition of a linear map applies when the functions are evaluated at the same point in the domain, as I said above.
So, in general, in the case of neural networks with SSE, the gradient of the sum may not be equal to the sum of gradients, i.e. the definition of the linear operator for the gradient doesn't apply here because we are evaluating every squared error at different points of their domains.
Stochastic gradient descent
---------------------------
The idea of stochastic gradient descent is to approximate the *true gradient* (i.e. the gradient that would be computed with all training examples) with a *noisy gradient* (which is an approximation of the true gradient).
How does the noisy gradient approximate the true gradient?
----------------------------------------------------------
In the case of mini-batch ($M \leq N$, where $M$ is the size of the mini-batch and $N$ is the total number of training examples), this is actually a sum of the gradients, one for each example in the mini-batch.
The papers [Bayesian Learning via Stochastic Gradient Langevin Dynamics](https://www.ics.uci.edu/~welling/publications/papers/stoclangevin_v6.pdf) (equation 1) or [Auto-Encoding Variational Bayes](https://arxiv.org/pdf/1312.6114.pdf) (in section 2.2) use this type of approximation. See also [these slides](http://www.princeton.edu/~yc5/ele522_optimization/lectures/stochastic_gradient.pdf).
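The "unbiased in expectation" behaviour behind this approximation can be checked numerically. A NumPy sketch, with a linear least-squares model standing in for the network (all names and data here are invented for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))       # N = 100 training inputs
y = rng.normal(size=100)            # labels
theta = rng.normal(size=3)          # current parameters

def mean_grad(idx):
    # Mean gradient of (theta.x_i - y_i)^2 over the examples in idx, w.r.t. theta
    residual = X[idx] @ theta - y[idx]
    return 2 * X[idx].T @ residual / len(idx)

full_grad = mean_grad(np.arange(100))   # the "true" gradient over the whole dataset

# Average the noisy minibatch gradients (M = 10) over many random minibatches
batches = [rng.choice(100, size=10, replace=False) for _ in range(10_000)]
noisy_avg = np.mean([mean_grad(b) for b in batches], axis=0)
# noisy_avg converges to full_grad: the minibatch gradient is an unbiased estimator
```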
### Why?
To give you some intuition of why we sum the gradients of the error of each input point $\mathbf{x}\_i$, let's consider the case $M=1$, which is often referred to as the (actual) stochastic gradient descent algorithm.
Let's assume we uniformly sample an arbitrary tuple $(\mathbf{x}\_j, \mathbf{y}\_j)$ from the dataset $\mathcal{D} = \{ (\mathbf{x}\_i, \mathbf{y}\_i) \}\_{i=1}^N$.
Formally, we want to show that
\begin{align}
\nabla\_{\theta} \mathcal{L}\_{\theta}(\mathbf{x}, \mathbf{y})
&=
\mathbb{E}\_{(\mathbf{x}\_j, \mathbf{y}\_j) \sim \mathbb{U}}\left[ \nabla\_{\theta} \mathcal{S}\_{\theta} \right] \label{1} \tag{1}
\end{align}
where
* $\nabla\_{\theta} \mathcal{S}\_{\theta}$ is the gradient of $\mathcal{S}\_{\theta}$ with respect to the parameters $\theta$
* $\mathbb{E}\_{(\mathbf{x}\_j, \mathbf{y}\_j) \sim \mathbb{U}}$ is the expectation with respect to the random variable associated with a sample $(\mathbf{x}\_j, \mathbf{y}\_j)$ from the uniform distribution $\mathbb{U}$
Under some conditions (see [this](https://math.stackexchange.com/q/217702/168764)), we can exchange the expectation and gradient operators, so \ref{1} becomes
\begin{align}
\nabla\_{\theta} \mathcal{L}\_{\theta}(\mathbf{x}, \mathbf{y})
&=
\nabla\_{\theta} \mathbb{E}\_{(\mathbf{x}\_j, \mathbf{y}\_j) \sim \mathbb{U}}\left[ \mathcal{S}\_{\theta} \right]
\label{2} \tag{2}
\end{align}
Given that we uniformly sample, the probability of sampling an arbitrary $(\mathbf{x}\_j, \mathbf{y}\_j)$ is $\frac{1}{N}$. So, equation \ref{2} becomes
\begin{align}
\nabla\_{\theta} \mathcal{L}\_{\theta} (\mathbf{x}, \mathbf{y})
&=
\nabla\_{\theta} \sum\_{i=1}^N \frac{1}{N} \mathcal{S}\_{\theta}(\mathbf{x}\_i, \mathbf{y}\_i) \\
&=
\nabla\_{\theta} \frac{1}{N} \sum\_{i=1}^N \mathcal{S}\_{\theta}(\mathbf{x}\_i, \mathbf{y}\_i)
\end{align}
Note that $\frac{1}{N}$ is a constant with respect to the summation variable $i$ and so it can be taken out of the summation.
This shows that the gradient with respect to $\theta$ of the loss function $\mathcal{L}\_{\theta}$ that includes all training examples is equivalent, in expectation, to the gradient of $\mathcal{S}\_{\theta}$ (the loss function of one training example).
Questions
---------
1. How can we extend the previous proof to the case $1 < M \leq N$?
2. Which conditions need exactly to be satisfied so that we can exchange the gradient and the expectation operators? And are they satisfied in the case of typical loss functions, or sometimes they aren't (but in which cases)?
3. What is the relationship between the proof above and the linearity of the gradient?
* In the proof above, we are dealing with expectations and probabilities!
4. What would the gradient of a sum of errors represent? Can we still use it in place of the sum of gradients?
Upvotes: 4 <issue_comment>username_2: >
> do I have to:
>
>
> * forward propagate
> * calculate error
> * calculate all gradients
> * ...repeatedly over all samples in the batch, and then average all gradients and apply the weight change?
>
>
>
Yes, that is correct. You can save a bit of memory by summing gradients as you go. Once you have calculated the gradients for one example for the weights of one layer, then you do not re-use the individual gradients again, so you can just keep a sum. Alternatively for speed, you can calculate a minibatch of gradients in parallel, as each example is independent - which is a major part of why GPU acceleration is so effective in neural network training.
It is *critical* to getting correct results that you calculate the gradient of the loss function with respect to each example input/output pair separately. Once you have done that, you can average the gradients across a batch or mini-batch to estimate a true gradient for the dataset which can be used to take a gradient descent step.
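Concretely, those steps look like this (a NumPy sketch with a single linear layer standing in for the network; the data and learning rate are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=3)                      # toy one-layer "network"
X = rng.normal(size=(8, 3))                 # one minibatch of inputs
y = rng.normal(size=8)                      # targets
lr = 0.01

grads = []
for x_i, y_i in zip(X, y):
    pred = x_i @ W                          # forward propagate one example
    err = pred - y_i                        # error for this example
    grads.append(2 * err * x_i)             # gradient of (pred - y_i)^2 w.r.t. W
W_new = W - lr * np.mean(grads, axis=0)     # average the gradients, then one descent step
```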
>
> recently I have read somewhere that you basically only average the error of each example in the batch, and then calculate the gradients at the end of each batch.
>
>
>
Without a reference it is hard to tell whether this is an error in the "somewhere", or you have misunderstood, or there is a specific context.
If by "error" you mean the literal difference $\hat{y}\_i - y\_i$, where $\hat{y}\_i$ is your estimate for data input $i$ and $y\_i$ is the ground-truth training value, then that *is* the gradient for many loss functions and activation function pairs. For instance, it is the error gradient for mean square error and linear output. Some texts loosely refer to this as the "error", and talk about backpropagating "the error", but actually it is a gradient.
In addition, if the article was referring to linear regression, logistic regression or softmax regression, everything else is linear - in those specific models then you can just "average the error" and use that as the gradient.
In general, however, the statement is incorrect, because a neural network with one or more hidden layers has many non-linearities that will give different results when averaging first and then backpropagating versus backpropagating first and then averaging - that is $f'(\mu(Y))$ vs $\mu(f'(Y))$ where $f'$ is the derivative of the transfer function and $\mu$ is the mean for the batch (i.e. $\mu(Y) = \frac{1}{N}\sum\_{i=1}^{N} y\_i$ and $Y$ represents all the $y\_i$ in a given batch of size $N$)
When $y\_i = f(x\_i) = ax\_i +b$ i.e. the transfer function is linear, then $f'(\mu(Y)) = \mu(f'(Y)) = \frac{a}{N}\sum\_{i=1}^N x\_i$, but almost all useful loss functions and all transfer functions except some output layers in neural networks are non-linear. For those, $f'(\mu(Y)) \neq \mu(f'(Y))$.
A simple example would show this, if we start a small minibatch back propagation with the loss function (as opposed to its gradient).
Say you had the following data for regression:
```
x y
1 2
1 4
```
You want a model that can predict $y$ with least mean squared error when given an input $x = 1$. The best model should predict $3$ in that case.
If your model has converged, the average MSE of the dataset is $1$. Using that value as the gradient would make your model move *away* from convergence, and it would perform worse.
If you first take the gradients, then average those, you will calculate $0$. A simple gradient update step using that value will make no change, leaving the model in the optimal position.
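The same arithmetic as a short NumPy sketch of the example above:

```python
import numpy as np

y_true = np.array([2.0, 4.0])       # the two targets for input x = 1
y_pred = np.array([3.0, 3.0])       # a converged model predicts 3 for both

# Gradient of each squared error w.r.t. the prediction: d/dy_hat (y_hat - y)^2 = 2(y_hat - y)
per_example_grads = 2 * (y_pred - y_true)   # [ 2., -2.]
avg_of_grads = per_example_grads.mean()     # 0.0, so no update: the model stays optimal

avg_loss = np.mean((y_pred - y_true) ** 2)  # 1.0: using this as an "error" would move it away
```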
This issue occurs at every hidden layer in a neural network, so in general you cannot simply resolve the loss function gradient and start with the average error gradient at the output. You would still hit the inequality $f'(\mu(Y)) \neq \mu(f'(Y))$ at each nonlinearity.
Upvotes: 2 |
2020/04/19 | 1,220 | 4,252 | <issue_start>username_0: I understand that in DQNs, the loss is measured by taking the MSE of outputted Q-values and target Q-values.
What do the target Q-values represent? And how are they obtained/calculated by the DQN?<issue_comment>username_1: When training a Deep Q network with experience replay, you accumulate what are known as training experiences $e\_t = (s\_t, a\_t, r\_t, s\_{t+1})$. You then sample a batch of such experiences and for each sample you do the following.
1. Feed $s\_t$ into the network to get $Q(s,a;\theta)$.
2. Feed $s\_{t+1}$ into the network to get $Q(s', a'; \theta)$.
3. Compute $\max\_{a'} Q(s', a'; \theta)$ and set $r\_t + \gamma \max\_{a'} Q(s', a'; \theta)$ as the target of the network.
4. Train the network with $s\_t$ as input to update $\theta$. The output for the input $s\_t$ is $Q(s, a; \theta)$, and the gradient descent step minimises the squared distance between $Q(s, a; \theta)$ and $r\_t + \gamma \max\_{a'} Q(s', a'; \theta)$.
Upvotes: 2 <issue_comment>username_2: >
> What do the target Q-values represent?
>
>
>
In a DQN, which uses off-policy learning, they represent a *refined* estimate for the expected future reward from taking an action $a$ in state $s$, and from that point on following a target policy. The target policy in Q learning is based on always taking the maximising action in each state, according to current estimates of value.
The estimate is *refined* in that it is based on at least a little bit of data from experience - the immediate reward, and what transition happened next - but generally it is not going to be perfect.
>
> And how are they obtained/calculated by the DQN?
>
>
>
There are lots of ways to do this. The simplest approach in DQN is to use a single-step lookahead based on the experience replay table.
If your table contains the tuple *[state, action, immediate reward, next state, done?]* as $[s, a, r, s', d]$ then the formula for TD target, $g\_{t:t+1}$ is
$$r + \gamma \text{max}\_{a'}[Q\_{target}(s',a')], \qquad \text{when}\space d \space \text{is false}$$
$$r, \qquad \text{when}\space d \space \text{is true}$$
Typically $Q\_{target}$ is calculated using the "target network" which is a copy of the learning network for Q that is updated every N steps. This delayed update of the target predictions is done for numerical stability in DQN - conceptually it is an estimate for the same action values that you are learning.
This target value can change every time you use any specific memory from experience replay, so you have to perform the same calculations on each minibatch; you cannot store the target values.
Upvotes: 2 <issue_comment>username_3: The deep Q-learning (DQL) algorithm is really similar to the tabular Q-learning algorithm. I think that both algorithms are actually quite simple, at least, if you look at their pseudocode, which isn't longer than 10-20 lines.
Here's a screenshot of the pseudocode of DQL (from [the original paper](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf)) that highlights the Q target.
[](https://i.stack.imgur.com/Eyy49.png)
Here's the screenshot of Q-learning (from [Barto and Sutton's book](http://incompleteideas.net/book/RLbook2020.pdf)) that highlights the Q target.
[](https://i.stack.imgur.com/qgJdk.png)
In both cases, the $\color{red}{\text{target}}$ is a **reward plus a discounted maximum future Q value** (apart from the exception of final states, in the case of DQL, where the target is just the reward).
There are at least 3 differences between these two algorithms.
* DQL uses gradient descent, because the $Q$ function is represented by a neural network rather than a table (as in tabular Q-learning), and so you have an explicit loss function (e.g. MSE).
* DQL typically uses experience replay (but, in principle, you could also do this in Q-learning)
* DQL encodes the states (i.e. $\phi$ encodes the states).
Apart from that, the logic of both algorithms is more or less the same, so, if you know Q-learning (and you should know it before diving into DQL), then it shouldn't be a problem to learn DQL (if you also have a decent knowledge of deep learning).
Upvotes: 2 |
2020/04/20 | 707 | 2,641 | <issue_start>username_0: I'm seeking guidence here.
Can I use Multi Layers Perceptron (MLP), e.g regular flat neural networks, for image classification?
Will they perform better than Fisher Faces?
Is it difficult to do image classification with a MLP network?
It's on basic level like classifying objects and not detailed structures and patterns.
Important to me is that the MLP needs to be trained with pictures that can have noise in the background and different light and shadows.<issue_comment>username_1: Let me try to answer your question. Yes, you can use a multilayer perceptron for image classification. The multilayer perceptron is the most common ANN topology, where perceptrons are connected to form layers. An MLP has an input layer, at least one hidden layer, and an output layer.
The multilayer perceptron is a widely used method. For example, regarding research on classifying human skin based on its color, Khan et al. (Khan, Hanbury, Stöttinger, & Bais, 2012) compared nine classification methods, including BayesNet, J48, Multilayer Perceptron (MLP), Naive Bayes, Random Forest, and SVM. The results show that the MLP produced the highest performance after Random Forest and J48.
Upvotes: 1 <issue_comment>username_2: A Multilayer Perceptron (MLP) can be used for image classification, but it has many deficiencies compared to a Convolutional Neural Network (CNN). If you compare an **MLP** and **Fisher Faces**, though, the **MLP** is the better choice, because Fisher Faces becomes increasingly difficult as you add more individuals or classes. You can make a simple MLP model, because it has just 3 layers: an input layer, a hidden layer, and an output layer. Here is a source that you can try:
* <https://towardsdatascience.com/multi-layer-neural-networks-with-sigmoid-function-deep-learning-for-rookies-2-bf464f09eb7f>
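As an illustration (not taken from the linked article), the forward pass of such a 3-layer MLP on a flattened image might look like this. The sizes, weights, and names here are all made up for the sketch, and NumPy is assumed to be available:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a flattened 8x8 grayscale image, 16 hidden units, 3 classes.
n_in, n_hidden, n_out = 64, 16, 3

# Randomly initialised weights stand in for trained parameters.
W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_out)) * 0.1
b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(image):
    """Forward pass: flatten -> hidden (sigmoid) -> output (softmax)."""
    x = image.reshape(-1)  # the MLP discards the 2D structure here
    h = sigmoid(x @ W1 + b1)
    return softmax(h @ W2 + b2)

probs = predict(rng.random((8, 8)))
print(probs.shape, probs.sum())  # a (3,)-vector of probabilities summing to 1
```

Note the `reshape(-1)` step: flattening is exactly why an MLP is tied to pixel positions, which is the weakness described below.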
A model reflects its training data, so if your training data is composed of images with noise in the background and different light and shadows, I think it will have better performance. But remember, if you are using an **MLP** for image classification, it can only predict an object in one spot. For example:
"if you train a model with the object in the middle of the image, your model cannot predict it when the object is moved to a different spot".
Here is a PDF showing Fisher Faces performance:
* <https://core.ac.uk/download/pdf/84801048.pdf>
Upvotes: 3 [selected_answer]<issue_comment>username_3: It depends: if the faces are centered and have the same background, yes. You also need a lot of data.
If they are everyday images, then no; you will get very bad generalization.
Upvotes: 1 |