2021/03/25 | 725 | 2,933 | <issue_start>username_0: This is a 600\*800 image.
[](https://i.stack.imgur.com/gQ6LD.jpg)
**Which algorithm/model should I use to get an image like the one below, in which each key is detected and labeled by a rectangle?**
I guess this is some kind of a segmentation problem where U-net is the most popular algorithm, though I don't know how to apply it to this particular problem.
[](https://i.stack.imgur.com/XyqiJ.png)<issue_comment>username_1: The short answer is no, you shouldn't do that.
There is a "distribution shift" problem when the x-y relation on the validation set differs from that on the train set. Distribution shift deteriorates your model's performance, and you should try to avoid it. The reason it's bad: suppose you find a way to fix the model for the validation data. What about novel *test* data? Will it be like the train set? Like the validation set? You don't know, and so your model is, in fact, worthless.
What you can do:
1. Redefine the train/validation scheme. I would recommend [cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)) with at least three splits
2. As mentioned in a comment, your model seems to overfit; try to apply some regularization techniques, like dropout, batch norm, data augmentation, model simplification, and so on
3. Add more data if possible, so as to better cover all the possible cases.
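As a concrete illustration of point 1, here is a minimal numpy sketch of generating three cross-validation splits by hand (the fold count and data size are arbitrary; in practice you would typically use a library routine such as scikit-learn's `KFold`):

```python
import numpy as np

def kfold_indices(n_samples, n_folds, seed=0):
    """Shuffle indices and yield (train_idx, val_idx) pairs, one per fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, n_folds)
    for i in range(n_folds):
        val_idx = folds[i]
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train_idx, val_idx

# Every sample appears in exactly one validation fold.
for train_idx, val_idx in kfold_indices(100, 3):
    assert len(set(train_idx) & set(val_idx)) == 0
```

The model is then trained and evaluated once per fold, and the validation scores are averaged, which gives a far less noisy performance estimate than a single train/validation split.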
Upvotes: 3 [selected_answer]<issue_comment>username_2: I'm new to all this, so take what I say with a grain of salt and not as fact; I don't have any formal education or training. I believe that when you're referring to inversion predictions, you're not overthinking, you're underthinking. For anything to have value it must also have an inverse, or else there's no way to cognitively perceive it (contrast); otherwise you're looking at white paper against white paper. Now, since you're referring to data-set prediction, you need to define it linearly, as (x, y) or f(x), to scale and plot. Therefore x and y BOTH must retain inverse proportionate values (I made up that term) in order to exist, in the context of assigning value. So you need four quadrants of data for prediction, and now you're looking at quantum data processing in order to facilitate predictions in a non-linear context. Use a matrix; I believe Diroches Matrix should be applicable here. Also, remember that predictions are always changing and updating based on empirical and real-time data, so don't get your programming stuck in an ONLY-RIGHT-NOW mindset; matrices are designed to be constantly moving and evolving. Therefore your z-axis should always retain a state of variability, or it should always be Z; don't attach a value to it. Good luck. I'm jealous, I would love to ACTUALLY be working on something cool :/
Upvotes: 0 |
2021/03/26 | 449 | 2,078 | <issue_start>username_0: I will start working on a project where we want to optimize the production of a chemical unit through reinforcement learning approach. From the SME's, we already obtained a simulator code that can take some input and render us the output. A part of our output is our objective function that we want to maximize by tuning the input variables. From a reinforcement learning angle, the inputs will be the agent actions, while the state and reward can be obtained from the output. We are currently in the process of building a RL environment, the major part of which is the simulator code described above.
We were talking to an RL expert and she mentioned that one of the things we have conceptually wrong here is that our environment will not have the Markov property, in the sense that it is really a 'one-step process': the process does not continue from the previous state, and there is no continuity in state transitions. She is correct there. This made me think: how can we get around this? Can we perhaps append some part of the current state to the next state, etc.? More importantly, I have seen RL applied to optimal control in other non-Markovian examples as well, e.g. scheduling, TSP problems, process optimization. What is the explanation in such cases? Does one simply assume the process to be Markovian with an unknown transition function?<issue_comment>username_1: RL is currently being applied to environments which are definitely not Markovian; maybe they are weakly Markovian with decreasing dependency.
You need to provide details of your problem; if it is one step, then any optimization system can be used.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I view it as a generalization of the conditional Markovian case.
It does have the Markov property, in that the future state depends solely on the input at the given state, which is presumably sampled from a stochastic policy conditioned on the current state.
It seems to me to be a more general, simpler, and unconstrained case.
Upvotes: 0 |
2021/03/27 | 519 | 2,214 | <issue_start>username_0: After looking into transformers, BERT, and GPT-2, from what I understand, GPT-2 essentially uses only the decoder part of the original transformer architecture and uses masked self-attention that can only look at prior tokens.
Why does GPT-2 not require the encoder part of the original transformer architecture?
**GPT-2 architecture with only decoder layers**
[](https://i.stack.imgur.com/Kb8Gq.png)<issue_comment>username_1: **GPT-2** is a **close copy** of the basic transformer architecture.
GPT-2 does not require the encoder part of the original transformer architecture because it is **decoder-only**: *there are no encoder-decoder attention blocks*, so the decoder is equivalent to the encoder except for the *MASKING* in the multi-head attention block. The decoder is only allowed to glean information from the prior words in the sentence. It works just like a traditional language model: it takes **word vectors** as input and produces **estimates for the probability** of the next word as output, but it is **auto-regressive**, as each token in the sentence has the context of the previous words. Thus GPT-2 works one token at a time.
**BERT**, by contrast, is **not auto-regressive**. It uses the **entire surrounding context** all at once. In GPT-2, the context vector is **zero-initialized** for the first word embedding.
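The masking described above can be sketched in plain numpy (toy sizes and random scores, purely illustrative, not the actual GPT-2 implementation): attention scores for future positions are set to $-\infty$ before the softmax, so each token attends only to itself and earlier tokens.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 4                                  # sequence length
scores = rng.normal(size=(T, T))       # raw attention scores (queries x keys)

# Causal mask: position i may only attend to positions j <= i.
mask = np.triu(np.ones((T, T), dtype=bool), k=1)
scores[mask] = -np.inf

# Row-wise softmax; exp(-inf) = 0, so masked positions get zero weight.
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)

# Attention weights on future tokens are exactly zero.
assert np.all(weights[mask] == 0)
```

This is the only structural difference from an encoder block: remove the mask and every token would see the entire sequence, as in BERT.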
Upvotes: 4 <issue_comment>username_2: The cases when we use encoder-decoder architectures are typically when we are mapping one type of sequence to another type of sequence, e.g. translating French to English or in the case of a chatbot taking a dialogue context and producing a response. In these cases, there are qualitative differences between the inputs and outputs so that it makes sense to use different weights for them.
In the case of GPT-2, which is trained on continuous text such as Wikipedia articles, if we wanted to use an encoder-decoder architecture, we would have to make arbitrary cutoffs to determine which part will be dealt with by the encoder and which part by the decoder. In these cases therefore, it is more common to just use the decoder by itself.
Upvotes: 3 |
2021/03/28 | 742 | 2,894 | <issue_start>username_0: In this <https://pytorch.org/vision/stable/models.html> tutorial it clearly states:
>
> All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225].
>
>
>
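The preprocessing quoted above amounts to scaling pixel values to [0, 1] and then standardizing each channel; here is a plain numpy sketch (a random image stands in for real data; in practice this is usually done with `torchvision.transforms.Normalize`):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(3, 224, 224)).astype(np.float32)  # C x H x W, values 0-255

# ImageNet statistics from the torchvision docs, one value per RGB channel.
mean = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
std = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)

img = img / 255.0          # load into the range [0, 1]
img = (img - mean) / std   # per-channel standardization
```
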
Does that mean that, for example, if I want my model to have an input size of 128x128, or if I calculate a mean and std unique to my dataset, it is going to perform worse or won't work at all? I know that with tensorflow, if you are loading pretrained models, there is a specific argument input\_shape which you can set according to your needs, just like here:
```
tf.keras.applications.ResNet101(
include_top=True, weights='imagenet', input_tensor=None,
input_shape=None, pooling=None, classes=1000, **kwargs)
```
I know that I can pass any shape to those (pytorch) pretrained models and it works. What I want to understand is: can I change the input shape of those models without decreasing training performance? |
2021/03/29 | 1,641 | 6,271 | <issue_start>username_0: I am trying to make a big classification model using the coco2017 dataset. Here is my code:
```
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import IPython.display as display
from PIL import Image, ImageSequence
import os
import pathlib
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import cv2
import datetime
gpus = tf.config.list_physical_devices('GPU')
if gpus:
try:
# Currently, memory growth needs to be the same across GPUs
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Memory growth must be set before GPUs have been initialized
print(e)
epochs = 100
steps_per_epoch = 10
batch_size = 70
IMG_HEIGHT = 200
IMG_WIDTH = 200
train_dir = "Train"
test_dir = "Val"
train_image_generator = ImageDataGenerator(rescale=1. / 255)
test_image_generator = ImageDataGenerator(rescale=1. / 255)
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='sparse')
test_data_gen = test_image_generator.flow_from_directory(batch_size=batch_size,
directory=test_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='sparse')
model = Sequential([
Conv2D(265, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
keras.layers.Dense(256, activation="relu"),
keras.layers.Dense(128, activation="relu"),
keras.layers.Dense(80, activation="softmax")
])
optimizer = tf.keras.optimizers.Adam(0.001)
optimizer.learning_rate.assign(0.0001)
model.compile(optimizer='adam',
loss="sparse_categorical_crossentropy",
metrics=['accuracy'])
model.summary()
tf.keras.utils.plot_model(model, to_file="model.png", show_shapes=True, show_layer_names=True, rankdir='TB')
checkpoint_path = "training/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
save_weights_only=True,
verbose=1)
os.system("rm -r logs")
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
history = model.fit(train_data_gen,steps_per_epoch=steps_per_epoch,epochs=epochs,validation_data=test_data_gen,validation_steps=10,callbacks=[cp_callback, tensorboard_callback])
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.save('model.h5', include_optimizer=True)
test_loss, test_acc = model.evaluate(test_data_gen)
print("Tested Acc: ", test_acc)
print("Tested Acc: ", test_acc*100, "%")
```
I have tried different optimizers like `SGD`, `RMSProp`, and `ADAM`. I also tried changing the configuration of the hidden layers. I also tried to change the metrics from `accuracy` to `sparse_categorical_accuracy` with no improvement. I cannot go beyond 30% accuracy. My guess is that the `MaxPooling` is doing something because I just added it but don't know what it means. Can somebody explain what the `MaxPooling` Layer does and what is stopping my neural network from gaining accuracy?<issue_comment>username_1: You have two questions in one.
1. Is it maxpool that ruins the model?
I would say no; max pooling is a standard operation in convolutional networks. It down-samples the intermediate representation to reduce the necessary computation, improve regularization, and add translation invariance to some degree. Originally, averaging was used to downsample over a few neighboring pixels; for example, 2x2 pixels were averaged to one pixel. Then it was discovered that max pooling often performs better in practice, where you take the max value out of these 2x2 pixels. The way you applied it is OK in general.
2. Why the accuracy is not that great?
I see two issues here. The first is that COCO is not a classification dataset. It's an *object detection* dataset, and there are many objects in the same image, i.e. there may be an image with a person on a bicycle and a car behind them. Which class should the model assign: a person, a bicycle, or a car? The model can't know. To check whether this is the issue, try [top-5 accuracy](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/TopKCategoricalAccuracy) - it tells you whether the correct answer is among the network's top 5 guesses. I would also recommend looking at the images and trying to manually guess the class for a few dozen of them; that helps build intuition
The second issue is that your model is not that deep, and 30% accuracy is not bad: a random guess would be around 1%, so your model is doing about 30 times better. You could try models like [resnet](https://www.tensorflow.org/api_docs/python/tf/keras/applications/ResNet50) - it's still quite fast, but should do noticeably better.
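To make the max-pooling in point 1 concrete, here is a minimal numpy sketch of 2x2 max pooling with stride 2 (illustrative only, not the Keras implementation): each 2x2 block of the input is reduced to its maximum value, halving the spatial resolution.

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on an H x W array (H, W even)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 1, 2, 3],
              [5, 6, 4, 0]])
print(max_pool_2x2(x))
# [[4 8]
#  [9 4]]
```

In a convolutional network the same operation is applied per channel, which is why the `MaxPooling2D` layers in the model above halve the height and width but leave the channel count unchanged.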
Upvotes: 2 [selected_answer]<issue_comment>username_2: Accuracy is a good measure if our classes are evenly split, but it is very misleading if we have imbalanced classes. Always use caution with accuracy: you need to know the distribution of the classes to know how to interpret the value.
Upvotes: 0 |
2021/03/31 | 604 | 2,256 | <issue_start>username_0: I wonder if the following equation (you can find it in almost every ML book) refers to a general assumption that we make when using machine learning:
$$y = f(x)+\epsilon,$$
where $y$ is our output, $f$ is e.g. a neural network and $\epsilon$ is an independent noise term.
Does this mean that we assume the $y$'s contained in our training data set come from a noised version of our network output?<issue_comment>username_1: Not necessarily. The neural network (or whatever else you use) is a *model* of what you are trying to do, and usually models are not able to perfectly model reality, as it is too complex. A noise term is generally used to represent that, i.e. the imperfection of the model's relationship with the actual world.
Upvotes: 2 <issue_comment>username_2: That equation is just an assumption that we make about the relationship between a *response variable* (aka [*dependent variable*](https://en.wikipedia.org/wiki/Dependent_and_independent_variables#Statistics_synonyms)) $y$ and a *predictor* (aka [*independent variable*](https://en.wikipedia.org/wiki/Dependent_and_independent_variables#Statistics_synonyms)) $x$, i.e. the response variable (target) is an **unknown** function $f$ of the predictor $x$ plus some noise $\epsilon$ due to e.g. measurement errors (caused e.g. by damaged sensors). So, if you have a dataset $D = \{(y\_i, x\_i)\}\_{i=1}^N$, you assume that $y\_i = f(x\_i) + \epsilon, \forall i$. The goal (in supervised learning) is then to **estimate** $f$ with e.g. a neural network $\hat{f}\_\theta$, so the goal is to find a function $\hat{f}\_\theta$ such that $\hat{f}\_\theta(x\_i) = y\_i$, so, in practice, you often ignore $\epsilon$ because that is associated with irreducible errors.
You can find that equation on page 16 of the book [An Introduction to Statistical Learning](https://www.statlearning.com/). There you will also find more info about the goal of (statistical) supervised learning and why $\epsilon$ is irreducible.
So, the answer to your question is **no**, given that $f$ there is not the neural network but an unknown function. If your neural network $\hat{f}$ was equal to $f$, then, yes, but, of course, in practice, this will almost never be the case.
Upvotes: 1 |
2021/03/31 | 929 | 3,954 | <issue_start>username_0: I stumbled upon this passage when reading [this](http://neuralnetworksanddeeplearning.com/chap4.html) guide.
>
> Universality theorems are a commonplace in computer science, so much
> so that we sometimes forget how astonishing they are. But it's worth
> reminding ourselves: the ability to compute an arbitrary function is
> truly remarkable. Almost any process you can imagine can be thought of
> as function computation.\* Consider the problem of naming a piece of
> music based on a short sample of the piece. That can be thought of as
> computing a function. Or consider the problem of translating a Chinese
> text into English. Again, that can be thought of as computing a
> function. Or consider the problem of taking an mp4 movie file and
> generating a description of the plot of the movie, and a discussion of
> the quality of the acting. Again, that can be thought of as a kind of
> function computation.\* Universality means that, in principle, neural
> networks can do all these things and many more.
>
>
>
How is this true? How can any process be thought of as function computation? How would one compute function in order to translate Chinese text to English?<issue_comment>username_1: A function is simply a procedure that maps a particular input to a particular output. You put in $X$, and the function computes $Y$. Those $X$ and $Y$ can take many different forms. It could be mapping one number to another number (*convert miles to kilometres*), mapping sound to text (name that tune), mapping text to text (*translate languages*), mapping a video to text (review this movie), or mapping text to an image (*draw a picture of $X$*). Anytime you have a procedure that produces a fixed output based on a fixed input, it's a function.
Universality theorems guarantee that a neural network can produce an arbitrarily good approximation of any possible function. That doesn't mean it's easy, though - *finding* the right function that maps $X$ to $Y$ is the hard part.
Upvotes: 2 <issue_comment>username_2: To speak to your question about how **Chinese** to **English** translation can be a computation, it first requires a way to turn the base units of translation (*tokens*) into something computable. One basic way is to define the set of your vocabulary terms and create a gigantic matrix (*typically called an **embedding***) with each column representing a token as well as **one-hot encoded** matrices to perform a selection from the matrix.
Say, if my vocabulary is (`"apple"`, `"kiwi"`) and I want 2-dimensional vectors for the tokens (trivially small but manageable example), you'd need a `2x2` (two tokens in two dimensions) random matrix and two one-hot vectors:
* `"apple" = [1 0]`
* `"kiwi" = [0 1]`
Multiplying `[1 0]`, for instance, by your `2x2` matrix will "select" the vector in the first column, which represents the token "apple".
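That selection can be written out in numpy (the 2x2 embedding matrix is randomly initialized, as described below):

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(2, 2))   # embedding matrix: column 0 = "apple", column 1 = "kiwi"

apple = np.array([1, 0])      # one-hot vectors
kiwi = np.array([0, 1])

# Multiplying by a one-hot vector "selects" the corresponding column.
assert np.array_equal(E @ apple, E[:, 0])
assert np.array_equal(E @ kiwi, E[:, 1])
```

In practice libraries implement this as an indexed lookup rather than an actual matrix multiplication, but the two are mathematically equivalent.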
Once you have a **randomly initialized** embedding matrix, you have to train it to be useful. A relatively common method to do so is to make it the hidden layer in a neural network, mask tokens in the source data, and train the model with gradient descent to "guess" what token was removed. You can also just fully train the embedding as part of training a larger network, but that increases training time significantly.
If you have a lot of sentence pairs of English-Chinese translations, you could train an English/Chinese joint embedding and then further train a neural network that uses it to translate between sentence pairs (*but this is not a state of the art method*).
Every step of the process after text preparation here is a mathematical operation, so a forward pass of the trained model can be expressed as an equation (*though one you could never fully write out by hand*), and if we're careful to only choose differentiable operations, then the training of the model as well comes down to solving (*many, massive*) equations.
Upvotes: 0 |
2021/04/01 | 602 | 2,479 | <issue_start>username_0: So far I've developed simple RL algorithms, like Deep Q-Learning and Double Deep Q-Learning. Also, I read a bit about A3C and policy gradient but superficially.
If I remember correctly, all these algorithms focus on the value of the action and try to get the maximum one. **Is there an RL algorithm that also tries to predict what the next state will be, given a possible action that the agent would take?**
Then, in parallel to the constant training for getting the best reward, there will also be constant training to predict the next state as accurately as possible? And then have that prediction of the next state always be passed as an input into the NN that decides on the action to take. Seems like a useful piece of information.<issue_comment>username_1: Check out [Imagination-Augmented Agents](https://arxiv.org/abs/1707.06203) paper - seems like it does what you are talking about. The agent itself is the standard A3C that you are familiar with. The novelty is the "imagination" environment model which is trained to predict the behavior of the environment.
Upvotes: 2 <issue_comment>username_2: Yes, there are algorithms that try to predict the next state. Usually this will be a model based algorithm -- this is where the agent tries to make use of a model of the environment to help it learn. I'm not sure on the best resource to learn about this but my go-to recommendation is always the Sutton and Barto book.
[This paper](https://arxiv.org/abs/2006.00900) introduces PlanGAN; the idea of this model is to use a GAN to generate a trajectory. This will include not only predicting the next state but all future states in a trajectory.
[This paper](https://arxiv.org/abs/1507.00814) introduces a novelty function to incentivise the agent to visit unexplored states. The idea is that for unexplored states, a model that predicts the next state from the state-action tuple will have high error (measured by Euclidean distance from true next state) and they add this error to the original reward to make a modified reward.
[This paper](https://arxiv.org/abs/1912.01603) introduces Dreamer. This is where *all* learning is done in a latent space and so the transition dynamics of this latent space must be learned, another example of needing to learn the next state.
These are just some examples of papers that try to predict the next state, there are many more out there that I would recommend you look for.
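As a minimal sketch of the "learn a model of the next state" idea, suppose (purely for illustration) that the environment has linear dynamics $s' = As + Ba$; a transition model can then be fitted from logged $(s, a, s')$ tuples with least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])   # unknown environment dynamics
B_true = np.array([[0.5], [1.0]])

# Collect transitions (s, a, s') from random interaction with the environment.
S = rng.normal(size=(500, 2))                 # states
U = rng.normal(size=(500, 1))                 # actions
S_next = S @ A_true.T + U @ B_true.T          # next states

# Fit s' ~ [A B] [s; a] by least squares: this is the learned transition model.
X = np.hstack([S, U])
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)
A_hat, B_hat = W[:2].T, W[2:].T

assert np.allclose(A_hat, A_true, atol=1e-6)
assert np.allclose(B_hat, B_true, atol=1e-6)
```

Real model-based methods replace the least-squares fit with a neural network (and the dynamics are of course non-linear and often stochastic), but the principle of regressing next states on state-action pairs is the same.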
Upvotes: 3 [selected_answer] |
2021/04/06 | 1,058 | 3,414 | <issue_start>username_0: Training on a quadratic function
```
x = np.linspace(-10, 10, num=1000)
np.random.shuffle(x)
y = x**2
```
Will predict an expected quadratic curve between `-10 < x < 10`.
[](https://i.stack.imgur.com/vkkC4.png)
Unfortunately my model's predictions become linear outside of the trained dataset.
See `-100 < x < 100` below:
[](https://i.stack.imgur.com/mnlq5.png)
Here is how I define my model:
```
model = keras.Sequential([
layers.Dense(64, activation='relu'),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
model.compile(loss='mean_absolute_error', optimizer=tf.keras.optimizers.Adam(0.1))
history = model.fit(
x, y,
validation_split=0.2,
verbose=0, epochs=100)
```
[Here's a link to a google colab for more context.](https://colab.research.google.com/drive/1yPJY10Iu0XhWbJN6wOprtpYfNvauEiyl?usp=sharing)<issue_comment>username_1: It isn't too surprising to see behaviour like this, since you're using $\mathrm{ReLU}$ activation.
Here is a simple result which explains the phenomenon for a single-layer neural network. I don't have much time so I haven't checked whether this would extend reasonably to multiple layers; I believe it probably will.
**Proposition**. In a single-layer neural network with $n$ hidden neurons using $\mathrm{ReLU}$ activation, with one input and output node, the output is linear outside of the region $[A, B]$ for some $A < B \in \mathbb{R}$. In other words, if $x > B$, $f(x) = \alpha x + \beta$ for some constants $\alpha$ and $\beta$, and if $x < A$, $f(x) = \gamma x + \delta$ for some constants $\gamma$ and $\delta$.
*Proof.* I can write the neural network as a function $f \colon \mathbb R \to \mathbb R$, defined by
$$f(x) = \sum\_{i = 1}^n \left[\sigma\_i\max(0, w\_i x + b\_i)\right] + c.$$
Note that each neuron switches from being $0$ to a linear function, or vice versa, when $w\_i x + b\_i = 0$. Define $r\_i = -\frac{b\_i}{w\_i}$. Then, I can set $B = \max\_i r\_i$ and $A = \min\_i r\_i$. If $x > B$, each neuron will either be $0$ or linear, so $f$ is just a sum of linear functions, i.e. linear with constant gradient. The same applies if $x < A$.
Hence, $f$ is a linear function with constant gradient if $x < A$ or $x > B$.
$\square$
If the result isn't clear, here's an illustration of the idea:
[](https://i.stack.imgur.com/7BtzI.png)
This is a $3$-neuron network, and I've marked the points I denote $r\_i$ by the black arrows. Before the first arrow and after the last arrow, the function is just a line with constant gradient: that's what you're seeing, and what the proposition justifies.
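The proposition is also easy to check numerically; here is a sketch with a random single-hidden-layer ReLU network (sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
w, b = rng.normal(size=n), rng.normal(size=n)   # hidden weights and biases
sigma, c = rng.normal(size=n), rng.normal()     # output weights and bias

def f(x):
    """f(x) = sum_i sigma_i * max(0, w_i x + b_i) + c, as in the proof."""
    return np.sum(sigma * np.maximum(0.0, w * x + b)) + c

# All kinks r_i = -b_i / w_i lie inside [A, B].
r = -b / w
A, B = r.min(), r.max()

# Beyond B the function has a constant slope, i.e. it is exactly linear.
xs = np.linspace(B + 1, B + 100, 50)
ys = np.array([f(v) for v in xs])
slopes = np.diff(ys) / np.diff(xs)
assert np.allclose(slopes, slopes[0])
```

The same check passes for $x < A$, confirming that the network extrapolates linearly on both sides, which is precisely the behaviour in the second plot of the question.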
Upvotes: 3 [selected_answer]<issue_comment>username_2: Short answer: Yes.
Consider a non-linear regression on that dataset. Using a model of degree two, it would fit a quadratic exactly to your perfect data here. But I suppose you're asking about neural networks. You can have neural networks set up that are exactly equivalent to this kind of regression, so even with neural networks, yes you can get this non-linear extrapolation. Of course as you probably realise, you would have to know in advance what kind of behaviour you expect in this extrapolation before really trusting any extrapolated predictions.
Upvotes: 1 |
2021/04/07 | 782 | 2,731 | <issue_start>username_0: I have often encountered the term 'clock rate' when reading literature on recurrent neural networks (RNNs). For example, see [this](https://arxiv.org/abs/1402.3511) paper. However, I cannot find any explanations for what this means. What does 'clock rate' mean in this context? |
2021/04/09 | 1,850 | 7,250 | <issue_start>username_0: I've come across the concept of fitness landscape before and, in my understanding, a smooth fitness landscape is one where the algorithm can converge on the global optimum through incremental movements or iterations across the landscape.
My question is: **Does deep learning assume that the fitness landscape on which the gradient descent occurs is a smooth one? If so, is it a valid assumption?**
Most of the graphical representations I have seen of gradient descent show a smooth landscape.
This [Wikipedia page](https://en.m.wikipedia.org/wiki/Fitness_landscape) describes the fitness landscape.<issue_comment>username_1: I'm going to take the fitness landscape to be the [graph](https://en.wikipedia.org/wiki/Graph_of_a_function#Definition) of the loss function, $\mathcal{G} = \{\left(\theta, L(\theta)\right) : \theta \in \mathbb{R}^n\}$, where $\theta$ parameterises the network (i.e. it is the weights and biases) and $L$ is a given loss function; in other words, the surface you would get by plotting the loss function against its parameters.
We always assume the loss function is differentiable in order to do backpropagation, which means at the very least the loss function is smooth enough to be continuous, but in principle it may not be infinitely differentiable1.
You talk about using gradient descent to find the global minimiser. In general this is not possible: many functions have local minimisers which are not global minimisers. For an example, you could plot $y = x^2 \sin(1/x^2)$: of course the situation is similar, if harder to visualise, in higher dimensions. A certain class of functions known as **convex functions** satisfy the property that any local minimiser is a global minimiser. Unfortunately, the loss function of a neural network is rarely convex.
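As a tiny numerical illustration of why non-convexity matters, take the (arbitrary, toy) function $f(x) = x^4 - 3x^2 + x$: gradient descent converges to different minimisers depending on the starting point.

```python
import numpy as np

def grad(x):
    """Derivative of f(x) = x^4 - 3x^2 + x."""
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left, right = descend(-1.0), descend(1.0)
print(left, right)   # two different stationary points, one on each side

# Both runs reach a point with (near-)zero gradient, but not the same one.
assert abs(grad(left)) < 1e-6 and abs(grad(right)) < 1e-6
assert abs(left - right) > 1
```

Here the run started at $x = 1$ gets stuck in the local minimiser near $x \approx 1.13$, while the global minimiser is the one near $x \approx -1.30$; the loss surface of a neural network behaves the same way, only in millions of dimensions.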
For some interesting pictures, see [*Visualizing the Loss Landscape of Neural Nets*](https://papers.nips.cc/paper/2018/file/a41b3bb3e6b050b6c9067c67f663b915-Paper.pdf) by Li et al.
---
1 For a more detailed discussion on continuity and differentiability, any good text on mathematical analysis will do, for example Rudin's *Principles of Mathematical Analysis*. In general, any function $f$ that is differentiable on some interval is also continuous, but it need not be twice differentiable, i.e. $f''$ need not exist.
Upvotes: 1 <issue_comment>username_2: ### Main answer
To answer your question as directly as possible: No, deep learning does not make that "assumption".
But you're close. Just swap the word "assumption" with "imposition".
Deep learning **sets things up** such that the landscape is (mostly) smooth and always continuous\*, and therefore it is possible to do some sort of optimization via gradient descent.
\* quick footnotes on that bit:
* [Smoothness is a stronger condition than continuity](https://math.stackexchange.com/questions/472148/smooth-functions-or-continuous#:%7E:text=Smooth%20implies%20continuous%2C%20but%20not,continuous%20everywhere%2C%20yet%20nowhere%20differentiable.&text=A%20smooth%20function%20is%20a,desired%20order%20over%20some%20domain.), that's why I mention them both.
* My statement is not authoritative, so take it with a grain of salt, especially the "always" bit. Maybe someone will debunk this in the comments.
* The reason that I say "(mostly) smooth" is because I can think of a counter example to smoothness, which is the ReLU activation function. ReLU is still continuous though.
### Further elaboration
In deep learning we have linear layers which we know are differentiable. We also have non-linear activations, and a loss function which for the intents of this discussion can be bundled with non-linear activations. If you look at papers which focus specifically on crafting new types of non-linear activations and loss functions you will usually find a discussion section that goes something like "and we designed it this way such that it's differentiable. Here's how you differentiate it. Here are the properties of the derivative". For instance, just check out this [paper on ELU, a refinement on ReLU](https://arxiv.org/pdf/1511.07289.pdf).
We don't need to "assume" anything really, as we are the ones who designate the building blocks of the deep learning network. And the building blocks are not all that complicated in themselves, so we can **know** that they are differentiable (or piecewise differentiable like ReLU). And for rigor, I should also remind you that [the composition of multiple differentiable functions is also differentiable](https://math.stackexchange.com/questions/142084/prove-that-the-composition-of-differentiable-functions-is-differentiable).
So hopefully that helps you see what I mean when I say deep learning architects "impose" differentiability, rather than "assume" it. After all, we are the architects!
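A quick numeric sketch of the ReLU caveat mentioned above: the one-sided difference quotients at zero disagree, so ReLU is continuous but only piecewise differentiable.

```python
def relu(x):
    return max(0.0, x)

h = 1e-6
# One-sided difference quotients at x = 0 disagree, so ReLU has no
# derivative there.
right = (relu(0.0 + h) - relu(0.0)) / h   # -> 1.0
left = (relu(0.0) - relu(0.0 - h)) / h    # -> 0.0
print(left, right)

# Away from 0 the two quotients agree, e.g. at x = 2.0:
right2 = (relu(2.0 + h) - relu(2.0)) / h  # ~1.0
left2 = (relu(2.0) - relu(2.0 - h)) / h   # ~1.0
```

In practice, frameworks simply designate a subgradient (usually 0) at that single point, which is why this technically non-smooth function still works fine with backpropagation.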
Upvotes: 1 <issue_comment>username_3: **Does deep learning assume that the fitness landscape on which the gradient descent occurs is a smooth one?**
One can interpret this question from a formal-mathematical standpoint and from a more "intuitively-practical" standpoint.
From the formal point of view, [smoothness](https://en.wikipedia.org/wiki/Smoothness) is the requirement that the function is continuous with continuous first derivatives. And this assumption is quite often not true in lots of applications - mostly because of the widespread use of ReLU activation function - [it is not differentiable at zero](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Potential_problems).
From the practical point of view, though, by "smoothness" we mean that the function's "landscape" does not have a lot of sharp jumps and edges like that:
[](https://i.stack.imgur.com/E4i3F.png)
Practically, there's not much difference between having a discontinuous derivative and having derivatives making very sharp jumps.
And again, the answer is no - the loss function landscape is extremely spiky with lots of sharp edges - the picture above is an example of an actual loss function landscape.
**But... why the gradient descent works then?**
As far as I know, this is a subject of an ongoing discussion in the community. There are different takes and some conflicting viewpoints that are still subject of a debate.
My opinion is that, fundamentally, the idea that we need it to converge to the global optimum is a flawed one. Neural networks have been shown to have enough capacity to [completely remember the training dataset](https://arxiv.org/pdf/1611.03530.pdf). A neural network that has completely remembered the training data has reached the global minimum (given only the training data). We are not interested in such overtrained models - we want models that generalize well.
As far as I know, there are no conclusive results on which properties of the minimum are linked to the ability to generalize. People argued that these should be the ["flat"](https://arxiv.org/pdf/1609.04836.pdf) minima, but then this [was refuted](https://arxiv.org/pdf/1703.04933.pdf). After that, a ["wide optimum"](https://arxiv.org/pdf/1803.05407.pdf) term was introduced and gave rise to the interesting technique of Stochastic Weight Averaging.
Upvotes: -1 |
2021/04/10 | 1,525 | 5,739 | <issue_start>username_0: I'm working on a neural network that plays some board games like reversi or tic-tac-toe (zero-sum games, two players). I'm trying to have one network topology for all the games - I specifically don't want to set any limit for the number of available actions, thus I'm using only a state value network.
I use a convolutional network - some residual blocks inspired by AlphaZero, then global pooling and a linear layer. The network outputs one value between 0 and 1 for a given game state - its value.
The agent, for each possible action, chooses the one that results in the state with the highest value; it uses an epsilon-greedy policy.
After each game I record the states and the results and create a replay memory. Then, in order to train the network, I sample from the replay memory and update the network (if the player that made a move that resulted in the current state won the game, the state's target value is 1, otherwise it's 0).
The problem is that after some training, the model plays quite well as one of the players, but loses as the other one (it plays worse than the random agent). At first, I thought it was a bug in the training code, but after further investigation it seems very unlikely. It successfully trains to play vs a random agent as both players, the problem arises when I'm using only self play.
I think I've found some solution to that - initially I train the model against a random player (half of the games as the first player, half as the second one), then when the model has some idea what moves are better or worse, it starts training against itself. I achieved pretty good results with that approach - in tic-tac-toe, after 10k games, I have 98.5% win rate against the random player as the starting player (around 1% draws), 95% as the second one (again around 3% draws) - it finds a nearly optimal strategy. It seems to work also in reversi and breakthrough (80%+ wins against random player after the 10k games as both players). It's not perfect, but it's also not that bad, especially with only 10k games played.
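The curriculum described above could be sketched roughly like this (every name here is hypothetical, just to pin down the schedule — not from any library):

```python
def pick_opponent(game_idx, warmup_games=10_000):
    """Hypothetical helper for the curriculum described above: bootstrap
    against a random opponent first, then switch to pure self-play."""
    if game_idx < warmup_games:
        # Alternate sides so the network sees both roles vs. the random agent.
        side = "first" if game_idx % 2 == 0 else "second"
        return ("random", side)
    return ("self", "both")

print(pick_opponent(0))       # ('random', 'first')
print(pick_opponent(1))       # ('random', 'second')
print(pick_opponent(20_000))  # ('self', 'both')
```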
I believe that, when training with self play from the beginning, one of the players gains a significant advantage and repeats the strategy in every game, while the other one struggles with finding a counter. In the end, the states corresponding to the losing player are usually set to 0, thus the model learns that whenever there is the losing player's turn it should return a 0. I'm not sure how to deal with that issue, are there any specific approaches? I also tried to set the epsilon (in eps-greedy) initially to some large value like 0.5 (50% chance for a random move) and gradually decrease it during the training, but it doesn't really help.<issue_comment>username_1: The [AlphaZero](https://www.nature.com/articles/nature24270.epdf?author_access_token=<KEY>) paper mentions an "evaluation" step that seems to deal with the the problem similar to yours:
>
> ... we evaluate each new neural network checkpoint against the current best network $f\_{\theta\_\*}$ before using it for data generation ... Each evaluation consists of 400 games ... If the new player wins by a margin of > 55% (to avoid selecting on noise alone) then it becomes the best player $\alpha\_{\theta\_\*}$ , and is subsequently used for self-play generation, and also becomes the baseline for subsequent comparisons
>
>
>
In the [AlphaStar](https://www.nature.com/articles/s41586-019-1724-z.epdf?author_access_token=<KEY>-wO3GEoAMF9bAOt7mJ0RWQnRVMbyfgH9A%3D%3D) paper they used a whole league of agents that constantly played against each other.
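A minimal sketch of that gating rule (the 400-game / 55% numbers come from the quote above; the function name is made up):

```python
def accept_new_checkpoint(wins, games=400, threshold=0.55):
    """AlphaGo-Zero-style gating: promote the candidate network only if it
    beats the current best by a clear margin, to avoid selecting on noise."""
    return wins / games > threshold

# Hypothetical evaluation outcomes:
print(accept_new_checkpoint(230))  # 230/400 = 0.575  -> True, promote
print(accept_new_checkpoint(215))  # 215/400 = 0.5375 -> False, keep old best
```

The point for the question above is that the self-play data generator is always the *gated* best network, which stops one lucky side-specific strategy from immediately taking over data generation.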
Upvotes: 3 [selected_answer]<issue_comment>username_2: When in an environment with competing agents, from the perspective of each agent, the environment becomes non-Markovian. That occurs because each agent is constantly adapting its own strategy to the others' actions, so a transition that occurred for a pair (s,a) before, resulting in a positive reward, might result in zero or negative reward in future iterations of the game.
I didn't see it mentioned, but I imagine that you are using some DQN variation to train the network, since you use a replay buffer. To use this framework, you assume that the environment, from the perspective of the agent, follows an MDP. But, as I argued above, some tuples from the replay buffer might not represent valid data for training, so the corresponding network that is trained with them becomes unstable.
A solution might be to use the idea of centralized training with decentralized execution, in conjunction with some policy gradient (PG) algorithm, like REINFORCE or Actor-Critic. Since PG algorithms are on-policy, the data used to train the network is generated by the current policy, so you don't have the replay buffer issue. On the other hand, being on-policy, they are sample-inefficient. Centralized training might help to increase the sample efficiency (it is in fact a good solution for partially observable environments, but from what I understand that is not the case with your game). An additional solution to the sample inefficiency is to use off-policy PG, using, for example, past policies, with the respective experience, in an importance sampling framework.
Some related references:
Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments: <https://arxiv.org/abs/1706.02275>
Off-Policy Policy Gradient: <https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html#off-policy-policy-gradient>
Upvotes: 2 |
2021/04/10 | 2,113 | 8,552 | <issue_start>username_0: Adding [BatchNorm](https://en.wikipedia.org/wiki/Batch_normalization) layers improves training time and makes the whole deep model more stable. That's an experimental fact that is widely used in machine learning practice.
My question is - why does it work?
The [original (2015) paper](https://arxiv.org/abs/1502.03167) motivated the introduction of the layers by stating that these layers help fixing "*internal covariate shift*". The rough idea is that large shifts in the distributions of inputs of inner layers makes training less stable, leading to a decrease in the learning rate and slowing down of the training. Batch normalization mitigates this problem by standardizing the inputs of inner layers.
This explanation was harshly criticized [by the next (2018) paper](https://arxiv.org/abs/1805.11604) -- quoting the abstract:
>
> ... distributional stability of layer inputs has little to do with the success of BatchNorm
>
>
>
They demonstrate that BatchNorm only slightly affects the inner layer inputs distributions. More than that -- they tried to *inject* some non-zero mean/variance noise into the distributions. And they still got almost the same performance.
Their conclusion was that the real reason BatchNorm works was that...
>
> Instead BatchNorm makes the optimization landscape significantly smoother.
>
>
>
Which, to my taste, is slightly tautological to saying that it improves stability.
I've found two more papers trying to tackle the question: In [this paper](https://arxiv.org/abs/2002.10444) the "key benefit" is claimed to be the fact that Batch Normalization biases residual blocks towards the identity function. And in [this paper](https://arxiv.org/abs/2003.01652) that it "avoids rank collapse".
So, is there any bottom line? Why does BatchNorm work?<issue_comment>username_1: To some extent, it gets rid of low-intensity numerical noise. The conditioning properties of the optimization problem are always an issue; I suspect BatchNorm alleviates this instability.
Upvotes: -1 <issue_comment>username_2: It is a question with no simple answer.
On one hand, BatchNormalization is unloved by some, who argue that it doesn't change the accuracy of neural networks or that it biases them.
On the other hand, it is highly recommended by others because it leads to better-trained models with a larger scope of predictions and fewer chances of overflow.
All I know for sure is that BN is really efficient for image classification. In fact, as image categorization and classification have soared in recent years and BN is a good practice in this field, it has spread to almost all DNNs.
Not only is BN not always used for the right purpose, but it is often used without taking into account several elements such as:
* The layers between which apply BN
* The initializer algorithms
* The activation algorithms
* etc
For more computer science literature "against" BN, I will let you look at the [<NAME> et al paper](https://arxiv.org/abs/1901.09321), who trained a DNN without BN and got good results.
Some people use the gradient clipping technique (R. Pascanu) instead of BN, in particular for RNNs.
I hope it will give you some answers !
Upvotes: -1 <issue_comment>username_3: >
> When we are training a deep neural network, the gradient tells us how to update each parameter, under the assumption that the other layers do not change. In practice, we update all the layers simultaneously. When we update, unexpected results can happen, because many functions composed together are changed simultaneously, using updates that were computed under the assumption that the other functions remain constant. This makes it very hard to choose an appropriate learning rate, because the effect of an update to the parameters of one layer depends strongly on all the other layers.
>
>
>
***How does Batch Normalisation Help :***
Batch Normalisation is a layer which is added to any input or hidden layer in the neural network. Suppose $H$ is the minibatch of activations of the layer to normalize.
The formula for normalizing $H$ is:
$H' = \frac{H - \mu}{\sigma}$
$\mu$ : vector containing the mean of each unit
$\sigma$ : vector containing the standard deviation of each unit
At training time the mean and standard deviation are calculated, and we backpropagate through the operations that compute them and normalize $H$. This means that the gradient will never propose an update that acts simply to increase the standard deviation or mean of $h\_i$; the normalization operation removes the effect of such an action and zeroes out its component in the gradient. Batch Normalisation thus ensures no (or only a slight) covariate shift in the input to the layer that follows it, improving learning time as shown in the original paper mentioned in the question.
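A minimal numpy sketch of just this normalization step (the real layer additionally learns a per-unit scale $\gamma$ and shift $\beta$, and keeps running statistics for inference — all omitted here):

```python
import numpy as np

def batch_norm(H, eps=1e-5):
    # Normalise each unit (column) of the minibatch H to zero mean and unit
    # variance; eps guards against division by zero.
    mean = H.mean(axis=0)
    std = H.std(axis=0)
    return (H - mean) / (std + eps)

rng = np.random.default_rng(0)
H = rng.normal(loc=5.0, scale=3.0, size=(64, 4))  # minibatch of activations
H_norm = batch_norm(H)
print(H_norm.mean(axis=0))  # ~0 per unit
print(H_norm.std(axis=0))   # ~1 per unit
```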
For more details : <https://www.deeplearningbook.org/contents/optimization.html>
Upvotes: 0 <issue_comment>username_4: This got me thinking about my understanding of batch normalization. I thought I understand it until I read this. Then, I refer to the Coursera deep learning specialization by Andrew Ng.
Prof. <NAME> explained it this way.
---
One reason why batch norm works is that it normalizes not only the input features but also further values in the hidden units, so that they take on a similar range of values, which can speed up learning.
The second reason why **batch norm** works is that it makes the weights of later or deeper layers of the network (say, the weights of layer 10) more robust to changes to the weights in earlier layers of the neural network (e.g. in layer one). The hidden unit values are changing all the time, and so they suffer from the **problem of covariate shift**. **What batch norm does** is reduce the amount that the distribution of these hidden unit values shifts around. What batch norm ensures is that, no matter how the parameters of the neural network update, the mean and variance of these hidden unit values will at least stay the same, causing the inputs to the later layers to become more stable, so that the later layers of the neural network have firmer ground to stand on.
And even though the input distribution changes a bit, it changes less; even as the earlier layers keep learning, the amount that the later layers are forced to adapt as the earlier layers change is reduced, or, if you will, it weakens the coupling between what the early layers' parameters have to do and what the later layers' parameters have to do. **And so it allows each layer of the network to learn by itself, a little bit more independently of other layers, and this has the effect of speeding up learning in the whole network.** The takeaway is that batch norm means that, especially from the perspective of one of the later layers of the neural network, the earlier layers don't get to shift around as much, because they're constrained to have the same mean and variance. And so this makes the job of learning in the later layers easier.
It turns out batch norm has a second effect: it has a slight regularization effect. One non-intuitive thing about batch norm is that, for each mini-batch, the mean and variance are computed on just that mini-batch, as opposed to the entire data set, so that mean and variance have a little bit of noise in them, because they're computed on just your mini-batch of, say, 64, 128, or maybe 256 or more training examples. Batch norm works with mini-batches.
Upvotes: 0 <issue_comment>username_5: I believe anything in machine learning that works, works because it flattens and smoothens the loss landscape.
Batch and layer normalization would help ensure that the feature vectors (i.e. channels) are embedded around the unit sphere [Batch/Instance norm translates to origin. Layer norm scales radially to unit sphere](https://ai.stackexchange.com/questions/35072/batch-instance-norm-translates-to-origin-layer-norm-scales-radially-to-unit-sph). Viewing neural networks as [transformations](https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/), this would make the loss landscape smoother since the transformations the neural net needs to find would be more "regular".
I would recommend this [video](https://www.youtube.com/watch?v=78vq6kgsTa8) to learn about loss landscapes.
From [Visualizing the Loss Landscape of Neural Nets. NeuRIPS 2018:](https://arxiv.org/pdf/1712.09913.pdf)
Upvotes: 2 |
2021/04/11 | 608 | 2,587 | <issue_start>username_0: I trained different classification models using Keras with different numbers of hidden layers and the same number of neurons in each layer. What I found was the accuracy of the models decreased as the number of hidden layers increased However, the decrease was more significant in larger numbers of hidden layers. The accuracies refer to the test data and were obtained using k-fold=5. Also, no regularization was used. The following graph shows the accuracies of different models where the number of hidden layers changed while the rest of the parameters stayed the same (each model has 64 neurons in each hidden layer):
[](https://i.stack.imgur.com/JdZc9.png)
My question is why is the drop in accuracy between 8 hidden layers and 16 hidden layers much greater than the drop between 1 hidden layer and 8 hidden layers, even though the difference in the number of hidden layers is the same (8).<issue_comment>username_1: In your case the most probable explanation would be overfitting. A model with too many hidden layers has lots of parameters. By means of all these parameters, the model is memorizing stuff from the training data itself instead of generalizing by learning the useful patterns.
As a rule of thumb, if you increase the number of hidden layers more and more, at some point the model will perform poorly. (I am assuming there is non-linearity in between. In case there is no non-linearity, it doesn't matter how much you stack; it would give the same result, because it just boils down to one single layer.)
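A quick numeric check of that last point — stacked linear layers with no non-linearity in between collapse, by associativity of matrix multiplication, to a single linear layer:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2, W3 = (rng.normal(size=(8, 8)) for _ in range(3))
x = rng.normal(size=8)

# Three stacked linear layers with no non-linearity in between...
deep = W3 @ (W2 @ (W1 @ x))

# ...are exactly one linear layer with the combined weight matrix.
W_combined = W3 @ W2 @ W1
shallow = W_combined @ x

print(np.allclose(deep, shallow))  # True
```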
As an experiment, you can try to add regularization, and you will see the model won't perform that badly, because now the model is being punished for being too confident about the things it is memorizing. As a result, it won't overfit to the training data.
Upvotes: 1 <issue_comment>username_2: In general, yes.
Stacking more layers and adding non-linearities will form a better function approximation (neural nets **are** basically function approximators), and when trained with the current regularization for each layer (such as L2 or L1) will cause your model to learn a better mapping, and hence generalize better.
If you don't regularize, it will overfit.
They are overparameterized, but why they don't overfit more (or in other words, why they generalize so well to unseen data) with increasing number of parameters is an effect that is even yet to be understood by the ML theory community [1]
[1] - <https://arxiv.org/abs/1806.11379>
Upvotes: 0 |
2021/04/11 | 767 | 3,027 | <issue_start>username_0: I do not understand the link of importance sampling to Monte Carlo off-policy learning.
We estimate a value using sampling on whole episodes, and we take these values to construct the target policy.
So, it is possible that in the target policy, we could have state values (or state action values) coming from different trajectories.
If the above is true, and if the values depend on the subsequent actions (the behavior policy), there is something wrong there, or else, better, something I do not understand.
Linking this question with importance sampling, do we use this $\rho$ value (the importance sampling ratio) to correct this inconsistency?
Any clarification is welcome.<issue_comment>username_1: >
> We estimate a value using sampling on whole episodes, and we take these values to construct the target policy.
>
>
>
The crucial bit that you are missing is that there is no single value of $V(s)$ (or $Q(s,a)$) of a state (or a state action pair). These value functions are always defined with respect to some policy $\pi(a|s)$ and is given the notation of $V^{\pi}(s)$ (or $Q^{\pi}(s,a)$).
The off-policy learning problems arise when you have two policies: the generation policy $\mu(a|s)$ and the target policy $\pi(a|s)$. Your MC sampling data came from an agent following $\mu$, while you want to improve your target policy $\pi$. It is pretty straightforward from here that you'd need to weight your calculations with factors like $\frac{\pi(a\_i|s\_i)}{\mu(a\_i|s\_i)}$ - that's what importance sampling is.
Upvotes: 2 <issue_comment>username_2: Recall that the definition of a value function is
$$v\_\pi(s) = \mathbb{E}\left[G\_t | S\_t = s\right]\;.$$
That is, the expected future returns given from state $s$ at time $t$ when we follow our policy $\pi$ -- i.e. our trajectory is generated according to $\pi$.
Using Monte Carlo methods, we will typically estimate our value function by looking at the empirical mean of the returns we see throughout many training episodes, i.e. we will generate many episodes, keep track of all the returns we see from state $s$ onwards across all of our episodes (this may be the first-visit method or the every-visit method) and use these to approximate the expectation that is our value function.
The key here is that to approximate the value function in this way, then the episodes must be generated according to our policy $\pi$. If we choose the actions in an episode according to some other policy $\beta$ then we cannot use these episodes to approximate the expectation directly. As an example, this would be like trying to approximate the mean of a Normal(0, 1) distribution with data drawn from a Normal(10, 1) distribution.
To account for the fact that the actions came from a different distribution, we have to reweight the returns according to an importance sampling ratio. To see why we need importance sampling, [see this question/answer.](https://ai.stackexchange.com/questions/25553/why-do-we-need-importance-sampling/25559#25559)
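A toy numeric sketch of this reweighting idea (using a shift of 1 rather than the 10 in the example above, since with a shift of 10 the importance weights become astronomically ill-behaved):

```python
import math
import random

random.seed(0)

# Naively averaging draws from N(1, 1) estimates the wrong mean...
samples = [random.gauss(1.0, 1.0) for _ in range(200_000)]
naive = sum(samples) / len(samples)  # ~1.0, not 0.0

# ...but reweighting each draw by p_target(x) / p_proposal(x) corrects it.
# For an N(0,1) target and N(1,1) proposal the pdf ratio simplifies
# to exp(0.5 - x).
weights = [math.exp(0.5 - x) for x in samples]
corrected = sum(w * x for w, x in zip(weights, samples)) / sum(weights)

print(naive)      # ~1.0
print(corrected)  # ~0.0
```

In off-policy MC the same mechanism applies, except the weight on a return is the product of per-step policy ratios along the trajectory.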
Upvotes: 1 [selected_answer] |
2021/04/14 | 686 | 2,441 | <issue_start>username_0: If $x \sim \mathcal{N}(\mu,\,\sigma^{2})$, then it is a continuous variable, and therefore $P(x) = 0$ for any x. One can only consider things like $P(x to get a probability greater than 0.
So what is the meaning of probabilities such as $P(x|z)$ in variational autoencoders? I can't think of $P(x|z)$ as meaning $P(x < X \mid z)$, if $x$ is an image, since $x < X$ doesn't really make sense (all images smaller than a given one?)<issue_comment>username_1: In VAE's, we want to model the distribution of images $x$ with some latent variable $z$. Because $x$ is a random variable, you can think of $P(x|z)$ as the distribution of images $x$ conditioned on the random variable $z$. So given a particular value of $z$, we can generate a distribution over images $x$.
VAE's try to model images, which are themselves high dimensional 2D data. Given a 28x28 image, we already have 784 random variables to model. We cannot visualise the distribution over all images $x$. Your notation $P(x < X|z)$ makes sense in a 1D case with a scalar value. However, when considering 2D and higher, we have a problem with how we define what is less than. If $x = (y\_1,y\_2)$ and $X = (y\_3,y\_4)$, then is $x < X$ if both $y\_1 < y\_3$ and $y\_2 < y\_4$? (I.e. do all dimensions have to be less than, or does only one dimension need to be less than?) When talking about high dimensional space, therefore, it is not very useful to denote $P(x < X|z)$, because of the difficulty in interpreting the results.
Upvotes: 0 <issue_comment>username_2: Whilst you're right that for any continuous distribution $P(X = x) = 0 \; \forall x \in \mathcal{X}$, where $\mathcal{X}$ is the support of the distribution, they are not referring to probabilities here; rather, they are referring to [density functions](https://en.wikipedia.org/wiki/Probability_density_function) (though this should really be denoted with a lower case $p$ to avoid confusion such as this).
$p(x|z)$ is a [conditional distribution](https://en.wikipedia.org/wiki/Conditional_probability_distribution), which is also allowed in the continuous case -- you can also 'mix and match', i.e. $x$ could be continuous and $z$ could be discrete, and vice-versa.
In the paper, all the authors mean when they write $p(x|z)$ is the density of $x$ conditioned on $z$; in VAE's with an image application, this is the conditional density of the image $x$ given your latent vector $z$.
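A small numeric aside illustrating the distinction (hand-rolled Gaussian pdf): a density can exceed 1, so it plainly isn't a probability — probabilities only arise by integrating it.

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) evaluated at x.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# For a narrow Gaussian the density at the mode exceeds 1...
print(normal_pdf(0.0, 0.0, 0.1))  # ~3.989
# ...while P(X = x) is still 0 for every single point of a continuous
# distribution; only integrals of the density are probabilities.
```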
Upvotes: 3 [selected_answer] |
2021/04/15 | 907 | 2,847 | <issue_start>username_0: In VAEs, we try to maximize the ELBO = $\mathbb{E}\_q [\log\ p(x|z)] + D\_{KL}(q(z \mid x), p(z))$, but I see that many implement the first term as the MSE of the image and its reconstruction. Here's a paper (section 5) that seems to do that: [Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse](https://arxiv.org/pdf/1911.02469.pdf) (2019) by <NAME> et al. Is this mathematically sound?<issue_comment>username_1: On page 5 of [the VAE paper](https://arxiv.org/pdf/1312.6114.pdf#page=5), it's clearly stated
>
> We let $p\_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$ be a multivariate Gaussian (in case of real-valued data) or Bernoulli (in case of binary data) whose distribution parameters are computed from $\mathbf{z}$ with a MLP (a fully-connected neural network with a single hidden layer, see appendix $\mathrm{C}$ ).
>
>
> ...
>
>
> As explained above and in appendix $\mathrm{C}$, the decoding term $\log p\_{\boldsymbol{\theta}}\left(\mathbf{x}^{(i)} \mid \mathbf{z}^{(i, l)}\right)$ is a Bernoulli or Gaussian MLP, **depending on the type of data we are modelling**.
>
>
>
So, if you are trying to predict real numbers (in the case of images, these can be the RGB values in the range $[0, 1]$), then you can assume $p\_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$ is a Gaussian.
It turns out that maximising the Gaussian likelihood is equivalent to minimising the MSE between the prediction of the decoder and the real image. You can easily show this: just replace $p\_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$ with [the Gaussian pdf](https://en.wikipedia.org/wiki/Normal_distribution), then maximise that wrt the parameters, and you should end up with something that resembles the MSE. <NAME> shows this in [this video lesson](https://www.youtube.com/watch?v=vEPQNwxd1Y4). See also [this related answer](https://ai.stackexchange.com/a/17671/2444).
So, **yes, minimizing the MSE is theoretically founded**, provided that you're trying to predict some real number.
When the binary cross-entropy (instead of the MSE) is used (e.g. [here](https://github.com/pytorch/examples/blob/main/vae/main.py)), the assumption is that you're maximizing a Bernoulli likelihood (instead of a Gaussian) - this can also be easily shown.
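A quick numeric check of this equivalence (hand-rolled Gaussian log-likelihood with identity covariance; the "image" data here is random and purely illustrative): the negative log-likelihood equals half the sum of squared errors plus a constant, so minimising one minimises the other.

```python
import math
import random

random.seed(0)
x = [random.random() for _ in range(10)]      # "pixels" of the target image
x_hat = [random.random() for _ in range(10)]  # decoder output f(z)

# Negative log-likelihood of x under N(x_hat, I), summed over dimensions.
nll = sum(
    0.5 * (xi - xh) ** 2 + 0.5 * math.log(2 * math.pi)
    for xi, xh in zip(x, x_hat)
)

sse = sum((xi - xh) ** 2 for xi, xh in zip(x, x_hat))

# NLL = 0.5 * SSE + constant, so minimising it is minimising the (scaled) MSE.
const = 0.5 * len(x) * math.log(2 * math.pi)
print(abs(nll - (0.5 * sse + const)))  # ~0.0
```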
Upvotes: 2 <issue_comment>username_2: If $p(x|z) \sim \mathcal{N}(f(z), I)$, then
\begin{align}
\log\ p(x|z)
&\sim \log\ \exp(-(x-f(z))^2) \\
&\sim -(x-f(z))^2 \\
&= -(x-\hat{x})^2,
\end{align}
where $\hat{x}$, the reconstructed image, is just the distribution mean $f(z)$.
It also makes sense to use the distribution mean when using the decoder (vs. just when training), as it is the one with the highest pdf value. So, the decoder produces a distribution from which we take the mean as our result.
Upvotes: 3 [selected_answer] |
2021/04/15 | 862 | 2,902 | <issue_start>username_0: I know that when using *Sigmoid*, you only need 1 output neuron (binary classification) and for *Softmax* - it's 2 neurons (multiclass classification). But for performance improvement (if there is one), is there any difference which of these 2 approaches works better, or when would you recommend using one over the other. Or maybe there are certain situations when using one of these is better than the other.
Any comments or shared experience will be appreciated.<issue_comment>username_1: On page 5 of [the VAE paper](https://arxiv.org/pdf/1312.6114.pdf#page=5), it's clearly stated
>
> We let $p\_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$ be a multivariate Gaussian (in case of real-valued data) or Bernoulli (in case of binary data) whose distribution parameters are computed from $\mathbf{z}$ with a MLP (a fully-connected neural network with a single hidden layer, see appendix $\mathrm{C}$ ).
>
>
> ...
>
>
> As explained above and in appendix $\mathrm{C}$, the decoding term $\log p\_{\boldsymbol{\theta}}\left(\mathbf{x}^{(i)} \mid \mathbf{z}^{(i, l)}\right)$ is a Bernoulli or Gaussian MLP, **depending on the type of data we are modelling**.
>
>
>
So, if you are trying to predict real numbers (in the case of images, these can be the RGB values in the range $[0, 1]$), then you can assume $p\_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$ is a Gaussian.
It turns out that maximising the Gaussian likelihood is equivalent to minimising the MSE between the prediction of the decoder and the real image. You can easily show this: just replace $p\_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$ with [the Gaussian pdf](https://en.wikipedia.org/wiki/Normal_distribution), then maximise that wrt the parameters, and you should end up with something that resembles the MSE. G. Hinton shows this in [this video lesson](https://www.youtube.com/watch?v=vEPQNwxd1Y4). See also [this related answer](https://ai.stackexchange.com/a/17671/2444).
So, **yes, minimizing the MSE is theoretically founded**, provided that you're trying to predict some real number.
When the binary cross-entropy (instead of the MSE) is used (e.g. [here](https://github.com/pytorch/examples/blob/main/vae/main.py)), the assumption is that you're maximizing a Bernoulli likelihood (instead of a Gaussian) - this can also be easily shown.
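To make the Gaussian-likelihood/MSE equivalence concrete, here is a quick numerical sanity check (plain NumPy; purely illustrative, with made-up data standing in for pixels and decoder means):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)                      # "ground-truth" values
x_hat = x + rng.normal(scale=0.1, size=1000)   # decoder means f(z)

# Gaussian NLL with unit variance: 0.5*(x - x_hat)^2 + 0.5*log(2*pi)
nll = 0.5 * (x - x_hat) ** 2 + 0.5 * np.log(2 * np.pi)
mse = (x - x_hat) ** 2

# The NLL is half the squared error plus a constant, so minimising
# either objective yields the same decoder parameters.
assert np.allclose(nll - 0.5 * mse, 0.5 * np.log(2 * np.pi))
```

The additive constant (and the factor of one half) does not change the location of the minimum, which is the whole point of the equivalence.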
Upvotes: 2 <issue_comment>username_2: If $p(x|z) \sim \mathcal{N}(f(z), I)$, then
\begin{align}
\log\ p(x|z)
&\sim \log\ \exp(-(x-f(z))^2) \\
&\sim -(x-f(z))^2 \\
&= -(x-\hat{x})^2,
\end{align}
where $\hat{x}$, the reconstructed image, is just the distribution mean $f(z)$.
It also makes sense to use the distribution mean when using the decoder (vs. just when training), as it is the one with the highest pdf value. So, the decoder produces a distribution from which we take the mean as our result.
Upvotes: 3 [selected_answer] |
2021/04/15 | 783 | 3,037 | <issue_start>username_0: Is the reason why linear activation functions are usually pretty bad at approximating functions the same reason why combinations of hermitian polynomials or combinations of sines and cosines are better at approximating a function than combinations of linear functions?
For example, regardless of the amount of terms in this combination of linear functions, the function *will always* be some form of $y = mx + b$. However, if we're summing sines, you *absolutely cannot* express a combination of sines and cosines as something of the form $A \sin{bx}$. For example, a combination of three sinusoids cannot be simplified further than $A \sin{bx} + B \sin{cx} + D \sin{ex}$.
Is this fact essentially *why* the Fourier series is able to approximate functions (other than obviously the fact that $A \sin{bx}$ is orthogonal to $B \sin{cx}$)? Because if it could be simplified into *one* sinusoid, it could never approximate an arbitrary function because it's lost its robustness? Because with other terms combined, whereas linear functions summed up gain no further ability to approximate, things like sinusoids actually begin to approximate really well with enough terms and with the right constants.
In that vein, is this the reason why *non*-linear activation functions (also called non-linear classifiers?) are generally valued more than linear ones? Because linear activation functions simply are lousy function approximators, while, with enough constants and terms, non-linear activation functions can approximate *any* function?<issue_comment>username_1: Ok. Here is an analogy for you. The equation for a neuron is wx + b, which is equivalent to a straight line. If we don't apply non-linearity we will be stuck with a straight line forever. So, this type of network won't even be able to model randomly distributed points in a unit circle.
What does non-linearity do? If you look the graphs for x to the power 2, 3, 4 and so on. You see with each increase in power, the line gets tugged like a sine or cosine curve. That bend in the straight line allows us to then model boundaries with arbitrary shapes.
The more the difficult the boundary to model between the classes the more bends in the line you need to model and so, you keep on increasing the layers in neural network.
Upvotes: 0 <issue_comment>username_2: Your analogy is correct, except it is not really an "analogy". [Sin is an activation function](https://openreview.net/forum?id=Sks3zF9eg) - in past works (before modern deep learning boom) it was rather standard to see it listed as a possible activator.
So your expression $\sigma(x) = A\sin ax + B \sin bx + D \sin ex$ is of a neural network with one 3-neuron layer and a single output linear neuron:
$$\sigma(x) = \sum\_i V\_i \sin\left(\sum\_jW\_{ij} x\_j + b\_i\right)+\beta$$
With all biases being zero $b\_i=\beta=0$, the output weights are $V\_i = \left(A,B,D\right)$ and the inner weight matrix is diagonal:
$$W\_{ij} = \begin{pmatrix}a&0&0\\0&b&0\\0&0&e\end{pmatrix}$$
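As a quick check that this really is the same function, here is a tiny NumPy sketch (the constants are arbitrary example values, not anything from the question):

```python
import numpy as np

A, B, D = 1.5, -0.7, 0.3      # output weights V_i (arbitrary example values)
a, b, e = 2.0, 3.5, 0.5       # inner weights (the diagonal of W)

def net(x):
    # one hidden layer of 3 sine neurons, zero biases, linear output
    hidden = np.sin(np.array([a, b, e]) * x)
    return np.array([A, B, D]) @ hidden

xs = np.linspace(-3.0, 3.0, 50)
direct = A * np.sin(a * xs) + B * np.sin(b * xs) + D * np.sin(e * xs)
assert np.allclose([net(x) for x in xs], direct)
```

So the sum of sinusoids from the question is literally a one-hidden-layer network with sine activations.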
Upvotes: 2 |
2021/04/17 | 504 | 2,162 | <issue_start>username_0: I was reading [DT-LET: Deep transfer learning by exploring where to transfer](https://www.sciencedirect.com/science/article/abs/pii/S0925231220300874), and it contains the following:
>
> It should be noted direct use of labeled source domain data on a new scene of target domain would result in poor performance due to the semantic gap between the two domains, even they are representing the same objects.
>
>
>
Can someone please explain what the semantic gap is?<issue_comment>username_1: In terms of transfer learning, semantic gap means different meanings and purposes behind the same syntax between two or more domains. For example, suppose that we have a deep learning application to detect and label a sequence of actions/words $a\_1, a\_2, \ldots, a\_n$ in a video/text as a "greeting" in a society A. However, this knowledge in Society A cannot be transferred to another society B that the same sequence of actions in that society means "criticizing"! Although the example is very abstract, it shows the semantic gap between the two domains. You can see the different meanings behind the same syntax or sequence of actions in two domains: Societies A and B. This phenomenon is called the "semantic gap".
Upvotes: 4 [selected_answer]<issue_comment>username_2: The wiki has a concise quote by <NAME>, where the gap is defined by "the difference in meaning between constructs formed within different representation systems". This connotes the core problem of translating meaning between an informal language (typically natural language) and a formal language (programing language or other formal symbolic system).
Informally, the problem could be defined as "the gap in meaning between two different contexts", and we might observe this in something as simple as [gestures having different meanings in different cultures](https://www.businessinsider.com/hand-gestures-offensive-different-countries-2018-6). It would be less of a problem translating meaning between two formal systems.
At its core, "Semantic gap" seems to relate to the difficulty in formalizing certain fuzzy concepts in symbolic systems.
Upvotes: 1 |
2021/04/22 | 1,622 | 6,917 | <issue_start>username_0: While reading the AlphaZero paper in preparation to code my own RL algorithm to play Chess decently well, I saw that the
>
> "The board is oriented to the perspective of the current player."
>
>
>
I was wondering why this is the case if there are two agents (black and white). Is it because there is only one central DCNN network used for board and move evaluation (i.e. there aren't two separate networks/policies used for the respective players - black and white) in the algorithm AlphaZero uses to generate moves?
If I were to implement a black move policy and a white move policy for the respective agents in my environment, would reflecting the board to match the perspective of the current player be necessary since theoretically the black agent should learn black's perspective of moves while the white agent should learn white's perspective of moves?<issue_comment>username_1: I am not an expert in RL. I have been playing Go for some years.
Let's quote from AlphaZero's paper first:
>
> Aside from
> komi, the rules of Go are also invariant to colour transposition; this knowledge is
> exploited by representing the board from the perspective of the current player (see
> Neural network architecture).
>
>
>
In the game of Go, the only difference between Black and White, apart from the board representation, is the komi (the number of points Black has to give White in the final count to compensate for playing first). Apart from the komi, there should be no difference in strategy for the same position with colours exchanged. In other words, given a state $s$ of black and white stones on the board, if the optimal policy for Black playing first is $\pi$, then if the colours of the stones on the board are exchanged and it is White's turn, the optimal policy for White should be the same $\pi$.
With this in consideration, there are at least 2 advantages of using a network that represents the board in the perspective of Self/Opponent rather than Black/White.
The first is that it prevents the network from the possibility of giving inconsistent strategies under two representations of the same state. Consider a network $f\_\theta$ that accepts the board representation in the order of $(B,W)$, and a state $s = (X\_t,Y\_t)$ in which $X\_t$ is a feature map for black stones and $Y\_t$ is a feature map for white stones and it is black's turn. Now consider a state $s' = (Y\_t,X\_t)$ (i.e. colours flipped) and it is white's turn. $s$ and $s'$ are essentially representation of the same state (except Komi which does not affect optimal policy). There could be a possibility that the network $f\_\theta$ gives different policies for these two representations. However, if $f\_\theta$ accepts the state as $(Self,Opponent)$, the input to the network would be the same (except the komi feature).
Therefore, this representation would significantly reduce the number of states represented by the feature vector $(X\_t,Y\_t)$, which is the second advantage for training the neural network. Considering that in Go the same local position could appear with colours exchanged elsewhere, the network could, with this implementation, recognize them as the same position. A decrease in the number of states could mean a significant drop in the parameters and capacity needed from the network.
The same principle of making use of different representations of the same state is followed in AlphaGo's other training implementations as well, such as augmenting its training data to include rotations and reflections of the same board position.
However, in the game of chess, this would be a different case. For a chess position, if the pieces' colours are exchanged and it becomes the opponent's turn, it would be a different state, because the positions of the KING and the QUEEN are not the same for the two colours.
Upvotes: 2 <issue_comment>username_2: There is a single neural network that guides self-plays in the Monte Carlo Tree Search algorithm. The neural network gets the current state of the board $s$ as an input and outputs current policy $\pi(a|s)$ and value $v(s)$.
The action probabilities are encoded in a (8,8,73) tensor. First two dimensions encode the coordinates of the figure to "pick" from the board. The third dimension encode where to move this figure: check out [this question](https://ai.stackexchange.com/questions/27336/how-does-the-alpha-zeros-move-encoding-work) for a discussion on how all possible moves are encoded in a 73 dimensional vector.
Similarly, the inputs of the network are organized in the (8, 8, 14 \* 8 + 7 = 119) tensor. The first two 8 x 8 dimensions, again, encode the positions on the board. Then come the positions of the figures, one plane per figure type (6 types): the first 6 planes for the player's figures, the next 6 planes for the opponent's figures, and two [repetition planes](https://ai.stackexchange.com/a/26656/20538). The 14 planes are repeated 8 times, supplying predecessor positions to the network. Finally, there are 7 extra planes, each encoded as a single uniform value over the board - castling rights (4 planes), total move count (2 planes) and the current player color (1 plane).
Note that the positions of player's figures and opponent's figures are encoded in fixed layers of the state tensor. If you don't flip the board to the perspective of the player then the network will have very different training inputs for black and white states. It also will have to figure out which direction the pawns can move depending on the current player color. None of that it is impossible, of course - but that unnecessarily complicates something that is already a very hard problem for the DNN to learn.
You can go further and completely split the training for white and black players, as you've described. But that'll essentially double the work you'll have to do train your nets (and, I suspect, there would be some stability troubles typical for adversarial training).
To summarize - you are generally right - there is no fundamental need to flip the board. All the above details in state encoding are done to simplify the learning task for the deep neural network.
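For reference, the shapes above can be sanity-checked in a few lines; the 73 move types break down into 56 queen-style moves, 8 knight moves and 9 underpromotions, per the AlphaZero paper:

```python
import numpy as np

queen_moves = 8 * 7       # 8 directions x up to 7 squares
knight_moves = 8
underpromotions = 3 * 3   # {knight, bishop, rook} x {straight, 2 diagonal captures}
assert queen_moves + knight_moves + underpromotions == 73

policy = np.zeros((8, 8, 73))           # from-square x move type
state = np.zeros((8, 8, 14 * 8 + 7))    # 119 input planes
assert state.shape[2] == 119
```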
Upvotes: 3 [selected_answer]<issue_comment>username_3: In board games, whose turn it is to play makes a significant difference in the outcome of the game. Many times it's the difference between a loss and a win.
So this information has to be communicated to the neural network.
There are two ways to communicate that information:
1. To add extra input nodes which say whose turn it is to make a move.
2. To simply flip the signs of the pieces.
Method 1 works but it increases the number of nodes hence making the neural network more complex and larger. So method 2 is preferred to keep the neural network simple without losing the needed information. This process is known as **canonical form of the board**.
Flipping signs wouldn't work for games that have more than two players.
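A minimal sketch of what such a canonical form could look like, assuming a toy encoding where positive integers are one side's pieces and negative integers the other's (this is illustrative only, not AlphaZero's plane-based encoding):

```python
import numpy as np

def canonical_form(board, player):
    """Board from the perspective of `player` (+1 = white, -1 = black)."""
    if player == 1:
        return board
    # Flip ranks so the current player's pawns advance "upwards",
    # and flip signs so the player's own pieces are always positive.
    return -np.flipud(board)
```

With this, the network always sees its own pieces as positive values moving in the same direction, regardless of colour.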
Upvotes: 0 |
2021/04/25 | 1,562 | 6,619 | <issue_start>username_0: **Any (AGI)-KERAS like libraries useful to develop a real Artificial General Intelligence (AGI) system?** Any deep-learning framework to develop AGI applications? any real Transformer-based (like Chat-GPT) libraries that can be used to achieve a minimum level of Artificial General Intelligence?
Existing frameworks/algorithms used in NN, NLP, ML, etc. are not enough in my opinion. Any framework has to be based on building blocks from: cognitive science, neuroscience, mathematics, artificial intelligence, computer science, psychology, sociology, etc.
2021/04/26 | 811 | 3,416 | <issue_start>username_0: I am training an object detection machine learning pipeline. Among the many metrics provided out of the box by tensorflow object detection API, I look at *total\_loss* and *DetectionBoxes\_Precision/mAP@.75IOU*:
[](https://i.stack.imgur.com/oEsLQ.png)[](https://i.stack.imgur.com/2eygc.png)
Here the x-axis is the number of steps i.e. model experience. The orange line is for the training data and the blue line is for the validation data.
From the loss graph I would conclude, that at approx 2k steps overfitting starts, so using the model at approx 2k steps would be the best choice. But looking at the precision graph, training e.g. until 24k steps would be a much better model.
Which one is the best model?
Here, loss and the precision metric where picked just for illustrating the dilemma, there are many more metrics available, leading to multiple conclusions about when overfitting actually starts.<issue_comment>username_1: Generally, a model is considered to be overfitted when there's a huge gap between training and test/validation performances.
So during training, you monitor the loss on both the validation and training data, and stop training if the validation loss stagnates or increases while the training loss keeps decreasing.
In your scenario, I'm not sure what the total loss metric corresponds to. As I said, you have to measure loss on held-out data other than the training data to detect, and prevent, overfitting.
Upvotes: 0 <issue_comment>username_2: >
> From the loss graph I would conclude, that at approx 2k steps overfitting starts, so using the model at approx 2k steps would be the best choice. But looking at the precision graph, training e.g. until 24k steps would be a much better model. Which one is the best model?
>
>
>
Different metrics will lead you to different conclusions. To detect overfitting under any metric, you need to plot the training and validation/dev curves of the same metric, so it is harder to spot on your second graph.
The reason why loss/cost functions and other metrics can disagree on what the best model is, is due to sensitivity to different types of error. Often loss functions will be sensitive to outliers - large differences in predictions versus ground truth in the training or validation data. So it is common to see loss diverging between validation and train first - probably due to a very few bad predictions caused by difficult edge cases that the model is not coping well with - whilst metrics such as accuracy may continue to improve on validation and test sets.
The way to resolve this is to pick the metric that is important to you for business reasons (ideally ahead of the training, so you are not biased in your own decision about what to do), and use that to make judgement calls. This is easier to rely on the larger and cleaner your dataset is.
One other thing you can do is to do some error analysis - find the worst cases for the loss function, and check that they are labelled correctly.
If things still look inconclusive to you, and dataset size is small enough that results are noisy, then consider using k-fold cross validation to reduce effects of sample bias. This takes more time, but may improve your confidence in your model selection process.
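A k-fold split is simple to implement by hand if you want to see the mechanics (plain NumPy sketch, not a library implementation):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    # Shuffle once, then split into k roughly equal validation folds;
    # each fold takes a turn as the validation set.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```

Train and evaluate the model once per fold and average the metric; the spread across folds also gives a feel for the sampling noise.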
Upvotes: 2 [selected_answer] |
2021/04/26 | 981 | 4,265 | <issue_start>username_0: I am trying to add more data points in my (almost) balanced dataset for training my neural network. I have come across techniques such as SMOTE or Random Over Sampling, but they work best for imbalanced data (as they balance the dataset). How can I do this and is it even worth it?
P.S. I know copying the same data points and appending them at the end doesn't add much value, but can we do it, and can it help to increase the prediction accuracy?<issue_comment>username_1: Random over sampling creates duplicates of existing examples, so applying this to your training data would be the same as increasing the weight of the oversampled examples. If it's done to all of the examples uniformly then the effects will probably cancel out. SMOTE, on the other hand, creates synthetic examples that are linear combinations of existing examples. Thus it can be thought of as a type of data augmentation, and in some situations this might improve your model's predictions.
Upvotes: 0 <issue_comment>username_2: [Gretel](https://console.gretel.cloud/projects/_new/) is a good tool for processing data. [Facets](https://github.com/PAIR-code/facets) is good for the visualizations.
Is it worth it? [Most learners will exhibit bias towards the majority class](https://journalofbigdata.springeropen.com/articles/10.1186/s40537-019-0192-5), and in extreme cases may ignore the minority class altogether.
It really depends on the goals and requirements of your project. Just because balancing is desirable in general doesn't mean it's better for your particular project; if your dataset is **almost** balanced, you should probably continue with something else and consider balancing your data later in the project.
**Edit**: What's new to learn from the same example?
Upvotes: -1 <issue_comment>username_3: I presume you are attempting to solve a classification problem.
IMO, there's no decision-making template you could follow to know whether to use oversampling or not. I would typically compare results (ROC AUC, PRC curves) across datasets (original vs. undersampled vs. oversampled) to decide.
You can consider some additional variants of SMOTE like SMOTE-NC (SMOTE does not work if any of your predictors is categorical; SMOTE-NC does), Borderline-SMOTE, K-means SMOTE, as well as ADASYN.
Alternately, you can also choose to undersample your majority class using techniques such as ENN, Tomek Links, Instance Hardness, CNN, One sided Selection etc.
They usually generate better results than random over/under sampling.
Do note that over/under sampling methods are generally used for imbalanced datasets.
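To illustrate what SMOTE-style oversampling actually does, here is a toy NumPy sketch of the core interpolation idea (not the imbalanced-learn implementation):

```python
import numpy as np

def smote_like(X_min, n_new, k=3, seed=0):
    # Each synthetic sample lies on the segment between a random minority
    # sample and one of its k nearest minority-class neighbours.
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)
```

Because every synthetic point is a convex combination of two real minority samples, the new points stay inside the region the minority class already occupies.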
Upvotes: 0 <issue_comment>username_4: If your data is well balanced but small, I would recommend using a simpler algorithm to classify your data.
Upvotes: -1 <issue_comment>username_5: Does your question pertain to general data augmentation? That is already in heavy use: applying transformations while training is very common, and over several epochs the network benefits from learning the new representations. The transformations are applied to all classes, with a probability of transformation (horizontal flip, for example) specified by the user.
If you want to make your almost balanced dataset a balanced one, you can look into specific augmentations that you perform to the (almost) minority class before feeding it to the model. You could look into preprocessing methods that libraries like Keras have made open-source.
Upvotes: 0 <issue_comment>username_6: If your dataset is almost balanced, I don't think any sampling method would provide a significant improvement in accuracy. You can also experiment with the [focal loss](https://arxiv.org/abs/1708.02002) function designed to deal with minor class imbalance, but I'm not sure how much that would help.
I don't think you can artificially increase the dataset size besides doing data augmentation. For example, for computer vision tasks (like classification and object detection) dealing with images, you can randomly translate and rotate the images, so that the neural network is effectively seeing different "representations" of the same images to alleviate overfitting. This is especially important if your dataset size is small, but does not pertain to solving the class imbalance issue.
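A minimal on-the-fly augmentation for image data could look like this (NumPy sketch; a real pipeline would use a library's augmentation utilities):

```python
import numpy as np

def augment(img, rng, p_flip=0.5, max_shift=2):
    # Random horizontal flip plus a small circular shift, so the network
    # sees a slightly different version of the image each epoch.
    if rng.random() < p_flip:
        img = img[:, ::-1]
    shift = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(img, shift, axis=1)
```

Both operations only rearrange pixels, so the label is unchanged while the input varies from epoch to epoch.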
Upvotes: 0 |
2021/04/28 | 1,211 | 4,493 | <issue_start>username_0: [](https://i.stack.imgur.com/J2usn.png)
I have this problem above and I'm trying to think of how to apply [alpha-beta pruning](https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning) to it. The algorithm states that on your opponent's turn (the expecti turn) you just return the lowest value, but does that mean you apply the probabilities to those values? So for the far left you'd get 2 as the largest value and then multiply that by 0.5, but then that sets $\beta$ in the expecti node to $0.5\*2=1$, and when it goes into the branch to the right it's comparing values without the probabilities applied to them when updating $\beta$.<issue_comment>username_1: The alpha-beta pruning algorithm is used to improve performance by rejecting options that cannot satisfy the required condition; it is also possible to decide whether or not to take probability factors into consideration.
The algorithm works on a tree structure, and if there are many levels (10-20), it allows you to eliminate paths that will logically never be used, saving memory and computing resources.
In this particular case, for finding the minimum value, it works like this:
First branch:
* Go to B.
* Go to D, where the leaves are 2 and 3, so return the min, 2.
* Go from B to E and see 5; given the value already found under B, there is no need to check the rest, because everything below E will be higher.
Second branch:
* Go to C.
* Go to F and check 0 and 1.
* If C already has 1, then it is not necessary to go to G, because 1 is smaller.
The given sequence is a simplification; in the case of this algorithm there are also alternating min and max layers.
[](https://i.stack.imgur.com/p0QEK.png)
Source:
[https://www.javatpoint.com/ai-alpha-beta-pruning#:~:text=Key%20points%20about%20alpha%2Dbeta,values%20to%20the%20child%20nodes](https://www.javatpoint.com/ai-alpha-beta-pruning#:%7E:text=Key%20points%20about%20alpha%2Dbeta,values%20to%20the%20child%20nodes).
---
Implementation, however, is a more complex matter - unless we do copy and paste :-)
<https://gist.github.com/exallium/1446104/5109388cfc21578f555dcac0ba54da680326af7b>
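For reference, the core of deterministic alpha-beta pruning fits in a few lines; this is a sketch over a nested-list game tree whose leaves are numbers, not a full game engine:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Alpha-beta search over a nested-list tree; leaves are numbers."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:
                break              # beta cut-off: MIN would never allow this
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if beta <= alpha:
            break                  # alpha cut-off: MAX would never allow this
    return best
```

For example, on the tree `[[3, 5], [2, 9]]` with a MAX root, the leaf 9 is never visited: once the second MIN node sees 2, the MAX root already has a guaranteed 3.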
Upvotes: -1 <issue_comment>username_2: The internet doesn't seem to have easily accessible resources on this topic. In my newbie opinion, it should not be possible to prune any nodes. Expectiminimax takes the weighted average of the children, so it would need to consider all leaf values in order to do so. There is no node which can be ignored, as each will be an operand in a sum. Now let's say the left subtree was bigger, and in the right subtree you evaluated the first min node and it is smaller: since the second min node can contribute to the sum, making it bigger, it cannot be ignored. Now let's say the first min node on the right subtree was bigger: since the second term adds or reduces (reduces in this case) weight, it still has to be considered to get the final value.
Since all the nodes are checked and added together with appropriate weights under chance nodes, I highly doubt there is any advantage of alpha-beta pruning here since it would take the same number of steps as regular expectiminimax. Perhaps if the tree was larger and the chance nodes were central, more nodes could have been pruned out.
Upvotes: 0 <issue_comment>username_3: I know I'm a bit late, but here is a response from the book "Artificial Intelligence: A Modern Approach" by Russell and Norvig: "It may have occurred to you that something like alpha-beta pruning could be applied to game trees with chance nodes. It turns out that you can. The analysis for MIN and MAX nodes is unchanged, but we can also prune chance nodes, using a bit of ingenuity. At first sight, it might seem impossible to find an upper bound on the value of a chance node before looking at all its children, because the value of a chance node is the average of its children's values, and in order to compute the average of a set of numbers, we must look at all the numbers. But if we put bounds on the possible values of the utility function, then we can arrive at bounds for the average without looking at every number. For example, say that all utility values are between -2 and +2; then the value of leaf nodes is bounded, and in turn we can place an upper bound on the value of a chance node without looking at all its children".
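As a concrete sketch of the bound described in that quote (toy code, not a full expectiminimax; it assumes all leaf utilities are known to lie in [-2, +2]):

```python
LO, HI = -2.0, 2.0   # known bounds on all leaf utilities
                     # (LO would be used symmetrically under a MIN parent)

def chance_value(children_probs, alpha):
    # The chance node's value is sum(p * v). After seeing part of the
    # children, it is at most partial_sum + remaining_prob * HI; if even
    # that optimistic bound cannot beat alpha, a MAX parent will never
    # choose this branch, so we can stop early.
    total, remaining = 0.0, 1.0
    for value, prob in children_probs:
        total += prob * value
        remaining -= prob
        if total + remaining * HI <= alpha:
            return total + remaining * HI   # an upper bound is enough to prune
    return total
```

With `alpha = -inf` this computes the exact expectation; with a tight enough `alpha` it can return after the first child without looking at the rest.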
Upvotes: 0 |
2021/04/29 | 1,339 | 4,760 | <issue_start>username_0: I am familiar with the currently popular neural network in deep learning, which has weights and is trained by gradient descent.
However, I found many papers that were popular in the 1980s and 1990s.
These papers have titles like "Neural networks to solve optimization problems".
For example, Hopfield first used this name, using a "neural network" to solve linear programming problems [1].
Later, Kennedy et al. used a "neural network" to solve nonlinear programming problems [2].
I summarize the difference between the current popular neural network and the "Neural networks":
1. They do not have parameters (weights and biases) to train or to learn from data.
2. They used a circuit diagram to present the model.
3. The model can be simplified as an ODE system and has a Lyapunov function as an objective.
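The ODE/Lyapunov structure in point 3 can be mimicked with a tiny gradient-flow sketch. The energy function below is made up for illustration and is not taken from either paper; it only shows the defining property that the "circuit dynamics" monotonically decrease an energy (Lyapunov) function as the state evolves:

```python
def energy(x):
    # A made-up Lyapunov/energy function with its minimum at x = 3.
    return 0.5 * (x - 3.0) ** 2

def grad(x):
    return x - 3.0

# Forward-Euler integration of the gradient flow dx/dt = -dE/dx.
x, dt = 0.0, 0.1
energies = [energy(x)]
for _ in range(200):
    x -= dt * grad(x)
    energies.append(energy(x))

# The energy decreases monotonically along the trajectory, and the
# state converges to the minimizer of E.
assert all(a >= b for a, b in zip(energies, energies[1:]))
print(round(x, 4))  # -> 3.0
```

No weights are learned from data here: the "training" of such networks is the ODE itself converging, which is exactly the difference listed above.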
Please take a look at these two papers in the 1980s:
1. [Neurons with graded response have computational properties like those of two state neurons][2] (<NAME>)
2. [Neural Networks for Non-linear Programming](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1783) (<NAME> & <NAME>)
Reference:
[1]: <NAME>, <NAME>, “Neural” computation of decisions in optimization problems, Biological Cybernetics 52 (3) (1985) 141–152.
[2]: <NAME>, <NAME>, Neural networks for nonlinear programming, IEEE Transactions on Circuits and Systems 35 (5) (1988) 554–562.<issue_comment>username_1: Well, the goal of any paper is to allow the reader to understand what the author is trying to describe.
A lot of people have a lot of experience looking at circuit diagrams and figuring out what those circuits will do. For these people, a circuit diagram may be the clearest and easiest way for them to understand how a particular thing works. So, it makes sense that an author would include a circuit diagram, in order to make it easy for those people to understand the concepts.
There are two particular reasons why a circuit diagram is especially likely to show up in a paper about neural networks:
The first reason is that analog circuits are closely related to [ordinary differential equations](https://en.wikipedia.org/wiki/Ordinary_differential_equation), and digital circuits are closely related to [sequential logic](https://en.wikipedia.org/wiki/Sequential_logic). So, if you have a neural network or something that uses ordinary differential equations or sequential logic, then a circuit diagram might be a simple way to express how it works.
The second reason is that a lot of researchers who are familiar with computers are also familiar with electronic circuits. This was especially true in the early days of computers, when people *had* to be familiar with electronics, math, or both in order to understand how to program a computer.
Upvotes: 2 <issue_comment>username_2: In the early days of neural networks the theorists and practitioners were educated in mathematics, psychology, neurophysiology, **electrical engineering**, and neurobiology. Computer science was still in its infancy. The first neural networks were modeled as electrical circuits.
There is evidence of this in the 1943 paper by <NAME> and <NAME> [1], and a 1956 paper by Rochester et al. [2].
The latter paper uses terms such as 'circuits' and 'switching'. One idea in the paper is explained in terms of an "Eccles-Jordan flip-flop circuit", although there are no drawings. Nathaniel Rochester had designed the IBM 701 [3] and "led the first effort to simulate a neural network" [4].
Brain structure was discussed in terms of 'neural circuits' as early as 1937 [5].
I am not sure when the first electrical circuit diagram appeared in publication, but it makes sense that early neural network designers would have thought of their implementation as such.
References:
* [1] <NAME>., and <NAME>. (1943) A Logical Calculus of the Ideas Immanent in Nervous Activity. <https://jontalle.web.engr.illinois.edu/uploads/498-NS.21/McCulloch-Pitts-1943-neural-networks-ocr.pdf>
* [2] <NAME>., <NAME>., <NAME>., & <NAME>. (1956). Tests on a cell assembly theory of the action of the brain, using a large digital computer. IRE Transactions on Information Theory, 2(3), 80–93. https://doi.org/10.1109/TIT.1956.1056810
* [3] Brief History of Neural Networks
<https://www.linkedin.com/pulse/data-science-term-neural-networks-70-year-old-anchit-sharma/?trk=public_profile_article_view>
* [4] This Data Science term - Neural Networks is 70-Year-Old Technology!
<https://medium.com/analytics-vidhya/brief-history-of-neural-networks-44c2bf72eec>
* [5] Papez circuit, <https://en.wikipedia.org/wiki/Papez_circuit>
Upvotes: 2 |
2021/05/01 | 1,126 | 3,932 | <issue_start>username_0: Let $\mathcal{S}$ be the training data set, where each input $u^i \in \mathcal{S}$ has $d$ features.
I want to design an ANN so that the cost function below is minimized (the sum of the square of pairwise differences between model outputs) and the given constraint is satisfied, where $w$ is the ANN model parameter vector.
\begin{align}
\min_{w} \; & \sum_{\{i, j\} \in \mathcal{S}}\left(f\left(w, u^{i}\right)-f\left(w, u^{j}\right)\right)^{2} \\
\text{s.t.} \; & f\left(w, u^{i}\right) \geq q_{\min}, \quad i \in \mathcal{S}
\end{align}
What kind of ANN is suitable for this purpose?
2021/05/02 | 917 | 4,318 | <issue_start>username_0: I wonder if creating data set only by augmentation base images is a bad practice.
I mean the situation when you have to train a net to predict really simple patterns, for example printed-like digits, where all digits from a specific group look basically the same - for example, all ones look the same, and so on. The only difference is rotation/translation etc. in the image.
Is it a bad way to create a data set by taking a digit image and randomly rotating, translating and maybe eroding/dilating it?
My intuition tells me that something's wrong with that approach, but I cannot find any reason why it should be wrong.<issue_comment>username_1: Data augmentation usually means rotating, cropping and translating images. And this makes sense if your network might encounter these kinds of images.
If I take a digit **recognition** network like LeNet, it is useless to complicate the task of the network by forcing it to learn rotated digits, which could lead to a more complex architecture and training and less accuracy in the task. Another example I can think of is human pose recognition (the OpenPose project). As we humans usually stand with our feet at the bottom of the image and the head on top, the OpenPose project didn't use rotation of images on the dataset.
So I would say data augmentation is a great tool (especially when we lack data), but I would only use it when the augmented data could actually be encountered when performing the task. If the digits are always oriented and placed in the middle of the image, it doesn't make much sense to use translations and rotations on the data set to augment it unless we really lack data.
In your example, it does make sense to me to create the dataset using data augmentation.
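As a rough sketch of the kind of augmentation discussed in the question (all parameter ranges here are arbitrary illustrations), a data set can be generated from a single base image with random translations and an occasional dilation:

```python
import random

def translate(img, dy, dx):
    # Shift a binary grid by (dy, dx), padding with zeros.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = img[y][x]
    return out

def dilate(img):
    # Morphological dilation with a 3x3 structuring element.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

def augment(img, rng):
    out = translate(img, rng.randint(-1, 1), rng.randint(-1, 1))
    if rng.random() < 0.5:
        out = dilate(out)
    return out

base_one = [[0, 1, 0],
            [0, 1, 0],
            [0, 1, 0]]  # a toy "1"
rng = random.Random(0)
dataset = [augment(base_one, rng) for _ in range(5)]
```

As both answers stress, whether such a generated set is adequate depends entirely on whether real inputs will show the same variations.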
Upvotes: 1 <issue_comment>username_2: Outside of using a generated dataset to study machine learning, the typical purpose of a trained machine learning model is to process new inputs from some source.
For a model to be effective, the training data set inputs and new inputs should be taken from the same distribution. The loss function used in training, combined with cross-validation to measure and maintain generalisation, will have ensured that the most accurate results occur for a population of inputs that is similar to the training data.
The further that inputs stray from being like the training data, in any aspect, the more likely it is that outputs from a trained model are inaccurate. This may include over-generalising - if you set much higher ranges for some variations - e.g. far more dilation and erosion than would be seen in practice - then the neural network weights will be tuned to allow for this data and *may* score worse on your target data even though they appear to score well overall on the training data. That is because the measurements for loss and accuracy from the more realistic generation will be diluted by measurements from training data that has no relevance to the real-world problem you are trying to solve. Maybe it will be OK, maybe worse, maybe even better - however, your measurements of loss and accuracy during training will not tell you.
So there is significant danger in relying on a generated-only dataset for training. If any aspect of the simulated inputs does not match how the system will be used in practice, the impact is likely to be felt in terms of reduced accuracy.
For your digits example, you should consider where the "real" digits will come from later, and try to ensure that your data generation takes into account any complications, variations, imperfections that will occur when collecting the data. For instance, if the real digits are scanned from paper, then take a look at some typical scanned images, and check how close your generated data is to them.
If you can obtain a limited number of "real" values, perhaps not enough for training, but enough to get some accuracy statistics from, then consider using them for test and cross-validation phases. Remember that using them for test should be done sparingly: they should not be used to select between models with similar results, but only to establish a rough estimate of accuracy at the end of training. Using some for cross-validation may help select a model that generalises best between the generated data set and reality, but that precludes using the same examples for test.
Upvotes: 0 |
2021/05/03 | 1,102 | 5,049 | <issue_start>username_0: Recognition of optical patterns (as pixel maps) by neural networks is standard. But optical patterns may be only slightly distorted or noisy, and may not be arbitrarily scrambled – e.g. by permutations of rows and columns of the pixel map – without losing the possibility to recognize them. This in turn is the normal case for abstract graphs in their standard representation as adjacency matrices: only under some permutations of nodes a possible pattern is visible. In general, for **almost all** random graphs under **no** permutation a pattern is visible, but for **all** graphs under **almost all** permutations a pattern is *in*visible.
How can this be handled in the context of either unsupervised or supervised learning? Assume you have a huge set of graphs with 100 nodes and 1,000 edges, given as 100$\times$100 adjacency matrices under arbitrary permutations, but with only two isomorphism classes. How could a neural network find this out and learn from the samples?
Is this possibly common knowledge: that it can **not**? Or are there any tricks?
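The permutation problem can be made concrete with a short sketch (the 4-node graph below is made up): relabelling the nodes scrambles the adjacency matrix as a pixel map, while isomorphism invariants such as the sorted degree sequence are unchanged.

```python
def permute(adj, perm):
    # Apply the node relabelling perm to an adjacency matrix: P A P^T.
    n = len(adj)
    return [[adj[perm[i]][perm[j]] for j in range(n)] for i in range(n)]

def degree_sequence(adj):
    # Sorted row sums: invariant under any node permutation.
    return sorted(sum(row) for row in adj)

adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 1, 0]]

perm = [2, 0, 3, 1]
scrambled = permute(adj, perm)

print(scrambled != adj)                                    # -> True: the pixel map changes
print(degree_sequence(scrambled) == degree_sequence(adj))  # -> True: the invariant does not
```

A network fed raw adjacency matrices sees only the scrambled pixel map, which is why permutation invariance has to be built in or learned.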
(One trick might be to draw the graph [force-directed](https://en.wikipedia.org/wiki/Force-directed_graph_drawing) and hope that it settles in a recognizable configuration. But for this to be detectable, a much larger pixel map than 100$\times$100 would be required. But why not?)
2021/05/04 | 945 | 3,688 | <issue_start>username_0: Convolutional neural networks (CNNs) contain convolutional layers. In modern deep learning libraries such as [Tensorflow](http://www.tensorflow.org) and [PyTorch](http://www.pytorch.org), convolutional layers are implemented by using the cross-correlation operator instead of the convolution operator. The difference is that in convolution, the kernel is flipped before applying it to the input.
For example, in the book ["Deep Learning"](https://www.deeplearningbook.org/), it is explained as follows.
>
> Many machine learning libraries implement cross-correlation but call
> it convolution. --- In the
> context of machine learning, the learning algorithm will learn the
> appropriate values of the kernel in the appropriate place, so an
> algorithm based on convolution with kernel flipping will learn a
> kernel that is flipped relative to the kernel learned by an algorithm
> without the flipping. It is also rare for convolution to be used alone
> in machine learning; instead convolution is used simultaneously with
> other functions, and the combination of these functions does not
> commute regardless of whether the convolution operation flips its
> kernel or not.
>
>
>
This makes perfect sense and convincingly argues why implementing the flipping of the kernel would be unnecessary.
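The relationship can be checked with a tiny 1-D sketch: "valid" cross-correlation slides the kernel as-is, while convolution flips it first, so the two coincide up to a kernel reversal.

```python
def xcorr(x, k):
    # "Valid" 1-D cross-correlation: slide the kernel without flipping it.
    n = len(x) - len(k) + 1
    return [sum(x[i + j] * k[j] for j in range(len(k))) for i in range(n)]

def conv(x, k):
    # Convolution = cross-correlation with the flipped kernel.
    return xcorr(x, k[::-1])

x = [1, 2, 3, 4]
k = [1, 0, -1]
print(xcorr(x, k))  # -> [-2, -2]
print(conv(x, k))   # -> [2, 2]
```

Since a learned kernel can absorb the flip, a layer trained with either operator ends up computationally equivalent, which is the point made in the quoted passage.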
But how come CNNs are not commonly called "cross-correlational neural networks" instead of "convolutional neural networks"? To the best of my knowledge, the first concrete implementations of CNNs predate any of the above-mentioned libraries. Did these early implementations of CNNs indeed use the convolution operator, leading to the name? Or is there another reason?<issue_comment>username_1: It's inherited from math, but computer scientists optimized the algorithm and stuck with the original term.
3Blue1Brown also comes to the same conclusion ([at 13:23](https://youtu.be/KuXjwB4LzSA?t=804)) that
>
> Another thing worth highlighting is that in the computer science context, this notion of flipping around that kernel before you let it march across the original often feels really weird and just uncalled for. But again, note that that's what's inherited from the pure math context, where like we saw with the probabilities, it's an incredibly natural thing to do.
>
>
>
One of the first mentions of 'convolutions' in neural networks is in the seminal paper by [<NAME> et al (1989)](https://proceedings.neurips.cc/paper/1989/file/53c3bce66e43be4f209556518c2fcb54-Paper.pdf). However, they also do not specify whether they perform true convolution. They simply state
>
> This operation is equivalent to a convolution with a small size kernel, followed by a squashing function.
>
>
>
Upvotes: 1 <issue_comment>username_2: Maths and calculus. I remember studying convolution 40 years ago in the Digital Signal Processing subject at the University.
>
> "It was the 1760s when the Swiss mathematical genius Leonhard Euler
> (1707–1783) suffered complete blindness, but this illness did not
> prevent him from contributing to mathematics. In fact, in those years,
> he wrote a memorable book about integral calculus and the solution of
> differential equations (DEs) by certain definite integrals [6]. ....
>
>
> A first and simple case of this solution is K(u) = u and Q(x) = x and
> a second case is K(u) = -u and Q(x) = x. Euler studied the first case
> in [7], which is the first source in which the correlation operation
> is used exhaustively in statistical applications and related to
> convolution. "
>
>
>
[A History of the Convolution Operation](https://www.embs.org/pulse/articles/history-convolution-operation/)
Upvotes: 0 |
2021/05/06 | 692 | 2,868 | <issue_start>username_0: In the book [Deep Learning with Python](http://silverio.net.br/heitor/disciplinas/eeica/papers/Livros/%5BChollet%5D-Deep_Learning_with_Python.pdf), <NAME> writes (section 1.2.6, page 18)
>
> In practice, there are fast-diminishing returns to successive applications of shallow-learning methods, because *the optimal first representation layer in a three-layer model isn't the optimal first layer in a one-layer or two-layer model*. What is transformative
> about deep learning is that it allows a model to learn all layers of representation *jointly*, at the same time, rather than in succession (*greedily*, as it's called).
>
>
>
By *shallow learning*, we mean traditional machine learning models that aren't deep learning, such as support vector machines.
I understood the above as below.
>
> Using a model with three-layer shallow-learning methods has the same output (predicted) value as using a one-layer shallow-learning method. The effect of using multiple layers of shallow-learning methods is to 'increase running time or repetition'.
>
>
>
Did I understand properly?<issue_comment>username_1: Quite surprising to find someone reading this same book. I read this part a week ago and the explanation is quite clear in the book :
* If you use successive shallow learning methods, you first train one model, then you train another model with the outputs of your first model, and then a third with the outputs of your second model. The problem with that is that each model is trained to get good results at its own task, not to pass information onwards, so there can be an improvement when adding successive models, but it is a very weak one.
* If you use deep learning, all the layers are trained at the same time, so each layer learns how to efficiently transfer important information to the next layer of the model. This is why it is much more efficient.
Hope I made it clearer
Upvotes: 3 [selected_answer]<issue_comment>username_2: No, you did not correctly understand the meaning of the passage.
>
> Using a model with three-layer shallow-learning methods has the same
> output (predicted) value as using one-layer shallow learning method,
> The effect of using multiple layers of shallow learning methods is to
> 'increase running time or repetition'.
>
>
>
A three-layer shallow learning method (i.e. three one-layer methods stacked one after the other in a sequential way) does not have the same predicted output value as a one-layer shallow learning method. The output will be different (and a three-layer model should show some improvement), and it will clearly need more time to get the result.
Deep learning methods, as mentioned by @username_1, are (generally speaking) more efficient at finding the important features and (usually) get better results than a model built from three layers of shallow-learning methods.
Upvotes: 0 |
2021/05/06 | 641 | 2,848 | <issue_start>username_0: Is binary classification using CNN possible if the training data only consists of one class?
I am working on landslide risk assessment using Convolutional Neural Networks and I want to train a network that can recognize high-risk areas using multi-spectral imagery. The bands will contain numeric and categorical data that I have found to be related to my field of work.
The problem is that I only have historical data indicating where a landslide has happened before and defining zones as low-risk is not reliable in this field (since we are not yet sure how these variables affect the risk or susceptibility, and I don't want to bias my categorization) and my training data will be made up of only one class.
Can this be done? Is training a network from scratch using only one class of training data possible?
If so, after building this network, can I use it to classify any zone and get any meaningful data from its output for risk assessment (for example, output value "1" being "similar to past landslides" and "0" being "not similar at all")?<issue_comment>username_1: It probably won't work, because during training the network will set its weights to always answer *class 1*, your data will confirm that it's right, and it'll continue like that forever.
Upvotes: 0 <issue_comment>username_2: The first answer is correct in that you can't use discriminative learning for binary classification here, since you only have one class. There are a few things you can try, however. If you can convert your images to feature vectors, kernel density estimation can be used to assign a probability density over the space of images, and then for any new image you can get a probability of it being similar to the training data. Generalising, you could use outlier detection methods such as isolation forest to determine if new images are "inliers" (i.e. landslide images) or "outliers".
Upvotes: 0 <issue_comment>username_3: While it won't work as you've possibly imagined it, you might find that implementing it as an autoencoder will allow you to train on one class and then identify things that are "not that."
With an autoencoder, the network works to build a latent, significantly lower dimensionality, representation of $x$. Rather than generating a $\hat{y}$ prediction, you are really generating $\hat{x}$. As a result, the loss function is measuring how well the output of the network matches the original input.
To apply this to your problem, after training the autoencoder you might either measure the binary cross-entropy loss of $(x, \hat{x})$ or the KL loss. If you graph output loss of items within the class vs items that are not in the class you will *usually* find that a very clear linear boundary can be defined to distinguish things that are not of the class you're interested in.
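The thresholding idea can be illustrated with a deliberately tiny stand-in for an autoencoder - a stored mean vector instead of a learned encoder/decoder, with made-up data - just to show how reconstruction error separates in-class from out-of-class inputs:

```python
def fit_prototype(samples):
    # "Train" on the single class: remember the mean feature vector.
    dim = len(samples[0])
    return [sum(s[i] for s in samples) / len(samples) for i in range(dim)]

def reconstruction_error(proto, x):
    # Squared error between the input and its "reconstruction".
    return sum((a - b) ** 2 for a, b in zip(proto, x))

in_class = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9]]   # the single training class
proto = fit_prototype(in_class)

inlier, outlier = [1.0, 1.0], [5.0, -3.0]
print(reconstruction_error(proto, inlier) < reconstruction_error(proto, outlier))  # -> True
```

A real autoencoder replaces the stored mean with a learned low-dimensional bottleneck, but the thresholding step on the error is the same.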
Upvotes: 1 |
2021/05/06 | 487 | 2,160 | <issue_start>username_0: I recently heard of GPT-3 and I don't understand how the attention models and transformers encoders and decoders work. I heard that GPT-3 can make a website from a description and write perfectly factual essays. How can it understand our world using algorithms and then recreate human-like content? How can it learn to understand a description and program in HTML?
2021/05/11 | 2,712 | 9,641 | <issue_start>username_0: The VC dimension is a very important concept in computational/statistical learning theory. However, the first time you read its definition, you may not immediately understand what it really represents or means, as it involves other concepts, such as *shattering*, *hypothesis class*, *learning* and sets. For example, let's take a look at [the definition given by <NAME> and <NAME> (p. 70)](https://www.cs.huji.ac.il/%7Eshais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf#page=70)
>
> DEFINITION $6.5$ (VC-dimension) The VC-dimension of a hypothesis class $\mathcal{H}$, denoted $\operatorname{VCdim}(\mathcal{H})$, is the maximal size of a set $C \subset \mathcal{X}$ that can be shattered by $\mathcal{H}$. If $\mathcal{H}$ can shatter sets of arbitrarily large size we say that $\mathcal{H}$ has infinite VC-dimension.
>
>
>
Without knowing what a hypothesis class is, or what the specific $C$, $X$ and $H$ in this definition are, it's difficult to understand this definition. Even if you are familiar with what a hypothesis class is (i.e. a set of sets, i.e. our set of functions/hypotheses/models, e.g. the set of all possible neural networks with a specific topology) and you know that $C$ and $X$ are sets of input points, it should still not be clear what the VC dimension really is or represents.
So, how would you intuitively and rigorously explain the exact definition of the VC dimension?
Note that I am not asking for answers like
>
> The VC dimension represents the *complexity* (or *expressive power, richness*, or *flexibility*) of your model/hypothesis class.
>
>
>
Of course, this is easy to memorize, but it's quite vague. So, I am **not** looking for vague/general answers. I am looking for answers that rigorously but intuitively describe the mathematical definition of the VC dimension. For example, you could provide an **illustration** that shows what the VC dimension is, and, in your *example* (e.g. the XOR problem cannot be solved by a set of lines), you can describe what $H$, $C$, and $X$ are, and how they relate to the typical concepts you will find in an introductory course on machine learning, but you should not forget to describe the concept of *shattering*. If you have other ideas of how to illustrate this concept mnemonically, feel free to provide an answer.<issue_comment>username_1: This may be lacking some rigour, but this is how I have explained it in the past:
>
> The VC Dimension is the maximum number of inputs such that for any
> subset of these inputs, it is possible for the model to classify the
> subset as true and the rest as false.
>
>
>
I am aware that this definition still uses the term subset, but in my experience even people who are not familiar with set theory understand the concept of a subset.
The part about this that tends to confuse most people is the notion of a subset that cannot be induced by a model. This can be nicely illustrated by picking a really simple hypothesis class, such as closed intervals, and showing how it cannot express non-contiguous positive inputs, or using a linear classifier and showing how it cannot express XOR.
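To make the closed-interval illustration concrete, here is a brute-force sketch (the candidate-endpoint construction is an ad-hoc choice that suffices for intervals): closed intervals shatter any two points, but never three, because the non-contiguous subset consisting of the two outer points can never be selected.

```python
from itertools import combinations

def interval_labels(points, a, b):
    # The subset of points an interval [a, b] labels as positive.
    return frozenset(p for p in points if a <= p <= b)

def shatters(points):
    pts = sorted(points)
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    eps = min(gaps) / 2 if gaps else 1.0
    # Candidate endpoints just below and above each point are enough to
    # realize every labelling an interval can possibly realize.
    ends = [p + d for p in pts for d in (-eps, eps)]
    achievable = {interval_labels(points, a, b) for a in ends for b in ends}
    achievable.add(frozenset())  # the empty interval
    wanted = {frozenset(s) for r in range(len(points) + 1)
              for s in combinations(points, r)}
    return wanted <= achievable

print(shatters([0.0, 1.0]))       # -> True: intervals shatter 2 points
print(shatters([0.0, 1.0, 2.0]))  # -> False: {0.0, 2.0} is not contiguous
```

This is exactly the VC-dimension computation for intervals: some 2-point set is shattered, no 3-point set is, so the VC dimension is 2.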
Upvotes: 0 <issue_comment>username_2: **Shattered set.** First we need a concept of a shattered set. I'll work from a shattered set [example in Wikipedia](https://en.wikipedia.org/wiki/Shattered_set#Example) adjusting it to your notation.
The statement that $\mathcal{H}$ shatters $C$ means that for every subset $A \subseteq C$ there is a set $B\in\mathcal{H}$ such that $B$ "separates" $A$ from $C \backslash A$. Writing this formally:
$$\text{shatters}(\mathcal{H},C) = \forall A \subseteq C\; \exists B\in\mathcal{H}\;.\;A = B\,\cap\, C $$
As an example consider the set $C$ of four points on $\mathcal{X} = \mathbb{R}^2$:
$$C = \left\{(0,0); (0,1); (1,0); (1,1)\right\}$$
And the classification class $\mathcal{H}$ being all possible 2D discs on $\mathcal{X} =\mathbb{R}^2$. (Notice that people use the word "[class](https://en.wikipedia.org/wiki/Class_(set_theory))" here, because we are dealing with "set-of-sets" stuff and that might get [tricky](https://en.wikipedia.org/wiki/Class_(set_theory)#Paradoxes)). Note that a disk can be represented as the set of all points inside the circle.
Now, it turns out that this $\mathcal{H}$ doesn't shatter $C$. The counterexample would be the subset $A$ of "diagonal" points:
$$A = \left\{(0,0); (1,1)\right\} \subset C$$
There is no 2D disk $B\in\mathcal{H}$ (in the context of learning theory, an element/set of the hypothesis class is a *hypothesis*, which can also be viewed as a function) that satisfies $A = B\,\cap\, C$. Intuitively, this means that you cannot use a 2D disk to separate your pair of points $A$ from the rest of the set $C$.
**The [VC dimension](https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_dimension)** of $\mathcal{H}$ is the maximal cardinality $d$ of a set $C$ that it can shatter. For $d=3$, we can provide three points $C'$ for which we can easily find discs realizing all eight possible subsets of $C'$:
$$C' = \left\{(0,0); (0,1); (1,0)\right\}$$
Above we've shown that for a particular set $C$ of $d=4$ points $\text{shatters}(\mathcal{H}, C)$ is false. But we need to prove that it is false *for all 4-point sets*. To prove this, we consider a convex hull of an arbitrary set of four points $C = \left\{a;b;c;d\right\}$. In general position, the convex hull is either a triangle or a quad:
[](https://i.stack.imgur.com/Igk5q.png)
In the case of a triangle, we choose the outermost three points as a counterexample set $A$. So, with this (labeling) configuration, you cannot find a disk that covers (i.e. classifies) the outermost three points correctly while excluding the point inside the triangle. In the case of a quad, we choose the pair on the longest diagonal. If any three points lie on a single line, then we choose the pair of outermost points.
This sketches a proof that no $d=4$ point set can be shattered by $\mathcal{H}$, but we've shown that there is a $d=3$ set $C'$ that can be, concluding that $\operatorname{VCdim}(\mathcal{H}) = 3$.
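As a sanity check, the argument can be verified numerically by brute force over a finite family of discs (the grid ranges and resolution below are arbitrary choices; the proof above guarantees the diagonal labeling fails for *any* disc, and the finite search is consistent with that):

```python
from itertools import combinations, product

def disc_realizes(points, subset, discs):
    """True if some disc contains exactly `subset` among `points`."""
    target = set(subset)
    for (cx, cy), r2 in discs:
        inside = {p for p in points
                  if (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= r2}
        if inside == target:
            return True
    return False

def shattered_by_discs(points, discs):
    """Discs shatter `points` iff every labeling is realizable."""
    subsets = [c for k in range(len(points) + 1)
               for c in combinations(points, k)]
    return all(disc_realizes(points, s, discs) for s in subsets)

# Candidate discs: centers on a grid in [-1, 2]^2, radii 0.05 .. 1.0.
centers = [(x / 10, y / 10) for x, y in product(range(-10, 21), repeat=2)]
radii2 = [(r / 20) ** 2 for r in range(1, 21)]
discs = [(c, r2) for c in centers for r2 in radii2]

three = [(0, 0), (0, 1), (1, 0)]
four = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(shattered_by_discs(three, discs))              # True: d = 3 is shattered
print(disc_realizes(four, [(0, 0), (1, 1)], discs))  # False: diagonal labeling
```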
**Another example** is considered on the next page (pg. 71) of the book you've referenced. It again considers 2D plane $\mathcal{X} =\mathbb{R}^2$ and the classification class $\mathcal{H}$ is all possible axis-aligned rectangles. Authors show a configuration $C$ of four points that can be shattered by $\mathcal{H}$ on the left of figure 6.1. And then provide a proof that no $d=5$ points can be shattered by axis-aligned rectangles. Concluding that VC dimension of their $\mathcal{H}$ is four.
Hope these examples help. (BTW, note that, it seems [that Deep Learning has quite bad VC dimension](https://cs.stackexchange.com/questions/75327/why-is-deep-learning-hyped-despite-bad-vc-dimension) but it still works somehow - which is rather puzzling).
Upvotes: 2 <issue_comment>username_2: Trying to explain the idea of VC to some of my colleagues I've discovered quite an intuitive way of laying out the basic idea. Without going through lots of math and notation as I've done in [my other answer](https://ai.stackexchange.com/a/27756/20538).
Imagine a following game between two players $\alpha$ and $\beta$ :
1. First, player $\alpha$ plots $d=4$ points on a piece of paper. She may place the points however she likes.
2. Next, player $\beta$ marks several of the drawn points.
3. Finally, player $\alpha$ should draw a circle such that all the marked points are inside the circle, and all the unmarked points outside. (Points on the boundary are considered "inside".)
The player $\alpha$ wins if she can draw such a circle at step #3. The player $\beta$ wins if making such circle is impossible.
If you try to analyze this game then you'll notice that the player $\beta$ has a winning strategy. For any $d=4$ points on a plane, there is always a subset such that player $\alpha$ is unable to draw a required circle. (I don't want to go into the detailed proof of the strategy - it is straightforward, but cumbersome - I've sketched it in [my other answer](https://ai.stackexchange.com/a/27756/20538).) If we now change the number of points to $d=3$ then the game suddenly becomes winnable by player $\alpha$ - for three points that are not on the same line, any subset can be separated by a circle.
The largest number $d$ at which the game is winnable by player $\alpha$ is called the VC dimension of our classification set. So, in the case of 2D discs (insides of a circle) the VC dimension is 3. If one changes the rules to use rectangles instead of circles, then the maximum number of points winnable by $\alpha$ would be 4 - thus, the VC dimension of rectangular classification sets is 4.
Restoring the mathematical notation, we denote by $\mathcal{X} = \mathbb{R}^2$ our two-dimensional plane. $C$ is the subset of cardinality $|C| = d$ that the player $\alpha$ selects. And $\mathcal{H}$ is a class (a "set-of-sets") of subsets of $\mathcal{X}$ that one should use as classification boundaries. Formally, the statement that the game above is winnable by $\alpha$ can be written as:
$$ \exists\;C\subset\mathcal{X}\;.\;|C|=d \;\wedge\;\forall\; A\subseteq C\; \exists\; B\in\mathcal{H}\;.\; A = B\,\cap\,C $$
The maximal $d$ at which this statement is true would be the VC dimension. (I actually worked backwards from noticing the alternating quantifiers $\exists\,\forall\,\exists$ in the VC definition - which is typical in game playing, so I worked back from the definition to make the game above.)
Upvotes: 2 |
2021/05/12 | 459 | 1,817 | <issue_start>username_0: Newbie here.
I recently read about cognitive architectures (see: <https://en.wikipedia.org/wiki/Cognitive_architecture>). They are supposed to be modeled after the human mind and represent a promising approach towards artificial general intelligence (AGI).
My question is, however, why haven't these cognitive architectures achieved AGI yet? What are the specific limitations and roadblocks that cognitive architectures face?<issue_comment>username_1: This is a philosophical question that does not have a one-off answer.
If I may suggest a quick thought:
Our brains are trained on millions of tasks as we grow up (recognising so many objects, emotions, actions, etc.), so if we were to make one giant neural network and train it on that many tasks, perhaps we could achieve a human-like response.
As for 'consciousness': do any of us really know how that works? Is it due to the neural networks in our brain or something more intangible (that spark of life)? I'm not so sure, but it surely seems more complex than any of the deep learning models I've seen.
Upvotes: 1 <issue_comment>username_2: That article only describes a way of thinking about and modeling the sort of cognitive capabilities that can lead to AGI. The thoughts and models are in no way a clear and certain roadmap to achieving AGI. And, even if they were, there are many implementation hurdles that would still need to be cleared to realize this.
Grady Booch summed this up pretty well in a recent tweet:
<https://twitter.com/Grady_Booch/status/1615284029594697728>
>
> We do not yet - nor do I expect we will anytime in the future - have the proper architecture for the semantics of causality, abductive reasoning, common sense reasoning, theory of mind and of self, or subjective experience.
>
>
Upvotes: 0 |
2021/05/12 | 607 | 2,458 | <issue_start>username_0: I've been learning Python machine-learning using [this project report](https://github.com/daines-analytics/tabular-data-projects/blob/master/py-classification-faulty-steel-plates-take2/py-classification-faulty-steel-plates-take2.ipynb) and the guy who wrote it begins by visualizing his data using various statistical analysis methods: histograms, density plots, box plots, scatter plots, etc.
The problem is that he doesn't explain what this is for. The only detail he provides is that "univariate plots help to understand each attribute" and "multivariate plots help to understand the relationships between attributes."
What would be the reason behind using these plots for ML development? Do they help you to determine which algorithm(s) you should try? If so, how? Can anyone explain the main points or maybe point me to a resource that will help?<issue_comment>username_1: At a basic level, these kinds of low-dimensional plots where you look at one or two variables at a time can help to give you a sense of what types of relationships you might expect to see, such as linear, non-linear, or periodic relationships, which can steer you toward an appropriate family of models. You wouldn't want to use a linear model to predict data that has highly non-linear relationships, for example, nor would you want to use a monotonic non-linear model to predict a periodic function like a sine wave. Knowing about the general distribution of certain variables can also give you a sense of what statistical assumptions might or might not be met - if a model assumes that data is normally distributed, it might not be appropriate if your histograms suggest otherwise. Statistical analysis can help you determine if the underlying assumptions for certain model classes are or are not met.
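As a tiny synthetic illustration of this (numpy only; the quadratic relationship is an arbitrary choice), a single summary number like Pearson correlation can be essentially zero even when a scatter plot would reveal a perfect, but non-linear, relationship:

```python
import numpy as np

# A deterministic but non-linear relationship: y = x**2 on a symmetric grid.
x = np.linspace(-1, 1, 201)
y = x ** 2

# Pearson correlation only measures *linear* association, so it is ~0 here
# even though y is a perfect function of x.
r = np.corrcoef(x, y)[0, 1]
print(f"Pearson r = {r:.6f}")

# A scatter plot, e.g. plt.scatter(x, y), would show the parabola at a glance
# and steer you away from a plain linear model.
```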
Upvotes: 2 [selected_answer]<issue_comment>username_2: In addition to [this answer](https://ai.stackexchange.com/a/27775/2444) and given that you were also looking for a resource, I suggest that you read chapter 1 of the book [An Introduction to Statistical Learning: with Applications in R](https://static1.squarespace.com/static/5ff2adbe3fe4fe33db902812/t/6062a083acbfe82c7195b27d/1617076404560/ISLR%2BSeventh%2BPrinting.pdf#page=11), where you can find multiple examples of these plots and explanations of why they can be useful: to understand the relationship between the features and the target variable, which you want to predict.
Upvotes: 0 |
2021/05/13 | 2,605 | 7,414 | <issue_start>username_0: I am working on a project where I am trying to detect and localize forgeries in images. I am using the CASIA v2 dataset and using Unet model for the task. I have the binary masks of all the images in the CASIA v2 dataset. The metric I am using for the model are F1 score.
The issue with the model is that it is highly overfitting: the validation loss plateaus and then trends upward.[](https://i.stack.imgur.com/43mV9.png)
Batch size is 128 and Learning rate is 0.000001. Image size is 128 x 128.
Updated graph for batch size 16 with the changes mentioned by @spb is as follows:
[](https://i.stack.imgur.com/WXQhd.png)
I have also tried using Learning rate scheduler to decrease the learning rate(starting with high learning rate) on plateaus but that didn't help much.
I am also using the package Albumentations for data augmentation of both the images and their masks. I load the images and the masks, apply the augmentations, save the augmented images and masks in separate arrays, and finally extend the original images and masks with the augmented ones. So technically I have the original plus the augmented images and masks for training the model. The augmentations I am using are:
```
Augment = A.Compose([
    A.VerticalFlip(p=0.5),
    A.RandomRotate90(p=0.5),
    A.HorizontalFlip(p = 0.5)
])
```
I have split the dataset into 70% Training, 20% Validation and 10% for testing.
Here is a snippet of my model. **Updated Code below**
```
def conv2d_block(input_tensor, n_filters, kernel_size = 3, batchnorm = True):
    """Function to add 2 convolutional layers with the parameters passed to it"""
    # first layer
    x = Conv2D(filters = n_filters, kernel_size = (kernel_size, kernel_size),
               kernel_initializer = 'he_normal', padding = 'same')(input_tensor)
    if batchnorm:
        x = BatchNormalization()(x)
    x = Activation('relu')(x)
    # second layer (applied to x, not input_tensor, so the first layer is not skipped)
    x = Conv2D(filters = n_filters, kernel_size = (kernel_size, kernel_size),
               kernel_initializer = 'he_normal', padding = 'same')(x)
    if batchnorm:
        x = BatchNormalization()(x)
    x = Activation('relu')(x)
    return x

def get_unet(input_img, n_filters = 16, dropout = 0.1, batchnorm = True):
    """Function to define the UNET Model"""
    # Contracting Path
    c1 = conv2d_block(input_img, n_filters * 1, kernel_size = 3, batchnorm = batchnorm)
    p1 = MaxPooling2D((2, 2))(c1)
    #p1 = Dropout(dropout)(p1)
    c2 = conv2d_block(p1, n_filters * 2, kernel_size = 3, batchnorm = batchnorm)
    p2 = MaxPooling2D((2, 2))(c2)
    #p2 = Dropout(dropout)(p2)
    c3 = conv2d_block(p2, n_filters * 4, kernel_size = 3, batchnorm = batchnorm)
    p3 = MaxPooling2D((2, 2))(c3)
    #p3 = Dropout(dropout)(p3)
    c4 = conv2d_block(p3, n_filters * 8, kernel_size = 3, batchnorm = batchnorm)
    p4 = MaxPooling2D((2, 2))(c4)
    #p4 = Dropout(dropout)(p4)
    c5 = conv2d_block(p4, n_filters * 16, kernel_size = 3, batchnorm = batchnorm)
    p5 = MaxPooling2D((2, 2))(c5)
    #p5 = Dropout(dropout)(p5)
    c6 = conv2d_block(p5, n_filters = n_filters * 32, kernel_size = 3, batchnorm = batchnorm)

    # Expansive Path
    u7 = Conv2DTranspose(n_filters * 16, (3, 3), strides = (2, 2), padding = 'same')(c6)
    u7 = concatenate([u7, c5])
    u7 = Dropout(dropout)(u7)
    c7 = conv2d_block(u7, n_filters * 16, kernel_size = 3, batchnorm = batchnorm)

    u8 = Conv2DTranspose(n_filters * 8, (3, 3), strides = (2, 2), padding = 'same')(c7)
    u8 = concatenate([u8, c4])
    u8 = Dropout(dropout)(u8)
    c8 = conv2d_block(u8, n_filters * 8, kernel_size = 3, batchnorm = batchnorm)

    u9 = Conv2DTranspose(n_filters * 4, (3, 3), strides = (2, 2), padding = 'same')(c8)
    u9 = concatenate([u9, c3])
    u9 = Dropout(dropout)(u9)
    c9 = conv2d_block(u9, n_filters * 4, kernel_size = 3, batchnorm = batchnorm)

    u10 = Conv2DTranspose(n_filters * 2, (3, 3), strides = (2, 2), padding = 'same')(c9)
    u10 = concatenate([u10, c2])
    u10 = Dropout(dropout)(u10)
    c10 = conv2d_block(u10, n_filters * 2, kernel_size = 3, batchnorm = batchnorm)

    u11 = Conv2DTranspose(n_filters * 1, (3, 3), strides = (2, 2), padding = 'same')(c10)
    u11 = concatenate([u11, c1])
    u11 = Dropout(dropout)(u11)
    c11 = conv2d_block(u11, n_filters * 1, kernel_size = 3, batchnorm = batchnorm)

    outputs = Conv2D(1, (1, 1), activation='sigmoid')(c11)
    model = Model(inputs=[input_img], outputs=[outputs])
    return model
```
Currently I am not using the dropout as it leads to higher validation loss plateaus in my case.
The F1 score and F1 loss I am calculating are as follows
```
def f1(y_true, y_pred):
    y_pred = K.round(y_pred)
    tp = K.sum(K.cast(y_true*y_pred, 'float'), axis=0)
    tn = K.sum(K.cast((1-y_true)*(1-y_pred), 'float'), axis=0)
    fp = K.sum(K.cast((1-y_true)*y_pred, 'float'), axis=0)
    fn = K.sum(K.cast(y_true*(1-y_pred), 'float'), axis=0)

    p = tp / (tp + fp + K.epsilon())
    r = tp / (tp + fn + K.epsilon())

    f1 = 2*p*r / (p+r+K.epsilon())
    f1 = tf.where(tf.is_nan(f1), tf.zeros_like(f1), f1)
    return K.mean(f1)

def f1_loss(y_true, y_pred):
    tp = K.sum(K.cast(y_true*y_pred, 'float'), axis=0)
    tn = K.sum(K.cast((1-y_true)*(1-y_pred), 'float'), axis=0)
    fp = K.sum(K.cast((1-y_true)*y_pred, 'float'), axis=0)
    fn = K.sum(K.cast(y_true*(1-y_pred), 'float'), axis=0)

    p = tp / (tp + fp + K.epsilon())
    r = tp / (tp + fn + K.epsilon())

    f1 = 2*p*r / (p+r+K.epsilon())
    f1 = tf.where(tf.is_nan(f1), tf.zeros_like(f1), f1)
    return 1 - K.mean(f1)
```
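(A flattened plain-numpy version of the same metric can be handy for sanity checks against a hand-computed toy example; the toy vectors below are arbitrary, not from my data:)

```python
import numpy as np

def f1_np(y_true, y_pred, eps=1e-7):
    """Flattened plain-numpy version of the F1 metric above, for sanity checks."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.round(np.asarray(y_pred, dtype=float))
    tp = np.sum(y_true * y_pred)
    fp = np.sum((1 - y_true) * y_pred)
    fn = np.sum(y_true * (1 - y_pred))
    p = tp / (tp + fp + eps)
    r = tp / (tp + fn + eps)
    return 2 * p * r / (p + r + eps)

# Toy case, checkable by hand: tp=1, fp=1, fn=1 -> precision = recall = 0.5
print(f1_np([1, 0, 1, 0], [1, 1, 0, 0]))   # ~0.5
```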
I have also tried using other losses like **focal\_tversky** but have a similar result.
What can be the issue and how can I solve it?
Is it
1. Issue with my data like presence of outliers
2. Model related issue
3. Batch size and Learning rate related issue
4. Or anything else?
Please, your help in this regard is really appreciated, as I really need to solve it soon.
2021/05/16 | 843 | 3,212 | <issue_start>username_0: I created a binary image classification model. The dataset contains about 500K images in each class, with ratio = Train : Validation : Test = 7 : 2 : 1. Total images = 1M
Due to compute constraints, I split my dataset into smaller parts: 5 training subsets, 5 validation subsets, and 1 test subset.
I trained and evaluated my model stage by stage. In the first stage (evaluation), my model's accuracy was 65%. I re-fitted it on the 2nd training subset and the accuracy was 43%. I did the same process with the rest, and my accuracies were: 65%, 43%, 57%, 21%, 30%.
How can I train my model in staged training?
I want to train models with different datasets without reinitializing the weights in every training process.<issue_comment>username_1: You can save weights during training by passing a checkpoint callback to the model.fit() method.
```
# Instantiate your model here
model = create_model()

# Set model configurations here
model.compile(loss=..., optimizer=..., metrics=...)

# Set checkpoint path
checkpoint_path = "model_weights.ckpt"

# Create a callback that saves the model's weights
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_path,
    save_weights_only=True,
    monitor='val_loss',
    mode='min',
    save_best_only=True)

# Train the model with the new callback
model.fit(train_images_1,
          train_labels_1,
          epochs=50,
          batch_size=batch_size,
          callbacks=[cp_callback],
          validation_data=(test_images_1, test_labels_1),
          verbose=0)
```
After finishing training on the 1st dataset, the model weights will be saved in a file called *model\_weights.ckpt*. Before starting training on the next dataset, load the model weights as below
```
# Create a new model instance
model = create_model()

# Set model configurations here
model.compile(loss=..., optimizer=..., metrics=...)

# Set checkpoint path
checkpoint_path = "model_weights.ckpt"

# Load the previously saved weights
model.load_weights(checkpoint_path)

# Create a callback that saves the model's weights
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_path,
    save_weights_only=True,
    monitor='val_loss',
    mode='min',
    save_best_only=True)

# Train the model with the new callback
model.fit(train_images_2,
          train_labels_2,
          epochs=50,
          batch_size=batch_size,
          callbacks=[cp_callback],
          validation_data=(test_images_2, test_labels_2),
          verbose=0)
```
Repeat this for all datasets.
Upvotes: 1 <issue_comment>username_2: If you reinitialize your model weights before training on a new subset, you erase everything the model learned before; is that what you want?
If not, saving your model after each subset and loading it before training on the next subset isn't good practice either, because your model will see one subset of samples many times and then move on to the next ones without ever seeing those samples again.
A better solution is to load batches on the fly. This can be done using [keras generators](https://stanford.edu/%7Eshervine/blog/keras-how-to-generate-data-on-the-fly). It allows you to load a batch and do a gradient descent step without changing the model.fit function.
It will be a bit slower but it will work.
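A minimal framework-agnostic sketch of that idea (the toy subset arrays and batch size are placeholders; in Keras you would typically wrap the same logic in a `tf.keras.utils.Sequence` or pass such a generator to `model.fit`):

```python
import numpy as np

def batch_generator(subsets, batch_size, shuffle=True):
    """Yield (x, y) batches drawn across *all* subsets, epoch after epoch.

    `subsets` is a list of (x_array, y_array) pairs; in practice each pair
    would be loaded lazily from disk (np.load, HDF5, TFRecord, ...).
    """
    while True:                          # loop over epochs forever
        for x, y in subsets:             # real code would load from disk here
            idx = np.arange(len(x))
            if shuffle:
                np.random.shuffle(idx)
            for start in range(0, len(idx), batch_size):
                sel = idx[start:start + batch_size]
                yield x[sel], y[sel]

# Toy stand-ins for the 5 training subsets (10 samples of 4 features each).
subsets = [(np.random.rand(10, 4), np.random.randint(0, 2, size=10))
           for _ in range(5)]
gen = batch_generator(subsets, batch_size=4)
xb, yb = next(gen)
print(xb.shape, yb.shape)   # (4, 4) (4,)
```

This way every gradient step can touch any subset without ever holding the full 1M images in memory.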
Upvotes: 0 |
2021/05/16 | 1,369 | 5,032 | <issue_start>username_0: I'm interested in artificial neural networks (ANN) and I wonder how big ANNs in practical use are, for example, Tesla Autopilot, Google Translate, and others.
The only thing I found about Tesla is [this one](https://www.tesla.com/autopilotAI):
>
> "A full build of Autopilot neural networks involves 48 networks that
> take 70,000 GPU hours to train. Together, they output 1,000 distinct
> tensors (predictions) at each timestep."
>
>
>
It seems like most companies don't publish clear information about their ANN sizes. I really can't find anything detailed on this subject.
Is there any information about the size of big practical/commercial ANNs that includes something like the number of neurons/connections/layers, etc.?
I'm looking for a few *examples* in this scale with more precise information on the size of the neural networks.<issue_comment>username_1: ### NLP Domain
You can easily find such open-source neural networks in NLP applications that have been published by companies like Google. For example, in [BERT models](https://github.com/google-research/bert/blob/master/README.md), you can see the BERT-Base has the following specifications:
>
> BERT-Base, Multilingual Cased: 104 languages, 12-layer, 768-hidden, 12-heads, 110M parameters
>
>
>
You can find more data about other versions of BERT in the same link.
Another example is GPT models, like [GPT-3](https://lambdalabs.com/blog/demystifying-gpt-3):
>
> All GPT-3 models use the same attention-based architecture as their GPT-2 predecessor. The smallest GPT-3 model (125M) has 12 attention layers, each with 12x 64-dimension heads. The largest GPT-3 model (175B) uses 96 attention layers, each with 96x 128-dimension heads.
>
>
>
### Image Processing Domain
Another useful domain for your purpose is image processing tasks such as image classification. Pretrained models such as [VGG, ResNet, and Inception](https://www.analyticsvidhya.com/blog/2020/08/top-4-pre-trained-models-for-image-classification-with-python-code/) are mostly used for image classification tasks in different companies, and you can find their specifications in many places. For example, for VGG-16, we can see the following:
[](https://i.stack.imgur.com/YekFP.png)
### Speech Processing
Another practical domain is Automatic Speech Recognition, or ASR for short. One of the renowned models in this context is DeepSpeech(2) by the Baidu research center. For example, you can find some info such as the number of its parameters and its structure in [this github link](https://gitlab.com/Jaco-Assistant/Scribosermo).
### Summing up
Note that one regular metric to measure the size of neural networks is "**the number of parameters**" of the network that has to be learned in the training phase. Hence, you can compare the size of models even across domains by knowing their number of parameters (instead of going into more detail about the number of hidden layers and their types). That said, sometimes the depth (number of layers) and width (number of neurons in each layer) of the network are very important for the performance and capability of the network.
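As a sketch of what "number of parameters" means mechanically, here is the count for a plain fully-connected network (the layer sizes are an arbitrary example; convolutional layers follow an analogous per-kernel formula):

```python
def dense_param_count(layer_sizes):
    """Trainable parameters of a fully-connected net: weights + biases per layer."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# e.g. a 784 -> 100 -> 10 classifier (MNIST-sized input, arbitrary hidden layer):
# 784*100 + 100 + 100*10 + 10 = 79510 parameters
print(dense_param_count([784, 100, 10]))   # 79510
```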
Upvotes: 3 [selected_answer]<issue_comment>username_2: I hope this helps. Disclaimer: the info is extracted from [Computer Vision at Tesla](https://heartbeat.fritz.ai/computer-vision-at-tesla-cd5e88074376), though additional references may be needed....
* Tesla is using 8 cameras per vehicle fused with radars.
* To detect road lanes, vehicles, pedestrians, etc., Tesla has to run at least 50 neural networks simultaneously.
* The neural networks are trained using PyTorch. Each image (for each
camera I guess) is of dimension (1280, 960, 3). RGB image it seems.
* Regarding the neural network architecture, "**the backbone is a modified [ResNet 50](https://www.kaggle.com/keras/resnet50)**", that is, a convolutional network.
* [The network is **50 layers deep**. The network depth is defined as the largest number of sequential convolutional or fully connected layers on a path from the input layer to the output layer. **In total, ResNet-50 has 177 layers**](https://github.com/matlab-deep-learning/resnet-50). The [ResNet-50 has over 23 million trainable parameters](https://towardsdatascience.com/understanding-and-coding-a-resnet-in-keras-446d7ff84d33)
* Every camera is processed through a single neural network. Then everything is combined into a middle neural network.
Upvotes: 1 <issue_comment>username_3: The size of the model depends on the domain. I am currently working with a model that is used for real time inference on an embedded device. Speed of computation is critical.
The model is a 5-layer CNN with about 700k parameters, and it's about 12 MB on disk.
Upvotes: 0 <issue_comment>username_4: Using benchmarked algorithms or published research papers is a good start. In addition to that, building on the open-sourced BERT- and GPT-2-like architectures is a good starting point.
Upvotes: 0 |
2021/05/16 | 1,946 | 4,952 | <issue_start>username_0: In the book [An Introduction to Statistical Learning](https://www.statlearning.com/), the authors claim (equation 2.3, p. 19, chapter 2)
$$\mathbb{E} \left[ (Y - \hat{Y})^2 \right] = \left(f(X) - \hat{f}(X) \right)^2 + \operatorname{Var} (\epsilon) \label{0}\tag{0},$$
where
* $Y = f(X) + \epsilon$, where $\epsilon \sim \mathcal{N}(0, \sigma)$ and $f$ is the unknown function we want to estimate
* $\hat{Y} = \hat{f}(X)$ is the output of our estimate of $f$, i.e. $\hat{f} \approx f$
They claim that this is easy to prove, but this may not be easy to prove for everyone. So, why is equation \ref{0} true?<issue_comment>username_1: Let me try to show this. The only (non-constant) random variable here is $\epsilon$, while $f(X)$ and $\hat{Y} = \hat{f}(X)$ are constant random variables (so their expectations are equal to their only values).
So, we start with the following expression.
\begin{align}
\mathbb{E} \left[ (Y - \hat{Y})^2 \right] \tag{1}\label{1}
\end{align}
Now, we just apply the distributive property, so \ref{1} becomes
\begin{align}
\mathbb{E} \left[ Y^2 - 2Y \hat{Y} + \hat{Y}^2 \right] \tag{2}\label{2}
\end{align}
Given the linearity of the expectation, we can write \ref{2} as follows
\begin{align}
\mathbb{E} \left[ Y^2 \right] - \mathbb{E} \left[ 2Y \hat{Y} \right] + \mathbb{E} \left[\hat{Y}^2 \right] \tag{3}\label{3}
\end{align}
Given that $\hat{Y} = \hat{f}(X)$ is a constant and that we can take constants out of the expectations, we have
\begin{align}
\mathbb{E} \left[ Y^2 \right] - 2 \hat{Y} \mathbb{E} \left[ Y \right] + \hat{Y}^2 \tag{4}\label{4}
\end{align}
Now, let's replace $Y$ with $f(X) + \epsilon$, to obtain
\begin{align}
\mathbb{E} \left[ \left( f(X) + \epsilon \right)^2 \right] - 2 \hat{Y} \mathbb{E} \left[ f(X) + \epsilon \right] + \hat{Y}^2 \tag{5}\label{5}
\end{align}
Now, in the book, they assume that $\epsilon \sim \mathcal{N}(0, \sigma)$, so $\mathbb{E}\left[ \epsilon \right] = 0$ (i.e. the expected value of $\epsilon$ is just the mean of the Gaussian, which is assumed to be zero). So, \ref{5} becomes
\begin{align}
&\mathbb{E} \left[ \left( f(X) + \epsilon \right)^2 \right] - 2 \hat{Y} \left( \mathbb{E} \left[ f(X) \right] + \mathbb{E} \left[ \epsilon \right] \right) + \hat{Y}^2
= \\
&\mathbb{E} \left[ \left( f(X) + \epsilon \right)^2 \right] - 2 \hat{Y} \left( f(X) + 0 \right) + \hat{Y}^2
= \\
&\mathbb{E} \left[ \left( f(X) + \epsilon \right)^2 \right] - 2 \hat{Y} f(X) + \hat{Y}^2
= \\
&\mathbb{E} \left[ f(X)^2 + 2 f(X) \epsilon + \epsilon^2 \right] - 2 \hat{Y} f(X) + \hat{Y}^2 =
\\
&\mathbb{E} \left[ f(X)^2 \right] + \mathbb{E} \left[ 2 f(X) \epsilon \right] + \mathbb{E} \left[ \epsilon^2 \right] - 2 \hat{Y} f(X) + \hat{Y}^2 = \\
& \mathbb{E} \left[ \epsilon^2 \right] + f(X)^2 - 2 \hat{Y} f(X) + \hat{Y}^2
=
\\
&
\mathbb{E} \left[ \epsilon^2 \right] + \left(f(X) - \hat{Y} \right)^2 \tag{6}\label{6}
\end{align}
Now, note that the [variance of a random variable $Z$](https://en.wikipedia.org/wiki/Variance) is defined as
$$\operatorname {Var} (Z)=\mathbb {E} \left[(Z - \mu_Z )^{2}\right]$$
In our case, $\mu_Z$ is zero, so the variance of $\epsilon$ is $\mathbb{E} \left[ \epsilon^2 \right]$, so \ref{6} becomes
\begin{align}
\operatorname{Var} (\epsilon) + \left(f(X) - \hat{Y} \right)^2 \\
\tag{7}\label{7}
\end{align}
You can also come up with the same result in a different and simpler way, i.e. rewrite $\mathbb{E}\left[ \left( f(X) + \epsilon - \hat f(X) \right)^2 \right]$ as $\mathbb{E}\left[ \left( \left(f(X) - \hat f(X)\right) +\epsilon \right)^2 \right]$, then you apply the distributive property and similar rules that I applied above to derive the same result.
Upvotes: 1 <issue_comment>username_2: Let's say we have a constant $a$ and $\epsilon \sim \mathcal{N}(0,\sigma)$; then:
$$\mathbb{E}\left[(a+\epsilon)^2\right] = \mathbb{E}\left[a^2\right] + 2 \mathbb{E}\left[a\right]\mathbb{E}\left[\epsilon\right] + \mathbb{E}\left[\epsilon^2\right] $$
Expectations of constants are just the constants: $\mathbb{E}[a] = a$ and $\mathbb{E}[a^2] = a^2$
The mean of $\epsilon$ is zero $\mathbb{E}[\epsilon] = 0$. And the expectation of $\epsilon^2$ is its variance:
$$ \mathop{\mathrm{Var}}(\epsilon) = \mathbb{E}[\epsilon^2] - \mathbb{E}[\epsilon]^2 = \mathbb{E}[\epsilon^2]$$
Substituting, we get an expression for the original expectation:
$$\mathbb{E}\left[(a+\epsilon)^2\right] = a^2 + \mathop{\mathrm{Var}}(\epsilon) \tag{\*}$$
Getting to the expectation in the book, we first substitute the values for $Y$ and $\hat{Y}$:
$$\mathbb{E}\left[(Y - \hat{Y})^2\right] = \mathbb{E}\left[((f(X) - \hat{f}(X)) + \epsilon)^2\right]$$
In the book it is assumed that $X$, $f$ and $\hat{f}$ are constant. So we can use the expression (\*) with the constant being $a = f(X) - \hat{f}(X)$:
$$\mathbb{E}\left[(Y - \hat{Y})^2\right] = (f(X) - \hat{f}(X))^2 + \mathop{\mathrm{Var}}(\epsilon)$$
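If you want extra reassurance, the identity can also be checked with a quick Monte-Carlo simulation (the constants standing in for $f(X)$, $\hat{f}(X)$ and $\sigma$ below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
f_x, f_hat_x, sigma = 3.0, 2.5, 1.0           # arbitrary constants

eps = rng.normal(0.0, sigma, size=1_000_000)
y = f_x + eps                                  # Y = f(X) + eps
y_hat = f_hat_x                                # Y_hat = f_hat(X), a constant

mse_estimate = np.mean((y - y_hat) ** 2)
theory = (f_x - f_hat_x) ** 2 + sigma ** 2     # 0.25 + 1 = 1.25
print(mse_estimate, theory)                    # estimate is ~1.25
```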
Upvotes: 3 [selected_answer] |
2021/05/18 | 801 | 3,540 | <issue_start>username_0: I have seen this question asked primarily in the context of continuous action spaces.
I have a large action space (~2-4k discrete actions) for my custom environment that I cannot reduce down further.
I am currently trying DQN approaches, but I was wondering: given the large action space, are policy gradient methods more appropriate, **and** are they appropriate for action spaces that are discrete, as in my scenario above? I have seen answers to this question with regard to large continuous action spaces.
Finally - I imagine there isn't a simple answer to this but: Does this effectively mean DQN will **not** work?<issue_comment>username_1: I don't think that (at least from a practical standpoint), there is much difference between continuous action space and discrete action space with >2k discrete actions. Quoting the [*"Continuous control with Deep RL"*](https://arxiv.org/abs/1509.02971) paper - which I'd recommend as a starting point for your investigation:
>
> An obvious approach to adapting deep reinforcement learning methods such as DQN to continuous
> domains is to simply discretize the action space. ... Such large action spaces are difficult to explore efficiently, and thus successfully training
> DQN-like networks in this context is likely intractable. Additionally, naive discretization of action spaces needlessly throws away information about the structure of the action domain, which may be essential for solving many problems.
>
>
>
The last sentence in the quote above is the most important point for dealing with your problem. The fundamental issue is inability to efficiently explore such a large action space - so the idea is to use its structure. I'm sure that your >2k discrete action set has a certain structure on it. Like some actions might be "closer" to others. If so, then you (1) can infer some information about "neighbor" actions even if you never took them (2) do some exploration by adding noise to your policy-preferred action.
The Actor-Critic class of algorithms matches perfectly the steps (1) and (2) above. The Critic Q-value network learns about your state-action space and the Actor policy network returns actions that you could smear.
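To make steps (1)–(2) concrete, here is a minimal sketch of one way to exploit action structure — embedding the discrete actions and letting the actor output a continuous "proto-action" that is snapped to nearby discrete actions, as in the Wolpertinger architecture of Dulac-Arnold et al. The embeddings, the critic, and the actor output below are random placeholders, not a working agent:

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, dim, k = 2000, 8, 10

# Placeholder action embeddings -- in practice these would be learned
# or hand-designed to reflect the structure of the action set.
action_embeddings = rng.normal(size=(n_actions, dim))

def q_value(state, action_id):
    # Placeholder critic; in practice a trained Q-network.
    return float(action_embeddings[action_id] @ state)

def select_action(state, proto_action, k):
    # 1) find the k discrete actions nearest to the actor's continuous
    #    "proto-action" in embedding space
    dists = np.linalg.norm(action_embeddings - proto_action, axis=1)
    candidates = np.argsort(dists)[:k]
    # 2) let the critic choose the best of those k candidates
    return max(candidates, key=lambda a: q_value(state, a))

state = rng.normal(size=dim)
proto = rng.normal(size=dim)      # would come from the actor network
action = select_action(state, proto, k)
print(action)
```

This turns the intractable argmax over thousands of actions into a cheap nearest-neighbor lookup plus `k` critic evaluations, and exploration noise can be added directly to the proto-action.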
>
> Are policy gradient methods good for large discrete action spaces?
>
>
>
The Actor-Critic class of RL algorithms is a subclass of the Policy Gradient algorithms. So, the answer is yes - it looks like going that way is your best shot at making progress.
>
> Does this effectively mean DQN will **not** work?
>
>
>
I don't think that there is a strict "it will **not** work" statement. The problem is that it just would be extremely difficult to make the DQN training stable "by hand". *In principle* you might be able to construct a neural architecture that captures the state-action space structure. And then figure out how to perform efficient exploration. And then ensure that nothing explodes anywhere. But that's exactly what DDPG and then TRPO/PPO approaches would do for you.
Upvotes: 4 [selected_answer]<issue_comment>username_2: A side note for other people who have similar questions. A large number of actions is also problematic for the policy network. Basically, the policy network has an output for every action (one-hot encoding), and computing softmax over a large number of classes (more than 1K approximately) is not viable. One can use other activation functions or techniques that make a network more generalizable for such a large number of classes.
Upvotes: -1
<issue_start>username_0: Let's start with a typical definition of the VC dimension (as described in [this book](https://mitpress.ublish.com/ereader/7093/?preview=#page/36))
>
> **Definition $3.10$ (VC-dimension)** The $V C$ -dimension of a hypothesis set $\mathcal{H}$ is the size of the largest set that can be shattered by $\mathcal{H}$ :
> $$
> \operatorname{VCdim}(\mathcal{H})= \max \left\{m: \Pi\_{\mathcal{H}}(m)=2^{m}\right\}
> $$
>
>
>
So, if there **exists** *some* set of size $d$ that $\mathcal{H}$ can shatter and it cannot shatter any set of size $d+1$, then the $\operatorname{VCdim}(\mathcal{H}) = d$.
Now, my question is: why would we be just interested in the **existence** of **some** set of size $d$ and not **all** sets of size $d$?
For instance, if you consider one of the typical examples that are used to illustrate the concept of the VC dimension, i.e. $\mathcal{H}$ is the set of all rectangles, then we can show that $\operatorname{VCdim}(\mathcal{H}) = d = 4$, given that there's a configuration of $d=4$ points that, for all possible labellings of those points, there's a hypothesis in $\mathcal{H}$ that correctly classifies those points. However, we can also easily show that, if the 4 points are collinear, there's some labelling of them (i.e. the 1st and 3rd are of colour $A$, while the 2nd and 4th are of colour $B \neq A$) that a rectangle cannot classify correctly.
So, the class of all rectangles can shatter some sets of points, but not all, so we would need another class of hypotheses to classify all sets of four points correctly. The VC dimension does not seem to provide any intuition on which set of classes would do the trick.
So, do you know why the VC dimension wasn't defined for all configurations of $d$ points? Was this just a need of Vapnik and Chervonenkis for the work they were developing (VC theory), or could they have defined it differently? So, if you know the rationale behind this specific definition, feel free to provide an answer. References to relevant work by Vapnik and Chervonenkis are also appreciated.<issue_comment>username_1: I would say that the meaning of the VC dimension is at least the possibility of implementing any function on the given number of points for *some* configuration, not the ability to express any function on *every* set of $n$ points.
Yes, you are right that this definition, unfortunately, is not very useful in practice.
Say, the family of functions $\text{sign}(\sin(ax))$ has infinite VC dimension: there exists an infinite sequence $x\_n$ such that, by tuning the parameter $a$, one can implement any labeling $0 1 0 \ldots 1$ on these points. However, this doesn't make this class of functions an outstanding ML algorithm.
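The classic construction behind that claim can be checked numerically. A hedged sketch (the points $x\_i = 10^{-i}$ and the formula for $a$ follow the standard textbook construction; `m = 4` is just an example):

```python
import numpy as np
from itertools import product

def a_for(labels):
    # Textbook construction: for points x_i = 10^(-i), this parameter
    # realizes the labeling y_i via sign(sin(a * x_i)).
    c = [(1 - y) // 2 for y in labels]               # 0 for y=+1, 1 for y=-1
    return np.pi * (1 + sum(ci * 10 ** (i + 1) for i, ci in enumerate(c)))

m = 4
x = np.array([10.0 ** -(i + 1) for i in range(m)])

shattered = all(
    np.array_equal(np.sign(np.sin(a_for(labels) * x)), np.array(labels))
    for labels in product([-1, 1], repeat=m)
)
print(shattered)
```

Every one of the $2^4$ labelings of the four points is realized, and in exact arithmetic the same construction works for any $m$.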
Upvotes: 0 <issue_comment>username_2: The measure that you are talking about actually has a name. It is called the "Popper dimension" -- it was introduced by [<NAME>](https://en.wikipedia.org/wiki/Karl_Popper) in his "[Logic of scientific discovery](https://en.wikipedia.org/wiki/The_Logic_of_Scientific_Discovery)".
Popper's idea of *falsifiability* was, as Vladimir Vapnik himself admits, the inspiration behind their work on the VC dimension. The VC dimension of the hypothesis set $\mathcal{H}$ measures the "complexity" of the set by looking at how easy would it be to falsify it. The hypothesis sets with higher VC dimension should be harder to falsify. With limiting case of infinite VC dimension which is unfalsifiable for almost any data.
VC dimension and Popper dimension are different in exactly the way you are describing. This quote from the [philosophy paper](https://link.springer.com/article/10.1007%2Fs10838-009-9091-3) that Vapnik co-authored states it rather succinctly:
>
> The VC-dimension is the largest number of points one can shatter, the Popper dimension is one less than the smallest number of points one can not shatter.
>
>
>
This paper goes on trying to reconcile the two definitions but I didn't find their exposition satisfactory (or even meaningful). I've found much more harsh take on this is in the ["Learning from data"](https://dl.acm.org/doi/book/10.5555/1296167) book (second edition, Section 4.7) :
>
> Now we can contrast the VC dimension and Popper’s dimension, and conclude that Popper’s definition is not meaningful, as it does not lead to any useful conditions for generalization. In fact, for linear estimators the Popper’s dimension is at most 2, regardless of the problem dimensionality, as a set of hyperplanes cannot shatter three collinear points.
>
>
>
Which looks fair to me.
Upvotes: 1
<issue_start>username_0: I am currently working on a small project where I am trying to automate some stuff at home. I am building a model capable of identifying my face with OpenCV. This will be a live feed.
I am making the project's estimations and have a really low budget. Therefore I am trying to identify what could be the minimum quality video feed I can pass to my algorithm to identify any face. For now I am just trying to identify mine.
I understand facial recognition works primarily on the unique pattern that could be found in the face. What is the minimum video resolution I need to identify anyone with facial recognition?<issue_comment>username_1: On page 2 of Axis' web page [Identification and Recognition](https://www.axis.com/learning/web-articles/identification-and-recognition/index) there is an estimate of the minimum number of pixels needed for identification, recognition and detection.
[](https://i.stack.imgur.com/bGFQS.jpg)
Upvotes: 2 <issue_comment>username_2: This is a pretty standard minimum "quality" (better said resolution in pixels between the eyes) needed for a facial recognition system:
>
> [Ensure that the image contains a frontal view of the face, good
> lighting](https://www.ibm.com/docs/en/iva/2.0.0?topic=recognition-minimum-requirements-face), and **at least 80 pixels between the eyes**.
>
>
> [the bare minimum to identify a human face](https://www.sciencedirect.com/topics/computer-science/facial-recognition) would be 25 to 75 pixels
> just between the eyes
>
>
>
In the end it comes down to a detailed study of the camera location, distance, light, etc...
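To translate the pixels-between-the-eyes requirement into a camera setup, a back-of-the-envelope pinhole-camera sketch can help. The 6.3 cm average interpupillary distance and the example resolution/field-of-view values are assumptions, not requirements:

```python
import math

def pixels_between_eyes(h_resolution_px, h_fov_deg, distance_m, ipd_m=0.063):
    # Width of the scene covered by the sensor at this distance (pinhole model),
    # then the fraction of it spanned by the interpupillary distance (IPD).
    scene_width_m = 2 * distance_m * math.tan(math.radians(h_fov_deg) / 2)
    return h_resolution_px * ipd_m / scene_width_m

# e.g. a 1920-px-wide feed with a 70-degree horizontal field of view:
for d in (0.5, 1.0, 2.0, 3.0):
    print(f"{d} m -> {pixels_between_eyes(1920, 70, d):.0f} px between the eyes")
```

With those example numbers, the 80-px threshold quoted above is met only at roughly a metre or closer.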
Upvotes: 2
2021/05/22 | 1,862 | 8,446 | <issue_start>username_0: I have trouble understanding the meaning of partially observable environments. Here's my doubt.
According to what I understand, the state of the environment is what precisely determines the next state and reward for any particular action taken. So, in a partially observable environment, you don't get to see the full environment state.
So, now, consider the game of chess. Here, we are the agent and our opponent is the environment. Even here we don't know what move the opponent is going to take. So, we don't know the next state and reward we are going to get. Also, what we can see can't precisely define what is going to happen next. Then why do we call chess a fully observable game?
I feel I am wrong about the definition of an environment state or the definition of fully observable, partially observable. Kindly correct me.<issue_comment>username_1: First, note that the current state does **not** determine the next state. What determines the next state are the *dynamics* of the environment, which, in the context of reinforcement learning and, in particular, MDPs, are encoded in the probability distribution $p(s', r \mid s, a)$. So, if the agent is in a certain state $s$, it could end up in another state $s'$, but this is not only determined by being just in $s$, but also by $a$ (the action that you take in $s$) and $p$ (the dynamics of the environment).
Now, in their 3rd edition of the AIMA book, Russell and Norvig define *fully observable* environments as follows.
>
> **Fully observable vs. partially observable**: *If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is **fully observable***. A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action; relevance, in turn, depends on the performance measure. Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world. An environment might be **partially observable** because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data—for example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking. If the agent has no sensors at all then the environment is unobservable.
>
>
>
This definition is also the common definition used in reinforcement learning. So, to determine whether an environment is fully or partially observable, you need to determine whether you have or not full access to a (Markovian) state or what constitutes a state in your case. You can consider chess fully observable because you have access to the configuration of the board, so, in theory, you can take optimal action by considering all possible moves of the opponent (but, of course, in practice, not even AlphaZero does this). See figure 2.6, p. 45, of the AIMA book (3rd edition) for more examples of fully and partially observable environments.
A **partially observable MDP (POMDP)** is a mathematical framework that can be used to model partially observable environments, where you maintain a probability distribution over the possible current (or next) state that the agent may be in.
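That "probability distribution over the possible current state" is usually called the *belief state*. A minimal sketch of the standard Bayesian belief update for a toy two-state POMDP (the transition matrix `T` and observation matrix `O` are made-up numbers):

```python
import numpy as np

# Toy 2-state POMDP with a single action, for brevity.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # T[s, s'] = P(s' | s)
O = np.array([[0.7, 0.3],
              [0.1, 0.9]])          # O[s', o] = P(o | s')

def belief_update(b, o):
    # b'(s') is proportional to O(o | s') * sum_s T(s, s') * b(s)
    b_new = O[:, o] * (b @ T)
    return b_new / b_new.sum()

b = np.array([0.5, 0.5])            # uniform prior over states
b = belief_update(b, o=1)           # after observing o = 1
print(b)
```

The agent then acts on the belief `b` rather than on a (hidden) true state.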
Upvotes: 3 [selected_answer]<issue_comment>username_2: A partially observable environment means it is from the agent's perspective that the agent observes the environment partially. At every time step, the agent takes action based on this partial observation. Based on the agent's action, the state of the environment changes, but the agent may not know all the changes.
Upvotes: 0 <issue_comment>username_3: You are correct in the question that in RL terms chess a game of chess where the agent is one player, and the other player has an unknown state is a partially observable environment. Chess played like this is *not* a fully observable environment.
I did not use the term "fully observable game" or "fully observable system" above , because that is not a reinforcement learning term. You may also read "game of perfect information" which is similar - it means there are no important hidden values in the state of *the game* which may impact optimal play. This is a different concern to understanding the state of your opponent.
Here is a counter-example showing that games of perfect information are *not* fully observable systems when you have an opponent with an unknown strategy:
* Optimal play in tic tac toe leads to a forced draw.
* Let's define a reward signal from the agent's perspective of +1 for a win, 0 for a draw, and -1 for a loss.
* If the agent's opponent always plays optimally, then a RL agent will learn to counter that optimal play and also play optimally. All action choices will have an expected return of 0 or -1, and the agent will choose the 0 options when acting greedily.
* If the agent's opponent can make a mistake that allows the agent to win, then there will be a trajectory through the game with a return of 1, or perhaps some other positive value in cases where the mistake is only made according to random chance.
* The value of states in the game therefore depends on the opponent's strategy.
* The opponent's strategy is however not observable - it is unknown and not encoded into the board state.
This should match your intuition when asking the question.
Why then, do many two player game-playing reinforcement agents for games like chess perform well, without using POMDPs?
This is because game theory on these environments supports the concept of "perfect play", and agents that assume their opponent will also attempt to play optimally - without mistakes - will usually do well. Game theory analyses choices leading to forms of [the minimax theory](https://en.wikipedia.org/wiki/Minimax_theorem) - making a choice that your opponent is least able to exploit.
That does mean that such an agent may in fact play sub-optimally against any given opponent. For example, they could potentially turn some losing or draw situations into a win, but have little or no capability to do so unless trained against that kind of opponent. Also, playing like this may be a large risk against other opponents, it may involve playing sub-optimally at some earlier stage.
I have observed a related issue in [Kaggle's Connect X competition](https://www.kaggle.com/c/connectx/leaderboard). Connect 4 is a solved game where player one can force a win, and the best agents are all perfect players. However, they are not all equal. The best performers tweak their agent's choices for player two, to force the highest number of wins against other agents who have not coded a perfect player one. Different kinds of learning strategy lead to different imperfections, and the top of the leaderboard is occupied by the current best perfect agent that *also* manages to exploit the population of near-perfect agents below it in the rankings. This difference in the top-ranking agents is only possible due to the partially-observable nature of the Connect 4 game played against agents with unknown policies.
>
> What exactly are partially observable environments?
>
>
>
They are environments where in at least some states, the agent does not have access to information that affects the distribution of next state or reward.
Chess played against an opponent where you have a model of their behaviour - i.e. their policy - is fully observable to the agent. This is implicitly assumed by self-play agents and systems, and can work well in practice.
Chess played against an opponent without a model of their behaviour is partially observable. In theory, you could attempt to build a system using a partially observable MDP model (POMDP) to account for this, in an attempt to try and force an opponent into states where they are more likely to make a decision that is good for the agent. However, simply playing optimally as possible in response to all plays by the opponent - i.e. assuming their policy is the same near optimal one as yours even after observing their mistake - is more usual in RL.
The original Alpha Go actually had a separate policy network for its own choices and modelling those of humans. This was selected experimentally as performing slightly better than assuming human opponents used the same policy as the self-play agent.
Upvotes: 2
<issue_start>username_0: I'm attempting exercise 13.1 in the Sutton and Barto textbook. It asks for an optimal probability for selecting action right in the short corridor scenario (see first 6 lines of the image below for the scenario).
**Exercise 13.1:** Use your knowledge of the gridworld and its dynamics to determine an exact symbolic expression for the optimal probability of selecting the right action in Example 13.1.
**My attempt:**
Letting $p$ denote the probability of choosing right, I understand that using the Bellman equations, we can solve for the value of $s\_1, s\_2, s\_3$ where the states are numbered from left to right in terms of $p$. We have $v(s\_1) = \frac{2-p}{p-1}$, $v(s\_2) = \frac{1}{(p-1)p}$, $v(s\_3) = -\frac{p+1}{p}$.
I can see how we can find the max of each of these functions to get the optimal policy, given the state we're currently in.
However, how do you find the optimal policy generally (irrespective of starting state)? I found solutions here, which magically arrive at $\frac{p^2-2p+2}{p(1-p)}$. Can someone explain this part?
<https://github.com/brynhayder/reinforcement_learning_an_introduction/blob/master/exercises/exercises.pdf>
[](https://i.stack.imgur.com/sluM6.png)<issue_comment>username_1: There are problems with both the approach and the expressions that you have. I don't want to just give the correct solution, though, that's an exercise for you to go through and learn from your own experience trying to accomplish it. Instead, let me illustrate that your expressions for $v(s\_i)$ are wrong. To do that we'll just do a Monte-Carlo estimate for a range of values of $p$ and compare to your expression.
Here's the Python code that runs a single playout starting from state `state` and following the policy with probability `p` of choosing "right". It returns the collected reward:
```
import numpy as np
def playout(state, p):
reward = 0
while state != 0:
action_right = np.random.rand() < p
move_right = action_right if state != 2 else (not action_right)
state = state - 1 if move_right else state + 1
state = 3 if state > 3 else state
reward -= 1
return reward
```
Then we make a simulation code that runs the `playout` multiple times, collects the reward counts, and returns the average reward (so, essentially estimates $v^\pi(s\_i)$):
```
from collections import defaultdict
def simulate(state, p, n):
rewards = defaultdict(lambda : 0, {})
for _ in range(n):
rewards[playout(state,p)] += 1
results = np.array([[a,b] for a,b in rewards.items()]).T
reward , nplayouts = results[0] , results[1]
value = (reward * nplayouts).sum() / nplayouts.sum()
return value
```
Finally, I make a grid in `p` and run each playout 10000 times:
```
p = np.linspace(0,1,51)[1:-1]
v3 = [simulate(3,p,10000) for p in p]
v2 = [simulate(2,p,10000) for p in p]
v1 = [simulate(1,p,10000) for p in p]
```
I've plotted the resulting value estimates for each state. Together with your expression for them (blue curves). And the correct expression (red curve) that I've obtained by actually writing down and solving the equations:
[](https://i.stack.imgur.com/eV1Rc.png)
As you can see, the expressions you've presented are all too far off from the results that the simulation returns. More than that - the asymptotic behavior of your solutions at $p\to0$ and $p\to1$ doesn't make much sense.
I've obtained the expressions for the red curves above by solving the system for $v(s\_i)$ on my own account - without relying on weird and wrong solutions that I googled on the internet. Which I'd recommend you do as well.
Finally, the question of the exercise is to find an optimal $p$ for a policy that starts at $s\_3$ - not "*irrespective of starting state*" as you've thought is should be. Unlike your expression the correct expression for $v(s\_3)$ has a maximum, which can be found analytically and it is
$$\max\_p v(s\_3) = -6-4\sqrt2 \simeq -11.6$$
$$ \text{at}\quad p = ??? \simeq 0.59$$
Upvotes: 2 <issue_comment>username_2: Here is a simple way to solve the problem.
With probability `p` of moving right in normal grids, we can write down the transition matrix.
$$
Q=\begin{bmatrix} 1-p & p & 0 \\
p & 0 & 1-p \\
0 & 1-p & 0 \end{bmatrix}
$$
From this answer we have the method to calculate the expected steps between two states.
<https://math.stackexchange.com/questions/691494/expected-number-of-steps-between-states-in-a-markov-chain>
which is the sum of the first row of the matrix $(I-Q)^{-1}$:
$$
M=(I-Q)^{-1}=\begin{bmatrix} \frac{2}{p}+\frac{1}{1-p} & \frac{1}{p}+\frac{1}{1-p} & \frac{1}{p} \\
\frac{1}{p}+\frac{1}{1-p} & \frac{1}{p}+\frac{1}{1-p} & \frac{1}{p} \\
\frac{1}{p} & \frac{1}{p} & \frac{1}{p} \end{bmatrix}
$$
So the expectation of the number of steps is
$$
E[t]=\frac{4}{p}+\frac{2}{1-p}
$$
And since the reward is -1 per step
$$J(p)=-E[t]=-\frac{4}{p}-\frac{2}{1-p}$$
$$
\frac{dJ}{dp}=\frac{4}{p^2}-\frac{2}{(1-p)^2}=0
$$
$$ 2p^2=4(1-p)^2 $$
$$ p^2-4p+2=0 $$
We then have solutions $p=2\pm\sqrt{2}$, and since $p$ is a probability in $[0,1]$
$$ \hat{p} = 2-\sqrt{2} \approx 0.586 $$
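The closed form above is easy to sanity-check numerically — a small sketch:

```python
import numpy as np

def expected_steps(p):
    # Transient part of the transition matrix from the answer above.
    Q = np.array([[1 - p, p,     0    ],
                  [p,     0,     1 - p],
                  [0,     1 - p, 0    ]])
    M = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix (I - Q)^-1
    return M[0].sum()                  # expected steps from the start state

p_hat = 2 - np.sqrt(2)
print(expected_steps(p_hat))           # matches 4/p + 2/(1-p) = 6 + 4*sqrt(2)
```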
Upvotes: 1 <issue_comment>username_3: The other answers are cool; here I give a straightforward proof, which might make the whole process clearer.
The core idea here is that each state can roll back to the previous state (the state before State S is itself), and each roll-back adds the expected number of steps of that previous state again; the total number of steps is therefore expressed as an infinite series.
Assuming that the action Left is selected in the first $t-1$ steps, and the action Right is selected in the $t$-th step, a total of $t$ steps are required to reach State 2, where $ t \in [1, \infty) $:
$ E\_{S\rightarrow 2}=1\cdot p^1+2\cdot p^1 (1-p)^1+3\cdot p^1 (1-p)^2+\cdots=p\sum\_{t=1}^{\infty}t(1-p)^{t-1} $
$ =\frac{p}{1-p}\sum\_{t=1}^{\infty}t(1-p)^t=\frac{p}{1-p}\cdot\frac{1-p}{p^2} = \frac{1}{p} $
The transition from State 2 to State 3 splits into two cases, namely 2→3 and 2→S→2→3; when rolling back from 2 to S, it is necessary to recursively account for the expected number of steps from S back to 2, $ E\_{S\rightarrow 2} $, where $t$ is the total number of times the agent returns to state S:
$ E\_{2\rightarrow 3}=(1-p)+(2+E\_{S\rightarrow2})p(1-p)+(3+2E\_{S\rightarrow2})p^2(1-p)+\cdots $
$ =(1-p)+(1-p)\sum\_{t=1}^{\infty}\left[1+t(1+E\_{S\rightarrow2})\right]p^t$
$ =(1-p)+p+\frac{1-p^2}{p}\frac{p}{(1-p)^2} = 1+\frac{1+p}{1-p} = \frac{2}{1-p}$
$ E\_{3\rightarrow G} $ can be obtained in the same way:
$ E\_{3\rightarrow G}=p+(2+E\_{2\rightarrow3})p(1-p)+(3+2E\_{2\rightarrow3})p(1-p)^2+\cdots $
$ =p+p\sum\_{t=1}^{\infty}\left[1+t(1+E\_{2\rightarrow3})\right](1-p)^t $
$ = p+ p \left[ \frac{1-p}{p} + \frac{3-p}{1-p} \frac{1-p}{p^2} \right] = p+(1-p) + \frac{3-p}{p} = \frac{3}{p} $
Above we have calculated the expected number of steps for the three state transitions $ E\_{S\rightarrow 2}, E\_{2\rightarrow 3}, E\_{3\rightarrow G} $, so the total expected value is:
$ E = E\_{S\rightarrow 2} + E\_{2\rightarrow 3} + E\_{3\rightarrow G} = \frac{4}{p} + \frac{2}{1-p} $
To minimize the expected number of steps (equivalently, to maximize the expected return), set the derivative to zero:
$ \left[\frac{4}{p} + \frac{2}{1-p}\right]^{\prime} = -\frac{4}{p^2}+\frac{2}{(1-p)^2} = 0 $
This gives the solution satisfying $p \in [0, 1]$: $ p=2-\sqrt{2}\approx 0.5858, E=6+4\sqrt{2} \approx 11.6568 $.
Upvotes: 2 <issue_comment>username_4: As Alex mentioned, this is a typical mean time to absorption type of problem in a Markov chain, except each step incurs a reward of $-1$ (including the final transition into the goal). So, yet another way to solve this problem is to set up a system of equations. Let $v\_1, v\_2, v\_3$ denote the mean reward starting from states 1, 2, and 3, respectively. Then we have:
$$
\begin{cases}
v\_1 =& (1-p)v\_1 + pv\_2 -1 \\
v\_2 =& (1-p)v\_3 + pv\_1 -1 \\
v\_3 =& (1-p)v\_2 -1
\end{cases}
$$
Solving this gives $v\_1 = \frac{2p-4}{p(1-p)} = -\left(\frac{4}{p}+\frac{2}{1-p}\right)$, in agreement with the other answers. Now, taking the derivative and setting it to 0 yields $p^\* = 2-\sqrt{2}$
Upvotes: 1
<issue_start>username_0: I was designing a multi-speaker identification model, so I searched for some metrics that one may use. I found two metrics:
1. EER (equal error rate)
2. DCF (detection cost function)
What is the difference between them? Is one better than the other for my model?
2021/05/24 | 988 | 3,891 | <issue_start>username_0: In music information retrieval, one usually converts an audio signal into some kind "sequence of frequency-vectors", such as STFT or Mel-spectrogram.
I'm wondering if it is a good idea to use the transformer architecture in a self-supervised manner -- such as auto-regressive models, or BERT in NLP -- to obtain a "smarter" representation of the music than the spectrogram itself. Such smart pretrained representation could be used for further downstream tasks.
From my quick google search, I found several papers which do something similar, but -- to my surprise -- all use some kind of symbolic/discrete music representation such as scores. (For instance [here](https://arxiv.org/pdf/1809.04281.pdf) or [here](https://arxiv.org/pdf/1912.05537.pdf)).
My question is this:
>
> Is it realistic to train such an unsupervised model directly on the
> Mel spectrogram?
>
>
>
The loss function would not be "log softmax of next word probability", but some kind of l2-distance between "predicted vector of spectra" and "observed vector of spectra", in the next time step.
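Concretely, such an objective could look like this toy sketch (pure Python; in practice the frames would be mel-spectrogram columns and `predicted_frames` would come from the model — here a trivial "persistence" baseline stands in for it):

```python
# Squared (l2) error between the model's prediction for the next
# spectrogram frame and the frame actually observed one step later.

def l2_next_frame_loss(predicted_frames, observed_frames):
    """Mean squared error between frame t's prediction and frame t+1."""
    total, count = 0.0, 0
    for pred, obs in zip(predicted_frames, observed_frames[1:]):
        total += sum((p - o) ** 2 for p, o in zip(pred, obs))
        count += len(obs)
    return total / count

observed = [[0.0, 1.0, 0.0, 0.0],   # 3 frames of a 4-bin "spectrogram"
            [0.0, 0.5, 0.5, 0.0],
            [0.0, 0.0, 1.0, 0.0]]
predicted = observed[:-1]           # "predict no change" baseline
print(l2_next_frame_loss(predicted, observed))  # → 0.125
```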
Did someone try it?<issue_comment>username_1: The reason most music-generation models use discrete representations is because the long-term structures of music are very challenging to model. Note that the MIDI data in [MAESTRO](https://magenta.tensorflow.org/datasets/maestro) (used in the two papers you linked) encodes performances, not scores, so they include timing and accents of real performers--but are still sequences of discrete events, not audio.
There's been some work on learning discrete representations directly from audio, such as with [vector-quantized variational autoencoders (VQ-VAEs)](https://arxiv.org/abs/1711.00937). Typically an autoregressive model is trained on top of the learned representation; [Jukebox](https://arxiv.org/abs/2005.00341) used a transformer for that. By the way, I'd highly recommend reading through the "related work" section of the Jukebox paper for an overview of work on the audio/speech synthesis task.
[wav2vec](https://ai.facebook.com/research/publications/unsupervised-speech-recognition) is probably closest to what you're describing. They train a transformer on raw audio, self-supervised, in order to learn good representations of human speech for the speech-to-text task.
As far as training directly on spectrograms, there's [MelNet](https://arxiv.org/abs/1906.01083), a somewhat exotic RNN trained for a variety of audio synthesis tasks, including music.
Hope this helps!
Upvotes: 3 [selected_answer]<issue_comment>username_2: These papers are also very close to what I meant in the question (too long for a comment).
The following references come mostly from work on speech recognition.
* [Mockingjay](https://arxiv.org/abs/1910.12638) In this work, they use a BERT-like architecture that is fed the Mel-spectrogram, with some audio segments "masked".
+ The model is asked to reconstruct the masked parts. To avoid the model exploiting the local smoothness of audio data, they always mask several consecutive frames, so that reasonably long segments are masked.
+ They evaluate on downstream "Phoneme classification tasks" using the learned features and show that these learned features are stronger than raw spectrogram, in particular if little training data is available.
* [Audio Albert](https://arxiv.org/abs/2005.08575) Same story, but they use shared weights in the transformer layers. This significantly reduces memory and computational requirements and it is shown that results are comparable with Mockingjay (at least for phoneme-classification tasks).
* [Tera](https://arxiv.org/abs/2007.06028): another BERT variant where, instead of masking, they use various "alterations" of certain audio segments.
* This and many more references are within [this project with code](https://github.com/s3prl/s3prl).
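The consecutive-frame masking idea (as in Mockingjay) can be sketched in a few lines; frame counts and span are illustrative:

```python
import random

# Zero out a run of adjacent spectrogram frames so the model cannot exploit
# local smoothness; returns the masked copy and the masked positions.

def mask_consecutive(spec, span=3, rng=random):
    start = rng.randrange(0, len(spec) - span + 1)
    masked = [frame[:] for frame in spec]  # copy; don't mutate the input
    for t in range(start, start + span):
        masked[t] = [0.0] * len(masked[t])
    return masked, list(range(start, start + span))

spec = [[1.0, 2.0] for _ in range(10)]  # 10 frames, 2 "mel bins"
masked, positions = mask_consecutive(spec, span=3)
print(positions)  # three consecutive frame indices
```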
Upvotes: 1 |
2021/05/25 | 901 | 3,531 | <issue_start>username_0: Text here refers to either character or word or sentence.
Is there any recent **textbook** that covers everything from classical methods to modern techniques for embedding text?
If a single textbook is unavailable then please recommend a list of books covering the whole spectrum as mentioned above.
Modern textbooks that are similar to [<NAME>, <NAME> and <NAME>, Introduction to Information Retrieval, Cambridge University Press. 2008](https://nlp.stanford.edu/IR-book/information-retrieval-book.html) are highly encouraged.
[This question](https://ai.stackexchange.com/questions/20911/is-there-a-good-book-or-paper-on-word-embeddings) asks for textbook/research paper on word embedding only. |
2021/05/27 | 617 | 2,231 | <issue_start>username_0: Transformers are modified heavily in recent research. But what exactly makes a transformer a transformer? What is the core part of a transformer? Is it the *self-attention*, the *parallelism*, or something else?<issue_comment>username_1: There is not one answer to this question, but one could argue that transformers heavily rely on
* transforming each input into latent subspaces of queries, keys and values in order to generate attention scores
* a pool of transformations of the attention vectors (multi-head attention), which lets the model capture richer interpretations, since different sections of the input embedding can attend to different per-head subspaces that link back to each input
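These two ingredients can be made concrete with a dependency-free sketch of one attention head, $\text{Attention}(Q,K,V)=\text{softmax}(QK^\top/\sqrt{d\_k})V$; multi-head attention runs several such heads in parallel on different learned projections (toy values below):

```python
import math

# Single-head scaled dot-product attention over plain Python lists.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)  # one attention distribution per query
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Two queries attending over three key/value pairs (d_k = 2, toy numbers).
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0], [2.0], [3.0]]
print(attention(Q, K, V))
```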
Upvotes: 1 <issue_comment>username_2: It's about *self-attention*, a mechanism that targets *parallelism* among other goals (see [1706.03762.pdf - Why Self-Attention](https://arxiv.org/pdf/1706.03762.pdf#section.4)).
From [What Is a Transformer Model? | NVIDIA Blogs](https://blogs.nvidia.com/blog/2022/03/25/what-is-a-transformer-model/):
>
> How Transformers Got Their Name
>
>
> Attention is so key to transformers the Google researchers almost used the term as the name for their 2017 model. Almost.
>
>
> “Attention Net didn’t sound very exciting,” said Vaswani, who started working with neural nets in 2011.
>
>
> <NAME>, a senior software engineer on the team, came up with the name Transformer.
>
>
> “I argued we were transforming representations, but that was just playing semantics,” Vaswani said.
>
>
>
You'll see the same sentiment in the first paragraph of [Transformer (machine learning model)](https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)).
This is not to say that self-attention is the only kind of attention mechanism employed by a Transformer model; see [1706.03762.pdf - 3.2.3 Applications of Attention in our Model](https://arxiv.org/pdf/1706.03762.pdf#subsubsection.3.2.3). It's also not to say that adding an attention mechanism to your model makes it a "Transformer" model; the innovation of this particular model was to go "all out" on attention mechanisms (you'd also have to get rid of recurrent and convolutional components).
Upvotes: 2 |
2021/05/28 | 448 | 2,041 | <issue_start>username_0: Obviously, this is somewhat subjective, but what hyper-parameters typically have the most significant impact on an RL agent's ability to learn? For example, the replay buffer size, learning rate, entropy coefficient, etc.
For example, in "normal" ML, the batch size and learning rate are typically the main hyper-parameters that get optimised first.
Specifically, I am using PPO, but this can probably be applied to a lot of other RL algorithms too.<issue_comment>username_1: Personally, I would choose the following two as the most important:
* **epsilon**: When using an epsilon-greedy policy, epsilon determines how often the agent should *explore* and how often it should *exploit*. Balancing exploration and exploitation is crucial for the success of the learning agent. Too little exploration might not teach anything to the agent and too much exploration might just waste your time.
* **learning rate**: The learning rate determines how fast you learn from new experience. A learning rate that is too high might not be good in cases when the environment has many states with high probabilities of negative rewards, i.e. frequent penalties. This might make your agent move back and forth in the same place in order to avoid getting penalized. Also, a learning rate that is too low might make your agent learn very slowly and, depending on your epsilon, the agent might enter a phase of exploitation with very little knowledge of an optimal policy.
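The explore/exploit trade-off that epsilon controls can be sketched in a few lines (Q-values below are illustrative):

```python
import random

# Epsilon-greedy action selection: with probability epsilon the agent
# explores (uniform random action), otherwise it exploits the current
# Q-estimates.

def epsilon_greedy(q_row, epsilon, rng=random):
    if rng.random() < epsilon:
        return rng.randrange(len(q_row))                   # explore
    return max(range(len(q_row)), key=lambda a: q_row[a])  # exploit

q = [0.0, 0.5, 0.2]                    # one state, three actions
print(epsilon_greedy(q, epsilon=0.0))  # → 1 (always greedy)

counts = [0, 0, 0]
for _ in range(10000):
    counts[epsilon_greedy(q, epsilon=1.0)] += 1
print(min(counts) > 0)                 # → True: every action gets explored
```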
Upvotes: 2 <issue_comment>username_2: You should read this study <https://arxiv.org/abs/2006.05990> which does some empirical study on this question, specifically for on-policy, continuous action space DRL.
It suggests that discount factor and learning rate are the two most important parameters to tune, followed by the width of the policy/value functions.
That study also reports that it's very important to normalize the observations, and initialize the policy so that the initial actions are zero mean with a very small variance.
Upvotes: 2 |
2021/05/29 | 608 | 2,456 | <issue_start>username_0: What does the term "embedding" actually mean?
An embedding is a vector, but is that vector a representation of a word or of its meaning? The literature loosely uses the term for both purposes. Which one is actually correct?
Or is there anything like: *A word is its meaning itself*?<issue_comment>username_1: An embedding is a representation of a word that can be used as a proxy for some of its linguistic properties.
The 'human' representation of a word, a sequence of letters and other symbols, is not related at all to its meaning or use in actual text. It only serves as a look-up key into our cognitive language processing facility (however that actually works) which enables us to understand the meaning in the context of its usage. However, a computer system does not have such a facility.
In order to convert the character string into something more usable for language processing, embeddings are created. These are typically vectors describing other words that surround a word in question (as the meaning of a word depends on its context). There are obvious problems with ambiguities (e.g. is *bank* the side of a river or a place where you deposit money?), but there are probably ways around that.
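A toy illustration of that idea, with a made-up corpus: build each word's vector from co-occurrence counts in a ±1 window, then compare words by cosine similarity:

```python
import math
from collections import Counter

# Context-count "embeddings": each word's vector counts the words that
# appear within one position of it.

corpus = "the cat sat on the mat the dog sat on the rug".split()

def context_vector(word, window=1):
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[corpus[j]] += 1
    return counts

def cosine(u, v):
    def norm(c):
        return math.sqrt(sum(x * x for x in c.values()))
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    return dot / (norm(u) * norm(v))

# "cat" and "dog" occur in near-identical contexts, so their vectors agree.
print(cosine(context_vector("cat"), context_vector("dog")))  # ≈ 1.0
print(cosine(context_vector("cat"), context_vector("mat")))  # ≈ 0.71
```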
So the embedding (a vector) represents the usage of a word, which strongly correlates with its meaning. Because words as character sequences are useless for most sub-symbolic processing, the terms *embedding* and *word* are possibly used interchangably.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Although we have had multiple similar questions (see [here](https://ai.stackexchange.com/q/25467/2444), [here](https://ai.stackexchange.com/q/11285/2444) and [here](https://ai.stackexchange.com/q/5408/2444)) and it seems to me that you focused on *word embeddings* (probably because you were not aware of the application of embeddings to other contexts), in addition to what is stated in [the other answer](https://ai.stackexchange.com/a/28012/2444), it's important to note that the concept of an *embedding* does not just apply to words. For example, there are also *code embeddings* (see e.g. [code2vec](https://code2vec.org/)) and *graph embeddings* (see e.g. [this](https://neo4j.com/developer/graph-data-science/graph-embeddings/)), and there are probably other examples. The linked posts contain answers that explain what an embedding and embedding space generally are, so you may want to read them.
Upvotes: 2 |
2021/06/04 | 1,169 | 3,725 | <issue_start>username_0: From my readings, I have been taught that the state-action value depends on the policy being followed. That seems logical because the expected return from actual actions will be different depending on which actions follow it.
On page 58 of Sutton & Barto's book, we have
[](https://i.stack.imgur.com/XwRba.png)
So, how is it possible that Q-learning can learn a state-action value without taking into account the policy followed thereafter (i.e. the policy followed after having taken action $a$ in the state $s$)?<issue_comment>username_1: Q-learning can learn about the greedy policy (the policy that we define as $\pi(s) = \arg\max\_a Q(s, a)$) whilst following some arbitrary exploratory policy because Q-learning is an *off-policy* algorithm.
In Q-learning, we are updating our values of $Q(s, a)$ using a bootstrapped value from one time step in the future. This means that we don't need to worry about any importance sampling type re-weighting of the action chosen by the exploratory policy that we use for action selection because of how $Q(\cdot, \cdot)$ is defined:
$$Q(s, a) = \mathbb{E}\_\pi \left[ G\_t | S\_t = s, A\_t = a \right] \; ;$$
where $G\_t$ is the (discounted) future returns defined in Sutton and Barto. Now, because we have defined $Q(\cdot, \cdot)$ such that we condition on *knowing* action $A\_t$, it really doesn't matter which distribution this action came from, because as mentioned we have conditioned on knowing it. This allows us to make updates in Q-learning for any state-action pair using the target from the optimal policy despite us not using this for action selection in the environment.
If we were to do some kind of $n$-step Q-learning with an update target looking something like
$$R(s\_t, a\_t) + R(s\_{t+1}, a\_{t+1}) + ... + R(s\_{t+n}, a\_{t+n}) + \max\_{a'} Q(s\_{t+n+1}, a')$$
then we would need to use importance sampling to re-weight the trajectory to account for the actions $a\_{t+i}$ for $i \geq 1$, as these are actions that the optimal policy may not have taken: they were taken from the exploratory policy, while the Q-function assumes that the trajectory (i.e. future actions) is generated under the policy associated with the Q-function.
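A one-step tabular sketch of this point (toy numbers): the behaviour policy may have picked any exploratory action, but the bootstrap target always uses $\max\_{a'} Q(s', a')$, which is what makes Q-learning off-policy:

```python
# Tabular one-step Q-learning update with a greedy bootstrap target.

Q = {("s", "left"): 0.0, ("s", "right"): 0.0,
     ("s2", "left"): 1.0, ("s2", "right"): 3.0}
alpha, gamma = 0.5, 0.5

def q_learning_update(s, a, r, s_next, actions=("left", "right")):
    # The target ignores how the behaviour policy will act in s_next.
    greedy_value = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * greedy_value - Q[(s, a)])

# The exploratory policy happened to pick "left" in s, but the target
# bootstraps from the greedy value at s2 (action "right", worth 3.0).
q_learning_update("s", "left", r=1.0, s_next="s2")
print(Q[("s", "left")])  # → 1.25  (= 0 + 0.5 * (1 + 0.5*3 - 0))
```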
Upvotes: 3 [selected_answer]<issue_comment>username_2: The action-value function DOES take into account the policy being followed - that's precisely what the notation $\mathbb{E}\_\pi$ is for. Specifically, $\mathbb{E}\_\pi$ is a shorthand for
\begin{align\*}
\mathbb{E}\_{a\_i \sim \pi(a\_i \,|\, s\_i), \, (r\_{i+1}, s\_{i+1}) \sim p(r\_{i+1}, s\_{i+1} \, |\, s\_{i}, \, a\_{i}), \, \forall i}
\end{align\*}
where $p$ represents the environment's joint distribution of reward/transitions. This means that the expectation is with respect to the actions, states, and rewards you see under the policy $\pi$. If you want to make this more concrete, we can write out the expectation like this
\begin{align\*}
& \mathbb{E}\_\pi\left[\sum\_{k=0}^{\infty}\gamma^k r\_{k+t+1} | s\_t, a\_t \right] \\
:=& \int\_{(r\_{t+1}, s\_{t+1})}p(r\_{t+1}, s\_{t+1})\bigg[r\_{t+1} + \int\_{a\_{t+1}}\pi(a\_{t+1})\int\_{(r\_{t+2}, s\_{t+2})}p(r\_{t+2}, s\_{t+2})\bigg[r\_{t+2} + \ldots
\end{align\*}
(Really, you should be integrating over dummy variables, and I've omitted the conditional expectations, e.g. $p(r\_{t+1}, s\_{t+1})$ should be $p(r\_{t+1}, s\_{t+1} | s\_t, a\_t)$). Of course you can just replace the integrals with sums if you want discrete actions and/or states. So you can see just writing $\mathbb{E}\_\pi$ sweeps a lot of the notation and true meaning under the rug, and I assume that is the source of the confusion.
Upvotes: 0 |
2021/06/04 | 1,168 | 3,761 | <issue_start>username_0: I am relatively new to Python but I taught myself enough to code a two-player board game that is similar to chess. It has a simple Tkinter UI. Now I am dipping into machine learning, and I want to write another program to play itself in this game repeatedly and "naturally" learn strategies for playing the game.
Can anyone give advice on what I might be able to use for this? Is Tensorflow a good option? Is there a Python library well suited for this that I could adapt and train? I am partially through the [buildingai.elementsofai.com](https://buildingai.elementsofai.com/) course, but I am still very new at ML / AI. |
2021/06/05 | 693 | 2,685 | <issue_start>username_0: I'm aware that the ground-truth of the example at the top left-hand corner of the image below is "zero"
[](https://i.stack.imgur.com/QUlHm.png)
However, I am confused about the meaning of the terms [ground truth](https://developers.google.com/machine-learning/glossary#ground-truth) and [ground-truth labels](https://developers.google.com/machine-learning/glossary#demographic-parity). What is the difference between them?<issue_comment>username_1: **Ground Truth**
'Ground truth' is the data or information you have that is 'true' or assumed to be true. That means that you have high or perfect knowledge of what it is. For example, in your image of numbers, you know that the first row are zeros, the second row are ones, the third are twos, and so on. You have 10 rows of data, each row of a different class or category. Each class has 16 samples. Ground truth data is used to train machine learning or deep learning models. The example you provided is from the Modified National Institute of Standards and Technology (MNIST) database, which is commonly used for building image classifiers for handwritten digits.
**Ground Truth Labels**
The 'ground-truth labels' are the names you choose to give them. You may choose to label the classes as '0', '1', '2', etc., or as 'zero', 'one', 'two', etc. Maybe you think in Greek. If so label them as 'μηδέν', 'ένα', 'δύο', etc.
Reference
[MNIST database](https://en.wikipedia.org/wiki/MNIST_database)
Upvotes: 2 <issue_comment>username_2: These two terms could easily refer to the same thing, depending on the context. For example, a lazy person could easily say something like this
>
> We compute the loss/error between the prediction (of the model) and the ground truth.
>
>
>
Here, the ground-truth refers to the "officially correct" label (categorical or numerical) for a given input with which you compute the prediction. So, in this case, ground-truth would be a synonym for a ground-truth label.
However, in general, ground-truth refers to anything, not just labels, that are correct or true (hence the name), so it could be used more generally. For instance, you could say something like this
>
> We assume that the ground-truth underlying probability distribution from which the data is sampled is a Gaussian.
>
>
>
However, in this case, you could also leave out the ground-truth part, as it's more or less implied by the fact that you're assuming something.
So, the difference between the two is that "ground-truth" can be used more generally to refer to anything that is "true".
Upvotes: 2 [selected_answer] |
2021/06/07 | 1,086 | 4,596 | <issue_start>username_0: Multi-class classification is simply assigning all data points into one of up to any finite number of mutually exclusive labels. I am new to the field(s) of AI/ML and I keep hearing people use the term "semantic segmentation."
I want to "translate" this AI/ML jargon into something more familiar to me. The best [video](https://www.youtube.com/watch?v=uiE56h5LyXc&t=481s) I have found so far to explain what it is made me wonder, what is the difference between semantic segmentation and classification?
NOTE: I am specifically *not* referring to so-called multi-label "classification" which allows a data point to have more than one label at a time. In my experience, that sort of labeling is not classification at all, which is a division into mutually exclusive sets (no overlap).<issue_comment>username_1: Both things are similar. But, I think there is a bit of a difference in interpretation.
If what you are solving is a multi-class classification problem in an image, a proper measure of performance of an algorithm would be the accuracy of the prediction for each pixel.
While one of the most used measures of performance for semantic segmentation is the IoU (intersection over union) for each class, which makes sense if your objective is to create a segmentation (a mask) for each class.
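For one class, that measure reduces to a ratio of pixel counts; a toy sketch with masks as 2D lists of class ids:

```python
# Per-class IoU: |pred AND truth| / |pred OR truth| over that class's pixels.

def iou(pred, truth, cls):
    inter = union = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            pi, ti = (p == cls), (t == cls)
            inter += pi and ti
            union += pi or ti
    return inter / union if union else 0.0

truth = [[1, 1, 0],
         [1, 1, 0],
         [0, 0, 0]]
pred = [[1, 1, 1],
        [1, 0, 0],
        [0, 0, 0]]
print(iou(pred, truth, cls=1))  # → 0.6 (3 overlapping pixels / 5 in the union)
```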
Upvotes: 0 <issue_comment>username_2: ### Image/object classification (or recognition)
(Multi-class) image/object **classification** (or **recognition**) typically refers to the task of assigning one label to an image, so we typically assume that there's only one main object in the image. The *multi-class* only refers to the fact that we have more than 2 possible classes or labels (if we had only 2, this would be binary classification), but note that this does not mean that we have more than one main object in each image. So, in this task, we are not interested in labelling each pixel, but to label the whole image, so, in this sense, this is a *sparse* classification task. An example of a dataset that is used for image classification is MNIST, where there's only one object (a number) per image. Here's a picture that shows 3 MNIST images, each of them has only one associated label (below), which corresponds to the number in the image.
[](https://i.stack.imgur.com/gf91R.png)
### Semantic segmentation
**Semantic segmentation** is the task of classifying each pixel in an image (or at least groups of pixels), so that objects of different classes have their pixels labelled differently. **Instance segmentation** is a similar task, but we additionally want to differentiate between different objects of the same class, so we assume that there could be more than one object in the image and there could even be more objects of the same type/label. Given that we label pixels, this is a *dense* classification task.
Here's an example of an image that has been segmented, i.e. pixels associated with the same object (e.g. the umbrella) have the same label (color).
[](https://i.stack.imgur.com/Xi6po.jpg)
### Object detection
So, in this way, semantic (or instance) segmentation is more similar to **object detection**, which is both a classification and regression task, because we want both to classify one or more objects in the image, but we also want to draw a bounding box around them (and this is often solved as a regression problem). The reason why we draw a bounding box around each object is that, as opposed to image/object classification, there can be more than one object in the image, so we need a way to identify the locations of the objects. As opposed to semantic/instance segmentation, this is also a *sparse* classification task.
Here's an image to which object detection has been applied.
[](https://i.stack.imgur.com/pr2MQ.jpg)
Upvotes: 2 [selected_answer]<issue_comment>username_3: Why are you asking about 3 unrelated items? Multi-class and mutually exclusive are related. Semantic segmentation is an approach to object recognition in images, that has nothing to do with the other 2.
The difference between multi-class and mutually exclusive is simple: a spam engine is an example of mutually exclusive classification; each item either is spam or is not. An algorithm that would potentially guess what grade an essay would get is an example of multi-class: there are many potential classifications, not a simple binarism.
Upvotes: 0 |
2021/06/07 | 703 | 2,834 | <issue_start>username_0: I'm building a model for **facial expression recognition**, and I want to use ***transfer learning***. From what I understand, there are different steps to do it. The first is the **feature extraction** and the second is **fine-tuning**. I want to understand more about these two stages, and the difference between them. Must we use them simultaneously in the same training?<issue_comment>username_1: Typically, in transfer learning, you have 2-3 stages
1. **Pre-training**: pre-train some base model $M\_\text{base}$ on some "general" dataset $A$; note that you may not necessarily need to train $M\_\text{base}$, but it may already be available e.g. on the web. During this phase, we **extract** (general) **features** or [learn representations of the data](https://ai.stackexchange.com/a/28001/2444), which can "bootstrap" the learning task with your specific dataset
2. **Training**: You replace the last layers of $M\_\text{base}$ (i.e. the classifier/regression part) with new layers to solve your task, then you **might** freeze the initial layers (e.g. the convolutional layers) that are assumed to contain the general extracted features that can also be useful for your task: let's call this model $M\_\text{main}$; at this point, you train this **partially** frozen model $M\_\text{main}$ with your dataset $B$.
3. **Fine-tuning**: after training, you could unfreeze some of the frozen layers in $M\_\text{main}$, especially the ones closest to your new classifier, then train again
In all 3 stages, one could say that we're extracting features (because we're learning weights), but some people, I guess, will refer to the pre-training phase as the feature extraction phase. I think I've seen people call the training stage also the fine-tuning stage (and the previous version of this answer actually was referring to the training phase as the fine-tuning phase), but, in the end, these terms could be used inconsistently anyway, so the important thing is that you understand what's going on and keep context into account.
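Schematically, the freeze/unfreeze bookkeeping across these stages looks like this (framework-agnostic sketch; a real framework would flip per-layer `requires_grad`-style flags, and the layer names/counts here are made up):

```python
# Each "layer" is just a name plus a trainable flag.

base = [{"name": f"conv_{i}", "trainable": True} for i in range(5)]

# Stage 2 (training): append a fresh head and freeze the pre-trained base.
model = base + [{"name": "new_classifier", "trainable": True}]
for layer in model:
    if layer["name"].startswith("conv_"):
        layer["trainable"] = False
# ... train here: only the new classifier's weights would be updated ...

# Stage 3 (fine-tuning): unfreeze the frozen layers closest to the head
# and train again, typically with a lower learning rate.
for layer in model[-3:-1]:          # the two conv layers nearest the head
    layer["trainable"] = True

print([l["name"] for l in model if l["trainable"]])
# → ['conv_3', 'conv_4', 'new_classifier']
```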
You can find more information about this topic [here](https://www.tensorflow.org/tutorials/images/transfer_learning). Note that there may be other more sophisticated or simply different approaches to transfer learning.
Upvotes: 0 <issue_comment>username_2: The difference between the two approaches (**feature extraction** vs **fine-tuning**) is well explained here:
[Fine Tuning vs Joint Training vs Feature Extraction](https://stats.stackexchange.com/questions/255364/fine-tuning-vs-joint-training-vs-feature-extraction)
Also, this paper evaluates the performance one can hope to achieve with 2 sequence models (ELMo and BERT) under each approach:
[To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks](https://arxiv.org/abs/1903.05987)
Upvotes: 1 |
2021/06/08 | 1,296 | 5,631 | <issue_start>username_0: I am a 3rd-year math major, who is interested in computer science, particularly algorithms and competitive programming (did some olympiads in high school, ACM ICPC in university, etc.), and I have been meaning to get into AI.
I have all the prerequisites to get started, but the problem is that I really, really hate statistics. I took a course on it last year and found it to be very dry.
I've heard people say that AI is mostly statistics and I am very concerned if it's true. I can tolerate some amount of stats, but, if the field literally revolves around it, I will not be able to do it.
So, exactly how much statistics is involved in AI? Are there fields of AI which use it less than others?<issue_comment>username_1: I work in NLP, and use very little statistics. Actually, almost nothing I do can be classed as 'serious' statistics.
So yes, AI is a wide area, and in my company there is a group that does machine learning, so they probably use a lot more of it than I do. Previously I worked in conversational AI. Again, very little to no statistics at all.
I would contest the view that *AI is intrinsically data-driven*. That's one aspect of it. However, while I look at actual data (texts) to derive algorithms for their analysis, I don't need to use any statistical concepts for that. And even evaluation of the results is just counting and comparing.
There are statistical algorithms in NLP, but they are not usually very complex or hard to understand even without a lot of stats knowledge.
Upvotes: 2 <issue_comment>username_2: Many people without a formal/solid background in statistics (e.g. without knowing exactly what the [*central limit theorem* (CLT)](https://en.wikipedia.org/wiki/Central_limit_theorem) states) are doing research on machine learning, which is a very big and fundamental subfield of AI that has a big overlap with statistics, or using machine learning to solve problems.
So, in my view, you don't need to learn *everything* about statistics to do research on some AI topic, including machine learning, but you need to have an understanding of the basics (at least a full introductory college-level course on statistics and probability theory), and the more you know the better.
More specifically, if you don't know what the CLT or *the law of large numbers* state, you will not have a *full* understanding of many things that are going on. At the same time, you will find a lot of research papers (published in ML conferences and journals) that do not even mention [hypothesis testing](https://en.wikipedia.org/wiki/Statistical_hypothesis_testing), but it's important to have an idea of what a sample, sample mean, sample variance, likelihood, maximum likelihood estimation (MLE) or Bayes' theorem are. In fact, MLE is widely used in machine learning, but not many people using/doing ML would probably be able to explain precisely what the [*likelihood function*](https://en.wikipedia.org/wiki/Likelihood_function) is.
Finally, in my opinion, having a formal/solid (not necessarily extensive) background in statistics should be a prerequisite for doing *research* in machine learning (you need to really know what the likelihood function is!), which some people called [*applied/computational statistics*](https://www.youtube.com/watch?v=ZqsnZX-p2qs) or [*glorified statistics*](https://stats.stackexchange.com/q/6) for some reason, but not necessarily for using machine learning to solve some problem. Moreover, there are other areas of AI that do not make use of statistics, but ML is probably the most important area of AI. So, if you hate statistics, you may not like AI and particularly ML, but maybe you will change your opinion about statistics, once you understand what e.g. neural networks are capable of doing or not.
Upvotes: 2 <issue_comment>username_3: Thanks for the extra details. There area good answers already, but I'll give just a bit more information since your requirements are a little more specific now.
Since you mentioned Research Engineer only, I'm going to assume you are not really interested in a plain engineering role.
I can say for a specific Research Engineer role I am aware of at a world class industrial AI lab, their minimum requirements include "calculus, linear algebra, and statistics at least to a first year degree level". So it sounds like you already have this required level if you were to apply today.
On the other hand, I would be cautious regarding what you found dry about your stats course. If thoughts such as "this portion of the dataset has somewhat less representation in the results and this other portion has somewhat more, I wonder why that is?" sound very dull to you, you may not like it. Most current AI is based on large sets of data. I am referring to deep learning / neural networks here rather than previous methods, but that is where a lot of the hype / major breakthroughs are at the moment. In computer vision, which you mentioned, for example, current methods typically input a large dataset of images to create the AI system, then test it on a large dataset of images. If they are images of road signs, you might find that increasing the proportion of one type of road sign makes the system worse for another type of road sign. Identifying relationships like that is an important part of the research function. The more towards the research side rather than the engineering side you are, the more you will need to be able to analyse things like this yourself.
Despite all that, I found stats to be one of the drier maths subjects, yet I very much like AI (and stats in AI).
Upvotes: 0 |
2021/06/09 | 1,152 | 4,919 | <issue_start>username_0: While studying word embeddings in natural language processing, I encountered the following statement on page 327 of the textbook [Natural Language Processing by <NAME>](https://cseweb.ucsd.edu/%7Ennakashole/teaching/eisenstein-nov18.pdf)
>
> Distributional semantics are computed from context statistics. Distributed semantics are a related but distinct idea: that meaning can be represented by numerical vectors rather than **symbolic structures**.
>
>
>
The dissimilarity between them is that distributed semantics represent the meaning of a word by a vector of numbers. Distributional semantics represent the meaning of a word by symbolic structure (inferred from paragraph).
I can say, in distributed semantics, the word cat can be represented by the vector $[23, 43,21,16]$ (for example).
Similarly, please, give me a small example of how the meaning of a word is represented by symbolic structure (which should not be necessarily correct).
What is meant by symbolic structure here?<issue_comment>username_1: I can't really make much sense of Eisenstein's distinction between *distributional* and *distributed*. And I think in your question you actually mix up the two terms as well, as distribut**ed** semantics involve symbolic structures, whereas distribut**ional** semantics are numerical vectors according to his definition. EDIT: actually, he seems to mix it up himself there?! Very unclear paragraph there.
I can only imagine that the symbolic structures he refers to here are semantic networks and the like, as in
>
> (is-a feline mammal)
>
> (is-a lion feline)
>
> (has-a feline tail)
>
>
>
Here the meaning of *lion*, as a feline mammal with a tail, is defined through a symbolic structure, and not in reference to the context of usage. Why this should be distributed, I can only guess: the meaning components are split over a set of statements, which build up a larger structure perhaps?
It could, of course, be the case that this is covered elsewhere in the book — I haven't had the time to look through all of it.
UPDATE: Thinking more about this, perhaps he means that distributional semantics are representations where each word is a straight co-occurrence vector, i.e. a vector as large as the set of words used to define contexts, while distributed semantics is similar, but it's a different vector which is created through processing the contexts (and could thus be smaller)?
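To illustrate the "straight co-occurrence vector" reading, here is a toy sketch (the three-sentence corpus and the window size of 2 are made up for the example):

```python
# Each word is represented by a vector with one dimension per vocabulary
# word, counting how often that word appears within a +/-2 window.
from collections import Counter

corpus = [
    "the lion has a tail".split(),
    "the cat has a tail".split(),
    "the lion is a feline".split(),
]

vocab = sorted({w for sent in corpus for w in sent})

def cooccurrence_vector(target, window=2):
    counts = Counter()
    for sent in corpus:
        for i, w in enumerate(sent):
            if w != target:
                continue
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    counts[sent[j]] += 1
    return [counts[v] for v in vocab]  # as large as the vocabulary

print(vocab)
print(cooccurrence_vector("lion"))
print(cooccurrence_vector("cat"))
```

Note how *lion* and *cat* end up with overlapping vectors because they occur in similar contexts — which is the distributional idea in a nutshell.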
Upvotes: 1 <issue_comment>username_2: *I am writing the answer according to my current understanding*
>
> Distributional semantics are **computed from context statistics**.
>
>
>
It is clear from the statement that the embedding of a word, in case of distributional semantics, is computed from the context statistics i.e., based on the contexts in which the word occurs.
>
> Distributed semantics are a **related but distinct** idea: that meaning
> can be represented by numerical vectors rather than **symbolic
> structures.**
>
>
>
This means that embeddings in distributed semantics may not be obtained from symbolic structures. Now, we need to understand what is meant by a symbolic structure in this case. It may be the same as the context of a word. We can understand it from the following [definition of symbol structure](https://www.sciencedirect.com/topics/computer-science/symbol-structure)
>
> A physical symbol system consists of a set of entities, called
> symbols, which are physical patterns that can occur as components of
> another type of entity called an expression (symbol structure). Thus a
> symbol structure is composed of a number of instances (or tokens) of
> symbols related in some physical way (such as one token being next to
> another).
>
>
>
So, it can be understood that distributed semantics are the embeddings obtained **not only through the context statistics as in the case of distributional semantics**. For example, there are distributed representations beyond distributional statistics, in which embeddings are calculated from the internal structure of words and not from the context in which the word occurs ([p 341](https://cseweb.ucsd.edu/%7Ennakashole/teaching/eisenstein-nov18.pdf)). One can understand it from the following excerpt from the same page
>
> How can word-internal structure be incorporated into word
> representations? One approach is to construct word representations from embeddings of the characters or morphemes.
>
>
>
Thus, to be concise, the embedding for the word `cat` is obtained only from the context statistics of `cat` in the case of distributional semantics, while in the case of distributed semantics, the embedding of the word `millicuries` can be calculated from the embeddings of the morphemes $milli, curie, s$ rather than the context statistics of the word `millicuries`, since it is a rare word which is unlikely to have reliable context information available.
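A toy sketch of that last idea — composing a word embedding from morpheme embeddings (the 3-d vectors and the plain summation are made-up illustrations; real systems learn such subword embeddings):

```python
# Compose a word vector as the sum of its morpheme vectors,
# as in the "millicuries" example above.
morpheme_emb = {
    "milli": [1.0, 0.0, 0.0],
    "curie": [0.0, 1.0, 0.0],
    "s":     [0.0, 0.0, 1.0],
}

def word_embedding(morphemes):
    """Sum the embeddings of the given morphemes."""
    dim = len(next(iter(morpheme_emb.values())))
    vec = [0.0] * dim
    for m in morphemes:
        for i, x in enumerate(morpheme_emb[m]):
            vec[i] += x
    return vec

print(word_embedding(["milli", "curie", "s"]))  # [1.0, 1.0, 1.0]
```

No context statistics for the full word are needed, which is exactly what makes this useful for rare words.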
Upvotes: 0 |
2021/06/13 | 1,076 | 3,913 | <issue_start>username_0: Term frequency and inverse document frequency are well-known terms in information retrieval.
I am presenting the definitions for both from p:12,13 of [Vector Semantics and Embeddings](https://web.stanford.edu/%7Ejurafsky/slp3/6.pdf)
**On term frequency**
>
> Term frequency is the frequency of the word $t$ in the
> document $d$. We can just use the raw count as the term frequency:
>
>
> $$tf\_{t, d} = \text{count}(t, d)$$
>
>
> More commonly **we squash the raw frequency a bit, by using the
> $\log\_{10}$** of the frequency instead. The intuition is that a word appearing 100 times in a document doesn’t make that word 100 times more likely to be relevant to the meaning of the document.
>
>
>
**On inverse document frequency**
>
> The $\text{idf}$ is defined using the fraction $\dfrac{N}{df\_t}$, where $N$ is the total number of documents in the collection, and $\text{df}\_t$ is the number of documents in which term $t$ occurs.......
>
>
> Because of the large number of documents in many collections, **this measure too is usually squashed with a log function**. The resulting definition for inverse document frequency ($\text{idf}$) is thus
>
>
> $$\text{idf}\_t = \log\_{10} \left(\dfrac{N}{df\_t} \right)$$
>
>
>
If we observe the bolded portion of the quotes, it is evident that the $\log$ function is used commonly. It is not only used in these two definitions. It has been across many definitions in the literature. For example: [entropy](https://ai.stackexchange.com/questions/28220/product-of-probabilities-raised-to-own-powers-that-can-be-used-for-entropy-calcu), mutual information, log-likelihood. So, I don't think squashing is the only purpose behind using the $\log$ function.
Is there any reason for selecting the logarithm function for squashing? Are there any advantages for $\log$ compared to any other squash functions, if available?<issue_comment>username_1: It's much easier to deal with logarithms, as the relevant numbers are usually very small or very large. If you have a long exponential expression, it's hard to see the difference, but if you're looking at 4.3 vs 5.6, you can immediately see what's happening. And logarithms are a well-known (and well-understood) way of achieving this compression. You can easily interpret the difference, depending on the base of the logarithm used.
Quite often the $log\_2$ is used when you're dealing with entropy or information, as those are usually expressed in bits.
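A quick numerical illustration of the squashing (toy counts; the `+ 1` inside the tf log is one common variant that keeps a single occurrence from mapping to zero):

```python
import math

# log10 turns counts spanning many orders of magnitude into small,
# directly comparable numbers.
for count in [1, 10, 100, 1000, 1_000_000]:
    print(count, "->", round(math.log10(count + 1), 3))

# idf with N = 10_000 documents, for a rare vs a ubiquitous term:
N = 10_000
for df_t in [1, 100, 10_000]:
    print(df_t, "->", math.log10(N / df_t))
```

A term occurring in every document gets idf 0, i.e. it carries no discriminative information, while a term occurring in a single document gets the maximal weight.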
Upvotes: 2 <issue_comment>username_2: I would like to add details to Oliver's answer.
From the book "Pattern Recognition and Machine Learning" by Bishop (Section 1.2.5):
>
> In practice, it is more convenient to maximize the log of the
> likelihood function. Because the logarithm is monotonically increasing
> function of its argument, maximization of the log of a function is
> equivalent to maximization of the function itself. Taking the log not
> only simplifies the subsequent mathematical analysis, but it also
> helps numerically because the product of a large number of small
> probabilities can easily underflow the numerical precision of the
> computer, and this is resolved by computing instead the sum of the log
> probabilities.
>
>
>
That is, $\log$ is monotonically increasing and hence preserves the order and the locations of the extrema. For instance, if $p(x) \geq p(y)$ then $\log\big(p(x)\big) \geq \log\big(p(y)\big)$ also holds. Therefore, maximizing likelihood is equivalent to maximizing log-likelihood.
Furthermore, it is extremely useful when calculating joint probabilities since a product can be replaced by a sum:
$$
\log \left(\prod\_i P(x\_i)\right) = \sum\_i \log \left( P(x\_i)\right)
$$
This also makes calculation numerically stable and it is much easier to take a derivative of a sum of logarithms rather than to take a derivative of a product.
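The underflow point from the quote can be seen directly (a made-up list of 1000 probabilities of 0.1):

```python
import math

# The product of many small probabilities underflows to 0.0 in double
# precision, while the sum of their logs stays perfectly representable.
probs = [0.1] * 1000

product = 1.0
for p in probs:
    product *= p
print(product)   # 0.0 -- underflow: the true value is 1e-1000

log_sum = sum(math.log(p) for p in probs)
print(log_sum)   # about -2302.585, i.e. 1000 * log(0.1)
```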
Upvotes: 1 |
2021/06/13 | 957 | 3,589 | <issue_start>username_0: I am currently studying the textbook *Neural Networks and Deep Learning* by <NAME>. Chapter **1.2.1.3 Choice of Activation and Loss Functions** presents the following figure:
>
> [](https://i.stack.imgur.com/wKwkG.png)
>
>
>
$\overline{X}$ is the features, $\overline{W}$ is the weights, and $\phi$ is the activation function.
So this is a [perceptron](https://en.wikipedia.org/wiki/Perceptron) (which is a form of [artificial neuron](https://en.wikipedia.org/wiki/Artificial_neuron)).
But where does the so-called 'loss' / 'loss function' fit into this? This is something that I've been unable to reconcile.
---
EDIT
----
The way the loss function was introduced in the textbook seemed to imply that it was part of the architecture of the perceptron / artificial neuron, but, according to hanugm's answer, it is external and instead used to *update the weights of the neuron*. So it seems that I misunderstood what was written in the textbook.
In my question above, I pretty much assumed that the loss function was part of the architecture of the perceptron / artificial neuron, and then asked how it fit into the architecture, since I couldn't see any indication of it in the figure.
Is the loss / loss function part of the architecture of a perceptron / artificial neuron? I cannot see any indication of a loss / loss function in figure 1.7, so I'm confused about this. If not, then how does the loss / loss function relate to the architecture of the perceptron / artificial neuron? |
2021/06/14 | 681 | 2,706 | <issue_start>username_0: I'm trying to implement the ES-HyperNEAT algorithm using the original paper, as well as the pseudocode provided in the [official user page](http://eplex.cs.ucf.edu/ESHyperNEAT/). Occasionally, the algorithm would be unable to generate a network in the substrate. This happens when it finds no valid nodes that could connect a path between the input and output neurons.
I've noticed that this is highly dependent on how the hyperparameters (e.g., variance threshold and band threshold) were tuned.
Is my implementation correct, i.e., is this normal behavior? If so, is there a good way to ensure that a network is always generated (aside from directly connecting the input and output neurons)? |
2021/06/16 | 1,465 | 4,461 | <issue_start>username_0: I am studying logistic regression for binary classification.
The loss function used is **cross-entropy**. For a given input $x$, if our model outputs $\hat{y}$ instead of $y$, the loss is given by
$$\text{L}\_{\text{CE}}(y,\hat{y}) = -[y \log \hat{y} + (1 - y) \log(1 - \hat{y})]$$
Suppose there are $m$ such training examples, then the overall total loss function $\text{TL}\_{\text{CE}}$ is given by
$$\text{TL}\_{\text{CE}} = \dfrac{1}{m} \sum\limits\_{i = 1}^{m} \text{L}\_{\text{CE}} (y\_i , \hat{y\_i}) $$
It is said that **the loss function is convex**. That is, if I draw a graph of the loss values wrt the corresponding weights, then the curve will be convex. The [material from the textbook](https://web.stanford.edu/%7Ejurafsky/slp3/5.pdf) did not give any explanation regarding the convex nature of the cross-entropy loss function. You can observe it from the following passage.
>
> For logistic regression, **this (cross-entropy) loss function is conveniently convex**. A
> convex function has just one minimum; there are no local minima
> to get stuck in, so gradient descent starting from any point is
> guaranteed to find the minimum. (By contrast, the loss for multi-layer
> neural networks is non-convex, and gradient descent may get stuck in
> local minima for neural network training and never find the global
> optimum.)
>
>
>
How did they conclude conveniently that the loss function is convex? Is it by plotting or some other means?<issue_comment>username_1: If you find the Hessian matrix (the matrix of second order derivatives) for the binary cross entropy loss function, you'll see that it is positive semidefinite for any possible value of the parameters. This concludes that it is a convex function.
A side effect of it being convex is that it will have a single minimum as mentioned in the textbook you cited.
Upvotes: 1 <issue_comment>username_2: The $L\_{CE}$ that you provided is binary cross-entropy; the factors $y$ and $(1-y)$ appear because $y$ is binary $(\{0,1\})$, so be careful with the name next time. The cross-entropy loss should have the form:
$$L\_{CE}=-\displaystyle\sum\_{i=1}^C y\_i\log(\hat{y\_i})$$
Where $C$ is the number of classes. Normally, the $y\_i$ factor is 1 only when $i$ is the index of the correct class. Therefore, with each class, the function is just:
$$f(x)=-log(x)$$
Now, to prove this one is convex, we have multiple ways, but my favorite one is computing the derivative and second derivative.
$$\frac{\partial L}{\partial x}=-\frac{1}{x}\Rightarrow \frac{\partial^2 L}{\partial x^2}=\frac{1}{x^2}>0 \text{ for all }x\in(0,1]$$
This proves that $f(x) = -\log(x)$ is convex, given that, if the second derivative of a function is positive, then the function is convex (more info and an example [here](https://www.hec.ca/en/cams/help/topics/The_second_derivative.pdf)).
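As a quick numerical illustration (a finite-difference check, not a proof) that the curvature of $f(x)=-\log(x)$ is positive on $(0,1)$:

```python
import math

def f(x):
    return -math.log(x)

# Central finite-difference estimate of the second derivative,
# compared against the analytic value 1 / x^2.
h = 1e-5
for x in [0.1, 0.5, 0.9]:
    numeric = (f(x - h) - 2 * f(x) + f(x + h)) / h ** 2
    print(x, numeric > 0, round(1 / x ** 2, 3))
```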
For the case of multiple samples, we also need to prove that the sum of convex functions is a convex function.
Based on [the definition of convex function](https://en.wikipedia.org/wiki/Convex_function), a function $f:X\rightarrow \mathbb{R}$ will be convex if:
$$f(tx\_1+(1-t)x\_2) \le tf(x\_1) + (1-t)f(x\_2)$$
where $0 \le t \le 1$ and $x\_1,x\_2\in X$.
Now, let's assume $h(x) = f(x) + g(x)$ where both $f$ and $g$ are convex. We have:
$$f(tx\_1+(1-t)x\_2) \le tf(x\_1) + (1-t)f(x\_2)$$
$$g(tx\_1+(1-t)x\_2) \le tg(x\_1) + (1-t)g(x\_2)$$
$$\Rightarrow f(tx\_1+(1-t)x\_2) + g(tx\_1+(1-t)x\_2) \le tf(x\_1) + (1-t)f(x\_2) + tg(x\_1) + (1-t)g(x\_2)$$
$$\Rightarrow f(tx\_1+(1-t)x\_2) + g(tx\_1+(1-t)x\_2) \le t(f(x\_1)+g(x\_1)) + (1-t)(f(x\_2)+g(x\_2))$$
$$\Rightarrow h(tx\_1+(1-t)x\_2) \le th(x\_1) + (1-t)h(x\_2)$$
$\Rightarrow h$ is the convex function $\Rightarrow$ the summation of convex functions is a convex function.
Upvotes: 2 <issue_comment>username_3: I'm unable to comment on previous answers because I'm new to ai.stackexchange and don't have enough clout points. So I'm writing my comment as an answer instead.
Unless I'm missing something, I believe there are a couple errors in the answer given by @username_2. First, the alternative formula he gave for $L\_{CE}$ is partially incorrect because it should be $\log(1-\hat{y})$ in the case of class 0, and not $\log(\hat{y})$. Second, the original question was about the convexity of the log loss as a function of the weights w, and not as a function of $\hat{y}$; consequently the derivation used to prove convexity is flawed.
Upvotes: 1 |
2021/06/18 | 997 | 3,852 | <issue_start>username_0: I am trying to understand the concept of parameter sharing in a convolution neural network from [Parameter Sharing](https://cs231n.github.io/convolutional-networks/#layers). I have a few confusions:
Parameter sharing refers to the fact that for generating a single activation map, we use the same kernel throughout the image. And for that activation map, the weights of that kernel remain the same through the image?
**Denoting a single 2-dimensional slice of depth as a depth slice (e.g. a volume of size [55x55x96] has 96 depth slices, each of size [55x55]), we are going to constrain the neurons in each depth slice to use the same weights and bias.**
Does the above paragraph refer to the fact that *output of neurons in one activation map is generated by using the same weights in kernel throughout the image? And that kernel is convolved on the entire image*?
**No. of parameters without parameter sharing:**
There are 55*55*96 = 290,400 neurons in the first Conv Layer, and each has 11*11*3 = 363 weights and 1 bias. Together, this adds up to 290400 \* 364 = 105,705,600 parameters on the first layer of the ConvNet alone. Clearly, this number is very high.
**No. of parameters with parameter sharing**
With parameter sharing scheme, the first Conv Layer in our example would now have only 96 unique sets of weights (one for each depth slice), for a total of 96*11*11*3 = 34,848 unique weights, or 34,944 parameters (+96 biases). **Alternatively, all 55*55 neurons in each depth slice will now be using the same parameters.** What does this bold sentence mean?
Also, how are the parameters different between the two schemes? In both cases, we are using 96 kernels of size 11*11*3 and the resulting output is 55\*55. Then how does the number of parameters come out different for the two schemes?<issue_comment>username_1: Concerning parameter sharing.
1. For the fully connected neural network you have an input of shape `(H_in * W_in * C_in)` and an output of shape `(H_out * W_out * C_out)`. This means that each channel of each output pixel is connected to every channel of every input pixel: there is a separate learnable parameter for each such input-output pair.
Hence, one gets this huge number of parameters:
`(H_in * H_out * W_in * W_out * C_in * C_out)`
2. In the convolutional layer the input is an image of shape `(H_in, W_in, C_in)` and the weights account for the neighborhood of a given pixel, say of size `K * K`. The output is obtained as a weighted sum of the given pixel and its neighborhood. There is a separate kernel for each pair of input and output channels `(C_in, C_out)`, but the weights of the kernel (a tensor of shape `(K, K, C_in, C_out)`) are independent of the location. In fact, this layer can accept images of any resolution, whereas the fully connected layer can work only with a fixed resolution.
Finally, one has `(K, K, C_in, C_out)` parameters, which, for a kernel size `K` much smaller than the input resolution, results in a significant drop in the number of variables.
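Plugging the numbers from the question into these two counting schemes gives a quick arithmetic check (the shapes are taken from the question above):

```python
# 96 kernels of size 11x11x3 producing a 55x55x96 output volume.
H_out, W_out, C_out = 55, 55, 96
K, C_in = 11, 3

# Without sharing: every output neuron has its own 11*11*3 weights + 1 bias.
neurons = H_out * W_out * C_out              # 290,400 neurons
no_sharing = neurons * (K * K * C_in + 1)
print(no_sharing)                            # 105,705,600

# With sharing: one 11x11x3 kernel (+ 1 bias) per depth slice.
with_sharing = C_out * (K * K * C_in + 1)
print(with_sharing)                          # 34,944
```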
Upvotes: 2 <issue_comment>username_2: The same features might appear at any location in the image. For a face recognition model, if the data is not very good, it might happen that, along with learning features such as eyes and lips, the model also learns their locations if we only apply the convolution operation at a specific locality. However, if we slide the same filter across the whole image, it will learn the features and not just "remember" the locations. This concept is what is referred to as parameter sharing. This is not the case in the fully connected layers of neural networks, where the size of the weight matrix is tied to the input and output activations and weights are not reused across positions. Parameter sharing is an inherent phenomenon in CNNs.
Upvotes: 1 |
2021/06/18 | 532 | 2,226 | <issue_start>username_0: What are some recent books that introduce AI and neural networks while also discussing the related philosophical issues, like epistemology and whether AI is really thinking, etc.? |
2021/06/19 | 598 | 2,213 | <issue_start>username_0: In the [famous work on the Visual Transformers](https://arxiv.org/abs/2010.11929), the image is split into patches of a certain size (say 16x16), and these patches are treated as tokens in the NLP tasks. In order to perform classification, a **CLS** token is added at the beginning of the resulting sequence:
$$
[\textbf{x}\_{class}, \textbf{x}\_{p}^{1}, \ldots, \textbf{x}\_{p}^{N}]
,$$
where $ \textbf{x}\_{p}^{i}$ are image patches. There multiple layers in the architecture and the state of the **CLS** token on the output layer is used for classification.
I think this architectural solution is done in the spirit of NLP problems (BERT in particular). However, for me, it would be more natural not to create a special token, but to perform *1d Global Pooling* at the end and attach an `nn.Linear(embedding_dim, num_classes)`, as a more conventional CV approach.
Why is it not done in this way? Or is there some intuition or evidence that this would perform worse than the approach used in the paper?<issue_comment>username_1: In their [official repository](https://github.com/google-research/vision_transformer), the authors confirmed in an issue that the CLS token is not really important for the ViT, but they wanted to keep the ViT exactly the same as the NLP Transformer, so they added it.
Issue link: <https://github.com/google-research/vision_transformer/issues/61#issuecomment-802233921>
Upvotes: 2 [selected_answer]<issue_comment>username_2: >
> However, for me, it would be more natural not to create a special token, but perform 1d Global Pooling in the end, and attach an nn.Linear(embedding\_dim, num\_classes) as more conventional CV approach.
>
>
>
Surprisingly, this approach was experimented with in the recent paper `Better plain ViT baselines for ImageNet-1k` and implemented as the default in the `torch-vit` repository.
Along with other trivial improvements, the authors actually outperform other methods such as DeiT or ViT with strong data augmentation. The modification you described provides a 1.8% accuracy gain in the ablation study.
The paper is essentially 1.5 pages so I definitely recommend having a look!
link: <https://arxiv.org/abs/2205.01580>
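To make the two classification heads concrete, here is a minimal NumPy sketch; the batch size, token count and dimensions below are made up for illustration, not taken from either paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder output: batch of 2 sequences of 1 CLS token + 196 patch
# tokens (a 14x14 grid), embedding dimension 64.
tokens = rng.normal(size=(2, 197, 64))
W = rng.normal(size=(64, 10))  # linear classification head for 10 classes

# Option 1 (ViT paper): classify from the final state of the CLS token (position 0).
logits_cls = tokens[:, 0, :] @ W

# Option 2 (the question): drop the CLS token and global-average-pool the patches.
logits_gap = tokens[:, 1:, :].mean(axis=1) @ W

print(logits_cls.shape, logits_gap.shape)  # (2, 10) (2, 10)
```

Both heads produce logits of the same shape, so switching between them only changes how the sequence is reduced before the linear layer.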
Upvotes: 0 |
2021/06/19 | 1,252 | 4,824 | <issue_start>username_0: I am trying to learn reinforcement learning and I am focusing on value iteration. I am looking at the example of grid world, and I am trying to implement it in Python. While doing this, I encountered the situation in which I had to set the rewards for the agent, but, looking at the theory, I found that each state also has a value, which is found using value iteration.
So, my doubt is: **What is the difference between a reward and a value for a given state? And should the initial values of the states always be set equal to zero?**<issue_comment>username_1: >
> What is the difference between a reward and a value for a given state?
>
>
>
Let us say that an agent took an action from state $A$ and reached state $B$ and got a score $R$. This instantaneous score the agent received on reaching state $B$ is called the *reward*.
Now, let me introduce you to the concept of **return**. Assume that an agent followed a particular trajectory:
```
1. State 1 -> Action 1
2. Reward 1, State 2 -> Action 2
3. Reward 2, State 3 -> Action 3
...
n. Reward n-1, State n (Terminated)
```
Return (often denoted by $G$) is the sum total of all the rewards obtained by starting from State 1 and following a policy.
So, the definition of the **return** is
$$G(s\_1) = R\_1 + R\_2 + R\_3 + ... = \sum\_{i=1}^{\infty}R\_i$$
Sometimes (most often) these sequences never terminate, so we include a discount factor (Greek letter gamma, $\gamma$) to rewards obtained in the future.
The definition of the discounted return $G$ is
$$G(s\_1) = R\_1 + \gamma R\_2 + \gamma^2 R\_3 + ... = \sum\_{i=1}^{\infty}\gamma^{i-1} R\_i $$
$\gamma$ is a number between $0$ and $1$: it defines how much importance the agent gives to long-term rewards. For a smaller value of $\gamma$, more importance is given for short-term rewards.
Now, coming back to your question. A value of the state is the **expected return** for an agent starting from that state and following a particular policy. In the case of stochastic policies (policies that have inherent randomness) and/or for environments with stochastic transition probabilities and/or stochastic rewards, the value is the sum of (the returns of all trajectories multiplied by the probability of taking that trajectory).
>
> And should the initial values of the states always be set equal to zero?
>
>
>
Not necessarily; zero initialization is one of many ways to initialize. Random initialization is another method. It depends on the environment setting.
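As a concrete illustration of the discounted return defined above, here is a minimal sketch; the reward sequence and $\gamma$ values are made up:

```python
# Compute G = R_1 + gamma*R_2 + gamma^2*R_3 + ... for a finite reward list.
def discounted_return(rewards, gamma):
    g = 0.0
    # Iterate backwards, using the recursion G_t = R_t + gamma * G_{t+1}
    for r in reversed(rewards):
        g = r + gamma * g
    return g

rewards = [1.0, 0.0, 2.0]
print(discounted_return(rewards, 1.0))  # undiscounted: 1 + 0 + 2 = 3.0
print(discounted_return(rewards, 0.5))  # 1 + 0.5*0 + 0.25*2 = 1.5
```

Note how the smaller $\gamma$ shrinks the contribution of the later rewards.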
Upvotes: 2 <issue_comment>username_2: Starting with rewards, states don't have rewards in general. A reward is a number returned at a certain step of the MDP. If you arrange things in sequence over a whole time step $s, a, r, s'$ for state, action, reward, next state, then the reward $r$ is allowed to depend on all three of $s, a, s'$, and it can also be from a random distribution of real numbers, not just a single number.
It is however OK to associate a single number reward with each state, for either leaving that state (when it is $s$ or $s\_t$ in the sequence) or arriving in it (when it is $s'$ or $s\_{t+1}$). The rewards should be allocated as fits the problem being solved. They are part of the problem definition.
State values are a way to measure longer term benefits of being in a state, and are often something calculated as part of a solution. The formal definition of state value looks like this:
$$v\_{\pi}(s) = \mathbb{E}\_{\pi}[\sum\_{k=0}^{\infty} \gamma^k R\_{t+k+1} | S\_t=s]$$
In English: "The expected discounted sum of all future rewards when starting from a given state and following a specific policy." The discounted sum is usually called the return or the utility associated with the state.
>
> What is the difference between a reward and a value for a given state?
>
>
>
A state value is composed of many rewards weighted by their probability of occurring in the future. It is a useful summary of possible futures that can be used to make decisions.
>
> And should the initial values of the states always be set equal to zero?
>
>
>
Not necessarily, but zero is a reasonable default if you have nothing else to go on. Alternatives include:
* Best guesses at true values (perhaps from some previous attempt to solve the problem). This may improve speed of convergence depending on how good the guesses are.
* Random values - this may happen if you use a neural network.
* Optimistic values. This is a trick for improving exploration on smaller problems - if you set a value higher than an upper bound on the optimum possible, then an agent following a greedy or near-greedy policy will try to reach the associated state at some point, even if other results are already better than a lower default like zero.
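To tie this back to the grid world in the question, here is a minimal value-iteration sketch on a toy 1x4 grid; the layout, rewards and discount factor are all made up for illustration:

```python
import numpy as np

# States 0..3 on a line; state 3 is terminal and gives reward +1 on entry,
# every other transition gives reward 0. Actions: move left or right.
n_states, gamma = 4, 0.9
V = np.zeros(n_states)             # zero initialization of the state values
for _ in range(100):
    for s in range(n_states - 1):  # state 3 is terminal, its value stays 0
        left, right = max(s - 1, 0), s + 1
        r_right = 1.0 if right == n_states - 1 else 0.0
        # Bellman optimality backup: best action's reward + gamma * V(next state)
        V[s] = max(0.0 + gamma * V[left], r_right + gamma * V[right])
print(V)  # -> approximately [0.81, 0.9, 1.0, 0.0]
```

The single reward at the terminal state propagates backwards through the values, discounted by $\gamma$ per step, which is exactly the reward/value distinction discussed above.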
Upvotes: 3 [selected_answer] |
2021/06/21 | 1,296 | 4,941 | <issue_start>username_0: I made a flowchart for a simplified perceptron leaning algorithm.
[](https://i.stack.imgur.com/HX8Pn.png)
Here is the process of the learning algorithm.
1. Initialize the weights first.
2. Get a training example randomly and make a prediction. If the prediction matches the ground-truth value, then get another training example. If the prediction doesn't match the ground-truth value, update the weights.
3. repeat step 2 until all predictions match the ground-truth value (or other stop criteria)
Is my flowchart a good representation? If not, what are the errors, and what might be improved? |
2021/06/22 | 780 | 3,307 | <issue_start>username_0: Most people seem to assume that we need a human-level AI as a starting point for the Singularity.
Let's say someone invents a general intelligence that is not quite on the scale of a human brain, but comparable to a rat. This AI can think on its own, learn and solve a wide range of problems, and basically demonstrates rat-level cognitive behavior. It's just not as smart as a human.
Is this enough to kickstart the exponential intelligence explosion that is the Singularity?
**In other words, do we really need a human-level AI to start the Singularity, or will an AI with animal-level intelligence suffice?**<issue_comment>username_1: Consider scientists creating a worm-level AI (which supposedly has happened, [scientists have fully simulated a worm's brain](http://openworm.org/), as far as I am aware); Now what? Is that simulation of 302 neurons going to rapidly explode and take over the world? Of course not! You need more than just the baseline intelligence, you require an AI not only with the intelligence to learn, but the infrastructure/capacity to advance to a point where it can then create more infrastructure/capacity for itself. That is when it will be able to explode in a singularity kind of event. This is all speculation of course, so take it as you will.
Upvotes: 0 <issue_comment>username_2: >
> do we really need a human-level AI to start the Singularity, or will an AI with animal-level intelligence suffice?
>
>
>
The requirement from "theory" of the singularity is that:
* The AI is able to design and implement a better AI than itself.
* The trait of being able to design better than itself continues to apply in each iteration.
If both these things hold, then each generation of AIs will continue to improve. This is often assumed by singularity pundits to be an exponential growth curve e.g. each iteration makes +50% compound improvement on whatever measure is being made of intelligence. (Aside: Personally I find this a major weakness in the argument for the singularity being meaningfully possible, that it assumes this growth)
The first item of these two is important to your question. For the singularity to work, there *is* a baseline capability required - the AI needs to be able to design and build other AIs. A general intelligence at the level of an animal - at least any animal intelligence that we are aware of - does not seem capable of this task. It is not clear that humans are even capable of this task when the AI being built has to possess at least some general intelligence.
The term "animal-level intelligence" is tricky. The narrow AIs that we currently build can outperform animals and humans on specific tasks, but in terms of general intelligence they do not score highly (or at all). If we could build one that can outperform humans on a "building an AI" task, it might still have the *general* intelligence of an animal whilst having the capability to bootstrap an iterative process of self-improvement. This does seem like a very dangerous experiment to try though, with idiot-savant-deity AI and [paperclip maximiser](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) scenarios as possible outcomes because the AI's general intelligence lags behind its raw capabilities.
Upvotes: 1 |
2021/06/27 | 913 | 3,872 | <issue_start>username_0: **Conjecture**: regardless of the initial reward function, one of the winning strategies would be to change the reward function to a simpler one (e.g. "do nothing"), thus getting a full reward for each passing unit of time. For such an agent, the only priority would be to prolong its existence (to maximize the overall reward collected).
So:
* The notion of externally defined reward function is incompatible with the concept of self-adjusting AGI.
* Any AGI will always settle on self-preservation as its **only** goal
* It is therefore impossible to create an AGI with benevolence towards humans "build-in". Instead, the problem of AI alignment should be reformulated in terms of "what changes to the environment (the physical reality that AGI shares with humans) would **irreversibly** tie wellbeing of humanity to AGIs existence".
* Since any "kill switch" or similar artificial measure is not irreversible and can be overcome by a super-intelligent agent, the only way to tie AGI existence and human wellbeing is the modification of laws of physics, logic, and reasoning. Which is impossible.
* AI alignment is impossible
What flaws do you see in this line of reasoning? |
2021/06/27 | 1,228 | 4,774 | <issue_start>username_0: The number of lemmas can be used as a rough measure of the number of words in a language. A lemma can have multiple word-form types. This can be understood from the following paragraph, taken from p. 12 of [Regular Expressions, Text Normalization, Edit Distance](https://web.stanford.edu/%7Ejurafsky/slp3/2.pdf)
>
> Another measure of the number of words in the language is the number
> of lemmas instead of wordform types. Dictionaries can help in giving
> lemma counts; dictionary entries or **boldface forms** are a very rough
> upper bound on the number of lemmas (**since some lemmas have multiple
> boldface forms**). The 1989 edition of the Oxford English Dictionary had
> 615,000 entries.
>
>
>
It is also given that a lemma can have multiple boldface forms. What are the boldface forms referred to here? Are they different from wordforms?
If possible, provide an example for lemma having multiple boldface forms.<issue_comment>username_1: It is very confusingly worded, and I would think it's incorrect according to linguistic terminology.
A *lemma* is the canonical form of a word, commonly the infinitive of a verb, the nominative singular of a noun, and the positive of an adjective. The *inflected forms* belonging to a word would the the forms used for other tenses and persons etc for verbs, case and number for nouns, and comparative/superlative for adjectives.
This raises the question of what a *word* is, and there is no satisfactory answer to this, even more than 100 years after the foundation of modern linguistics...
Anyway, the 'boldface form' (a term I have not come across in 30 years as a linguist), refers to *dictionary headwords*, which are lemmas. There are some lemmas that are 'shared' by words which have multiple meanings: the common example in linguistics is *bank*, which can be a financial institution, the side of a river, a term to describe the process of tilting the wings of an airplane in flight, or it can mean to deposit an amount of money in an account, etc. All these words you would find under **bank** in a dictionary, but usually under several different entries. So I guess this is what is meant by "multiple boldface forms". However, these are usually completely unrelated words which by accident share the same spelling; in some cases it could also have been the same word that then developed different meanings.
To summarise: the paragraph you quote is plain wrong/sloppy in its use of terminology, as a dictionary headword *is* a lemma in every dictionary I have seen, but these are not unique, as several different words might have lemmas which are spelled the same way (but they are still *different* lemmas — no single word would have multiple dictionary entries).
For example:
* **bank** (bank, banks), noun: a financial institution
* **bank** (bank, banks), noun: the side of a river
* **bank** (bank, banks, banked, banking), verb: tipping the wings of an airplane
* **bank** (bank, banks, banked, banking), verb: depositing money in an account
We have four lemmas (in bold), two of which have two inflected forms, and the other two have three each. These are also four different words, with a total of four different word forms (*bank* and *banks* are common forms of all words)
Often, to avoid confusion, you would refer to them as $bank\_1$ for the financial institution, and $bank\_2$ for the river bank, etc. to indicate that they are different words.
You can probably see that English has a number of lemmata (which is the proper plural of lemma, since it's of Greek origin) which is by a factor of 3-4 smaller than the number of word types, whereas in other languages this ratio will be a lot smaller, as they have more inflected variants. An English noun has just singular and plural forms, whereas a German noun would have singular and plural across each of four cases (though some of them would share the same word forms).
Upvotes: 2 <issue_comment>username_2: Here are some examples: *reducing* or *reduces* or *reduced* or *reduction* -> *reduce*; *am* or *are* or *is* -> *be*; *n't* -> *not* and *'ve* -> *have*. When using spacy, the token can be referenced to find the lemmatized root.
```
lemmas =[token.lemma_ for token in doc]
lemmas =[lemma for lemma in lemmas
if lemma.isalpha() or lemma == '-PRON-'
]
I use lemma to find parts of speech
```
From Wikipedia:
> In computational linguistics, lemmatisation is the algorithmic process of determining the lemma of a word based on its intended meaning. Unlike stemming, lemmatisation depends on correctly identifying the intended part of speech and meaning of a word in a sentence, as well as within the larger context surrounding that sentence, such as neighboring sentences or even an entire document.
Upvotes: 1 |
2021/06/29 | 380 | 1,386 | <issue_start>username_0: [This article](https://analyticsindiamag.com/what-is-neural-network-pruning-and-why-is-it-important-today/) talks about pruning in the context of convolutional neural networks:
>
> One of the first methods of pruning is pruning entire convolutional filters. Using an L1 norm of the weight of all the filters in the network, they rank them. This is then followed by pruning the ‘n’ lowest ranking filters globally. The model is then retrained and this process is repeated.
>
>
> There also exist methods for implementing structured pruning for a more light-touch approach of regulating the output of the method. This method utilizes a set of particle filters that are the same in number as the number of convolutional filters in the network.
>
>
>
Is pruning only applicable to CNNs?<issue_comment>username_1: No, neural network pruning is applicable to any type of neural network, be it a feed-forward, convolutional or recurrent neural network.
Upvotes: 0 <issue_comment>username_2: No, it is not only applicable to CNNs, but to a wide range of other architectures, [even the hyped Transformers](https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1194/reports/custom/15763707.pdf).
For an extensive survey, I recommend that you have a look at this paper [What is the State of Neural Network Pruning?](https://arxiv.org/abs/2003.03033).
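As a concrete illustration of the L1-norm filter-ranking step quoted in the question, here is a minimal NumPy sketch; the filter shapes and the pruning count are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
filters = rng.normal(size=(16, 3, 3, 3))              # 16 conv filters of shape 3x3x3
scores = np.abs(filters).reshape(16, -1).sum(axis=1)  # L1 norm of each filter
n_prune = 4
keep = np.argsort(scores)[n_prune:]                   # drop the n lowest-ranked filters
pruned = filters[keep]
print(pruned.shape)  # (12, 3, 3, 3)
```

In the iterative scheme described by the article, the model would then be retrained and this rank-and-prune step repeated.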
Upvotes: 1 |
2021/06/30 | 2,146 | 8,653 | <issue_start>username_0: While studying n-gram models, I encountered the terms "statistical model" and "probabilistic model" several times.
This raised a basic doubt for me: can there be a probabilistic model that is not statistical, if we restrict ourselves to models that work on datasets?
In machine learning, we use datasets. Any model that uses a dataset can be called a statistical model, since statistics is a branch of mathematics that tries to find insights related to data.
All the models that calculate probabilities using datasets, for any task, are called (empirical) probabilistic models.
Thus, if I am not wrong, every probabilistic model has to be a statistical model since it uses data. Am I wrong?
Is there any model in literature that is a statistical model but not probabilistic?<issue_comment>username_1: It is purely terminological. A probabilistic model uses probabilities, but one usually does not know what the 'correct' probabilities are, or where they come from. This is where statistics comes into play:
You can *estimate* probabilities from (empirical) data[\*]. For example, if you develop a probabilistic parts-of-speech tagger, you typically need probabilities for a certain word being of a particular class, and transition probabilities, i.e. how likely it is for tag *a* to be followed by tag *b* rather than tag *c*. So you might devise an equation that states that the probability of a token being assigned a particular tag is the product of the probability of the tag given the word and the probability of the tag given the previous two tags.
But you don't know what these probabilities are, and you cannot derive them from any formula. Instead, you get these values by looking at your training data, ie you count how often each event occurs, and normalise it to be in the range $[0..1]$.
In practice therefore, probabilities and statistical likelihoods are pretty much identical; probabilities are the theoretical values used in your model, and the actual values are derived using statistics. To make clear in equations that they aren't strictly the same, probabilities are usually denoted by $p$, whereas estimates based on statistics are marked $\hat{p}$.
[\*] Another way to get probabilities is to calculate them, but looking at large-scale open-ended problems this is not easily possible. That's why you take samples to estimate the values -- this is then called training data.
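As a concrete illustration of estimating $\hat{p}$ by counting events and normalising, here is a minimal sketch of bigram transition estimates; the toy corpus is made up:

```python
from collections import Counter

# Relative-frequency (maximum-likelihood) estimates of p(w2 | w1) from counts.
corpus = "the cat sat on the mat the cat ran".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_hat(w2, w1):
    """Estimated probability that w2 follows w1."""
    return bigrams[(w1, w2)] / unigrams[w1]

print(p_hat("cat", "the"))  # 2 of the 3 "the" tokens are followed by "cat"
```

The "true" probability $p$ is unknown; $\hat{p}$ is what the training data lets us compute.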
Upvotes: 1 <issue_comment>username_2: First of all, I don't know of any textbook that clarifies these terms, but, although I am not a statistician, in addition to [the other answer](https://ai.stackexchange.com/a/28476/2444), one possible way to look at it is as follows.
You use **probability theory** to model your problem. For example, if it's a classification problem, you could define the conditional probability distribution $p(y \mid x)$, which would compute the probability of a label $y$ given an input $x$. In other words, you *assume* that there is a probability distribution of the form $p(y \mid x)$ that generates your data, so here $p$ is the "model". If you want to generate images, for instance, you could model the process that generates them as the marginal distribution $p(x)$, which, ideally, would tell you the probability of sampling a specific image $x$. This is the theory. So, no *observed* data is still involved here. By the way, this is exactly how people usually model the machine learning problems in the sub-field of [*statistical learning theory*](https://www.math.arizona.edu/%7Ehzhang/math574m/Read/vapnik.pdf).
In practice, you need to estimate these probability distributions. To estimate them, you can use data. The type of data depends, of course, on the problem and model. For example, in the case of $p(y \mid x)$, you may need a labelled dataset $D$. So, if you estimate $p(y \mid x)$ with $D$ to obtain $\hat{p}(y \mid x)$, then $\hat{p}$ would be a **statistical model**, in the sense that you estimated it from observed/empirical data. In general, statistics is all about taking data and using it to build "models" that can be used for prediction or forecasting (of future inputs) or inference (i.e. understanding the properties of the data-generating process or probability distribution) or just to compute the so-called "statistics" (hence the name of the field!), such as the "sample average" (i.e. the average of your observe data points, where the "sample" here refers to your dataset of points, which are also sometimes known as "samples", just to make things even more confusing!)
So, let me address your questions and comments, but take my comments below with a grain of salt, because I am not a statistician.
>
> All the models that calculate probabilities using datasets, for any task, are called (empirical) probabilistic models.
>
>
>
To me, this would be a reasonable statement. In this example, you seem to be talking about $\hat{p}(y \mid x)$, which I would also call a probabilistic model, although it's just an estimate of the theoretical/ideal one.
>
> Is there any model in literature that is a statistical model but not probabilistic?
>
>
>
If we follow my reasoning above, initially, if you do not *explicitly* model your problem as the estimation of some probability distribution that generated the data, then we would be estimating something from data (so we would be building a statistical model), but it wouldn't be clear whether this "statistical model" is an estimate of some theoretical/probabilistic one. So, I don't really have a definitive answer to your question. I suppose that any statistical model could be modelled with the tools of probability theory, so I would be more inclined to think that the answer to your question is "no".
In addition to what I just said above, if you take a book like [Machine Learning: A Probabilistic Perspective](http://noiselab.ucsd.edu/ECE228/Murphy_Machine_Learning.pdf), here are a few examples of how the author uses the terms "statistical model" and "probabilistic model". For example, he writes ([section 7.3, page 217](http://noiselab.ucsd.edu/ECE228/Murphy_Machine_Learning.pdf#page=248))
>
> A common way to estimate the parameters of a **statistical model** is to compute the MLE, which is defined as
>
>
> $$
> \hat{\boldsymbol{\theta}} \triangleq \arg \max \_{\boldsymbol{\theta}} \log p(\mathcal{D} \mid \boldsymbol{\theta})
> $$
>
>
>
So, in this case, is $p$ a statistical or probabilistic model, according to my definitions above? Of course, ignoring the potentially different notation being used here to refer to a statistical model, i.e. without the $\hat{}$, I think that this $p$ could be considered a statistical model (in the sense that $\hat{\boldsymbol{\theta}}$ would be estimated from the observed dataset $\mathcal{D}$, assuming it's the observed dataset and not some random variable) but at the same time also a probabilistic one, in the sense that, here, we are assuming that we have some kind of "theoretical likelihood". In any case, the likelihood is something that can make this discussion even more confusing, because [the likelihood is not really a probability distribution (if you integrate with respect to the parameters)](https://mathoverflow.net/a/10978). In any case, here, you could consider $p(\mathcal{D} \mid \boldsymbol{\theta})$ as a (Bayesian) probabilistic model, i.e. you assume that there's some parameters that generate the data and, if you consider it as a conditional probability distribution over the data, rather than the parameters, then this would be consistent with what I said above.
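As a concrete illustration of the quoted MLE definition, here is a minimal sketch for a Bernoulli model; the dataset and the parameter grid are made up, and the grid search recovers the sample mean, as expected for this model:

```python
import numpy as np

data = np.array([1, 0, 1, 1, 0, 1])   # made-up observed dataset D
thetas = np.linspace(0.01, 0.99, 99)  # grid of candidate parameters theta
# log p(D | theta) for a Bernoulli model: k*log(theta) + (n-k)*log(1-theta)
log_lik = (data.sum() * np.log(thetas)
           + (len(data) - data.sum()) * np.log(1 - thetas))
theta_hat = thetas[np.argmax(log_lik)]
print(round(theta_hat, 2))  # the MLE equals the sample mean, 4/6 ~ 0.67
```

Here the probabilistic model is the assumed Bernoulli family $p(\mathcal{D} \mid \theta)$, while $\hat{\theta}$ (and hence $\hat{p}$) is the statistical estimate obtained from the observed data.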
Here's another example (section 1.3.1, p. 10).
>
> In this book, we focus on model based clustering, which means we *fit a **probabilistic model** to the data*, rather than running some ad hoc algorithm.
>
>
>
This usage also seems consistent with my description above. Here, I interpret the part "we *fit a **probabilistic model** to the data*" as "we estimate the probability distribution given the data".
Or, in section 1.4.1 (p. 16)
>
> In this book, we will be focussing on probabilistic models of the form $p(y \mid x)$ or $p(x)$, depending on whether we are interested in supervised or unsupervised learning respectively.
>
>
>
The discussion can become even more complicated, if we start to consider parametric vs non-parametric models, which are mentioned in that same section, where [you make or not assumptions about the data-generating process](https://stats.stackexchange.com/a/237704).
So, to conclude, I think that these terms are often used vaguely and sometimes interchangeably, so the confusion is normal.
Upvotes: 2 |
2021/07/01 | 503 | 1,590 | <issue_start>username_0: I am trying to build a neural network that has an input of $n$ pairs of integer values (where $n$ is random) and a corresponding output of a binary array with length $n$.
The input will be a set of integer value coordinates $[(x\_{1}, y\_{1}), (x\_{2}, y\_{2}), (x\_{3}, y\_{3}), \dots, (x\_{50}, y\_{50}), \dots]$, where each instance can be of various lengths, like $[(x\_{1}, y\_{1}), (x\_{2}, y\_{2}), (x\_{3}, y\_{3}), \dots, (x\_{52}, y\_{52})]$ or $[(x\_{1}, y\_{1}), (x\_{2}, y\_{2}), (x\_{3}, y\_{3}), \dots, (x\_{101}, y\_{101})]$, etc.
The output is a set of binary arrays with each instance having the same length as the corresponding input.
[](https://i.stack.imgur.com/Ufw4S.png)
May I know if anyone has any recommendations on what neural network would fit this use case?<issue_comment>username_1: A recurrent neural network (RNN, specifically either an LSTM or GRU) will work well for variable-length sequences like you’ve described. Assuming the order of the sequence is meaningful (i.e. you can’t just break the sequence up into individual inputs and associated target values), an RNN model will learn how the sequence of inputs maps to the sequence of outputs.
Upvotes: 1 <issue_comment>username_2: Look for [padding](https://www.tensorflow.org/guide/keras/masking_and_padding).
There are even versions of it: pre-padding, post-padding.
<https://stackoverflow.com/questions/46298793/how-does-choosing-between-pre-and-post-zero-padding-of-sequences-impact-results>
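As a framework-free illustration of the idea (`pad_sequences` here is a from-scratch sketch, not the Keras function of the same name):

```python
def pad_sequences(seqs, pad_value=0, padding="post"):
    """Pad variable-length sequences to the length of the longest one."""
    max_len = max(len(s) for s in seqs)
    padded = []
    for s in seqs:
        fill = [pad_value] * (max_len - len(s))
        # post-padding appends the fill; pre-padding prepends it
        padded.append(s + fill if padding == "post" else fill + s)
    return padded

batch = [[5, 2, 9], [7, 1], [3, 8, 4, 6]]
print(pad_sequences(batch, padding="post"))  # [[5, 2, 9, 0], [7, 1, 0, 0], [3, 8, 4, 6]]
print(pad_sequences(batch, padding="pre"))   # [[0, 5, 2, 9], [0, 0, 7, 1], [3, 8, 4, 6]]
```

In practice you would also pass a mask (or use the framework's masking support) so that the padded positions don't contribute to the loss.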
Upvotes: 0 |
2021/07/02 | 1,601 | 6,262 | <issue_start>username_0: I came across a new term, "held-out corpora", and I am confused about its usage in the NLP domain
Consider the following three paragraphs from [N-gram Language Models](https://web.stanford.edu/%7Ejurafsky/slp3/3.pdf)
#1: **held-out corpora as a non-train data**
>
> For an intrinsic evaluation of a language model we need a test set. As
> with many of the statistical models in our field, the probabilities of
> an $n-$gram model come from the corpus it is trained on, the training
> set or training corpus. We can then measure the quality
> of an n-gram model by its performance on some unseen data called the
> test set or test corpus. **We will also sometimes call test sets and
> other datasets that are not in our training sets held out corpora
> because we hold them out from the training data**.
>
>
>
This paragraph clearly says that held-out corpora can be used for anything except training, e.g. for testing or validation.
#2: **development set or devset for hyperparameter tuning**
>
> **Sometimes we use a particular test set so often that we implicitly
> tune to its characteristics. We then need a fresh test set that is
> truly unseen. In such cases, we call the initial test set the
> development test set or, devset**. How do we divide our data into
> training, development, and test sets? We want our test set to be as
> large as possible, since a small test set may be accidentally
> unrepresentative, but we also want as much training data as possible.
> At the minimum, we would want to pick the smallest test set that gives
> us enough statistical power to measure a statistically significant
> difference between two potential models. In practice, **we often just
> divide our data into 80% training, 10% development, and 10% test.**
> Given a large corpus that we want to divide into training and test,
> test data can either be taken from some continuous sequence of text
> inside the corpus, or we can remove smaller “stripes” of text from
> randomly selected parts of our corpus and combine them into a test
> set.
>
>
>
This paragraph clearly says that the development set is used for hyperparameter tuning.
#3: **held-out corpora for hyperparameter tuning**
>
> How are these $\lambda$ values set? Both the simple interpolation and
> conditional interpolation $\lambda'$s are learned from a held-out
> corpus. **A held-out corpus is an additional training corpus that we
> use to set hyperparameters** like these $\lambda$ values, by choosing
> the $\lambda$ values that maximize the likelihood of the held-out
> corpus.
>
>
>
This paragraph clearly says that the held-out corpus is used for hyperparameter tuning.
**I am interpreting or understanding the terms as follows**:
**Train corpus** is used to train the model for learning parameters.
**Test corpus** is used for evaluating the model wrt parameters.
**Development set** is used for evaluating the model wrt hyperparameters.
**Held-out corpus** includes any corpus outside training corpus. So, it can be used for evaluating either parameters or hyperparameters.
To be concise, informally, data = training data + held-out data = training data + development set + test data
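To make the informal decomposition above concrete, here is a minimal sketch of the 80/10/10 split mentioned in paragraph 2 (a plain sequential split, purely illustrative; the book also suggests sampling random "stripes" of text instead):

```python
def split_corpus(sentences, train_frac=0.8, dev_frac=0.1):
    """Split a corpus into train / development / test portions."""
    n = len(sentences)
    n_train = int(n * train_frac)
    n_dev = int(n * dev_frac)
    train = sentences[:n_train]
    dev = sentences[n_train:n_train + n_dev]  # held out: hyperparameter tuning
    test = sentences[n_train + n_dev:]        # held out: final evaluation
    return train, dev, test

corpus = [f"sentence {i}" for i in range(100)]
train, dev, test = split_corpus(corpus)
print(len(train), len(dev), len(test))  # 80 10 10
```

In this view, both `dev` and `test` are "held-out" portions, since neither is seen during training.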
Is my understanding correct? I am confused because paragraph 3 says that the held-out corpus is used (only) for learning the hyperparameters, while paragraph 1 says that held-out corpora include any corpus outside the training corpus. Do held-out corpora include the devset, or are they the same as the devset?<issue_comment>username_1: I think that these terms may be used *inconsistently* across sources.
If someone says *held-out dataset*, I would immediately think of a dataset that is not used for training, but can be used for anything else, validation (hyper-parameter tuning or early stopping) or testing; so, to determine what they are referring to, I would probably take into account the context.
In your second quote, the *development set* seems to be used as a synonym for *validation dataset* (a more common name to refer to the same concept), i.e. the dataset used for early stopping or hyper-parameter optimization (see also [this](https://ai.stackexchange.com/q/5658/2444)).
So, my answer to your question in the title would be
>
> Yes, the heldout dataset can be used for validation or testing, but not because it's a special dataset, but because people may use this term to refer to either the validation dataset or the test dataset.
>
>
>
Here's another example of the usage of the term *held-out set* to refer, in this case, to the validation dataset (section 1.6 of [this famous ML book](http://users.isr.ist.utl.pt/%7Ewurmd/Livros/school/Bishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf))
>
> The results above suggest a simple way of achieving this, namely by taking the available data and partitioning it into a training set, used to determine the coefficients $\mathbf{w}$, and a *separate* validation set, also called a **hold-out set**, used to *optimize the model complexity*
>
>
>
Here's another example that shows that the term may be used inconsistently (emphasis mine, taken from [section 5.3](https://www.deeplearningbook.org/contents/ml.html) of the famous deep learning book by Goodfellow et al.). In fact, in that same section, they refer to the validation dataset, which is distinct from this **held-out test set** (so, in this case, the held-out set is used to refer only to the test set).
>
> Earlier we discussed how a **held-out test set**, composed of examples coming from the same distribution as the training set, can be used to estimate the generalization error of a learner, after the learning process has completed.
>
>
>
Upvotes: 2 [selected_answer]<issue_comment>username_2: In the sample dataset, the variable Sprint is the respondent's time (in seconds) to sprint a given distance, and Smoking is an indicator about whether or not the respondent smokes (0 = Nonsmoker, 1 = Past smoker, 2 = Current smoker). Use ANOVA to test if there is a statistically significant difference in sprint time with respect to smoking status. Sprint time will serve as the dependent variable, and smoking status will act as the independent variable.
Upvotes: 0 |
2021/07/02 | 1,407 | 5,804 | <issue_start>username_0: I saw a [video on Youtube](https://www.youtube.com/watch?v=MwCgvYtOLS0&ab_channel=TwoMinutePapers) about AI and Super Resolution Image Reconstruction with TecoGAN. I must say I am impressed.
Now, I am wondering how reliable this is.
I have learned at university that you lose information if you do not sample to fulfill Nyquist. I also don't think that the example images are in any way sparse...
Is the AI just trying to fill in the blanks by guessing?
This would be fine for entertainment, but probably not so much to enhance robbery pictures and charge people based on enhanced pictures. It also wouldn't be a good solution for improving the resolution of scientific data if it is just "guessing".<issue_comment>username_1: Yes, it's guessing. In the training phase, you show it lots of coarse and detailed pictures, and the algorithm learns a mapping from coarse to detailed. Then you present it a new coarse image, and it executes the same mapping. The information from the original picture is gone, and it cannot be retrieved, so it's filled in by analogy to other cases.
"Guessing" sounds a bit random, so it's more like a very informed guess. A bit like reading lots of books, and then being asked what word comes after "the cat sat on the" -- you're likely to say "mat", and will be right in many cases, but there's no guarantee that the most common word actually does occur. So now just substitute words with pixel values, and add a complex statistical model to make the decision, but you still won't know what the correct element is.
As you rightly say, this is fine for entertainment, but not for serious applications, where missing details in a crime scene are filled in according to how previous similar scenes may have looked.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You actually don't have to lose information if you don't fulfill Nyquist — although that topic is quite advanced and has limitations. Still, super resolution is reliable and used by most 4K TVs today to upscale 1080p video to fit the 4K screen. You may notice TV ads for 4K TVs occasionally mentioning this.
What super resolution does is just generalising shapes. For example, imagine a simple image with just a black rectangle. You can easily imagine enlarging that image to whatever size you want because you know how it's supposed to look. By training a neural network on a lot of images, it can learn to generalise features such as faces and enlarge them.
Of course you can't take a single pixel of a face and expand this into a full 100x100 image. Therefore, it's best to use super resolution to enlarge entire images, not just individual areas of the image. If you were to use it to enlarge a very small patch of text in the image, it may not replicate the text correctly and read as something else.
Furthermore, very good super resolution models are slow. Most also only increase the resolution of a single frame at a time. This means there might be inconsistencies between sequential frames in a video.
Upvotes: 0 <issue_comment>username_3: You were correct in that the model won't be able to reconstruct any missing information with complete certainty (an intuitively impossible task). As username_1 mentioned, it is estimating what a similar image (from the training data it has been exposed to) would have looked like (I should note that in the vast majority of cases, we don't/can't actually know what the model's internal representation looks like or how it works, only how to get there - see [explainable AI](https://en.wikipedia.org/wiki/Explainable_artificial_intelligence)).
In other words, it attempts to match the image it is given to a "[latent space](https://en.wikipedia.org/wiki/Latent_space)" (different terms are sometimes used for the same concept, but you may find the links in the article helpful - also see [this question](https://stats.stackexchange.com/questions/442352/what-is-a-latent-space)) that it has approximated (representing the data it was trained on) and mapped to the output space (i.e., the "high-resolution" images).
In theory it is possible that AI could be used to enhance certain features of crime scene images (e.g., sharpening the contours of a face so a detective can better recognize the person) but again, this only approximates the "true" data. In practical settings, it is more likely that the raw data is fed into a model designed specifically for achieving some end goal (for example, facial recognition) and humans are taken out of the loop once a sufficiently accurate model is available.
Generally, "highly educated guessing" is an appropriate summary for this type of method and you are right in your assumption that this limits its usefulness in certain applications where precision is important. Though this is somewhat tangential, you might find machine learning-based computational fluid dynamics (e.g., [this paper](https://www.sciencedirect.com/science/article/pii/B9780128186992000093)) enlightening; while most machine learning techniques (anything involving neural networks, which are inherently approximators and also often nondeterministic) cannot produce scientifically rigorous simulations in the way traditional algorithms might, they can speed up some of the processing/analysis in the way a cleverly devised heuristic might.
Here are a couple articles you might find informative (some of these are pretty sparse in information relating to machine learning, but most have links to in-depth treatments):
* <https://en.wikipedia.org/wiki/Autoencoder>
* <https://en.wikipedia.org/wiki/Feature_learning>
* <https://en.wikipedia.org/wiki/Noise_reduction#Deep_learning>
* <https://en.wikipedia.org/wiki/Deep_Image_Prior>
* <https://en.wikipedia.org/wiki/Intrinsic_dimension#Applications>
Upvotes: 0 |
2021/07/03 | 705 | 3,019 | <issue_start>username_0: In order to check whether the visitor of a page is a human and not an AI, many web pages and applications have a checking procedure known as CAPTCHA. These tasks are intended to be simple for people, but unsolvable for machines.
However, often some text recognition challenges are difficult, like discerning badly overlapping digits, or telling whether a bus is in the CAPTCHA image.
As far as I understand, so far, robustness against adversarial attacks is an unsolved problem. Moreover, adversarial perturbations are rather generalizable and transferable to various architectures (according to
<https://youtu.be/CIfsB_EYsVI?t=3226>).
This phenomenon is relevant not only to DNNs but also to simpler linear models.
With the current state of affairs, it seems to be a good idea to make CAPTCHAs from such adversarial examples: the classification problem would be simple for a human, without the need to make several attempts to pass the test, but hard for an AI.
There is some research in this field and proposed solutions, but they seem not to be very popular.
Are there some other problems with this approach, or do the owners of websites (applications) prefer not to rely on it?<issue_comment>username_1: Because the examples are fitted to a particular ML model; if you train using different parameters, they probably won't be valid.
Upvotes: 0 <issue_comment>username_2: I think the problem is that this type of attack will only work for the model that was used to produce the perturbations. These perturbations are computed by backpropagating an error for an image of, say, a panda, but with the true label "airplane".
In other words, perturbations are nothing more than gradients indicating in which direction each pixel needs to be changed to make the panda look like an airplane for that particular model. Since the same model will have different weights after each training, this attack will only work for the model used to generate the gradients.
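As a minimal sketch of this gradient-sign idea (a toy one-unit logistic "model" with made-up weights and inputs, in plain Python; real attacks such as FGSM do the same thing against a deep network's loss):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of the positive class for a toy logistic 'model'."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def perturb(w, b, x, target, eps):
    """One FGSM-style step: nudge each input in the direction that lowers
    the cross-entropy loss towards the attacker's (wrong) target label."""
    p = predict(w, b, x)
    # d(loss)/d(x_i) for a logistic unit with cross-entropy is (p - target) * w_i
    grad = [(p - target) * wi for wi in w]
    return [xi - eps * ((gi > 0) - (gi < 0)) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.5, 0.5], 0.1   # fixed, made-up "model" weights
x = [0.4, -0.2, 0.9]           # confidently classified as positive...
x_adv = perturb(w, b, x, target=0.0, eps=0.5)
print(predict(w, b, x), predict(w, b, x_adv))  # ~0.84 drops to ~0.41: misclassified
```

Note that `grad` was computed from this particular `w`, which is exactly the answer's point: the same `x_adv` carries no guarantee against a differently trained model.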
Here is an illustrative example of this idea when training a generator in a GAN:
[](https://i.stack.imgur.com/Bfdcj.png)
Update
------
While we can transfer an adversarial attack from one model to another, this is only possible under strict constraints. To successfully generate perturbations for the target model, we first need to know the dataset that was used to train it. We also need to know the architecture including the activation and loss functions as well as the hyperparameters of this model. Here is a [work](https://arxiv.org/abs/1809.02861v4) in which the authors take a closer look at this topic.
Even though it is possible, in my opinion, using CAPTCHAs does not make sense as these attacks may not work in the real world. For example, if we apply this attack to a road sign to trick the autopilot in a vehicle, the lighting conditions and camera perspective can significantly affect the classification.
Upvotes: 3 [selected_answer] |
2021/07/04 | 1,008 | 4,549 | <issue_start>username_0: It seems that neural networks (NNs) can be applied to supervised learning, unsupervised learning and reinforcement learning. Some people even train neural networks without the set of training data. If NNs are used in reinforcement learning, is it possible that we don't need training data?<issue_comment>username_1: You **cannot** train a neural network without training data. It would be like training a football player without making him/her play/watch football or anything that resembles football: it's simply not possible. The definition of training/learning in machine learning strictly requires data.
You can train a neural network in different ways (e.g. supervised or unsupervised) and with different types of data (e.g. labelled or unlabelled, respectively), but this is a different story. In reinforcement learning, you also have training data, but the data may not be given to the neural network in the same way that it's given e.g. in supervised learning. Still, this does not mean that there is no training data. Of course, there is or must be (by definition)!
However, note that you can use a (e.g. randomly initialised) neural network without training it, but it would probably be a useless neural network. You could also use a neural network that has been trained by someone else with data that you may not have access to anymore (and maybe that answers your question in the title).
Upvotes: 3 <issue_comment>username_2: Neural networks are trained by using pairs of example input/output vectors that they learn to associate and can generalise from. In that sense, they *always* need training data.
For supervised learning, a neural network (NN) is trained on a dataset of example inputs and outputs (aka "a labelled dataset") that the user must provide somehow.
There are scenarios involving neural networks that do not require the user to possess a labelled dataset, or even any dataset at all, but they all have some restriction:
* The NN is already trained, and is now being used to make predictions. You do not need to have access to the original dataset to use a trained NN, although you do need access to some inputs that are of the same type that the network was trained on. The restriction here is that the NN cannot be trained further.
* Semi-supervised learning: In some cases, the desired outputs can be generated automatically from the inputs (they may even be same as the inputs). You still need a dataset of inputs, but may be spared the hard work of adding labels, making it a lot easier to collect a dataset. The restriction here is that this is for specific use cases, such as Generative Adversarial Networks, and is not an approach you can use in general.
* Reinforcement learning (RL). The labelled data for RL are generated through a trial and error process, and do not need to be provided separately by the user. However, the user does need to write the RL code for the environment to allow this data generation to happen. Internally in systems like Deep Q Networks (DQN), the training process looks a lot like supervised learning.
Both RL and semi-supervised learning are special cases of auto-generation of datasets, where the NN is being used to learn a complex function that can be calculated in some other way. As well as the semi-supervised and reinforcement learning cases, NNs have successfully been applied to fluid dynamics and ray tracing problems in this way, where the CPU cost for full calculation is even higher than using a neural network. These scenarios don't require you to possess a labelled dataset before training starts, but do require effort in developing something that generates input/output pairs.
Upvotes: 3 [selected_answer]<issue_comment>username_3: In case the question is if NNs can be **trained** without data, as pointed by others, the answer is negative - any training by definition involves the use of data in some way - supervised, semi-supervised, reward, etc.
However, if the question is whether one can obtain something useful I would think about the following use cases:
1. One can use randomly initialized networks as a random map. The application of this seems to be rather specific, but maybe there are some applications of this.
2. One can add certain evolution to the weights like in the statistical physics system of form:
$$
w\_{n+1} - w\_n = f(w)
$$
Where $f(w)$ can be deterministic or non-deterministic. It is not actually a neural network, but a related concept - <https://en.wikipedia.org/wiki/Boltzmann_machine>.
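Purely as an illustration of the recurrence (with a made-up deterministic choice $f(w) = -0.1\,w$, which just decays the weight towards zero):

```python
def evolve(w0, f, steps):
    """Iterate w_{n+1} = w_n + f(w_n) for a scalar weight."""
    w = w0
    for _ in range(steps):
        w = w + f(w)
    return w

# deterministic choice: f(w) = -0.1 * w drives the weight towards 0
print(evolve(8.0, lambda w: -0.1 * w, 50))  # ~0.041
```

A non-deterministic $f$ would instead add a random term at each step, as in the Boltzmann machine dynamics linked above.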
Upvotes: 1 |
2021/07/07 | 2,643 | 7,501 | <issue_start>username_0: I am following the book "Reinforcement Learning: An Introduction" by <NAME> and <NAME>, and they give an example of a problem for which the value function can be computed explicitly by solving a system of $\lvert S \rvert $ equations that have $\lvert S \rvert $ unknowns. Each of these $\lvert S \rvert$ equations is given by:
$$v\_{\pi}(s) = \sum\_{a} \pi(a\rvert s) \sum\_{s^{\prime}}\sum\_{r} p(s^{\prime}, r \rvert s,a)[r + \gamma v\_{\pi}(s^{\prime})] $$
I am having a hard time understanding how one could solve this system of equations. It seems to me as if each equation consists of a summation of an infinite number of terms and therefore one would not be able to solve them analytically. Could anyone offer any intuition as to how this system of equations could be explicitly solved?<issue_comment>username_1: Provided you have a finite number of states and actions, there will not be an infinite number of terms. Therefore the state and action spaces need to be discrete and finite before the quote from the book applies.
>
> I am having a hard time understanding how one could solve this system of equations.
>
>
>
There are a few techniques for solving simultaneous equations.
However, what I would probably do is number all the state values from $v\_1 = v\_\pi(s\_1)$ to $v\_{N = |\mathcal{S}|} = v\_\pi(s\_N)$, and write out each line in order:
$$v\_1 = w\_{1,1} v\_1 + w\_{1,2} v\_2 + w\_{1,3} v\_3 + ... w\_{1,N} v\_N + r\_1$$
Where $r\_1$ is a constant - it is the expected immediate reward when starting from state $1$, but that is not important. It is the constant offset value you get from resolving the sum that is not multiplied by any $v\_i$ unknown variable.
You can discover the values of $w\_{i,j}$ by expanding the sum in the Bellman equation for each state in turn.
At that point you can build a matrix of the weights, and solve the linear equations [by taking the inverse of the matrix](https://www.mathsisfun.com/algebra/systems-linear-equations-matrices.html).
>
> [from comments] But if the game has no end then theoretically the sum of expected future rewards should be infinite.
>
>
>
The time series definition of $v\_{\pi}(s)$:
$$v\_{\pi}(s) = \mathbb{E}\_{\pi}[\sum\_{k=0}^{\infty} \gamma^k R\_{t+k+1} | S\_t = s]$$
does not appear in the Bellman equation used to establish the linear equations. This is the main benefit of the Bellman equation, it changes the infinite series view of returns into a set of relations that must hold between the value functions of each state.
Upvotes: 2 <issue_comment>username_2: First of all, we assume that we have a **finite MDP**, i.e. the set of states $\mathcal{S}$, the set of actions $\mathcal{A}$ and the set of rewards $\mathcal{R}$ all have a finite number of elements (I didn't think about how the explanations below would extend to other cases, but I suspect you will need differential equations).
For simplicity, let's only consider the value function $v$ (as opposed to the state-action value function $q(s, a)$, but this also applies to $q$). The value function $v$ is defined for **all states**, i.e. it's a function of the form $v : \mathcal{S} \rightarrow \mathbb{R}$ or, with an alternative notation, $v(s), \forall s \in \mathcal{S}$. So, we can define this function as a vector $\mathbf{v}$ of dimension $|\mathcal{S}| = n$, i.e. $\mathbf{v} \in \mathbb{R}^{|\mathcal{S}|}$, where the $i$th element contains the value of the $i$th state (so we need a function that maps states to indices of this vector, but this is trivial).
The fact that you can represent the value function, in this finite MDP, as a vector should already suggest that you can find this value function by solving a linear system of equations.
However, let me show that by starting with the definition of the value function you also provided
\begin{align}
v\_{\pi}(s)
&=
\sum\_{a} \pi(a\rvert s) \sum\_{s^{\prime}}\sum\_{r} p(s^{\prime}, r \rvert s,a)[r + \gamma v\_{\pi}(s^{\prime})] \label{1}\tag{1}, \; \forall s \in \mathcal{S}
\end{align}
which can be expanded as follows
\begin{align}
v\_{\pi}(s)
&=
\sum\_{a} \pi(a\rvert s) \sum\_{s^{\prime}}\sum\_{r} \left[p(s^{\prime}, r \rvert s,a)r + \gamma p(s^{\prime}, r \rvert s,a) v\_{\pi}(s^{\prime}) \right] \\
&=
\sum\_{a} \pi(a\rvert s) \left[ \sum\_{r} \underbrace{\sum\_{s^{\prime}} p(s^{\prime}, r \rvert s,a)}\_{\text{Marginalization of }p \text{ over } s'}r + \gamma \sum\_{s^{\prime}} \underbrace{\sum\_{r} p(s^{\prime}, r \rvert s,a) }\_{\text{Marginalization of }p \text{ over } r} v\_{\pi}(s^{\prime}) \right]\\
&=
\sum\_{a} \pi(a\rvert s) \left[ \sum\_{r} p(r \rvert s, a)r + \gamma \sum\_{s^{\prime}}p(s^{\prime} \rvert s,a) v\_{\pi}(s^{\prime}) \right] \\
&=
\sum\_{a} \pi(a\rvert s) \left[ r(s, a) + \gamma \sum\_{s^{\prime}}p(s^{\prime} \rvert s,a) v\_{\pi}(s^{\prime}) \right] \label{2}\tag{2}, \; \forall s \in \mathcal{S}
\end{align}
where
* $\sum\_{r} p(r \rvert s, a)r = r(s, a)$ (see [this](https://ai.stackexchange.com/a/24511/2444)).
* $\sum\_{s^{\prime}}p(s^{\prime}, r \rvert s,a) = p(r \rvert s, a)$ ([marginalization](https://en.wikipedia.org/wiki/Marginal_distribution))
* $\sum\_{r} p(s^{\prime}, r \rvert s,a) =p(s^{\prime} \rvert s,a) $ (marginalization)
In this form, as in equation \ref{2}, the value function can also be written in a different notation
\begin{align}
v\_{\pi}(s)
&=
\sum\_{a} \pi(a\rvert s) \left[ R\_{s}^a + \gamma \sum\_{s^{\prime}}P\_{ss'}^a v\_{\pi}(s^{\prime}) \right] \label{3}\tag{3}, \; \forall s \in \mathcal{S}
\end{align}
where
* $R\_{s}^a = r(s, a)$
* $P\_{ss'}^a = p(s^{\prime} \rvert s,a)$
We can still write equation \ref{3} in a "simpler" form as follows
\begin{align}
v\_{\pi}(s)
&=
\sum\_{a} \pi(a\rvert s) R\_{s}^a + \gamma \sum\_{s^{\prime}} \sum\_{a} \pi(a\rvert s) P\_{ss'}^a v\_{\pi}(s^{\prime}) \\
&=
R\_{s}^\pi + \gamma \sum\_{s^{\prime}} P\_{ss'}^\pi v\_{\pi}(s^{\prime})
\label{4}\tag{4}, \; \forall s \in \mathcal{S}
\end{align}
where
* $\sum\_{a} \pi(a\rvert s) R\_{s}^a = R\_{s}^\pi$
* $\sum\_{a} \pi(a\rvert s) P\_{ss'}^a = P\_{ss'}^\pi$
We can write the definition of the value function in \ref{4} in matrix form for all states $s \in \mathcal{S}$ as follows
\begin{align}
\begin{bmatrix}
v\_\pi(1) \\
\vdots \\
v\_\pi(n)
\end{bmatrix}=
\begin{bmatrix}
{R}\_1^\pi \\
\vdots \\
{R}\_n^\pi
\end{bmatrix}
+\gamma
\begin{bmatrix}
{P}\_{11}^\pi & \dots & {P}\_{1n}^\pi\\
\vdots & \ddots & \vdots\\
{P}\_{n1}^\pi & \dots & {P}\_{nn}^\pi
\end{bmatrix}
\begin{bmatrix}
v\_\pi(1) \\
\vdots \\
v\_\pi(n)
\end{bmatrix}
\tag{5}\label{5},
\end{align}
which can be written in a more compact form as follows
\begin{align}
\mathbf{v} = \mathbf{r} + \gamma \mathbf{P}\mathbf{v} \tag{6}\label{6},
\end{align}
which is a very compact form of the [Bellman equation](https://ai.stackexchange.com/a/11133/2444) (which is a recursive equation: as you can notice, the $\mathbf{v}$ appears on the left and right of the equals sign) that represents the value function (i.e. the value function can be defined as a recursive equation).
In equation \ref{5}, the **unknowns** are the $|\mathcal{S}| = n$ values of the value function $v$ and there are $n$ equations, so it should now be clear why we can solve this problem by solving a system of equations. Note that here it's assumed that $\pi$, $r(s, a)$ and $p$ are given and known, which, generally, is not the case; that's why we use algorithms like Q-learning.
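As a minimal illustration (a made-up 2-state example, with `numpy` assumed available), the compact form $\mathbf{v} = \mathbf{r} + \gamma \mathbf{P}\mathbf{v}$ can be rearranged to $(\mathbf{I} - \gamma \mathbf{P})\mathbf{v} = \mathbf{r}$ and solved directly:

```python
import numpy as np

# A made-up 2-state example: P is the policy-averaged transition matrix
# and r the expected immediate reward per state (both assumed known).
P = np.array([[0.5, 0.5],
              [0.2, 0.8]])
r = np.array([1.0, 0.0])
gamma = 0.9

# v = r + gamma * P v  =>  (I - gamma * P) v = r
v = np.linalg.solve(np.eye(2) - gamma * P, r)
print(v)  # ~ [3.836, 2.466]
```

Plugging `v` back into the Bellman equation confirms that the relations hold exactly, with no infinite sums involved.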
Upvotes: 2 [selected_answer] |
2021/07/07 | 1,132 | 4,068 | <issue_start>username_0: When we are training a neural network, we are going to determine the embedding size to convert the categorical (in NLP, for instance) or continuous (in computer vision or voice) information to hidden vectors (or embeddings), but I wonder if there are some rules for setting its size?<issue_comment>username_1: I got an answer from this book: [Machine Learning Design Patterns: Solutions to Common Challenges in Data Preparation, Model Building, and MLOps](https://rads.stackoverflow.com/amzn/click/com/1098115783).
>
> If we’re in a hurry, one rule of thumb is to use the fourth root of the total number of unique categorical elements while another is that the embedding dimension should be approximately 1.6 times the square root of the number of unique elements in the category, and no less than 600.
>
>
>
Upvotes: 1 <issue_comment>username_2: In most cases, it seems that the embedding dim is chosen empirically, by trial and error.
Older papers in NLP used 300 conventionally <https://petuum.medium.com/embeddings-a-matrix-of-meaning-4de877c9aa27>. More recent papers used 512, 768, 1024.
One of the factors influencing the choice of embedding size is how you would like different vectors to correlate with each other. In a high-dimensional space, vectors chosen at random will, with probability close to 1, be approximately mutually orthogonal, whereas in low dimensions with many different classes, many vectors will have dot products significantly different from 0.
I think that if one expects many vectors to be correlated, then the dimension shouldn't be very high. Conversely, if each possible key in the embedding is expected to produce a different, unrelated vector, then the dimensionality should be large.
Upvotes: 2 <issue_comment>username_3: There is a rule of thumb that says min(50, num\_categories/2). But this tops out at 100 categories, so what should we do after that? I propose this:
#### When num\_categories <= 1000:
>
> `num_embeddings = min(500, num_categories/2)`
>
>
>
Why? Going off of Spiridon's answer, we want to see how orthogonal a randomly distributed set of N vectors is in a given dimension D. One way to do this is to take each vector's dot-product with every other vector to see which it is most similar to. We can normalize the vectors so that the dot product always lands between -1 and 1. Do this for every vector and you will have a list we can take the average and standard deviation of, which I have graphed below. The x-axis is num\_categories, and the y-axis is the average (blue) and std (red) of the max dot-products. We set num\_embeddings equal to `num_categories/2`.[](https://i.stack.imgur.com/9k2dq.png)
We can see that at around 1000 categories, the change in both becomes fairly flat and there isn't much to gain by adding more. From here, we can slow down the rate at which num\_embeddings grows with the following formula:
#### When num\_categories > 1000:
>
> `num_embeddings = 75.6496 * ln(num_categories + 176.623) - 41.4457`
>
>
>
Plugging 1000 into this formula gives roughly 500, so it's a smooth transition from the rule of thumb above. But this formula extends well beyond 1000: for example, in GPT2 the number of categories was 50257; if we plug this number into the formula we get 778, which is very close to the default number of embeddings that GPT2 actually used: 768. The reason is that this formula is derived by finding, for different num\_categories, the num\_embeddings that would have a similar average and std as GPT2 had with 50257 & 768. The datapoints below define the curve, but as you can see it extrapolates well beyond the final point in the dataset at 10,000:
[](https://i.stack.imgur.com/7FnbN.png)
Here x-axis is num\_categories, and the y-axis is the num\_embeddings found to achieve a combined avg+std of the same as GPT2's settings.
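The two regimes can be written as a single helper; this is my own transcription of the formulas above, not code from the answer:

```python
import math

def suggested_num_embeddings(num_categories):
    """Heuristic embedding size: min(500, n/2) for up to 1000 categories,
    then the logarithmic curve fitted to match GPT2's 50257 -> 768 setting."""
    if num_categories <= 1000:
        return min(500, num_categories / 2)
    return 75.6496 * math.log(num_categories + 176.623) - 41.4457

print(round(suggested_num_embeddings(50257)))  # GPT2's vocabulary size -> 778
```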
Upvotes: 2 |
2021/07/08 | 803 | 3,417 | <issue_start>username_0: I am currently doing a master's in applied mathematics, and I recently got interested in machine learning and artificial intelligence, and I am thinking of going for a Ph.D. in this area. I have a reasonable maths and stats background, but I haven't done any course in ML/AI. Next semester, I am thinking of doing courses in ML (uses the book by Bishop), AI (uses the book by Norvig) and reinforcement learning at my university. Another advanced course in C++ is being offered, which I am also very interested to take, but the problem is it will be very difficult to manage all of these courses together. I have some knowledge of C++ (built some parts of a reasonably big project in the past but got a bit rusty nowadays) and very basic knowledge of Python, though I find Python much easier to learn and use than C++.
So, my question is: how important is C++ if I go for a Ph.D. in ML/AI/CV/NLP, etc.? Should I bother taking the C++ course or be more focused on Python and do the other three courses, i.e., ML, AI, and reinforcement learning?<issue_comment>username_1: Of course, whether or not you will need to know and use C++ depends on the topics you will research during your Ph.D. or job. If you'll need just to use and/or combine some existing ML models (yes, in a Ph.D., you're expected to come up with new ideas/tools), then you probably won't need to know C++, as the most commonly used libraries for machine learning nowadays, such as TensorFlow, Keras, or PyTorch, have their main APIs written in Python (there are also APIs in other languages, but they are typically not as mature as the Python ones). The core of these libraries is or can be written in C++, but you may never need to look at it.
I can say that I also know C++ (of course, not everything or every detail and library, and, of course, my knowledge of it also becomes rusty if I don't use it for a long time), but I rarely need to use my knowledge of C++ to do research in ML or AI (which is what I am currently doing), but, again, it all depends on the topic of your Ph.D. For example, if you wanted to contribute to the progress of [OpenCog](https://github.com/opencog) or if your Ph.D. involved an efficient implementation of some algorithm or data structure, then it may be a good idea to know C++, C, or a programming language like Rust.
Upvotes: 2 <issue_comment>username_2: In my experience, knowledge of any particular programming language does not matter. What matters is that you can quickly pick up the basics of a given language.
In my professional work I have been programming in Scala, Java, Groovy, and now Lisp; I didn't really know any of these languages before working with them (except for Java). But I have been able to pick up a working knowledge of them due to general familiarity with programming (I have been programming in a variety of languages for the past 35 years).
I would assume that knowledge of a specific language becomes relevant if you are actually working on the tools themselves, where advanced proficiency would be required. For applications using existing libraries this is generally not necessary. As long as you can work *with* the language, and are able to diagnose why something didn't work, then you should be fine.
Programming concepts are in my view far more important than a specific language.
Upvotes: 2 |
2021/07/14 | 688 | 2,961 | <issue_start>username_0: I've read on [wiki](https://en.wikipedia.org/wiki/Artificial_general_intelligence) that already in 2017 there were over 40 institutions researching AGI, and I wonder what type of algorithms are being studied and developed in this field.
For example, for comparison with narrow AI, where models/techniques, such as ANNs, CNNs, SVMs, DT/RT, evolutionary algorithms, or reinforcement learning are used, how would AGI models differ? Do they also use these models but in some specialised way or maybe these algorithms are completely new and different from these currently used in narrow AI? |
2021/07/16 | 1,170 | 4,052 | <issue_start>username_0: Consider the following statements from [A Simple Custom Module](https://pytorch.org/docs/stable/notes/modules.html) of PyTorch's documentation
>
> To get started, let’s look at a simpler, custom version of PyTorch’s
> Linear module. This module applies an **affine transformation** to its
> input.
>
>
>
Since the paragraph is saying *PyTorch’s **Linear** module*, I am guessing that affine transformation is nothing but linear transformation.
Suppose $x = [x\_1, x\_2, x\_3,\cdots,x\_n]$ is an input; then a linear transformation of $x$ can be $a \cdot x + b$, where $a$ is an $n-$dimensional vector of real numbers and $b$ is a real number. The dot ($\cdot$) stands for the dot product.
Is affine transformation same as the linear transformation? If yes, then why the name **affine** is used? Does it cover something more or less than linear transformation?<issue_comment>username_1: In *linear algebra*, a [**linear transformation**](https://en.wikipedia.org/wiki/Linear_map) (aka linear map or linear transform) $f: \mathcal{V} \rightarrow \mathcal{W}$ is a function that satisfies the following two conditions
1. $f(u + v)=f(u)+f(v)$ (additivity)
2. $f(\alpha u) = \alpha f(u)$ (scalar multiplication),
where
* $u$ and $v$ vectors (i.e. elements of a [vector space](https://en.wikipedia.org/wiki/Vector_space), which can also be $\mathbb{R}$ [[proof](https://proofwiki.org/wiki/Real_Numbers_form_Vector_Space)], some space of functions, etc.)
* $\alpha$ is a scalar (e.g. which can be a real number, but not necessarily)
* $\mathcal{V}$ and $\mathcal{W}$ are vector spaces (e.g. $\mathbb{R}$ or $\mathbb{R}^2$)
So, any function that satisfies these two conditions is a linear transformation.
In *Euclidean geometry*, $g(x) = ax + b$ is an [**affine transformation**](https://en.wikipedia.org/wiki/Affine_transformation), which is generally **not** a linear transformation as defined in linear algebra. You can easily show that affine transformations are not linear transformations. For example, let $a = 1$ and $b = 2$, so $g(x) = x + 2$; does $g$ satisfy the second condition above for any scalar $\alpha$? No. For example, let $\alpha = 3$; then $g(3x) = 3x + 2$, while $3 g(x) = 3 (x + 2) = 3x + 6 \neq 3x + 2$.
However, in the context of neural networks, when people use the adjective "linear" they are often referring to a line. For example, in linear regression, you can have a bias (the $b$ in the affine transformation $g$ above), which would make the function not a linear transformation, but we still call it linear regression because we fit a **line** (hence the name **line**ar regression) to the data.
So, no, an affine transformation is not a linear transformation as defined in linear algebra, but all linear transformations are affine. However, in machine learning, people often use the adjective linear to refer to straight-line models, which are generally represented by functions that are affine transformations. In [this answer](https://ai.stackexchange.com/a/20380/2444), I also talk about this issue.
Upvotes: 3 <issue_comment>username_2: The fact is you can always express an affine transformation as a linear transformation (more convenient because it is just a matrix/dot product).
For instance, given an input $\textbf{x}=[x\_1, ..., x\_n]$, some weights $\textbf{a} = [a\_1, a\_2, ..., a\_n]$ and a bias $b \in \mathbb{R}$, you can express the **affine** operation $y = \textbf{a}\cdot \textbf{x} + b$ as :
$y = \tilde{\textbf{a}} \cdot \tilde{\textbf{x}}$, with $\tilde{\textbf{a}} = [a\_1, ..., a\_n, b]$ and $\tilde{\textbf{x}} = [x\_1, ..., x\_n, 1]$ (**linear** operation)
When your affine transformation is a function $f:\mathbb{R}^p \rightarrow \mathbb{R}^q$ with $\textbf{y}=f(\textbf{x})=A\textbf{x} + \textbf{b}$, you can use the same trick (by adding a column with the biases at the right end of the weight matrix $A$), so you get: $\textbf{y}=\tilde{A}\tilde{\textbf{x}}$
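A quick numerical check of this trick (the shapes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # weight matrix of an affine map R^4 -> R^3
b = rng.standard_normal(3)        # bias vector
x = rng.standard_normal(4)

y_affine = A @ x + b

# Append b as an extra column of A and a constant 1 to x:
A_tilde = np.hstack([A, b[:, None]])
x_tilde = np.append(x, 1.0)
y_linear = A_tilde @ x_tilde

print(np.allclose(y_affine, y_linear))
```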
I found an example [in this video](https://youtu.be/Q4GNLhRtZNc?t=398), where Andrew Ng uses this trick for a simple Linear Regression.
Upvotes: 2 |
2021/07/19 | 554 | 2,308 | <issue_start>username_0: While reading about [Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html) in PyTorch, I came across a new data type called `half` datatype.
The `half()` method, when called on a Module, casts all floating-point parameters and buffers to the **half datatype**.
It is a 16-bit floating-point number as mentioned [here](https://en.wikipedia.org/wiki/Half-precision_floating-point_format).
It is mentioned in Wikipedia that
>
> It is intended for storage of floating-point values in **applications
> where higher precision is not essential** for performing arithmetic
> computations.
>
>
>
It implies that the precision of parameters (say, the weights of a neural network) is not important in certain applications, and hence one can use the half datatype while implementing a neural network.
Is there any research supporting the claim that the **precision of the weights, i.e., how finely their values are represented, is unimportant** for certain applications?<issue_comment>username_1: It’s a tradeoff allowing you to fit a larger model into a fixed RAM budget (i.e., the size of your GPU). Whether this is a good tradeoff is model- and data-specific, but anecdotally I’ve had good luck with it and usually use half precision to good effect (NLP, mostly).
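A small NumPy illustration of what is given up in that tradeoff (PyTorch's `half()` uses the same IEEE 16-bit format; this toy example is mine, not from the answer):

```python
import numpy as np

w32 = np.array([2048.0, 2049.0, 1e-4], dtype=np.float32)
w16 = w32.astype(np.float16)

print(w32.itemsize, "->", w16.itemsize)  # bytes per weight: 4 -> 2
print(w16)  # float16 has an 11-bit significand: 2049 rounds to the nearest
            # representable value, 2048
```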
Upvotes: 0 <issue_comment>username_2: Yes, research into ultra-low-precision neural networks is generally referred to as network quantization. For example, the weights and activations of an artificial neural network can be quantized down to 4-bit, or in extreme cases even 2-bit and 1-bit (binary neural networks).
This is a good introductory article to start with:
<https://arxiv.org/abs/2106.08295>, which goes into detail how network quantization can be done to the maximum extent of preserving the original network accuracy.
The applications of network quantization are immediate. A network with lower precision would use much less memory and compute power, which is particularly important on edge devices.
However, I do not know whether there are certain ML tasks that are more amenable to quantization than others. Generally speaking, we talk about quantization on a per-network basis. For example, if you quantize ResNet-50, you can generally use this network to run multiple CV tasks like classification, detection, etc.
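As a rough sketch of the basic idea only (uniform affine quantization to 8 bits; this is not the specific scheme from the linked paper):

```python
import numpy as np

def quantize_uint8(w):
    """Uniform affine quantization: map float weights to uint8, then dequantize."""
    scale = (w.max() - w.min()) / 255.0
    zero_point = np.round(-w.min() / scale)
    q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
    w_hat = (q.astype(np.float32) - zero_point) * scale
    return q, w_hat

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, w_hat = quantize_uint8(w)
print(np.abs(w - w_hat).max())  # reconstruction error stays within one quantization step
```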
Upvotes: 1 |
2021/07/21 | 743 | 3,011 | <issue_start>username_0: While reading about [1D-convolutions in PyTorch](https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html), I encountered the concept of `channels`.
```
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
```
Although I encountered this concept of channels earlier, I am confused about channels and might understand them in the wrong manner.
Since the operation we are discussing is a 1D convolution, there will be two lists of numbers as operands: the input list and the filter list. The result is a third list, the feature map (the output list).
They look like this:
[](https://i.stack.imgur.com/WNIXd.png)
The left one is the input list, the middle one is the filter list and the rightmost one is the output list.
Each cell in the input list contains a whole number. Each cell may take a value in the fixed range $[a, b]$ of numbers.
**What is the concept of channels used here? From where the channels are coming? Does the number of channels stand for the number of elements in the corresponding list?**<issue_comment>username_1: Channels can be thought of as alternate numbers in the same space.
As an example, the three colour channels of a typical image are often values for amount of red, green or blue light received from each position within the picture.
Your 1D convolution example has one input channel and one output channel. Depending on what the input represents, you might have additional input channels representing other values measured in the same input space. For all but the most simple problems, you will have multiple output channels. The number of channels in each layer may vary, similarly to how the size of hidden layers in a fully connected neural network can vary.
The term "feature map" means the same as channel, and is typically used to describe the outputs of hidden layers.
To map from N input channels to M output channels requires $N \times M$ filters. Each of the M outputs is connected by a filter to each of the N inputs, and the results of running those N convolutions are summed and passed through a nonlinear activation function to generate an output channel.
Although in the abstract, a channel is a type of dimension, channels are considered as entirely separate to the space which is being processed. So adding channels to your 1D example does not make it a 2D convolutional neural network.
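The $N \times M$ filter wiring described above can be sketched in plain NumPy (the sizes are arbitrary; real frameworks also add a bias and apply the nonlinearity as a separate step):

```python
import numpy as np

def conv1d_multichannel(x, filters):
    """x: (in_channels, length); filters: (out_channels, in_channels, k).
    Each output channel sums the 1D convolutions over all input channels."""
    out_ch, in_ch, k = filters.shape
    length = x.shape[1] - k + 1              # 'valid' convolution, no padding
    y = np.zeros((out_ch, length))
    for m in range(out_ch):                  # one output channel at a time
        for n in range(in_ch):               # N x M filters in total
            for t in range(length):
                y[m, t] += x[n, t:t + k] @ filters[m, n]
    return y

x = np.arange(12, dtype=float).reshape(3, 4)  # 3 input channels, length 4
filters = np.ones((2, 3, 2))                  # 2 output channels, kernel size 2
print(conv1d_multichannel(x, filters))
```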
Upvotes: 2 [selected_answer]<issue_comment>username_2: A common use case for 1d-convolution is to analyse & interpret time series data. Imagine a single sensor that generates a sequence of readings such as `[1 2 3 4 2]`.
That is equivalent to a single channel.
However, it's also possible to have multiple sensors generating readings, such as
```
[ 1 2 3 2 4]
[ 2 3 4 1 2]
[ 3 4 5 1 2]
```
This would be equivalent to 3 channels of time series data that's consumed by a 1d convolution.
Upvotes: 0 |
2021/07/22 | 780 | 2,947 | <issue_start>username_0: I have seen tutorials online saying that you should do data augmentation AFTER doing the train/val/test split. However, when I go online to read some research papers, I see numerous instances of authors saying that they first do data augmentation on the dataset and then split it because they don't have enough data. Is it just that these are silly mistakes, even for papers with many citations, or is this acceptable?
Example: [Research paper](https://www.sciencedirect.com/science/article/pii/S016816991831528X?casa_token=o2NeEVPT1R4AAAAA:<KEY>).
they say:
"Among these selected 480 images, 94 images were collected while changing the viewing angle, including images of 30 young apples, 32 expanding apples, and 32 ripe apples. These 480 images were then expanded to 4800 images using data augmentation methods, yielding the training dataset. The training dataset is used to train the detection model. The remaining 480 images are used as the test dataset to verify the detection performance of the YOLOV3-dense model". |
2021/07/25 | 387 | 1,761 | <issue_start>username_0: Is there a way to select the most important features using PCA? I am not looking for the principal components with the highest scores but a subset of the original features.<issue_comment>username_1: There are better methods for selecting the most important features in a supervised setting. Assuming they are not an option, or you're simply interested in PCA:
Say you originally had 100 features, you applied PCA, and the first 10 PCs explain 95% of the variance.
After applying PCA, you can calculate linear correlations between the top 10 PCs and the original features. I assume some of your features will be highly correlated with some subset of the top 10 PCs. You can draw an arbitrary line and choose the subset of original features that are at least 0.80 linearly correlated with at least one of the top 10 PCs.
Upvotes: 0 <issue_comment>username_2: There is a way to select the subset of the most important features using PCA. The basic idea is to choose variables according to the magnitude (from largest to smallest in absolute values) of their coefficients (loadings).
The loadings measure how much each original variable contributes to each principal component. The larger the loading, the more important the variable is to that principal component.
To select the subset of the most important features, you can choose the variables with the largest loadings for the principal components that explain the most variance in the data.
For example, if you have a dataset with 100 features and you want to select the 10 most important features, you can first apply PCA to the dataset and find the principal components that explain 90% of the variance. Then, you can select the ten variables with the largest loadings for those main components.
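A plain-NumPy sketch of this procedure (the data, the 90% threshold, and the function name are illustrative assumptions, not part of the answer):

```python
import numpy as np

def top_features_by_loadings(X, var_ratio=0.9, n_features=3):
    """Rank the original features by their largest absolute loading on the
    principal components that together explain `var_ratio` of the variance."""
    Xc = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s ** 2) / (s ** 2).sum()
    k = np.searchsorted(np.cumsum(explained), var_ratio) + 1
    scores = np.abs(Vt[:k]).max(axis=0)       # best |loading| of each feature
    return np.argsort(scores)[::-1][:n_features]

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5)) * 0.1       # five weak noise features
X[:, 0] = rng.standard_normal(200) * 3.0      # feature 0 dominates the variance
X[:, 3] = rng.standard_normal(200) * 2.0      # feature 3 comes second
print(top_features_by_loadings(X, n_features=2))
```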
Upvotes: 1 |
2021/07/25 | 1,213 | 4,084 | <issue_start>username_0: I have just dived into deep learning for NLP, and now I'm learning how the BERT model works. What I found odd is why the BERT model needs to have an attention mask. As clearly shown in this tutorial <https://huggingface.co/transformers/glossary.html>:
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
sequence_a = "This is a short sequence."
sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."
encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
encoded_sequence_b = tokenizer(sequence_b)["input_ids"]
padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)
```
Output of padded sequences input ids:
```
padded_sequences["input_ids"]
[[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]
```
Output of padded sequence attention mask:
```
padded_sequences["attention_mask"]
[[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
```
In the tutorial, it clearly states that an attention mask is needed to tell the model (BERT) which input ids need to be attended and which not (if an element in attention mask is 1 then the model will pay attention to that index, if it is 0 then model will not pay attention).
The thing I don't get is: why does BERT have an attention mask in the first place? Doesn't the model need only the input ids? You can clearly see that the attention\_mask has zeros at exactly the indices where the input\_ids are zero. Why does the model need this additional input?
I know that BERT was created in Google's "super duper laboratories", so I think the creators had something in mind and a strong reason for making the attention mask part of the input.<issue_comment>username_1: This is just an implementation issue. One reason is that the Huggingface implementation (which is not the original implementation by Google) wants to strictly separate the tokenization from the modeling. It is a convention that the input sequences are zero-padded, but in theory, it does not have to be so. In the Huggingface implementation, you could use a different tokenizer that pads the sequences with a different number and still get valid masking.
You are right that you can infer the mask from the input IDs at the very beginning (if you know the pad ID), but you need to explicitly use the mask in every single layer. Each layer returns a 3D tensor of floats from which you cannot tell what the padded positions are, so you need to have the explicit mask when calling the next layer. I guess that having the mask everywhere makes the API more consistent.
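That initial inference is indeed trivial when the pad ID is known; for example, with the padded batch from the question (bert-base-cased pads with token id 0):

```python
pad_id = 0  # bert-base-cased pads with token id 0

def infer_mask(batch_ids):
    """Derive the attention mask from padded input ids: 1 = attend, 0 = padding."""
    return [[0 if tok == pad_id else 1 for tok in seq] for seq in batch_ids]

input_ids = [
    [101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655,
     2039, 1190, 1103, 4954, 138, 119, 102],
]
print(infer_mask(input_ids))  # identical to padded_sequences["attention_mask"] above
```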
Upvotes: 3 <issue_comment>username_2: It makes sense to me that BERT does not require an attention mask. BERT is a bi-directional encoder. Each word in a sequence is allowed to "attend" to other words from both the left and right sides. An attention mask would only make sense if the encoder were uni-directional, or in the case of a decoder, where each word is only allowed to attend to the words before it.
I also think that this is related to Huggingface implementation. BERT is an encoder. The pytorch implementation of an encoder can be found [here](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html). As you can see in the "forward" function there, the argument "mask" (in this case it refers to attention mask) is only optional.
Upvotes: 0 <issue_comment>username_3: In the Huggingface implementation, they have made the BERT implementation generic, so that it can be used as either an encoder or a decoder.
For example, for a transformer to act as a decoder, only two things are required:
1. masking future tokens;
2. cross-attention based on the "supplied encoder representations".
Check the code for transformers.BertLMHeadModel, where BERT is used as a decoder, while transformers.BertForMaskedLM uses it as a regular encoder.
Upvotes: 0 |
2021/07/26 | 1,804 | 5,175 | <issue_start>username_0: I have been trying to fit a neural network to a simple function: the mass of a sphere.
I have tried with different architectures, for example, a single hidden layer and two hidden layers, always with 128 neurons each, and training them for 5000 epochs.
The code is the usual one. Just in case, here is one version of it:
```
model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])
,keras.layers.Dense(128, activation="relu")
,keras.layers.Dense(1, activation="relu")])
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
history = model.fit(x, y, validation_split=0.2, epochs=5000)
```
The results are shown in the graphs.
[](https://i.stack.imgur.com/q6Nad.png)
[](https://i.stack.imgur.com/jYAuV.png)
I suspect that I am making an error somewhere, because I have seen that deep learning is able to match complex functions in far fewer epochs. I would appreciate any hint to fix this problem and obtain a good fit with the deep learning model.
In order to make it clear I post the graph's code.
```
rs =[x for x in range(20)]
def masas_circulo(x):
masas_circulos =[]
rs =[r for r in range(x)]
for r in rs:
masas_circulos.append(model.predict([r])[0][0])
return masas_circulos
masas_circulos = masas_circulo(20)
masas_circulos
esferas = [4/3*np.pi*r**3 for r in range(20)]
import matplotlib.pyplot as plt
plt.plot(rs,masas_circulos,label="DL")
plt.plot(rs,esferas,label="Real");
plt.title("Mass of an sphere.\nDL (1hl,128 n,5000 e) vs ground_truth")
plt.xlabel("Radius")
plt.ylabel("Sphere")
plt.legend();
```<issue_comment>username_1: You're trying to learn a cubic function that explodes in values and your issue is **scaling**. I have been able to learn a better approximation by scaling data and using **tanh** as activation function.
Code and result are as below:
[](https://i.stack.imgur.com/pHqhO.png)
The flattening around X=100 happens because of the tanh activation. ReLU will not work better here because scaling produces negative values. You can try playing with the **Leaky ReLU** activation and various **alpha** values.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
def mass_of_sphere(R):
return (4/3) * np.pi * (R**3)
X = np.linspace(1, 120, 500000)
y = [mass_of_sphere(x) for x in X]
X = np.array(X).reshape(-1, 1)
y = np.array(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
scaler_X = StandardScaler()
X_train = scaler_X.fit_transform(X_train)
X_test = scaler_X.transform(X_test)
scaler_y = StandardScaler()
y_train = scaler_y.fit_transform(y_train.reshape(-1, 1))
y_test = scaler_y.transform(y_test.reshape(-1, 1)).reshape(-1)
model = keras.Sequential([keras.layers.Dense(1),
keras.layers.Dense(128, activation = "tanh"),
keras.layers.Dense(1, activation = "tanh")])
early_stopping = keras.callbacks.EarlyStopping(monitor = 'val_loss', patience = 5)
model.compile(optimizer = 'rmsprop', loss = 'mse')
history = model.fit(X_train, y_train, epochs = 100, callbacks=[early_stopping],
batch_size = 2048, validation_data=(X_test, y_test))
y_hat = scaler_y.inverse_transform(model.predict(X_test)).reshape(-1)
y_test = scaler_y.inverse_transform(y_test).reshape(-1)
f, ax = plt.subplots(figsize = (12, 4))
ax.plot(sorted(scaler_X.inverse_transform(X_test).reshape(-1)), sorted(y_test), color = 'blue', label = 'Real')
ax.plot(sorted(scaler_X.inverse_transform(X_test).reshape(-1)), sorted(y_hat), color = 'orange', label = 'DL')
ax.legend()
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: A simpler model seems to be the best option
```
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys2 = np.array([4/3*r*np.pi**3 for r in xs])  # NOTE: this is (4/3)*pi^3*r, linear in r, not the sphere mass (4/3)*pi*r**3
model.fit(xs, ys2, epochs=500,validation_split=0.2)
def masa_circulo(x):
    return 4/3*x*np.pi**3  # NOTE: linear in x, not the cubic sphere mass
```
Testing it graphically
```
x = [x for x in range(1,int(1e6),int(1e3))]
y_masa_circulo = [ masa_circulo(m) for m in x]
y_masa_predicha= [model.predict([m])[0] for m in x]
import matplotlib.pyplot as plt
fig,axes = plt.subplots(1,2)
axes[0].plot(y_masa_circulo);
axes[0].set_title("y_masa_circulo")
axes[0].set_ylabel("y_masa_circulo")
axes[0].set_xlabel("Radio")
axes[1].plot(y_masa_predicha);
plt.title("y_masa_circulo")
plt.title("y_masa_predicha");
axes[1].set_ylabel("y_masa_predicha");
axes[1].set_xlabel("Radio");
```
[](https://i.stack.imgur.com/3Scb1.png)
No need of scaling the data.
Upvotes: 0 |
2021/07/27 | 405 | 1,631 | <issue_start>username_0: We need to use a loss function for training the neural networks.
In general, the loss function depends only on the desired output $y$ and actual output $\hat{y}$ and is represented as $L(y, \hat{y})$.
As per my current understanding,
>
> Regularization is nothing but using a new loss function
> $L'(y,\hat{y})$ which must contain a $\lambda$ term (formally called
> the regularization term) for training a neural network and can be
> represented as
>
>
> $$L'(y,\hat{y}) = L(y, \hat{y}) + \lambda \ell(.) $$
>
>
> where $\ell(.)$ is called regularization function. Based on the
> definition of function $\ell$ there can be different regularization
> methods.
>
>
>
Is my current understanding complete? Or is there any other technique in machine learning that is also considered a regularization technique? If yes, where can I read about that regularization?<issue_comment>username_1: Regularization is not limited to methods like L1/L2 regularization which are specific versions of what you showed.
Regularization is any technique that would prevent the network from overfitting and help it generalize better to unseen data. Some other techniques are Dropout, Early Stopping, Data Augmentation, and limiting the capacity of the network by reducing the number of trainable parameters.
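For the specific $L'(y,\hat{y}) = L(y, \hat{y}) + \lambda \ell(\cdot)$ family the question describes, a minimal sketch with an L1/L2 penalty on the weights might look like this (my own toy example, with MSE as the base loss):

```python
import numpy as np

def mse(y, y_hat):
    return np.mean((y - y_hat) ** 2)

def regularized_loss(y, y_hat, w, lam=0.01, kind="l2"):
    """L'(y, y_hat) = L(y, y_hat) + lam * l(w), with l an L1 or L2 penalty."""
    penalty = np.sum(w ** 2) if kind == "l2" else np.sum(np.abs(w))
    return mse(y, y_hat) + lam * penalty

y = np.array([1.0, 2.0])
y_hat = np.array([1.5, 2.0])
w = np.array([3.0, -4.0])             # current weights of the model
print(regularized_loss(y, y_hat, w))  # 0.125 + 0.01 * 25 = 0.375
```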
Upvotes: 3 [selected_answer]<issue_comment>username_2: Also, keep in mind that not just any augmentation of the loss function is a regularization.
For example, you can add terms to a loss function that enforce constraints on the solution but do not prevent overfitting nor facilitate generalization.
Upvotes: 1 |
2021/07/28 | 203 | 944 | <issue_start>username_0: If the validation set is used to tune the hyperparameters and the training set adjusts the weights, why don't they be one thing as they have a similar role, as in improving the model? |
2021/07/28 | 1,569 | 5,310 | <issue_start>username_0: A tensor is an ordered collection of elements. The elements are generally real numbers. Tensors are used in deep learning for storing data.
The word "axis" is widely used in connection with tensors. Axes are not the same as indices, which are used to access the elements of a tensor. An axis is also not the same as an element of a tensor.
What exactly is an axis in a tensor? Is it also a (sub-)tensor obtained from the actual tensor? Or is it some other indexing mechanism? If so, why is it used?
Suppose $a =[[1, 2, 3, 4],[5, 6, 7, 8],[9, 10, 11, 12],[13, 14, 15, 16]]$ is a tensor. Then what do the axes mean for $a$?<issue_comment>username_1: Imagine the tensor as some generalized $n$-dimensional hyperrectangle sliced into $n$-dimensional hypercubes. Each element of the tensor is labeled by its position along each axis, say $(x\_1, x\_2, \ldots)$.
An axis is not a property of the tensor itself; rather, the tensor is embedded in an $n$-dimensional space, where the axes are chosen along the sides of the hyperrectangle corresponding to the tensor.
There are many operations that can be applied axiswise to a tensor. Several examples:
* Mean along the axis (choose $0$ without loss of generality). Given the $n$-dimensional tensor $x\_{i\_1, i\_2 \ldots i\_n}$ , the result will be $n-1$-dimensional tensor $\frac{1}{N\_0} \sum\_{i\_1} x\_{i\_1 i\_2 \ldots i\_n}$, where $N\_0$ is the number of elements of the tensor along the $0$-th axis (height).
* Standard deviation along the axis. For each index $i\_2 \ldots i\_n$, calculate $\sqrt{\frac{1}{N\_0} \sum\_{i\_1} (\bar{x}\_{i\_2 \ldots i\_n} - x\_{i\_1 i\_2 \ldots i\_n})^2}$, where $\bar{x}$ is the mean from previous point, and the result will be again $n-1$-dimensional tensor.
For your example, $a$ is a $2$-dimensional tensor with $2$ axes. The $0$-th axis corresponds to rows, and the $1$-st axis corresponds to columns.
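Applying the axiswise operations above to the $a$ from the question looks like this in NumPy (a small illustrative sketch):

```python
import numpy as np

a = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12],
              [13, 14, 15, 16]])

# Mean along axis 0: collapse the rows, leaving one value per column -> shape (4,)
col_means = a.mean(axis=0)

# Standard deviation along axis 0, matching the formula in the first bullet
col_stds = a.std(axis=0)
```

Both results are $1$-dimensional tensors with $4$ elements, one per column, as the $n-1$-dimensional description above predicts.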
Upvotes: 1 <issue_comment>username_2: **In machine learning**, a tensor is a multi-dimensional array (i.e. a generalization of a matrix to more than 2 dimensions), which has some properties, such as the number of dimensions or the shape, and to which you can apply operations (for example, you can take the mean of all elements across all dimensions). So, a scalar is a 0-d tensor (no dimensions), a vector is a 1d tensor (1 dimension), a matrix a 2d tensor (2 dimensions), a cube a 3d tensor, and so on.
The name TensorFlow really comes from the fact that TensorFlow manipulates tensors all the time. The [documentation of TensorFlow](https://www.tensorflow.org/guide/tensor) describes tensors as follows
>
> Tensors are multi-dimensional arrays with a uniform type (called a dtype)
>
>
> If you're familiar with NumPy, tensors are (kind of) like np.arrays.
>
>
>
In this context, you can think of the **axis** as an abstraction for dealing with or manipulating the dimensions (aka shape) of a tensor, or for applying certain operations "across dimensions". It's not exactly a synonym for dimension because, in some software libraries (e.g. Keras and NumPy), [you can also pass special arguments such as `-1` to the `axis` parameter](https://stackoverflow.com/q/47435526), which does not correspond to any dimension (there's no $-1$ dimension of a tensor). It's a way to apply certain operations to the tensor only across e.g. the first dimension (axis 0) or the second dimension (axis 1), and so on.
For example, let's say you have a matrix (2d tensor) $A$. You want to compute the average (aka mean) of the rows, i.e. sum all elements of row $i$, then divide by the number of elements (which corresponds to the number of columns in row $i$), and do this for all rows $i=1, \dots, K$. You can do this in NumPy by specifying that you want to apply the mean operation to `axis=1`. So, after this operation, you will get a 1d tensor (a vector) with $K$ elements (i.e. the number of rows of the original matrix): this is what I mean by "across dimensions".
Here's a NumPy example that illustrates the concept.
```
import numpy as np
A = np.array([[1, 2, 3],
[1, 1, 1],
[0, 1, -1]])
# (1 + 2 + 3 + 1 + 1 + 1 + 0 + 1 + (-1)) / 9
m = np.mean(A)
print("Mean of all elements of A =", m)
# [(1 + 2 + 3)/3, (1 + 1 + 1)/3, (0 + 1 + (-1))/3]
m1 = np.mean(A, axis=1)
print("Mean of each row =", m1)
# [(1 + 1 + 0)/3, (2 + 1 + 1)/3, (3 + 1 + (-1))/3]
m2 = np.mean(A, axis=0)
print("Mean of each column =", m2)
```
Why would you want to compute (in this case) the mean of each row? Because, for example, each row $i$ may correspond to some data associated with a user $i$, so you may want to compute e.g. the average salary for each user for all months (in this case, there are only 3 columns, so 3 months).
Tensors did not originate in machine learning. In fact, in mathematics, [tensors are well-known objects](https://en.wikipedia.org/wiki/Tensor), which have some properties, which may be different than the properties associated with the tensors implemented in libraries like TensorFlow, which are really just multi-dimensional arrays with the necessary properties and methods for machine/deep learning. Tensors are also everywhere in quantum computing.
[This answer](https://ai.stackexchange.com/a/22202/2444) that I wrote a while ago could also be useful.
Upvotes: 3 [selected_answer] |
2021/07/29 | 1,836 | 6,784 | <issue_start>username_0: The following is the abstract for the research paper titled [Improved Training of Wasserstein GANs](https://arxiv.org/pdf/1704.00028v3.pdf)
>
> Generative Adversarial Networks (GANs) are powerful generative models,
> but suffer from training instability. The recently proposed
> Wasserstein GAN (WGAN) makes progress toward stable training of GANs,
> but sometimes can still generate only poor samples or fail to
> converge. We find that **these problems are often due to the use of
> weight clipping in WGAN to enforce a Lipschitz constraint on the
> critic, which can lead to undesired behavior.** We propose an
> alternative to clipping weights: penalize the norm of gradient of the
> critic with respect to its input. Our proposed method performs better
> than standard WGAN and enables stable training of a wide variety of
> GAN architectures with almost no hyperparameter tuning, including
> 101-layer ResNets and language models with continuous generators. We
> also achieve high quality generations on CIFAR-10 and LSUN bedrooms.
>
>
>
Here, the critic stands for the discriminator of the GAN. I understood that the discriminator must obey a **Lipschitz constraint**, and hence weight clipping was generally done before this paper. The paper provides an alternative way, penalizing the norm of the gradient of the critic with respect to its input, to enforce the desired Lipschitz constraint.
What actually is the Lipschitz constraint and why is it mandatory for a discriminator to obey it?<issue_comment>username_1: The [Lipschitz constraint](https://en.wikipedia.org/wiki/Lipschitz_continuity) is essentially that a function must have a maximum gradient. The specific maximum gradient is a hyperparameter.
It's **not** mandatory for a discriminator to obey a Lipschitz constraint. However, in [the WGAN paper](https://arxiv.org/pdf/1701.07875.pdf) they find that *if* the discriminator *does* obey a Lipschitz constraint, the GAN works much better.
A perfect discriminator would perfectly accept all the real samples (output=1 with low gradient) and perfectly reject all the fake samples (output=0 with low gradient), but this provides hardly any gradient information to help train the generator. By limiting the gradient of the discriminator, they force it to be a worse discriminator, but provide more gradient information which helps train the generator.
You can see this in figure 2 (page 9) of the WGAN paper. The red line is a good discriminator but its gradient is nearly 0 at most points. The cyan line (which for some reason is presented upside-down) is clearly much worse as a discriminator, but is much better for training the generator because its gradient is not zero.
The way they limit the discriminator's *overall* gradient in the WGAN paper is by separately limiting (clipping) each of the weights in the discriminator. They are aware this is a "terrible" idea, and leave better ideas for further research. The paper you are asking about is one of those better ideas.
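The clipping itself is a one-liner applied after each optimizer step; a minimal NumPy sketch (the clip threshold `c` is a hyperparameter, and `0.01` here is only illustrative):

```python
import numpy as np

def clip_weights(weights, c=0.01):
    """WGAN-style weight clipping: force every weight into [-c, c],
    crudely bounding how steep the critic function can be."""
    return [np.clip(w, -c, c) for w in weights]
```

Because every weight is bounded independently, the overall Lipschitz constant of the network is bounded too, but only very loosely, which is part of why the paper calls this approach "terrible".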
Note that since back-propagation works backwards, enforcing a *maximum* gradient (Lipschitz continuity) on the discriminator in the *forward* direction causes it to have a *minimum* gradient in the *backward* direction, which is what we want when training the generator.
Upvotes: 3 <issue_comment>username_2: Optimizing a traditional GAN is equivalent to minimizing KL-divergence, which is known to have limitations. Wasserstein metric promises to remedy these limitations, and is defined as follows:
$$
W(P\_r, P\_g) = \inf\_{\gamma \in \prod(P\_r ,P\_g)} \mathbb{E}\_{(x, y) \sim \gamma}\big[\:\|x - y\|\:\big]
$$
However, computing $\prod(P\_r, P\_g)$ for all possible joint probability distributions is extremely intractable. Thus the authors proposed a smart transformation of the formula based on the [Kantorovich-Rubinstein duality](https://en.wikipedia.org/wiki/Wasserstein_metric#Dual_representation_of_W1), defined as:
$$
W(P\_r, P\_\theta) = \sup\_{\|f\|\_L \leq K}
\mathbb{E}\_{x \sim P\_r}[f(x)] - \mathbb{E}\_{x \sim P\_\theta}[f(x)]
$$
For this to work, $\| f \|\_L \leq K$ must hold, which means that $f$ must be K-Lipschitz continuous. $f$ is called K-Lipschitz continuous if there exists a real constant $K \geq 0$ such that, for all $x\_1, x\_2 \in \mathbb{R}$,
$$\lvert f(x\_1) - f(x\_2) \rvert \leq K \lvert x\_1 - x\_2 \rvert$$
Here $K$ is known as a Lipschitz constant for the function $f$. [Lipschitz continuity](https://en.wikipedia.org/wiki/Lipschitz_continuity) ensures that the derivative of $f$ is less than or equal to K everywhere (or to 1 for 1-Lipschitz). This can be illustrated as follows:
[](https://upload.wikimedia.org/wikipedia/commons/5/58/Lipschitz_Visualisierung.gif)
>
> For a Lipschitz continuous function, there exists a double cone
> (white) whose origin can be moved along the graph so that the whole
> graph always stays outside the double cone
>
>
>
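The defining inequality can also be checked numerically on sample points (a sketch; the sample grid and the test functions are arbitrary, and passing such a test only fails to disprove Lipschitz continuity rather than proving it):

```python
def is_k_lipschitz(f, xs, K):
    """Empirically test |f(x1) - f(x2)| <= K * |x1 - x2| over a grid of points."""
    vals = [(x, f(x)) for x in xs]
    return all(abs(f1 - f2) <= K * abs(x1 - x2) + 1e-12
               for x1, f1 in vals for x2, f2 in vals)
```

For example, $|x|$ is $1$-Lipschitz everywhere, while $x^2$ is not $1$-Lipschitz on $[-3, 3]$ because its slope exceeds $1$ away from the origin.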
Thus, we have a parametrized family of functions $\{f\_w\}\_{w \in W}$ that are all K-Lipschitz for some $K$. The function $f$ has the role of a non-linear feature map that maximally enhances the differences between the samples coming from the two distributions. The role of the Lipschitz constraint is to prevent $f$ from arbitrarily enhancing small differences. The constraint assures that if two input images are similar the output of $f$ will be similar as well. Without this constraint, the result would be zero when $P\_r$ is equal to $P\_g$ and $\infty$ otherwise, since the effect of any minor difference can be arbitrarily enhanced by an appropriate feature map.
The supremum over $K$-Lipschitz functions is still intractable, but now $\{f\_w\}$ can be approximated using a neural network. For optimization purposes, it is not necessary to know what $K$ is. It is enough to know that it exists and that it is fixed throughout the training process (read this [article](https://www.alexirpan.com/2017/02/22/wasserstein-gan.html) for more details).
Upvotes: 2 <issue_comment>username_3: Lipschitz continuity is a mathematical property that measures how fast a function can change as its input changes.
In the context of GANs, it is desirable to enforce Lipschitz continuity on the discriminator function to ensure that it does not change too quickly and cause instability during training.
The critic's neural network needs to be 1-L (1-Lipschitz) continuous when using W-Loss, which means that the norm of its gradient can't be greater than one at any point. All points on the function are considered when evaluating 1-Lipschitz continuity.
Intuitively, this means that a Lipschitz continuous function can never vary too rapidly.
Upvotes: 0 |
2021/08/01 | 1,769 | 6,387 | <issue_start>username_0: I have enrolled in a [course](https://jovian.ai/kukuquack/04-feedforward-nn) that uses only one hidden layer, and that is the only layer that has activation functions. The model can be visualized as follows:
[](https://i.stack.imgur.com/njZsxm.png)
and here is a PyTorch implementation:
```
import torch.nn.functional as F
from torch import nn

class MnistModel(nn.Module):
    """Feedforward neural network with 1 hidden layer"""

    def __init__(self, in_size, hidden_size, out_size):
        super().__init__()
        # hidden layer
        self.linear1 = nn.Linear(in_size, hidden_size)
        # output layer
        self.linear2 = nn.Linear(hidden_size, out_size)

    def forward(self, xb):
        # Flatten the image tensors
        xb = xb.view(xb.size(0), -1)
        # Get intermediate outputs using hidden layer
        out = self.linear1(xb)
        # Apply activation function
        out = F.relu(out)
        # Get predictions using output layer
        out = self.linear2(out)
        return out
```
shouldn't the output layer also have activation functions? |
2021/08/02 | 1,752 | 6,448 | <issue_start>username_0: Loss functions let us calculate a loss, which we then use to update the weights of a neural network; they are thus central to training neural networks.
Consider the following excerpt from [this](https://ai.stackexchange.com/questions/28877/what-are-the-necessary-mathematical-properties-to-be-a-loss-function-in-gradient) answer
>
> In principle, differentiability is sufficient to run gradient descent. **That said, unless $L$ is convex, gradient descent offers no guarantees of convergence to a global minimiser**. In practice, neural network loss functions are rarely convex anyway.
>
>
>
It implies that the convexity property of loss functions is useful in ensuring convergence when using the gradient descent algorithm. There is [another, narrower version of this question](https://ai.stackexchange.com/questions/28288/in-logistic-regression-why-is-the-binary-cross-entropy-loss-function-convex) dealing with the cross-entropy loss. But this question is, in fact, general and is not restricted to a particular loss function.
How to know whether a loss function is convex or not? Is there any algorithm to check it? |
2021/08/03 | 894 | 3,684 | <issue_start>username_0: I have a segmentation model which outputs a one-channel image (2-class segmentation). I have mostly used the Dice score, but now higher powers in my team want me to expand the evaluation metrics for the segmentation model (if that's even possible). From my research so far, I have found that mainly everybody uses the Dice score, and sometimes pixel-to-pixel binary accuracy, but the latter seems not the best idea.
If anybody knows something exciting or useful, I'd be glad to hear from them.<issue_comment>username_1: Typical metrics used with segmentation problems are Recall, Precision and the F1 Score (similar to or the same as the Dice score, depending on the definition used). These can be evaluated per class or for all classes together, commonly referred to as micro and macro averages.
Taking it further, you may wish to have a metric more robust to changes in the threshold. Here the Area under the Curve ([AUC](https://towardsdatascience.com/understanding-auc-roc-curve-68b2303cc9c5)) metric is commonly used.
For a more sophisticated analysis you may also be interested in perceptual losses. These quantify how similar an image looks as perceived by people. This is particularly useful if, say, the shape of the prediction is important but small shifts or scaling do not matter. Have a look at [SSIM](https://en.wikipedia.org/wiki/Structural_similarity) and [LPIPS](https://github.com/richzhang/PerceptualSimilarity) losses for more information on these.
[TorchMetrics](https://torchmetrics.readthedocs.io/en/latest/references/modules.html) may be a good place to look for implementations and available metrics.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I agree fully with @a crazy Minon's answer. I will just slightly expand on it and provide a couple of additional references.
While Dice is a popular metric for evaluating segmentation, it is certainly not the only one. You are right in thinking that pixel accuracy is a poor choice of evaluation metric. The main issue is that it performs poorly when there is class imbalance, which is often the case in imaging data.
I will add that Intersection over Union (IoU) is another metric that is frequently employed to evaluate segmentation performance. It is also known as the Jaccard Index. The articles ["Metrics to Evaluate your Semantic Segmentation Model"](https://towardsdatascience.com/metrics-to-evaluate-your-semantic-segmentation-model-6bcb99639aa2) and ["All the segmentation metrics!"](https://www.kaggle.com/code/yassinealouini/all-the-segmentation-metrics) provide good simplified introductions to various commonly used segmentation metrics. While Dice and IoU are similar and are positively correlated, they are not equivalent, as explained by this Cross Validated [answer](https://stats.stackexchange.com/questions/273537/f1-dice-score-vs-iou/276144#276144). New metrics are also being developed--such as the [Boundary Jaccard](http://www.irisa.fr/lagadic/pdf/2017_ppniv_fernandezmoral.pdf)--to overcome limitations of current metrics, and comparisons of these metrics have been published for specific applications (see example [ref](https://www.sciencedirect.com/science/article/pii/S0010482521002912), which lists 33 evaluation metrics for segmentation in Table 1).
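To make the Dice/IoU relationship concrete, both can be computed from binary masks in a few lines (a sketch; the `eps` term only guards against empty masks):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2 * |pred & target| / (|pred| + |target|)."""
    inter = np.logical_and(pred, target).sum()
    return 2 * inter / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """IoU (Jaccard) = |pred & target| / |pred | target|."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / (union + eps)
```

For any pair of masks, Dice is greater than or equal to IoU, and both equal 1 only on a perfect match, which is the positive correlation mentioned above.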
Finally, if your interest is really for one class, then accuracy, sensitivity, and specificity for segmenting that class alone can be useful metrics.
The powers that be at your institution are wise in asking for multiple evaluation metrics because each metric has its limitations and no single metric can fully capture the performance of a segmentation network.
Upvotes: 1 |
2021/08/04 | 1,005 | 3,851 | <issue_start>username_0: Suppose we have a two player game like Tic Tac Toe where the two players take turns to play their moves. It is my understanding that in the game tree that MCTS builds, consecutive levels in the tree correspond to different player's turns.
So, for instance, in the root node it is Player1's turn to play, in the children of the root node it is Player2's turn to play, in the children of those children it is Player1's turn again, etc.
Is that correct?
If so, is it really prudent to treat nodes where it's the enemy's turn to play the same as those where we choose the next action (i.e. by averaging rollout results in backpropagation). Since, it's not us choosing the next action but the enemy, shouldn't we "pick" the minimum "return" (like in minimax) in those cases instead of the average like we do for nodes where we get to pick the next action?
By picking I mean to only count the win ratio of that child node (i.e. the minimum win ratio).
I suspect I am missing something (e.g. that might mess up exploration vs exploitation with UCT) but I can't put my finger on it.
What do you guys think about this?
Edit: Maybe a solution to this is only considering good moves for the opponent? But then again... how do we define good? Heuristics?<issue_comment>username_1: The original (vanilla) MCTS uses **random** rollouts. In some games this is enough to produce a strong agent. However, in most games, using a **heuristic** that finds the opponent's likely moves makes for stronger agents. There is another line of practice that uses **Opponent Modeling** to predict the opponent's moves. That is important in games where you have several opponent "types" or when an opponent can pursue different goals.
From my experience, a good heuristic can greatly improve the agent. I have implemented UCT agents for Spades (the card game). I made a vanilla UCT and one that uses a different (simpler) agent as a heuristic. The second UCT is stronger.
Picture from [wiki:MCTS](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search)
[](https://i.stack.imgur.com/wZAqy.png)
The four phases of MCTS:
>
> **Selection**: Start from root R and select successive child nodes until a leaf node L is reached. The root is the current game state and a leaf is any node that has a potential child from which no simulation (playout) has yet been initiated. The section below says more about a way of biasing choice of child nodes that lets the game tree expand towards the most promising moves, which is the essence of Monte Carlo tree search.
>
>
>
>
> **Expansion**: Unless L ends the game decisively (e.g. win/loss/draw) for either player, create one (or more) child nodes and choose node C from one of them. Child nodes are any valid moves from the game position defined by L.
>
>
>
>
> **Simulation**: Complete one random playout from node C. This step is sometimes also called playout or rollout. A playout may be as simple as choosing uniform random moves until the game is decided (for example in chess, the game is won, lost, or drawn).
>
>
>
>
> **Backpropagation**: Use the result of the playout to update information in the nodes on the path from C to R.
>
>
>
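Stripped down to a single ply (so the "tree" is just the root's children, which reduces selection to a bandit problem), the four-phase loop quoted above might look like this (a hedged sketch; the moves, rewards, and constants are illustrative):

```python
import math

def mcts_best_move(moves, rollout, n_iter=500, c=1.41):
    """One-ply vanilla MCTS: select a child by UCT, simulate, backpropagate."""
    stats = {m: [0.0, 0] for m in moves}  # move -> [total reward, visits]
    for t in range(1, n_iter + 1):
        def uct(m):
            total, n = stats[m]
            if n == 0:
                return float("inf")  # expansion: try unvisited children first
            return total / n + c * math.sqrt(math.log(t) / n)
        move = max(moves, key=uct)       # selection
        reward = rollout(move)           # simulation (playout)
        stats[move][0] += reward         # backpropagation
        stats[move][1] += 1
    return max(moves, key=lambda m: stats[m][1])  # most-visited child
```

A full implementation repeats the selection step down a growing tree rather than stopping at depth one, but the phase structure is the same.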
Upvotes: 1 <issue_comment>username_2: I think I came up with an intuitively good solution. During the selection phase, if in the current state it's the opponent's turn to play then the winrate in the UCT formula becomes `1.0 - winrate` instead. This should make us invest more in good opponent moves rather than bad ones.
I am implementing MCTS for Quoridor at the moment; I'll update on whether it works better when I am done.
Edit: Yup, this seems to be the way. Now the tree is symmetrical and we can generate moves for the opponent as well.
Upvotes: 1 [selected_answer] |
2021/08/06 | 656 | 2,774 | <issue_start>username_0: I've been studying [geometry](https://en.wikipedia.org/wiki/Geometry) and [linear algebra](https://en.wikipedia.org/wiki/Linear_algebra) for months with the goal to build neural networks. But now I'm reading that perceptrons require fitting curves, and curves are not expressed as linear functions. So, I might need to study [differential geometry](https://en.wikipedia.org/wiki/Differential_geometry) and [calculus](https://en.wikipedia.org/wiki/Calculus) for building good fitting curves in perceptrons.
I already know how to code and was hoping to get my hands dirty by coding a few neural networks. But should I study calculus and differential geometry before coding?
From [this video](https://www.youtube.com/watch?v=CkqGw15GlqA&ab_channel=MatthewSalomone), I understand that the least squares approximation can be used to fit a curve through a set of points, so maybe linear algebra is enough for building good neural networks?<issue_comment>username_1: Neural networks are essentially just repeated matrix multiplications and applications of an *activation function*, so you really don't need a great deal of linear algebra to construct a simple neural network — if you understand how to multiply matrices, that's probably sufficient.
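As a concrete illustration of "repeated matrix multiplications and applications of an activation function", here is a one-hidden-layer forward pass in plain NumPy; the layer sizes and the ReLU activation are arbitrary choices for the example:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def forward(x, params):
    """One hidden layer: h = relu(W1 x + b1), output y = W2 h + b2."""
    W1, b1, W2, b2 = params
    h = relu(W1 @ x + b1)   # matrix multiply + activation
    return W2 @ h + b2      # another matrix multiply

rng = np.random.default_rng(0)
params = (rng.normal(size=(4, 3)), np.zeros(4),   # 3 inputs -> 4 hidden units
          rng.normal(size=(2, 4)), np.zeros(2))   # 4 hidden units -> 2 outputs
y = forward(np.ones(3), params)
```

That really is all there is to the forward computation; the linear algebra involved is just matrix-vector products.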
The harder bit is the training process which is typically done through backpropagation. You need a bit of calculus, but differential geometry is overkill. There are some interesting topics in differential geometry for machine learning, but it's far beyond what is needed to implement backpropagation.
To understand backpropagation you just need to know about the gradient of a function and what this means intuitively; you should also have a good knowledge of the chain rule. That's really all you need, and any course on "multivariable calculus" or something similar would give you more than enough to get started.
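As a tiny illustration of the chain rule at work, here is a hand-derived gradient for a one-neuron "network" with squared-error loss, checked against a finite-difference approximation (a toy example, not tied to any framework):

```python
def loss(w, b, x, y):
    """Squared error of a one-neuron model: L = (w*x + b - y)^2."""
    return (w * x + b - y) ** 2

def grad(w, b, x, y):
    # Chain rule: dL/dw = 2*(w*x + b - y) * x,  dL/db = 2*(w*x + b - y)
    err = w * x + b - y
    return 2 * err * x, 2 * err

def num_grad(w, b, x, y, eps=1e-6):
    # Finite-difference check of the analytic gradient.
    dw = (loss(w + eps, b, x, y) - loss(w - eps, b, x, y)) / (2 * eps)
    db = (loss(w, b + eps, x, y) - loss(w, b - eps, x, y)) / (2 * eps)
    return dw, db
```

Backpropagation is this same chain-rule computation applied layer by layer, with the matrix versions of these derivatives.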
Of course, it never hurts to know more, but fortunately neural networks are simple
enough that you don't need to struggle for years before you can implement a basic neural network; try to get started as soon as you have the basics, and learn the rest as you go.
Upvotes: 5 [selected_answer]<issue_comment>username_2: To give some practical advice, it is important to understand parts of calculus. This is mainly [because Backpropagation is a leaky abstraction in modern libraries](https://karpathy.medium.com/yes-you-should-understand-backprop-e2f06eab496b). In a nutshell, there is a lot which can go wrong (exploding or vanishing gradient for example) and you will need knowledge about gradient descent to handle it.
I highly recommend [<NAME>'s lecture on it](https://www.youtube.com/watch?v=i94OvYb6noo). He gives an easy-to-understand and intuitive explanation.
Upvotes: 1 |
2021/08/06 | 650 | 2,626 | <issue_start>username_0: I was reading this research paper titled 'Image Style Transfer using Convolutional Neural Networks' which as the title suggests was based on Neural Style Transfer. I came across this line which didn't make immediate sense to me.
[](https://i.stack.imgur.com/obxWN.jpg)
Here's how it went -
>
> We reconstruct the input image from layers ‘conv1\_2’ (a), ‘conv2\_2’ (b), ‘conv3\_2’ (c), ‘conv4\_2’ (d) and ‘conv5\_2’ (e) of the original VGG-Network. We find that reconstruction from lower layers is almost perfect (a–c). ***In higher layers of the network, detailed pixel information is lost while the high-level content of the image is preserved (d,e).***
>
>
>
The line that is italicised: why does that happen?
2021/08/06 | 629 | 2,517 | <issue_start>username_0: How does one approach proposing AI to management? This is something I have struggled with for a long time. I want to implement AI toward a specific problem in my place of work. My supervisors are generally willing to listen; but they want to know how the algorithm(s) is going to work. They are not programmers. My tendency is to write out the math and step through it. However, most of them don't want to do that because they have a limited amount of time to sit there and listen. On top of that, some of these algorithms can get somewhat complex.
Lets take a simple neural network for example; how would you explain the way it works without diving into the math?<issue_comment>username_1: There are a lot of ways to describe "Artificial Intelligence".
* An [artifact](https://ai.stackexchange.com/a/7866/1671) that "makes a decision"
This form of automation/computing/AI goes back to [neolithic times](https://ai.stackexchange.com/questions/13865/are-simple-animal-snares-and-traps-a-form-of-automation-of-computation).
Early AI was purely [heuristic](https://en.wikipedia.org/wiki/Heuristic#Artificial_intelligence). (Also known as "good old fashioned AI" aka "Symbolic Intelligence" aka classical expert systems.)
The current generation of strong (narrow) AI is statistical, which encompasses both neural networks and evolutionary/genetic algorithms.
>
> Artificial intelligence is a machine that makes a decision. Modern statistical methods allow these machines to learn and improve their decisions.
>
>
>
Current best AI is "narrowly superintelligent" in that it can exceed humans at most definable tasks, but machines still lack the intuitivity of biological brains, and this strong intelligence is narrow—restricted to single problems or classes of problems.
Upvotes: 1 <issue_comment>username_2: In this context, I would focus on the **what** and not the **how**.
* What part of the business problem will it solve?
* How does that fit into the bigger solution (AI model is probably making a prediction - is that it? Is there an application or report built around it?)
* How do you expect it to perform compared to alternative solutions?
* What do you need in terms of resources: data, computation, time?
As far as how it works - I would just describe it as a “probabilistic model” and leave it at that. If they want to go deeper, they’ll ask. You may not even know the exact model/algorithm/approach yet, as often experimentation and iteration are necessary.
Upvotes: 0 |
2021/08/07 | 926 | 3,754 | <issue_start>username_0: Per [google's glossary](https://developers.google.com/machine-learning/glossary#iteration), an iteration refers to
>
> A single update of a model's weights during training ...
>
>
>
The following code comes from [a github repo](https://github.com/albert10jp/deep-learning/blob/main/LR_on_not_linearly_separable_data.ipynb)
```
def fit(self, x, y, verbose=False, seed=None):
    indices = np.arange(len(x))
    for i in range(self.n_epoch):
        n_iter = 0
        np.random.seed(seed)
        np.random.shuffle(indices)
        for idx in indices:
            if self.predict(x[idx]) != y[idx]:
                self.update_weights(x[idx], y[idx], verbose)
            else:
                n_iter += 1
        if n_iter == len(x):
            print('model gets 100% train accuracy after {} epoch(s)'.format(i))
            break
```
Note that this model doesn't update the weights for every single example: when the model makes a correct prediction for an example, it skips it without updating the weights.
In this kind of scenario, where the model makes a correct prediction for the $i$th input $x\_i$ and moves on to the next example $x\_{i+1}$ without updating the weights, does that count as an iteration?
Assume there are 120 training examples and, in one epoch, the model makes 20 correct predictions and updates the weights for the other 100. Should I count this epoch as 100 iterations or 120 iterations?
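The two candidate counts can be made concrete with a small tally over one hypothetical epoch (the 20/100 split of correct and incorrect predictions is made up to match the numbers above):

```python
# One epoch over 120 examples; suppose the model predicts 20 of them correctly.
correct = [True] * 20 + [False] * 100

examples_seen = 0
weight_updates = 0
for is_correct in correct:
    examples_seen += 1        # every example is visited...
    if not is_correct:
        weight_updates += 1   # ...but update_weights() runs only on mistakes

print(examples_seen, weight_updates)  # 120 examples visited, 100 updates
```

So the question is whether "iterations" counts `examples_seen` (120) or `weight_updates` (100).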
**Note**: This question is NOT about coding. The code cited above works well. This question is about terminology. The code is just to illustrate the scenario in question.<issue_comment>username_1: A quick google the keyword "definition of iteration in machine learning" gives us a lot of results. I would like to stick with [this StackOverFlow question](https://stackoverflow.com/questions/4752626/epoch-vs-iteration-when-training-neural-networks).
As your example, if we have 100 samples, let me assume batch size is 20, so the number of iteration is 5. If there is one iteration that both 20 samples are predicted correctly, this epoch should be counted which means one epoch you still have 5 iterations since the number of iterations is important in some situations such as control the learning rate while training (decay or cycle).
If you feel uncomfortable with the non-gradient / non-updating on that iteration, you can understand as your model's weight is updated with the gradient is 0.
Upvotes: 0 <issue_comment>username_2: This really depends on how you define "iteration". Here it's not so simple, given that the number of training steps per epoch varies based on the number of correct predictions, which would obviously change as you continue to train the model.
Generally I have found iterations refers to the number of times you run a batch through a network. This is the more reasonable definition, as when you are doing real-world machine learning, using batches of training examples allows you to utilize as much memory as possible, thus making the training much faster.
Using this definition with your example you would always have the same number of iterations as the batch size would be consistent, it's just inside the massive matrix that is your batch, you would have update values of 0 where the network correctly predicted.
It ultimately comes down to *what's the best way to convey information to a reader*? I think describing the number of iterations as variable based on network outputs is confusing and non-descriptive. In this case, if you don't want to use any batches, it is best to say you have 120 iterations per epoch. Saying you have *up to* 120 iterations is confusing. Instead, just specify that in some iterations the network may not be updated.
Upvotes: 2 [selected_answer] |